
OpenSceneGraph: methods for improving rendering efficiency

Packt
04 Feb 2011
11 min read
Improving your application

There are a lot of tricks for improving the rendering performance of applications with a large amount of data, but the essence of them is easy to understand: the fewer resources (geometries, display lists, texture objects, and so on) allocated, the faster and smoother the user application will be. You might also benefit from the previous article on Implementing Multithreaded Operations and Rendering in OpenSceneGraph.

There are lots of ideas on how to find the bottleneck of an inefficient application. For example, you can replace certain objects with simple boxes, or replace the textures in your application with 1x1 images, to see if performance increases thanks to the reduction in geometries and texture objects. The statistics class (osgViewer::StatsHandler, or press the S key in osgviewer) can also provide helpful information.

To keep scene resources small enough, we can refer to the following table and try to optimize our applications if they are not running in good shape:

Problem: Too many geometries
Influence: Low frame rate and huge resource cost
Possible solution: Use LOD and culling techniques to reduce the vertices of the drawables. Use primitive sets and the index mechanism rather than duplicated vertices. Merge geometries into one, if possible, because one geometry object allocates one display list, and too many display lists occupy too much of the video memory. Share geometries, vertices, and nodes as often as possible.

Problem: Too many dynamic objects (configured with the setDataVariance() method)
Influence: Low frame rate, because the DRAW phase must wait until all dynamic objects finish updating
Possible solution: Don't use the DYNAMIC flag on nodes and drawables that do not need to be modified on the fly. Don't set the root node to be dynamic unless you are sure that you require this, because data variance can be inherited in the scene graph.

Problem: Too many texture objects
Influence: Low frame rate and huge resource cost
Possible solution: Share rendering states and textures as much as you can. Lower their resolution and compress them using the DXTC format if possible. Use osg::TextureRectangle to handle non-power-of-two sized textures, and osg::Texture2D for regular 2D textures. Use LOD to simplify and manage nodes with large-sized textures.

Problem: The scene graph structure is "loose", that is, nodes are not grouped together effectively
Influence: Very high cull and draw time, and many redundant state changes
Possible solution: If there are too many parent nodes, each with only one child, the scene has as many group nodes as leaf nodes, and even as many drawables as leaf nodes, and performance will be totally ruined. You should rethink your scene graph and group nodes that have close features and behaviors more effectively.

Problem: Loading and unloading resources too frequently
Influence: Lower and lower running speed and wasteful memory fragmentation
Possible solution: Use a buffer pool to allocate and release resources. OSG already does this for textures and buffer objects by default.

An additional helper is the osgUtil::Optimizer class. This can traverse the scene graph before the simulation loop starts and perform different kinds of optimizations in order to improve efficiency, including removing redundant nodes, sharing duplicated states, checking and merging geometries, optimizing texture settings, and so on. You may start the optimizing operation with the following code segment:

    osgUtil::Optimizer optimizer;
    optimizer.optimize( node );

Some parts of the optimizer are optional. You can see the header file include/osgUtil/Optimizer for details.
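As a quick illustration of the data-variance advice in the table above, here is a minimal sketch of how these flags are typically set. The two drawables (animatedGeometry, updated every frame, and staticGeometry, never modified after creation) are assumed for the example:

    // Objects modified during the update or event traversals must be DYNAMIC,
    // so the draw traversal knows to wait for them to finish updating.
    animatedGeometry->setDataVariance( osg::Object::DYNAMIC );

    // Objects that never change after creation can be STATIC, which lets
    // the renderer draw them without synchronizing against updates.
    staticGeometry->setDataVariance( osg::Object::STATIC );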
Time for action – sharing textures with a customized callback

We would like to explain the importance of scene optimization by providing an extreme situation where massive textures are allocated without sharing the same ones. We have a basic solution to collect and reuse loaded images in a file reading callback, and then share all textures that use the same image object and have the same parameters. The idea of sharing textures can be used to construct massive scene graphs, such as digital cities; otherwise, the video card memory will soon be eaten up and thus cause the whole application to slow down and crash.

Include the necessary headers:

    #include <osg/Texture2D>
    #include <osg/Geometry>
    #include <osg/Geode>
    #include <osg/Group>
    #include <osgDB/ReadFile>
    #include <osgViewer/Viewer>

The function for quickly producing massive data can be used in this example, once more. This time we will apply a texture attribute to each quad. That means that we are going to have a huge number of geometries, and the same amount of texture objects, which will be a heavy burden for rendering the scene smoothly:

    #define RAND(min, max) \
        ((min) + (float)rand()/(RAND_MAX+1) * ((max)-(min)))

    osg::Geode* createMassiveQuads( unsigned int number,
                                    const std::string& imageFile )
    {
        osg::ref_ptr<osg::Geode> geode = new osg::Geode;
        for ( unsigned int i=0; i<number; ++i )
        {
            osg::Vec3 randomCenter;
            randomCenter.x() = RAND(-100.0f, 100.0f);
            randomCenter.y() = RAND(1.0f, 100.0f);
            randomCenter.z() = RAND(-100.0f, 100.0f);

            osg::ref_ptr<osg::Drawable> quad =
                osg::createTexturedQuadGeometry(
                    randomCenter,
                    osg::Vec3(1.0f, 0.0f, 0.0f),
                    osg::Vec3(0.0f, 0.0f, 1.0f) );

            osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
            texture->setImage( osgDB::readImageFile(imageFile) );
            quad->getOrCreateStateSet()->setTextureAttributeAndModes(
                0, texture.get() );

            geode->addDrawable( quad.get() );
        }
        return geode.release();
    }

The createMassiveQuads() function is, of course, awkward and ineffective here. However, it demonstrates a common situation: assuming that an application often needs to load image files and create texture objects on the fly, it is necessary to check whether an image has already been loaded and then share the corresponding textures automatically. The memory occupancy will be obviously reduced if there are plenty of textures that are reusable.

To achieve this, we should first record all loaded image filenames, and then create a map that saves the corresponding osg::Image objects. Whenever a new readImageFile() request arrives, the osgDB::Registry instance will try using a preset osgDB::ReadFileCallback to perform the actual loading work. If the callback doesn't exist, it will call readImageImplementation() to choose an appropriate plug-in that will load the image and return the resultant object.
Therefore, we can take over the image reading process by inheriting from the osgDB::ReadFileCallback class and implementing a new functionality that compares the filename and reuses existing image objects, with the customized getImageByName() function:

    class ReadAndShareImageCallback : public osgDB::ReadFileCallback
    {
    public:
        virtual osgDB::ReaderWriter::ReadResult readImage(
            const std::string& filename, const osgDB::Options* options );

    protected:
        osg::Image* getImageByName( const std::string& filename )
        {
            ImageMap::iterator itr = _imageMap.find(filename);
            if ( itr!=_imageMap.end() ) return itr->second.get();
            return NULL;
        }

        typedef std::map<std::string, osg::ref_ptr<osg::Image> > ImageMap;
        ImageMap _imageMap;
    };

The readImage() method should be overridden to replace the current reading implementation. It will return the previously imported instance if the filename matches an element in _imageMap, and will add any newly loaded image object and its name to _imageMap, in order to ensure that the same file won't be imported again:

    osgDB::ReaderWriter::ReadResult ReadAndShareImageCallback::readImage(
        const std::string& filename, const osgDB::Options* options )
    {
        osg::Image* image = getImageByName( filename );
        if ( !image )
        {
            osgDB::ReaderWriter::ReadResult rr;
            rr = osgDB::Registry::instance()->readImageImplementation(
                filename, options );
            if ( rr.success() ) _imageMap[filename] = rr.getImage();
            return rr;
        }
        return image;
    }

Now we get to the main entry. The file reading callback is set by the setReadFileCallback() method of the osgDB::Registry class, which is designed as a singleton. Meanwhile, we have to enable another important run-time optimizer, named osgDB::SharedStateManager, which can be set by setSharedStateManager() or getOrCreateSharedStateManager(). The latter will assign a default instance to the registry:

    osgDB::Registry::instance()->setReadFileCallback(
        new ReadAndShareImageCallback );
    osgDB::Registry::instance()->getOrCreateSharedStateManager();

Create the massive scene graph. It consists of two groups of quads, each of which uses a single image file to decorate the quad geometry. In total, 1,000 quads will be created, along with 1,000 newly allocated textures. Certainly, there are too many redundant texture objects (because they are generated from only two image files) in this case:

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( createMassiveQuads(500, "Images/lz.rgb") );
    root->addChild( createMassiveQuads(500, "Images/osg64.png") );

The osgDB::SharedStateManager is used for maximizing the reuse of textures and state sets. It is actually a node visitor, traversing all child nodes' state sets and comparing them when the share() method is invoked. State sets and textures with the same attributes and data will be combined into one:

    osgDB::SharedStateManager* ssm =
        osgDB::Registry::instance()->getSharedStateManager();
    if ( ssm ) ssm->share( root.get() );

Finalize the viewer:

    osgViewer::Viewer viewer;
    viewer.setSceneData( root.get() );
    return viewer.run();

Now the application starts with a large number of textured quads. With the ReadAndShareImageCallback sharing image objects, and the osgDB::SharedStateManager sharing textures, the rendering process can work without a hitch. Try commenting out the setReadFileCallback() and getOrCreateSharedStateManager() lines, restart the application, and see what happens. The Windows Task Manager is helpful in displaying the amount of currently used memory here.

What just happened?
You may be curious about the implementation of osgDB::SharedStateManager. It collects rendering states and textures the first time they appear in the scene graph, and then replaces duplicated states of successive nodes with the recorded ones. It compares two states' member attributes in order to decide whether the new state should be recorded (because it's not the same as any of the recorded ones) or replaced (because it is a duplication of a previous one).

For texture objects, osgDB::SharedStateManager will determine whether they are exactly the same by checking the data() pointer of the osg::Image object, rather than by comparing every pixel of the image. Thus, the customized ReadAndShareImageCallback class is used here to first share image objects with the same filename, and osgDB::SharedStateManager then shares textures with the same image object and other attributes.

The osgDB::DatabasePager also makes use of osgDB::SharedStateManager to share states of external scene graphs when dynamically loading and unloading paged nodes. This is done automatically if getOrCreateSharedStateManager() is executed.

Have a go hero – sharing public models

Can we also share models with the same name in an application? The answer is absolutely yes. The osgDB::ReadFileCallback could be used again, this time overriding the virtual method readNode(). Other preparations include a member std::map for recording filename and node pointer pairs, and a user-defined getNodeByName() method, as we have just done in the last example.

Paging huge scene data

Are you still struggling with the optimization of huge scene data? Don't focus only on the rendering API itself. There is no "super" rendering engine in the world that can work with unlimited datasets. Consider using a scene paging mechanism instead, which can load and unload objects according to the current viewport and frustum. It is also important to design a good structure for indexing regions of spatial data, such as the quad-tree, octree, R-tree, or binary space partitioning (BSP) tree.

Making use of the quad-tree

A classic quad-tree structure decomposes the whole 2D region into four square children (we call them cells here), and recursively subdivides each cell into four regions, until a cell reaches its target capacity and stops splitting (a so-called leaf). Each cell in the tree either has exactly four children or has none. It is mostly useful for representing terrains or scenes on 2D planes.

The quad-tree structure is useful for view-frustum culling of terrain data. Because the terrain is divided into small pieces, we can easily render the small pieces that fall inside the frustum and discard those that are invisible. This can effectively unload a large number of chunks of a terrain from memory at a time, and load them back when necessary—which is the basic principle of dynamic data paging. This process can be progressive: when the terrain model is far enough from the viewer, we may only handle its root and first levels. But as it draws near, we can traverse down to the corresponding levels of the quad-tree, and cull and unload as many cells as possible, to keep the load balance of the scene.
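To make the recursive structure concrete, here is a minimal, illustrative C++ sketch of such a cell. The Cell type and its subdivide() logic are hypothetical helpers for this discussion, not part of OSG:

    #include <array>
    #include <memory>

    // A quad-tree cell covering a square 2D region.
    struct Cell
    {
        Cell( float x, float y, float half )
            : centerX(x), centerY(y), halfSize(half) {}

        float centerX, centerY, halfSize;
        std::array<std::unique_ptr<Cell>, 4> children;  // all empty for a leaf

        bool isLeaf() const { return !children[0]; }

        // Split this cell into four equal quadrant children; called when
        // the cell exceeds its target capacity.
        void subdivide()
        {
            float h = halfSize * 0.5f;
            int i = 0;
            for ( float dy : { -h, h } )
                for ( float dx : { -h, h } )
                    children[i++].reset(
                        new Cell(centerX + dx, centerY + dy, h) );
        }
    };

A frustum-culling traversal then simply tests each cell's square against the view frustum: if the cell is completely outside, the whole subtree is skipped (and can be unloaded); if it is visible but distant, only its coarser levels need to be processed.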

WPF 4.5 Application and Windows

Packt
24 Sep 2012
14 min read
Creating a window

Windows are the typical top-level controls in WPF. By default, a MainWindow class is created by the application wizard and automatically shown upon running the application. In this recipe, we'll take a look at creating and showing other windows that may be required during the lifetime of an application.

Getting ready

Make sure Visual Studio is up and running.

How to do it...

We'll create a new class derived from Window and show it when a button is clicked:

1. Create a new WPF application named CH05.NewWindows.
2. Right-click on the project node in Solution Explorer, and select Add | Window....
3. In the resulting dialog, type OtherWindow in the Name textbox and click on Add.
4. A file named OtherWindow.xaml should open in the editor. Add a TextBlock to the existing Grid, as follows:

    <TextBlock Text="This is the other window" FontSize="20"
               VerticalAlignment="Center" HorizontalAlignment="Center" />

5. Open MainWindow.xaml. Add a Button to the Grid with a Click event handler:

    <Button Content="Open Other Window" FontSize="30"
            Click="OnOpenOtherWindow" />

6. In the Click event handler, add the following code:

    void OnOpenOtherWindow(object sender, RoutedEventArgs e)
    {
        var other = new OtherWindow();
        other.Show();
    }

7. Run the application, and click the button. The other window should appear and live happily alongside the main window.

How it works...

A Window is technically a ContentControl, so it can contain anything. It's made visible using the Show method. This keeps the window open as long as it's not explicitly closed using the classic close button, or by calling the Close method. The Show method opens the window as modeless, meaning the user can return to the previous window without restriction. We can click the button more than once, and consequently more Window instances will show up.

There's more...

The first window shown can be configured using the Application.StartupUri property, typically set in App.xaml. It can be changed to any other window. For example, to show the OtherWindow from the previous section as the first window, open App.xaml and change the StartupUri property to OtherWindow.xaml:

    StartupUri="OtherWindow.xaml"

Selecting the startup window dynamically

Sometimes the first window is not known in advance, perhaps depending on some state or setting. In this case, the StartupUri property is not helpful. We can safely delete it, and provide the initial window (or even windows) by overriding the Application.OnStartup method as follows (you'll need to add a reference to the System.Configuration assembly for the following to compile):

    protected override void OnStartup(StartupEventArgs e)
    {
        Window mainWindow = null;
        // check some state or setting as appropriate
        if (ConfigurationManager.AppSettings["AdvancedMode"] == "1")
            mainWindow = new OtherWindow();
        else
            mainWindow = new MainWindow();
        mainWindow.Show();
    }

This allows complete flexibility in determining what window or windows should appear at application startup.

Accessing command line arguments

The WPF application created by the New Project wizard does not expose the ubiquitous Main method. WPF provides this for us: it instantiates the Application object and eventually loads the main window pointed to by the StartupUri property. The Main method, however, is not just a starting point for managed code; it also provides an array of strings holding the command line arguments passed to the executable (if any). As Main is now beyond our control, how do we get the command line arguments?
Fortunately, the same OnStartup method provides a StartupEventArgs object, in which the Args property is mirrored from Main. The downloadable source for this chapter contains the project CH05.CommandLineArgs, which shows an example of its usage. Here's the OnStartup override:

    protected override void OnStartup(StartupEventArgs e)
    {
        string text = "Hello, default!";
        if (e.Args.Length > 0)
            text = e.Args[0];
        var win = new MainWindow(text);
        win.Show();
    }

The MainWindow instance constructor has been modified to accept a string that is later used by the window. If a command line argument is supplied, it is used.

Creating a dialog box

A dialog box is a Window that is typically used to get some data from the user before some operation can proceed. This is sometimes referred to as a modal window (as opposed to modeless, or non-modal). In this recipe, we'll take a look at how to create and manage such a dialog box.

Getting ready

Make sure Visual Studio is up and running.

How to do it...

We'll create a dialog box that's invoked from the main window to request some information from the user:

1. Create a new WPF application named CH05.Dialogs.
2. Add a new Window named DetailsDialog.xaml (a DetailsDialog class is created). Visual Studio opens DetailsDialog.xaml.
3. Set some Window properties: FontSize to 16, ResizeMode to NoResize, SizeToContent to Height, and make sure the Width is set to 300:

    ResizeMode="NoResize" SizeToContent="Height" Width="300"
    FontSize="16"

4. Add four rows and two columns to the existing Grid, and add some controls for a simple data entry dialog as follows:

    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
        <RowDefinition Height="Auto" />
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="Auto" />
        <ColumnDefinition />
    </Grid.ColumnDefinitions>
    <TextBlock Text="Please enter details:" Grid.ColumnSpan="2"
               Margin="4,4,4,20" HorizontalAlignment="Center" />
    <TextBlock Text="Name:" Grid.Row="1" Margin="4" />
    <TextBox Grid.Column="1" Grid.Row="1" Margin="4" x:Name="_name" />
    <TextBlock Text="City:" Grid.Row="2" Margin="4" />
    <TextBox Grid.Column="1" Grid.Row="2" Margin="4" x:Name="_city" />
    <StackPanel Grid.Row="3" Orientation="Horizontal" Margin="4,20,4,4"
                Grid.ColumnSpan="2" HorizontalAlignment="Center">
        <Button Content="OK" Margin="4" />
        <Button Content="Cancel" Margin="4" />
    </StackPanel>

5. The dialog should expose two properties for the name and city the user has typed in. Open DetailsDialog.xaml.cs and add two simple properties:

    public string FullName { get; private set; }
    public string City { get; private set; }

6. We need to show the dialog from somewhere in the main window. Open MainWindow.xaml, and add the following markup to the existing Grid:

    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition />
    </Grid.RowDefinitions>
    <Button Content="Enter Data" Click="OnEnterData"
            Margin="4" FontSize="16" />
    <TextBlock FontSize="24" x:Name="_text" Grid.Row="1"
               VerticalAlignment="Center" HorizontalAlignment="Center" />

7. In the OnEnterData handler, add the following:

    private void OnEnterData(object sender, RoutedEventArgs e)
    {
        var dlg = new DetailsDialog();
        if (dlg.ShowDialog() == true)
        {
            _text.Text = string.Format(
                "Hi, {0}! I see you live in {1}.",
                dlg.FullName, dlg.City);
        }
    }

8. Run the application. Click the button and watch the dialog appear. The buttons don't work yet, so your only choice is to close the dialog using the regular close button.
Clearly, the return value from ShowDialog is not true in this case. When the OK button is clicked, the properties should be set accordingly. Add a Click event handler to the OK button, with the following code:

    private void OnOK(object sender, RoutedEventArgs e)
    {
        FullName = _name.Text;
        City = _city.Text;
        DialogResult = true;
        Close();
    }

The Close method dismisses the dialog, returning control to the caller. The DialogResult property indicates the returned value from the call to ShowDialog when the dialog is closed.

Add a Click event handler for the Cancel button with the following code:

    private void OnCancel(object sender, RoutedEventArgs e)
    {
        DialogResult = false;
        Close();
    }

Run the application and click the button. Enter some data and click on OK; the main window displays the greeting built from the entered values.

How it works...

A dialog box in WPF is nothing more than a regular window shown using ShowDialog instead of Show. This forces the user to dismiss the window before she can return to the invoking window. ShowDialog returns a Nullable<bool> (which can be written as bool? in C#), meaning it can have three values: true, false, and null. The meaning of the return value is mostly up to the application, but typically true indicates that the user dismissed the dialog with the intention of making something happen (usually by clicking some OK or other confirmation button), and false means the user changed her mind and would like to abort. The null value can be used as a third indicator for some other application-defined condition.

The DialogResult property indicates the value returned from ShowDialog, because there is no other way to convey the return value from the dialog invocation directly. That's why the OK button handler sets it to true and the Cancel button handler sets it to false (this also happens when the regular close button is clicked, or Alt + F4 is pressed).

Most dialog boxes are not resizable. This is indicated by setting the Window's ResizeMode property to NoResize. However, because of WPF's flexible layout, it certainly is relatively easy to keep a dialog resizable (and still manageable) where it makes sense, such as when entering a potentially large amount of text in a TextBox: it would make sense for the TextBox to grow if the dialog is enlarged.

There's more...

Most dialogs can be dismissed by pressing Enter (indicating the data should be used) or pressing Esc (indicating no action should take place). This is possible to do by setting the OK button's IsDefault property to true and the Cancel button's IsCancel property to true. The default button is typically drawn with a heavier border to indicate that it's the default button, although this ultimately depends on the button's control template. If these settings are specified, the handler for the Cancel button is not needed: clicking Cancel or pressing Esc automatically closes the dialog (and sets DialogResult to false). The OK button handler is still needed as usual, but it may be invoked by pressing Enter, no matter which control has the keyboard focus within the Window. The CH05.DefaultButtons project from the downloadable source for this chapter demonstrates this in action.
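For reference, here is a minimal sketch of how the two buttons from the dialog above might look with these properties applied (the OnOK handler is the one defined earlier; as noted, no Cancel handler is required):

    <!-- Enter triggers OK; Esc triggers Cancel and closes the dialog -->
    <Button Content="OK" Margin="4" IsDefault="True" Click="OnOK" />
    <Button Content="Cancel" Margin="4" IsCancel="True" />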
Modeless dialogs

A dialog can be shown as modeless, meaning it does not force the user to dismiss it before returning to other windows in the application. This is done with the usual Show method call, just like any Window. The term dialog in this case usually denotes some information expected from the user that affects other windows, sometimes with the help of another button labelled "Apply".

The problem here is mostly logical: how to convey the information change. The best way would be using data binding, rather than manually modifying various objects. We'll take an extensive look at data binding in the next chapter.

Using the common dialog boxes

Windows has its own built-in dialog boxes for common operations, such as opening files, saving a file, and printing. Using these dialogs is very intuitive from the user's perspective, because she has probably used those dialogs before in other applications. WPF wraps some of these (native) dialogs. In this recipe, we'll see how to use some of the common dialogs.

Getting ready

Make sure Visual Studio is up and running.

How to do it...

We'll create a simple image viewer that uses the Open common dialog box to allow the user to select an image file to view:

1. Create a new WPF Application named CH05.CommonDialogs.
2. Open MainWindow.xaml. Add the following markup to the existing Grid:

    <Grid.RowDefinitions>
        <RowDefinition Height="Auto" />
        <RowDefinition />
    </Grid.RowDefinitions>
    <Button Content="Open Image" FontSize="20" Click="OnOpenImage"
            HorizontalAlignment="Center" Margin="4" />
    <Image Grid.Row="1" x:Name="_img" Stretch="Uniform" />

3. Add a Click event handler for the button. In the handler, we'll first create an OpenFileDialog instance and initialize it (add a using for the Microsoft.Win32 namespace):

    void OnOpenImage(object sender, RoutedEventArgs e)
    {
        var dlg = new OpenFileDialog
        {
            Filter = "Image files|*.png;*.jpg;*.gif;*.bmp",
            Title = "Select image to open",
            InitialDirectory = Environment.GetFolderPath(
                Environment.SpecialFolder.MyPictures)
        };

4. Now we need to show the dialog and use the selected file (if any):

        if (dlg.ShowDialog() == true)
        {
            try
            {
                var bmp = new BitmapImage(new Uri(dlg.FileName));
                _img.Source = bmp;
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message, "Open Image");
            }
        }
    }

5. Run the application. Click the button, navigate to an image file, and select it. The image should appear in the window.

How it works...

The OpenFileDialog class wraps the Win32 open/save file dialog, providing easy enough access to its capabilities. It's just a matter of instantiating the object, setting some properties, such as the file types (Filter property), and then calling ShowDialog. This call, in turn, returns true if the user selected a file and false otherwise (null is never returned, although the return type is still defined as Nullable<bool> for consistency).

The look of the Open file dialog box may be different in various Windows versions. This is mostly unimportant unless some automated UI testing is done. In this case, the way the dialog looks or operates may have to be taken into consideration when creating the tests.

The filename itself is returned in the FileName property (full path). Multiple selections are possible by setting the Multiselect property to true (in this case the FileNames property returns the selected files).

There's more...

WPF similarly wraps the Save As common dialog with the SaveFileDialog class (in the Microsoft.Win32 namespace as well). Its use is very similar to OpenFileDialog; in fact, both inherit from the abstract FileDialog class. A short sketch appears at the end of this recipe.

What about folder selection (instead of files)? The WPF OpenFileDialog does not support that. One solution is to use Windows Forms' FolderBrowserDialog class. Another good solution is to use the Windows API Code Pack described shortly.
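As promised above, here is a minimal sketch of SaveFileDialog usage, mirroring the Open example. The OnSaveText handler and the _text element are illustrative assumptions, not part of the recipe's project:

    void OnSaveText(object sender, RoutedEventArgs e)
    {
        var dlg = new SaveFileDialog
        {
            Filter = "Text files|*.txt|All files|*.*",
            Title = "Save text as",
            DefaultExt = ".txt"
        };
        if (dlg.ShowDialog() == true)
        {
            // FileName holds the full path chosen by the user
            System.IO.File.WriteAllText(dlg.FileName, _text.Text);
        }
    }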
Another common dialog box WPF wraps is PrintDialog (in System.Windows.Controls). This shows the familiar print dialog, with options to select a printer, orientation, and so on. The most straightforward way to print would be calling PrintVisual (after calling ShowDialog), providing anything that derives from the Visual abstract class (which includes all elements). General printing is a complex topic and is beyond the scope of this book.

What about colors and fonts?

Windows also provides common dialogs for selecting colors and fonts. However, these are not wrapped by WPF. There are several alternatives:

- Use the equivalent Windows Forms classes (FontDialog and ColorDialog, both from System.Windows.Forms)
- Wrap the native dialogs yourself
- Look for alternatives on the Web

The first option is possible, but has two drawbacks. First, it requires adding a reference to the System.Windows.Forms assembly; this adds a dependency at compile time and increases memory consumption at run time, for very little gain. The second drawback has to do with the natural mismatch between Windows Forms and WPF. For example, ColorDialog returns a color as a System.Drawing.Color, but WPF uses System.Windows.Media.Color. This requires mapping a GDI+ color (WinForms) to WPF's color, which is cumbersome at best.

The second option of doing your own wrapping is a non-trivial undertaking and requires good interop knowledge. The other downside is that the default color and font common dialogs are pretty old (especially the color dialog), so there's much room for improvement.

The third option is probably the best one. There are more than a few good candidates for color and font pickers. For a color dialog, for example, you can use the ColorPicker or ColorCanvas provided with the Extended WPF Toolkit library on CodePlex (http://wpftoolkit.codeplex.com/).

The Windows API Code Pack

The Windows API Code Pack is a Microsoft project on CodePlex (http://archive.msdn.microsoft.com/WindowsAPICodePack) that provides many .NET wrappers to native Windows features, in various areas such as the shell, networking, Windows 7 features (less important now, as WPF 4 added first-class support for Windows 7), power management, and DirectX. One of the shell features in the library is a wrapper for the Open dialog box that allows selecting a folder instead of a file. This has no dependency on the WinForms assembly.

Design a RESTful web API with Java [Tutorial]

Pavan Ramchandani
12 Jun 2018
12 min read
In today's tutorial, you will learn to design REST services. We will break down the key design considerations you need to make when building RESTful web APIs. In particular, we will focus on the core elements of the REST architecture style:

- Resources and their identifiers
- Interaction semantics for RESTful APIs (HTTP methods)
- Representation of resources
- Hypermedia controls

This article is an excerpt from a book written by Balachandar Bogunuva Mohanram, titled RESTful Java Web Services, Second Edition. This book will help you build robust, scalable and secure RESTful web services, making use of the JAX-RS and Jersey framework extensions.

Let's start by discussing the guidelines for identifying resources in a problem domain.

Richardson Maturity Model: Leonard Richardson has developed a model to help with assessing the compliance of a service to the REST architecture style. The model defines four levels of maturity, starting from level 0 up to level 3 as the highest maturity level. The maturity levels are decided considering the aforementioned principal elements of the REST architecture.

Identifying resources in the problem domain

The basic steps that you need to take while building a RESTful web API for a specific problem domain are:

1. Identify all possible objects in the problem domain. This can be done by identifying all the key nouns in the problem domain. For example, if you are building an application to manage employees in a department, the obvious nouns are department and employee.
2. Identify the objects that can be manipulated using CRUD operations. These objects can be classified as resources. Note that you should be careful while choosing resources. Based on the usage pattern, you can classify resources as top-level and nested resources (which are the children of a top-level resource). Also, there is no need to expose all resources for use by the client; expose only those resources that are required for implementing the business use case.

Transforming operations to HTTP methods

Once you have identified all resources, as the next step, you may want to map the operations defined on the resources to the appropriate HTTP methods. The most commonly used HTTP methods (verbs) in RESTful web APIs are POST, GET, PUT, and DELETE. Note that there is no one-to-one mapping between the CRUD operations defined on the resources and the HTTP methods. Understanding the concepts of idempotent and safe operations will help with using the correct HTTP method.

An operation is called idempotent if multiple identical requests produce the same result. Similarly, an idempotent RESTful web API will always produce the same result on the server irrespective of how many times the request is executed with the same parameters; however, the response may change between requests. An operation is called safe if it does not modify the state of the resources. Check out the following table:

    Method    Idempotent    Safe
    GET       YES           YES
    OPTIONS   YES           YES
    HEAD      YES           YES
    POST      NO            NO
    PATCH     NO            NO
    PUT       YES           NO
    DELETE    YES           NO

Here are some tips for identifying the most appropriate HTTP method for the operations that you want to perform on the resources:

GET: You can use this method for reading a representation of a resource from the server. According to the HTTP specification, GET is a safe operation, which means that it is only intended for retrieving data, not for making any state changes. As this is an idempotent operation, multiple identical GET requests will behave in the same manner.
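Since the book this excerpt comes from builds on JAX-RS and Jersey, here is a minimal, hypothetical sketch of how such a safe, idempotent read operation might be exposed. The DepartmentResource class, the Department type, and DepartmentService are illustrative assumptions, not code from the excerpt:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("departments")
    public class DepartmentResource {

        // Safe, idempotent read: GET /departments/{id}
        @GET
        @Path("{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public Department getDepartment(@PathParam("id") int id) {
            // Look up and return the department; repeating this request
            // never changes server state.
            return DepartmentService.find(id);
        }
    }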
A GET method can return the 200 OK HTTP response code on the successful retrieval of resources. If there is any error, it can return an appropriate status code such as 404 NOT FOUND or 400 BAD REQUEST.

DELETE: You can use this method for deleting resources. On successful deletion, DELETE can return the 200 OK status code. According to the HTTP specification, DELETE is an idempotent operation. Note that when you call DELETE on the same resource a second time, the server may return the 404 NOT FOUND status code, since it was already deleted, which is different from the response to the first request. The change in response for the second call is perfectly valid here; however, multiple DELETE calls on the same resource produce the same result (state) on the server.

PUT: According to the HTTP specification, this method is idempotent. When a client invokes the PUT method on a resource, the resource available at the given URL is completely replaced with the resource representation sent by the client. When a client uses a PUT request on a resource, it has to send all the available properties of the resource to the server, not just the partial data that was modified within the request. You can use PUT to create or update a resource if all attributes of the resource are available to the client. This makes sure that the server state does not change with multiple PUT requests. On the other hand, if you send partial resource content in a PUT request multiple times, there is a chance that some other clients might have updated attributes that are not present in your request. In such cases, the server cannot guarantee that the state of the resource on the server will remain identical when the same request is repeated, which breaks the idempotency rule.

POST: This method is not idempotent. You can use the POST method to create or update resources when you do not know all the available attributes of a resource. For example, consider a scenario where the identifier field for an entity resource is generated at the server when the entity is persisted in the data store. You can use the POST method for creating such resources, as the client does not have an identifier attribute while issuing the request. Here is a simplified example that illustrates this scenario. In this example, the employeeID attribute is generated on the server:

    POST hrapp/api/employees HTTP/1.1
    Host: packtpub.com

    {employee entity resource in JSON}

On the successful creation of a resource, it is recommended to return the 201 Created status and the location of the newly created resource. This allows the client to access the newly created resource later (with server-generated attributes). The sample response for the preceding example will look as follows:

    201 Created
    Location: hrapp/api/employees/1001

Best practice: Use caching only for idempotent and safe HTTP methods, as the others have an impact on the state of the resources.

Understanding the difference between PUT and POST

A common question that you will encounter while designing a RESTful web API is: when should you use the PUT and POST methods? Here's the simplified answer:

- Use PUT for creating or updating a resource when the client has the full resource content available. In this case, all values are supplied by the client and the server does not generate a value for any of the fields.
- Use POST for creating or updating a resource if the client has only partial resource content available (see the illustrative exchange after this list).
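To make the distinction concrete, here is a hypothetical pair of requests against the employee resource from the earlier example (the paths and fields are illustrative):

    PUT hrapp/api/employees/1001 HTTP/1.1
    Host: packtpub.com

    {"employeeID": 1001, "firstName": "Chinmay", "lastName": "Jobinesh"}

The PUT request carries the complete representation and targets a known URI; repeating it leaves the server in the same state.

    POST hrapp/api/employees HTTP/1.1
    Host: packtpub.com

    {"firstName": "Chinmay", "lastName": "Jobinesh"}

The POST request carries partial content and lets the server assign the identifier; repeating it may create a second employee.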
Note that you lose idempotency support with POST. An idempotent method means that you can call the same API multiple times without changing the state. This is not true for the POST method; each POST method call may result in a server state change. PUT is idempotent, and POST is not. If you have strong customer demands, you can support both methods and let the client choose the suitable one on the basis of the use case.

Naming RESTful web resources

Resources are a fundamental concept in RESTful web services. A resource represents an entity that is accessible via the URI that you provide. The URI, which refers to a resource (and is known as a RESTful web API), should have a logically meaningful name. Having meaningful names improves the intuitiveness of the APIs and, thereby, their usability. Some widely followed recommendations for naming resources are shown here:

- It is recommended that you use nouns to name both resources and the path segments that will appear in the resource URI. You should avoid using verbs for naming resources and resource path segments. Using nouns to name a resource improves the readability of the corresponding RESTful web API, particularly when you are planning to release the API over the internet for the general public.
- You should always use plural nouns to refer to a collection of resources. Make sure that you are not mixing up singular and plural nouns while forming the REST URIs. For instance, to get all departments, the resource URI must look like /departments. If you want to read a specific department from the collection, the URI becomes /departments/{id}. Following the convention, the URI for reading the details of the HR department identified by id=10 should look like /departments/10.

The following table illustrates how you can map the HTTP methods (verbs) to the operations defined for the departments resource:

    Resource: /departments
    GET: Get all departments
    POST: Create a new department
    PUT: Bulk update on departments
    DELETE: Delete all departments

    Resource: /departments/10
    GET: Get the HR department with id=10
    POST: Not allowed
    PUT: Update the HR department
    DELETE: Delete the HR department

- While naming resources, use specific names over generic names. For instance, to read all programmers' details of a software firm, it is preferable to have a resource URI of the form /programmers (which tells you the type of resource) over the much more generic form /employees. This improves the intuitiveness of the APIs by clearly communicating the type of resources that it deals with.
- Keep the resource names that appear in the URI in lowercase to improve the readability of the resulting resource URI. Resource names may include hyphens; avoid using underscores and other punctuation.
- If the entity resource is represented in the JSON format, the field names used in the resource must conform to the following guidelines: use meaningful names for the properties; follow the camel case naming convention (the first letter of the name is lowercase, for example, departmentName); the first character must be a letter, an underscore (_), or a dollar sign ($), and the subsequent characters can be letters, digits, underscores, and/or dollar signs; avoid using the reserved JavaScript keywords.
- If a resource is related to another resource(s), use a subresource to refer to the child resource. You can use the path parameter in the URI to connect a subresource to its base resource. For instance, the resource URI path to get all employees belonging to the HR department (with id=10) will look like /departments/10/employees (a JAX-RS sketch of such subresource paths follows this list).
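Continuing the hypothetical JAX-RS sketch from earlier, subresource paths like these might be mapped as follows (Employee and EmployeeService are again illustrative assumptions):

    import java.util.List;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;

    @Path("departments")
    public class DepartmentResource {

        // GET /departments/10/employees
        @GET
        @Path("{deptId}/employees")
        public List<Employee> getEmployees(@PathParam("deptId") int deptId) {
            return EmployeeService.findByDepartment(deptId);
        }

        // GET /departments/10/employees/200
        @GET
        @Path("{deptId}/employees/{empId}")
        public Employee getEmployee(@PathParam("deptId") int deptId,
                                    @PathParam("empId") int empId) {
            return EmployeeService.find(deptId, empId);
        }
    }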
To get the details of the employee with id=200 in the HR department, you can use the following URI: /departments/10/employees/200. The resource path URI may contain plural nouns representing a collection of resources, followed by a singular resource identifier to return a specific resource item from the collection. This pattern can repeat in the URI, allowing you to drill down into a collection for a specific item. For instance, the following URI represents an employee resource identified by id=200 within the HR department: /departments/hr/employees/200.

Although the HTTP protocol does not place any limit on the length of the resource URI, it is recommended not to exceed 2,000 characters because of the restriction set by many popular browsers.

Best practice: Avoid using actions or verbs in the URI, as it refers to a resource.

Using HATEOAS in response representation

Hypertext as the Engine of Application State (HATEOAS) refers to the use of hypermedia links in the resource representations. This architectural style lets the clients dynamically navigate to the desired resource by traversing the hypermedia links present in the response body. There is no universally accepted single format for representing links between two resources in JSON.

Hypertext Application Language

The Hypertext Application Language (HAL) is a promising proposal that sets the conventions for expressing hypermedia controls (such as links) with JSON or XML. Currently, this proposal is in the draft stage. It mainly describes two concepts for linking resources:

- Embedded resources: This concept provides a way to embed another resource within the current one. In the JSON format, you will use the _embedded attribute to indicate the embedded resource.
- Links: This concept provides links to associated resources. In the JSON format, you will use the _links attribute to link resources.

Here is the link to this proposal: http://tools.ietf.org/html/draft-kelly-json-hal-06. It defines the following properties for each resource link:

- href: This property indicates the URI to the target resource representation.
- templated: This property would be true if the URI value for href has any PATH variable (template) inside it.
- title: This property is used for labeling the URI and for documentation purposes.
- hreflang: This property specifies the language for the target resource.
- name: This property is used for uniquely identifying a link.

The following example demonstrates how you can use the HAL format for describing the department resource containing hyperlinks to the associated employee resources. This example uses JSON HAL for representing resources, which is represented using the application/hal+json media type:

    GET /departments/10 HTTP/1.1
    Host: packtpub.com
    Accept: application/hal+json

    HTTP/1.1 200 OK
    Content-Type: application/hal+json

    {
        "_links": {
            "self": { "href": "/departments/10" },
            "employees": { "href": "/departments/10/employees" },
            "employee": { "href": "/employees/{id}", "templated": true }
        },
        "_embedded": {
            "manager": {
                "_links": { "self": { "href": "/employees/1700" } },
                "firstName": "Chinmay",
                "lastName": "Jobinesh",
                "employeeId": "1700"
            }
        },
        "departmentId": 10,
        "departmentName": "Administration"
    }

To summarize, we discussed the details of designing RESTful web APIs, including identifying the resources, using HTTP methods, and naming the web resources. Additionally, we got introduced to the Hypertext Application Language.

Adding Flash to your WordPress Theme

Packt
24 Dec 2009
11 min read
Adobe Flash—it's come quite a long way since my first experience with it as a Macromedia product (version 2 in 1997). Yet still, it does not adhere to W3C standards, requires a plugin to view, and above all, is a pretty pricey proprietary product. So why is everyone so hot on using it?

Love it or hate it, Flash is here to stay. It does have a few advantages that we'll take a quick look at. The Flash player plugin boasts the highest saturation rate around (way above other media player plugins), and it now readily accommodates audio and video, as video sites such as YouTube take advantage of it. It's pretty easy to add and upgrade for all major browsers. The price may seem prohibitive at first, but after the initial purchase, additional upgrades are reasonably priced. Plus, many third-party software companies offer very cheap authoring tools that allow you to create animations and author content using the Flash player format. (In most cases, no one needs to know you're using the $50 version of Swish and not the $800 Flash CS3 to create your content.) Above all, it can do so much more than just play video and audio (like most plugins). You can create seriously rich and interactive content, even entire applications, with it, and the best part is that no matter what you create, it is going to look and work exactly the same on all browsers and platforms. These are just a few of the reasons why so many developers choose to build content and applications for the Flash player. Oh, and did I mention you can easily make awesome, visually slick, audio-filled stuff with it? Yeah, that's why your client wants you to put it in their site.

Flash in your theme

A commonly requested use of Flash is a snazzy header within the theme of the site, the idea being that various relevant and/or random photographs or designs load into the header with some supercool animation (and possibly audio) every time a page loads or a section changes. I'm going to assume that if you're using anything that requires the Flash player, you're pretty comfortable with generating content for it. So, we're not going to focus on any Flash timeline tricks or ActionScripting. We'll simply cover getting your Flash content into your WordPress theme.

For the most part, you can simply take the HTML object embed code that Flash (or other third-party tools) will generate for you and paste it into the header area of your WordPress index.php or header.php template file.

Handling users without Flash, older versions of Flash, and IE6 users

While the previous method is extremely clean and simple, it doesn't help all of your site's users in dealing with Flash. What about users who don't have Flash installed, or have an older version that won't support your content? What about IE users constrained by ActiveX? You'll want your site and theme to gracefully handle users who do not have Flash (if you've used the overlay method, they'll simply see the CSS background image and probably not know anything is wrong!) or who have an older version of Flash that doesn't support the content you wish to display. The method below lets you add a line of text or a static image as an alternative, so people who don't have the plugin or the correct version installed are either served alternative content and are none the wiser, or served content that nicely explains that they need the plugin and directs them towards getting it. Most importantly, this method also nicely handles IE's ActiveX restrictions.

Is the ActiveX restriction still around?
In 2006, the IE browser upped its security, so users had to validate content that shows up in the Flash player (or any player) via Microsoft's ActiveX controls. Your Flash content starts to play, but there's a "grey outline" around the player area which may or may not mess up your design. If your content is interactive, then people need to click to activate it. This is annoying, and the main workaround involved "injecting" controls and players via JavaScript. Essentially, you need to include your Flash content via a JavaScript include file. As of April 2008, this restriction was reverted, but only if your users have updated their browser; chances are, if they intend on still using IE6 or 7, they haven't done this update.

Regardless of whether you are concerned about ActiveX restrictions, using JavaScript to help you instantiate your Flash will greatly add to the ease of embedding content. It will also make sure that users of all versions, or users who need to install Flash, are handled, either by directing them to the proper Flash installation and/or by letting them see an alternative version of the content.

swfObject

For a while, I used the standard swfObject method that was detailed in this great SitePoint article: http://www.sitepoint.com/article/activex-activationissue-ie. A similar, robust version of this JavaScript is located on Google Code's AJAX API: http://code.google.com/p/swfobject/wiki/hosted_library. You can download the script (it's very small) or you can link directly to the swfObject AJAX API URL:

    <script type="text/javascript"
        src="http://ajax.googleapis.com/ajax/libs/swfobject/2.2/swfobject.js">
    </script>

Whether downloaded or linked to the Google Code CDN, be sure to place this below your wp_head or any wp_enqueue_script calls in the <head> tags of your header.php template file or other head template file.

Adding a SWF to the template using swfObject

If you'd like to use the swfObject.js file and method, you can read the full documentation here: http://code.google.com/p/swfobject/wiki/documentation. But essentially, we're going to use the dynamic publishing option to include our SWF file. Using the SWF file included in this book's code packet, create a new directory in your theme called flash and place the SWF file in it. Then, create a div with alternative content and a script tag that includes the following JavaScript:

    <script type="text/javascript">
        swfobject.embedSWF("myContent.swf", "myContent",
            "300", "120", "9.0.0");
    </script>
    ...
    <div id="myContent"><p>Alternative content</p></div>
    ...

Add this ID rule to your stylesheet (I placed it just below my other header and intHeader ID rules):

    #flashHold {
        float: right;
        margin-top: 12px;
        margin-right: 47px;
    }

As long as you take care to make sure the div is positioned correctly, the object embed code has the correct height and width of your Flash file, and you're not accidentally overwriting any parts of the theme that contain WordPress template tags or other valuable PHP code, you're good to go.

What's the Satay method? It's a cleaner way to embed your Flash movies while still supporting web standards. Drew McLellan discusses its development in detail in this article: http://www.alistapart.com/articles/flashsatay. This method was fine on its own until IE6 decided to include its ActiveX security restriction. Nowadays, a modified embed method called the "nested-objects method" (http://www.alistapart.com/articles/flashembedcagematch/) is used with the swfObject JavaScript we just covered.
Good developer's tip: Even if you loathe IE (as lots of us developers tend to), it is an "industry standard" browser and you have to work with it. I've found Microsoft's IE blog (http://blogs.msdn.com/ie/) extremely useful in keeping tabs on IE so that I can better develop CSS-based templates for it. While you're at it, go ahead and subscribe to the RSS feeds for Firefox (http://developer.mozilla.org/devnews/), Safari (http://developer.apple.com/internet/safari/), and your other favorite browsers. You'll be surprised at the insight you can glean, which can be extremely handy if you ever need to debug CSS or JavaScript for one of those browsers.

jQuery Flash plugin

In the past year, as I've found myself making more and more use of jQuery, I've discovered and really liked Luke Lutman's jQuery Flash plugin. There is no CDN for this and it's not bundled with WordPress, so you'll need to download it and add it to your theme's js directory: http://jquery.lukelutman.com/plugins/flash/.

Embedding Flash files using the jQuery Flash plugin

As we're leveraging jQuery already, I find Luke's Flash plugin a little easier to deal with:

1. Load the script under the wp_head.
2. Place a div of alternative content; just the div of alternative content and nothing else!
3. Write the jQuery script that will replace that content, or show your alternative content for old or missing Flash players (a minimal sketch of this script appears later in this section).

I think you see why I liked this so much more.

Passing Flash a WordPress variable

So now you've popped a nice Flash header into your theme. Here's a quick trick to make it all the more impressive. If you'd like to keep track of which page, post, or category your WordPress user has clicked on and display a relevant image or animation in the header, you can pass your Flash SWF file a variable from WordPress using PHP.

I've made a small and simple Flash movie that will fit right over the top-right of my internal page's header. I'd like my Flash header to display some extra text when the viewer selects a different "column" (a.k.a. category). In this case, the animation will play and display OpenSource Magazine: On The New Web underneath the open source logo when the user selects the On The New Web category.

More fun with CSS: If you look at the final theme package available from this title's URL on the Packt Publishing site, I've included the original ooflash-sample.FLA file. You'll notice the FLA has a standard white background. If you look at my header.php file, you'll notice that I've set my wmode parameter to transparent. This way, my animation is working with my CSS background. Rather than beef up my SWF's file size with another open source logo, I simply animate over it! Even if my animation "hangs" or never loads, the user's perception and experience of the page is not hampered.

You can also use this trick as a "cheater's preloader". In your stylesheet, assign the div that holds your Flash object embed tags a background image of an animated preloading GIF, or some other image that indicates the user should expect something to load. The user will see this background image until your Flash file starts to play and covers it up. My favorite site to get and create custom loading GIFs is http://www.ajaxload.info/.
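Returning to step 3 of the jQuery Flash instructions above, here is a minimal sketch of what that script might look like, based on the plugin call used with flashvars later in this article (the file name and dimensions are illustrative):

    <script type="text/javascript">
        jQuery(document).ready(function(){
            // Replace the alternative-content div with the Flash movie;
            // browsers without Flash 8+ keep showing the div's contents.
            jQuery('#flashHold').flash(
                { src: 'mymovie.swf', width: 338, height: 150 },
                { version: 8 }
            );
        });
    </script>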
In your Flash authoring program, set up a series of animations or images that will load or play based on a variable set in the root timeline called catName. You'll pass this variable to your ActionScript. In my FLA example, if the catName variable does not equal On The New Web, then the main animation will play, but if the variable returns On The New Web, then the visibility of the movie clip containing the words OpenSource Magazine: On The New Web will be set to "true".

Now, let's get our PHP variable into our SWF file. In your object embed code where your SWFs are called, be sure to add the following code. If you plan on using the Satay embed method, your object embed will look like this:

    ...
    <script type="text/javascript">
        var flashvars = {
            catName: "<?php echo single_cat_title(''); ?>"
        };
        swfobject.embedSWF(
            "<?php bloginfo('template_directory'); ?>/flash/ooflash-sample.swf",
            "flashHold", "338", "150", "8.0.0",
            "expressInstall.swf", flashvars);
    </script>
    ...

If you'd like to use jQuery Flash, your jQuery will look like this:

    ...
    <script type="text/javascript">
        jQuery(document).ready(function(){
            jQuery('#flashHold').flash({
                src: '<?php bloginfo('template_directory'); ?>/flash/ooflash-sample.swf',
                width: 338,
                height: 150,
                flashvars: { catName: '<?php echo single_cat_title(''); ?>' }
            },
            { version: 8 });
        });
    </script>
    ...

Be sure to place the full path to your SWF file in the src and value parameters for the embed tags or jQuery src. Store your Flash file inside your theme's directory and link to it directly, that is, src="/mythemename/flas'); template tag. This will ensure that your SWF file loads properly.

Using this method, every time someone loads a page or clicks on a link on your site that is within the On The New Web category, PHP will render the template tag as myswfname.swf?catName=On The New Web, or whatever the $single_cat_title(""); for that page is. So your Flash file's ActionScript is going to look for a variable called catName in _root or _level0, and based on that value, do whatever you told it to do—call a function, go to a frame and animate; you can even name it.

For extra credit, you can play around with the other template tag variables, such as the_author_meta or the_date(), for example, and load up special animations, images, or call functions based on them.

5 blog posts that could make you a better Python programmer

Sam Wood
11 Feb 2019
2 min read
Python is one of the most important languages to master. It's top rated, fast growing, and in demand by businesses around the globe. There's a host of excellent insight across the web about how to become a better programmer with Python. Here are five blog posts we think you need to read to upgrade your skills and knowledge (a short code sketch for point 3 follows the list).

1. A Brief History of Python. Did you know Python is actually older than Java, R, and JavaScript? If you want to be a better Python programmer, it pays to know your history. This quick blog post takes you through the language's journey from Christmas hobby project to its modern ascendancy with version 3.

2. Do you write Python Code or Pythonic Code? Are you writing code in Python, or code for Python? When people talk about Pythonic code, they mean that the code uses Python idioms well, that it is natural in, or displays fluency in, the language. Are you writing code like you would write Java or C++? This 4-minute blog post gives quick tips on how to make your code Pythonic.

3. The Singleton Python Design Pattern in Depth. The singleton is a powerful design pattern that allows you to create only one instance of a class. You'd generally use it for things like a logging class and its subclasses, managing a connection to a database, or read-only singletons that store global state. This in-depth blog post takes you through the three principal ways to implement singletons, for better Python code.

4. Why is Python so good for artificial intelligence and machine learning? 5 Experts Explain. Python is the breakout language of data, zooming ahead of rival R to be dominant in the field of artificial intelligence and machine learning. But what is it about the programming language that makes it so well suited to this fast-growing field? In this blog post, five artificial intelligence experts weigh in on what they think makes Python perfect for AI and machine learning.

5. Top 7 Python Programming Books You Need To Read. That's right: we put a list in our list. But if you really want to become a better Python programmer, you'll want to get to grips with this stack of amazing Python books. Whether you're a complete beginner or more experienced, these seven Python titles are the perfect way to upgrade your knowledge.
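As a taste of point 3, here is one common way to implement a singleton in Python, a minimal sketch of the override-__new__ approach (the post itself covers this and the other implementation routes in more depth):

class Logger:
    """A class that only ever yields one instance."""
    _instance = None

    def __new__(cls):
        # Create the single instance on the first call; reuse it afterwards
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Logger()
b = Logger()
assert a is b  # both names point to the same object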

Arduino Development

Packt
22 Apr 2015
19 min read
Most systems using the Arduino have a similar architecture. They have a way of reading data from the environment (a sensor), they make decisions using the code running inside the Arduino, and they output those decisions to the environment using various actuators, such as a simple motor. Using three recipes from the book Arduino Development Cookbook, by Cornel Amariei, we will build such a system, and quite a useful one: a fan controlled by the air temperature. Let's break the process into three key steps. The first and easiest will be to connect LEDs to the Arduino; a few of them will act as a thermometer, displaying the room temperature. The second step will be to connect the sensor and program it, and the third will be to connect the motor. Here, we will learn these basic skills. (For more resources related to this topic, see here.)

Connecting an external LED
Luckily, the Arduino boards come with an internal LED connected to pin 13. It is simple to use and always there. But most times we want our own LEDs in different places of our system. It is possible that we connect something on top of the Arduino board and are unable to see the internal LED anymore. Here, we will explore how to connect an external LED.

Getting ready
For this step we need the following ingredients:
- An Arduino board connected to the computer via USB
- A breadboard and jumper wires
- A regular LED (the typical LED size is 3 mm)
- A resistor between 220 and 1,000 ohm

How to do it…
Follow these steps to connect an external LED to an Arduino board:
1. Mount the resistor on the breadboard.
2. Connect one end of the resistor to a digital pin on the Arduino board using a jumper wire.
3. Mount the LED on the breadboard.
4. Connect the anode (+) pin of the LED to the available pin on the resistor. We can determine the anode on the LED in two ways. Usually, the longer pin is the anode. Another way is to look for the flat edge on the outer casing of the LED; the pin next to the flat edge is the cathode (-).
5. Connect the LED cathode (-) to the Arduino GND using jumper wires.

Schematic
This is one possible implementation on the second digital pin. Other digital pins can also be used. Here is a simple way of wiring the LED:

Code
The following code will make the external LED blink:

// Declare the LED pin
int LED = 2;

void setup() {
  // Declare the pin for the LED as output
  pinMode(LED, OUTPUT);
}

void loop() {
  // Here we will turn the LED on and wait 200 milliseconds
  digitalWrite(LED, HIGH);
  delay(200);
  // Here we will turn the LED off and wait 200 milliseconds
  digitalWrite(LED, LOW);
  delay(200);
}

If the LED is connected to a different pin, simply change the LED value to the value of the pin that has been used.

How it works…
This is all semiconductor magic. When the second digital pin is set to HIGH, the Arduino provides 5 V of electricity, which travels through the resistor to the LED and GND. When enough voltage and current are present, the LED will light up. The resistor limits the amount of current passing through the LED. Without it, it is possible that the LED (or worse, the Arduino pin) will burn. Try to avoid using LEDs without resistors; this can easily destroy the LED or even your Arduino.

Code breakdown
The code simply turns the LED on, waits, and then turns it off again. Here we use a blocking approach, by means of the delay() function.
Here we declare the LED pin on digital pin 2:

int LED = 2;

In the setup() function, we set the LED pin as an output:

void setup() {
  pinMode(LED, OUTPUT);
}

In the loop() function, we continuously turn the LED on, wait 200 milliseconds, and then turn it off. After turning it off, we need to wait another 200 milliseconds; otherwise, it will instantaneously turn on again and we will only see a permanently lit LED.

void loop() {
  // Here we will turn the LED on and wait 200 milliseconds
  digitalWrite(LED, HIGH);
  delay(200);
  // Here we will turn the LED off and wait 200 milliseconds
  digitalWrite(LED, LOW);
  delay(200);
}

There's more…
There are a few more things we can do. For example, what if we want more LEDs? Do we really need to mount the resistor first and then the LED?

LED resistor
We do need the resistor connected to the LED; otherwise, there is a chance that the LED or the Arduino pin will burn. However, we can also mount the LED first and then the resistor. This means we will connect the Arduino digital pin to the anode (+) and the resistor between the LED cathode (-) and GND. If we want a quick cheat, check the following See also section.

Multiple LEDs
Each LED will require its own resistor and digital pin. For example, we can mount one LED on pin 2 and one on pin 3 and individually control each. What if we want multiple LEDs on the same pin? Due to the low voltage of the Arduino, we cannot really mount more than three LEDs on a single pin. For this we require a small resistor, 220 ohm for example, and we need to mount the LEDs in series. This means that the cathode (-) of the first LED will be connected to the anode (+) of the second LED, and the cathode (-) of the second LED will be connected to the GND. The resistor can be placed anywhere in the path from the digital pin to the GND.

See also
For more information on external LEDs, take a look at the following recipes and links:
- For more details about LEDs in general, visit http://electronicsclub.info/leds.htm
- To connect multiple LEDs to a single pin, read the instructable at http://www.instructables.com/id/How-to-make-a-string-of-LEDs-in-parallel-for-ardu/
- Because we are always lazy and we don't want to compute the needed resistor values, use the calculator at http://www.evilmadscientist.com/2009/wallet-size-led-resistance-calculator/
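One aside before moving on: the blink code above blocks inside delay(), so the Arduino can do nothing else while it waits. A minimal non-blocking variant of the same blink, using millis() instead, might look like this (a sketch, not part of the original recipe):

int LED = 2;
unsigned long lastToggle = 0;  // time of the last on/off switch
bool ledOn = false;

void setup() {
  pinMode(LED, OUTPUT);
}

void loop() {
  // Toggle the LED every 200 ms without ever blocking the loop
  if (millis() - lastToggle >= 200) {
    lastToggle = millis();
    ledOn = !ledOn;
    digitalWrite(LED, ledOn ? HIGH : LOW);
  }
  // Other work, such as reading a sensor, can happen here on every pass
}

This pattern becomes useful once a sketch also has to read a sensor and drive a fan, as ours will later.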
Now that we know how to connect an LED, let's also learn how to work with a basic temperature sensor, and build the thermometer we need.

Temperature sensor
Almost all sensors use the same analog interface; here, we explore a very useful and fun sensor that uses it. Temperature sensors are useful for obtaining data from the environment. They come in a variety of shapes, sizes, and specifications. We can mount one at the end of a robotic hand and measure the temperature in dangerous liquids, or we can just build a thermometer. Here, we will build a small thermometer using the classic LM35 and a bunch of LEDs.

Getting ready
The following are the ingredients required:
- An LM35 temperature sensor
- A bunch of LEDs, different colors for a better effect
- Some resistors between 220 and 1,000 ohm

How to do it…
The following are the steps to connect the temperature sensor and the LEDs:
1. Connect the LEDs next to each other on the breadboard.
2. Connect all the LED negative terminals (the cathodes) together and then connect them to the Arduino GND.
3. Connect a resistor to each positive terminal of the LEDs. Then, connect each of the remaining resistor terminals to a digital pin on the Arduino. Here, we used pins 2 to 6.
4. Plug the LM35 into the breadboard and connect its ground to the GND line. The GND pin is the one on the right when looking at the flat face.
5. Connect the leftmost pin on the LM35 to 5V on the Arduino.
6. Lastly, use a jumper wire to connect the center LM35 pin to an analog input on the Arduino. Here we used the A0 analog pin.

Schematic
This is one possible implementation using pin A0 for analog input and pins 2 to 6 for the LEDs. Here is a possible breadboard implementation:

Code
The following code will read the temperature from the LM35 sensor, write it on the serial, and light up the LEDs to create a thermometer effect:

// Declare the LEDs in an array
int LED[5] = {2, 3, 4, 5, 6};
int sensorPin = A0; // Declare the used sensor pin

void setup() {
  // Start the Serial connection
  Serial.begin(9600);
  // Set all LEDs as OUTPUTs
  for (int i = 0; i < 5; i++) {
    pinMode(LED[i], OUTPUT);
  }
}

void loop() {
  // Read the value of the sensor
  int val = analogRead(sensorPin);
  Serial.println(val); // Print it to the Serial
  // On the LM35 each degree Celsius equals 10 mV
  // 20 C is represented by 200 mV, which means 0.2 V / 5 V * 1023 = 41
  // Each degree is represented by an analog value change of approximately 2
  // Set all LEDs off
  for (int i = 0; i < 5; i++) {
    digitalWrite(LED[i], LOW);
  }
  if (val > 40 && val < 45) { // 20 - 22 C
    digitalWrite(LED[0], HIGH);
  } else if (val > 45 && val < 49) { // 22 - 24 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
  } else if (val > 49 && val < 53) { // 24 - 26 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
  } else if (val > 53 && val < 57) { // 26 - 28 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
    digitalWrite(LED[3], HIGH);
  } else if (val > 57) { // Over 28 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
    digitalWrite(LED[3], HIGH);
    digitalWrite(LED[4], HIGH);
  }
  delay(100); // Small delay for the Serial to send
}

Blow into the temperature sensor to observe how the temperature goes up or down.

How it works…
The LM35 is a very simple and reliable sensor. It outputs an analog voltage on the center pin that is proportional to the temperature; more exactly, it outputs 10 mV for each degree Celsius. For a common value of 25 degrees, it will output 250 mV, or 0.25 V. We use the ADC inside the Arduino to read that voltage and light up LEDs accordingly. If it's hot, we light up more of them; if not, fewer. If the LEDs are in order, we get a nice thermometer effect.

Code breakdown
First, we declare the used LED pins and the analog input to which we connected the sensor. We have five LEDs to declare so, rather than defining five variables, we can store all five pin numbers in an array with five elements:

int LED[5] = {2, 3, 4, 5, 6};
int sensorPin = A0;

We use the same array trick to simplify setting each pin as an output in the setup() function. Rather than calling the pinMode() function five times, we have a for loop that does it for us. It iterates through each value in the LED array and sets each pin as an output:

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 5; i++) {
    pinMode(LED[i], OUTPUT);
  }
}

In the loop() function, we continuously read the value of the sensor using the analogRead() function; then we print it on the serial:

int val = analogRead(sensorPin);
Serial.println(val);

At last, we create our thermometer effect. For each degree Celsius, the LM35 returns 10 mV more. We can convert this to our analogRead() value in this way: 5 V returns 1023, so a value of 0.20 V, corresponding to 20 degrees Celsius, returns 0.20 V / 5 V * 1023, which is equal to around 41.
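If you prefer to work in degrees rather than raw ADC counts, the same arithmetic can be wrapped in a small helper; here is a sketch based on the conversion above (not part of the original recipe):

// Convert an LM35 reading on the given pin to degrees Celsius
float readCelsius(int pin) {
  int raw = analogRead(pin);          // 0..1023 across 0..5 V
  float volts = raw * 5.0 / 1023.0;   // back to a voltage
  return volts * 100.0;               // LM35: 10 mV per degree Celsius
}

For example, a raw reading of 41 works out to roughly 20 degrees Celsius, matching the figures used in the thresholds below.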
We have five different temperature areas; we use standard if and else clauses to determine which region we are in, and then we light the required LEDs.

There's more…
Almost all analog sensors use this method to return a value: they output a voltage proportional to the value they read, which we can pick up with the analogRead() function. Here are just a few of the sensor types we can use with this interface:
Temperature
Humidity
Pressure
Altitude
Depth
Liquid level
Distance
Radiation
Interference
Current
Voltage
Inductance
Resistance
Capacitance
Acceleration
Orientation
Angular velocity
Magnetism
Compass
Infrared
Flexing
Weight
Force
Alcohol
Methane and other gases
Light
Sound
Pulse
Unique ID such as fingerprint
Ghost!

The last building block is the fan motor. Any DC fan motor will do for this; here we will learn how to connect and program it.

Controlling motors with transistors
We could control a motor by directly connecting it to the Arduino digital pin; however, any motor bigger than a coin would kill the digital pin and most probably burn the Arduino. The solution is to use a simple amplification device, the transistor, to aid in controlling motors of any size. Here, we will explore how to control larger motors using both NPN and PNP transistors.

Getting ready
To execute this recipe, you will require the following ingredients:
- A DC motor
- A resistor between 220 ohm and 10K ohm
- A standard NPN transistor (BC547, 2N3904, 2N2222A, TIP120)
- A standard diode (1N4148, 1N4001, 1N4007)
All these components can be found on websites such as Adafruit, Pololu, and Sparkfun, or in any general electronics store.

How to do it…
The following are the steps to connect a motor using a transistor:
1. Connect the Arduino GND to the long strip on the breadboard.
2. Connect one of the motor terminals to VIN or 5V on the Arduino. We use 5V if we power the board from the USB port. If we want higher voltages, we could use an external power source, such as a battery, and connect it to the power jack on the Arduino. However, even the power jack has an input voltage range of 7 V–12 V. Don't exceed these limitations.
3. Connect the other terminal of the motor to the collector pin on the NPN transistor. Check the datasheet to identify which terminal on the transistor is the collector.
4. Connect the emitter pin of the NPN transistor to the GND using the long strip or a long connection.
5. Mount a resistor between the base pin of the NPN transistor and one digital pin on the Arduino board.
6. Mount a protection diode in parallel with the motor. The diode should point to 5V if the motor is powered by 5V, or to VIN if we use an external power supply.

Schematic
This is one possible implementation on the ninth digital pin. The Arduino has to be powered by an external supply; if not, we can connect the motor to 5V and it will be powered with 5 volts. Here is one way of hooking up the motor and the transistor on a breadboard:

Code
For the coding part, nothing changes compared with a small motor directly mounted on the pin.
The code will start the motor for 1 second and then stop it for another one:

// Declare the pin for the motor
int motorPin = 2;

void setup() {
  // Define pin #2 as output
  pinMode(motorPin, OUTPUT);
}

void loop() {
  // Turn motor on
  digitalWrite(motorPin, HIGH);
  // Wait 1000 ms
  delay(1000);
  // Turn motor off
  digitalWrite(motorPin, LOW);
  // Wait another 1000 ms
  delay(1000);
}

If the motor is connected to a different pin, simply change the motorPin value to the value of the pin that has been used.

How it works…
Transistors are very neat components that are unfortunately hard to understand. We should think of a transistor as an electric valve: the more current we put into the valve, the more water it will allow to flow. The same happens with a transistor; only here, current flows. If we apply a current on the base of an NPN transistor, a proportional current will be allowed to pass from the collector to the emitter. The more current we put on the base, the larger the flow of current between the other two terminals. When we set the digital pin to HIGH on the Arduino, current passes from the pin to the base of the NPN transistor, thus allowing current to pass through the other two terminals. When we set the pin to LOW, no current goes to the base, so no current passes through the other two terminals. Another analogy would be a digital switch that allows current to pass from the collector to the emitter only when we 'push' the base with current.

Transistors are very useful because, with a very small current on the base, we can control a very large current from the collector to the emitter. A typical amplification factor, called beta, for a transistor is 200. This means that, for a base current of 1 mA, the transistor will allow a maximum of 200 mA to pass from the collector to the emitter.

An important component is the diode, which should never be omitted. A motor is also an inductor; whenever an inductor is cut off from power, it may generate large voltage spikes, which could easily destroy a transistor. The diode makes sure that all current coming out of the motor goes back to the power supply and not through the transistor.

There's more…
Transistors are handy devices; here are a few more things that can be done with them.

Pull-down resistor
The base of a transistor is very sensitive. Even touching it with a finger might make the motor turn. A solution to avoid unwanted noise and accidental starts is to use a pull-down resistor on the base pin, as shown in the following figure. A value of around 10K is recommended, and it will safeguard the transistor from accidentally starting.

PNP transistors
A PNP transistor is even harder to understand. It uses the same principle, but in reverse: current flows from the base to the digital pin on the Arduino, and if we allow that current to flow, the transistor will allow current to pass from its emitter to its collector (yes, the opposite of what happens with an NPN transistor). Another important point is that the PNP is mounted between the power source and the load we want to power up. The load, in this case a motor, will be connected between the collector on the PNP and the ground. A key point to remember while using PNP transistors with an Arduino is that the maximum voltage on the emitter is 5 V, so the motor will never receive more than 5 V. If we use an external power supply for the motor, the base will have a voltage higher than 5 V and will burn the Arduino.
One possible solution, which is quite complicated, has been shown here:

MOSFETs
Let's face it; NPN and PNP transistors are old. These days there are better devices that can provide much better performance. They are called metal-oxide-semiconductor field-effect transistors; normal people just call them MOSFETs, and they work mostly the same way. The three pins on a normal transistor are called the collector, base, and emitter; on a MOSFET, they are called the drain, gate, and source. Operation-wise, we can use them exactly the same way as normal transistors: when voltage is applied at the gate, current will pass from the drain to the source in the case of an N-channel MOSFET. A P-channel MOSFET is the equivalent of a PNP transistor. However, there are some important differences in the way a MOSFET works compared with a normal transistor. Not all MOSFETs can be properly driven by the Arduino; usually, logic-level MOSFETs will work. Some of the famous N-channel MOSFETs are the FQP30N06, the IRF510, and the IRF520. The first one can handle up to 30 A and 60 V, while the following two can handle 5.6 A and 10 A, respectively, at 100 V. Here is one implementation of the previous circuit, this time using an N-channel MOSFET. We can also use the following breadboard arrangement:

Different loads
A motor is not the only thing we can control with a transistor. Any kind of DC load can be controlled: an LED, a light, other tools, even another Arduino can be powered up by an Arduino and a PNP or NPN transistor. Arduinoception!

See also
- For general and easy-to-use motors, Solarbotics is quite nice. Visit the site at https://solarbotics.com/catalog/motors-servos/.
- For higher-end motors that pack quite some power, Pololu has made a name for itself. Visit the site at https://www.pololu.com/category/51/pololu-metal-gearmotors.

Putting it all together
Now that we have the three key building blocks, we need to assemble them together.
For the code, we only need to briefly modify the temperature sensor code so that it also drives the motor:

// Declare the LEDs in an array
int LED[5] = {2, 3, 4, 5, 6};
int sensorPin = A0; // Declare the used sensor pin
int motorPin = 9;   // Declare the used motor pin

void setup() {
  // Start the Serial connection
  Serial.begin(9600);
  // Set all LEDs as OUTPUTs
  for (int i = 0; i < 5; i++) {
    pinMode(LED[i], OUTPUT);
  }
  // Define motorPin as output
  pinMode(motorPin, OUTPUT);
}

void loop() {
  // Read the value of the sensor
  int val = analogRead(sensorPin);
  Serial.println(val); // Print it to the Serial
  // On the LM35 each degree Celsius equals 10 mV
  // 20 C is represented by 200 mV, which means 0.2 V / 5 V * 1023 = 41
  // Each degree is represented by an analog value change of approximately 2
  // Set all LEDs off
  for (int i = 0; i < 5; i++) {
    digitalWrite(LED[i], LOW);
  }
  if (val > 40 && val < 45) { // 20 - 22 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(motorPin, LOW); // Fan OFF
  } else if (val > 45 && val < 49) { // 22 - 24 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(motorPin, LOW); // Fan OFF
  } else if (val > 49 && val < 53) { // 24 - 26 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
    digitalWrite(motorPin, LOW); // Fan OFF
  } else if (val > 53 && val < 57) { // 26 - 28 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
    digitalWrite(LED[3], HIGH);
    digitalWrite(motorPin, LOW); // Fan OFF
  } else if (val > 57) { // Over 28 C
    digitalWrite(LED[0], HIGH);
    digitalWrite(LED[1], HIGH);
    digitalWrite(LED[2], HIGH);
    digitalWrite(LED[3], HIGH);
    digitalWrite(LED[4], HIGH);
    digitalWrite(motorPin, HIGH); // Fan ON
  }
  delay(100); // Small delay for the Serial to send
}

Summary
In this article, we learned three basic skills: connecting an LED to the Arduino, connecting a sensor, and connecting a motor.

Resources for Article:
Further resources on this subject:
Internet of Things with Xively [article]
Avoiding Obstacles Using Sensors [article]
Hardware configuration [article]

GQL (Graph Query Language) joins SQL as a Global Standards Project and will be the international standard declarative query language for graphs

Amrata Joshi
19 Sep 2019
6 min read
On Tuesday, the team at Neo4j, the graph database management system, announced that the international committees behind the development of the SQL standard have voted to initiate GQL (Graph Query Language) as a new database query language. GQL is going to be the international standard declarative query language for property graphs, and it is also a Global Standards Project. GQL is developed and maintained by the same international group that maintains the SQL standard.

How did the proposal for GQL pass?
The initiative for GQL was first put forward in the GQL Manifesto in May last year. In June this year, the national standards bodies across the world from ISO/IEC's Joint Technical Committee 1 (responsible for IT standards) started voting on the GQL project proposal. The ballot closed earlier this week and the proposal passed: ten countries, including Germany, Korea, the United States, the UK, and China, voted in favor, and seven countries agreed to put forward their experts to work on the project. Japan was the only country to vote against, on the grounds that existing languages already do the job: SQL/Property Graph Query extensions, along with the rest of the SQL standard, can achieve the same thing.

According to the Neo4j team, the GQL project will initiate the development of next-generation technology standards for accessing data. Its charter mandates building on the core foundations established by SQL and ongoing collaboration to ensure SQL and GQL interoperability and compatibility. GQL reflects the rapid growth of the graph database market and the increasing adoption of the Cypher language.

Stefan Plantikow, GQL project lead and editor of the planned GQL specification, said, "I believe now is the perfect time for the industry to come together and define the next generation graph query language standard."

Plantikow further added, "It's great to receive formal recognition of the need for a standard language. Building upon a decade of experience with property graph querying, GQL will support native graph data types and structures, its own graph schema, a pattern-based approach to data querying, insertion and manipulation, and the ability to create new graphs, and graph views, as well as generate tabular and nested data. Our intent is to respect, evolve, and integrate key concepts from several existing languages including graph extensions to SQL."

Keith Hare, who has served as the chair of the international SQL standards committee for database languages since 2005, charted the progress toward GQL: "We have reached a balance of initiating GQL, the database query language of the future, whilst preserving the value and ubiquity of SQL." Hare further added, "Our committee has been heartened to see strong international community participation to usher in the GQL project. Such support is the mark of an emerging de jure and de facto standard."

The need for a graph-specific query language
Researchers and vendors needed a graph-specific query language because of the following limitations:
- The SQL/PGQ language is restricted to read-only queries.
- SQL/PGQ cannot project new graphs.
- SQL/PGQ can only access graphs that are defined as graph views over SQL tables.
Researchers and vendors needed a language like Cypher that would cover insertion and maintenance of data, not just data querying. But SQL wasn't the apt model for a graph-centric language that takes graphs as query inputs and outputs a graph as a result.
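To make the discussion concrete, here is what a property graph query looks like in Cypher, the language whose concepts GQL draws on; the Person/ACTED_IN/Movie schema below is purely an illustrative example, not something defined by the standard:

// Find people who acted in movies released after 2000
MATCH (p:Person)-[:ACTED_IN]->(m:Movie)
WHERE m.released > 2000
RETURN p.name, m.title

A pattern such as (p:Person)-[:ACTED_IN]->(m:Movie) matches paths in the graph directly, which is exactly the kind of pattern-based querying the GQL charter calls for.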
GQL, on the other hand, builds on openCypher, a project that brings Cypher to Apache Spark and gives users a composable graph query language.

SQL and GQL can work together
According to most of the companies and national standards bodies that support the GQL initiative, GQL and SQL are not competitors. Instead, these languages can complement each other via interoperation and shared foundations. Alastair Green, Query Languages Standards & Research Lead at Neo4j, writes, "A SQL/PGQ query is in fact a SQL sub-query wrapped around a chunk of proto-GQL." SQL is a language built around tables, whereas GQL is built around graphs; users can use GQL to find and project a graph from a graph.

Green further writes, "I think that the SQL standards community has made the right decision here: allow SQL, a language built around tables, to quote GQL when the SQL user wants to find and project a table from a graph, but use GQL when the user wants to find and project a graph from a graph. Which means that we can produce and catalog graphs which are not just views over tables, but discrete complex data objects."

It is still not clear when the first implementable version of GQL will be out. The official page reads, "The work of the GQL project starts in earnest at the next meeting of the SQL/GQL standards committee, ISO/IEC JTC 1 SC 32/WG3, in Arusha, Tanzania, later this month. It is impossible at this stage to say when the first implementable version of GQL will become available, but it is highly likely that some reasonably complete draft will have been created by the second half of 2020."

Developer community welcomes the new addition
Users are excited to see how GQL will incorporate Cypher. A user commented on Hacker News, "It's been years since I've worked with the product and while I don't miss Neo4j, I do miss the query language. It's a little unclear to me how GQL will incorporate Cypher but I hope the initiative is successful if for no other reason than a selfish one: I'd love Cypher to be around if I ever wind up using a GraphDB again."

A few others mistook GQL for Facebook's GraphQL and are sceptical about the name. One comment on Hacker News reads, "Also, the name is of course justified, but it will be a mess to search for due to (Facebook) GraphQL." A user commented, "I read the entire article and came away mistakenly thinking this was the same thing as GraphQL." Another user commented, "That's quiet an unfortunate name clash with the existing GraphQL language in a similar domain."

Other interesting news in Data
Media manipulation by deepfakes and cheap fakes requires both AI and social fixes, finds a Data & Society report
Percona announces Percona Distribution for PostgreSQL to support open source databases
Keras 2.3.0, the first release of multi-backend Keras with TensorFlow 2.0 support, is now out

Techniques and Practices of Game AI

Packt
14 Jan 2016
10 min read
In this article by Peter L Newton, author of the book Learning Unreal AI Programming, we will understand the fundamental techniques and practices of game AI. These are the building blocks for developing an amazing and interesting game AI. (For more resources related to this topic, see here.)

Navigation
While not all of the following components are necessary to achieve AI navigation, they all contribute critical feedback that can affect navigation. Navigating within a world is limited only by the pathways within the game. Navigation for AI is built up of the following things:
- Path following (path nodes): Another solution similar to NavMesh, path nodes can designate the space in which the AI traverses.
- Navigation mesh: Using tools such as Navigation Mesh, also known as NavMesh, you can designate areas in which the AI can traverse. NavMesh generates a plot of grids that is used to calculate the path and cost during navigation. It's important to know that this is only one of several pathfinding techniques available; we use it because it works well in this demonstration.
- Behavior trees: Using behavior trees to influence your AI's next destination can create a more interesting player experience. The AI not only calculates its requested destination, but also decides whether it should enter the scene with a cartwheel double backflip, no hands, or try the triple somersault to jazz hands.
- Steering behaviors: Steering behaviors affect the way the AI moves while navigating to avoid obstacles. This also means using steering to create formations with the fleets that you have set to attack the king's wall. Steering can be used in many ways to influence the movement of the character.
- Sensory systems: Sensory systems can provide critical details, such as players nearby, sound levels, cover nearby, and many other variables of the environment that can alter movement. It's critical that your AI understands the changing environment so that it doesn't break the illusion of being a real opponent.

Achieving realistic movement with steering
When you think of what steering does for a car, you would be right to imagine that the same idea applies to game AI navigation. Steering influences the movement of AI elements as they traverse to their next destination. The influences can be supplied as necessary, but we will go over the most commonly used ones. Avoidance is used essentially to avoid colliding with oncoming AI. Flocking is another key factor in steering; you commonly see an example of it while watching a school of fish. This phenomenon, known as flocking, is useful in simulating interesting group movement: a complete panic, say, or a school of fish. The goal of steering behaviors is to achieve realistic movement behavior within the player's world.

Creating character with randomness and probability
Character is what randomness and probability add to a bot's decision making. If a bot attacked you the same way, always entered the scene the same way, and annoyed you with its laugh after every successful hit, it wouldn't make for a unique experience; the AI always does the same thing. By using randomness and probability, you can instead make the AI laugh based on probability, or introduce randomness into the AI's choice of skill. Another great by-product of applying randomness and probability is that it allows you to introduce levels of difficulty: you can lower the chance of missing the skill cast or even allow the bots to aim more precisely. If you have bots who wander around looking for enemies, their next destination can be randomly chosen.
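As a rough illustration of how cheap this is to add, here is a UE4 C++ sketch of probability-driven behavior; PlayTaunt(), CastSkill(), and the Skills array are hypothetical stand-ins for your own bot code:

// Taunt only some of the time, so the bot doesn't feel scripted
if (FMath::FRand() < 0.3f)  // 30% chance after a successful hit
{
    PlayTaunt();
}

// Pick the next skill at random instead of always using the same one
const int32 SkillIndex = FMath::RandRange(0, Skills.Num() - 1);
CastSkill(Skills[SkillIndex]);

Raising or lowering probabilities like the 0.3f above is also a simple lever for the difficulty levels mentioned earlier.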
Creating complex decision making with behavior trees
Finite State Machines (FSMs) allow your bot to perform transitions between states, letting it go from wandering to hunting and then to killing. Behavior trees are similar but allow more flexibility. Behavior trees allow hierarchical FSMs, which introduce another layer of decisions: the bot decides between branches of behaviors that define the state it is in. UE4 provides a tool called Behavior Tree; its editor allows you to modify AI behavior quickly and with ease. The following sections cover the components found within UE4's Behavior Tree.

Root
This node is the starting node that sends the signal to the next node in the tree. It connects to a composite that begins your first tree. You may notice that you are required to use a composite first to define a tree and then create the tasks for that tree. This is because a hierarchical FSM creates branches of states. These states will be populated with other states or tasks, allowing easy transitions between multiple states.

Decorators
A decorator is a node that you add on top of another node as a "decoration". This could be, for example, a Force Success decorator when using a sequence composite, or a loop that has a node's actions repeated a number of times. I used a decorator in the AI we will make that tells it to update to the next available route. Consider the following screenshot:

In the preceding screenshot, you see the Attack & Destroy decorator at the top of the composite, which defines the state. This state includes two tasks, Attack Enemy and Move To Enemy, the latter of which also has a decorator telling it to execute only when the bot state is searching.

Composites
These are the starting points of the states. They define how the state will behave with returns and execution flow. There is a Selector in our example that executes each of its children from left to right; it doesn't fail, but returns success when one of its children returns success. Therefore, it is good for a state that doesn't check for successfully executed nodes. The Sequence executes its children in a similar fashion to the Selector, but returns a fail message when one of its children returns fail. This means that the nodes are required to return a success message to complete the sequence. Last but not least is Simple Parallel, which allows you to execute a task and a tree at essentially the same time. This is great for creating a state that requires another task to always be called. To set it up, you first connect it to a task that it will execute. The second task or state that is connected continues to be called with the first task until the first task returns a success message.

Services
Services run as long as the composite they are added to stays activated. They tick on the intervals that you set within their properties; another float property allows you to create deviations in the tick intervals. Services are used to modify the state of the AI in most cases, because they are always called. For example, in the bot that we will create, we add a service to the first branch of the tree so that it is called without interruption, and can thus maintain the state that the bot should be in at any given moment.
This service, called Detect Enemy, actually runs a deviating cycle that updates Blackboard variables, such as State and EnemyActor:

Tasks
Tasks do the dirty work and report with a success or failed message if necessary. There are two nodes you'll use most often when working with a task: Event Receive Execute, which receives the signal to execute the connected scripts, and Finish Execute, which sends the signal back, returning true or false on success. The latter is important when making a task meant for the Sequence composite.

Blackboards
Blackboards are used to store variables within the behavior tree of the AI. In our example, we store an enumeration variable, State, to hold the state; TargetPoint to hold the currently targeted enemy; and Route, which stores the current route position the AI has been requested to travel to, just to name a few. Blackboards work just by setting a public variable of a node to one of the available Blackboard variables in the drop-down menu. The naming convention shown in the following screenshot makes this process streamlined:

Sensory system
Creating a sensory system is heavily based on the environment where the AI will be fighting the player. It will need to be able to find cover, evade the enemy, get ammo, and do whatever else you feel will create an immersive AI for your game. Games with AI that challenges the player create a unique individual experience. A good sensory system contributes the critical information that makes for reactive AI. In this project, we use the sensory system to detect pawns that the AI can see. We also use functions to check for the line of sight of the enemy, to check whether there is another pawn in our path, and to check for cover and other resources within the area.

Machine learning
Machine learning is a branch of its own. This technique allows AI to learn from situations and simulations. The inputs come from the environment, including the context in which the bot acts, which allows it to make decisive actions. In machine learning, the inputs are put through a classifier, which can predict a set of outputs with a certain level of certainty. Classifiers can be combined into ensembles to increase the accuracy of the probabilistic prediction. We don't dig heavily into this subject, but I will provide some material for those interested.

Tracing
Tracing allows an actor within the world to detect objects by ray tracing. A single line trace is sent out and, if it collides with an actor, the actor is returned, including the information about the impact. Tracing is used for many reasons; one way it is used in FPS games is to detect hits. Are you familiar with the hit box? When your player shoots in a game, a trace is shot out that collides with the opponent's hit box, determining the damage to your opponent and, if you're skillful enough, resulting in their death. There are other shapes available for traces, such as spheres, capsules, and boxes, which allow tracing for different situations. Recently, I used a box trace for my car in order to detect objects near it.

Influence mapping
Influence mapping isn't a finite approach; it's the idea that specific locations on the map contribute information that directly influences the player or AI. An example of using influence mapping with AI is presence falloff. Say we have enemy AI in a group. Their presence map would create a radial circle around the group with an intensity based on the size of the group.
This way, other AI elements know that, on entering this area, they're entering a zone occupied by enemy AI. Practical information isn't the only thing people use this for, so just understand that it's meant to provide another level of input to help your bot make additional decisions.

Summary
In this article, we saw the fundamental techniques and practices of game AI. We saw how to implement navigation, achieve realistic movement of AI elements, and create characters with randomness in order to achieve a sense of realism. We also looked at behavior trees and all their constituent elements. Further, we touched upon some aspects related to AI, such as machine learning and tracing.

Resources for Article:
Further resources on this subject:
Overview of Unreal Engine 4 [article]
The Unreal Engine [article]
Creating weapons for your game using UnrealScript [article]

Creating a basic JavaScript plugin

Packt
17 Jan 2014
9 min read
(For more resources related to this topic, see here.)

Getting started with an empty plugin
To get started, create three files called manifest.xml, MyCompany.WebAccess.Plugin.debug.js, and MyCompany.WebAccess.Plugin.min.js. In the manifest.xml file, place the following XML:

<WebAccess version="12.0">
  <plugin name="MyCompany Plugin - Web Access" vendor="Gordon Beeming"
          moreinfo="http://31og.com" version="1.0">
    <modules>
      <module namespace="MyCompany.WebAccess.Plugin" loadAfter="TFS.Agile.TaskBoard.View"/>
      <module namespace="MyCompany.WebAccess.Plugin" loadAfter="TFS.Agile.Boards.Controls"/>
    </modules>
  </plugin>
</WebAccess>

In the preceding code, because the plugin node has the attributes name, vendor, moreinfo, and version, we will be able to easily identify our plugin in the TFS Web Access admin area. Under the modules node, you will see that we have added two child module nodes. This informs TFS that we want to load our MyCompany.WebAccess.Plugin namespace after the TFS.Agile.TaskBoard.View and TFS.Agile.Boards.Controls namespaces, which are the namespaces loaded on the task board and portfolio boards.

You can get the base of this plugin from the sample code in the MyCompany.WebAccess.Plugin - Base.js file. If you have used the RequireJs module loader, you will notice that this syntax is very familiar. In the base code, you will see a bit of code like the following:

TfsWebAccessPlugin.prototype.initialize = function () {
  // place code here to get started
  alert('MyCompany.WebAccess.Plugin is running');
};

This initialize method is where you start gaining control of what is happening in Web Access. Take all the code in the base code and place it in the debug.js file.

Importing a plugin into TFS Web Access
The first part of importing a plugin into TFS is to make sure that you have placed a minified version of your *.debug.js contents into your *.min.js file. Update the version of your plugin in the manifest.xml file if required; for now, we will leave it at 1.0. Zip the three files we created; the name of this ZIP file doesn't make a difference to the usage of the plugin. Then follow these steps:
1. Browse to the server's home page and click on the Administer Server button in the top-right corner, as shown in the following screenshot:

The Administer Server Button

2. Click on the Extensions tab and then click on Install.
3. In the modal window, click on browse to browse for the ZIP file you created with the contents of the plugin and then click on OK.
4. You will now see that the plugin is visible in the extensions screen but is currently not enabled. Click on Enable and then on OK to enable it, as shown in the following screenshot:

Web access extension when disabled

When you navigate to any of the boards, you will see the alert that we placed in the initialize function.

Setting up the debug mode
We have just imported our plugin into TFS, and this was quite a long process. Although it is fine to upload the plugin this way once it is finished, the process becomes very time consuming when we need to make changes: you have to go through all of it just to see each change. So, we will use some tricks that will help us debug our extension.

Enabling the Script Debug Mode
Navigate to the TFS URL with _diagnostics appended at the end, that is, http://gordon-pc:8080/tfs/_diagnostics. On this page, we will click on the Script Debug Mode link, which should currently be disabled.
This should also switch Client Trace Point Collector to Enabled, as shown in the following screenshot:

TFS diagnostics settings

This will now make TFS use the debug.js file instead of the min.js file. You will also see more requests for JavaScript files, as each file is now streamed separately instead of being bundled together for better load performance. For this reason, it should be clear that this must not be enabled on a production environment.

Configuring a Fiddler AutoResponder rule
The next part is to configure Fiddler to automatically respond to any requests for your plugin from the server with your local debug.js file. You can download Fiddler from http://fiddler2.com/. We are going to use Fiddler to intercept the request for our plugin's JavaScript file from TFS and use our local version of the plugin. The first step is to start up Fiddler and make sure you can see the request for the MyCompany.WebAccess.Plugin.js file, which should have a URL similar to http://gordon-pc:8080/tfs/_static/tfs/12/_scripts/TFS/debug//tfs/_plugins/1957/MyCompany.WebAccess.Plugin.js.

In Fiddler, switch to the AutoResponder tab and check Enable automatic responses and Unmatched requests passthrough. Now click on Add Rule and, in the Rule editor, use the regex:http://gordon-pc:8080/tfs/_static/tfs/12/_scripts/TFS/.+/MyCompany.WebAccess.Plugin.js rule; this puts a wildcard on the mode and plugin ID that are currently in use. In the second textbox, write down the full location of the debug.js file for this plugin and then click on Save. Add a second rule in the same pattern, but this time, in the second textbox, use header:CacheControl=no-cache and click on Save. You should see something similar to the following screenshot in Fiddler:

Fiddler AutoResponder rule added

This will now make Web Access use your local debug.js file for all requests for the plugin in TFS. To try this out, go to the debug.js file, change the alert to we have added debugging, and save the file. Refresh the board, and you will see that, without any additional effort, the alert changed.

Adding information to display work items
We will be going through some of the snippets that make a difference and are crucial to our plugin working correctly. The easiest way to make use of these types of plugins is to change the HTML based on the information available in the HTML; this is useful for small changes, such as displaying the IDs of work items on the work item cards on the boards. For this, on initialization of your plugin, you would use the setInterval function in JavaScript and call the following function every 500 milliseconds:

function TaskBoardFunctions() {
  // Replace IDs for tasks
  $("#taskboard-table .tbTile").each(function () {
    var id = $(this).attr("id");
    id = id.split('-')[1];
    $(this).find(".tbTileContent .witTitle").html(
      "<span style='font-weight:bold;'>" + id + "</span> - " +
      $(this).find(".witTitle").html());
  });
  // Replace IDs for the parent (requirement) rows
  $("#taskboard-table .taskboard-row .taskboard-parent").each(function () {
    var id = $(this).attr("id");
    if (id != undefined) {
      id = id.split('_')[1];
      id = id.substring(1);
      $(this).find(".witTitle").html(
        "<span style='font-weight:bold;'>" + id + "</span> - " +
        $(this).find(".witTitle").html());
    }
  });
}

This function just looks for all work items on the page, using the IDs that are specified in the attributes of the HTML elements, and adds the IDs to the UI.
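A minimal way to wire this up inside the plugin's initialize method is a sketch like the following, using the 500 ms interval suggested above:

TfsWebAccessPlugin.prototype.initialize = function () {
  // Re-scan the board twice a second so newly rendered cards get IDs too
  setInterval(TaskBoardFunctions, 500);
};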
A better way to do this would be to make use of the events in the API, and only make modifications to the displayed information when necessary. You would still use something similar to the preceding code for your initial loading, to go through the board and set all the information you want to display; however, you would rely on the events for any further updates. So, in this case, we would use the preceding code to scan for all the IDs on the page and then pass them through to a method such as the following one, which queries the work item store. TFS has a configurable value that limits the number of results that can be returned per query through the JavaScript API, and for this reason we query 100 work items at a time; you can change this if it's not applicable to your plugin.

Core.prototype.loadWorkItemsWork = function (idsToFetch, onComplete, that) {
  var takeAmount = 100;
  if (takeAmount >= idsToFetch.length) {
    takeAmount = idsToFetch.length;
  }
  if (takeAmount > 0) {
    that.WorkItemManager.store.beginPageWorkItems(
      idsToFetch.splice(0, takeAmount),
      ["System.Id", "System.State"],
      function (payload) {
        that.loadWorkItemsWork(idsToFetch, onComplete, that);
        $.each(payload.rows, function (index, row) {
          onComplete(index, row, that);
        });
      },
      function (err) {
        that.loadWorkItemsWork(idsToFetch, onComplete, that);
        alert(err);
      });
  }
};

As you can see, we are querying the work item store for the ID and the state of each work item on the page. We then pass the results off to an onComplete function that uses jQuery to find the elements by ID, and we alter the displayed information to show the ID and, on the task board, the state of the requirement. If you use all the sample code and upload it into TFS, you will see a portfolio board like the one shown in the following screenshot:

IDs on the portfolio board

And on the task board, you will see the following:

IDs and State on the task board

You can see that the tasks have IDs on them, which are the same as on the portfolio boards, and the requirements listed on the left have IDs and their current states.

Summary
In this article, we covered customizing the TFS dashboard to display information that helps us find out a team's current status by pinning queries, build status, and recent changes to the source code. We then made some changes to the columns displayed in the portfolio backlog and the quick add panel. We finished off by going through what is required to create a TFS Web Access plugin.

Resources for Article:
Further resources on this subject:
Ensuring Quality for Unit Testing with Microsoft Visual Studio 2010 [Article]
Team Foundation Server 2012 [Article]
The Command Line [Article]

Sharing a Mind Map: Using the Best of Mobile and Web Features

Packt
28 Nov 2012
14 min read
Sharing a mind map using the exporting features of FreeMind is explored in this article. Besides exporting, we can upload the mind map on the web. We are going to deal with the options offered, and we are also adding another condiment to the recipes to spice them up. There are plenty of features that can make our mind map more attractive to the reader, but we have to dig into the different alternatives to make that happen. Moreover, we are going to weigh the pros and cons of each exporting feature before exporting our mind map. When designing the activity and considering how useful the reader can find the mind map, we are also bearing in mind which exporting features to work with afterwards. Therefore, it is very important to learn the options and their possibilities. Besides, we can improve the exporting possibilities by using multimedia assets, which can enhance our mind map as well.

Exporting the mind map in different formats is going to be the main concern when designing and creating mind maps. We are working with FreeMind, and we see the maps as we design them. However, when exporting the maps, we can see them in different ways; hence the importance of the exporting feature.

Exporting a branch as a new map or HTML
In this recipe we are going to split the mind map in order to create a new one, using one part of the original. It is very important to do this when the mind map is getting overcrowded, that is to say, when we add too much information and we still want to represent the subject matter and write the exact information.

Getting ready
It is time to design a new mind map out of a node that already exists in the one that we are designing; therefore, we will export a branch in order to create a new map. Another option is to export the branch as HTML.

How to do it...
When writing the mind map, if we feel that we need to split it, or if we want to keep on writing but do not have enough space, we can export a branch as a new mind map. We must bear in mind that this branch appears as the root node of the new map, and its sibling nodes are exported as well. In the first part of the recipe we are going to focus on the two ways to export the branch; in the second part we will compare the results of exporting it one way or the other. The following are the steps that you have to perform:
1. Open the file that you are going to work with. In case you are working with a new file, you need to save it before exporting the branch.
2. Click on the node that you want to make the root node, as shown in the following screenshot:
3. Click on File | Export | Branch As New Mind Map, as shown in the following screenshot:
4. A pop-up window appears. Write a name for your file and click on Save.
5. A red arrow appears on the branch, indicating that the branch was exported as a new mind map, as shown in the following screenshot:
6. The branch exported as a mind map looks like the following screenshot:
It is important to point out that when exporting the branch of the mind map, the nodes, whether folded or not, no longer appear in the original map, as they are exported to the new mind map.
To export another branch of the mind map as HTML:
1. Click on the node to export, as shown in the following screenshot:
2. Click on File | Export | Branch as HTML, as shown in the following screenshot:

How it works...
The main difference between exporting the branch as HTML and exporting it as a new mind map is that, with HTML, the sibling nodes remain in the original mind map. So, we only export the branch, and it appears in our default web browser. Another relevant feature is that when exporting the branch as HTML, we see the map in a different format, not as a mind map itself. It is shown in the following screenshot:

We can also save this HTML file. Click on the down arrow next to Firefox, click on Save Page As, write a name for the file, and click on Save. The file is saved on your computer, as shown in the preceding screenshot.

Exporting the mind map to bitmaps or vector graphics
We can export a mind map as an image, using any of the following three graphic file formats: JPEG, PNG, and SVG. We are going to analyze the options, although they are very simple.

If the mind map has bitmaps, we can export it as PNG (short for Portable Network Graphics). It is a file format that provides advanced graphic features and lossless compression. It is advisable to use this type of file format when working with bitmaps and we don't want to lose image quality while rendering the mind map to the final bitmap.

In case we want to export the mind map in the file with the smallest possible size, we can use the JPEG (short for Joint Photographic Experts Group) file format, which uses lossy compression. Lossy compression discards some color information and replaces it with pixels of approximated values, which can add noise where pixels of different colors meet. If the mind map has many photographs, the best choice for the smallest file size, at the cost of some image quality, is JPEG. JPEG is a compressed graphic file format that is used for images that have many colors, that is to say, pictures taken with any type of photographic camera.

The third option is to export the mind map as SVG (short for Scalable Vector Graphics). It is advisable to export the mind map in this format if we want to edit the geometric forms that make up the mind map. We can edit this format with software that deals with vector graphics; an example of such software is Inkscape.

Getting ready
It is time to think about which mind map we want to export! The steps are the same for any of the exports; what differs is the type of file extension we choose.

How to do it...
We have designed several types of mind maps, using different elements. We are now going to choose a mind map to export and a file extension that suits it. The following are the steps to be performed:
1. Open the file that you want to export. Remember to unfold all the nodes, otherwise they are going to be exported as folded.
2. Click on File | Export | As SVG…, as shown in the following screenshot:
3. Write a name for the file and click on Save.

How it works...
We must take into consideration that the software used to open the SVG file should be Inkscape or another similar tool. When we look for the file on our computer and click on it, such software will open and show the file. It is shown in the following screenshot:

Uploading the mind map on Flickr and sharing it
In this recipe we are going to upload our mind map on Flickr in order to share it. We must bear in mind that we have to unfold all its branches at the time of exporting, because we won't be able to unfold a branch once it has been exported in a folded form. So, the first step is to prepare a mind map to be exported.
Another important aspect to consider when uploading a file to Flickr is that it does not accept SVG files; therefore, we have to export our mind map as either PNG or JPEG, taking into account how it is designed. So, let's get ready!

Getting ready

It is time to sign up on Flickr; to do so, go to http://www.flickr.com/. In this recipe, we are going to upload the mind map to Flickr. Why? So that we can get the HTML code to embed it, and the URL to create a link with it. There are plenty of options for a mind map uploaded to this photo-sharing site, and there are several activities that teachers can create using it.

How to do it...

The first step is to export the mind map. We have to choose a mind map with pictures or photographs, and export it as PNG or JPEG. In this recipe we are going to export it as PNG, because the mind map has bitmaps. The following are the steps to perform:

1. Open the file that you are going to work with.
2. Click on File | Export | As PNG....
3. Write a name for the file and click on Save, as shown in the following screenshot:
4. The file is saved, ready to be uploaded to Flickr.
5. Sign in to your Flickr account and personalize your profile. Go to http://www.flickr.com/ and sign in to your account. You can also sign in with your Facebook account if you happen to have one.
6. Click on Upload, as shown in the following screenshot:
7. Click on Choose photos and videos. Search for the mind map that you have just exported as PNG, click on the name of the file, and click on Open.
8. Choose the type of privacy for this file, within the Set Privacy block, as shown in the following screenshot:
9. Click on Upload Photos and Videos and wait for the file to upload.
10. Click on add a description, as shown in the following screenshot:
11. Complete the blocks, as shown in the following screenshot:
12. Click on SAVE.

How it works...

After uploading the mind map to Flickr, we can now click on it and share it in different ways: by grabbing the link, or by copying and pasting the HTML/BBCode, as shown in the following screenshot:

Exporting the mind map as HTML

In this recipe we are going to export the mind map as HTML. For this export, it is important that the map is designed using words rather than images. Furthermore, it is also convenient to create the mind map using different sizes, fonts, and colors, in order to show the hierarchy and the differences after the map is exported.

Getting ready

The mind map has only text, so it is appropriate for exporting as HTML.

How to do it...

When exporting as HTML, we have two options: we can export the mind map folded or unfolded. If we export it folded, we still have the possibility of unfolding it in the exported page; but if we export it unfolded, we do not have the possibility of folding it afterwards. These possibilities are explored in this recipe. The following are the steps to perform:

1. Open the file to export.
2. Fold all the nodes.
3. Click on File | Export | As HTML…, as shown in the following screenshot:
4. The mind map is exported. Plus (+) signs appear next to the nodes that have subnodes, as shown in the following screenshot:
5. Click on the + sign to unfold the nodes, as shown in the following screenshot:

How it works...

When exporting the mind map with its nodes unfolded, the result is different.
The mind map looks the same as in the previous screenshot, but the + or - signs do not appear next to the nodes that contain subnodes. The exported mind map, with its nodes unfolded, appears as shown in the following screenshot:

Exporting the mind map as XHTML

There are three options when exporting our mind maps as XHTML. Two of them are available in the menu, but the result of the third depends on whether the mind map is folded or not. Therefore, before exporting it, we have to analyze the options. The exported map looks similar to the HTML export, but it is more colorful, and the information in the note window appears below the node to which we added it.

Getting ready

We are going to export the same mind map using the three different alternatives, so that we can notice the different results. The mind map to be exported is the one of British monarchs. In the previous recipe we exported the mind map as HTML, so we can also compare the difference with that exportation.

How to do it...

We are going to export the mind map as XHTML (clickable map image version), first with its nodes folded. The following are the steps to perform:

1. Open the file that you are going to work with.
2. Fold all the nodes containing subnodes.
3. Click on File | Export | As XHTML (Clickable map image version)…, as shown in the following screenshot:
4. Enter a name for the file and click on Save.
5. The exported map appears in the default web browser, as shown in the following screenshot:
6. Minimize your default web browser and go back to FreeMind.
7. Unfold all the nodes in the same mind map.
8. Repeat steps 3 and 4. The exported map appears in your default web browser, as shown in the following screenshot:

How it works...

It is time to explore the third option for exporting the mind map as XHTML. This option does not export an image of the map, so it does not matter whether the nodes are folded or not. The information in the different nodes is exported, and we can expand or collapse the nodes by clicking on their upper part. Perform the following steps:

1. Open the mind map you want to export.
2. Click on File | Export | As XHTML (JavaScript version)…, as shown in the following screenshot:
3. Enter a name for the file.
4. Click on Save.
5. The mind map is exported to your default web browser, as shown in the following screenshot:

In any of these cases, save the exported files on your computer.
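To give an idea of how the JavaScript version behaves under the hood, the exported page drives folding with a small script that shows or hides a node's children when you click it. The following is only a conceptual sketch of that behaviour, not the actual code FreeMind generates, and the element IDs are hypothetical:

// Conceptual sketch of fold/unfold toggling in an exported page.
// 'children-node1' is a hypothetical ID for a container holding subnodes.
function toggleFold(containerId) {
  var children = document.getElementById(containerId);
  // Hide the subnodes if they are visible; show them if they are hidden
  children.style.display = (children.style.display === 'none') ? '' : 'none';
}

document.getElementById('node1').onclick = function () {
  toggleFold('children-node1');
};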
Exporting the mind map as Flash

Exporting the mind map as Flash is very interesting, because it maintains the characteristics that we have added to the mind map. Besides, whether it is exported with the nodes folded or not, we can open them by clicking on the nodes. If a node contains information in the note window, we can read it by hovering the mouse over that node.

Getting ready

It is time to see how to export the mind map in a different way. We have to bear in mind that Flash is not available on some operating systems.

How to do it...

We are going to keep on exporting the same mind map, so that the difference between the exportations is noticeable. The following are the steps to perform:

1. Open the file that you are going to work with.
2. Click on File | Export | As Flash…, as shown in the following screenshot:
3. Write a name for the file.
4. Click on Save.

How it works...

The exported mind map appears in the default web browser and looks the same as it does in FreeMind. When clicking on the nodes, they fold or unfold. If a node contains information in the note window, the information appears while hovering the mouse over it, as shown in the following screenshot:

How to secure data in Salesforce Einstein Analytics

Amey Varangaonkar
22 Mar 2018
5 min read

[box type="note" align="" class="" width=""]The following excerpt is taken from the book Learning Einstein Analytics written by Santosh Chitalkar. This book includes techniques to build effective dashboards and Business Intelligence metrics to gain useful insights from data.[/box]

Before getting into security in Einstein Analytics, it is important to set up your organization and define user types so that the platform is ready to use. In this article we explore key aspects of security in Einstein Analytics. The following are key points to consider for data security in Salesforce:

- Salesforce admins can restrict access to data by setting up field-level security and object-level security in Salesforce. These settings prevent a dataflow from loading sensitive Salesforce data into a dataset.
- Dataset owners can restrict data access by using row-level security.
- Analytics supports security predicates, a robust row-level security feature that enables you to model many different types of access control on datasets. For example, a predicate like 'OwnerId' == "$User.Id" (matching each row's owner field against the logged-in user) can restrict rows to their owners.
- Analytics also supports sharing inheritance.

Take a look at the following diagram:

Salesforce data security

In Einstein Analytics, dataflows bring the data to the Analytics Cloud from Salesforce. It is important that Einstein Analytics has all the necessary permissions and access to objects as well as fields. If an object or a field is not accessible to Einstein, the dataflow fails and it cannot extract data from Salesforce. So we need to make sure that the required access is given to the integration user and the security user; we can configure the permission sets for these users. Let's configure permissions for an integration user by performing the following steps:

1. Switch to classic mode and enter Profiles in the Quick Find / Search… box.
2. Select and clone the Analytics Cloud Integration User profile and the Analytics Cloud Security User profile, for the integration user and the security user respectively:
3. Save the cloned profiles and then edit them.
4. Set the permission to Read for all objects and fields.
5. Save the profile and assign it to users.

Take a look at the following diagram:

Data pulled from Salesforce can be made secure from both sides: Salesforce as well as Einstein Analytics. It is important to understand that Salesforce and Einstein Analytics are two independent databases, so a user security setting given in Einstein will not affect the data in Salesforce. The following are the ways to secure data pulled from Salesforce:

Salesforce security:
- Roles and profiles
- Organization-Wide Defaults (OWD) and record ownership
- Sharing rules

Einstein Analytics security:
- Inheritance security
- Security predicates
- Application-level security

Sharing mechanism in Einstein

All Analytics users start off with Viewer access to the default Shared App that's available out of the box; administrators can change this default setting to restrict or extend access. All other applications created by individual users are private, by default; the application owner and administrators have Manager access and can extend access to other users, groups, or roles. The following diagram shows how the sharing mechanism works in Einstein Analytics:

Here's a summary of what users can do with Viewer, Editor, and Manager access:

- View dashboards, lenses, and datasets in the application. If the underlying dataset is in a different application than a lens or dashboard, the user must have access to both applications to view the lens or dashboard. (Viewer, Editor, and Manager)
- See who has access to the application. (Viewer, Editor, and Manager)
- Save contents of the application to another application that the user has Editor or Manager access to. (Viewer, Editor, and Manager)
- Save changes to existing dashboards, lenses, and datasets in the application (saving dashboards requires the appropriate permission set license and permission). (Editor and Manager)
- Change the application's sharing settings. (Manager only)
- Rename the application. (Manager only)
- Delete the application. (Manager only)
Confidentiality, integrity, and availability are together referred to as the CIA triad, a model designed to help organizations decide what security policies to implement. Salesforce knows that keeping information private, and restricting access by unauthorized users, is essential for business. By sharing the application, we can share a lens, dashboard, and dataset all together with one click. To share an entire application, do the following:

1. Go to Einstein Analytics and then to Analytics Studio.
2. Click on the APPS tab and then on the icon for the application that you want to share, as shown in the following screenshot:
3. Click on Share and it will open a new popup window, as shown in the following screenshot:
4. Using this window, you can share the application with an individual user, a group of users, or a particular role. You can define the access level as Viewer, Editor, or Manager.
5. After selecting User, click on the user you wish to add and click on Add.
6. Save and then close the popup.

And that's it. It's done.

Mass-sharing the application

Sometimes, we are required to share the application with a wide audience. There are multiple approaches to mass-sharing a Wave application, such as by role or by username:

1. In the Salesforce classic UI, navigate to Setup | Public Groups | New.
2. For example, to share a sales application, label a public group as Analytics_Sales_Group.
3. Search for and add users to the group by Role, Roles and Subordinates, or by Users (username):
4. Search for the Analytics_Sales public group.
5. Add the Viewer option as shown in the following screenshot:
6. Click on Save.

Protecting data from breaches, theft, or any unauthorized user is very important, and we saw that Einstein Analytics provides the necessary tools to ensure the data is secure.

If you found this excerpt useful and want to know more about securing your analytics in Einstein, make sure to check out the book Learning Einstein Analytics.

Using the Firebase Real-Time Database

Oliver Blumanski
18 Jan 2017
5 min read

In this post, we are going to look at how to use the Firebase real-time database, along with an example. We will write data to the database and read data from it, across multiple platforms. To do this, we first need a server script that adds data, and secondly we need a component that pulls the data from the Firebase database.

Step 1 - Server Script to collect data

Digest an XML feed and transfer the data into the Firebase real-time database. The script runs as a cronjob frequently to refresh the data.

Step 2 - App Component

Subscribe to the data from a JavaScript component, in this case, React-Native.

About Firebase

Now that those two steps are outlined, let's take a step back and talk about Google Firebase. Firebase offers a range of services such as a real-time database, authentication, cloud notifications, storage, and much more. You can find the full feature list here.

Firebase covers three platforms: iOS, Android, and Web. The server script uses Firebase's JavaScript Web API. Having data in this real-time database allows us to query the data from all three platforms (iOS, Android, Web). In addition, the real-time database allows us to subscribe (listen) to a database path (query), or to query a path once.

Step 1 - Digest XML feed and transfer into Firebase

Firebase Set Up

The first thing you need to do is to set up a Google Firebase project here. In the app, click on "Add another App" and choose Web; a pop-up will show you the configuration. You can copy and paste your config into the example script.

Now you need to set the rules for your Firebase database. You should make yourself familiar with the database access rules. In my example, the path latestMarkets/ is open for write and read. In a real-world production app, you would have to secure this, requiring authentication for write permissions. Here are the database rules to get started:

{
  "rules": {
    "users": {
      "$uid": {
        ".read": "$uid === auth.uid",
        ".write": "$uid === auth.uid"
      }
    },
    "latestMarkets": {
      ".read": true,
      ".write": true
    }
  }
}

The Server Script Code

The XML feed contains stock market data and changes frequently, except on the weekend. To build the server script, some npm packages are needed:

- firebase
- request
- xml2json
- babel-preset-es2015

Require the modules and configure the Firebase Web API:

const Firebase = require('firebase');
const request = require('request');
const parser = require('xml2json');

// firebase access config
const config = {
  apiKey: "apikey",
  authDomain: "authdomain",
  databaseURL: "dburl",
  storageBucket: "optional",
  messagingSenderId: "optional"
}

// init firebase
Firebase.initializeApp(config)

I write JavaScript code in ES6; it is much more fun. It is a simple script, so let's have a look at the code that is relevant to Firebase.
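The post doesn't show the feed-fetching glue itself, so here is a minimal sketch of that step. The feed URL, the parsed item layout, and the writeQuote helper are all assumptions for illustration; a real feed's field names will differ:

// Hypothetical glue code: fetch the XML feed and parse it to JSON.
const FEED_URL = 'https://example.com/markets.xml'; // assumed URL

request(FEED_URL, (error, response, body) => {
  if (error) {
    console.error('Feed request failed:', error);
    return;
  }
  // xml2json returns a JSON string by default
  const data = JSON.parse(parser.toJson(body));
  // Assume the feed exposes an array of market quotes
  const quotes = data.markets.quote;
  quotes.forEach((value) => {
    // each 'value' is written to Firebase, as shown next
    writeQuote(value, (result) => console.log(value.Symbol, result));
  });
});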
The code below inserts or overwrites data in the database. For this script, I am happy to overwrite the data:

Firebase.database().ref('latestMarkets/'+value.Symbol).set({
  Symbol: value.Symbol,
  Bid: value.Bid,
  Ask: value.Ask,
  High: value.High,
  Low: value.Low,
  Direction: value.Direction,
  Last: value.Last
})
.then((response) => {
  // callback
  callback(true)
})
.catch((error) => {
  // callback
  callback(error)
})

The Firebase call first references the database path:

Firebase.database().ref('latestMarkets/'+value.Symbol)

And then comes the action you want to perform:

// insert/overwrite (promise)
Firebase.database().ref('latestMarkets/'+value.Symbol).set({}).then((result))

// get data once (promise)
Firebase.database().ref('latestMarkets/'+value.Symbol).once('value').then((snapshot))

// listen to db path, get data on change (callback)
Firebase.database().ref('latestMarkets/'+value.Symbol).on('value', ((snapshot) => {}))

// ......

Here is the GitHub repository:

Displaying the data in a React-Native app

The code below listens to a database path; on data change, all connected devices synchronise the data:

Firebase.database().ref('latestMarkets/').on('value', snapshot => {
  // do something with snapshot.val()
})

To close the listener, or unsubscribe from the path, you can use "off":

Firebase.database().ref('latestMarkets/').off()

I've created an example react-native app to display the data: The GitHub repository

Conclusion

In mobile app development, one big question is: "What database and cache solution can I use to provide online and offline capabilities?" One way to look at this question is as if you were starting a project from scratch. If you can fit your data into Firebase, then it is a great solution for you; additionally, you can use it for both web and mobile apps. The great thing is that you don't need to write a separate API, and you can access the data straight from JavaScript.

On the other hand, if you have a project that uses MySQL, for example, the Firebase real-time database won't help you much; you would need a remote API to connect to your database in that case. But even if the Firebase database isn't a good fit for your project, there are still other features, such as Firebase Storage or Cloud Messaging, which are very easy to use, and even though they are beyond the scope of this post, they are worth checking out.

About the author

Oliver Blumanski is a developer based out of Townsville, Australia. He has been a software developer since 2000, and can be found on GitHub at @blumanski.

Building Your App: Creating Executables for NW.js

Adam Lynch
17 Nov 2015
5 min read

How hard can it be to package up your NW.js app into real executables? To be a true desktop app, it should be a self-contained .exe, .app, or similar. There are a few ways to approach this. Let's start with the simplest approach, with the least amount of code or configuration.

It's possible to run your app by creating a ZIP archive containing your app code, changing the file extension to .nw, and then launching it using the official npm module like this: nw myapp.nw. Let's say you wanted to put your app out there as a download. Anyone looking to use it would have to have nw installed globally too. Unless you're making an app for NW.js users, that's not a great idea.

Use an existing executable

You could substitute one of the official NW.js executables for the nw module. You could download a ZIP from the NW.js site containing an executable (nw.exe for example) and a few other bits and pieces. If you already have the nw module, then if you go to where it's installed on your machine (e.g. /usr/local/lib/node_modules/nw on Mac OS X), the executable can be found in the nwjs directory. If you wanted, you could keep things really simple and leave it at that: just use the official executable to open your .nw archive, i.e. nw.exe myapp.nw.

Merging them

Ideally though, you want as few files as possible. Think of your potential end users; they deserve better. One way to do this is to mash the NW.js executable and your .nw archive together to produce a single executable. This is achieved differently per platform, though; a cross-platform sketch of this step follows below.

On Windows, you need to run copy /b nw.exe+myapp.nw nw.exe on the command line. Now we have a single nw.exe. Even though we now have a single executable, it still requires the DLLs and everything else which comes with the official builds to be in the same directory as the .exe for it to work correctly. You could rename nw.exe to something nicer, but it's not advised, as native modules will not work if the executable isn't named nw.exe. This is expected to be fixed in NW.js 0.13.0, when NW.js will come with an nw.dll (along with nw.exe) which modules will link to instead.

On Linux, the command would be cat path/to/nw myapp.nw > myapp && chmod +x myapp (where nw is the NW.js executable).

Since .app executables are just directories on Mac OS X, you could just copy the official nwjs executable and edit it. Rename your .nw archive to app.nw, put it in the inner Contents/Resources directory, and you're done. Actually, a .nw archive isn't even necessary: you could create a Contents/Resources/app.nw directory and add your raw app files there. Other noteworthy files which you could edit are Contents/Resources/nw.icns, which is your app's icon, and Contents/Info.plist, Apple's app package description file.
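Since both the Windows copy /b and the Linux cat commands are really just binary concatenation, you could also script this step in Node itself. This is only a sketch of that idea; it assumes nw.exe and myapp.nw sit in the working directory:

// Concatenate the NW.js executable and the app archive into one file.
// Works the same way as `copy /b` on Windows or `cat` on Linux.
const fs = require('fs');

const merged = Buffer.concat([
  fs.readFileSync('nw.exe'),   // the official NW.js executable
  fs.readFileSync('myapp.nw')  // your zipped app, renamed to .nw
]);

// Keep the name nw.exe so native modules still work (see above)
fs.writeFileSync('nw.exe', merged);

The DLLs and other support files still need to ship alongside the result, exactly as with the manual commands.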
nw-builder

There are a few downsides to all of that: it's platform-specific, very manual, and very limited. The nw-builder module will handle all of that for you, and more. Either from the command line or programmatically, it makes building executables light work. Once you install it globally by running npm install -g nw-builder, you can run the following command to generate executables:

nwbuild your/app/files/directory -o destination/directory

nw-builder will go and grab the latest NW.js version and generate self-contained executables for you. You can specify a lot of options via flags too: the NW.js version you'd like, which platforms to build for, and so on. Yes, you can build for multiple platforms. By default it builds 32-bit and 64-bit Windows and Mac executables, but Linux 32-bit and 64-bit executables can also be generated. E.g. nwbuild appDirectory -v 0.12.2 -o dest -p linux64.

Note: I am a maintainer of nw-builder. Ignoring my bias, that was surprisingly simple, right?

Using the API

I personally prefer to use it programmatically though. That way I can have a build script which passes all of the options and so on. Let's say you create a simple file called build.js:

var NwBuilder = require('nw-builder');
var nw = new NwBuilder({
  files: './path/to/app/files/**/**' // use the glob format
});

// .build() returns a promise but also supports a plain callback approach as well
nw.build().then(function () {
  console.log('all done!');
}).catch(function (error) {
  console.error(error);
});

Running node build.js will produce your executables. Simples.

Gulp

If you already use Gulp like I do and would like to slot this into your tasks, it's easy. Just use the same nw-builder module:

var gulp = require('gulp');
var NwBuilder = require('nw-builder');

var nw = new NwBuilder({
  files: './path/to/app/files/**/**' // use the glob format
});

gulp.task('default', function () {
  return nw.build();
});

Grunt

Yep, there's a plugin for that; run npm install grunt-nw-builder to get it. Then add something like the following to your Gruntfile:

grunt.initConfig({
  nwjs: {
    options: {},
    src: ['./path/to/app/files/**/*']
  }
});

Then running grunt nwjs will produce your executables. All nw-builder options are available to Grunt users too.

Options

There are a lot of options giving pretty granular control. Aside from the ones I've mentioned already, and the options already available in the app manifest, there are options for controlling the structure and/or compression of inner files, your executables' icons, Mac OS X specific options concerning the plist file, and so on.

Go check out nw-builder for yourself and see how quickly you can package your web app into real executables.

About the Author

Adam Lynch is a TeamworkChat Product Lead & Senior Software Engineer at Teamwork. He can be found on Twitter @lynchy010.

W3C (World Wide Web Consortium) declares WebAssembly 1.0 as an official web standard

Sugandha Lahoti
09 Dec 2019
3 min read

Last Thursday, the World Wide Web Consortium declared WebAssembly 1.0 an official W3C Recommendation. With this announcement, WebAssembly becomes the fourth language to run natively in browsers, following HTML, CSS, and JavaScript.

"The arrival of WebAssembly expands the range of applications that can be achieved by simply using Open Web Platform technologies. In a world where machine learning and Artificial Intelligence become more and more common, it is important to enable high-performance applications on the Web, without compromising the safety of the users," declared Philippe Le Hégaret, W3C Project Lead, in the official press release.

WebAssembly has been the talk of the town for providing a safe, portable, low-level code format designed for efficient execution and compact representation. According to the W3C consortium, WebAssembly enables the Web platform to execute computationally-intensive algorithms more efficiently, which in turn makes it practical to deliver whole new classes of user experience on the Web and elsewhere. Because it is a platform-independent execution environment, it can also be used on any other computer platform.

W3C has published three WebAssembly specifications as W3C Recommendations:

- The WebAssembly Core Specification defines a low-level virtual machine that closely mimics the functionality of the many microprocessors on which it runs.
- The WebAssembly Web API defines a Promise-based interface for requesting and executing a .wasm resource.
- The WebAssembly JavaScript Interface provides a JavaScript API for invoking WebAssembly functions and passing parameters to them.

W3C is also working on a range of features for future versions of the standard. These include:

- Threading: threads provide the benefits of shared-memory multi-threading and atomic memory accesses.
- Fixed-width SIMD: vector operations that execute loops in parallel.
- Reference types: allow WebAssembly code to directly reference host objects.
- Tail calls: enable calling functions without using extra stack space.
- ECMAScript module integration: interact with JavaScript by loading WebAssembly executables as ES6 modules.

There are many other longer-term projects that W3C is working on, many of them aimed at improving the usability and availability of WebAssembly; examples include garbage collection, debugging interfaces, and the WebAssembly System Interface (WASI).

In other news, Mozilla recently partnered with Fastly, Intel, and Red Hat to form the Bytecode Alliance, to build a secure-by-default future for WebAssembly and to take it beyond the browser.
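To make the JavaScript interface mentioned above concrete, here is a minimal sketch of loading a module from a web page and calling into it. The file name add.wasm and its exported add function are hypothetical; any module exporting a two-argument function would behave the same way:

// Fetch, compile, and instantiate a .wasm resource in one step,
// then call one of its exported functions from JavaScript.
WebAssembly.instantiateStreaming(fetch('add.wasm'), {})
  .then(({ instance }) => {
    // 'add' is a hypothetical export: (i32, i32) -> i32
    console.log(instance.exports.add(2, 3)); // 5
  })
  .catch((error) => console.error('Failed to load module:', error));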

pfSense Essentials

Packt
06 Sep 2016
60 min read

In this article by David Zientara, the author of the book Mastering pfSense, we review the process of getting a pfSense system up and running. While high-speed Internet connectivity is becoming more and more common, many in the online world—especially those with residential connections or small office/home office (SOHO) setups—lack the hardware to fully take advantage of those speeds. Fiber optic technology brings with it the promise of gigabit speeds or greater, and the technology surrounding traditional copper networks is also yielding improvements. Yet many people are using consumer-grade routers that offer, at best, mediocre performance.

(For more resources related to this topic, see here.)

pfSense, an open source router/firewall solution, is a far better alternative that is available to you. You have likely already downloaded, installed, and configured pfSense, possibly in a residential or SOHO environment. As an intermediate-level pfSense user, you do not need to be sold on the benefits of pfSense. Nevertheless, you may be looking to deploy pfSense in a different environment (for example, a corporate network), or you may just be looking to enhance your knowledge of pfSense.

This article is designed to review the process of getting your pfSense system up and running. It will guide you through the process of choosing the right hardware for your deployment, but it will not provide a detailed treatment of installation and initial configuration. The emphasis will be on troubleshooting, as well as some of the newer configuration options. Finally, the article will provide a brief treatment of how to upgrade, back up, and restore pfSense. This article will cover the following topics:

- A brief overview of the pfSense project
- pfSense deployment scenarios
- Minimum specifications and hardware sizing guidelines
- An introduction to virtual local area networks (VLANs) and the Domain Name System (DNS)
- The best practices for installation and configuration
- Basic configuration from both the console and the pfSense web GUI
- Upgrading, backing up, and restoring pfSense

pfSense project overview

The origins of pfSense can be traced to the OpenBSD packet filter known as PF, which was incorporated into FreeBSD in 2001. As PF is limited to a command-line interface, several projects have been launched in order to provide a graphical interface for PF. m0n0wall, which was released in 2003, was the earliest attempt at such a project, and pfSense began as a fork of the m0n0wall project.

Version 1.0 of pfSense was released on October 4, 2006. Version 2.0 was released on September 17, 2011. Version 2.1 was released on September 15, 2013, and Version 2.2 was released on January 23, 2015. As of writing this, Version 2.2.6 (released on December 21, 2015) is the latest version. Version 2.3 is expected to be released soon, and will be a watershed release in many respects. The web GUI has had a major facelift, and support for some legacy technologies is being phased out: support for the Point-to-Point Tunneling Protocol (PPTP) will be discontinued, as will support for Wired Equivalent Privacy (WEP).

The current version of pfSense incorporates such functions as traffic shaping, the ability to act as a Virtual Private Network (VPN) client or server, IPv6 support, and, through packages, intrusion detection and prevention, the ability to act as a proxy server, spam and virus blocking, and much more.

Possible deployment scenarios

Once you have decided to add a pfSense system to your network, you need to consider how it is going to be deployed on your network.
pfSense is suitable for a variety of networks, from small to large, and can be employed in a variety of deployment scenarios. In this article, we will cover the following possible uses for pfSense:

- Perimeter firewall
- Router
- Switch
- Wireless router/wireless access point

The most common way to add pfSense to your network is to use it as a perimeter firewall. In this scenario, your Internet connection is connected to one port on the pfSense system, and your local network is connected to another port on the system. The port connected to the Internet is known as the WAN (wide area network) interface, and the port connected to the local network is known as the LAN (local area network) interface.

Diagram showing a deployment in which pfSense is the perimeter firewall.

If pfSense is your perimeter firewall, you may choose to set it up as a dedicated firewall, or you might want to have it perform the double duty of a firewall and a router. You may also choose to have more than two interfaces in your pfSense system (known as optional interfaces). In order to act as a perimeter firewall, however, a pfSense system requires at least two interfaces: a WAN interface (to connect to outside networks) and a LAN interface (to connect to the local network).

In more complex network setups, your pfSense system may have to exchange routing information with other routers on the network. There are two types of protocols for exchanging such information: distance vector protocols obtain their routing information by exchanging information with neighboring routers, while routers using link-state protocols build a map of the network in order to calculate the shortest path to another router, with each router calculating distances independently. pfSense is capable of running both types of protocols. Packages are available for distance vector protocols such as RIP and RIPv2, as well as for Border Gateway Protocol (BGP), a path vector protocol.

Another common deployment scenario is to set up pfSense as a router. In a home or SOHO environment, firewall and router functions are often performed by the same device. In mid-sized to large networks, however, the router is a device separate from the perimeter firewall. In larger networks, which have several network segments, pfSense can be used to connect these segments. In corporate-type environments, VLANs are often used in conjunction with such routing; VLANs allow a single network interface card (NIC) to operate in multiple broadcast domains via 802.1q tagging. VLANs are often used with the ever-popular router on a stick configuration, in which the router has a single physical connection to a switch, with the single Ethernet interface divided into multiple VLANs, and the router forwarding packets between the VLANs. One of the advantages of this setup is that it requires only a single port; as a result, it allows us to use pfSense on systems where adding another NIC would be cumbersome or even impossible: for example, a laptop or certain thin clients.

In most cases where pfSense is deployed as a router on mid-sized and large networks, it would be used to connect different LAN segments; however, it could also be used as a WAN router. In this case, pfSense's function would be to provide a private WAN connection to the end user.

Another possible deployment scenario is to use pfSense as a switch. If you have multiple interfaces on your pfSense system and bridge them together, pfSense can function as a switch.
This is a far less common scenario, however, for several reasons:

- Using pfSense as a switch is generally not cost-effective. You can purchase a 5-port Ethernet switch for less than what it would cost to purchase the hardware for a pfSense system. Buying a commercially available switch will also save you money in the long run, as it will likely consume far less power than whatever computer you would be using to run pfSense.
- Commercially available switches will likely outperform pfSense, as pfSense will process all packets that pass between ports, while a typical Ethernet switch handles this locally with dedicated hardware made specifically for passing data between ports quickly. While you can disable filtering entirely in pfSense if you know what you're doing, you will still be limited by the speed of the bus on which your network cards reside, whether it is PCI, PCI-X, or PCI Express (PCI-e).
- There is also the administrative overhead of using pfSense as a switch. Simple switches are designed to be plug-and-play: setting them up is as easy as plugging in your Ethernet cables and the power cord. Managed switches typically enable you to configure settings at the console and/or through a web interface, but in many cases, configuration is only necessary if you want to modify the operation of the switch. If you use pfSense as a switch, however, some configuration will be required.

If none of this intimidates you, then feel free to use pfSense as a switch. While you're not likely to achieve the performance level or cost savings of a commercially available switch, you will likely learn a great deal about pfSense and networking in the process. Moreover, advances in hardware could make using pfSense as a switch viable at some point in the future; advances in low-power-consumption computers are one factor that could make this possible.

Yet another possibility is using pfSense as a wireless router/access point. A sizable proportion of modern networks incorporate some type of wireless connectivity. Connecting to networks wirelessly is not only easier, but in some cases, running Ethernet cable is not a realistic option. With pfSense, you can add wireless networking capabilities to your system by adding a wireless network card, provided that the network card is supported by FreeBSD.

Generally, however, using pfSense as a wireless router or access point is not the best option. Support for wireless network cards in FreeBSD leaves something to be desired: support for the IEEE's 802.11b and g standards is OK, but support for 802.11n and 802.11ac is not very good. A more likely solution is to buy a wireless router (even if it is one of the aforementioned consumer-grade units), set it up to act solely as an access point, connect it to the LAN port of your pfSense system, and let pfSense act as a Dynamic Host Configuration Protocol (DHCP) server. A typical router will work fine as a dedicated wireless access point, and such routers are more likely to support the latest wireless networking standards than pfSense. Another possibility is to buy a dedicated wireless access point. These are generally inexpensive, and some have such features as multiple SSIDs, which allow you to set up multiple wireless networks (for example, you could have a separate guest network that is completely isolated from other local networks). Using pfSense as a router, in combination with a commercial wireless access point, is likely the least troublesome option.
Hardware requirements and sizing guidelines

Once you have decided where to deploy pfSense on your network, you should have a clearer idea of what your hardware requirements are. As a minimum, you will need a CPU, motherboard, memory (RAM), some form of disk storage, and at least two network interfaces (unless you are opting for a router on a stick setup, in which case you only need one network interface). You may also need one or more optional interfaces.

Minimum specifications

The starting point for our discussion on hardware requirements is the pfSense minimum specifications. As of January 2016, the minimum hardware requirements are as follows (these specifications are from the official pfSense site, pfsense.org):

- CPU – 500 MHz (1 GHz recommended)
- RAM – 256 MB (1 GB recommended)

There are two architectures currently supported by pfSense: i386 (32-bit) and amd64 (64-bit). Three separate images are provided for these architectures: CD, CD on a USB memstick, and embedded. There is also an image for the Netgate RCC-VE 2440 system. A pfSense installation requires at least 1 GB of disk space. If you are installing to an embedded device, you can access the console through either a serial or a VGA port.

A step-by-step installation guide for the pfSense Live CD can be found on the official pfSense website at: https://doc.pfsense.org/index.php/PfSense_IO_installation_step_by_step.

Version 2.3 eliminated the Live CD, which allowed you to try out pfSense without installing it onto other media. If you really want to use the Live CD, however, you can use a pre-2.3 image (version 2.2.6 or earlier); you can always upgrade to the latest version of pfSense after installation.

Installation onto either a hard disk drive (HDD) or a solid-state drive (SSD) is the most common option for a full install of pfSense, whereas embedded installs typically use CF, SD, or USB media. A full install of the current version of pfSense will fit onto a 1 GB drive, but will leave little room for the installation of packages or for log files. Any activity that requires caching, such as running a proxy server, will also require additional disk space.

The final installation option is installation onto an embedded system. For the embedded version, pfSense uses NanoBSD, a tool for installing FreeBSD onto embedded systems. Such an install is ideal for a dedicated appliance (for example, a VPN server) and is geared toward fewer file writes. However, embedded installs cannot run some of the more interesting packages.

Hardware sizing guidelines

The minimum hardware requirements are general guidelines, and you may want to exceed these minimums based on different factors. It is useful to consider these factors when determining what CPU, memory, and storage device to use. For the CPU, requirements increase with faster Internet connections. Guidelines for the CPU and network cards can be found at the official pfSense site at http://pfsense.org/hardware/#requirements. The following general guidelines apply:

- The minimum hardware specifications (Intel/AMD CPU of 500 MHz or greater) are valid up to 20 Mbps.
- CPU requirements begin to increase at speeds greater than 20 Mbps.
- Connections of 100 Mbps or faster will require PCI-e network adapters to keep up with the increased network throughput.

If you intend to use pfSense to bridge interfaces—for example, if you want to bridge a wireless and wired network, or if you want to use pfSense as a switch—then the PCI bus speed should be considered. The PCI bus can easily become a bottleneck.
Therefore, in such scenarios, using PCI-e hardware is the better option, as it offers up to 31.51 GB/s (for PCI-e v4.0 on a 16-lane slot) versus 533 MB/s for the fastest conventional PCI buses.

If you plan on using pfSense as a VPN server, then you should take into account the effect VPN usage will have on the CPU. Each VPN connection requires the CPU to encrypt traffic, and the more connections there are, the more the CPU will be taxed. Generally, the most cost-effective solution is to use a more powerful CPU, but there are ways to reduce the CPU load from VPN traffic. Soekris has the vpn14x1 product range; these cards offload from the CPU the computationally intensive tasks of encryption and compression. AES-NI acceleration of IPsec also significantly reduces the CPU requirements.

If you have hundreds of simultaneous captive portal users, you will require slightly more CPU power than you would otherwise. Captive portal usage does not put as much of a load on the CPU as VPN usage, but if you anticipate having a lot of captive portal users, you will want to take this into consideration.

If you're not a power user, 256 MB of RAM might be enough for your pfSense system. This, however, would leave little room for the state table (where, as mentioned earlier, active connections are tracked). Each state requires about 1 KB of memory, which is less memory than some consumer-grade routers require, but you still want to be mindful of RAM if you anticipate having a lot of simultaneous connections. The other components of pfSense require 32 to 48 MB of RAM, and possibly more, depending on which features you are using, so you have to subtract that from the available memory when calculating the maximum state table size.

RAM       Maximum Connections (States)
256 MB    ~22,000 connections
512 MB    ~46,000 connections
1 GB      ~93,000 connections
2 GB      ~190,000 connections
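The figures in this table are broadly consistent with pfSense's default behaviour of sizing the state table at roughly 10% of system RAM, at about 1 KB per state; treat the 10% figure as an assumption to verify against your version's documentation rather than a hard rule. A back-of-the-envelope sketch reproduces the order of magnitude:

// Rough estimate of a default maximum state table size.
// Assumes ~10% of RAM is budgeted for states, at ~1 KB per state.
function defaultMaxStates(ramMB) {
  const stateKB = 1;                         // ~1 KB per state entry
  const stateBudgetKB = ramMB * 1024 * 0.10; // ~10% of RAM, in KB
  return Math.floor(stateBudgetKB / stateKB);
}

console.log(defaultMaxStates(256));  // ~26,000, close to the table's ~22,000
console.log(defaultMaxStates(1024)); // ~104,000, close to the table's ~93,000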
Installing packages can also increase your RAM requirements; Snort and ntop are two such examples. You should also probably not install packages if you have limited disk space. Some packages are more taxing on storage than others: proxies such as Squid store web pages, and anti-spam programs such as pfBlocker download lists of blocked IP addresses, and therefore require additional disk space. The amount of disk space, as well as the form of storage you utilize, will likely be dictated by what packages you install and what forms of logging you have enabled. Proxies also tend to perform a great deal of read and write operations; therefore, if you are going to install a proxy, disk I/O performance is something you should take into consideration.

You may be tempted to opt for the cheapest NICs. However, inexpensive NICs often have complex drivers that offload most of the processing to the CPU. They can saturate your CPU with interrupt handling, thus causing missed packets. Cheaper network cards typically have smaller buffers (often no more than 300 KB), and when the buffers become full, packets are dropped. In addition, many of them do not support Ethernet frames larger than the maximum transmission unit (MTU) of 1500 bytes. NICs that do not support larger frames cannot send or receive jumbo frames (frames with an MTU larger than 1500 bytes), and therefore cannot take advantage of the performance improvement that using jumbo frames would bring. Such NICs will often also have problems with VLAN traffic, since a VLAN tag increases the size of the Ethernet header beyond the traditional size limit.

The pfSense project recommends NICs based on Intel chipsets, and there are several reasons why such NICs are considered reliable. They tend to have adequately sized buffers and do not have problems processing larger frames. Moreover, the drivers tend to be well written and work well with Unix-based operating systems.

For a typical pfSense setup, you will need two network interfaces: one for the WAN and one for the LAN. Each additional subnet (for example, for a guest network) will require an additional interface, as will each additional WAN connection. It should be noted that you don't need an additional card for each interface added; you can buy a multiport network card (most such cards have either 2 or 4 ports). You don't need to buy new NICs for your pfSense system either; in fact, it is often economical to buy used NICs, and except in rare cases, the performance level will be the same.

If you want to incorporate wireless connectivity into your network, you may consider adding a wireless card to your pfSense system. As mentioned earlier, however, the likely better option is to use pfSense in conjunction with a separate wireless access point. If you do decide to add a wireless card to your system and configure it for use as an access point, you will want to check the FreeBSD hardware compatibility list before making a purchase.
You can, however, connect an unmanaged switch to the managed switch to add ports. Keep in mind that managed switches are expensive (more expensive than dual and quad port network cards), and if there are multiple VLANs on a single link, this link can easily become overloaded. In scenarios where you can add a network card, this is usually the better option. If you have an existing laptop, however, a managed switch with VLANs is a workable solution. Introduction to VLANs and DNS Two of the areas in which pfSense excels is in incorporating functionality to implement VLANs and DNS servers. First, let's consider why we would want to implement these. Introduction to VLANs The standard way to partition your network is to use a router to pass traffic between networks, and configure a separate switch (or switches) for each network. In this scenario, there is a one-to-one relationship between the number of network interfaces and the number of physical ports. This works well in many network deployments, especially in small networks. As the network gets larger, however, there are issues with this type of configuration. As the number of users on the network increases, we are faced with a choice of either having more users on each subnet, or increasing the number of subnets (and therefore the number of network interfaces on the router). Both solutions also create new problems: Each subnet makes up a separate broadcast domain. Increasing the number of users on a subnet increases the amount of broadcast traffic, which can bog down our network. Each user on a subnet can use a packet sniffer to sniff network traffic, which creates a security problem. Segmenting the network by adding subnets tends to be costly, as each new subnet requires a separate switch. VLANs offer us a way out of this dilemma with relatively little downside. VLANs allow us to divide traffic on a single network interface (for example, LAN) into several separate networks, by adding a special tag to frames entering the network. This tag, known as an 802.1q tag, identifies which VLAN to which the device belongs. Dividing network traffic in such a way offers several advantages: As each VLAN constitutes a separate broadcast domain, broadcast domains are now smaller, and thus there is less network traffic. Users on one VLAN cannot sniff traffic from another VLAN, even if they are on the same physical interface, thus improving security. Using VLANs requires us to have a managed switch on the interface on which VLANs exist. This is somewhat more expensive than an unmanaged switch, but the cost differential between a managed and unmanaged switch is less than it might be if we had to buy additional switches for new subnets. As a result, VLANs are often the most efficient way of making our networks more scalable. Even if your network is small, it might be advantageous to at least consider implementing a VLAN, as you will likely want to make future growth as seamless as possible. Introduction to DNS The DNS provides a means of converting an easy-to-remember domain name with a numerical (IP) address. It thus provides us with a phone book for the Internet as well as providing a structure that is both hierarchical (there is the root node, which covers all domain names, top-level domains like .com and .net, domain names and subdomain names) and decentralized (the Internet is divided into different zones, and a name server is authoritative for a specific zone). In a home or SOHO environment, we might not need to implement our own DNS server. 
In these scenarios, we could use our ISP's DNS servers to resolve Internet hostnames. For local hostnames, we could rely on NetBIOS under Windows, the Berkeley Internet Name Domain service (BIND) under Linux (using a configuration that does not require us to run name servers), or osx under Mac OS X. Another option for mapping hostnames to IP addresses on the local network would be to use HOSTS.TXT. This is a text file, which contains a list of hostnames and corresponding IP addresses. But there are certain factors that may prompt us to set up our own DNS server for our networks: We may have chosen to utilize HOSTS.TXT for name resolution, but maintaining the HOSTS.TXT file on each of the hosts on our network may prove to be too difficult. If we have roaming clients, it may even be impossible. If your network is hosting resources that are available externally (for example, an FTP server or a website), and you are constantly making changes to the IP addresses of these resources, you will likely find it much easier to update your own data rather than submit forms to third parties and wait for them to implement the changes. Although your DNS server will only be authoritative for your domains, it can cache DNS data from the rest of the Internet. On your local network, this cached data can be retrieved much faster than DNS data from a remote DNS server. Thus, maintaining your own DNS server should result in faster name resolution. If you anticipate ever having to implement a public DNS server, a private DNS server can be a good learning experience, and if you make mistakes in implementing a private DNS server, the consequences are not as far-reaching as they would be with a public one. Implementing a DNS server with pfSense is relatively easy. By using the DNS resolver, we can have pfSense answer DNS queries from local clients, and we can also have pfSense utilize any currently available DNS servers. We can also use third-party packages such as dns-server (which is a pfSense version of TinyDNS) to add DNS server functionality. The best practices for installation and configuration Once you have chosen your hardware and which version you are going to install, you can download pfSense. Browse to the Downloads section of pfsense.org and select the appropriate computer architecture (32-bit, 64-bit, or Netgate ADI), the appropriate platform (Live CD, memstick, or embedded), and you should be presented with a list of mirrors. Choose the closest one for the best performance. You will also want to download the MD5 checksum file in order to verify the integrity of the downloaded image. Windows has several utilities for displaying MD5 hashes for a file. Under BSD and Linux, generating the MD5 hash is as easy as typing the following command: md5 pfSense-LiveCD-2.2.6-RELEASE-amd64.iso This command would generate the MD5 checksum for the 64-bit Live CD version for pfSense 2.2.6. Compare the resulting hash with the contents of the .md5 file downloaded from the pfSense website. If you are doing a full install from the Live CD or memory stick, then you just need to write the ISO to the target media, boot from either the CD or memory stick, perform some basic configuration, and then invoke the installer. The embedded install is done from a compact flash (CF) card and console data can be sent to either a serial port or the VGA port, depending on which embedded configuration you chose. If you use the serial port version, you will need to connect the embedded system to another computer with a null modem cable. 
Troubleshooting installation In most cases, you should be able to invoke the pfSense installer and begin installing pfSense onto the system. In some cases, however, pfSense may not boot from the target media, or the system may hang during the boot process. If pfSense is not booting at all, you may want to check to make sure the system is set up to boot from the target media. This can be done by changing the boot sequence in the BIOS settings (which can be accessed during system boot, usually by hitting the Delete key). Most computers also have a means of choosing the boot device on a one-time basis during the boot sequence. Check your motherboard's manual on how to do this. If the system is already set up to boot from the target media, then you may want to verify the integrity of the pfSense image again, or repeat the process of writing the images to the target media. The initial pfSense boot menu when booting from a CD or USB flash drive. If the system hangs during the boot process, there are several options you can try. The first menu that appears, as pfSense boots, has several options. The last two options are Kernel and Configure Boot Options. Kernel allows you to select which kernel to boot from among the available kernels. If you have a reason to suspect that the FreeBSD kernel being used is not compatible with your hardware, you might want to switch to the older version. Configure Boot Options launches a menu (shown in the preceding screenshot) with several useful options. A description of these options can be found at: http://www.freebsd.org/doc/handbook/book.html. Toggling [A]CPI Support to off can help in some cases, as ACPI's hardware discovery and configuration capabilities may cause the pfSense boot process to hang. If turning this off doesn't work, you could try booting in Safe [M]ode, and if all else fails, you can toggle [V]erbose mode to On, which will give you detailed messages while booting. The two options after boot are [R]ecovery, and [I]nstaller. The [R]ecovery mode provides a shell prompt and helps recover from a crash by retrieving config.xml from a crashed hard drive. [I]nstaller allows you to install pfSense onto a hard drive or other media, and gets invoked by default after the timeout period. The installer provides you with the option to either do a quick install or a custom install. In most cases, the quick install option can be used. Invoking the custom install option is only recommended if you want to install pfSense on a drive other than the first drive on the target system, or if you want to install multiple operating systems on the system. It is not likely that either of these situations will apply, unless you are installing pfSense for evaluation purposes (and in such cases, you would probably have an easier time running pfSense on a virtual machine). If you were unable to install pfSense on to the target media, you may have to troubleshoot your system and/or installation media. If you are attempting to install from the CD, your optical drive may be malfunctioning, or the CD may be faulty. You may want to start with a known good bootable disc and see if the system will boot off of it. If it can, then your pfSense disc may be at fault; burning the disc again may solve the problem. If, however, your system cannot boot off the known good disc, then the optical drive itself, or the cables connecting the optical drive to the motherboard, may be at fault. 
In some cases, however, none of the aforementioned possibilities holds true, and it is possible that the FreeBSD boot loader simply will not work on the target system. If so, you could opt to install pfSense on a different system. Another possibility is to install pfSense onto a hard drive on a separate system, and then transfer the hard drive into the target system. To do this, go through the installation process on another system as you normally would, until you get to the Assign Interfaces prompt. When the installer asks if you want to assign VLANs, type n. Type exit at the Assign Interfaces prompt to skip the interface assignment. Proceed through the rest of the installation; then power down the system and transfer the hard drive to the target system. Assuming that the pfSense hard drive is in the boot sequence, the system should boot pfSense and detect the system's hardware correctly. You should then be able to assign network interfaces, and the rest of the configuration can proceed as usual.

If you have not encountered any of these problems, the software should now be installed on the target system, and you should see a dialog box telling you to remove the CD from the optical drive tray and press Enter. The system will reboot, and you will be booting into your new pfSense install for the first time.

pfSense configuration

Configuration takes place in two phases. Some configuration must be done at the console, including interface configuration and interface IP address assignment. Other configuration steps, such as VLAN and DHCP setup, can be done either at the console or within the web GUI.

Configuration from the console

On boot, you should eventually see a menu identical to the one seen on the CD version, with the boot multi or single user options and other options. After a timeout period, the boot process will continue and you will get an options menu that is also identical to the CD version, except that option 99 (the installation option) will not be there.

Select 1 from the menu to begin interface assignment. This is where the network cards installed in the system are given their roles as WAN, LAN, and optional interfaces (OPT1, OPT2, and so on). You will be presented with a list of network interfaces that provides four pieces of information:

- pfSense's device name for the interface (fxp0, em1, and so on)
- The MAC address of the interface
- The link state of the interface (up if a link is detected; down otherwise)
- The manufacturer and model of the interface (Intel PRO/1000, for example)

(An illustrative listing appears at the end of this section.) As you are probably aware, no two network cards should share the same MAC address, so each of the interfaces in your system should have a unique MAC address.

After you press 1 and Enter to begin interface assignment, you will first be prompted about VLAN configuration; if you want to set up VLANs now, type y, otherwise type n and press Enter. Keep in mind that you can always configure VLANs later on.

The interfaces must then be configured, and you will be prompted for the WAN interface first. If you only configure one interface, it will be assigned to the WAN, and you will initially be able to log in to pfSense through this port. This is not what you would normally want, as the WAN port is typically accessible from the other side of the firewall; once at least one other interface is configured, you will no longer be able to log in to pfSense from the WAN port. Unless you are using VLANs, you will have to set up at least two network interfaces.
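For illustration only (the device names and MAC addresses below are invented), the list printed at the Assign Interfaces prompt resembles the following:

em0    00:0c:29:12:34:56   (up)    Intel(R) PRO/1000
fxp0   00:0c:29:ab:cd:ef   (down)  Intel(R) PRO/100

Here, em0 has a cable plugged in (link state up), which is exactly the property that the automatic interface assignment feature described below relies on.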
In pfSense, network interfaces are assigned rather cryptic device names (for example, fxp0, em1, and so on), and it is not always easy to know which physical ports correspond to which device names. One way of solving this problem is to use the automatic interface assignment feature. To use it, unplug all network cables from the system, then type a and press Enter to begin auto-detection. The WAN interface is the first interface to be detected, so plug a cable into the port you intend to be the WAN interface. The process is repeated with each successive interface: the LAN interface is configured next, then each of the optional interfaces (OPT1, OPT2, and so on). If auto-detection does not work, or you do not want to use it, you can always choose manual configuration. You can also reassign network interfaces later on, so even if you make a mistake at this step, it is easily fixed. Once you have finished, type y at the Do you want to proceed? prompt, or type n and press Enter to re-assign the interfaces.

Option 2 on the menu is Set interface(s) IP address, and you will likely want to complete this step as well. When you invoke this option, you will be prompted to specify which interface's IP address is to be set. If you select the WAN interface, you will be asked if you want to configure the IP address via DHCP. In most scenarios, this is probably the option you want, especially if pfSense is acting as a firewall; in that case, the WAN interface will receive an IP address from your ISP's DHCP server. For all other interfaces (or if you choose not to use DHCP on the WAN interface), you will be prompted to enter the interface's IPv4 address. The next prompt will ask you for the subnet bit count. In most cases, you'll want to enter 8 if you are using a Class A private address, 16 for Class B, and 24 for Class C, but if you are using classless subnetting, you will want to set the bit count accordingly; for example, dividing a Class C network into two separate networks gives each half a 25-bit netmask, so you would enter 25. You will also be prompted for the IPv4 gateway address (any interface with a gateway set is a WAN, and pfSense supports multiple WANs); if you are not configuring a WAN interface, you can just press Enter here. Next, you will be prompted to provide the address, subnet bit count, and gateway address for IPv6; if you want your network to fully utilize IPv6 addresses, you should enter them here.
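As a rough sketch of what this console dialog looks like (the exact prompt wording varies between pfSense versions, and the addresses are only examples), assigning 192.168.1.1/24 to the LAN interface proceeds along these lines:

Enter the new LAN IPv4 address. Press <ENTER> for none:
> 192.168.1.1
Enter the new LAN IPv4 subnet bit count (1 to 31):
> 24
For a WAN, enter the new LAN IPv4 upstream gateway address.
For a LAN, press <ENTER> for none:
>

Because this is a LAN interface, the gateway prompt is left empty; as noted above, an interface with a gateway set is treated as a WAN.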
We have now configured as much as we need to from the console (actually, we have done more than we have to, since only the WAN interface really must be configured from the console). The remainder of the configuration can be done from the pfSense web GUI.

Configuration from the web GUI

The pfSense web GUI can only be accessed from another PC. If the WAN was the only interface assigned during the initial setup, then you will be able to access pfSense through the WAN IP address. Once one of the local interfaces is configured (typically the LAN interface), pfSense can no longer be accessed through the WAN interface; you will, however, be able to access it from the local side of the firewall (typically through the LAN interface). In either case, you can access the web GUI by connecting another computer to the pfSense system, either directly (with a crossover cable) or indirectly (through a switch), and then typing either the WAN or LAN IP address into the connected computer's web browser. The login screen should look similar to the following screenshot:

The pfSense 2.3 web GUI login screen.

When you initially log in to pfSense, the default username/password combination is admin/pfsense. On your first login, the Setup Wizard will begin automatically; click on the Next button to begin configuration. The first screen provides a link for information about a pfSense Gold subscription. You can click on the link to sign up, or click on the Next button.

On the next screen, you will be prompted to enter the hostname of the router as well as the domain. Hostnames can contain letters, numbers, and hyphens, but must begin with a letter. If you have a domain, you can enter it in the appropriate field. In the Primary DNS Server and Secondary DNS Server fields, you can enter your DNS servers. If you are using DHCP for your WAN, you can probably leave these fields blank, as they will usually be assigned automatically by your ISP. If you have alternate DNS servers you wish to use, you can enter them here; I have entered 8.8.8.8 and 8.8.4.4 as the primary and secondary DNS servers (these are two DNS servers run by Google that conveniently have easy-to-remember IP addresses). You can keep the Override DNS checkbox checked unless you have reason to use DNS servers other than the ones assigned by your ISP. Click on Next when finished.

The next screen will prompt you for the Network Time Protocol (NTP) server as well as the local time zone. You can keep the default value for the server hostname for now. For the Timezone field, select the zone which matches your location and click on Next.

The next screen of the wizard is the WAN configuration page. You will be prompted to select the WAN type: DHCP (the default), Static, PPPoE, or PPTP (PPPoE stands for Point-to-Point Protocol over Ethernet, and PPTP stands for Point-to-Point Tunneling Protocol). If your pfSense system is behind another firewall and it is not going to receive an IP address from an upstream DHCP server, then you should probably choose Static. If pfSense is going to be a perimeter firewall, however, then DHCP is likely the correct setting, since your ISP will probably assign an IP address dynamically (this is not always the case, as you may have an IP address statically assigned to you by your ISP, but it is the more likely scenario). If you are not sure which WAN type to use, you will need to obtain this information from your ISP.

The MAC address field allows you to enter a MAC address that is different from the actual MAC address of the WAN interface. This can be useful if your ISP will not recognize an interface with a different MAC address than the device that was previously connected, or if you want to acquire a different IP address (changing the MAC address will cause the upstream DHCP server to assign a different address). If you use this option, make sure the portion of the address reserved for the Organizationally Unique Identifier (OUI) is a valid OUI, in other words, an OUI assigned to a network card manufacturer. (The OUI portion is the first three bytes of a MAC-48 or EUI-48 address.)

The next few fields can usually be left blank. Maximum Transmission Unit (MTU) allows you to change the MTU size if necessary. DHCP hostname allows you to send a hostname to your ISP when making a DHCP request, which is useful if your ISP requires this.
If you choose PPPoE, there will be fields for a PPPoE Username, PPPoE Password, and PPPoE Service Name. The PPPoE dial-on-demand checkbox allows you to connect to your ISP only when a user requests data that requires an Internet connection, and PPPoE Idle timeout specifies how long the connection will be kept open after transmitting data when this option is invoked.

The Block RFC1918 Private Networks checkbox, if checked, will block private networks (as defined by RFC 1918) from connecting to the WAN interface. The Block Bogon Networks option blocks traffic from reserved and/or unassigned IP addresses. For the WAN interface, you should check both options unless you have a special reason not to. Click the Next button when you are done.

The next screen provides fields in which you can change the LAN IP address and subnet mask, but only if you configured the LAN interface previously. You can keep the default, or change it to another value within the private address blocks. You may want to choose an address range other than the very common 192.168.1.x in order to avoid a conflict. Be aware that if you change the LAN IP address, you will also need to adjust your PC's IP address, or release and renew its DHCP lease, when you are finished; you will also have to change the pfSense IP address in your browser to reflect the change.

The final screen of the pfSense Setup Wizard allows you to change the admin password, which you should probably do. Enter the password, enter it again for confirmation, and click on Next. On the following screen, click on the Reload button; this will reload pfSense with the new changes. Once you have completed the wizard, you should have network connectivity. Although there are other means of making changes to pfSense's configuration, if you want to repeat the wizard, you can do so by navigating to System | Setup Wizard. Completion of the wizard will take you to the pfSense dashboard.

The pfSense dashboard, redesigned for version 2.3.

Configuring additional interfaces

By now, both the WAN and LAN interface configuration should be complete. Although additional interface configuration can be done at the console, it can also be done in the web GUI. To add optional interfaces, navigate to Interfaces | (assign). The Interface Assignments tab will show a list of assigned interfaces, and at the bottom of the table there will be an Available network ports option with a corresponding drop-down box listing the unassigned network ports. These will have device names such as fxp0, em1, and so on. To assign an unused port, select it from the drop-down box and click on the + button to the right. The page will reload, and the new interface will be the last entry in the table. The name of the interface will be OPTx, where x is the number of the optional interface.

By clicking on the interface name, you can configure the interface. Nearly all the settings here are similar to those on the WAN and LAN configuration pages in the pfSense Setup Wizard. Some of the options in the General Configuration section that are not available in the setup wizard are MSS (Maximum Segment Size) and Speed and Duplex. Normally, MSS should remain unchanged, although you can change this setting if your Internet connection requires it (for a typical TCP/IPv4 connection, the MSS is the MTU minus 40 bytes of IP and TCP headers).
If you click on the Advanced button under Speed and Duplex, a drop-down box will appear in which you can explicitly set the speed and duplex for the interface. Since virtually all modern network hardware can automatically negotiate the correct speed and duplex, you will probably want to leave this unchanged.

If you have selected DHCP as the configuration type, there are several options in addition to the ones available in the setup wizard. Alias IPv4 address lets you specify a fixed alias IP address to be used by the DHCP client. The Reject leases from field allows you to specify the IP address or subnet of an upstream DHCP server whose leases should be ignored.

Checking the Advanced checkbox in the DHCP client configuration causes several additional options to appear in this section of the page. The first is Protocol Timing, which allows you to control DHCP protocol timings when requesting a lease; you can also choose several presets (FreeBSD, pfSense, Clear, or Saved Cfg) using the radio buttons on the right. The next option in this section is Lease Requirements and Requests. Here you can specify send, request, and require options when requesting a DHCP lease, which is useful if your ISP requires them. The last section is Option Modifiers, where you can add DHCP option modifiers, which are applied to an obtained DHCP lease. There is a second checkbox at the top of this section called Config File Override; checking it allows you to supply your own DHCP client configuration file. If you use this option, you must specify the full absolute path of the file.

Starting with pfSense version 2.2.5, there is support for IPv6 with DHCP (DHCP6). If you are running 2.2.5 or above, there will be a section on the page called DHCP6 client configuration. The first setting is Use IPv4 connectivity as parent interface, which allows you to request an IPv6 address over IPv4. The second is Request only an IPv6 prefix. This is useful if your ISP supports Stateless Address Autoconfiguration (SLAAC): instead of the usual procedure in which the DHCP server assigns an IP address to the client, the server only sends a prefix, and the host generates its own IP address and tests the uniqueness of the generated address in the intended addressing scope. By default, the requested IPv6 prefix is 64 bits, but you can change that by altering the DHCPv6 Prefix Delegation size in the corresponding drop-down box; for example, requesting a /56 delegation instead of a single /64 gives you 256 /64 networks to assign to local interfaces. The last setting is Send IPv6 prefix hint, which allows you to request the specified prefix size from your ISP.

The advanced DHCP6 client configuration section of the interface configuration page. This section appears if DHCP6 is selected as the IPv6 configuration type.

Checking the Advanced checkbox in the heading of this section displays the advanced DHCP6 options. If you check the Information Only checkbox on the left, pfSense will send requests for stateless DHCPv6 information. You can specify send and request options, just as you can for IPv4. There is also a Script field where you can enter the absolute path to a script that will be invoked on certain conditions. The next options are the Identity Association Statement checkboxes: the Non-Temporary Address Allocation checkbox causes normal (that is, not temporary) IPv6 addresses to be allocated for the interface, while the Prefix Delegation checkbox causes a set of IPv6 prefixes to be allocated from the DHCP server. The next set of options, Authentication Statement, allows you to specify authentication parameters to the DHCP server.
The Authname parameter allows you to specify a string, which in turn specifies a set of parameters. The remaining parameters are of limited usefulness in configuring a DHCP6 client, because each has only one allowed value, and leaving them blank will result in the allowed value being used anyway. If you are curious, here they are:

Parameter   Allowed value                               Description
Protocol    delayed                                     The DHCPv6 delayed authentication protocol
Algorithm   hmac-md5, HMAC-MD5, hmacmd5, or HMACMD5     The HMAC-MD5 authentication algorithm
RDM         monocounter                                 The replay protection method; only monocounter is available

Finally, the Key info Statement allows you to enter a secret key. The required fields are key id, which identifies the key, and secret, which provides the shared secret. key name and realm are arbitrary strings and may be omitted. expire may be used to specify an expiration time for the key; if it is omitted, the key will never expire.

The last section on the page is identical to the interface configuration page in the Setup Wizard, and contains the Block Private Networks and Block Bogon Networks checkboxes. Normally, these are checked for WAN interfaces, but not for other interfaces.

General setup options

You can find several configuration options under System | General Setup. Most of these are identical to settings that can be configured in the Setup Wizard (Hostname, Domain, DNS servers, Timezone, and NTP server). Two additional settings are available here. The Language drop-down box allows you to select the web configurator language, and under the Web Configurator section, there is a Theme drop-down box that allows you to select the theme. The default theme of pfSense is perfectly adequate, but you can select another one here.

pfSense 2.3 also adds new options to control the look and feel of the web interface; these settings are likewise found in the Web Configurator section of the General Setup page. The Top Navigation drop-down box allows you to choose whether the top navigation scrolls with the page or remains anchored at the top as you scroll. The Dashboard Columns option allows you to select the number of columns on the dashboard page (the default is 2).

The next set of options is Associated Panels Show/Hide. These options control the appearance of certain panels on the Dashboard and System Logs pages:

- Available Widgets: Checking this box causes the Available Widgets panel to appear on the Dashboard. Prior to version 2.3, the Available Widgets panel was always visible on the Dashboard.
- Log Filter: Checking this box causes the Advanced Log Filter panel to appear on the System Logs page. Advanced Log Filter allows you to filter the system logs by time, process, PID, and message.
- Manage Log: Checking this box causes the Manage General Log panel to appear on the System Logs page. This panel allows you to control the display of the logs, how big the log file may be, and the formatting of the log file, among other things.

The last option on this page, Left Column Labels, if checked, allows you to select or toggle the first item of a group by clicking on its left-column label. Click on Save at the bottom of the page to save any changes.

Advanced setup options

Under System | Advanced, there are a number of options that you will probably want to configure before completing the initial setup. There are six separate tabs here, all with multiple options; we won't cover all of them, but we will cover the more common ones.
The first setting allows you to choose between HTTP and HTTPS for the web configurator. If you plan on making the pfSense web GUI accessible from the WAN side, you will definitely want to choose HTTPS in order to encrypt access to the web GUI, and even if the web GUI will only be accessible over local networks, you should probably still choose HTTPS. Modern web browsers will complain about the SSL certificate the first time you access the web GUI, but most of them will allow you to create an exception.

The next setting, SSL certificate, allows you to choose a certificate from a drop-down list of available certificates. You can choose webConfigurator default, or you can add another certificate (by navigating to System | Cert Manager and adding one) and use it instead.

The next important setting, also in the Web Configurator section, is the Disable webConfigurator anti-lockout rule checkbox. If left unchecked, access to the web GUI is always allowed on the LAN (or on the WAN, if the LAN interface has not been assigned), regardless of any user-defined firewall rules. If you check this option and you don't have a user-defined rule allowing access to pfSense, you will lock yourself out of the web GUI.

If you are locked out of the web GUI because of firewall rules, there are several options. The easiest is probably to restore a previous configuration from the console. You can also reset pfSense to factory defaults, but if you don't mind typing shell commands, there are less drastic options. One is to add an allow-all rule on the WAN interface by typing the following command at the console shell prompt (type 8 at the console menu to invoke the shell):

pfSsh.php playback enableallowallwan

Once you issue this command, you will be able to access the web GUI through the WAN interface. To do so, either connect the WAN port to a network running DHCP (if the WAN uses DHCP), or connect the WAN port to another computer with an IP on the same network (if the WAN has a static IP). Be sure to delete the WAN allow-all rule before deploying the system. Another possibility is to temporarily disable the firewall rules with the following shell command:

pfctl -d

Once you have regained access, you can re-enable the firewall rules with this command:

pfctl -e

In any case, you want to make sure your firewall rules are configured correctly before invoking the anti-lockout option. You can reset pfSense to factory defaults by selecting 4 from the console menu. If you need to go back to a previous configuration, you can do so by selecting 15 from the console menu; this option will allow you to select from automatically saved restore points.

The next section is Secure Shell; checking the Enable Secure Shell checkbox makes the console accessible via a Secure Shell (SSH) connection. This makes life easier for admins, but it also creates a security concern. It is therefore a good idea to change the default SSH port (22), which you can do in this section. You can add another layer of security by checking the Disable password login for Secure Shell checkbox. If you invoke this option, you must create authorized SSH keys for each user that requires SSH access. The process for generating SSH keys differs depending on your OS. Under Linux, it is fairly simple. First, enter the following at the command prompt:

ssh-keygen -t rsa

You will receive the following prompt:

Enter file in which to save the key (/home/user/.ssh/id_rsa):

The directory in parentheses will be a subdirectory of your home directory.
You can change the directory or press Enter. The next prompt asks you for a passphrase:

Enter passphrase (empty for no passphrase):

You can enter a passphrase here or just press Enter. You will be prompted to enter the passphrase again, and then the public/private key pair will be generated. The public key will be saved in a file called id_rsa.pub.

Entering SSH keys for a user in the user manager.

The next step is adding the newly generated public key to the admin account in pfSense. Open id_rsa.pub in the text editor of your choice, select the public key, and copy it to the clipboard. Then, in the web GUI, navigate to System | User Manager and click on the Edit user icon for the appropriate user. Scroll down to the Keys section and paste the key into the Authorized SSH keys box, then click on Save at the bottom of the page. You should now be able to SSH into the admin account without entering the password. Type the following at the command line:

ssh pfsense_address -l admin

Here, pfsense_address is the IP address of the pfSense system. If you specified a passphrase earlier, you will be prompted to enter it in order to unlock the private key; you will not be prompted for it on subsequent logins. Once you unlock the private key, you should be logged into the console.

The last section of the page, Console Options, gives you one more layer of security by allowing you to require a password for console login. Check this checkbox if you want to enable this option, bearing in mind that it could result in being locked out if you forget the password. If this happens, you may still be able to restore access by booting from the Live CD and doing a Pre-Flight Install, described in a subsequent section.

The next tab, Firewall/NAT, contains a number of important settings relating to pfSense's firewall functionality. Firewall Optimization Options allows you to select the optimization algorithm for the state table. The Normal option is designed for average network usage. High latency, as the name implies, is for connections in which a significant delay between request and response is expected (a satellite connection is a good example). Aggressive and Conservative are inverses of each other: Aggressive drops idle connections sooner than Normal, while Conservative keeps idle connections open longer than Normal would. The trade-off is that expiring idle connections too soon may drop legitimate connections, while keeping them open too long costs CPU and memory.

In the Firewall Advanced section, there is a Disable all packet filtering checkbox. Enabling this option disables all firewall functionality, including NAT. It should be used with caution, but may be useful in troubleshooting. The Firewall Maximum States and Firewall Maximum Table Entries options allow you to specify the maximum number of connections and the maximum number of table entries, respectively, to hold in the system state table. If you leave these entries blank, pfSense will assign reasonable defaults based on the amount of memory your system has. Since increasing the maximum number of connections and/or table entries leaves less memory for everything else, you will want to change these options with caution.
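If you want to see how close you are to these limits before tuning them, the underlying pf packet filter can be queried directly. A minimal sketch from the pfSense shell (option 8 from the console menu); all three flags are standard pfctl options:

# Summary statistics, including the current number of states
pfctl -s info

# Current limits (states, table entries, and so on)
pfctl -s memory

# List the individual state-table entries (output can be long)
pfctl -s states

Comparing the current state count from pfctl -s info against the limit shown by pfctl -s memory gives a quick sense of whether the defaults need raising.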
The Static route filtering checkbox, if checked, causes firewall rules not to take effect for traffic that enters and leaves through the same interface. This can be useful if you have a static route in which traffic enters pfSense through an interface but does not originate on that interface's network. This option does not apply to traffic whose source and destination are on the same interface's network; such traffic is intra-network traffic, and firewall rules would not apply to it whether or not this option was invoked.

The next section of the page, Bogon Networks, allows you to select the update frequency for the list of addresses that are reserved or not yet assigned by IANA. If someone is trying to access your network from a newly assigned IP address that has not yet been removed from the bogon list, they may find themselves blocked; if this happens on a frequent basis, you may want to increase the update frequency.

The next tab, Networking, contains a number of IPv6 options. The Allow IPv6 checkbox must be checked in order for IPv6 traffic to pass (it is checked by default). The next option, IPv6 over IPv4 Tunneling, enables transitional IPv6-over-IPv4 tunneling. There is also an option called Prefer IPv4 even when IPv6 is available, which causes IPv4 to be used in cases where a hostname resolves to both IPv4 and IPv6 addresses.

The next tab is called Miscellaneous. The Proxy Port section allows you to specify the URL of a remote proxy server, as well as the proxy port and an optional username and password. The following section, Load Balancing, has two settings. The first, Use sticky connections, causes successive connections from the same source to be directed to the same web server, instead of being directed to the next server in the pool, which would be the normal behavior. The timeout period for sticky connections can be adjusted in the adjacent edit box; the default is 0, meaning a sticky connection expires as soon as the last connection from the source expires. The second setting, Enable default gateway switching, switches from the default gateway to another available gateway when the default gateway goes down. This is not necessary in most cases, since gateway groups are an easier way to build redundancy into gateways.

The Scheduling section has only one option, but it is significant if you use rule scheduling. Checking the Do not kill connections when schedule expires checkbox causes connections permitted by a scheduled rule to survive even after the time period specified by the schedule expires; otherwise, pfSense will kill all existing connections when a schedule expires.

Upgrading, backing up, and restoring pfSense

You can usually upgrade pfSense from one version to another, although the means of upgrading may differ depending on which platform you are using. As long as the firmware is moving from an older version to a newer version, the upgrade will work unless otherwise noted. Before you make any changes, you should make an up-to-date backup. In the web GUI, you can back up the configuration by navigating to Diagnostics | Backup/Restore. In the Backup Configuration section of the page, set Backup Area to ALL, then click on Download Configuration and save the file.

Before you upgrade pfSense, it is also a good idea to have a plan for how to recover if the upgrade goes wrong. There is always a chance that an upgrade will leave pfSense in an unusable state; in such cases, it is helpful to have a backup system available, and with advance planning, the firewall can be quickly returned to the previous release.

There are three methods for upgrading pfSense.
The first is to download the upgrade binaries from the official pfSense site. The same options are available as for a full install: just download the appropriate image, write the image to the target media, and boot the system to be upgraded from the target media. For embedded systems, releases prior to 1.2.3 are not upgradable (in such cases, a full install is the only way to upgrade), but newer NanoBSD-based embedded images do support upgrades.

The second method is to upgrade from the console. From the console menu, select 13 (the Upgrade from Console option). pfSense will check the repositories to see if there is an update; if there is, it will report how much additional disk space is required and inform you that upgrading will require a reboot. It will also prompt you to confirm that the upgrade should proceed. Type y and press Enter, and the upgrade will proceed. pfSense will automatically reboot 10 seconds after downloading and installing the upgrade. Rebooting may take slightly longer than usual, since pfSense must extract the new binaries from a tarball during the boot sequence.

Upgrading pfSense from the console.

The third method, upgrading from the web GUI, is the easiest. Navigate to Status | Dashboard (this should also be the screen you see when initially logging into the web GUI). The System Information widget should have a section called Version, and this section should provide:

- The current version of pfSense
- Whether an update is available

If an update is available, there will be a link to the firmware auto-update page; click on this link. Alternatively, you can access this page by navigating to System | Update and clicking on the System Update tab (note that on versions prior to 2.3, this menu option was called Firmware instead of Update). If there is an update available, this page will let you know.

Choosing a firmware branch from the Update Settings tab of the Update option.

The Update Settings tab contains options that may be helpful in some situations. The Firmware Branch section has a drop-down box, allowing you to select either the Stable branch or the Development branch. The Dashboard check checkbox allows you to disable the dashboard auto-update check. Once you are satisfied with these settings, you can click on the Confirm button on the System Update tab. The updating process will then begin, starting with a backup if you chose that option. Upgrading can take as little as 15 minutes, especially if you are upgrading from one minor version to another. If you are upgrading in a production environment, you will want to schedule your upgrade for a suitable time (either during the weekend or after normal working hours). The web GUI will keep you informed of the status of the update process and tell you when it is complete.

Another means of updating pfSense in the web GUI is the manual update feature. To use it, navigate to System | Update and click on the Manual Update tab, then click on the Enable firmware upload button. A new section should appear on the page. The Choose file button launches a file dialog box where you can specify the firmware image file; once you select the file, click on Open to close the file dialog box. There is a Perform full backup prior to upgrade checkbox you can check if you want to back up the system, and an Upgrade firmware button that will start the upgrade process.
If the update is successful, the System Information widget on the Dashboard should indicate that you are on the current version of pfSense (or the version to which you upgraded, if you used the manual update). If something went wrong and pfSense is not functioning properly, and you made a backup prior to updating, you can restore the old version. The available methods of backing up and restoring pfSense are outlined in the next section.

Backing up and restoring pfSense

The following screenshot shows the options related to backing up and restoring pfSense:

Backup and restore options in pfSense 2.3.

You can back up and restore the config.xml file from the web GUI by navigating to Diagnostics | Backup/Restore. The first section, Backup configuration, allows you to back up some or all of the configuration data. There is a drop-down box that allows you to select which areas to back up, along with checkbox options such as Do not backup package information and Encrypt this configuration file. The final checkbox, selected by default, disables the backup of round robin database (RRD) data, the historical traffic-graph data, which you likely will not need to save. The Download Configuration as XML button allows you to save config.xml to a local drive.

Restoring the configuration is just as easy. In the Restore configuration section of the page, select the area to restore from the drop-down box and browse to the file by clicking on the Choose File button. Specify whether config.xml is encrypted with the corresponding checkbox, and then click the Restore configuration button.

Restoring a configuration with Pre-Flight Install

You may find it necessary to restore an old pfSense configuration, and it is possible that restoring it from the console or web GUI as described previously is not an option. In these cases, there is one more possible way of restoring an old configuration: a Pre-Flight Install (PFI). A PFI essentially involves the following steps (a sketch of the first step appears at the end of this section):

1. Copying a backup config.xml file into a directory called conf on a DOS/FAT-formatted USB drive.
2. Plugging the USB drive into the system whose configuration is to be restored, and then booting off the Live CD.
3. Installing pfSense from the CD onto the target system.
4. Rebooting the system and allowing pfSense to boot (off the target media, not the CD). The configuration should now be restored.

Another option that is useful if you want to retain your configuration while reinstalling pfSense is to choose the menu option Rescue config.xml during the installation process. This allows you to select and load a configuration file from any storage media attached to the system.
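As a sketch of step 1 of the PFI procedure (the device and mount-point names here are placeholders that will differ on your system), preparing the USB drive from a Linux workstation might look like this:

# Mount the FAT-formatted USB drive (assuming it shows up as /dev/sdb1)
sudo mkdir -p /mnt/usb
sudo mount /dev/sdb1 /mnt/usb

# Create the conf directory and copy the backed-up configuration into it
sudo mkdir -p /mnt/usb/conf
sudo cp ~/backups/config-backup.xml /mnt/usb/conf/config.xml

# Flush writes and unmount before unplugging the drive
sudo umount /mnt/usb

The only fixed requirements are the ones named above: the drive must be DOS/FAT formatted, the directory must be called conf, and the file must be named config.xml.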
Summary

The goal of this article was to provide an overview of how to get pfSense up and running. Completing it should give you an idea of where to deploy your pfSense system and what hardware to utilize. You should also know how to troubleshoot the most common installation problems, how to perform basic system configuration and interface setup for both IPv4 and IPv6 networks, and how to configure pfSense for remote access. Finally, you should know how to upgrade, back up, and restore pfSense.