GUI Components in Qt 5

Packt
30 Mar 2015
8 min read
In this article by Symeon Huang, author of the book Qt 5 Blueprints, we look at the typical, basic GUI components in Qt 5.

Design UI in Qt Creator

Qt Creator is the official IDE for Qt application development, and we're going to use it to design our application's UI. First, let's create a new project:

1. Open Qt Creator.
2. Navigate to File | New File or Project.
3. Choose Qt Widgets Application.
4. Enter the project's name and location. In this case, the project's name is layout_demo.

You may wish to follow the wizard and keep the default values. After this creation process, Qt Creator will generate the skeleton of the project based on your choices. The UI files are under the Forms directory. When you double-click on a UI file, Qt Creator will redirect you to the integrated Designer; the mode selector should have Design highlighted, and the main window should contain several sub-windows to let you design the user interface. Here we can design the UI by dragging and dropping.

Qt Widgets

Drag three push buttons from the widget box (widget palette) into the frame of MainWindow in the center. The default text displayed on these buttons is PushButton, but you can change the text if you want by double-clicking on a button. In this case, I changed them to Hello, Hola, and Bonjour accordingly. Note that this operation won't affect the objectName property, and in order to keep things neat and easy to find, we need to change the objectName! The right-hand side of the UI contains two windows. The upper-right section includes the Object Inspector and the lower-right includes the Property Editor. If we select a push button, we can easily change its objectName in the Property Editor. For the sake of convenience, I changed these buttons' objectName properties to helloButton, holaButton, and bonjourButton respectively. Save your changes and click on Run in the left-hand side panel; it will build the project automatically and then run it, as shown in the following screenshot.

In addition to the push button, Qt provides lots of commonly used widgets for us: buttons such as the tool button, radio button, and checkbox; advanced views such as the list, tree, and table; input widgets such as the line edit, spin box, font combo box, and date and time edit; and other useful widgets such as the progress bar, scroll bar, and slider. Besides, you can always subclass QWidget and write your own.

Layouts

A quick way to delete a widget is to select it and press the Delete button. Meanwhile, some widgets, such as the menu bar, status bar, and toolbar, can't be selected, so we have to right-click on them in the Object Inspector and delete them. Since they are useless in this example, it's safe to remove them, and we can do this for good. Okay, let's understand what needs to be done after the removal. You may want to keep all these push buttons on the same horizontal axis. To do this, perform the following steps:

1. Select all the push buttons, either by clicking on them one by one while keeping the Ctrl key pressed, or by drawing an enclosing rectangle containing all the buttons.
2. Right-click and select Lay out | Lay Out Horizontally. The keyboard shortcut for this is Ctrl + H.
3. Resize the horizontal layout and adjust its layoutSpacing by selecting it and dragging any of the points around the selection box until it fits best.

Hmm…! You may have noticed that the text of the Bonjour button is longer than that of the other two buttons, so it should be wider than the others. How do you do this?
You can change the horizontal layout object's layoutStretch property in the Property Editor. This value indicates the stretch factors of the widgets inside the horizontal layout; they will be laid out in proportion. Change it to 3,3,4, and there you are. The stretched size definitely won't be smaller than the minimum size hint. This is how the zero factor works when there is a nonzero natural number: the widget keeps its minimum size rather than producing an error with a zero divisor.

Now, drag Plain Text Edit just below, and not inside, the horizontal layout. Obviously, it would be neater if we could extend the plain text edit's width. However, we don't have to do this manually. In fact, we can change the layout of the parent, MainWindow. That's it! Right-click on MainWindow, and then navigate to Lay out | Lay Out Vertically. Wow! All the child widgets are automatically extended to the inner boundary of MainWindow, and they are kept in vertical order. You'll also find the Layout settings in the centralWidget property, which is exactly the same thing as the previous horizontal layout.

The last thing needed to make this application halfway decent is to change the title of the window. MainWindow is not the title you want, right? Click on MainWindow in the object tree. Then, scroll down its properties to find windowTitle. Name it whatever you want. In this example, I changed it to Greeting. Now, run the application again and you will see it looks like what is shown in the following screenshot.

Qt Quick Components

Since Qt 5, Qt Quick has evolved to version 2.0, which delivers a dynamic and rich experience. The language it uses is called QML, which is basically an extended version of JavaScript using a JSON-like format. To create a simple Qt Quick application based on Qt Quick Controls 1.2, follow these steps:

1. Create a new project named HelloQML.
2. Select Qt Quick Application instead of the Qt Widgets Application that we chose previously.
3. Select Qt Quick Controls 1.2 when the wizard navigates you to Select Qt Quick Components Set.
4. Edit the file main.qml under the root of the Resources file, qml.qrc, that Qt Creator has generated for our new Qt Quick project.

Let's see how the code should look:

import QtQuick 2.3
import QtQuick.Controls 1.2

ApplicationWindow {
    visible: true
    width: 640
    height: 480
    title: qsTr("Hello QML")

    menuBar: MenuBar {
        Menu {
            title: qsTr("File")
            MenuItem {
                text: qsTr("Exit")
                shortcut: "Ctrl+Q"
                onTriggered: Qt.quit()
            }
        }
    }

    Text {
        id: hw
        text: qsTr("Hello World")
        font.capitalization: Font.AllUppercase
        anchors.centerIn: parent
    }

    Label {
        anchors { bottom: hw.top; bottomMargin: 5; horizontalCenter: hw.horizontalCenter }
        text: qsTr("Hello Qt Quick")
    }
}

If you have ever touched Java or Python, then the first two lines won't be too unfamiliar to you. They simply import Qt Quick and Qt Quick Controls, and the number that follows is the version of the library. The body of this QML source file is really in JSON style, which enables you to understand the hierarchy of the user interface through the code. Here, the root item is ApplicationWindow, which is basically the same thing as QMainWindow in Qt/C++. When you run this application on Windows, you can barely find any difference between the Text item and the Label item.
But on some platforms, or when you change the system font and/or its colour, you'll find that Label follows the font and colour scheme of the system while Text doesn't. Run this application and you'll see that there is a menu bar, a text, and a label in the application window: exactly what we wrote in the QML file.

You may miss the Design mode from traditional Qt/C++ development. Well, you can still design a Qt Quick application in Design mode! Click on Design in the mode selector while editing the main.qml file. Qt Creator will redirect you to Design mode, where you can drag and drop UI components with the mouse. Almost all the widgets you use in a Qt Widgets application can be found here in a Qt Quick application. Moreover, you can use other modern widgets, such as the busy indicator, in Qt Quick, for which there is no counterpart in a Qt Widgets application. However, QML is a declarative language whose performance is obviously poorer than that of C++. Therefore, more and more developers choose to write the UI with Qt Quick in order to deliver a better visual style, while keeping the core functions in Qt/C++.

Summary

In this article, we had a brief encounter with various GUI components of Qt 5 and focused on the Design mode in Qt Creator. Two small examples were used as Qt-like "Hello World" demonstrations.

Predicting Sports Winners with Decision Trees and pandas

Packt
12 Aug 2015
6 min read
In this article by Robert Craig Layton, author of Learning Data Mining with Python, we will look at predicting the winner of games of the National Basketball Association (NBA) using a different type of classification algorithm: decision trees.

Collecting the data

The data we will be using is the match history data for the NBA, for the 2013-2014 season. The Basketball-Reference.com website contains a significant number of resources and statistics collected from the NBA and other leagues. Perform the following steps to download the dataset:

1. Navigate to http://www.basketball-reference.com/leagues/NBA_2014_games.html in your web browser.
2. Click on the Export button next to the Regular Season heading.
3. Download the file to your data folder (and make a note of the path).

This will download a CSV file containing the results of the 1,230 games in the regular season of the NBA. We will load the file with the pandas library, which is an incredibly useful library for manipulating data. Python also contains a built-in library called csv that supports reading and writing CSV files. We will use pandas instead, as it provides more powerful functions to work with datasets. For this article, you will need to install pandas. The easiest way to do that is to use pip3, which you may previously have used to install scikit-learn:

$ pip3 install pandas

Using pandas to load the dataset

We can load the dataset using the read_csv function in pandas as follows:

import pandas as pd
# data_filename should point to the CSV file downloaded above
dataset = pd.read_csv(data_filename)

The result of this is a data frame, a data structure used by pandas. The pandas.read_csv function has parameters to fix some of the problems in the data, such as missing headings, which we can specify when loading the file:

dataset = pd.read_csv(data_filename, parse_dates=["Date"], skiprows=[0,])
dataset.columns = ["Date", "Score Type", "Visitor Team", "VisitorPts", "Home Team", "HomePts", "OT?", "Notes"]

We can now view a sample of the data frame:

dataset.ix[:5]

Extracting new features

We extract our classes, 1 for a home win and 0 for a visitor win. We can specify this using the following code to extract those wins into a NumPy array:

dataset["HomeWin"] = dataset["VisitorPts"] < dataset["HomePts"]
y_true = dataset["HomeWin"].values

The first two new features we want to create indicate whether each of the two teams won their previous game. This roughly approximates which team is currently playing well. We will compute this feature by iterating through the rows in order and recording which team won. When we get to a new row, we look up whether the team won the last time we saw it:

from collections import defaultdict
won_last = defaultdict(int)

We can then iterate over all the rows and update the current row with the team's last result (win or loss):

for index, row in dataset.iterrows():
    home_team = row["Home Team"]
    visitor_team = row["Visitor Team"]
    row["HomeLastWin"] = won_last[home_team]
    row["VisitorLastWin"] = won_last[visitor_team]
    dataset.ix[index] = row

We then set our dictionary with each team's result (from this row) for the next time we see these teams:

    won_last[home_team] = row["HomeWin"]
    won_last[visitor_team] = not row["HomeWin"]

Decision trees

Decision trees are a class of classification algorithm that work like a flow chart: they consist of a sequence of nodes, where the values for a sample are used to make a decision on the next node to go to.
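To make the flow-chart analogy concrete, here is a purely illustrative, hand-written decision rule over the two features we just built. The function name and the choice of predictions in each branch are invented for illustration only; the real tree and its splits are learned automatically by scikit-learn in the next step.

def predict_home_win(home_last_win, visitor_last_win):
    # Root node: did the home team win its last game?
    if home_last_win:
        # The home team is in form; predict a home win (1).
        return 1
    # Second node: did the visitors win their last game?
    if visitor_last_win:
        # Only the visitors are in form; predict a visitor win (0).
        return 0
    # Neither team won last time; fall back on home advantage.
    return 1

print(predict_home_win(0, 1))  # 0: predict a visitor win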
We can use the DecisionTreeClassifier class to create a decision tree:

from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(random_state=14)

We now need to extract the dataset from our pandas data frame in order to use it with our scikit-learn classifier. We do this by specifying the columns we wish to use and using the values parameter of a view of the data frame:

X_previouswins = dataset[["HomeLastWin", "VisitorLastWin"]].values

Decision trees are estimators and therefore have fit and predict methods. We can also use the cross_val_score method as before to get the average score:

import numpy as np
from sklearn.cross_validation import cross_val_score  # sklearn.model_selection in newer versions

scores = cross_val_score(clf, X_previouswins, y_true, scoring='accuracy')
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))

This scores 56.1%: better than choosing randomly!

Predicting sports outcomes

We now have a method for testing how accurate our models are using the cross_val_score method, which allows us to try new features. For the first feature, we will create a feature that tells us whether the home team is generally better than the visitors by seeing whether they ranked higher in the previous season. To obtain the data, perform the following steps:

1. Head to http://www.basketball-reference.com/leagues/NBA_2013_standings.html
2. Scroll down to Expanded Standings. This gives us a single list for the entire league.
3. Click on the Export link to the right of this heading.
4. Save the download in your data folder.

In your IPython Notebook, enter the following into a new cell. You'll need to ensure that the file was saved into the location pointed to by the data_folder variable:

import os
standings_filename = os.path.join(data_folder, "leagues_NBA_2013_standings_expanded-standings.csv")
standings = pd.read_csv(standings_filename, skiprows=[0,1])

We then iterate over the rows and compare each team's standing:

dataset["HomeTeamRanksHigher"] = 0
for index, row in dataset.iterrows():
    home_team = row["Home Team"]
    visitor_team = row["Visitor Team"]

Between 2013 and 2014, a team was renamed, so we adjust for that inside the loop:

    if home_team == "New Orleans Pelicans":
        home_team = "New Orleans Hornets"
    elif visitor_team == "New Orleans Pelicans":
        visitor_team = "New Orleans Hornets"

Now, we can get the rankings for each team. We then compare them and update the feature in the row:

    home_rank = standings[standings["Team"] == home_team]["Rk"].values[0]
    visitor_rank = standings[standings["Team"] == visitor_team]["Rk"].values[0]
    row["HomeTeamRanksHigher"] = int(home_rank > visitor_rank)
    dataset.ix[index] = row

Next, we use the cross_val_score function to test the result. First, we extract the dataset as before:

X_homehigher = dataset[["HomeLastWin", "VisitorLastWin", "HomeTeamRanksHigher"]].values

Then, we create a new DecisionTreeClassifier instance and run the evaluation:

clf = DecisionTreeClassifier(random_state=14)
scores = cross_val_score(clf, X_homehigher, y_true, scoring='accuracy')
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))

This now scores 60.3%: even better than our previous result.

Unleash the full power of Python machine learning with our 'Learning Data Mining with Python' book.

Asynchronous Programming with Python

Packt
26 Aug 2015
20 min read
In this article by Giancarlo Zaccone, the author of the book Python Parallel Programming Cookbook, we will cover the following topics:

  - Introducing Asyncio
  - GPU programming with Python
  - Introducing PyCUDA
  - Introducing PyOpenCL

An asynchronous model is of fundamental importance, along with the concept of event programming. The execution model of asynchronous activities can be implemented using a single stream of main control, both in uniprocessor systems and multiprocessor systems. In the asynchronous model of a concurrent execution, various tasks intersect with each other along the timeline, and all of this happens under the action of a single flow of control (single-threaded). The execution of a task can be suspended and then resumed, alternating in time with any other task.

The asynchronous programming model

As you can see in the preceding figure, the tasks (each with a different color) are interleaved with one another, but they are in a single thread of control. This implies that when one task is in execution, the other tasks are not. A key difference between the multithreaded programming model and the single-threaded asynchronous concurrent model is that in the first case, the operating system decides on the timeline whether to suspend the activity of a thread and start another, while in the second case, the programmer must assume that a thread may be suspended and replaced with another at almost any time.

Introducing Asyncio

The Python module Asyncio provides facilities to manage events, coroutines, tasks and threads, and synchronization primitives to write concurrent code. When a program becomes very long and complex, it is convenient to divide it into subroutines, each of which realizes a specific task, for which the program implements a suitable algorithm. The subroutine cannot be executed independently, but only at the request of the main program, which is then responsible for coordinating the use of subroutines. Coroutines are a generalization of the subroutine. Like a subroutine, a coroutine computes a single computational step, but unlike subroutines, there is no main program used to coordinate the results. This is because the coroutines link themselves together to form a pipeline without any supervising function responsible for calling them in a particular order. In a coroutine, the execution point can be suspended and resumed later, having kept track of its local state in the intervening time.

In this example, we will see how to use the coroutine mechanism of Asyncio to simulate a finite state machine of five states. A finite-state automaton (FSA) is a mathematical model that is widely used not only in engineering disciplines but also in sciences such as mathematics and computer science. The automaton whose behavior we want to simulate is as follows:

Finite State Machine

We have indicated with S0, S1, S2, S3, and S4 the states of the system, with 0 and 1 as the values for which the automaton can pass from one state to the next (this operation is called a transition). So, for example, the automaton can pass from state S0 to state S1 only for the value 1, and from S0 to S2 only for the value 0.
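Before reading the coroutine version, it may help to see the transition rules written out as a plain lookup table. The following dictionary is only an illustrative restatement of the transitions that the simulation code below implements, with S4 playing the role of the End State:

transitions = {
    ("S0", 1): "S1", ("S0", 0): "S2",
    ("S1", 1): "S2", ("S1", 0): "S3",
    ("S2", 1): "S3", ("S2", 0): "S1",
    ("S3", 1): "S4", ("S3", 0): "S1",
}

# Walk the automaton over an example input sequence.
state = "S0"
for value in (1, 0, 1):
    state = transitions[(state, value)]
print(state)  # S4, the End State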
The Python code that follows simulates a transition of the automaton from the state S0, the so-called Start State, up to the state S4, the End State:

# Asyncio Finite State Machine
import asyncio
import time
from random import randint

@asyncio.coroutine
def StartState():
    print("Start State called \n")
    input_value = randint(0, 1)
    time.sleep(1)
    if (input_value == 0):
        result = yield from State2(input_value)
    else:
        result = yield from State1(input_value)
    print("Resume of the Transition : \nStart State calling " + result)

@asyncio.coroutine
def State1(transition_value):
    outputValue = str(("State 1 with transition value = %s \n" % (transition_value)))
    input_value = randint(0, 1)
    time.sleep(1)
    print("...Evaluating...")
    if (input_value == 0):
        result = yield from State3(input_value)
    else:
        result = yield from State2(input_value)
    result = "State 1 calling " + result
    return (outputValue + str(result))

@asyncio.coroutine
def State2(transition_value):
    outputValue = str(("State 2 with transition value = %s \n" % (transition_value)))
    input_value = randint(0, 1)
    time.sleep(1)
    print("...Evaluating...")
    if (input_value == 0):
        result = yield from State1(input_value)
    else:
        result = yield from State3(input_value)
    result = "State 2 calling " + result
    return (outputValue + str(result))

@asyncio.coroutine
def State3(transition_value):
    outputValue = str(("State 3 with transition value = %s \n" % (transition_value)))
    input_value = randint(0, 1)
    time.sleep(1)
    print("...Evaluating...")
    if (input_value == 0):
        result = yield from State1(input_value)
    else:
        result = yield from EndState(input_value)
    result = "State 3 calling " + result
    return (outputValue + str(result))

@asyncio.coroutine
def EndState(transition_value):
    outputValue = str(("End State with transition value = %s \n" % (transition_value)))
    print("...Stop Computation...")
    return (outputValue)

if __name__ == "__main__":
    print("Finite State Machine simulation with Asyncio Coroutine")
    loop = asyncio.get_event_loop()
    loop.run_until_complete(StartState())

After running the code, we have an output similar to this:

C:\Python CookBook\Chapter 4- Asynchronous Programming\codes - Chapter 4>python asyncio_state_machine.py
Finite State Machine simulation with Asyncio Coroutine
Start State called
...Evaluating...
...Evaluating...
...Evaluating...
...Evaluating...
...Evaluating...
...Evaluating...
...Evaluating...
...Evaluating...
...Evaluating...
...Evaluating...
...Evaluating...
...Evaluating...
...Stop Computation...
Resume of the Transition :
Start State calling State 1 with transition value = 1
State 1 calling State 3 with transition value = 0
State 3 calling State 1 with transition value = 0
State 1 calling State 2 with transition value = 1
State 2 calling State 3 with transition value = 1
State 3 calling State 1 with transition value = 0
State 1 calling State 2 with transition value = 1
State 2 calling State 1 with transition value = 0
State 1 calling State 3 with transition value = 0
State 3 calling State 1 with transition value = 0
State 1 calling State 2 with transition value = 1
State 2 calling State 3 with transition value = 1
State 3 calling End State with transition value = 1

Each state of the automata has been defined with the annotation @asyncio.coroutine.
For example, the state S0 is:

@asyncio.coroutine
def StartState():
    print("Start State called \n")
    input_value = randint(0, 1)
    time.sleep(1)
    if (input_value == 0):
        result = yield from State2(input_value)
    else:
        result = yield from State1(input_value)

The transition to the next state is determined by input_value, which is defined by the randint(0,1) function of Python's random module. This function randomly provides the value 0 or 1, and thereby randomly determines the state to which the finite-state machine will pass:

input_value = randint(0,1)

After determining the state to which the finite-state machine will pass, the coroutine calls the next coroutine using the yield from command:

if (input_value == 0):
    result = yield from State2(input_value)
else:
    result = yield from State1(input_value)

The variable result is the value that each coroutine returns. It is a string, and at the end of the computation, we can reconstruct the transitions from the initial state of the automaton, the Start State, up to the final state, the End State. The main program starts the evaluation inside the event loop:

if __name__ == "__main__":
    print("Finite State Machine simulation with Asyncio Coroutine")
    loop = asyncio.get_event_loop()
    loop.run_until_complete(StartState())

GPU programming with Python

A graphics processing unit (GPU) is an electronic circuit that specializes in processing data to render images from polygonal primitives. Although they were designed to render images, GPUs have continued to evolve, becoming more complex and efficient both in serving the real-time and offline rendering communities and in performing any kind of scientific computation. Each GPU is indeed composed of several processing units called streaming multiprocessors (SM), representing the first logic level of parallelism; each SM works simultaneously and independently from the others.

The GPU architecture

Each SM is in turn divided into a group of Stream Processors (SP), each of which has a real execution core and can run a thread sequentially. An SP represents the smallest unit of execution logic and the finest level of parallelism. The division into SMs and SPs is structural in nature, but it is possible to outline a further logical organization of the SPs of a GPU, which are grouped together in logical blocks characterized by a particular mode of execution: all the cores that make up a group run the same instructions at the same time. This is just the SIMD (Single Instruction, Multiple Data) model. The programming paradigm that characterizes GPU computing is also called stream processing, because the data can be viewed as a homogeneous flow of values to which the same operations are applied synchronously. Currently, the most efficient solutions for exploiting the computing power provided by GPU cards are the software libraries CUDA and OpenCL.

Introducing PyCUDA

PyCUDA is a Python wrapper for CUDA (Compute Unified Device Architecture), the software library developed by NVIDIA for GPU programming. The PyCUDA programming model is designed for the common execution of a program on the CPU and GPU, so as to allow you to perform the sequential parts on the CPU and the more numerically intensive parts on the GPU. The phases to be performed in sequential mode are implemented and executed on the CPU (host), while the steps to be performed in parallel are implemented and executed on the GPU (device).
The functions to be performed in parallel on the device are called kernels. The general skeleton for the execution of a generic kernel function on the device is as follows:

1. Allocation of memory on the device.
2. Transfer of data from the host memory to the memory allocated on the device.
3. Running the device: running the configuration and invocation of the kernel function.
4. Transfer of the results from the memory on the device to the host memory.
5. Release of the memory allocated on the device.

The PyCUDA programming model

To show the PyCUDA workflow, let's consider a 5 × 5 random array and the following procedure:

1. Create the 5 × 5 array on the CPU.
2. Transfer the array to the GPU.
3. Perform a task on the array in the GPU (double all the items in the array).
4. Transfer the array from the GPU to the CPU.
5. Print the results.

The code for this is as follows:

import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
import numpy

a = numpy.random.randn(5,5)
a = a.astype(numpy.float32)
a_gpu = cuda.mem_alloc(a.nbytes)
cuda.memcpy_htod(a_gpu, a)

mod = SourceModule("""
  __global__ void doubleMatrix(float *a)
  {
    int idx = threadIdx.x + threadIdx.y*4;
    a[idx] *= 2;
  }
  """)

func = mod.get_function("doubleMatrix")
func(a_gpu, block=(5,5,1))

a_doubled = numpy.empty_like(a)
cuda.memcpy_dtoh(a_doubled, a_gpu)
print("ORIGINAL MATRIX")
print(a)
print("DOUBLED MATRIX AFTER PyCUDA EXECUTION")
print(a_doubled)

The example output should look like this:

C:\Python CookBook\Chapter 6 - GPU Programming with Python>python PyCudaWorkflow.py
ORIGINAL MATRIX
[[-0.59975582 1.93627465 0.65337795 0.13205571 -0.46468592]
 [ 0.01441949 1.40946579 0.5343408 -0.46614054 -0.31727529]
 [-0.06868593 1.21149373 -0.6035406 -1.29117763 0.47762445]
 [ 0.36176383 -1.443097 1.21592784 -1.04906416 -1.18935871]
 [-0.06960868 -1.44647694 -1.22041082 1.17092752 0.3686313 ]]
DOUBLED MATRIX AFTER PyCUDA EXECUTION
[[-1.19951165 3.8725493 1.3067559 0.26411143 -0.92937183]
 [ 0.02883899 2.81893158 1.0686816 -0.93228108 -0.63455057]
 [-0.13737187 2.42298746 -1.2070812 -2.58235526 0.95524889]
 [ 0.72352767 -1.443097 1.21592784 -1.04906416 -1.18935871]
 [-0.06960868 -1.44647694 -1.22041082 1.17092752 0.3686313 ]]

The code starts with the following imports:

import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule

The pycuda.autoinit import automatically picks a GPU to run on, based on availability and number. It also creates a GPU context for subsequent code to run in. Both the chosen device and the created context are available from pycuda.autoinit as importable symbols if needed. The SourceModule component, on the other hand, is the object in which C-like code for the GPU must be written. The first step is to generate the input 5 × 5 matrix. Since most GPU computations involve large arrays of data, the NumPy module must be imported:

import numpy
a = numpy.random.randn(5,5)

Then, the items in the matrix are converted into single-precision mode, since many NVIDIA cards support only single precision:

a = a.astype(numpy.float32)

The first operation needed to run something on a GPU is to load the input array from the host memory (CPU) to the device (GPU). This is done at the beginning of the operation and consists of two steps that are performed by invoking two functions provided by PyCUDA. The device and host memory may never communicate while a kernel function is being performed.
This means that, to run a function in parallel on the device, the data relating to it must be present in the memory of the device itself. Before you copy data from the host memory to the device memory, you must allocate the memory required on the device:

a_gpu = cuda.mem_alloc(a.nbytes)

Then, the matrix is copied from the host memory to that of the device with the cuda.memcpy_htod function call:

cuda.memcpy_htod(a_gpu, a)

We also note that a_gpu is one-dimensional, and on the device we need to handle it as such. All these operations do not require the invocation of a kernel and are made directly by the main processor. The SourceModule entity serves to define the (C-like) kernel function doubleMatrix that multiplies each array entry by 2:

mod = SourceModule("""
  __global__ void doubleMatrix(float *a)
  {
    int idx = threadIdx.x + threadIdx.y*4;
    a[idx] *= 2;
  }
  """)

The __global__ qualifier is a directive that indicates that the doubleMatrix function will be processed on the device. It is the CUDA compiler, nvcc, that will be used to perform this task. Let's look at the function's body, which is as follows:

int idx = threadIdx.x + threadIdx.y*4;

The idx variable is the matrix index that is identified by the thread coordinates threadIdx.x and threadIdx.y. The matrix element with the index idx is then multiplied by 2:

a[idx] *= 2;

We also note that this kernel function will be executed once in each of the 25 launched threads. Both the variables threadIdx.x and threadIdx.y contain indices between 0 and 4, and the pair is different for each thread. Thread scheduling is directly linked to the GPU architecture and its intrinsic parallelism. A block of threads is assigned to a single SM. Here, threads are further divided into groups called warps. The size of a warp depends on the architecture under consideration. The threads of the same warp are managed by a control unit called the warp scheduler. To take full advantage of the inherent parallelism of the SM, the threads of the same warp must execute the same instruction. If this condition does not occur, we speak of divergence of threads. If the threads of the same warp execute different instructions, the control unit cannot handle all the warps; it must follow the sequences of instructions for every single thread (or for homogeneous subsets of threads) in a serial mode. Let's observe how the thread block is divided into various warps: threads are divided by the value of threadIdx. The threadIdx structure consists of three fields: threadIdx.x, threadIdx.y, and threadIdx.z.

Thread blocks subdivision: T(x,y), where x = threadIdx.x and y = threadIdx.y

We can see again that the code in the kernel function will be automatically compiled by the nvcc CUDA compiler. If there are no errors, a pointer to the compiled function will be created. In fact, mod.get_function("doubleMatrix") returns an identifier to the created function, func:

func = mod.get_function("doubleMatrix")

To perform a function on the device, you must first configure the execution appropriately. This means that we need to determine the size of the coordinates that identify and distinguish the threads belonging to different blocks. This will be done using the block parameter inside the func call:

func(a_gpu, block = (5, 5, 1))

The block = (5, 5, 1) tells us that we are calling the kernel function with the a_gpu linearized input matrix and a single thread block of size 5 threads in the x direction, 5 threads in the y direction, and 1 thread in the z direction, that is, 25 threads in total.
This structure is designed around the parallel implementation of the algorithm of interest. The division of the workload results in an early form of parallelism that is sufficient and necessary to make use of the computing resources provided by the GPU. Once you've configured the kernel's invocation, you can invoke the kernel function that executes instructions in parallel on the device. Each thread executes the same kernel code. After the computation on the GPU device, we use an array to store the results:

a_doubled = numpy.empty_like(a)
cuda.memcpy_dtoh(a_doubled, a_gpu)

Introducing PyOpenCL

As with PyCUDA programming, the first step in building a program for PyOpenCL is the encoding of the host application. In fact, this runs on the host computer (typically, the user's PC) and then dispatches the kernel application to the connected devices (GPU cards). The host application must contain five data structures, which are as follows:

  - Device: This identifies the hardware where the kernel code must be executed. A PyOpenCL application can be executed not only on CPU and GPU cards but also on embedded devices such as FPGAs (Field Programmable Gate Arrays).
  - Program: This is a group of kernels. A program selects which kernel must be executed on the device.
  - Kernel: This is the code to be executed on the device. A kernel is essentially a (C-like) function that can be compiled for execution on any device that supports OpenCL drivers. The kernel is the only way the host can call a function that will run on a device. When the host invokes a kernel, many work items start running on the device. Each work item runs the code of the kernel, but works on a different part of the dataset.
  - Command queue: Each device receives kernels through this data structure. A command queue orders the execution of kernels on the device.
  - Context: This is a group of devices. A context allows devices to receive kernels and transfer data.

PyOpenCL programming

The preceding figure shows how these data structures can work in a host application. Let's remember again that a program can contain multiple functions to be executed on the device, and that each kernel encapsulates only a single function from the program. In this example, we show you the basic steps to build a PyOpenCL program. The task to be executed is the parallel sum of two vectors. In order to keep the output readable, let's consider two vectors, each of 100 elements. The ith element of the resulting vector will be the sum of the ith element of vector_a and the ith element of vector_b.
Of course, to be able to appreciate the parallel execution of this code, you can also increase the size of the input (vector_dimension) by some orders of magnitude:

import numpy as np
import pyopencl as cl
import numpy.linalg as la

vector_dimension = 100
vector_a = np.random.randint(vector_dimension, size=vector_dimension)
vector_b = np.random.randint(vector_dimension, size=vector_dimension)

platform = cl.get_platforms()[0]
device = platform.get_devices()[0]
context = cl.Context([device])
queue = cl.CommandQueue(context)

mf = cl.mem_flags
a_g = cl.Buffer(context, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=vector_a)
b_g = cl.Buffer(context, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=vector_b)

program = cl.Program(context, """
__kernel void vectorSum(__global const int *a_g, __global const int *b_g, __global int *res_g) {
  int gid = get_global_id(0);
  res_g[gid] = a_g[gid] + b_g[gid];
}
""").build()

res_g = cl.Buffer(context, mf.WRITE_ONLY, vector_a.nbytes)
program.vectorSum(queue, vector_a.shape, None, a_g, b_g, res_g)

res_np = np.empty_like(vector_a)
cl.enqueue_copy(queue, res_np, res_g)

print("PyOPENCL SUM OF TWO VECTORS")
print("Platform Selected = %s" % platform.name)
print("Device Selected = %s" % device.name)
print("VECTOR LENGTH = %s" % vector_dimension)
print("INPUT VECTOR A")
print(vector_a)
print("INPUT VECTOR B")
print(vector_b)
print("OUTPUT VECTOR RESULT A + B ")
print(res_np)

assert la.norm(res_np - (vector_a + vector_b)) < 1e-5

The output from Command Prompt should look like this:

C:\Python CookBook\Chapter 6 - GPU Programming with Python\Chapter 6 - codes>python PyOpenCLParallellSum.py
Platform Selected = NVIDIA CUDA
Device Selected = GeForce GT 240
VECTOR LENGTH = 100
INPUT VECTOR A
[ 0 29 88 46 68 93 81 3 58 44 95 20 81 69 85 25 89 39 47 29 47 48 20 86 59
 99 3 26 68 62 16 13 63 28 77 57 59 45 52 89 16 6 18 95 30 66 19 29 31 18
 42 34 70 21 28 0 42 96 23 86 64 88 20 26 96 45 28 53 75 53 39 83 85 99 49
 93 23 39 1 89 39 87 62 29 51 66 5 66 48 53 66 8 51 3 29 96 67 38 22 88]
INPUT VECTOR B
[98 43 16 28 63 1 83 18 6 58 47 86 59 29 60 68 19 51 37 46 99 27 4 94 5
 22 3 96 18 84 29 34 27 31 37 94 13 89 3 90 57 85 66 63 8 74 21 18 34 93
 17 26 9 88 38 28 14 68 88 90 18 6 40 30 70 93 75 0 45 86 15 10 29 84 47
 74 22 72 69 33 81 31 45 62 81 66 69 14 71 96 91 51 35 4 63 36 28 65 10 41]
OUTPUT VECTOR RESULT A + B
[ 98 72 104 74 131 94 164 21 64 102 142 106 140 98 145 93 108 90 84 75 146 75 24 180 64
 121 6 122 86 146 45 47 90 59 114 151 72 134 55 179 73 91 84 158 38 140 40 47 65 111
 59 60 79 109 66 28 56 164 111 176 82 94 60 56 166 138 103 53 120 139 54 93 114 183 96
 167 45 111 70 122 120 118 107 91 132 132 74 80 119 149 157 59 86 7 92 132 95 103 32 129]

In the first lines of the code, after the required module imports, we defined the input vectors:

vector_dimension = 100
vector_a = np.random.randint(vector_dimension, size=vector_dimension)
vector_b = np.random.randint(vector_dimension, size=vector_dimension)

Each vector contains 100 integer items that are randomly selected through the NumPy function np.random.randint(max integer, size of the vector). Then, we must select the device on which to run the kernel code. To do this, we must first select the platform using the PyOpenCL get_platforms() statement:

platform = cl.get_platforms()[0]

This platform, as you can see from the output, corresponds to the NVIDIA CUDA platform.
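If more than one OpenCL platform or device is installed on your machine, hard-coding index [0] may pick the wrong one. As a small optional sketch, the same pyopencl module can be used to list what is available before choosing the indices:

import pyopencl as cl

# List every OpenCL platform and the devices it exposes,
# so the right indices can be chosen for this machine.
for p_index, p in enumerate(cl.get_platforms()):
    print("Platform %d: %s" % (p_index, p.name))
    for d_index, d in enumerate(p.get_devices()):
        print("  Device %d: %s" % (d_index, d.name))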
Then, we must select the device using the platform's get_devices() method:

device = platform.get_devices()[0]

In the following steps, the context and the queue are defined; PyOpenCL provides the Context (device selected) and CommandQueue (context selected) methods:

context = cl.Context([device])
queue = cl.CommandQueue(context)

To perform the computation on the device, the input vectors must be transferred to the device's memory. So, two input buffers in the device memory must be created:

mf = cl.mem_flags
a_g = cl.Buffer(context, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=vector_a)
b_g = cl.Buffer(context, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=vector_b)

Also, we prepare the buffer for the resulting vector:

res_g = cl.Buffer(context, mf.WRITE_ONLY, vector_a.nbytes)

Finally, the core of the script, the kernel code, is defined inside a program as follows:

program = cl.Program(context, """
__kernel void vectorSum(__global const int *a_g, __global const int *b_g, __global int *res_g) {
  int gid = get_global_id(0);
  res_g[gid] = a_g[gid] + b_g[gid];
}
""").build()

The kernel's name is vectorSum. The parameter list defines the data types of the input arguments (vectors of integers) and the output data type (again, a vector of integers). Inside the kernel, the sum of the two vectors is simply defined as:

  - Initialize the vector index: int gid = get_global_id(0);
  - Sum the vectors' components: res_g[gid] = a_g[gid] + b_g[gid];

In OpenCL and PyOpenCL, buffers are attached to a context and are only moved to a device once the buffer is used on that device. Finally, we execute vectorSum on the device:

program.vectorSum(queue, vector_a.shape, None, a_g, b_g, res_g)

To visualize the results, an empty vector is built:

res_np = np.empty_like(vector_a)

Then, the result is copied into this vector:

cl.enqueue_copy(queue, res_np, res_g)

Finally, the results are displayed:

print("VECTOR LENGTH = %s" % vector_dimension)
print("INPUT VECTOR A")
print(vector_a)
print("INPUT VECTOR B")
print(vector_b)
print("OUTPUT VECTOR RESULT A + B ")
print(res_np)

To check the result, we use the assert statement. It tests the result and triggers an error if the condition is false:

assert la.norm(res_np - (vector_a + vector_b)) < 1e-5

Summary

In this article, we discussed Asyncio, GPU programming with Python, PyCUDA, and PyOpenCL.

Golang Decorators: Logging & Time Profiling

Nicholas Maccharoli
30 Mar 2016
6 min read
Golang's imperative world

Golang is not, by any means, a functional language; its design remains true to its jingle, which says that it is "C for the 21st Century". One task I tried early on in learning the language was to search for the map, filter, and reduce functions in the standard library, but to no avail. Next, I tried rolling my own versions, but I hit a bit of a road block when I discovered that, at the time of writing this, there is no support for generics in the language. There is, however, support for Higher Order Functions or, more simply put, functions that take other functions as arguments and return functions.

If you have spent some time in Python, you may have come to love a design pattern called "Decorator". In fact, decorators make life in Python so great that support for applying them is built right into the language with a nifty @ operator! Python frameworks such as Flask use decorators extensively. If you have little or no experience in Python, fear not, for the concept is a design pattern independent of any language.

Decorators

An alternative name for the decorator pattern is "wrapper", which pretty much sums it all up in one word! A decorator's job is only to wrap a function so that additional code can be executed when the original function is called. This is accomplished by writing a function that takes a function as its argument and returns a function of the same type (Higher Order Functions in action!). While this still calls the original function and passes through its return value, it does something extra along the way.

Decorators for logging

We can easily log what gets passed to a specific function with a little help from our decorator friends. Say we wanted to log which user liked a blog post and what the ID of the post was, all without touching any code in the original likePost function. Here is our original function:

func likePost(userId int, postId int) bool {
    fmt.Printf("Update Complete!\n")
    return true
}

Our decorator might look something like this:

type LikeFunc func(int, int) bool

func decoratedLike(f LikeFunc) LikeFunc {
    return func(userId int, postId int) bool {
        fmt.Printf("likePost Log: User %v liked post# %v\n", userId, postId)
        return f(userId, postId)
    }
}

Note the use of the type definition here. I encourage you to use it for the sake of readability when defining functions with long signatures, such as those of decorators, as you need to type the function signature twice. Now, we can apply the decorator and allow the logging to begin:

r := decoratedLike(likePost)
r(1414, 324)
r(5454, 324)
r(4322, 250)

This produces the following output:

likePost Log: User 1414 liked post# 324
Update Complete!
likePost Log: User 5454 liked post# 324
Update Complete!
likePost Log: User 4322 liked post# 250
Update Complete!

Our original likePost function still gets called and runs as expected, but now we get an additional log detailing the user and post IDs that were passed to the function each time it was called. Hopefully, this will help speed up debugging our likePost function if and when we encounter strange behavior!

Decorators for performance!

Say we run a "Top 10" site, and previously, our main sorting routine to find the top 10 cat photos of this week on the Internet was written with Golang's func Sort(data Interface) function from the sort package of the standard library. Everything is fine until we are informed that Fluffy the cat is infuriated that she is coming in at number six on the list and not number five.
The cats at ranks five and six on the list both had 5000 likes each, but Fluffy reached 5000 likes a day earlier than Bozo the cat, who is currently ranked higher. We like to give credit where it's due, so we apologize to Fluffy and go on to use the stable version of the sort, func Stable(data Interface), which preserves the order of elements equal in value during the sort. We can improve our code and tests so that this does not happen again (we promised Fluffy!).

The tests pass, everything looks great, and we deploy gracefully... or so we think. Over the course of the day, other developers also deploy their changes, and then, after checking our performance reports, we notice a slowdown somewhere. Is it from our switch to stable sorting? Well, let's use decorators to measure the performance of both sort functions and check whether there is a noticeable dip in performance. Here's our testing function:

type SortFunc func(sort.Interface)

func timedSortFunc(f SortFunc) SortFunc {
    return func(data sort.Interface) {
        defer func(t time.Time) {
            fmt.Printf("--- Time Elapsed: %v ---\n", time.Since(t))
        }(time.Now())
        f(data)
    }
}

In case you are unfamiliar with defer, all it does is call the function it is passed right after its calling function returns. The arguments passed to defer are evaluated right away, so the value we get from time.Now() is really the start time of the function! Let's go ahead and give this test a go:

stable := timedSortFunc(sort.Stable)
unStable := timedSortFunc(sort.Sort)

// 10000 Elements with values ranging
// between 0 and 5000
randomCatList1 := randomCatScoreSlice(10000, 5000)
randomCatList2 := randomCatScoreSlice(10000, 5000)

fmt.Printf("Unstable Sorting Function:\n")
unStable(randomCatList1)
fmt.Printf("Stable Sorting Function:\n")
stable(randomCatList2)

The following output is yielded:

Unstable Sorting Function:
--- Time Elapsed: 282.889µs ---
Stable Sorting Function:
--- Time Elapsed: 93.947µs ---

Wow! Fluffy's complaint not only made our top 10 list more accurate, but now the lists sort about three times as fast with the stable version of sort as well! (However, we still need to be careful; sort.Stable most likely uses way more memory than the standard sort.Sort function.)

Final thoughts

Figuring out when and where to apply the decorator pattern is really up to you and your team. There are no hard rules, and you can completely live without it. However, when it comes to things like extra logging or profiling a pesky area of your code, this technique may prove to be a valuable tool.

Where is the rest of the code?

In order to get this example up and running, there is some setup code that was not shown here, to keep the post from becoming too bloated. I encourage you to take a look at this code here if you are interested!

About the author

Nick Maccharoli is an iOS/backend developer and open source enthusiast working at a start-up in Tokyo and enjoying the current development scene. You can see what he is up to at @din0sr or github.com/nirma.

Authorizations in SAP HANA

Packt
16 Jul 2013
28 min read
Roles

In SAP HANA, as in most of SAP's software, authorizations are grouped into roles. A role is a collection of authorization objects with their associated privileges. It allows us, as developers, to define self-contained units of authorization. In the same way that, at the start of this book, we created an attribute view giving us a coherent view of our customer data which we could reuse at will in more advanced developments, authorization roles allow us to create coherent sets of authorization data which we can then assign to users at will, making sure that users who are supposed to have the same rights always have the same rights.

If we had to assign individual authorization objects to users, we could be fairly sure that sooner or later we would forget someone in a department, and they would not be able to access the data they needed to do their everyday work. Worse, we might not give quite the same authorizations to one person, and have to spend valuable time correcting our error when they couldn't see the data they needed (or, worse and more dangerous because less obvious to us as developers, when the user could see more data than was intended). It is always a much better idea to group authorizations into a role and then assign the role to users than to assign authorizations directly to users. Assigning a role to a user means that when the user changes jobs and needs a new set of privileges, we can just remove the first role and assign a second one. Since we're just starting out using authorizations in SAP HANA, let's get into this good habit right from the start. It really will make our lives easier later on.

Creating a role

Role creation is done, like all other SAP HANA development, in the Studio. If your Studio is currently closed, please open it, and then select the Modeler perspective.

In order to create roles, privileges, and users, you will yourself need privileges. Your SAP HANA user will need the ROLE ADMIN, USER ADMIN, and CREATE STRUCTURED PRIVILEGE system privileges in order to do the development work in this article.

You will see in the Navigator panel that we have a Security folder, as we can see here. Please find the Security folder and then expand it. You will see a subfolder called Roles. Right-click on the Roles folder and select New Role to start creating a role. On the screen which opens, you will see a number of tabs representing the different authorization objects we can create, as shown here. We'll be looking at each of these in turn in the following sections, so for the moment just give your role a Name (BOOKUSER might be appropriate, if not very original).

Granted roles

Like many other object types in SAP HANA, once you have created a role, you can then use it inside another role. This onion-like arrangement makes authorizations a lot easier to manage. If we had, for example, a company with two teams:

  - Sales
  - Purchasing

And two countries, say:

  - France
  - Germany

We could create a role giving access to the sales analytic views, one giving access to the purchasing analytic views, one giving access to data for France, and one giving access to data for Germany. We could then create new roles, say Sales-France, which don't actually contain any authorization objects themselves, but contain only the Sales and the France roles. The role definition is much simpler to understand and to maintain than if we had directly created the Sales-France role and a Sales-Germany role with all the underlying objects.
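The Studio does all of this with a few clicks, but for readers who prefer to script such setups, the same nested-role idea can also be expressed as plain SQL against catalog (runtime) roles. The following is only an illustrative sketch using the hdbcli Python driver; the connection details and role names are placeholders, and using catalog roles rather than Studio-built roles is an assumption to adapt to your own landscape.

from hdbcli import dbapi  # SAP HANA Python client, assumed to be installed

# Placeholder connection details: replace with your own host, port, and user.
conn = dbapi.connect(address="hana-host", port=30015,
                     user="SYSTEM", password="********")
cur = conn.cursor()

# Build the small roles first, then nest them into a composite role.
for statement in (
    "CREATE ROLE SALES",
    "CREATE ROLE FRANCE",
    "CREATE ROLE SALES_FRANCE",
    "GRANT SALES TO SALES_FRANCE",   # SALES becomes a subrole of SALES_FRANCE
    "GRANT FRANCE TO SALES_FRANCE",
):
    cur.execute(statement)

cur.close()
conn.close()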
Once again, as with other development objects, creating small, self-contained roles and reusing them where possible will make your (maintenance) life easier. In the Granted Roles tab, we can see the list of subroles this main role contains. Note that this list is only a pointer: you cannot modify the actual authorizations in the other roles given here; you would need to open the individual role and make changes there.

Part of roles

The Part of Roles tab in the role definition screen is exactly the opposite of the Granted Roles tab. This tab lists all the other roles of which this role is a subrole. It is very useful for tracking authorizations, especially when you find yourself in a situation where a user seems to have too many authorizations and can see data they shouldn't be able to see. You cannot manipulate this list as such; it exists for information only. If you want to make changes, you need to modify the main role of which this role is a subrole.

SQL privileges

An SQL privilege is the lowest level at which we can define restrictions for using database objects. SQL privileges apply to the simplest objects in the database, such as schemas and tables. No attribute, analytic, or calculation view can be secured by SQL privileges. (This is not strictly true, though you can consider it so. What we have seen as an analytic view, for example, the graphical definition, the drag and drop, the checkboxes, has been transformed into a real database object in the _SYS_BIC schema upon activation. We could therefore define SQL privileges on this database object if we wanted, but this is not recommended and indeed limits the control we can have over the view. We'll see a little later that SAP HANA has much finer-grained authorizations for views than this.)

An important thing to note about SQL privileges is that they apply to the object on which they are defined. They restrict access to the given object itself, but do not at any point have any impact on the object's contents. For example, we can decide that one of our users can have access to the CUSTOMER table, but we couldn't restrict their access to only CUSTOMER values from the COUNTRY USA. SQL privileges can control access to any object under the Catalog node in the Navigator panel.

Let's add some authorizations to our BOOK schema and its contents. At the top of the SQL Privileges tab is a green plus sign button. Click on this button to get the Select Catalog Object dialog, shown here. As you can see in the screenshot, we have entered the two letters bo into the filter box at the top of the dialog. As soon as you enter at least two letters into this box, the Studio will attempt to find and list all database objects whose names contain the two letters you typed. If you continue to type, the search will be refined further. The first item in the list shown is the BOOK schema we created right back at the start of the book in Chapter 2, SAP HANA Studio - Installation and First Look. Please select the BOOK item, and then click on OK to add it to our new role.

The first thing to notice is the warning icon on the SQL Privileges tab itself. This means that your role definition is incomplete, and the role cannot be activated and used as yet. On the right of the screen, a list of checkbox options has appeared. These are the individual authorizations appropriate to the SQL object you have selected. In order to grant rights to a user via a role, you need to decide which of these options to include in the role.
The individual authorization names are self-explanatory. For example, the CREATE ANY authorization allows the creation of new objects inside a schema. The INSERT or SELECT authorization might at first seem unusual for a schema, as it's not an object which can support such instructions. However, the usage is actually quite elegant. If a user has INSERT rights on the schema BOOK, then they have INSERT rights on all objects inside the schema BOOK. Granting rights on the schema itself avoids having to specify the names of all the objects inside the schema. It also future-proofs your authorization concept, since new objects created in the schema will automatically inherit the existing authorizations you have defined.

On the far right of the screen, alongside each authorization, is a radio button which gives an additional privilege: the possibility for a given user to, in turn, grant the rights to a second user. This is an option which should not be given to all users, and so should not be present in all the roles you create; the right to grant privileges to users should be limited to your administrators. If you give just any user the right to pass on their authorizations, you will soon find that you are no longer able to determine who can do what in your database. For the moment, we are creating a simple role to show the workings of the authorization concept in SAP HANA, so we will check all the checkboxes and leave the radio buttons at No.

There are some SQL privileges which are necessary for any user to be able to do work in SAP HANA. They give access to the system objects describing the development models we create in SAP HANA, and if a user does not have these privileges, nothing will work at all; the user will not be authorized to do anything. The SQL privileges you will need to add to the role in order to give access to basic SAP HANA system objects are:

  - The SELECT privilege on the _SYS_BI schema
  - The SELECT privilege on the _SYS_REPO schema
  - The EXECUTE privilege on the REPOSITORY_REST procedure

Please add these SQL privileges to your role now, in order to obtain the following result.

As you can see, with the configuration we have just done, SQL privileges allow a user to access a given object and allow specific actions on the object. They do not, however, allow us to specify particular authorizations on the contents of the object. In order to use such fine-grained rights, we need to create an analytic privilege and then add it to our role, so let's do that now.

Analytic privileges

An analytic privilege is an artifact unique to SAP HANA; it is not part of the standard SQL authorization concept. Analytic privileges allow us to restrict access to certain values of a given attribute, analytic, or calculation view. This means that we can create one view which by default shows all available data, and then restrict what is actually visible to different users. We could restrict visible data by company code, by country, or by region. For example, our users in Europe would be allowed to see and work with data from our customers in Europe, but not those in the USA.

An analytic privilege is created through the Quick Launch panel of the Modeler, so please open that view now (or switch to the Quick Launch tab if it's already open). You don't need to close the role definition tab that's already open; we can leave it for now, create our analytic privilege, and then come back to the role definition later. From the Quick Launch panel, select Analytic Privilege, and then Create.
As usual with SAP HANA, we are asked to give Name , Description , and select a package for our object. We'll call it AP_EU (for analytic privilege, Europe), use the name as the description, and put it into our book package alongside our other developments. As is common in SAP HANA, we have the option of creating an analytic privilege from scratch (Create New ) or copying an existing privilege (Copy From ). We don't currently have any other analytic privileges in our development, so leave Create New selected, then click on Next to go to the second screen of the wizard, shown here: On this page of the dialog, we are prompted to add development models to the analytic privilege. This will then allow us to restrict access to given values of these models. In the previous screenshot, we have added the CUST_REV analytic view to the analytic privilege. This will allow us to restrict access to any value we specify of any of the fields visible in the view. To add a view to the analytic privilege, just find it in the left panel, click on its name and then click on the Add button. Once you have added the views you require for your authorizations, click on the Finish button at the bottom of the window to go to the next step. You will be presented with the analytic privilege development panel, reproduced here: This page allows us to define our analytic privilege completely. On the left we have the list of database views we have included in the analytic privilege. We can add more, or remove one, using the Add and Remove buttons. To the right, we can see the Associated Attributes Restrictions and Assign Restrictions boxes. These are where we define the restrictions to individual values, or sets of values. In the top box, Associated Attributes Restrictions , we define on which attributes we want to restrict access (country code or region, maybe). In the bottom box, Assign Restrictions , we define the individual values on which to restrict (for example, for company code, we could restrict to value 0001, or US22; for region, we could limit access to EU or USA). Let's add a restriction to the REGION field of our CUST_REV view now. Click on the Add button next to the Associated Attributes Restrictions box, to see the Select Object dialog: As can be expected, this dialog lists all the attributes in our analytic view. We just need to select the appropriate attribute and then click on OK to add it to the analytic privilege. Measures in the view are not listed in the dialog. We cannot restrict access to a view according to numeric values. We cannot therefore, make restrictions to customers with a revenue over 1 million Euros, for example. Please add the REGION field to the analytic privilege now. Once the appropriate fields have been added, we can define the restrictions to be applied to them. Click on the REGION field in the Associated Attributes Restrictions box, then on the Add button next to the Assign Restrictions box, to define the restrictions we want to apply. As we can see, restrictions can be defined according to the usual list of comparison operators. These are the same operators we used earlier to define a restricted column in our analytic views. In our example, we'll be restricting access to those lines with a REGION column equal to EU, so we'll select Equal . In the Value column, we can either type the appropriate value directly, or use the value help button, and the familiar Value Help Dialog which will appear, to select the value from those available in the view. 
Please add the EU value, either by typing it or by having SAP HANA find it for us, now. There is one more field which needs to be added to our analytic privilege, and the reason behind might seem at first a little strange. This point is valid for SAP HANA SP5, up to and including (at least) release 50 of the software. If this point turns out to be a bug, then it might not be necessary in later versions of the software. The field on which we want to restrict user actions (REGION) is not actually part of the analytic view itself. REGION, if you recall, is a field which is present in CUST_REV , thanks to the included attribute view CUST_ATTR . In its current state, the analytic privilege will not work, because no fields from the analytic view are actually present in the analytic privilege. We therefore need to add at least one of the native fields of the analytic view to the analytic privilege. We don't need to do any restriction on the field; however it needs to be in the privilege for everything to work as expected. This is hinted at in SAP Note 1809199, SAP HANA DB: debugging user authorization errors. Only if a view is included in one of the cube restrictions and at least one of its attribute is employed by one of the dimension restrictions, access to the view is granted by this analytical privilege. Not an explicit description of the workings of the authorization concept, but close. Our analytic view CUST_REV contains two native fields, CURRENCY and YEAR. You can add either of these to the analytic privilege. You do not need to assign any restrictions to the field; it just needs to be in the privilege. Here is the state of the analytic privilege when development work on it is finished: The Count column lists the number of restrictions in effect for the associated field. For the CURRENCY field, no restrictions are defined. We just need (as always) to activate our analytic privilege in order to be able to use it. The activation button is the same one as we have used up until now to activate the modeling views, the round green button with the right-facing white arrow at the top-right of the panel, which you can see on the preceding screenshot. Please activate the analytic privilege now. Once that has been done, we can add it to our role. Return to the Role tab (if you left it open) or reopen the role now. If you closed the role definition tab earlier, you can get back to our role by opening the Security node in the Navigator panel, then opening Roles, and double-clicking on the BOOKUSER role. In the Analytic Privileges tab of the role definition screen, click on the green plus sign at the top, to add an analytic privilege to our role. The analytic privilege we have just created is called AP_EU, so type ap_eu into the search box at the top of the dialog window which will open. As soon as you have typed at least two characters, SAP HANA will start searching for matching analytic privileges, and your AP_EU privilege will be listed, as we can see here: Click on OK to add the privilege to the role. We will see in a minute the effect our analytic privilege has on the rights of a particular user, but for the moment we can take a look at the second-to-last tab in the role definition screen, System Privileges . System privileges As its name suggests, system privileges gives to a particular user the right to perform specific actions on the SAP HANA system itself, not just on a given table or view. 
These are particular rights which should not be given to just any user, but should be reserved to those users who need to perform a particular task. We'll not be adding any of these privileges to our role, however we'll take a look at the available options and what they are used for. Click on the green plus-sign button at the top of the System Privileges tab to see a list of the available privileges. By default the dialog will do a search on all available values; there are only fifteen or so, but you can as usual filter them down if you require using the filter box at the top of the dialog: For a full list of the system privileges available and their uses, please refer to the SAP HANA SQL Reference, available on the help.sap.com website at http://help.sap.com/hana/html/sql_grant.html. Package privileges The last tab in the role definition screen concerns Package Privileges . These allow a given user to access those objects in a package. In our example, the package is called book, so if we add the book package to our role in the Package Privileges tab, we will see the following result: Assigning package privileges is similar to assigning SQL privileges we saw earlier. We first add the required object (here our book package), then we need to indicate exactly which rights we give to the role. As we can see in the preceding screenshot, we have a series of checkboxes on the right-hand side of the window. At least one of these checkboxes must be checked in order to save the role. The individual rights have names which are fairly self-explanatory. REPO.READ gives access to read the package, whereas REPO.EDIT_NATIVE_OBJECTS allows modification of objects, for example. The role we are creating is destined for an end user who will need to see the data in a role, but should not need to modify the data models in any way (and in fact we really don't want them to modify our data models, do we?). We'll just add the REPO.READ privilege, on our book package, to our role. Again we can decide whether the end user can in turn assign this privilege to others. And again, we don't need this feature in our role. At this point, our role is finished. We have given access to the SQL objects in the BOOK schema, created an analytic privilege which limits access to the Europe region in our CUST_REV model, and given read-only access to our book package. After activation (always) we'll be able to assign our role to a test user, and then see the effect our authorizations have on what the user can do and see. Please activate the role now. Users Users are probably the most important part of the authorization concept. They are where all our problems begin, and their attempts to do and see things they shouldn't are the main reason we have to spend valuable time defining authorizations in the first place. In technical terms, a user is just another database object. They are created, modified, and deleted in the same way a modeling view is. They have properties (their name and password, for example), and it is by modifying these properties that we influence the actions that the person who connects using the user can perform. Up until now we have been using the SYSTEM user (or the user that your database administrator assigned to you). This user is defined by SAP, and has basically the authorizations to do anything with the database. Use of this user is discouraged by SAP, and the author really would like to insist that you don't use it for your developments. 
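Since a user is just another database object, the remedy is straightforward: create a less privileged user and grant it only the role it needs. The following is a minimal SQL sketch only; the user name, password, and client value are illustrative, the exact syntax may vary slightly between revisions, and in this book the same steps are carried out through the Studio screens described over the next pages.

-- Hedged sketch: SQL equivalents of the Studio steps that follow (names are illustrative)
CREATE USER BOOKU PASSWORD Initial1;
ALTER USER BOOKU SET PARAMETER CLIENT = '100';   -- default session client, discussed below
GRANT BOOKUSER TO BOOKU;                         -- assign our role rather than broad SYSTEM-like rights
-- A user is dropped like any other database object:
-- DROP USER BOOKU;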
Accidents happen, and one of the great things about authorizations is that they help to prevent accidents. If you try to delete an important object with the SYSTEM user, you will delete it, and getting it back might involve a database restore. If however you use a development user with less authorization, then you wouldn't have been allowed to do the deletion, saving a lot of tears. Of course, the question then arises, why have you been using the SYSTEM user for the last couple of hundred pages of development. The answer is simple: if the author had started the book with the authorizations article, not many readers would have gotten past page 10. Let's create a new user now, and assign the role we have just created. From the Navigator panel, open the Security node, right-click on User , and select New User from the menu to obtain the user creation screen as shown in the following screenshot: Defining a user requires remarkably little information: User Name : The login that the user will use. Your company might have a naming convention for users. Users might even already have a standard login they use to connect to other systems in your enterprise. In our example, we'll create a user with the (once again rather unimaginative) name of BOOKU.   Authentication : How will SAP HANA know that the user connecting with the name of ANNE really is Anne? There are three (currently) ways of authenticating a user with SAP HANA. Password : This is the most common authentication system, SAP HANA will ask Anne for her password when she connects to the system. Since Anne is the only person who knows her password, we can be sure that Anne really is ANNE, and let her connect and do anything the user ANNE is allowed to do. Passwords in SAP HANA have to respect a certain format. By default this format is one capital, one lowercase, one number, and at least eight characters. You can see and change the password policy in the system configuration. Double-click on the system name in the Navigator panel, click on the Configuration tab, type the word pass into the filter box at the top of the tab, and scroll down to indexserver.ini and then password policy . The password format in force on your system is listed as password_layout . By default this is A1a, meaning capitals, numbers, and lowercase letters are allowed. The value can also contain the # character, meaning that special characters must also be contained in the password. The only special characters allowed by SAP HANA are currently the underscore, dollar sign, and the hash character. Other password policy defaults are also listed on this screen, such as maximum_password_lifetime (the time after which SAP HANA will force you to change your password).   Kerberos and SAML : These authentication systems need to be set up by your network administrator and allow single sign-on in your enterprise. This means that SAP HANA will be able to see the Windows username that is connecting to the system. The database will assume that the authentication part (deciding whether Anne really is ANNE) has already been done by Windows, and let the user connect.     Session Client : As we saw when we created attribute and analytic views back at the start of the book, SAP HANA understands the notion of client, referring to a partition system of the SAP ERP database. In the SAP ERP, different users can work in different Clients. In our development, we filtered on Client 100. 
A much better way of handling filtering is to define the default client for a user when we define their account. The Session Client field can be filled with the ERP Client in which the user works. In this way we do not need to filter on the analytic models, we can leave their client value at Dynamic in the view, and the actual value to use will be taken from the user record. Once again this means maintenance of our developments is a lot simpler. If you like, you can take a few minutes at the end of this article to create a user with a session client value of 100, then go back and reset our attribute and analytic views' default client value to Dynamic, reactivate everything, and then do a data preview with your test user. The result should be identical to that obtained when the view was filtered on client 100. However, if you then create a second user with a session client of 200, this second user will see different data.   We'll create a user with a password login, so type a password for your user now. Remember to adhere to the password policy in force on your system. Also note that the user will be required to change their password on first login. At the bottom of the user definition screen, as we can see from the preceding screenshot, we have a series of tabs corresponding to the different authorizations we can assign to our user. These are the same tabs we saw earlier when defining a role. As explained at the beginning of this article, it is considered best practice to assign authorizations to a role and then the role to a user, rather than assign authorizations directly to a user; this makes maintenance easier. For this reason we will not be looking at the different tabs for assigning authorizations to our user, other than the first one, Granted Roles . The Granted Roles tab lists, and allows adding and removing roles from the list assigned to the user. By default when we create a user, they have no roles assigned, and hence have no authorizations at all in the system. They will be able to log in to SAP HANA but will be able to do no development work, and will see no data from the system. Please click on the green plus sign button in the Granted Roles tab of the user definition screen, to add a role to the user account. You will be provided with the Select Role dialog, shown in part here: This dialog has the familiar search box at the top, so typing the first few letters of a role name will bring up a list of matching roles. Here our role was called BOOKUSER, so please do a search for it, then select it in the list and click on OK to add it to the user account. Once that is done, we can test our user to verify that we can perform the necessary actions with the role and user we have just created. We just need, as with all objects in SAP HANA, to activate the user object first. As usual, this is done with the round green button with the right-facing white arrow at the top-right of the screen. Please do this now. Testing our user and role The only real way to check if the authorizations we have defined are appropriate to the business requirements is to create a user and then try out the role to see what the user can and cannot see and do in the system. The first thing to do is to add our new user to the Studio so we can connect to SAP HANA using this new user. To do this, in the Navigator panel, right click on the SAP HANA system name, and select Add Additional User from the menu which appears. 
This will give you the Add additional user dialog, shown in the following screenshot:     Enter the name of the user you just created (BOOKU) and the password you assigned to the user. You will be required to change the password immediately: Click on Finish to add the user to the Studio. You will see immediately in the Navigator panel that we can now work with either our SYSTEM user, or our BOOKU user: We can also see straight away that BOOKU is missing the privileges to perform or manage data backups; the Backup node is missing from the list for the BOOKU user. Let's try to do something with our BOOKU user and see how the system reacts. The way the Studio lets you handle multiple users is very elegant, since the tree structure of database objects is duplicated, one per user, you can see immediately how the different authorization profiles affect the different users. Additionally, if you request a data preview from the CUST_REV analytic view in the book package under the BOOKU user's node in the Navigator panel, you will see the data according to the BOOKU user's authorizations. Requesting the same data preview from the SYSTEM user's node will see the data according to SYSTEM's authorizations. Let's do a data preview on the CUST_REV view with the SYSTEM user, for reference: As we can see, there are 12 rows of data retrieved, and we have data from the EU and NAR regions. If we ask for the same data preview using our BOOKU user, we can see much less data: BOOKU can only see nine of the 12 data rows in our view, as no data from the NAR region is visible to the BOOKU user. This is exactly the result we aimed to achieve using our analytic privilege, in our role, assigned to our user. Summary In this article, we have taken a look at the different aspects of the authorization concept in SAP HANA. We examined the different authorization levels available in the system, from SQL privileges, analytic privileges, system privileges, and package privileges. We saw how to add these different authorization concepts to a role, a reusable group of authorizations. We went on to create a new user in our SAP HANA system, examining the different types of authentications available, and the assignment of roles to users. Finally, we logged into the Studio with our new user account, and found out the first-hand effect our authorizations had on what the user could see and do. In the next article, we will be working with hierarchical data, seeing what hierarchies can bring to our reporting applications, and how to make the best use of them. Resources for Article : Further resources on this subject: SAP Netweaver: Accessing the MDM System [Article] SAP HANA integration with Microsoft Excel [Article] Exporting SAP BusinessObjects Dashboards into Different Environments [Article]

20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017

Aarthi Kumaraswamy
08 Dec 2017
9 min read
Kate Crawford is a Principal Researcher at Microsoft Research and a Distinguished Research Professor at New York University. She has spent the last decade studying the social implications of data systems, machine learning, and artificial intelligence. Her recent publications address data bias and fairness, and social impacts of artificial intelligence among others. This article attempts to bring our readers to Kate’s brilliant Keynote speech at NIPS 2017. It talks about different forms of bias in Machine Learning systems and the ways to tackle such problems. By the end of this article, we are sure you would want to listen to her complete talk on the NIPS Facebook page. All images in this article come from Kate's presentation slides and do not belong to us. The rise of Machine Learning is every bit as far reaching as the rise of computing itself.  A vast new ecosystem of techniques and infrastructure are emerging in the field of machine learning and we are just beginning to learn their full capabilities. But with the exciting things that people can do, there are some really concerning problems arising. Forms of bias, stereotyping and unfair determination are being found in machine vision systems, object recognition models, and in natural language processing and word embeddings. High profile news stories about bias have been on the rise, from women being less likely to be shown high paying jobs to gender bias and object recognition datasets like MS COCO, to racial disparities in education AI systems. 20 lessons on bias in machine learning systems Interest in the study of bias in ML systems has grown exponentially in just the last 3 years. It has more than doubled in the last year alone. We are speaking different languages when we talk about bias. I.e., it means different things to different people/groups. Eg: in law, in machine learning, in geometry etc. Read more on this in the ‘What is bias?’ section below. In the simplest terms, for the purpose of understanding fairness in machine learning systems, we can consider ‘bias’ as a skew that produces a type of harm. Bias in MLaaS is harder to identify and also correct as we do not build them from scratch and are not always privy to how it works under the hood. Data is not neutral. Data cannot always be neutralized. There is no silver bullet for solving bias in ML & AI systems. There are two main kinds of harms caused by bias: Harms of allocation and harms of representation. The former takes an economically oriented view while the latter is more cultural. Allocative harm is when a system allocates or withholds certain groups an opportunity or resource. To know more, jump to the ‘harms of allocation’ section. When systems reinforce the subordination of certain groups along the lines of identity like race, class, gender etc., they cause representative harm. This is further elaborated in the ‘Harms of representation’ section. Harm can further be classified into five types: stereotyping, recognition, denigration, under-representation and ex-nomination.  There are many technical approaches to dealing with the problem of bias in a training dataset such as scrubbing to neutral, demographic sampling etc among others. But they all still suffer from bias. Eg: who decides what is ‘neutral’. When we consider bias purely as a technical problem, which is hard enough, we are already missing part of the picture. Bias in systems is commonly caused by bias in training data. 
We can only gather data about the world we have which has a long history of discrimination. So, the default tendency of these systems would be to reflect our darkest biases.  Structural bias is a social issue first and a technical issue second. If we are unable to consider both and see it as inherently socio-technical, then these problems of bias are going to continue to plague the ML field. Instead of just thinking about ML contributing to decision making in say hiring or criminal justice, we also need to think of the role of ML in the harmful representation of human identity. While technical responses to bias are very important and we need more of them, they won’t get us all the way to addressing representational harms to group identity. Representational harms often exceed the scope of individual technical interventions. Developing theoretical fixes that come from the tech world for allocational harms is necessary but not sufficient. The ability to move outside our disciplinary boundaries is paramount to cracking the problem of bias in ML systems. Every design decision has consequences and powerful social implications. Datasets reflect not only the culture but also the hierarchy of the world that they were made in. Our current datasets stand on the shoulder of older datasets building on earlier corpora. Classifications can be sticky and sometimes they stick around longer than we intend them to, even when they are harmful. ML can be deployed easily in contentious forms of categorization that could have serious repercussions. Eg: free-of-bias criminality detector that has Physiognomy at the heart of how it predicts the likelihood of a person being a criminal based on his appearance. What is bias? 14th century: an oblique or diagonal line 16th century: undue prejudice 20th century: systematic differences between the sample and a population In ML: underfitting (low variance and high bias) vs overfitting (high variance and low bias) In Law:  judgments based on preconceived notions or prejudices as opposed to the impartial evaluation of facts. Impartiality underpins jury selection, due process, limitations placed on judges etc. Bias is hard to fix with model validation techniques alone. So you can have an unbiased system in an ML sense producing a biased result in a legal sense. Bias is a skew that produces a type of harm. Where does bias come from? Commonly from Training data. It can be incomplete, biased or otherwise skewed. It can draw from non-representative samples that are wholly defined before use. Sometimes it is not obvious because it was constructed in a non-transparent way. In addition to human labeling, other ways that human biases and cultural assumptions can creep in ending up in exclusion or overrepresentation of subpopulation. Case in point: stop-and-frisk program data used as training data by an ML system.  This dataset was biased due to systemic racial discrimination in policing. Harms of allocation Majority of the literature understand bias as harms of allocation. Allocative harm is when a system allocates or withholds certain groups, an opportunity or resource. It is an economically oriented view primarily. Eg: who gets a mortgage, loan etc. Allocation is immediate, it is a time-bound moment of decision making. It is readily quantifiable. In other words, it raises questions of fairness and justice in discrete and specific transactions. Harms of representation It gets tricky when it comes to systems that represent society but don't allocate resources. 
These are representational harms: when systems reinforce the subordination of certain groups along the lines of identity such as race, class, or gender. It is a long-term process that affects attitudes and beliefs. It is harder to formalize and track. It is a diffused depiction of humans and society, and it is at the root of all of the other forms of allocative harm.

Five types of representational harm
Source: Kate Crawford's NIPS 2017 Keynote presentation: Trouble with Bias

Stereotyping
A 2016 paper on word embeddings looked at gender-stereotypical associations and the distances between gender pronouns and occupations.
Google Translate swaps the genders of pronouns even in a gender-neutral language like Turkish.

Recognition
When a group is erased or made invisible by a system.
In a narrow sense, it is purely a technical problem, that is, does a system recognize a face inside an image or video? Failure to recognize someone's humanity.
In the broader sense, it is about respect, dignity, and personhood. The broader harm is whether the system works for you.
Eg: a system could not process darker skin tones, Nikon's camera software mischaracterized Asian faces as blinking, and HP's algorithms had difficulty recognizing anyone with a darker shade of pale.

Denigration
When people use culturally offensive or inappropriate labels.
Eg: autosuggestions when people typed 'jews should'.

Under-representation
An image search for 'CEOs' yielded only one woman CEO, at the bottom-most part of the page. The majority were white males.

Ex-nomination

Technical responses to the problem of bias
Improve accuracy
Blacklist
Scrub to neutral
Demographics or equal representation
Awareness

Politics of classification
Where did identity categories come from? What if bias is a deeper and more consistent issue with classification?
Source: Kate Crawford's NIPS 2017 Keynote presentation: Trouble with Bias

The fact that bias issues keep creeping into our systems and manifesting in new ways suggests that we must understand that classification is not simply a technical issue but a social issue as well, one that has real consequences for the people being classified. There are two themes:
Classification is always a product of its time
We are currently in the biggest experiment of classification in human history
Eg: the Labeled Faces in the Wild dataset is 77.5% male and 83.5% white. An ML system trained on this dataset will work best for that group.

What can we do to tackle these problems?
Start working on fairness forensics
Test our systems: for example, build pre-release trials to see how a system is working across different populations
Track the life cycle of a training dataset to know who built it and what the demographic skews in that dataset might be
Start taking interdisciplinarity seriously
Work with people who are not in our field but have deep expertise in other areas
Eg: the FATE (Fairness Accountability Transparency Ethics) group at Microsoft Research
Build spaces for collaboration, like the AI Now Institute
Think harder about the ethics of classification

The ultimate question for fairness in machine learning is this: who is going to benefit from the system we are building? And who might be harmed?
Facebook Application Development with Ruby on Rails

Packt
21 Oct 2009
4 min read
Technologies needed for this article

RFacebook
RFacebook (http://rfacebook.rubyforge.org/index.html) is a Ruby interface to the Facebook APIs. There are two parts to RFacebook: the gem and the plug-in. The plug-in is a stub that calls the RFacebook on Rails library packaged in the gem. The RFacebook on Rails library extends the default Rails controller, model, and view. RFacebook also provides a simple interface, through an RFacebook session, to call any Facebook API. RFacebook uses some meta-programming idioms in Ruby to call Facebook APIs.

Indeed
Indeed is a job search engine that allows users to search for jobs based on keywords and location. It includes job listings from major job boards and newspapers and even company career pages.

Acquiring candidates through Facebook
We will be creating a Facebook application and displaying it through Facebook. This application, when added into the list of a user's applications, allows the user to search for jobs using information in his or her Facebook profile. Facebook applications, though displayed within the Facebook interface, are actually hosted and processed somewhere else. To display it within Facebook, you need to host the application in a publicly available website and then register the application. We will go through these steps in creating the Job Board Facebook application.

Creating a Rails application
Next, create a Facebook application. To do this, you will need to first add a special application in your Facebook account: the Developer application. Go to http://www.facebook.com/developers and you will be asked to allow Developer to be installed in your Facebook account. Add the Developer application and agree to everything in the permissions list. You will not have any applications yet, so click on the create one link to create a new application. Next you will be asked for the name of the application you want to create. Enter a suitable name; in our case, enter 'Job Board' and you will be redirected to the Developer application main page, where you are shown your newly created application with its API key and secret. You will need the API key and secret in a while.

Installing and configuring RFacebook
RFacebook consists of two components: the gem and the plug-in. The gem contains the libraries needed to communicate with Facebook, while the plug-in enables your Rails application to integrate with Facebook. As mentioned earlier, the plug-in is basically a stub to the gem. The gem is installed like any other gem in Ruby:

$ gem install rfacebook

To install the plug-in, go to your RAILS_ROOT folder and type in:

$ ./script/plugin install svn://rubyforge.org/var/svn/rfacebook/trunk/rfacebook/plugins/rfacebook

Next, after the gem and plug-in are installed, run a setup rake script to create the configuration file in the RAILS_ROOT folder:

$ rake facebook:setup

This creates a facebook.yml configuration file in the RAILS_ROOT/config folder. The facebook.yml file contains three environments that mirror the Rails startup environments. Open it up to configure the necessary environment with the API key and secret that you were given when you created the application in the section above:

development:
  key: YOUR_API_KEY_HERE
  secret: YOUR_API_SECRET_HERE
  canvas_path: /yourAppName/
  callback_path: /path/to/your/callback/
  tunnel:
    username: yourLoginName
    host: www.yourexternaldomain.com
    port: 1234
    local_port: 5678

For now, just fill in the API key and secret. In a later section, when we configure the rest of the Facebook application, we will need to revisit this configuration.
Extracting the Facebook user profile
Next, we want to extract the user's Facebook user profile and display it on the Facebook application. We do this to let the user confirm that this is the information he or she wants to send as search parameters. To do this, create a controller named search_controller.rb in the RAILS_ROOT/app/controllers folder:

class SearchController < ApplicationController
  before_filter :require_facebook_install
  layout 'main'

  # index populates the profile data and then renders the view template
  def index
    view
    render :action => :view
  end

  # Fetch the user's current location, education, and work history from Facebook
  def view
    if fbsession.is_valid?
      response = fbsession.users_getInfo(
        :uids   => [fbsession.session_user_id],
        :fields => ["current_location", "education_history", "work_history"])
      @work_history      = response.work_history
      @education_history = response.education_history
      @current_location  = response.current_location
    end
  end
end
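The article stops at the controller. Purely as an illustration, a minimal view template could simply echo the instance variables set by SearchController#view; the file below is a hypothetical sketch and is not part of the original article:

<%# app/views/search/view.rhtml : hypothetical sketch, output formatting left deliberately crude %>
<h2>Your Facebook profile</h2>
<p>Current location: <%= @current_location %></p>
<p>Education history: <%= @education_history.inspect %></p>
<p>Work history: <%= @work_history.inspect %></p>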

IIS 10 Fundamentals

Packt
06 Jul 2017
12 min read
In this article, Ashraf Khan, the author of the book Microsoft IIS 10 Cookbook, helps us to understand the following topics:
Understanding IIS 10
Basic requirements of IIS 10
Understanding application pools on IIS 10
Installation of a lower framework version
Configuration of an application pool on IIS 10
(For more resources related to this topic, see here.)

Understanding IIS 10
In this recipe, we will understand how to work with IIS 10's new features. We will have an overview of the following new features added to IIS 10:
HTTP/2: HTTP/2 requests are now faster than ever. This feature is active by default with IIS 10 on Windows Server 2016 and Windows 10.
IIS 10 on Nano Server: IIS 10 is easy and quick to install on Nano Server. You can manage IIS 10 remotely with PowerShell or the IIS Manager console. Nano Server is much faster and consumes less memory and disk space than the full-fledged Windows Server. Rebooting is also faster, so you can manage time effectively.
Wildcard host headers: IIS 10 supports the subdomain feature for your parent domain name. This will really help you manage more subdomains with the same primary domain name.
PowerShell 5 cmdlets: IIS 10 adds a new, simplified PowerShell module for quick and easy management. You can use PowerShell to access server-management features remotely. It also supports the existing WebAdministration cmdlets.
FTP: FTP is a simple protocol for transferring files. This system can transfer files inside your company LAN and WAN using the default port, 21. IIS 10 includes an FTP server that is easy to configure and manage.
FTPS: FTPS is the same as FTP, with the only difference that it is secure. FTPS transfers data with SSL. We are going to use HTTPS port 443. For this, we need to create and install an SSL certificate that encrypts and decrypts data securely. SSL ensures that all data passed between the web server and the browser remains private and consistent during upload and download over private or public networks.
Multi-web hosting: IIS 10 allows you to create multiple websites and multiple applications on the same server. You can easily manage and create a new virtual directory located in the default location or a custom location.
Virtual directories: IIS 10 makes it easy to manage and create the virtual directories you require.

Understanding application pools on IIS 10
In this recipe, we are going to understand application pools. We can simply say that the application pool is the heart of IIS 10. Application pools are logical groupings of web applications that will execute in a common process, thereby allowing greater granularity over which programs are clustered together in a single process. For example, if you require every web application to execute in a separate process, you simply create an application pool for each application of a different framework version. Let's say that we have more than one version of a website, one which supports framework 2.0 and another supporting framework 4.0, or different applications such as PHP or WordPress. All these website processes are managed through application pools.

Getting ready
To step through this recipe, you will need a running IIS 10. You will also need Administrative privileges. No other prerequisites are required.

How to do it...
Open the Server Manager on Windows Server 2016.
Click on the Tools menu and open the IIS Manager.
Expand the IIS server (WIN2016IIS); this is the localhost server name WIN2016IIS. We get the listed application pools and sites.
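If you prefer scripting to the console, the PowerShell 5 cmdlets mentioned above expose the same information. A minimal sketch using the WebAdministration module follows; pool names and output will vary on your server:

# Hedged sketch: inspect application pools from an elevated PowerShell session
Import-Module WebAdministration

# List the application pools that IIS Manager shows under the server node
Get-ChildItem IIS:\AppPools

# Show the state and runtime version of the default pool
Get-Item IIS:\AppPools\DefaultAppPool | Select-Object Name, State, ManagedRuntimeVersion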
In Application Pools, you will get IIS 10 DefaultsAppPool as shown in above figure, also you get Actions panel in right side of the screen where you may add application pools. Click on DefaultAppPool, then you will get the Actions panel of DefaultAppPool. Here you will get an option for Application Pool Tasks highlighted in right side, where you may Start, Stop, and Recycle the services of IIS 10. In Edit Application Pool section, you can change the settings of application pool as Basic Settings..., Advanced Settings..., Rename the application pool and you may also do the Recycling... How it works... Let's take a look at what we explored in IIS Manager and Application Pools. We understood the basics of application pools and the property where we can get the changes done as per our requirement. IIS 10 default application pool framework is v4.0 which is supported upto v4.6, but we will get some more option for installing different versions of application pool. We can easily customize the application pool, which helps us to fulfill our typical web application requirement. We have several options for application pool in action pane. We can add new application pool, we can start, stop and recycle the application pool task. We can do the editing and automated recycling. Now we are going to learn in the next recipe more about application pools for Installation of lower framework version. Installation of lower framework version In this recipe, we are going to install framework 3.5 on Windows Server 2016. Default IIS 10 has the framework 4.0. We will install the lower version of framework which supports the web application of Version 2.0 to Version 3.5 .NET framework. Let's start now if you have your own web application which you had created a few years back and it was developed in v2.0 .NET framework. You want to run this application on IIS 10.  We are going to cover this topic in this recipe. Getting ready To step through this recipe you need to install framework version3.5, v3.5 framework is based on v2.0 framework. You will need a Windows Server 2016. You should be having a Window Server 2016 Operating System media for framework 3.5 or Internet connected on Window server 2016. You should have Administrative privilege. No other prerequisites are required. How to do it.... Open the Server Manager on Windows Server 2016, click on highlighted Add roles and features option. Click on Next until you get the Select features wizard. You can see the next figure. Click on Features panel and click on the check box .NET Framework 3.5 Features. It will also install the 2.0 supported framework. Move to next wizard as shown in figure. There is a warning coming before the installation: Do you need to specify an alternate source path? One or more installation selection are missing source files on the destination. We have to provide the Installation media sourcesSxS folder path. Click on Specify an alternate source path. See next figure for more details. Here we have Windows Server 2016 media in D:drive. This is the media path in our case which I have downloaded but in your case it can be different path to locate where you already had downloaded. There is a folder which is called sources and sub folder(SxS). Inside framework 3, installation file is available. You may see the next figure. Now you know where the source folder is. Come to confirm Screen and click on Install.  The next figure shows Installation progress on WIN2016IIS. 
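The recipe notes shortly that the same feature can also be installed through PowerShell. A hedged one-line sketch follows, reusing the example media path from above; adjust the drive letter to wherever your Windows Server 2016 media is mounted:

# Hedged sketch: add .NET Framework 3.5 from the mounted media instead of using the wizard
Install-WindowsFeature -Name NET-Framework-Core -Source D:\sources\sxs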
Click on close when installation is completed, you have framework 3.5 available on your server. Now you have to check whether framework 3.5 has been installed or not. It should be available in features wizard. Open the Server Manager, click on Add roles and features. Click next and next until you get the Select Features wizard. You will see .NET framework 3.5 check box checked with gray color which is disabled. You can not check and uncheck the checkbox. You can see the next figure.   As shown in the preceding figure, it has been confirmed that .NET framework 3.5 has been installed. This can be installed through PowerShell. We can install directly from Windows Update, you need Internet connectivity and running Windows Update service on Window Server. How it works... In this recipe, IIS administrator is installed in the framework v3.5. The version 3.5 framework on window server 2016 helps us to run built in application .NET framework v2.0 or v3.5. The framework v3.5 processes the application which is built in framework v3.5 or v2.0. We also find out where is the sourcesSxS folder, after installation we verified that this .NET framework v3.5 is available. We are going to create application pool which will support the .NET framework v3.5. Configuration of application pool on IIS 10 In this recipe, we will have an overview of application pool property. We will check the default configuration of Basic Settings, Recycling and Advanced Settings. This is very helpful for developer or system administrator as we can do the configuration of different property of different application pool based upon application requirement. Getting ready For this recipe, we need IIS 1o and .NET framework of any version which is installed on IIS 10. You must have Administrative privilege. No other prerequisites are required. How to do it... Open the Server Manager on Windows Server 2016. Click on Tools menu and open the IIS Manager. Expand the IIS server (WIN2016IIS). We get the listed Application Pools. You may see in the next figure. Now we have already created application pool which is displayed in Application Pools. We created 2and3.5AppPool, Asp.net and DefaultAppPool (Default one). In the Actions panel, we can add many application pools and we can set any one of the created application pool as default application pool. The default application pool helps us when we are creating a website. The default application pool will be selected as an application pool. Select the 2and3.5AppPool from application pools. You will see the Actions pane having a list of available properties in which you can do some changes if needed. The version of 2and3.5AppPool is v2.0, you can see in the next figure. See the Actions panel, Application Pool Tasks and Edit Application Pool which we selected. From the Application Tool Tasks we can Start, Stop and Recycle... the application pool. Now let's come to the basic property of application pool. Click on Basic Settings... from Edit Application Pool, see the next figure which will appear after clicking. Basic Settings... is nothing but a quick settings to change limited number of things. We can change the .NET framework version to framework v4.0 or framework v3.5(version 2.0 is updated version 3.5). We can change the Managed pipeline mode to Integrated or Classic, also we can check or uncheck the start option. Next is the Advanced Settings... which has more options for customization of relevant Application Pool. Click on Advanced Settings..., the next figure will open. 
We have more settings option available in Advanced Settings... window. You may change the .NET framework version, you can select 32 bit application support true or false. Queue Length is 1000 by default. You may reduce or increase as you need. Start Mode should be OnDemand or Always Running. We can also customize utilization of CPU which helps you to manage the load of each application and their performance. Process Model will help you to define task for application pool availability and accessibility. We can see more about application pool in next figure. Rapid-Fail Protection is generally used for fail over. We can setup the fail over server and configuration. Recycling is to refresh the application pool overlapped. We can set a default recycling value. We can do more specific settings through Recycling settings by clicking on Recycling.... You may see your recycling conditions window in the next figure. Recycling is based on conditions like virtual memory usage, private memory usage, specific time, regular time intervals and fixed number of request, also it will generate you a log file which will help you understand which one was executed at what time. Here you will set the fixed intervals based on time and based on number of request, or specific time and based on Memory utilization, virtual and private memory. Click Next. In the Recycling Events to Log window, we generate log on the recycling events. How it works.... In this recipe we have learned three properties of IIS Application - Basic property, Advanced property and Recycling. We can use these properties for web application which we will host in IIS server to process through the application pool. When we are hosting a web application, there is always some requirement which we need to configure in application pool settings. For example, our management decides that we need to limit the queue of 2and3.5apppool application. We can just go to advance settings and change it. In the next section, we are going to host v4.0 .NET framework website and we will make use of application pool v4.0. Summary In this article, we understood application pools in IIS 10 and how to install and configure them. We also understood how to install a lower framework version. Resources for Article: Further resources on this subject: Exploring Microsoft Dynamics NAV – An Introduction [article] The Microsoft Azure Stack Architecture [article] Setting up Microsoft Bot Framework Dev Environment [article]

An Overview of Tomcat 6 Servlet Container: Part 1

Packt
18 Jan 2010
11 min read
In practice, it is highly unlikely that you will interface an EJB container from WebSphere and a JMS implementation from WebLogic, with the Tomcat servlet container from the Apache foundation, but it is at least theoretically possible. Note that the term 'interface', as it is used here, also encompasses abstract classes. The specification's API might provide a template implementation whose operations are defined in terms of some basic set of primitives that are kept abstract for the service provider to implement. A service provider is required to make available concrete implementations of these interfaces and abstract classes. For example, the HttpSession interface is implemented by Tomcat in the form of org.apache.catalina.session.StandardSession. Let's examine the image of the Tomcat container: The objective of this article is to cover the primary request processing components that are present in this image. Advanced topics, such as clustering and security, are shown as shaded in this image and are not covered. In this image, the '+' symbol after the Service, Host, Context, and Wrapper instances indicate that there can be one or more of these elements. For instance, a Service may have a single Engine, but an Engine can contain one or more Hosts. In addition, the whirling circle represents a pool of request processor threads. Here, we will fly over the architecture of Tomcat from a 10,000-foot perspective taking in the sights as we go. Component taxonomy Tomcat's architecture follows the construction of a Matrushka doll from Russia. In other words, it is all about containment where one entity contains another, and that entity in turn contains yet another. In Tomcat, a 'container' is a generic term that refers to any component that can contain another, such as a Server, Service, Engine, Host, or Context. Of these, the Server and Service components are special containers, designated as Top Level Elements as they represent aspects of the running Tomcat instance. All the other Tomcat components are subordinate to these top level elements. The Engine, Host, and Context components are officially termed Containers, and refer to components that process incoming requests and generate an appropriate outgoing response. Nested Components can be thought of as sub-elements that can be nested inside either Top Level Elements or other Containers to configure how they function. Examples of nested components include the Valve, which represents a reusable unit of work; the Pipeline, which represents a chain of Valves strung together; and a Realm which helps set up container-managed security for a particular container. Other nested components include the Loader which is used to enforce the specification's guidelines for servlet class loading; the Manager that supports session management for each web application; the Resources component that represents the web application's static resources and a mechanism to access these resources; and the Listener that allows you to insert custom processing at important points in a container's life cycle, such as when a component is being started or stopped. Not all nested components can be nested within every container. A final major component, which falls into its own category, is the Connector. It represents the connection end point that an external client (such as a web browser) can use to connect to the Tomcat container. Before we go on to examine these components, let's take a quick look at how they are organized structurally. 
Note that this diagram only shows the key properties of each container. When Tomcat is started, the Java Virtual Machine (JVM) instance in which it runs will contain a singleton Server top level element, which represents the entire Tomcat server. A Server will usually contain just one Service object, which is a structural element that combines one or more Connectors (for example, an HTTP and an HTTPS connector) that funnel incoming requests through to a single Catalina servlet Engine. The Engine represents the core request processing code within Tomcat and supports the definition of multiple Virtual Hosts within it. A virtual host allows a single running Tomcat engine to make it seem to the outside world that there are multiple separate domains (for example, www.my-site.com and www.your-site.com) being hosted on a single machine. Each virtual host can, in turn, support multiple web applications known as Contexts that are deployed to it. A context is represented using the web application format specified by the servlet specification, either as a single compressed WAR (Web Application Archive) file or as an uncompressed directory. In addition, a context is configured using a web.xml file, as defined by the servlet specification. A context can, in turn, contain multiple servlets that are deployed into it, each of which is wrapped in a Wrapper component. The Server, Service, Connector, Engine, Host, and Context elements that will be present in a particular running Tomcat instance are configured using the server.xml configuration file. Architectural benefits This architecture has a couple of useful features. It not only makes it easy to manage component life cycles (each component manages the life cycle notifications for its children), but also to dynamically assemble a running Tomcat server instance that is based on the information that has been read from configuration files at startup. In particular, the server.xml file is parsed at startup, and its contents are used to instantiate and configure the defined elements, which are then assembled into a running Tomcat instance. The server.xml file is read only once, and edits to it will not be picked up until Tomcat is restarted. This architecture also eases the configuration burden by allowing child containers to inherit the configuration of their parent containers. For instance, a Realm defines a data store that can be used for authentication and authorization of users who are attempting to access protected resources within a web application. For ease of configuration, a realm that is defined for an engine applies to all its children hosts and contexts. At the same time, a particular child, such as a given context, may override its inherited realm by specifying its own realm to be used in place of its parent's realm. Top Level Components The Server and Service container components exist largely as structural conveniences. A Server represents the running instance of Tomcat and contains one or more Service children, each of which represents a collection of request processing components. Server A Server represents the entire Tomcat instance and is a singleton within a Java Virtual Machine, and is responsible for managing the life cycle of its contained services. The following image depicts the key aspects of the Server component. As shown, a Server instance is configured using the server.xml configuration file. The root element of this file is <Server> and represents the Tomcat instance. 
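To make the containment hierarchy concrete, here is a skeletal server.xml. The element nesting is standard; the port numbers and names are simply the common defaults and are shown for illustration (a Context is written out explicitly here, although in practice web applications are usually picked up automatically from the appBase directory):

<!-- Skeletal server.xml: one Server, one Service, two Connectors funneling into one Engine -->
<Server port="8005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1" />
    <Connector port="8009" protocol="AJP/1.3" />
    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost" appBase="webapps">
        <Context path="" docBase="ROOT" />
      </Host>
    </Engine>
  </Service>
</Server>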
Its default implementation is provided using org.apache.catalina.core.StandardServer, but you can specify your own custom implementation through the className attribute of the <Server> element. A key aspect of the Server is that it opens a server socket on port 8005 (the default) to listen a shutdown command (by default, this command is the text string SHUTDOWN). When this shutdown command is received, the server gracefully shuts itself down. For security reasons, the connection requesting the shutdown must be initiated from the same machine that is running this instance of Tomcat. A Server also provides an implementation of the Java Naming and Directory Interface (JNDI) service, allowing you to register arbitrary objects (such as data sources) or environment variables, by name. At runtime, individual components (such as servlets) can retrieve this information by looking up the desired object name in the server's JNDI bindings. While a JNDI implementation is not integral to the functioning of a servlet container, it is part of the Java EE specification and is a service that servlets have a right to expect from their application servers or servlet containers. Implementing this service makes for easy portability of web applications across containers. While there is always just one server instance within a JVM, it is entirely possible to have multiple server instances running on a single physical machine, each encased in its own JVM. Doing so insulates web applications that are running on one VM from errors in applications that are running on others, and simplifies maintenance by allowing a JVM to be restarted independently of the others. This is one of the mechanisms used in a shared hosting environment (the other is virtual hosting, which we will see shortly) where you need isolation from other web applications that are running on the same physical server. Service While the Server represents the Tomcat instance itself, a Service represents the set of request processing components within Tomcat. A Server can contain more than one Service, where each service associates a group of Connector components with a single Engine. Requests from clients are received on a connector, which in turn funnels them through into the engine, which is the key request processing component within Tomcat. The image shows connectors for HTTP, HTTPS, and the Apache JServ Protocol (AJP). There is very little reason to modify this element, and the default Service instance is usually sufficient. A hint as to when you might need more than one Service instance can be found in the above image. As shown, a service aggregates connectors, each of which monitors a given IP address and port, and responds in a given protocol. An example use case for having multiple services, therefore, is when you want to partition your services (and their contained engines, hosts, and web applications) by IP address and/or port number. For instance, you might configure your firewall to expose the connectors for one service to an external audience, while restricting your other service to hosting intranet applications that are visible only to internal users. This would ensure that an external user could never access your Intranet application, as that access would be blocked by the firewall. The Service, therefore, is nothing more than a grouping construct. It does not currently add any other value to the proceedings. Connectors A Connector is a service endpoint on which a client connects to the Tomcat container. 
It serves to insulate the engine from the various communication protocols that are used by clients, such as HTTP, HTTPS, or the Apache JServ Protocol (AJP). Tomcat can be configured to work in two modes—Standalone or in Conjunction with a separate web server. In standalone mode, Tomcat is configured with HTTP and HTTPS connectors, which make it act like a full-fledged web server by serving up static content when requested, as well as by delegating to the Catalina engine for dynamic content. Out of the box, Tomcat provides three possible implementations of the HTTP/1.1 and HTTPS connectors for this mode of operation. The most common are the standard connectors, known as Coyote which are implemented using standard Java I/O mechanisms. You may also make use of a couple of newer implementations, one which uses the non-blocking NIO features of Java 1.4, and the other which takes advantage of native code that is optimized for a particular operating system through the Apache Portable Runtime (APR). Note that both the Connector and the Engine run in the same JVM. In fact, they run within the same Server instance. In conjunction mode, Tomcat plays a supporting role to a web server, such as Apache httpd or Microsoft's IIS. The client here is the web server, communicating with Tomcat either through an Apache module or an ISAPI DLL. When this module determines that a request must be routed to Tomcat for processing, it will communicate this request to Tomcat using AJP, a binary protocol that is designed to be more efficient than the text based HTTP when communicating between a web server and Tomcat. On the Tomcat side, an AJP connector accepts this communication and translates it into a form that the Catalina engine can process. In this mode, Tomcat is running in its own JVM as a separate process from the web server. In either mode, the primary attributes of a Connector are the IP address and port on which it will listen for incoming requests, and the protocol that it supports. Another key attribute is the maximum number of request processing threads that can be created to concurrently handle incoming requests. Once all these threads are busy, any incoming request will be ignored until a thread becomes available. By default, a connector listens on all the IP addresses for the given physical machine (its address attribute defaults to 0.0.0.0). However, a connector can be configured to listen on just one of the IP addresses for a machine. This will constrain it to accept connections from only that specified IP address. Any request that is received by any one of a service's connectors is passed on to the service's single engine. This engine, known as Catalina, is responsible for the processing of the request, and the generation of the response. The engine returns the response to the connector, which then transmits it back to the client using the appropriate communication protocol.
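As a rough illustration of these attributes, a pair of connector definitions in server.xml might look like the following; the IP address, ports, and thread limit are example values rather than recommendations:

<!-- HTTP connector bound to a single IP address, with an explicit request processing thread limit -->
<Connector port="8080" protocol="HTTP/1.1" address="192.168.1.10"
           maxThreads="150" connectionTimeout="20000"/>
<!-- AJP connector used when Tomcat sits behind a web server such as Apache httpd or IIS -->
<Connector port="8009" protocol="AJP/1.3"/>

Omitting the address attribute restores the default behavior of listening on all of the machine's IP addresses (0.0.0.0).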
Create Your First React Element

Packt
17 Feb 2016
22 min read
From the 7th to the 13th of November 2016, you can save up to 80% on some of our top ReactJS content - so what are you waiting for? Dive in here before the week ends! As many of you know, creating a simple web application today involves writing the HTML, CSS, and JavaScript code. The reason we use three different technologies is because we want to separate three different concerns: Content (HTML) Styling (CSS) Logic (JavaScript) (For more resources related to this topic, see here.) This separation works great for creating a web page because, traditionally, we had different people working on different parts of our web page: one person structured the content using HTML and styled it using CSS, and then another person implemented the dynamic behavior of various elements on that web page using JavaScript. It was a content-centric approach. Today, we mostly don't think of a website as a collection of web pages anymore. Instead, we build web applications that might have only one web page, and that web page does not represent the layout for our content—it represents a container for our web application. Such a web application with a single web page is called (unsurprisingly) a Single Page Application (SPA). You might be wondering, how do we represent the rest of the content in a SPA? Surely, we need to create an additional layout using HTML tags? Otherwise, how does a web browser know what to render? These are all valid questions. Let's take a look at how it works in this article. Once you load your web page in a web browser, it creates a Document Object Model (DOM) of that web page. A DOM represents your web page in a tree structure, and at this point, it reflects the structure of the layout that you created with only HTML tags. This is what happens regardless of whether you're building a traditional web page or a SPA. The difference between the two is what happens next. If you are building a traditional web page, then you would finish creating your web page's layout. On the other hand, if you are building a SPA, then you would need to start creating additional elements by manipulating the DOM with JavaScript. A web browser provides you with the JavaScript DOM API to do this. You can learn more about it at https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model. However, manipulating (or mutating) the DOM with JavaScript has two issues: Your programming style will be imperative if you decide to use the JavaScript DOM API directly. This programming style leads to a code base that is harder to maintain. DOM mutations are slow because they cannot be optimized for speed, unlike other JavaScript code. Luckily, React solves both these problems for us. Understanding virtual DOM Why do we need to manipulate the DOM in the first place? Because our web applications are not static. They have a state represented by the user interface (UI) that a web browser renders, and that state can be changed when an event occurs. What kind of events are we talking about? There are two types of events that we're interested in: User events: When a user types, clicks, scrolls, resizes, and so on Server events: When an application receives data or an error from a server, among others What happens while handling these events? Usually, we update the data that our application depends on, and that data represents a state of our data model. In turn, when a state of our data model changes, we might want to reflect this change by updating a state of our UI. 
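As a quick illustration of what that manual work looks like, the following snippet uses the browser's standard DOM API directly; it is a generic example rather than code from this article's project. Reflecting even a single data change means imperatively finding, creating, and attaching nodes yourself:

var unreadCount = 3; // a piece of data model state
// Imperative DOM mutation with the plain browser API
var container = document.getElementById('react-application');
var heading = document.createElement('h1');
heading.className = 'header';
heading.textContent = 'You have ' + unreadCount + ' unread messages';
container.appendChild(heading);

Every time unreadCount changes, you are responsible for locating the right node and updating or replacing it, which is exactly the imperative, hard-to-maintain style described earlier.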
Looks like what we want is a way of syncing two different states: the UI state and the data model state. We want one to react to the changes in  the other and vice versa. How can we achieve this? One of the ways to sync your application's UI state with an underlying data model's state is two-way data binding. There are different types of two-way data binding. One of them is key-value observing (KVO), which is used in Ember.js, Knockout, Backbone, and iOS, among others. Another one is dirty checking, which is used in Angular. Instead of two-way data binding, React offers a different solution called the virtual DOM. The virtual DOM is a fast, in-memory representation of the real DOM, and it's an abstraction that allows us to treat JavaScript and DOM as if they were reactive. Let's take a look at how it works: Whenever the state of your data model changes, the virtual DOM and React will rerender your UI to a virtual DOM representation. React then calculates the difference between the two virtual DOM representations: the previous virtual DOM representation that was computed before the data was changed and the current virtual DOM representation that was computed after the data was changed. This difference between the two virtual DOM representations is what actually needs to be changed in the real DOM. React updates only what needs to be updated in the real DOM. The process of finding a difference between the two representations of the virtual DOM and rerendering only the updated patches in a real DOM is fast. Also, the best part is, as a React developer, that you don't need to worry about what actually needs to be rerendered. React allows you to write your code as if you were rerendering the entire DOM every time your application's state changes. If you would like to learn more about the virtual DOM, the rationale behind it, and how it can be compared to data binding, then I would strongly recommend that you watch this very informative talk by Pete Hunt from Facebook at https://www.youtube.com/watch?v=-DX3vJiqxm4. Now that we've learnt about the virtual DOM, let's mutate a real DOM by installing React and creating our first React element. Installing React To start using the React library, we need to first install it. I am going to show you two ways of doing this: the simplest one and the one using the npm install command. The simplest way is to add the <script> tag to our ~/snapterest/build/index.html file: For the development version of React, add the following command: <script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.0-beta3/react.js"></script> For the production version version of React, add the following command: <script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.0-beta3/react.min.js"></script> For our project, we'll be using the development version of React. At the time of writing, the latest version of React library is 0.14.0-beta3. Over time, React gets updated, so make sure you use the latest version that is available to you, unless it introduces breaking changes that are incompatible with the code samples provided in this article. Visit https://github.com/fedosejev/react-essentials to learn about any compatibility issues between the code samples and the latest version of React. We all know that Browserify allows us to import all the dependency modules for our application using the require() function. 
We'll be using require() to import the React library as well, which means that, instead of adding a <script> tag to our index.html, we'll be using the npm install command to install React: Navigate to the ~/snapterest/ directory and run this command: npm install --save react@0.14.0-beta3 react-dom@0.14.0-beta3 Then, open the ~/snapterest/source/app.js file in your text editor and import the React and ReactDOM libraries into the React and ReactDOM variables, respectively: var React = require('react'); var ReactDOM = require('react-dom'); The react package contains methods that are concerned with the key idea behind React, that is, describing what you want to render in a declarative way. On the other hand, the react-dom package offers methods that are responsible for rendering to the DOM. You can read more about why developers at Facebook think it's a good idea to separate the React library into two packages at https://facebook.github.io/react/blog/2015/07/03/react-v0.14-beta-1.html#two-packages. Now we're ready to start using the React library in our project. Next, let's create our first React Element! Creating React Elements with JavaScript We'll start by familiarizing ourselves with some fundamental React terminology. It will help us build a clear picture of what the React library is made of. This terminology will most likely be updated over time, so keep an eye on the official documentation at http://facebook.github.io/react/docs/glossary.html. Just like the DOM is a tree of nodes, React's virtual DOM is a tree of React nodes. One of the core types in React is called ReactNode. It's a building block for a virtual DOM, and it can be any one of these core types: ReactElement: This is the primary type in React. It's a light, stateless, immutable, virtual representation of a DOM Element. ReactText: This is a string or a number. It represents textual content and it's a virtual representation of a Text Node in the DOM. ReactElements and ReactTexts are ReactNodes. An array of ReactNodes is called a ReactFragment. You will see examples of all of these in this article. Let's start with an example of a ReactElement: Add the following code to your ~/snapterest/source/app.js file: var reactElement = React.createElement('h1'); ReactDOM.render(reactElement, document.getElementById('react-application')); Now your app.js file should look exactly like this: var React = require('react'); var ReactDOM = require('react-dom'); var reactElement = React.createElement('h1'); ReactDOM.render(reactElement, document.getElementById('react-application')); Navigate to the ~/snapterest/ directory and run Gulp's default task: gulp You will see the following output: Starting 'default'... Finished 'default' after 1.73 s Navigate to the ~/snapterest/build/ directory, and open index.html in a web browser. You will see a blank web page. Open Developer Tools in your web browser and inspect the HTML markup for your blank web page. You should see this line, among others: <h1 data-reactid=".0"></h1> Well done! You've just created your first React element. Let's see exactly how we did it. The entry point to the React library is the React object. This object has a method called createElement() that takes three parameters: type, props, and children: React.createElement(type, props, children); Let's take a look at each parameter in more detail. The type parameter The type parameter can be either a string or a ReactClass: A string could be an HTML tag name such as 'div', 'p', 'h1', and so on. React supports all the common HTML tags and attributes. 
For a complete list of HTML tags and attributes supported by React, you can refer to http://facebook.github.io/react/docs/tags-and-attributes.html. A ReactClass is created via the React.createClass() method. The type parameter describes how an HTML tag or a ReactClass is going to be rendered. In our example, we're rendering the h1 HTML tag. The props parameter The props parameter is a JavaScript object passed from a parent element to a child element (and not the other way around) with some properties that are considered immutable, that is, those that should not be changed. While creating DOM elements with React, we can pass the props object with properties that represent the HTML attributes such as class, style, and so on. For example, run the following commands: var React = require('react'); var ReactDOM = require('react-dom'); var reactElement = React.createElement('h1', { className: 'header' }); ReactDOM.render(reactElement, document.getElementById('react-application')); The preceding code will create an h1 HTML element with a class attribute set to header: <h1 class="header" data-reactid=".0"></h1> Notice that we name our property className rather than class. The reason is that the class keyword is reserved in JavaScript. If you use class as a property name, it will be ignored by React, and a helpful warning message will be printed on the web browser's console: Warning: Unknown DOM property class. Did you mean className?Use className instead. You might be wondering what this data-reactid=".0" attribute is doing in our h1 tag? We didn't pass it to our props object, so where did it come from? It is added and used by React to track the DOM nodes; it might be removed in a future version of React. The children parameter The children parameter describes what child elements this element should have, if any. A child element can be any type of ReactNode: a virtual DOM element represented by a ReactElement, a string or a number represented by a ReactText, or an array of other ReactNodes, which is also called ReactFragment. Let's take a look at this example: var React = require('react'); var ReactDOM = require('react-dom'); var reactElement = React.createElement('h1', { className: 'header' }, 'This is React'); ReactDOM.render(reactElement, document.getElementById('react-application')); The following code will create an h1 HTML element with a class attribute and a text node, This is React: <h1 class="header" data-reactid=".0">This is React</h1> The h1 tag is represented by a ReactElement, while the This is React string is represented by a ReactText. Next, let's create a React element with a number of other React elements as it's children: var React = require('react'); var ReactDOM = require('react-dom');   var h1 = React.createElement('h1', { className: 'header', key: 'header' }, 'This is React'); var p = React.createElement('p', { className: 'content', key: 'content' }, "And that's how it works."); var reactFragment = [ h1, p ]; var section = React.createElement('section', { className: 'container' }, reactFragment);   ReactDOM.render(section, document.getElementById('react-application')); We've created three React elements: h1, p, and section. h1 and p both have child text nodes, "This is React" and "And that's how it works.", respectively. The section has a child that is an array of two ReactElements, h1 and p, called reactFragment. This is also an array of ReactNodes. Each ReactElement in the reactFragment array must have a key property that helps React to identify that ReactElement. 
As a result, we get the following HTML markup: <section class="container" data-reactid=".0">   <h1 class="header" data-reactid=".0.$header">This is React</h1>   <p class="content" data-reactid=".0.$content">And that's how it works.</p> </section> Now we understand how to create React elements. What if we want to create a number of React elements of the same type? Does it mean that we need to call React.createElement('type') over and over again for each element of the same type? We can, but we don't need to because React provides us with a factory function called React.createFactory(). A factory function is a function that creates other functions. This is exactly what React.createFactory(type) does: it creates a function that produces a ReactElement of a given type. Consider the following example: var React = require('react'); var ReactDOM = require('react-dom');   var listItemElement1 = React.createElement('li', { className: 'item-1', key: 'item-1' }, 'Item 1'); var listItemElement2 = React.createElement('li', { className: 'item-2', key: 'item-2' }, 'Item 2'); var listItemElement3 = React.createElement('li', { className: 'item-3', key: 'item-3' }, 'Item 3');   var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ]; var listOfItems = React.createElement('ul', { className: 'list-of-items' }, reactFragment);   ReactDOM.render(listOfItems, document.getElementById('react-application')); The preceding example produces this HTML: <ul class="list-of-items" data-reactid=".0">   <li class="item-1" data-reactid=".0.$item-1">Item 1</li>   <li class="item-2" data-reactid=".0.$item-2">Item 2</li>   <li class="item-3" data-reactid=".0.$item-3">Item 3</li> </ul> We can simplify it by first creating a factory function: var React = require('react'); var ReactDOM = require('react-dom'); var createListItemElement = React.createFactory('li'); var listItemElement1 = createListItemElement({ className: 'item-1', key: 'item-1' }, 'Item 1'); var listItemElement2 = createListItemElement({ className: 'item-2', key: 'item-2' }, 'Item 2'); var listItemElement3 = createListItemElement({ className: 'item-3', key: 'item-3' }, 'Item 3'); var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ]; var listOfItems = React.createElement('ul', { className: 'list-of-items' }, reactFragment); ReactDOM.render(listOfItems, document.getElementById('react-application')); In the preceding example, we're first calling the React.createFactory() function and passing a li HTML tag name as a type parameter. Then, the React.createFactory() function returns a new function that we can use as a convenient shorthand to create elements of type li. We store a reference to this function in a variable called createListItemElement. Then, we call this function three times, and each time we only pass the props and children parameters, which are unique for each element. Notice that React.createElement() and React.createFactory() both expect the HTML tag name string (such as li) or the ReactClass object as a type parameter. React provides us with a number of built-in factory functions to create the common HTML tags. You can call them from the React.DOM object; for example, React.DOM.ul(), React.DOM.li(), React.DOM.div(), and so on. 
Using them, we can simplify our previous example even further: var React = require('react'); var ReactDOM = require('react-dom');   var listItemElement1 = React.DOM.li({ className: 'item-1', key: 'item-1' }, 'Item 1'); var listItemElement2 = React.DOM.li({ className: 'item-2', key: 'item-2' }, 'Item 2'); var listItemElement3 = React.DOM.li({ className: 'item-3', key: 'item-3' }, 'Item 3');   var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ]; var listOfItems = React.DOM.ul({ className: 'list-of-items' }, reactFragment);   ReactDOM.render(listOfItems, document.getElementById('react-application')); Now we know how to create a tree of ReactNodes. However, there is one important line of code that we need to discuss before we can progress further: ReactDOM.render(listOfItems, document.getElementById('react-application')); As you might have already guessed, it renders our ReactNode tree to the DOM. Let's take a closer look at how it works. Rendering React Elements The ReactDOM.render() method takes three parameters: ReactElement, a regular DOMElement, and a callback function: ReactDOM.render(ReactElement, DOMElement, callback); ReactElement is a root element in the tree of ReactNodes that you've created. A regular DOMElement is a container DOM node for that tree. The callback is a function executed after the tree is rendered or updated. It's important to note that if this ReactElement was previously rendered to a parent DOM Element, then ReactDOM.render() will perform an update on the already rendered DOM tree and only mutate the DOM as it is necessary to reflect the latest version of the ReactElement. This is why a virtual DOM requires fewer DOM mutations. So far, we've assumed that we're always creating our virtual DOM in a web browser. This is understandable because, after all, React is a user interface library, and all the user interfaces are rendered in a web browser. Can you think of a case when rendering a user interface on a client would be slow? Some of you might have already guessed that I am talking about the initial page load. The problem with the initial page load is the one I mentioned at the beginning of this article—we're not creating static web pages anymore. Instead, when a web browser loads our web application, it receives only the bare minimum HTML markup that is usually used as a container or a parent element for our web application. Then, our JavaScript code creates the rest of the DOM, but in order for it to do so it often needs to request extra data from the server. However, getting this data takes time. Once this data is received, our JavaScript code starts to mutate the DOM. We know that DOM mutations are slow. How can we solve this problem? The solution is somewhat unexpected. Instead of mutating the DOM in a web browser, we mutate it on a server. Just like we would with our static web pages. A web browser will then receive an HTML that fully represents a user interface of our web application at the time of the initial page load. Sounds simple, but we can't mutate the DOM on a server because it doesn't exist outside a web browser. Or can we? We have a virtual DOM that is just a JavaScript, and as you know using Node.js, we can run JavaScript on a server. So technically, we can use the React library on a server, and we can create our ReactNode tree on a server. The question is how can we render it to a string that we can send to a client? 
React has a method called ReactDOMServer.renderToString() just to do this: var ReactDOMServer = require('react-dom/server'); ReactDOMServer.renderToString(ReactElement); It takes a ReactElement as a parameter and renders it to its initial HTML. Not only is this faster than mutating a DOM on a client, but it also improves the Search Engine Optimization (SEO) of your web application. Speaking of generating static web pages, we can do this too with React: var ReactDOMServer = require('react-dom/server'); ReactDOMServer.renderToStaticMarkup(ReactElement); Similar to ReactDOMServer.renderToString(), this method also takes a ReactElement as a parameter and outputs an HTML string. However, it doesn't create the extra DOM attributes that React uses internally, so it produces shorter HTML strings that we can transfer over the wire quickly. Now you know not only how to create a virtual DOM tree using React elements, but you also know how to render it on a client and server. Our next question is whether we can do it quickly and in a more visual manner. Creating React Elements with JSX When we build our virtual DOM by constantly calling the React.createElement() method, it becomes quite hard to visually translate these multiple function calls into a hierarchy of HTML tags. Don't forget that, even though we're working with a virtual DOM, we're still creating a structural layout for our content and user interface. Wouldn't it be great to be able to visualize that layout easily by simply looking at our React code? JSX is an optional HTML-like syntax that allows us to create a virtual DOM tree without using the React.createElement() method. Let's take a look at the previous example that we created without JSX: var React = require('react'); var ReactDOM = require('react-dom');   var listItemElement1 = React.DOM.li({ className: 'item-1', key: 'item-1' }, 'Item 1'); var listItemElement2 = React.DOM.li({ className: 'item-2', key: 'item-2' }, 'Item 2'); var listItemElement3 = React.DOM.li({ className: 'item-3', key: 'item-3' }, 'Item 3');   var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ]; var listOfItems = React.DOM.ul({ className: 'list-of-items' }, reactFragment);   ReactDOM.render(listOfItems, document.getElementById('react-application')); Translate this to the one with JSX: var React = require('react'); var ReactDOM = require('react-dom');   var listOfItems = <ul className="list-of-items">                     <li className="item-1">Item 1</li>                     <li className="item-2">Item 2</li>                     <li className="item-3">Item 3</li>                   </ul>; ReactDOM.render(listOfItems, document.getElementById('react-application'));   As you can see, JSX allows us to write HTML-like syntax in our JavaScript code. More importantly, we can now clearly see what our HTML layout will look like once it's rendered. JSX is a convenience tool and it comes with a price in the form of an additional transformation step. Transformation of the JSX syntax into valid JavaScript syntax must happen before our "invalid" JavaScript code is interpreted. We know that the babelify module transforms our JSX syntax into a JavaScript one. 
This transformation happens every time we run our default task from gulpfile.js: gulp.task('default', function () {   return browserify('./source/app.js')         .transform(babelify)         .bundle()         .pipe(source('snapterest.js'))         .pipe(gulp.dest('./build/')); }); As you can see, the .transform(babelify) function call transforms JSX into JavaScript before bundling it with the other JavaScript code. To test our transformation, run this command: gulp Then, navigate to the ~/snapterest/build/ directory, and open index.html in a web browser. You will see a list of three items. The React team has built an online JSX Compiler that you can use to test your understanding of how JSX works at http://facebook.github.io/react/jsx-compiler.html. Using JSX, you might feel very unusual in the beginning, but it can become a very intuitive and convenient tool to use. The best part is that you can choose whether to use it or not. I found that JSX saves me development time, so I chose to use it in this project that we're building. If you choose to not use it, then I believe that you have learned enough in this article to be able to translate the JSX syntax into a JavaScript code with the React.createElement() function calls. If you have a question about what we have discussed in this article, then you can refer to https://github.com/fedosejev/react-essentials and create a new issue. Summary We started this article by discussing the issues with single web page applications and how they can be addressed. Then, we learned what a virtual DOM is and how React allows us to build it. We also installed React and created our first React element using only JavaScript. Then, we also learned how to render React elements in a web browser and on a server. Finally, we looked at a simpler way of creating React elements with JSX. Resources for Article: Further resources on this subject: Changing Views [article] Introduction to Akka [article] ECMAScript 6 Standard [article]
Apache Wicket: displaying data using DataTable

Packt
01 Apr 2011
6 min read
It's hard to find a web application that does not have a single table that presents the user with some data. Building these DataTables, although not very difficult, can be a daunting task because each of these tables must often support paging, sorting, filtering, and so on. Wicket ships with a very powerful component called the DataTable that makes implementing all these features simple and elegant. Because Wicket is component-oriented, once implemented, these features can be easily reused across multiple DataTable deployments. In this article, we will see how to implement the features mentioned previously using the DataTable and the infrastructure it provides. Sorting A common requirement, when displaying tabular data, is to allow users to sort it by clicking the table headers. Click a header once and the data is sorted on that column in ascending order; click it again, and the data is sorted in the descending order. In this recipe, we will see how to implement such a behavior when displaying data using a DataTable component. We will build a simple table that will look much like a phone book and will allow the sorting of data on the name and e-mail columns: Getting ready Begin by creating a page that will list contacts using the DataTable, but without sorting: 1. Create the Contact bean: Contact.java public class Contact implements Serializable { public String name, email, phone; // getters, setters, constructors } 2. Create the page that will list the contacts: HomePage.html <html> <body> <table wicket:id="contacts" class="contacts"></table> </body> </html> HomePage.java public class HomePage extends WebPage { private static List<Contact> contacts = Arrays.asList( new Contact("Homer Simpson", "[email protected]", "555-1211"), new Contact("Charles Burns", "[email protected]", "555-5322"), new Contact("Ned Flanders", "[email protected]", "555-9732")); public HomePage(final PageParameters parameters) { // sample code adds a DataTable and a data provider that uses the contacts list created above } } How to do it... Enable sorting by letting DataTable columns know they can be sorted by using a constructor that takes the sort data parameter: HomePage.java List<IColumn<Contact>> columns = new ArrayList<IColumn<Contact>>(); columns.add(new PropertyColumn<Contact>(Model.of("Name"), "name", "name")); columns.add(new PropertyColumn<Contact>(Model.of("Email"), "email", "email")); columns.add(new PropertyColumn<Contact>(Model.of("Phone"), "phone")); Implement sorting by modifying the data provider: private static class ContactsProvider extends SortableDataProvider<Contact> { public ContactsProvider() { setSort("name", true); } public Iterator<? extends Contact> iterator(int first, int count) { List<Contact> data = new ArrayList<Contact>(contacts); Collections.sort(data, new Comparator<Contact>() { public int compare(Contact o1, Contact o2) { int dir = getSort().isAscending() ? 1 : -1; if ("name".equals(getSort().getProperty())) { return dir * (o1.name.compareTo(o2.name)); } else { return dir * (o1.email.compareTo(o2.email)); } } }); return data.subList(first, Math.min(first + count, data.size())).iterator(); } public int size() { return contacts.size(); } public IModel<Contact> model(Contact object) { return Model.of(object); } } How it works... DataTable supports sorting out of the box. Any column with the IColumn#getSortProperty() method that returns a non-null value is treated as a sortable column and Wicket makes its header clickable. 
When a header of a sortable column is clicked Wicket will pass the value of IColumn#getSortProperty to the data provider which should use this value to sort the data. In order to know about the sorting information the data provider must implement the ISortableDataProvider interface; Wicket provides the default SortableDataProvider implementation which is commonly used to implement sort-capable data providers. DataTable will take care of details such as multiple clicks to the same column resulting in change of sorting direction, so on. Let's examine how to implement sorting in practice. In step 1 and 2, we have implemented a basic DataTable that cannot yet sort data. Even though the data provider we have implemented already extends a SortableDataProvider, it does not yet take advantage of any sort information that may be passed to it. We start building support for sorting by enabling it on the columns, in our case the name and the email columns: List<IColumn<Contact>> columns = new ArrayList<IColumn<Contact>>(); columns.add(new PropertyColumn<Contact>(Model.of("Name"), "name", "name")); columns.add(new PropertyColumn<Contact>(Model.of("Email"), "email", "email")); columns.add(new PropertyColumn<Contact>(Model.of("Phone"), "phone")); We enable sorting on the columns by using the three-argument constructor of the PropertyColumn, with the second argument being the "sort data". Whenever a DataTable column with sorting enabled is clicked, the data provider will be given the value of the "sort data". In the example, only the name and e-mail columns have sorting enabled with the sort data defined as a string with values "name" and "e-mail" respectively. Now, let's implement sorting by making our data provider implementation sort-aware. Since our data provider already extends a provider that implements ISortableDataProvider we only need to take advantage of the sort information: public Iterator<? extends Contact> iterator(int first, int count) { List<Contact> data = new ArrayList<Contact>(contacts); Collections.sort(data, new Comparator<Contact>() { public int compare(Contact o1, Contact o2) { int dir = getSort().isAscending() ? 1 : -1; if ("name".equals(getSort().getProperty())) { return dir * (o1.name.compareTo(o2.name)); } else { return dir * (o1.email.compareTo(o2.email)); } } }); return data.subList(first, Math.min(first + count, data.size())).iterator(); } First we copy the data into a new list which we can sort as needed and then we sort based on the sort data and direction provided. The value returned by getSort().getProperty() is the same sort data values we have defined previously when creating columns. The only remaining task is to define a default sort which will be used when the table is rendered before the user clicks any header of a sortable column. We do this in the constructor of our data provider: public ContactsProvider() { setSort("name", true); } There's more... DataTable gives us a lot out of the box; in this section we see how to add some usability enhancements. Adding sort direction indicators via CSS DataTable is nice enough to decorate sortable <th> elements with sort-related CSS classes out of the box. 
This makes it trivial to implement sort direction indicators as shown in the following screenshot: A possible CSS style definition can look like this: table tr th { background-position: right; background-repeat: no-repeat; } table tr th.wicket_orderDown { background-image: url(images/arrow_down.png); } table tr th.wicket_orderUp { background-image: url(images/arrow_up.png); } table tr th.wicket_orderNone { background-image: url(images/arrow_off.png); }
Introduction to Raspberry Pi Zero W Wireless

Packt
03 Mar 2018
14 min read
In this article by Vasilis Tzivaras, the author of the book Raspberry Pi Zero W Wireless Projects, we will be covering the following topics:  An overview of the Raspberry Pi family  An introduction to the new Raspberry Pi Zero W Distributions  Common issues Raspberry Pi Zero W is the new product of the Raspberry Pi Zero family. In early 2017, Raspberry Pi community has announced a new board with wireless extension. It offers wireless functionality and now everyone can develop his own projects without cables and other components. Comparing the new board with Raspberry Pi 3 Model B we can easily see that it is quite smaller with many possibilities over the Internet of Things. But what is a Raspberry Pi Zero W and why do you need it? Let' s go though the rest of the family and introduce the new board. In the following article we will cover the following topics: (For more resources related to this topic, see here.) Raspberry Pi family As said earlier Raspberry Pi Zero W is the new member of Raspberry Pi family boards. All these years Raspberry Pi are evolving and become more user friendly with endless possibilities. Let's have a short look at the rest of the family so we can understand the difference of the Pi Zero board. Right now, the heavy board is named Raspberry Pi 3 Model B. It is the best solution for projects such as face recognition, video tracking, gaming or anything else that is demanding:                                      RASPBERRY PI 3 MODEL B It is the 3rd generation of Raspberry Pi boards after Raspberry Pi 2 and has the following specs:  A 1.2GHz 64-bit quad-core ARMv8 CPU 802.11n Wireless LAN Bluetooth 4.1 Bluetooth Low Energy (BLE)  Like the Pi 2, it also has 1GB RAM 4 USB ports 40 GPIO pins Full HDMI port Ethernet port Combined 3.5mm audio jack and composite video  Camera interface (CSI)  Display interface (DSI)  Micro SD card slot (now push-pull rather than push-push)  VideoCore IV 3D graphics core The next board is Raspberry Pi Zero, in which the Zero W was based. A small low cost and power board able to do many things:                                     Raspberry Pi Zero The specs of this board can be found as follows:  1GHz, Single-core CPU  512MB RAM  Mini-HDMI port Micro-USB OTG port  Micro-USB power  HAT-compatible 40-pin header  Composite video and reset headers  CSI camera connector (v1.3 only) At this point we should not forget to mention that apart from the boards mentioned earlier there are several other modules and components such as the Sense Hat or Raspberry Pi Touch Display available which will work great for advance projects. The 7″ Touchscreen Monitor for Raspberry Pi gives users the ability to create all-in-one, integrated projects such as tablets, infotainment systems and embedded projects:                                                        RASPBERRY PI Touch Display Where Sense HAT is an add-on board for Raspberry Pi, made especially for the Astro Pi mission. The Sense HAT has an 8×8 RGB LED matrix, a five-button joystick and includes the following sensors: Gyroscope Accelerometer  Magnetometer Temperature  Barometric pressure Humidity                                                                         sense HAT Stay tuned with more new boards and modules at the official website: https://www.raspberrypi.org/ Raspberry Pi Zero W Raspberry Pi Zero W is a small device that has the possibilities to be connected either on an external monitor or TV and of course it is connected to the internet. 
The operating system varies, as there are many distros on the official page and almost every one is based on Linux systems.                                                        Raspberry Pi Zero W   With the Raspberry Pi Zero W you have the ability to do almost everything, from automation to gaming! It is a small computer that allows you to easily program with the help of the GPIO pins and some other components such as a camera. Its possibilities are endless! Specifications If you have bought a Raspberry Pi 3 Model B you will be familiar with the Cypress CYW43438 wireless chip. It provides 802.11n wireless LAN and Bluetooth 4.0 connectivity. The new Raspberry Pi Zero W is equipped with that wireless chip as well. Following are the specifications of the new board: Dimensions: 65mm × 30mm × 5mm SoC: Broadcom BCM2835 chip ARM11 at 1GHz, single core CPU 512MB RAM Storage: MicroSD card  Video and Audio: 1080P HD video and stereo audio via mini-HDMI connector Power: 5V, supplied via micro USB connector  Wireless: 2.4GHz 802.11n wireless LAN Bluetooth: Bluetooth classic 4.1 and Bluetooth Low Energy (BLE) Output: Micro USB  GPIO: 40-pin GPIO, unpopulated                                Raspberry Pi Zero W Notice that all the components are on the top side of the board, so you can easily choose your case without any problems and keep it safe. As far as the antenna is concerned, it is formed by etching away copper on each layer of the PCB. It may not be visible as it is on other similar boards, but it works great and offers quite a lot of functionality:                  Raspberry Pi Zero W Capacitors Also, the product is limited to only one piece per buyer and costs $10. You can buy a full kit with a microSD card, a case, and some more extra components for about $45, or choose the camera full kit, which contains a small camera component, for $55. Camera support Image processing projects such as video tracking or face recognition require a camera. Following you can see the official camera support of the Raspberry Pi Zero W. The camera can easily be mounted at the side of the board using a cable, like the Raspberry Pi 3 Model B board: The official camera support of Raspberry Pi Zero W Depending on your distribution, you may need to enable the camera through the command line. More information about the usage of this module will be mentioned in the project. Accessories When building projects with the new board there are some other gadgets that you might find useful to work with. Following is a list of some crucial components. Notice that if you buy the Raspberry Pi Zero W kit, it includes some of them. So, be careful and don't buy them twice:  OTG cable  powerHUB GPIO header  microSD card and card adapter  HDMI to miniHDMI cable  HDMI to VGA cable Distributions The official site https://www.raspberrypi.org/downloads/ contains several distributions for downloading. The two basic operating systems that we will analyze next are RASPBIAN and NOOBS. Following you can see what the desktop environment looks like. Both RASPBIAN and NOOBS allow you to choose from two versions. There is the full version of the operating system and the lite one. Obviously, the lite version does not contain everything that you might use, so if you tend to use your Raspberry with a desktop environment, choose and download the full version. On the other side, if you tend to just ssh and do some basic stuff, pick the lite one. It's really up to you, and of course you can easily download anything you like again and re-write your microSD card. 
NOOBS distribution Download NOOBS: https://www.raspberrypi.org/downloads/noobs/. The NOOBS distribution is for new users without much knowledge of Linux systems and Raspberry Pi boards. As the official page says, it really is "New Out Of the Box Software". There are also pre-installed NOOBS SD cards that you can purchase from many retailers, such as Pimoroni, Adafruit, and The Pi Hut, and of course you can download NOOBS and write your own microSD card. If you are having trouble with this distribution, take a look at the following links: Full guide at https://www.raspberrypi.org/learning/software-guide/. View the video at https://www.raspberrypi.org/help/videos/#noobs-setup. The NOOBS operating system contains Raspbian and it provides various other operating systems available to download. RASPBIAN distribution Download RASPBIAN: https://www.raspberrypi.org/downloads/raspbian/. Raspbian is the officially supported operating system. It can be installed through NOOBS or by downloading the image file at the following link and going through the guide on the official website. Image file: https://www.raspberrypi.org/documentation/installation/installing-images/README.md. It comes with plenty of software pre-installed, such as Python, Scratch, Sonic Pi, Java, Mathematica, and more! Furthermore, more distributions like Ubuntu MATE, Windows 10 IoT Core, or Weather Station are meant to be installed for more specific projects like Internet of Things (IoT) or weather stations. To conclude, the right distribution to install actually depends on your project and your expertise in Linux systems administration. The Raspberry Pi Zero W needs a microSD card for hosting any operating system. You are able to write Raspbian, NOOBS, Ubuntu MATE, or any other operating system you like. So, all that you need to do is simply write your operating system to that microSD card. First of all, you have to download the image file from https://www.raspberrypi.org/downloads/, which usually comes as a .zip file. Once downloaded, unzip the zip file; the full image is about 4.5 gigabytes. Depending on your operating system, you have to use different programs:  7-Zip for Windows  The Unarchiver for Mac  Unzip for Linux Now we are ready to write the image to the microSD card. You can easily write the .img file to the microSD card by following one of the next guides according to your system. For Linux users, the dd tool is recommended. Before connecting your microSD card with your adapter to your computer, run the following command:  df -h Now connect your card and run the same command again. You must see some new records. For example, if the new device is called /dev/sdd1, keep in mind that the card is at /dev/sdd (without the 1). The next step is to use the dd command and copy the image to the microSD card. We can do this with the following command:  dd if=<image file> of=<device> Here, if is the input file (the image file of the distribution) and of is the output file (the microSD card). Again, be careful here and use only /dev/sdd or whatever is yours, without any numbers. If you are having trouble with that, please use the full manual at the following link: https://www.raspberrypi.org/documentation/installation/installing-images/linux.md. A good tool that could help you out with that job is GParted. If it is not installed on your system, you can easily install it with the following command:  sudo apt-get install gparted Then run sudo gparted to start the tool. 
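Putting the Linux steps above together, a complete write typically looks like the following sequence. The image filename and the /dev/sdd device are only examples; double-check the device reported by df -h on your own machine before running dd, because writing to the wrong device will destroy its contents:

df -h                                      # identify the card, for example /dev/sdd1
sudo umount /dev/sdd1                      # unmount any mounted partitions of the card
sudo dd if=raspbian.img of=/dev/sdd bs=4M  # write the image to the whole device, not a partition
sync                                       # flush write buffers before removing the card

The bs=4M block size simply speeds up the copy; dd works without it as well.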
GParted handles partitions very easily and you can format, delete, or find information about all your mounted partitions. More information about dd can be found here: https://www.raspberrypi.org/documentation/installation/installing-images/linux.md For Mac OS users, the dd tool is also recommended: https://www.raspberrypi.org/documentation/installation/installing-images/mac.md For Windows users, the Win32DiskImager utility is recommended: https://www.raspberrypi.org/documentation/installation/installing-images/windows.md There are several other ways to write an image file to a microSD card. So, if you run into any problems when following the guides above, feel free to use any other guide available on the Internet. Now, assuming that everything is OK and the image is ready, you can gently insert the microSD card into your Raspberry Pi Zero W board. Remember that you can always confirm that your download was successful with the SHA-1 code. On Linux systems you can use sha1sum followed by the file name (the image) to print the SHA-1 code, which should and must be the same as the one at the end of the official page where you downloaded the image. Common issues Sometimes, working with Raspberry Pi boards can lead to issues. We have all faced some of them and hope to never face them again. The Pi Zero is so minimal that it can be tough to tell if it is working or not. Since there is no LED on the board, a quick check of whether it is working properly or something went wrong is sometimes handy. Debugging steps With the following steps you will probably find its status: Take your board, with nothing in any slot or socket. Remove even the microSD card!  Take a normal micro-USB to USB-A data/sync cable and connect one side to your computer and the other side to the Pi's USB port (not the PWR_IN).  If the Zero is alive: • On Windows the PC will go ding for the presence of new hardware and you should see BCM2708 Boot in Device Manager. • On Linux, you will see an ID 0a5c:2763 Broadcom Corp message from dmesg. Try to run dmesg in a terminal before you plug in the USB cable and after; you will find a new record there. Output example: [226314.048026] usb 4-2: new full-speed USB device number 82 using uhci_hcd [226314.213273] usb 4-2: New USB device found, idVendor=0a5c, idProduct=2763 [226314.213280] usb 4-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [226314.213284] usb 4-2: Product: BCM2708 Boot [226314.213] usb 4-2: Manufacturer: Broadcom If you see any of the preceding, so far so good; you know the Zero's not dead. microSD card issue Remember that if you boot your Raspberry and nothing is working, you may have burned your microSD card wrongly. This means that your card may not contain any boot partition as it should, and it is not able to boot the first files. That problem occurs when the distribution is burned to /dev/sdd1 and not to /dev/sdd as it should be. This is a quite common mistake, and there will be no errors on your monitor. It will just not work! Case protection Raspberry Pi boards are electronics, and we never place electronics on metallic surfaces or near magnetic objects. It will affect the booting operation of the Raspberry and it will probably not work. So, a piece of advice: spend some extra money on the Raspberry Pi case and protect your board from anything like that. There are many problems and issues when hanging your Raspberry Pi using tacks. It may be silly, but there are many who do that. 
Summary Raspberry Pi Zero W is a new promising board allowing everyone to connect their devices to the Internet and use their skills to develop projects including software and hardware. This board is the new toy of any engineer interested in Internet of Things, security, automation and more! We have gone through an introduction in the new Raspberry Pi Zero board and the rest of its family and a brief analysis on some extra components that you should buy as well. Resources for Article:   Further resources on this subject: Raspberry Pi Zero W Wireless Projects Full Stack Web Development with Raspberry Pi 3
Working With ASP.NET DataList Control

Packt
19 Feb 2010
8 min read
In this article by Joydip Kanjilal, we will discuss the ASP.NET DataList control which can be used to display a list of repeated data items. We will learn about the following: Using the DataList control Binding images to a DataList control dynamically Displaying data using the DataList control Selecting, editing and deleting data using this control Handling the DataList control events The ASP.NET DataList Control The DataList control like the Repeater control is a template driven, light weight control, and acts as a container of repeated data items. The templates in this control are used to define the data that it will contain. It is flexible in the sense that you can easily customize the display of one or more records that are displayed in the control. You have a property in the DataList control called RepeatDirection that can be used to customize the layout of the control. The RepeatDirection property can accept one of two values, that is, Vertical or Horizontal. The RepeatDirection is Vertical by default. However, if you change it to Horizontal, rather than displaying the data as rows and columns, the DataList control will display them as a list of records with the columns in the data rendered displayed as rows. This comes in handy, especially in situations where you have too many columns in your database table or columns with larger widths of data. As an example, imagine what would happen if there is a field called Address in our Employee table having data of large size and you are displaying the data using a Repeater, a DataGrid, or a GridView control. You will not be able to display columns of such large data sizes with any of these controls as the display would look awkward. This is where the DataList control fits in. In a sense, you can think the DataList control as a combination of the DataGrid and the Repeater controls. You can use templates with it much as you did with a Repeater control and you can also edit the records displayed in the control, much like the DataGrid control of ASP.NET. The next section compares the features of the three controls that we have mentioned so far, that is, the Repeater, the DataList, and the DataGrid control of ASP.NET. When the web page is in execution with the data bound to it using the Page_Load event, the data in the DataList control is rendered as DataListItem objects, that is, each item displayed is actually a DataListItem. Similar to the Repeater control, the DataList control does not have Paging and Sorting functionalities build into it. Using the DataList Control To use this control, drag and drop the control in the design view of the web form onto a web form from the toolbox. Refer to the following screenshot, which displays a DataList control on a web form: The following list outlines the steps that you can follow to add a DataList control in a web page and make it working: Drag and drop a DataList control in the web form from the toolbox. Set the DataSourceID property of the control to the data source that you will use to bind data to the control, that is, you can set this to an SQL Data Source control. Open the .aspx file, declare the <ItemTemplate> element and define the fields as per your requirements. Use data binding syntax through the Eval() method to display data in these defined fields of the control. You can bind data to the DataList control in two different ways, that is, using the DataSourceID and the DataSource properties. 
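As a rough sketch of the declarative option, the following markup wires a DataList to a SqlDataSource by ID and switches the layout to a horizontal flow; the control IDs, connection string name, and query are illustrative placeholders rather than part of this article's sample project:

<asp:SqlDataSource ID="SqlDataSource1" runat="server"
    ConnectionString="<%$ ConnectionStrings:SampleDb %>"
    SelectCommand="SELECT EmpCode, EmpName FROM Employee" />
<asp:DataList ID="DataList1" runat="server"
    DataSourceID="SqlDataSource1" RepeatDirection="Horizontal">
    <ItemTemplate>
        <%# Eval("EmpName") %>
    </ItemTemplate>
</asp:DataList>

With DataSourceID set, the control performs the binding itself; with the DataSource property, you assign the data in code and call DataBind(), as the Page_Load example later in this article does.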
You can use the inbuilt features like selecting and updating data when using the DataSourceID property. Note that you need to write custom code for selecting and updating data for any data source that implements the ICollection and IEnumerable interfaces. We will discuss more on this later. The next section discusses how you can handle the events in the DataList control. Displaying Data Similar to the Repeater control, the DataList control contains a template that is used to display the data items within the control. Since there are no data columns associated with this control, you use templates to display data. Every column in a DataList control is rendered as a <span> element. A DataList control is useless without templates. Let us now learn what templates are, the types of templates, and how to work with them. A template is a combination of HTML elements, controls, and embedded server controls, and can be used to customize and manipulate the layout of a control. A template comprises HTML tags and controls that can be used to customize the look and feel of controls like the Repeater, DataGrid, or DataList. There are seven templates and seven styles in all. You can use templates for the DataList control in the same way you did when using the Repeater control. The following is the list of templates and their associated styles in the DataList control. The templates are as follows: ItemTemplate AlternatingItemTemplate EditItemTemplate FooterTemplate HeaderTemplate SelectedItemTemplate SeparatorTemplate The following screenshot illustrates the different templates of this control. As you can see from this figure, the templates are grouped under three broad categories. These are: Item Templates Header and Footer Templates Separator Template Note that out of the templates given above, the ItemTemplate is the one and only mandatory template that you have to use when working with a DataList control. Here is a sample of how your DataList control's templates are arranged: <asp:DataList id="dlEmployee" runat="server"><HeaderTemplate>...</HeaderTemplate><ItemTemplate>...</ItemTemplate><AlternatingItemTemplate>...</AlternatingItemTemplate><FooterTemplate>...</FooterTemplate></asp:DataList> The following screenshot displays a DataList control populated with data and with its templates indicated. Customizing a DataList control at run time You can customize the DataList control at run time using the item's ItemType property in the ItemCreated event of this control as follows: private void DataList1_ItemCreated(object sender, System.Web.UI.WebControls.DataListItemEventArgs e) { switch (e.Item.ItemType) { case System.Web.UI.WebControls.ListItemType.Item: e.Item.BackColor = Color.Red; break; case System.Web.UI.WebControls.ListItemType.AlternatingItem: e.Item.BackColor = Color.Blue; break; case System.Web.UI.WebControls.ListItemType.SelectedItem: e.Item.BackColor = Color.Green; break; default: break; } } The Styles that you can use with the DataList control to customize the look and feel are: AlternatingItemStyle EditItemStyle FooterStyle HeaderStyle ItemStyle SelectedItemStyle SeparatorStyle You can use any of these styles to format the control, that is, format the HTML code that is rendered. You can also use layouts of the DataList control for formatting, that is, further customization of your user interface. 
The available layouts are as follows:

FlowLayout
TableLayout
VerticalLayout
HorizontalLayout

You can specify your desired flow or table format at design time by specifying the following in the .aspx file:

RepeatLayout="Flow"

You can also do the same at run time by specifying your desired layout using the RepeatLayout property of the DataList control, as shown in the following code snippet:

DataList1.RepeatLayout = RepeatLayout.Flow

In the code snippet, it is assumed that the name of the DataList control is DataList1. Let us now understand how we can display data using the DataList control. For this, we first drag and drop a DataList control onto our web form and specify the templates for displaying data. The code in the .aspx file is as follows:

<asp:DataList ID="DataList1" runat="server">
    <HeaderTemplate>
        <table border="1">
            <tr>
                <th>Employee Code</th>
                <th>Employee Name</th>
                <th>Basic</th>
                <th>Dept Code</th>
            </tr>
    </HeaderTemplate>
    <ItemTemplate>
        <tr bgcolor="#bbbbbb">
            <td><%# DataBinder.Eval(Container.DataItem, "EmpCode")%></td>
            <td><%# DataBinder.Eval(Container.DataItem, "EmpName")%></td>
            <td><%# DataBinder.Eval(Container.DataItem, "Basic")%></td>
            <td><%# DataBinder.Eval(Container.DataItem, "DeptCode")%></td>
        </tr>
    </ItemTemplate>
    <FooterTemplate>
    </FooterTemplate>
</asp:DataList>

The DataList control is populated with data in the Page_Load event of the web form using the DataManager class as usual:

protected void Page_Load(object sender, EventArgs e)
{
    DataManager dataManager = new DataManager();
    DataList1.DataSource = dataManager.GetEmployees();
    DataList1.DataBind();
}

Note that the DataBinder.Eval() method has been used as usual to display the values of the corresponding fields from the data container in the DataList control. The data container in our case is the DataSet instance that is returned by the GetEmployees() method of the DataManager class. When you execute the application, the output is as follows:

Getting started with Kinect for Windows SDK Programming

Packt
20 Feb 2013
12 min read
(For more resources related to this topic, see here.)

System requirements for the Kinect for Windows SDK

While developing applications for any device using an SDK, compatibility plays a pivotal role. It is really important that your development environment fulfills the following set of requirements before you start working with the Kinect for Windows SDK.

Downloading the example code
You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.

Supported operating systems

The Kinect for Windows SDK, as its name suggests, runs only on the Windows operating system. The following are the supported operating systems for development:

Windows 7
Windows Embedded 7
Windows 8

The Kinect for Windows sensor will also work on Windows operating systems running in a virtual machine such as Microsoft Hyper-V, VMware, and Parallels.

System configuration

The hardware requirements are not as stringent as the software requirements. The SDK can be run on most of the hardware available in the market. The following is the minimum configuration required for development with Kinect for Windows:

A 32-bit (x86) or 64-bit (x64) processor
Dual-core 2.66 GHz or faster processor
Dedicated USB 2.0 bus
2 GB RAM

The Kinect sensor

It goes without saying that you need a Kinect sensor for your development. You can use the Kinect for Windows or the Kinect for Xbox sensor for your development. Before choosing a sensor, make sure you are clear about the limitations of the Kinect for Xbox sensor compared to the Kinect for Windows sensor, in terms of features, API support, and licensing mechanisms.

The Kinect for Windows sensor

By now, you are already familiar with the Kinect for Windows sensor and its different components. The Kinect for Windows sensor comes with an external power supply, which supplies the additional power, and a USB adapter to connect with the system. For the latest updates and availability of the Kinect for Windows sensor, you can refer to http://www.microsoft.com/en-us/kinectforwindows/site.

The Kinect for Xbox sensor

If you already have a Kinect sensor with your Xbox gaming console, you may use it for development. Similar to the Kinect for Windows sensor, you will require a separate power supply for the device so that it can power up the motor, camera, IR sensor, and so on. If you have bought a Kinect sensor with an Xbox as a bundle, you will need to buy the adapter / power supply separately. You can check out the external power supply adapter at http://www.microsoftstore.com. If you have bought only the Kinect for Xbox sensor, you will have everything that is required to connect it to a PC, including the external power cable.

Development tools and software

The following software is required for development with the Kinect SDK:

Microsoft Visual Studio 2010 Express or higher editions of Visual Studio
Microsoft .NET Framework 4.0 or higher
Kinect for Windows SDK

The Kinect for Windows SDK uses the underlying speech capability of the Windows operating system to interact with the Kinect audio system. This requires the Microsoft Speech Platform – Server Runtime, the Microsoft Speech Platform SDK, and a language pack to be installed on the system, and these will be installed along with the Kinect for Windows SDK. The system requirements for the SDK may change with upcoming releases.
Refer to http://www.microsoft.com/en-us/kinectforwindows/ for the latest system requirements.

Evaluation of the Kinect for Windows SDK

Though the Kinect for Xbox sensor has been in the market for quite some time, the Kinect for Windows SDK is still fairly new to the developer community, and it's evolving. The book is written on Kinect for Windows SDK v1.6. The Kinect for Windows SDK was first launched as a Beta 1 version in June 2011, and after a thunderous response from the developer community, the updated Kinect for Windows SDK Beta 2 was launched in November 2011. Initially, both SDK versions were non-commercial releases and were meant only for hobbyists. The first commercial version of the Kinect for Windows SDK (v1.0) was launched in February 2012 along with a separate commercial hardware device. SDK v1.5 was released in May 2012 with a bunch of new features, and the current version of the Kinect for Windows SDK (v1.6) was launched in October 2012. The hardware hasn't changed since its first release. It was initially limited to only 12 countries across the globe. Now the new Kinect for Windows sensor is available in more than 40 countries. The current version of the SDK also supports speech recognition for multiple languages.

Downloading the SDK and the Developer Toolkit

The Kinect SDK and the Developer Toolkit are available for free and can be downloaded from http://www.microsoft.com/en-us/kinectforwindows/. The installer will automatically install the 64- or 32-bit version of the SDK depending on your operating system. The Kinect for Windows Developer Toolkit is an additional installer that includes samples, tools, and other development extensions. The following diagram shows these components:

The main reason behind keeping the SDK and the Developer Toolkit in two different installers is to allow the Developer Toolkit to be updated independently of the SDK. This helps to keep the toolkit and samples updated and distributed to the community without changing or updating the actual SDK version. The versions of the Kinect for Windows SDK and the Kinect for Windows Developer Toolkit might not be the same.

Installing Kinect for Windows SDK

Before running the installation, make sure of the following:

You have uninstalled all the previous versions of the Kinect for Windows SDK
The Kinect sensor is not plugged into a USB port on the computer
There are no Visual Studio instances currently running

Start the installer, which will display the End User License Agreement as its start screen. You need to read and accept this agreement to proceed with the installation. The following screenshot shows the license agreement:

Accept the agreement by selecting the checkbox and clicking on the Install option, which will do the rest of the job automatically. Before the installation, your computer may pop up the User Account Control (UAC) dialog to get a confirmation from you that you are authorizing the installer to make changes to your computer. Once the installation is over, you will be notified, along with an option for installing the Developer Toolkit, as shown in the next screenshot:

Is it mandatory to uninstall the previous version of the SDK before we install the new one?
The upgrade will happen without any hassles if your current version is a non-Beta version. As a standard procedure, it is always recommended to uninstall the older SDK prior to installing the newer one if your current version is a Beta version.
Installing the Developer Toolkit

If you didn't download the Developer Toolkit installer earlier, you can click on the Download the Developer Toolkit option of the SDK setup wizard (refer to the previous screenshot); this will first download and then install the Developer Toolkit setup. If you have already downloaded the setup, you can close the current window and execute the standalone Toolkit installer. The installation process for the Developer Toolkit is similar to the process for the SDK installer.

Components installed by the SDK and the Developer Toolkit

The Kinect for Windows SDK and the Kinect for Windows Developer Toolkit install the drivers, assemblies, samples, and documentation. To check which components are installed, you can navigate to the Install and Uninstall Programs section of Control Panel and search for Kinect. The following screenshot shows the list of components that are installed with the SDK and Toolkit installers:

The default location for the SDK and Toolkit installation is %ProgramFiles%\Microsoft SDKs\Kinect.

Kinect management service

The Kinect for Windows SDK also installs Kinect Management, which is a Windows service that runs in the background while your PC communicates with the device. This service is responsible for the following tasks:

Listening to the Kinect device for any status changes
Interacting with the COM Server for any native support
Managing the Kinect audio components by interacting with Windows audio drivers

You can view this service by launching Services from Control Panel | Administrative Tools, or by typing Services.msc in the Run command.

Is it necessary to install the Kinect SDK on end users' systems?
The answer is No. When you install the Kinect for Windows SDK, it creates a Redist directory containing an installer that is designed to be deployed with Kinect applications and that installs the runtime and drivers. This is the path where you can find the setup file after the SDK is installed:

%ProgramFiles%\Microsoft SDKs\Kinect\v1.6\Redist\KinectRuntime-v1.6-Setup.exe

This can be used with your application deployment package, which will install only the runtime and the necessary drivers.

Connecting the sensor with the system

Now that we have installed the SDK, we can plug the Kinect device into the PC. The very first time you plug the device into your system, you will notice the LED indicator of the Kinect sensor turning solid red, and the system will start installing the drivers automatically. The default location of the drivers is %Program Files%\Microsoft Kinect Drivers\Drivers. The drivers will be loaded only after the installation of the SDK is complete, and it's a one-time job. This process also checks for the latest Windows updates for the USB drivers, so it is good to be connected to the Internet if you don't have the latest Windows updates. The check marks in the dialog box shown in the next screenshot indicate successful driver software installation:

When the drivers have finished loading and are loaded properly, the LED light on your Kinect sensor will turn solid green. This indicates that the device is functioning properly and can communicate with the PC as well.

Verifying the installed drivers

This is typically a troubleshooting procedure in case you encounter any problems. The verification procedure will also help you to understand how the device drivers are installed within your system. In order to verify that the drivers are installed correctly, open Control Panel and select Device Manager; then look for the Kinect for Windows node.
You will find the Kinect for Windows Device option listed, as shown in the next screenshot:

Not able to view all the device components

At some point, you may find that you are able to view only the Kinect for Windows Device node (refer to the following screenshot). At this point, it looks as if the device is ready. However, a careful examination reveals a small hitch. Let's see whether you can figure it out or not! The Kinect device LED is on and Device Manager has also detected the device, which is absolutely fine, but we are still missing something here. The device is connected to the PC using the USB port, and the system prompt shows the device installed successfully, so where is the problem?

The USB port that the device is plugged into does not, by itself, supply the power required by the camera, sensor, and motor. At this point, if you plug in the external power supply and turn the power on, you will find all the driver nodes in Device Manager loaded automatically. This is one of the most common mistakes made by developers. While working with the Kinect SDK, make sure your Kinect device is connected to the computer using the USB port and the external power adapter is plugged in and turned on. The next picture shows the Kinect sensor with its USB connector and power adapter, and how they are used:

With the aid of the external power supply, the system will start searching for Windows updates for the USB components. Once everything is installed properly, the system will prompt you as shown in the next screenshot:

All the check marks in the screenshot indicate that the corresponding components are ready to be used, and the same components are also reflected in Device Manager. The messages prompting for the loading of drivers, and the installation prompts displayed while the drivers are being loaded, may vary depending upon the operating system you are using. You might also not receive any of them if the drivers are being loaded in the background.

Detecting the loaded drivers in Device Manager

Navigate to Control Panel | Device Manager, look for the Kinect for Windows node, and you will find the list of components detected. Refer to the next screenshot:

The Kinect for Windows Audio Array Control option indicates the driver for the Kinect audio system, whereas the Kinect for Windows Camera option controls the camera sensor. The Kinect for Windows Security Control option is used to check whether the device being used is a genuine Microsoft Kinect for Windows sensor or not. In addition to appearing under the Kinect for Windows node, the Kinect for Windows USB Audio option should also appear under the Sound, Video and Game Controllers node, as shown in the next screenshot:

Once the Kinect sensor is connected, you can identify the Kinect microphone like any other microphone connected to your PC in the Audio Device Manager section. Look at the next screenshot:
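Beyond Device Manager, you can also confirm from code that the SDK can see the sensor. The following is a minimal sketch, assuming a console project that references Microsoft.Kinect.dll from SDK v1.6; the class and variable names here are made up for this example:

using System;
using Microsoft.Kinect;

class SensorCheck
{
    static void Main()
    {
        // KinectSensors lists every sensor the runtime knows about,
        // whether or not it is fully powered and connected
        foreach (KinectSensor sensor in KinectSensor.KinectSensors)
        {
            Console.WriteLine("Sensor {0}: {1}", sensor.UniqueKinectId, sensor.Status);

            if (sensor.Status == KinectStatus.Connected)
            {
                // Starting and stopping the sensor confirms the drivers are usable
                sensor.Start();
                Console.WriteLine("Sensor started successfully.");
                sensor.Stop();
            }
            else if (sensor.Status == KinectStatus.NotPowered)
            {
                // This is the symptom described above: the USB cable is plugged in,
                // but the external power adapter is off or disconnected
                Console.WriteLine("Connect and switch on the external power adapter.");
            }
        }
    }
}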


Displaying Posts and Pages Using the WordPress Loop

Packt
28 Jun 2010
12 min read
(For more resources on WordPress, see here.)

The Loop is the basic building block of WordPress template files. You'll use The Loop when displaying posts and pages, both when you're showing multiple items and when you're showing a single one. Inside of The Loop you use WordPress template tags to render information in whatever manner your design requires. WordPress provides the data required for a default Loop on every single page load. In addition, you're able to create your own custom Loops that display the post and page information that you need. This power allows you to create advanced designs that require a variety of information to be displayed. This article will cover both basic and advanced Loop usage, and you'll see exactly how to use this most basic WordPress structure.

Creating a basic Loop

The Loop nearly always takes the same basic structure. In this recipe, you'll become acquainted with this structure, find out how The Loop works, and get up and running in no time.

How to do it...

First, open the file in which you wish to iterate through the available posts. In general, you use The Loop in every single template file that is designed to show posts. Some examples include index.php, category.php, single.php, and page.php. Place your cursor where you want The Loop to appear, and then insert the following code:

<?php
if ( have_posts() ) {
    while ( have_posts() ) {
        the_post(); ?>
        <h2><?php the_title(); ?></h2>
    <?php }
}
?>

Using the WordPress theme test data with the above Loop construct, you end up with something that looks similar to the example shown in the following screenshot:

Depending on your theme's styles, this output could obviously look very different. However, the important thing to note is that you've used The Loop to iterate over available data from the system and then display pieces of that data to the user in the way that you want to. From here, you can use a wide variety of template tags in order to display different information depending on the specific requirements of your theme.

How it works...

A deep understanding of The Loop is paramount to becoming a great WordPress designer and developer, so you should understand each of the items in the above code snippet fairly well. First, you should recognize that this is just a standard while loop with a surrounding if conditional. There are some special WordPress functions that are used in these two items, but if you've done any PHP programming at all, you should be intimately familiar with the syntax here. If you don't have experience programming in PHP, then you might want to check out the syntax rules for the if and while constructs at http://php.net/if and http://php.net/while, respectively.

The next thing to understand about this generic loop is that it depends directly on the global $wp_query object. $wp_query is created when the request is parsed, request variables are found, and WordPress figures out the posts that should be displayed for the URL that a visitor has arrived from. $wp_query is an instance of the WP_Query object, and the have_posts and the_post functions delegate to methods on that object. The $wp_query object holds information about the posts to be displayed and the type of page being displayed (normal listing, category archive, date archive, and so on). When have_posts is called in the if conditional above, the $wp_query object determines whether any posts matched the request that was made, and if so, whether there are any posts that haven't been iterated over.
If there are posts to display, a while construct is used that again checks the value of have_posts. During each iteration of the while loop, the the_post function is called. the_post sets an index on $wp_query that indicates which posts have been iterated over. It also sets up several global variables, most notably $post. Inside of The Loop, the the_title function uses the global $post variable that was set up by the_post to produce the appropriate output based on the currently-active post item. This is basically the way that all template tags work. If you're interested in further information on how the WP_Query class works, you should read the documentation about it in the WordPress Codex at http://codex.wordpress.org/Function_Reference/WP_Query. You can find more information about The Loop at http://codex.wordpress.org/The_Loop.

Displaying ads after every third post

If you're looking to display ads on your site, one of the best places to do it is mixed in with your main content. This will cause visitors to view your ads as they're engaged with your work, often resulting in higher click-through rates and better paydays for you.

How to do it...

First, open the template in which you wish to display advertisements while iterating over the available posts. This will most likely be a listing template file like index.php or category.php. Decide on the number of posts that you wish to display between advertisements. Place your cursor where you want your loop to appear, and then insert the following code:

<?php
if ( have_posts() ) {
    $ad_counter = 0;
    $after_every = 3;

    while ( have_posts() ) {
        $ad_counter++;
        the_post(); ?>
        <h2><?php the_title(); ?></h2>
        <?php
        // Display ads
        $ad_counter = $ad_counter % $after_every;
        if ( 0 == $ad_counter ) {
            echo '<h2 style="color:red;">Advertisement</h2>';
        }
    }
}
?>

If you've done everything correctly, and are using the WordPress theme test data, you should see something similar to the example shown in the following screenshot:

Obviously, the power here comes when you mix in paying ads or images that link to products that you're promoting. Instead of a simple heading element for the Advertisement text, you could dynamically insert JavaScript or Flash elements that pull in advertisements for you.

How it works...

As with the basic Loop, this code snippet iterates over all available posts. In this recipe, however, a counter variable is declared that counts the number of posts that have been iterated over. Every time a post is about to be displayed, the counter is incremented to track that another post has been rendered. After every third post, the advertisement code is displayed, because the value of the $ad_counter variable is equal to 0. It is very important to put the conditional check and display code after the post has been displayed. Also, notice that the $ad_counter variable will never be greater than 3, because the modulus operator (%) is applied every time through The Loop. Finally, if you wish to change the frequency of the ad display, simply modify the $after_every variable from 3 to whatever number of posts you want to display between ads.

Removing posts in a particular category

Sometimes you'll want to make sure that posts from a certain category never implicitly show up in the Loops that you're displaying in your template. The category could be a special one that you use to denote portfolio pieces, photo posts, or whatever else you wish to remove from regular Loops.

How to do it...
First, you have to decide which category you want to exclude from your Loops. Note the name of the category, and then open or create your theme's functions.php file. Your functions.php file resides inside of your theme's directory and may contain some other code. Inside of functions.php, insert the following code:

add_action( 'pre_get_posts', 'remove_cat_from_loops' );

function remove_cat_from_loops( $query ) {
    if ( !$query->get( 'suppress_filters' ) ) {
        $cat_id = get_cat_ID( 'Category Name' );
        $excluded_cats = $query->get( 'category__not_in' );

        if ( is_array( $excluded_cats ) ) {
            $excluded_cats[] = $cat_id;
        } else {
            $excluded_cats = array( $cat_id );
        }

        $query->set( 'category__not_in', $excluded_cats );
    }
    return $query;
}

How it works...

In the above code snippet, you are excluding the category with the name Category Name. To exclude a different category, change the Category Name string to the name of the category you wish to remove from Loops. You are filtering the WP_Query object that drives every Loop. Before any posts are fetched from the database, you dynamically change the value of the category__not_in variable in the WP_Query object. You append an additional category ID to the existing array of excluded category IDs to ensure that you're not undoing the work of some other developer. Alternatively, if the category__not_in variable is not an array, you assign it an array with a single item. Every category ID in the category__not_in array will be excluded from The Loop, because when the WP_Query object eventually makes a request to the database, it structures the query such that no posts contained in any of the categories identified in the category__not_in variable are fetched.

Please note that the denoted category will be excluded by default from all Loops that you create in your theme. If you want to display posts from the category that you've marked to exclude, then you need to set the suppress_filters parameter to true when querying for posts, as follows:

query_posts( array(
    'cat' => get_cat_ID( 'Category Name' ),
    'suppress_filters' => true
) );

Removing posts with a particular tag

Similar to categories, it could be desirable to remove posts with a certain tag from The Loop. You may wish to do this if you are tagging certain posts as asides, or if you are saving posts that contain some text that needs to be displayed in a special context elsewhere on your site.

How to do it...

First, you have to decide which tag you want to exclude from your Loops. Note the name of the tag, and then open or create your theme's functions.php file. Inside of functions.php, insert the following code:

add_action( 'pre_get_posts', 'remove_tag_from_loops' );

function remove_tag_from_loops( $query ) {
    if ( !$query->get( 'suppress_filters' ) ) {
        $tag_id = get_term_by( 'name', 'tag1', 'post_tag' )->term_id;
        $excluded_tags = $query->get( 'tag__not_in' );

        if ( is_array( $excluded_tags ) ) {
            $excluded_tags[] = $tag_id;
        } else {
            $excluded_tags = array( $tag_id );
        }

        $query->set( 'tag__not_in', $excluded_tags );
    }
    return $query;
}

How it works...

In the above code snippet, you are excluding the tag with the slug tag1. To exclude a different tag, change the string tag1 to the name of the tag that you wish to remove from all Loops. When deciding which tags to exclude, the WordPress system looks at a query parameter named tag__not_in, which is an array. In the above code snippet, the function appends the ID of the tag that should be excluded directly to the tag__not_in array.
Alternatively, if tag__not_in isn't already initialized as an array, it is assigned an array with a single item, consisting of the ID of the tag that you wish to exclude. After that, all posts with that tag will be excluded from WordPress Loops.

Please note that the chosen tag will be excluded, by default, from all Loops that you create in your theme. If you want to display posts from the tag that you've marked to exclude, then you need to set the suppress_filters parameter to true when querying for posts, as follows:

query_posts( array(
    'tag' => get_term_by( 'name', 'tag1', 'post_tag' )->term_id,
    'suppress_filters' => true
) );

Highlighting sticky posts

Sticky posts are a feature added in version 2.7 of WordPress and can be used for a variety of purposes. The most frequent use is to mark posts that should be "featured" for an extended period of time. These posts often contain important information or highlight things (like a product announcement) that the blog author wants to display in a prominent position for a long period of time.

How to do it...

First, place your cursor inside of a Loop where you're displaying posts and want to single out your sticky content. Inside The Loop, after a call to the_post, insert the following code:

<?php
if ( is_sticky() ) { ?>
    <div class="sticky-announcer">
        <p>This post is sticky.</p>
    </div>
<?php } ?>

Create a sticky post on your test blog and take a look at your site's front page. You should see text appended to the sticky post, and the post should be moved to the top of The Loop. You can see this in the following screenshot:

How it works...

The is_sticky function checks the currently-active post to see if it is a sticky post. It does this by examining the value retrieved by calling get_option('sticky_posts'), which is an array, and trying to find the active post's ID in that array. In this case, if the post is sticky, then the sticky-announcer div is output with a message. However, there is no limit to what you can do once you've determined whether a post is sticky. Some ideas include:

Displaying a special icon for sticky posts
Changing the background color of sticky posts
Adding content dynamically to sticky posts
Displaying post content differently for sticky posts
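Building on these ideas, the following is a minimal sketch of one possible follow-on: a small custom Loop that pulls only the sticky posts into a featured area at the top of a template. The CSS class and the limit of three posts are assumptions made for this example, and the caller_get_posts argument shown here is the name used in the WordPress versions contemporary with this recipe (later versions rename it ignore_sticky_posts):

<?php
// Fetch the IDs of all sticky posts; this option holds a plain array of IDs
$sticky_ids = get_option( 'sticky_posts' );

if ( !empty( $sticky_ids ) ) {
    // Query only the sticky posts, without letting WordPress reorder them again
    $featured = new WP_Query( array(
        'post__in'         => $sticky_ids,
        'posts_per_page'   => 3,
        'caller_get_posts' => 1
    ) );

    while ( $featured->have_posts() ) {
        $featured->the_post(); ?>
        <h2 class="featured-sticky"><?php the_title(); ?></h2>
    <?php }

    // Restore the global $post so any Loop that follows behaves normally
    wp_reset_postdata();
}
?>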