
How-To Tutorials


Java Server Faces (JSF) Tools

Packt
07 Oct 2009
5 min read
Throughout this article, we will follow the "learning by example" technique and develop a completely functional JSF application representing a JSF registration form, as you can see in the following screenshot. These kinds of forms can be seen on many sites, so it is very useful to know how to develop them with JSF Tools. The example consists of a simple JSP page, named register.jsp, containing a JSF form with four fields (these fields will be mapped into a managed bean), as follows:

- personName – a text field for the user's name
- personAge – a text field for the user's age
- personPhone – a text field for the user's phone number
- personBirthDate – a text field for the user's birth date

The information provided by the user will be properly converted and validated using JSF capabilities. Afterwards, the information will be displayed on a second JSP page, named success.jsp.

Overview of JSF

Java Server Faces (JSF) is a popular framework used to develop User Interfaces (UI) for server-based applications (it is also known as JSR 127 and is a part of J2EE 1.5). It contains a set of UI components totally managed through JSF support, covering event handling, validation, navigation rules, internationalization, accessibility, customizability, and so on. In addition, it contains a tag library for expressing UI components within a JSP page. Among the features that JSF provides, we mention the following:

- A set of base UI components
- Extension of the base UI components to create additional UI component libraries
- Custom UI components
- Reusable UI components
- Reading and writing application data to and from UI components
- Managing UI state across server requests
- Wiring client-generated events to server-side application code
- Multiple rendering capabilities that enable JSF UI components to render themselves differently depending on the client type
- JSF is tool friendly
- JSF is implementation agnostic
- Abstraction away from HTTP, HTML, and JSP

Speaking of the JSF life cycle, you should know that every JSF request that renders a JSP involves a JSF component tree, also called a view. Each request is made up of phases. By standard, we have the following phases (shown in the following figure):

- The view is restored
- Request values are applied
- Validations are processed
- Model values are updated
- The application is invoked
- A response is rendered
- During the above phases, events are notified to event listeners

We end this overview with a few bullets regarding JSF UI components:

- A UI component can be anything from a simple to a complex user interface (for example, from a simple text field to an entire page)
- A UI component can be associated with model data objects through value binding
- A UI component can use helper objects, like converters and validators
- A UI component can be rendered in different ways, based on invocation
- A UI component can be invoked directly or from JSP pages using JSF tag libraries

Creating a JSF project stub

In this section you will see how to create a JSF project stub with JSF Tools. This is a straightforward task based on the following steps:

1. From the File menu, select New | Project. In the New Project window, expand the JBoss Tools Web node and select the JSF Project option (as shown in the following screenshot). After that, click the Next button.
2. In the next window, it is mandatory to specify a name for the new project (Project Name field), a location (Location field—only if you don't use the default path), a JSF implementation (JSF Environment field), and a template (Template field). As you can see, JSF Tools offers a set of predefined templates, as follows:

- JSFBlankWithLibs—a blank JSF project with complete JSF support.
- JSFKickStartWithLibs—a demo JSF project with complete JSF support.
- JSFKickStartWithoutLibs—a demo JSF project without the JSF libraries. In this case, the JSF libraries are left out to avoid potential conflicts with servers that already offer JSF support.
- JSFBlankWithoutLibs—a blank JSF project without the JSF libraries. Again, the libraries are left out to avoid potential conflicts with servers that already offer JSF support (for example, JBoss AS includes JSF support).

The next screenshot is an example of how to configure our JSF project at this step. At the end, just click the Next button.

3. This step allows you to set the servlet version (Servlet Version field), the runtime (Runtime field) used for building and compiling the application, and the server where the application will be deployed (Target Server field). Note that this server is in direct relationship with the selected runtime. The following screenshot shows an example of how to complete this step (click on the Finish button).

After a few seconds, the new JSF project stub is created and ready to take shape! You can see the new project in the Package Explorer view.


How to get started with Azure Stream Analytics and 7 reasons to choose it

Sugandha Lahoti
19 Apr 2018
11 min read
In this article, we will introduce Azure Stream Analytics and show how to configure it. We will then look at some of the key advantages of the Stream Analytics platform, including how it enhances developer productivity and reduces the Total Cost of Ownership (TCO) of building and maintaining a scalable streaming solution, among other factors.

What is Azure Stream Analytics and how does it work?

Microsoft Azure Stream Analytics falls into the category of PaaS services, where customers don't need to manage the underlying infrastructure. However, they are still responsible for the application they build on top of the PaaS service and, more importantly, for the customer data. Azure Stream Analytics is a fully managed, serverless PaaS service built for real-time analytics computations on streaming data. The service can consume from a multitude of sources, and Azure takes care of the hosting, scaling, and management of the underlying hardware and software ecosystem.

The following are some examples of different use cases for Azure Stream Analytics. When we design a solution that involves streaming data, in almost every case Azure Stream Analytics will be part of a larger solution that the customer is trying to deploy: real-time dashboarding for monitoring purposes, real-time monitoring of IT infrastructure equipment, preventive maintenance (auto manufacturing, vending machines, and so on), and fraud detection. This means that the streaming solution needs to provide out-of-the-box integration with a whole plethora of services that can help build a solution relatively quickly.

Let's review a usage pattern for Azure Stream Analytics using a canonical model. We can see devices and applications that generate data on the left in the preceding illustration; they can connect directly, or through cloud gateways, to your stream ingest sources. Azure Stream Analytics can pick up the data from these ingest sources, augment it with reference data, run the necessary analytics, gather insights, and push them downstream for action. You can trigger business processes, write the data to a database, or directly view the anomalies on a dashboard. A number of streaming ingest technologies are used in this canonical pattern; let's review them:

- Event Hub: A global-scale event ingestion system, where one can publish events from millions of sensors and applications. It guarantees that as soon as an event comes in, a subscriber can pick that event up within a few milliseconds. You can have one or more subscribers as well, depending on your business requirements. Typical use cases for an Event Hub are real-time financial fraud detection and social media sentiment analytics.
- IoT Hub: IoT Hub is very similar to Event Hub but takes the concept further in that you can take bidirectional actions. It will not only ingest data from sensors in real time but can also send commands back to them. It also enables device management, and fundamental aspects such as security are a primary need for IoT solutions built with it.
- Azure Blob: Azure Blob is massively scalable object storage for unstructured data, accessible through HTTP or HTTPS. Blob storage can expose data publicly to the world or store application data privately.
- Reference Data: This is auxiliary data that is either static or changes slowly.
Reference data can be used to enrich incoming data to perform correlation and lookups. On the ingress side, with a few clicks, you can connect to Event Hub, IoT Hub, or Blob storage. The streaming data can be enriched with reference data held in the Blob store. Data from the ingress process is consumed by the Azure Stream Analytics service, and we can call machine learning (ML) for event scoring in real time. The data can be egressed to live dashboards in Power BI, or pushed back to Event Hub, from where dashboards and reports can pick it up. The following is a summary of the ingress, egress, and archiving options:

- Ingress choices: Event Hub, IoT Hub, Blob storage
- Egress choices:
  - Live dashboards: Power BI, Event Hub
  - Driving workflows: Event Hubs, Service Bus
  - Archiving and post analysis: Blob storage, Document DB, Data Lake, SQL Server, Table storage, Azure Functions

One key point to note is the number of customers who push data from Stream Analytics processing (the egress point) to Event Hub and then add an Azure-hosted website as their own custom dashboard. One can also drive workflows by pushing the events to Azure Service Bus and Power BI. For example, a customer can build an IoT support solution that detects an anomaly in connected appliances and pushes the result into Azure Service Bus. A worker role can run as a daemon to pull the messages and create support tickets using the Dynamics CRM API, and Power BI can then be used on the archived tickets for post analysis. This solution eliminates the need for the customer to log a ticket; the system does it automatically based on predefined anomaly thresholds. This is just one example of a real-time connected solution. There are a number of use cases that don't even involve real-time alerts: you can also use it to aggregate data, filter data, and store it in Blob storage, Azure Data Lake (ADL), Document DB, or SQL, and then run U-SQL with Azure Data Lake Analytics (ADLA), HDInsight, or even call ML models for things like predictive maintenance.

Configuring Azure Stream Analytics

Azure Stream Analytics (ASA) is a fully managed, cost-effective, real-time event processing engine. Stream Analytics makes it easy to set up real-time analytic computations on data streaming from devices, sensors, websites, social media, applications, infrastructure systems, and more. The service can be set up with a few clicks in the Azure portal; users author a Stream Analytics job specifying the input source of the streaming data, the output sink for the results of the job, and a data transformation expressed in a SQL-like language. Jobs can be monitored, and you can adjust the scale and speed of a job in the Azure portal to go from a few kilobytes to a gigabyte or more of events processed per second. Let's review how to configure Azure Stream Analytics step by step:

1. Log in to the Azure portal using your Azure credentials, click on New, and search for Stream Analytics job.
2. Click on Create to create an Azure Stream Analytics instance.
3. Provide a Job Name and Resource group name for the Azure Stream Analytics job deployment.
4. After a few minutes, the deployment will be complete.
5. Review the deployment, including the audit trail of the creation.
6. Scale the job up and down using a simple UI.
7. Use the built-in Query interface to run queries.
8. Run queries using a SQL-like interface, with the ability to accept late-arriving events through simple GUI-based configuration.
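The article does not include a sample job query, so to make the SQL-like language concrete, here is a hedged illustration of the kind of query you might author in that interface. The input and output aliases (streaminput, thresholds, powerbioutput) and the field names are hypothetical placeholders, not taken from the book:

```sql
-- Hypothetical ASA job query: average temperature per device over 60-second tumbling windows,
-- enriched with slowly changing reference data and written to a Power BI output.
SELECT
    s.DeviceId,
    r.DeviceModel,
    AVG(s.Temperature) AS AvgTemperature,
    COUNT(*) AS EventCount
INTO
    [powerbioutput]
FROM
    [streaminput] s TIMESTAMP BY s.EventTime
JOIN
    [thresholds] r          -- reference data input (for example, a CSV held in Blob storage)
    ON s.DeviceId = r.DeviceId
GROUP BY
    s.DeviceId, r.DeviceModel, TumblingWindow(second, 60)
```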
Key advantages of Azure Stream Analytics

Let's quickly review how traditional streaming solutions are built. The core deployment starts with procuring and setting up the basic infrastructure necessary to host the streaming solution. Once this is done, the ingress and egress solution is built on top of the deployed infrastructure. Once the core infrastructure is built, custom tools are used to add business intelligence (BI) or machine-learning integration. After the system goes into production, scaling during runtime needs to be taken care of by capturing telemetry and building and configuring HW/SW resources as necessary. As business needs ramp up, so do the monitoring and troubleshooting.

Security

Azure Stream Analytics provides a number of inbuilt security mechanisms in areas such as authentication, authorization, auditing, segmentation, and data protection. Let's quickly review them:

- Authentication support: Authentication in Azure Stream Analytics is done at the portal level. Users should have a valid subscription ID and password to access the Azure Stream Analytics job.
- Authorization: Authorization is the process during login where users provide their credentials (for example, user account name and password, smart card and PIN, Secure ID and PIN, and so on) to prove their Microsoft identity so that they can retrieve their access token from the authentication server. Authorization is supported by Azure Stream Analytics; only authenticated and authorized users can access the Azure Stream Analytics job.
- Support for encryption: Data at rest can be protected using client-side encryption and TDE.
- Support for key management: Key management is supported through the ingress and egress points.

Programmer productivity

One of the key features of Azure Stream Analytics is developer productivity, driven largely by a query language based on SQL constructs. It provides a wide array of functions for analytics on streaming data, all the way from simple data manipulation functions through date and time functions, temporal functions, mathematical and string functions, scaling, and much more. It provides two features natively out of the box, reviewed in detail in the next sections:

- Declarative SQL constructs
- Built-in temporal semantics

Declarative SQL constructs

A simple-to-use UI is provided, and queries can be constructed using that user interface.
The following is the feature set of the declarative SQL constructs:

- Filters (WHERE)
- Projections (SELECT)
- Time-window and property-based aggregates (GROUP BY)
- Time-shifted joins (specifying time bounds within which the joining events must occur)
- All combinations thereof

The following is a summary of the different constructs for manipulating streaming data:

- Data manipulation: SELECT, FROM, WHERE, GROUP BY, HAVING, CASE WHEN THEN ELSE, INNER/LEFT OUTER JOIN, UNION, CROSS/OUTER APPLY, CAST, INTO, ORDER BY ASC, DESC
- Date and time functions: DateName, DatePart, Day, Month, Year, DateDiff, DateTimeFromParts, DateAdd
- Temporal functions: Lag, IsFirst, Last, CollectTop
- Aggregate functions: SUM, COUNT, AVG, MIN, MAX, STDEV, STDEVP, VAR, VARP, TopOne
- Mathematical functions: ABS, CEILING, EXP, FLOOR, POWER, SIGN, SQUARE, SQRT
- String functions: Len, Concat, CharIndex, Substring, Lower, Upper, PatIndex
- Scaling extensions: WITH, PARTITION BY, OVER
- Geospatial: CreatePoint, CreatePolygon, CreateLineString, ST_DISTANCE, ST_WITHIN, ST_OVERLAPS, ST_INTERSECTS

Built-in temporal semantics

Azure Stream Analytics provides prebuilt temporal semantics to query time-based information and merge streams with multiple timelines. Here is a list of the temporal semantics:

- Application or ingest timestamp
- Windowing functions
- Policies for event ordering
- Policies to manage latencies between ingress sources
- Managing streams with multiple timelines
- Joining multiple streams of temporal windows
- Joining streaming data with data at rest

Lowest total cost of ownership

Azure Stream Analytics is a fully managed PaaS service on Azure. There are no upfront costs or costs involved in setting up compute clusters and complex hardware wiring like you would have with an on-premises solution. It's a simple job service where there is no cluster provisioning, and customers pay for what they use. A key consideration is variable workloads. With Azure Stream Analytics, you do not need to design your system for peak throughput; you can add more compute footprint as you go. If you have scenarios where data comes in spurts, you do not want to design a system for peak usage and leave it underutilized at other times. Let's say you are building a traffic monitoring solution—naturally, there is the expectation that peaks will show up during the morning and evening rush hours. However, you would not want to design your system or investments to cater to these extremes. The cloud elasticity that Azure offers is a perfect fit here. Azure Stream Analytics also offers fast recovery through checkpointing and at-least-once event delivery.

Mission-critical, enterprise-grade scalability and availability

Azure Stream Analytics is available across multiple worldwide data centers and sovereign clouds. Azure Stream Analytics promises three-nines (99.9%) availability that is financially guaranteed, with built-in auto recovery so that you never lose data. The good thing is that customers do not need to write a single line of code to achieve this; enterprise readiness is built into the platform. Here is a summary of the enterprise-ready features:

- Distributed scale-out architecture
- Ingests millions of events per second
- Accommodates variable loads
- Easily adds incremental resources to scale
- Available across multiple data centers and sovereign clouds

Global compliance

In addition, Azure Stream Analytics is compliant with many industry and government certifications. It is HIPAA-compliant out of the box and suitable for hosting healthcare applications.
That's how customers can scale up their businesses confidently. Here is a summary of global compliance:

- ISO 27001
- ISO 27018
- SOC 1 Type 2
- SOC 2 Type 2
- SOC 3 Type 2
- HIPAA/HITECH
- PCI DSS Level 1
- European Union Model Clauses
- China GB 18030

Thus we reviewed Azure Stream Analytics and its key advantages, including developer productivity, ease of development, reduced total cost of ownership, global compliance certifications, and the value of a PaaS-based streaming solution for hosting mission-critical applications securely.

This post is taken from the book Stream Analytics with Microsoft Azure, written by Anindita Basak, Krishna Venkataraman, Ryan Murphy, and Manpreet Singh. This book will help you understand Azure Stream Analytics so that you can develop efficient analytics solutions that can work with any type of data.

Say hello to Streaming Analytics
How to build a live interactive visual dashboard in Power BI with Azure Stream
Performing Vehicle Telemetry job analysis with Azure Stream Analytics tools


Internet-Connected Smart Water Meter

Packt
23 Oct 2015
15 min read
In this article, Pradeeka Seneviratne, author of the book Internet of Things with Arduino Blueprints, explains that for many years, and even now, water meter readings have been collected manually: a person has to visit the location where the water meter is installed. In this article, you will learn how to make a smart water meter with an LCD screen that can connect to the Internet and serve meter readings to the consumer over the Internet.

In this article, you will do the following:

- Learn about water flow sensors and their basic operation
- Learn how to mount and plumb a water flow meter on and into the pipeline
- Read and count the water flow sensor pulses
- Calculate the water flow rate and volume
- Learn about LCD displays and connecting them to Arduino
- Convert a water flow meter into a simple web server and serve meter readings through the Internet

(For more resources related to this topic, see here.)

Prerequisites

- An Arduino UNO R3 board (http://store.arduino.cc/product/A000066)
- Arduino Ethernet Shield R3 (https://www.adafruit.com/products/201)
- A liquid flow sensor (http://www.futurlec.com/FLOW25L0.shtml)
- A Hitachi HD44780 driver-compatible LCD screen (16 x 2) (https://www.sparkfun.com/products/709)
- A 10K ohm resistor
- A 10K ohm potentiometer (https://www.sparkfun.com/products/9806)
- A few jumper wires with male and female headers (https://www.sparkfun.com/products/9140)
- A breadboard (https://www.sparkfun.com/products/12002)

Water flow sensors

The heart of a water flow sensor is a Hall effect sensor (https://en.wikipedia.org/wiki/Hall_effect_sensor) that outputs pulses as the magnetic field changes. Inside the housing, there is a small pinwheel with a permanent magnet attached to it. When the water flows through the housing, the pinwheel begins to spin, and the magnet attached to it passes very close to the Hall effect sensor in every cycle. The Hall effect sensor is covered with a separate plastic housing to protect it from the water. The result is an electric pulse that transitions from low voltage to high voltage, or high voltage to low voltage, depending on the attached permanent magnet's polarity. The resulting pulses can be read and counted using the Arduino.

For this project, we will use a Liquid Flow sensor from Futurlec (http://www.futurlec.com/FLOW25L0.shtml). The following image shows the external view of the liquid flow sensor:

Liquid flow sensor – the flow direction is marked with an arrow

The following image shows the inside view of the liquid flow sensor. You can see the pinwheel located inside the housing:

Pinwheel attached inside the water flow sensor

Wiring the water flow sensor with Arduino

The water flow sensor that we are using in this project has three wires, which are the following:

- Red (or possibly a different color) wire, which is the positive terminal
- Black (or possibly a different color) wire, which is the negative terminal
- Brown (or possibly a different color) wire, which is the DATA terminal

All three wire ends are connected to a JST connector. Always refer to the datasheet of the product for wiring specifications before connecting it to the microcontroller and the power source. Using jumper wires with male and female headers, do the following:

1. Connect the positive terminal of the water flow sensor to Arduino 5V.
2. Connect the negative terminal of the water flow sensor to Arduino GND.
3. Connect the DATA terminal of the water flow sensor to Arduino digital pin 2.
Water flow sensor connected to the Arduino Ethernet Shield using three wires

You can power the water flow sensor directly from the Arduino, since most residential-type water flow sensors operate at 5V and consume a very low amount of current. Read the product manual for more information about the supply voltage and supply current range to save your Arduino from high current consumption by the water flow sensor. If your water flow sensor requires a supply current of more than 200mA or a supply voltage of more than 5V to function correctly, then use a separate power source with it. The following image illustrates jumper wires with male and female headers:

Jumper wires with male and female headers

Reading pulses

The water flow sensor produces and outputs digital pulses that denote the amount of water flowing through it. These pulses can be detected and counted using the Arduino board. Let's assume the water flow sensor that we are using for this project generates approximately 450 pulses per liter (most probably, this value can be found in the product datasheet). So 1 pulse approximately equals [1000 ml / 450 pulses] = 2.22 ml. These values can differ depending on the speed of the water flow and the mounting orientation of the water flow sensor. The Arduino can read the digital pulses generated by the water flow sensor through the DATA line.

Rising edge and falling edge

There are two types of pulses, as listed here:

- Positive-going pulse: In an idle state, the logic level is normally LOW. It goes to the HIGH state, stays there for some time, and comes back to the LOW state.
- Negative-going pulse: In an idle state, the logic level is normally HIGH. It goes to the LOW state, stays there for some time, and comes back to the HIGH state.

The rising and falling edges of a pulse are vertical. The transition from the LOW state to the HIGH state is called the rising edge, and the transition from the HIGH state to the LOW state is called the falling edge.

Representation of rising edge and falling edge in a digital signal

You can capture digital pulses using either the rising edge or the falling edge. In this project, we will use the rising edge.

Reading and counting pulses with Arduino

In the previous step, you attached the water flow sensor to the Arduino UNO. The generated pulses can be read on Arduino digital pin 2, and interrupt 0 is attached to it. The following Arduino sketch will count the number of pulses per second and display it on the Arduino Serial Monitor:

1. Open a new Arduino IDE and copy the sketch named B04844_03_01.ino. Change the following pin number assignment if you have attached your water flow sensor to a different Arduino pin:

    int pin = 2;

2. Verify and upload the sketch on the Arduino board:

    int pin = 2; //Water flow sensor attached to digital pin 2
    volatile unsigned int pulse;
    const int pulses_per_litre = 450;

    void setup()
    {
      Serial.begin(9600);
      pinMode(pin, INPUT);
      attachInterrupt(0, count_pulse, RISING);
    }

    void loop()
    {
      pulse = 0;
      interrupts();
      delay(1000);
      noInterrupts();

      Serial.print("Pulses per second: ");
      Serial.println(pulse);
    }

    void count_pulse()
    {
      pulse++;
    }

3. Open the Arduino Serial Monitor and blow air through the water flow sensor using your mouth. The number of pulses per second will be printed on the Arduino Serial Monitor for each loop, as shown in the following screenshot:

Pulses per second in each loop

The attachInterrupt() function registers count_pulse() as the handler for the interrupt.
When the interrupts() function is called, the count_pulse() function starts to collect the pulses generated by the liquid flow sensor. This continues for 1000 milliseconds, and then the noInterrupts() function is called to stop the operation of the count_pulse() function. The pulse count is held in the pulse variable and printed on the serial monitor. This repeats inside the loop() function until you press the reset button or disconnect the Arduino from the power.

Calculating the water flow rate

The water flow rate is the amount of water flowing at a given point in time and can be expressed in gallons per second or liters per second. The number of pulses generated per liter of water flowing through the sensor can be found in the water flow sensor's specification sheet—let's say there are m pulses per liter. You can also count the number of pulses generated by the sensor per second—let's say there are n pulses per second. The water flow rate R can then be expressed as:

R = n / m (liters per second)

You can also calculate the water flow rate in liters per minute using the following formula:

R = (n / m) * 60 (liters per minute)

For example, if your water flow sensor generates 450 pulses for one liter of water flowing through it, and you get 10 pulses in the first second, then the water flow rate for that second is 10/450 = 0.022 liters per second, or 0.022 * 1000 = 22 milliliters per second.

The following steps explain how to calculate the water flow rate using a simple Arduino sketch:

1. Open a new Arduino IDE and copy the sketch named B04844_03_02.ino.
2. Verify and upload the sketch on the Arduino board. The following code block will calculate the water flow rate in milliliters per second:

    Serial.print("Water flow rate: ");
    Serial.print(pulse * 1000/pulses_per_litre);
    Serial.println(" milliliters per second");

3. Open the Arduino Serial Monitor and blow air through the water flow sensor using your mouth. The number of pulses per second and the water flow rate in milliliters per second will be printed on the Arduino Serial Monitor for each loop, as shown in the following screenshot:

Pulses per second and water flow rate in each loop

Calculating the water flow volume

The water flow volume can be calculated by summing up the product of the flow rate and the time interval:

Volume = ∑ (Flow Rate * Time_Interval)

The following Arduino sketch will calculate and output the total water volume since device startup:

1. Open a new Arduino IDE and copy the sketch named B04844_03_03.ino. The water flow volume can be calculated using the following code block:

    volume = volume + flow_rate * 0.1; //Time Interval is 0.1 second
    Serial.print("Volume: ");
    Serial.print(volume);
    Serial.println(" milliliters");

2. Verify and upload the sketch on the Arduino board.
3. Open the Arduino Serial Monitor and blow air through the water flow sensor using your mouth. The number of pulses per second, the water flow rate in milliliters per second, and the total volume of water in milliliters will be printed on the Arduino Serial Monitor for each loop, as shown in the following screenshot:

Pulses per second, water flow rate, and volume sum in each loop

To accurately measure the water flow rate and volume, the water flow sensor needs to be carefully calibrated. The Hall effect sensor inside the housing is not a precision sensor, and the pulse rate does vary a bit depending on the flow rate, fluid pressure, and sensor orientation.
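The article only shows the fragments that change between the book's numbered sketches. For reference, here is a minimal consolidated sketch that counts pulses, derives the flow rate, and accumulates the volume in one loop. It is an assumption-laden reconstruction for illustration, not the book's B04844_03_03.ino, using the same pin and pulses-per-liter values as above and a one-second measurement window:

```cpp
// Minimal consolidated sketch: pulses per second -> flow rate -> accumulated volume.
// Hypothetical reconstruction; the book's actual sketch may differ.
int pin = 2;                        // water flow sensor DATA wire on digital pin 2
volatile unsigned int pulse = 0;    // pulses counted during the current window
const int pulses_per_litre = 450;   // from the sensor datasheet
float total_volume_ml = 0.0;        // accumulated volume in milliliters

void setup()
{
  Serial.begin(9600);
  pinMode(pin, INPUT);
  attachInterrupt(0, count_pulse, RISING);   // interrupt 0 maps to pin 2 on the UNO
}

void loop()
{
  pulse = 0;
  interrupts();
  delay(1000);                      // count pulses for one second
  noInterrupts();

  // Flow rate in milliliters per second for this one-second window.
  float flow_rate_ml_s = (pulse * 1000.0) / pulses_per_litre;
  // Volume is the sum of flow rate * time interval (here the interval is 1 second).
  total_volume_ml += flow_rate_ml_s * 1.0;

  Serial.print("Pulses per second: ");
  Serial.println(pulse);
  Serial.print("Water flow rate: ");
  Serial.print(flow_rate_ml_s);
  Serial.println(" milliliters per second");
  Serial.print("Volume: ");
  Serial.print(total_volume_ml);
  Serial.println(" milliliters");
}

void count_pulse()
{
  pulse++;
}
```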
Adding an LCD screen to the water meter

You can add an LCD screen to your newly built water meter to display readings, rather than displaying them on the Arduino Serial Monitor. You can then disconnect your water meter from the computer after uploading the sketch to your Arduino. Using a Hitachi HD44780 driver-compatible LCD screen and the Arduino LiquidCrystal library, you can easily integrate it with your water meter. Typically, this type of LCD screen has 16 interface connectors. The display has two rows and 16 columns, so each row can display up to 16 characters.

The following image represents the top view of a Hitachi HD44780 driver-compatible LCD screen. Note that the 16-pin header is soldered to the PCB to easily connect it to a breadboard.

Hitachi HD44780 driver-compatible LCD screen (16 x 2)—top view

The following image represents the bottom view of the LCD screen. Again, you can see the soldered 16-pin header.

Hitachi HD44780 driver-compatible LCD screen (16 x 2)—bottom view

Wire your LCD screen to the Arduino as shown in the next diagram. Use the 10K potentiometer to control the contrast of the LCD screen. Now, perform the following steps to connect your LCD screen to your Arduino:

1. LCD RS pin (pin number 4 from left) to Arduino digital pin 8.
2. LCD ENABLE pin (pin number 6 from left) to Arduino digital pin 7.
3. LCD READ/WRITE pin (pin number 5 from left) to Arduino GND.
4. LCD DB4 pin (pin number 11 from left) to Arduino digital pin 6.
5. LCD DB5 pin (pin number 12 from left) to Arduino digital pin 5.
6. LCD DB6 pin (pin number 13 from left) to Arduino digital pin 4.
7. LCD DB7 pin (pin number 14 from left) to Arduino digital pin 3.
8. Wire a 10K pot between Arduino +5V and GND, and wire its wiper (center pin) to the LCD screen's V0 pin (pin number 3 from left).
9. LCD GND pin (pin number 1 from left) to Arduino GND.
10. LCD +5V pin (pin number 2 from left) to Arduino 5V pin.
11. LCD Backlight Power pin (pin number 15 from left) to Arduino 5V pin.
12. LCD Backlight GND pin (pin number 16 from left) to Arduino GND.

Fritzing representation of the circuit

Open a new Arduino IDE and copy the sketch named B04844_03_04.ino. First, initialize the LiquidCrystal library using the following line:

    #include <LiquidCrystal.h>

To create a new LCD object, the syntax is LiquidCrystal lcd(RS, ENABLE, DB4, DB5, DB6, DB7):

    LiquidCrystal lcd(8, 7, 6, 5, 4, 3);

Then initialize the number of columns and rows of the LCD. The syntax is lcd.begin(number_of_columns, number_of_rows):

    lcd.begin(16, 2);

You can set the starting location for printing text on the LCD screen using the following function; the syntax is lcd.setCursor(column, row):

    lcd.setCursor(7, 1);

The column and row numbers are 0-index based, so the preceding line will start printing text at the intersection of the 8th column and 2nd row. Then, use the lcd.print() function to print some text on the LCD screen:

    lcd.print(" ml/s");

Verify and upload the sketch on the Arduino board. Blow some air through the water flow sensor using your mouth. You can see information on the LCD screen such as pulses per second, water flow rate, and the total water volume since startup:

LCD screen output

Converting your water meter to a web server

In the previous steps, you learned how to display your water flow sensor's readings and calculate the water flow rate and total volume on the Arduino Serial Monitor. In this step, you will learn how to integrate a simple web server with your water flow sensor and read your water flow sensor's readings remotely.
You can make an Arduino web server with the Arduino WiFi Shield or the Arduino Ethernet Shield. The following steps explain how to convert the Arduino water flow meter into a web server with the Arduino WiFi shield:

1. Remove all the wires you connected to your Arduino in the previous sections of this article.
2. Stack the Arduino WiFi shield on the Arduino board using wire-wrap headers. Make sure the Arduino WiFi shield is properly seated on the Arduino board.
3. Now, reconnect the wires from the water flow sensor to the WiFi shield. Use the same pin numbers as in the previous steps.
4. Connect the 9V DC power supply to the Arduino board.
5. Connect your Arduino to your PC using the USB cable and upload the next sketch. Once the upload is complete, remove the USB cable from the Arduino.
6. Open a new Arduino IDE and copy the sketch named B04844_03_05.ino. Change the following two lines according to your WiFi network settings, as shown here:

    char ssid[] = "MyHomeWiFi";
    char pass[] = "secretPassword";

7. Verify and upload the sketch on the Arduino board.
8. Blow air through the water flow sensor using your mouth, or better, connect the water flow sensor to a water pipeline to see the actual operation with water.
9. Open your web browser, type the WiFi shield's IP address assigned by your network, and hit the Enter key:

    http://192.168.1.177

You can see your water flow sensor's pulses per second, flow rate, and total volume on the web page. The page refreshes every 5 seconds to display updated information.

You can add an LCD screen to the Arduino WiFi shield as discussed in the previous step. However, remember that you can't use some of the pins on the WiFi shield because they are reserved for SD (pin 4), SS (pin 10), and SPI (pins 11, 12, 13). We have not included the circuit and source code here in order to keep the Arduino sketch simple.
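The book's B04844_03_05.ino is not reproduced in this article. Purely as an orientation aid, here is a minimal, hypothetical sketch of the web-server part built on the standard Arduino WiFi shield library; it serves a self-refreshing page with placeholder readings and omits the pulse-counting code shown earlier, so treat it as a sketch of the structure rather than the book's code:

```cpp
// Hypothetical minimal web server for the Arduino WiFi shield (not the book's B04844_03_05.ino).
// It answers any HTTP request with the current readings and asks the browser to refresh every 5 seconds.
#include <SPI.h>
#include <WiFi.h>

char ssid[] = "MyHomeWiFi";        // your network SSID
char pass[] = "secretPassword";    // your network password

WiFiServer server(80);             // listen for HTTP requests on port 80

// These would be updated by the pulse-counting code shown earlier in the article.
volatile unsigned int pulse = 0;
float flow_rate_ml_s = 0.0;
float total_volume_ml = 0.0;

void setup()
{
  Serial.begin(9600);
  while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
    delay(5000);                   // keep trying to join the network
  }
  server.begin();
  Serial.print("Server IP address: ");
  Serial.println(WiFi.localIP());
}

void loop()
{
  WiFiClient client = server.available();
  if (client) {
    // Discard the incoming HTTP request data.
    while (client.connected() && client.available()) {
      client.read();
    }
    // Send a minimal HTTP response with the readings.
    client.println("HTTP/1.1 200 OK");
    client.println("Content-Type: text/html");
    client.println("Refresh: 5");  // reload the page every 5 seconds
    client.println("Connection: close");
    client.println();
    client.print("<html><body><h1>Water Meter</h1>");
    client.print("<p>Pulses per second: "); client.print(pulse); client.print("</p>");
    client.print("<p>Flow rate: "); client.print(flow_rate_ml_s); client.print(" ml/s</p>");
    client.print("<p>Volume: "); client.print(total_volume_ml); client.print(" ml</p>");
    client.print("</body></html>");
    delay(10);
    client.stop();
  }
}
```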
A little bit about plumbing

Typically, the direction of the water flow is indicated by an arrow mark on top of the water flow meter's enclosure. Also, you can mount the water flow meter either horizontally or vertically, according to its specifications. Some water flow meters can be mounted both horizontally and vertically. You can install your water flow meter on a half-inch pipeline using normal BSP pipe connectors. The outer diameter of the connector is 0.78" and the inner thread size is half an inch. The water flow meter has threaded ends on both sides. Connect the threaded side of the PVC connectors to both ends of the water flow meter. Use thread seal tape to seal the connection, and then connect the other ends to an existing half-inch pipeline using PVC pipe glue or solvent cement. Make sure that you connect the water flow meter to the pipeline in the correct direction; see the arrow mark on top of the water flow meter for the flow direction.

BSP pipe line connector made of PVC

Securing the connection between the water flow meter and the pipe connector using thread seal tape and PVC solvent cement. Image taken from https://www.flickr.com/photos/ttrimm/7355734996/

Summary

In this article, you gained hands-on experience and knowledge about water flow sensors and counting pulses while calculating and displaying them. Finally, you made a simple web server to allow users to read the water meter through the Internet. You can apply this to any type of liquid, but make sure to select the correct flow sensor, because some liquids react chemically with the material the sensor is made of. You can Google to find which flow sensors support your preferred liquid type.

Resources for Article:

Further resources on this subject:
- The Arduino Mobile Robot [article]
- Arduino Development [article]
- Getting Started with Arduino [article]


1 in 3 developers want to be entrepreneurs. What does it take to make a successful transition?

Fatema Patrawala
04 Jun 2018
9 min read
Much of the change we have seen over the last decade has been driven by a certain type of person: part developer, part entrepreneur. From Steve Jobs to Bill Gates, Larry Page to Sergey Brin, Mark Zuckerberg to Elon Musk, the list is long. These people have built their entrepreneurial success on software. In doing so, they've changed the way we live and changed the way we look at the global tech landscape. Part of the reason software is eating the world is the entrepreneurial spirit of people like these. Silicon Valley is a testament to this! That is why a combination of tech and leadership skills in developers is the holy grail even today.

From this perspective, we weren't surprised to find that a significant number of developers working today have entrepreneurial ambitions. In this year's Skill Up survey we asked respondents what they hope to be doing in the next 5 years. A little more than one third of the respondents said they would like to be working as a founder of their own company.

Source: Packt Skill Up Survey

But being a successful entrepreneur requires a completely new set of skills from those that make one a successful developer. It requires a different mindset. True, certain skills you might already possess; others you are going to have to acquire. Alongside those skills, realizing the power of purpose and values will be the key factor in building a successful venture. You will need to clearly articulate why you're doing something and what you're doing. In this way you will attract and keep customers who clearly know the purpose of your business. Let us see what it takes to transition from a developer to a successful entrepreneur.

Think customer first, then product

Whatever you are thinking right now, think bigger! Developers love making things, and that's a good place to begin. In tech, ideas often originate from technical conversations and problem solving: "Let's do X on Z platform because the Y APIs make it possible". This approach works sometimes, but it isn't a sustainable business model. That's because customers don't think like that. Their thought process is more like this: "Is X for me?", "Will X solve my problem?", "Will X save me time or money?". Think about the customer first and then begin developing your product. You can't create a product and then check whether it is in demand; that sounds obvious, but it does happen. Do your research, think about the market, and get to know your customer first.

In other words, to be a successful entrepreneur, you need to go one step further: yes, care about your products, but think about them from a long-term perspective. Think big picture: what's going to bring real value to the lives of other people? Search for opportunities to solve problems in other people's lives, adopting a larger and more market-oriented approach. Adding a bit of domain knowledge, like market research, product-market fit, and market positioning, to your to-do list should be a vital first step in your journey.

Remember, being a good technician is not enough

Most developers are comfortable with, and probably excel at, being good technicians. This means you're good at writing beautiful code, producing something tangible, and cranking away on each task, moving one step closer to the project launch date. Great. But it takes more than a technician to run a successful business. It's critical to look ahead into both the near and the long term. What's going to provide the best ROI next year?
Where do I want to be in 5 years' time? To do this you need to be organized and focused. It is critically important to determine your goals and objectives. Simply put, you need to evolve from being a problem solver and great executioner into a visionary and strategic thinker. The developer-entrepreneur is a rare breed of doer-thinker.

Be passionate and focused

Perseverance is the single most important trait found in successful entrepreneurs. As a developer you are primed to think logically and come to conclusions, which is a great trait to possess in most cases. However, in entrepreneurship there will be times when you do all the right things and meet the right people and yet meet with failure. To get your company off the ground, you need to be passionate and excited about what you're working on. This enthusiasm will often be the only thing to carry you through late nights, countless setbacks, and tough situations. You must stay open, even if logic dictates otherwise. Most startups fail either because they read the market wrong or because they didn't stay long enough in the race. You also need to know how to focus closely on the very next step to get closer to your ultimate goal. There will be many distracting forces when trying to build a business; focusing on one particular task will not be easy, and you will need to master this skill religiously.

Become amazing at networking

It truly isn't only about what you know or what you have developed. Sure, what you know is important. But what's far more important is who you know. There's a reason why certain people can make so much progress in such a brief period: these business networking power players command the room by bringing the right people together. As an entrepreneur, if there's one thing that you should focus on, it's becoming a truly skilled business networker. Imagine having an idea for a business that's so wonderful that you can pick up the phone and call four or five people who can help you turn that idea into a reality.

Sharpen your strategic and critical thinking skills

As an entrepreneur, it's essential to possess sharp critical thinking skills. When you think critically, you ask the hard, tactical questions while expanding the lens to see the wider picture. Without thinking critically, it's hard to assess whether the most creative application that you have developed really has a life out in the world. You are always going to love what you have created, but it is important to wear multiple hats and think from the users' perspective. Get it reviewed by the industry stalwarts in your community to catch that silly but important area that you could have missed while planning. With critical thinking it is also important that you strategize and plan things out for the smooth functioning of your business.

Find and manage the right people

In reality, businesses are like living creatures, whose organs all need to work in harmony. There is no major organ more critical than another, and a failure in one system can bring down all the others. Developers build things, marketers get customers, and salespeople sell products. The trick is to find and employ the right people for your business if you are to create a sustainable one. Your ideas and thinking will need to align with the ideas of the people that you are working with. Only by learning to leverage employees, vendors, and other resources can you build a scalable and sustainable company.
Learn the art of selling an idea

Every entrepreneur needs to play the role of a salesperson, whether they like it or not. To build a successful business venture you will need to sell your ideas, products, or services to customers, investors, or employees. You should be ready to work and be there when customers are ready to buy. Alternatively, you should also know how to let go and move on when they are not. The ability to convince others that you are going to provide them with the maximum product value will help you crack that mission-critical deal.

Be flexible. Create contingency plans

Things rarely go according to plan in software development. Project scope creeps, client expectations rise, or bugs always seem to mysteriously appear. Even the most seasoned developers can't predict all possible scenarios, so they have to be ready with contingencies. This is also true in the world of business startups. Despite the best-laid business plans, entrepreneurs need to be prepared for the unexpected and be able to roll with the punches. You need to be well prepared with an option A, B, or C. Running a business is like sea surfing: you've got to be nimble enough, both in your thinking and in your operations, to ride the waves, high and low.

Always measure performance

"If you can't measure it, you can't improve it." Peter Drucker's point feels obvious but is one worth bearing in mind always. If you can't measure something, you'll never improve. Measuring the key functions of your business will help you scale it faster. Otherwise you risk running your firm blindly, without any navigational path to guide it.

Closing comments

Becoming an entrepreneur and starting your own business is one of life's most challenging and rewarding journeys. Having said that, for all of the perks that the entrepreneurial path offers, it's far from being all roses. Being an entrepreneur means being a warrior. It means being clever, hungry, and often a ruthless competitor. If you are a developer turned entrepreneur, we'd love to hear your take on the topic and your own personal journey. Share your comments below or write to us at [email protected].

Developers think managers don't know enough about technology. And that's hurting business.
96% of developers believe developing soft skills is important
Don't call us ninjas or rockstars, say developers


Apache Solr PHP Integration

Packt
25 Nov 2013
7 min read
(For more resources related to this topic, see here.)

We will be looking at installation on both Windows and Linux environments. We will be using the Solarium library for communication between Solr and PHP. This article gives a brief overview of the Solarium library and showcases some of the concepts and configuration options on the Solr end for implementing certain features.

Calling Solr using PHP code

A ping query is used in Solr to check the status of the Solr server. The Solr URL for executing the ping query is http://localhost:8080/solr/collection1/admin/ping/?wt=json.

Response of Solr ping query in browser

We can use Curl to get the ping response from Solr via PHP code; sample code for executing the previous ping query is as follows:

    $curl = curl_init("http://localhost:8080/solr/collection1/admin/ping/?wt=json");
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    $output = curl_exec($curl);
    $data = json_decode($output, true);
    echo "Ping Status : ".$data["status"].PHP_EOL;

Though Curl can be used to execute almost any query on Solr, it is preferable to use a library which does the work for us. In our case we will be using Solarium. To execute the same query on Solr using the Solarium library, we first include the Solarium library in our code and define the connection parameters for our Solr server:

    include_once("vendor/autoload.php");
    $config = array(
        "endpoint" => array(
            "localhost" => array(
                "host" => "127.0.0.1",
                "port" => "8080",
                "path" => "/solr",
                "core" => "collection1",
            )
        )
    );

Next we create a Solarium client with the previous Solr configuration and call the createPing() function to create the ping query:

    $client = new Solarium\Client($config);
    $ping = $client->createPing();

Finally, execute the ping query and get the result:

    $result = $client->ping($ping);
    $result->getStatus();

The output should be similar to the one shown below.

Output of ping query using PHP

Adding documents to Solr index

To create a Solr index, we need to add documents to the Solr index using the command line, the Solr web interface, or our PHP program. But before we create a Solr index, we need to define the structure or schema of the Solr index. A schema consists of fields and field types. It defines how each field will be treated and handled during indexing or during search. Let us see a small piece of code for adding documents to the Solr index using PHP and the Solarium library. Create a Solarium client, create an instance of the update query, create the document in PHP, and finally add fields to the document:

    $client = new Solarium\Client($config);
    $updateQuery = $client->createUpdate();
    $doc1 = $updateQuery->createDocument();
    $doc1->id = 112233445;
    $doc1->cat = 'book';
    $doc1->name = 'A Feast For Crows';
    $doc1->price = 8.99;
    $doc1->inStock = 'true';
    $doc1->author = 'George R.R. Martin';
    $doc1->series_t = '"A Song of Ice and Fire"';

The id field has been marked as unique in our schema, so we will have to keep different values of the id field for the different documents that we add to Solr. Add the documents to the update query, followed by a commit command, and finally execute the query:

    $updateQuery->addDocuments(array($doc1));
    $updateQuery->addCommit();
    $result = $client->update($updateQuery);

Let us execute the code:

    php insertSolr.php

After executing the code, a search for martin will give these documents in the result:

    http://localhost:8080/solr/collection1/select/?q=martin

Document added to Solr index

Executing search on Solr index

Documents added to the Solr index can be searched using the following piece of PHP code:

    $selectConfig = array(
        'query' => 'cat:book AND author:Martin',
        'start' => 3,
        'rows' => 3,
        'fields' => array('id','name','price','author'),
        'sort' => array('price' => 'asc')
    );
    $query = $client->createSelect($selectConfig);
    $resultSet = $client->select($query);

The above code creates a simple Solr query and searches for book in the cat field and Martin in the author field. The results are sorted in ascending order of price, and the fields returned are the id, name of the book, price, and author of the book. Pagination has been implemented as 3 results per page, so this query returns results for the 2nd page, starting from the 3rd result.

In addition to this simple select query, Solr also supports some advanced query modes known as dismax and edismax. With the help of these query modes, we can boost certain fields to give them more importance in our query. We can also use function queries to do some dynamic boosting based on values in fields. If no sorting is provided, the Solr results are sorted by the score of the documents, which is calculated based on the terms in the query and the matching terms in the documents in the index. The score is calculated for each document in the result set using two main factors: term frequency, known as tf, and inverse document frequency, known as idf. In addition to these, Solr provides a way of narrowing down the results using filter queries. Also, facets can be created based on fields in the index, which end users can use to narrow down the results.
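The article mentions filter queries and facets without showing code. As a rough illustration of how they look in Solarium (method names follow the Solarium 3.x documentation; the field and key names here are assumptions based on the sample schema above, not taken from the book), reusing the $config defined earlier:

```php
<?php
include_once("vendor/autoload.php");

$client = new Solarium\Client($config);

$query = $client->createSelect();
$query->setQuery('cat:book');

// Filter query: narrow results to in-stock books without affecting relevance scoring.
$query->createFilterQuery('instock')->setQuery('inStock:true');

// Field facet: count how many matching books each author has.
$facetSet = $query->getFacetSet();
$facetSet->createFacetField('author_facet')->setField('author');

$resultSet = $client->select($query);

// Print the facet counts returned with the result set.
$facet = $resultSet->getFacetSet()->getFacet('author_facet');
foreach ($facet as $value => $count) {
    echo $value . ' [' . $count . ']' . PHP_EOL;
}
```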
Highlighting search results using PHP and Solr

Solr can be used to highlight the fields returned in a search result based on the query. Here is sample code for highlighting the results for the search keyword harry:

    $query->setQuery('harry');
    $query->setFields(array('id','name','author','series_t','score','last_modified'));

Get the highlighting component from the query, set the fields to be highlighted, and also set the HTML tags to be used for highlighting:

    $hl = $query->getHighlighting();
    $hl->setFields('name,series_t');
    $hl->setSimplePrefix('<strong>')->setSimplePostfix('</strong>');

Once the query is run and the result set is received, we need to retrieve the highlighted results from the result set. Here is the output for the highlighting code:

Highlighted search results
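The retrieval loop itself is not shown in the article. The following is a hedged sketch of how the highlighted fields are typically pulled out of a Solarium result set (per the Solarium 3.x documentation; variable names are illustrative):

```php
<?php
// Execute the select query built above and walk the highlighting results.
$resultSet = $client->select($query);
$highlighting = $resultSet->getHighlighting();

foreach ($resultSet as $document) {
    echo 'Document: ' . $document->name . PHP_EOL;

    // Highlighted snippets are keyed by the document's unique id.
    $highlightedDoc = $highlighting->getResult($document->id);
    if ($highlightedDoc) {
        foreach ($highlightedDoc as $field => $snippets) {
            // Each field can return multiple snippets; join them for display.
            echo $field . ': ' . implode(' (...) ', $snippets) . PHP_EOL;
        }
    }
}
```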
In addition to highlighting, Solr can be used to create a spelling suggester and a spell checker. The spelling suggester can be used to propose input queries to the end user as the user keeps typing. The spell checker can be used to offer spelling corrections, similar to "did you mean", to the user. Solr can also be used to find documents which are similar to a certain document based on words in certain fields. This functionality of Solr is known as more like this and is exposed via Solarium by the MoreLikeThis component. Solr also provides grouping of the results based on a particular query or a certain field.

Scaling Solr

Solr can be scaled to handle a large number of search requests by using a master-slave architecture. Also, if the index is huge, it can be sharded across multiple Solr instances, and we can run a distributed search to get results for our query from all the sharded instances. Solarium provides a load balancing plug-in which can be used to load balance queries across a master-slave architecture.

Summary

Solr provides an extensive list of features for implementing search. These features can be easily accessed in PHP using the Solarium library to build a full-featured search application which can be used to power search on any website.

Resources for Article:

Further resources on this subject:
- Apache Solr Configuration [Article]
- Getting Started with Apache Solr [Article]
- Making Big Data Work for Hadoop and Solr [Article]


How to create a Breakout game with Godot Engine – Part 1

George Marques
22 Nov 2016
8 min read
The Godot Engine is a piece of open source software designed to help you make any kind of game. It possesses a great number of features to ease the work flow of game development. This two-part article will cover some basic features of the engine—such as physics and scripting—by showing you how to make a game with the mechanics of the classic Breakout. To install Godot, you can download it from the website and extract it in your place of preference. You can also install it through Steam. While the latter has a larger download, it has all the demos and export templates already installed. Setting up the project When you first open Godot, you are presented with the Project Manager window. In here, you can create new or import existing projects. It's possible to run a project directly from here without opening the editor itself. The Templates tab shows the available projects in the online Asset Library where you can find and download community-made content. Note that there might also be a console window, which shows some messages of what's happening on the engine, like warnings and error messages. This window must remain open but it's also helpful for debugging. It will not show up in your final exported version of the game. Project Manager To start creating our game, let's click on the New Project button. Using the dialog interface, navigate to where you want to place it, and then create a new folder just for the project. Select it and choose a name for the project ("Breakout" may be a good choice). Once you do that, the new project will be shown at the top of the list. You can double-click to open it in the editor. Creating the paddle You will see first the main screen of the Godot editor. I rearranged the docks to suit my preferences, but you can leave the default one or change it to something you like. If you click on the Settings button at the top-right corner, you can save and load layouts. Godot uses a system in which every object is a Node. A Scene is just a tree of nodes and can also be used as a "prefab", like other engines call it. Every scene can be instanced as a part of another scene. This helps in dividing the project and reusing the work. This is all explained in the documentation and you can consult it if you have any doubt. Main screen Now we are going to create a scene for the paddle, which can later be instanced on the game scene. I like to start with an object that can be controlled by the player so that we can start to feel what the interactivity looks like. On the Scene dock, click on the "+" (plus) button to add a new Node (pressing Ctrl + A will also work). You'll be presented with a large collection of Nodes to choose from, each with its own behavior. For the paddle, choose a StaticBody2D. The search field can help you find it easily. This will be root of the scene. Remember that a scene is a tree of nodes, so it needs a "root" to be its main anchor point. You may wonder why we chose a static body for the paddle since it will move. The reason for using this kind of node is that we don't want it to be moved by physical interaction. When the ball hits the paddle, we want it to be kept in the same place. We will move it only through scripting. Select the node you just added in the Scene dock and rename it to Paddle. Save the scene as paddle.tscn. Saving often is good to avoid losing your work. Add a new node of the type Sprite (not Sprite3D, since this is a 2D game). This is now a child of the root node in the tree. The sprite will serve as the image of the paddle. 
You can use the following image in it: Paddle Save the image in the project folder and use the FileSystem dock in the Godot editor to drag and drop into the Texture property on the Inspector dock. Any property that accepts a file can be set with drag and drop. Now the Static Body needs a collision area so that it can tell where other physics objects (like the ball) will collide. To do this, select the Paddle root node in the Scene dock and add a child node of type CollisionShape2D to it. The warning icon is there because we didn't set a shape to it yet, so let's do that now. On the Inspector dock, set the Shape property to a New RectangleShape2D. You can set the shape extents visually using the editor. Or you can click on the ">" button just in front of the Shape property on the Inspector to edit its properties and set the extents to (100, 15) if you're using the provided image. This is the half-extent, so the rectangle will be doubled in each dimension based on what you set here. User input While our paddle is mostly done, it still isn't controllable. Before we delve into the scripting world, let's set up the Input Map. This is a Godot functionality that allows us to abstract user input into named actions. You can later modify the keys needed to move the player later without changing the code. It also allows you to use multiple keys and joystick buttons for a single action, making the game work on keyboard and gamepad seamlessly. Click on the Project Settings option under the Scene menu in the top left of the window. There you can see the Input Map tab. There are some predefined actions, which are needed by UI controls. Add the actions move_left and move_right that we will use to move the paddle. Then map the left and right arrow keys of the keyboard to them. You can also add a mapping to the D-pad left and right buttons if you want to use a joystick. Close this window when done. Now we're ready to do some coding. Right-click on the Paddle root node in the Scene dock and select the option Add Script from the context menu. The "Create Node Script" dialog will appear and you can use the default settings, which will create a new script file with the same name as the scene (with a different extension). Godot has its own script language called "GDScript", which has a syntax a bit like Python and is quite easy to learn if you are familiar with programming. 
You can use the following code on the Paddle script:

extends StaticBody2D

# Paddle speed in pixels per second
export var speed = 150.0
# The "export" keyword allows you to edit this value from the Inspector

# Holds the limits of the screen
var left_limit = 0
var right_limit = 0

# This function is called when this node enters the game tree
# It is useful for initialization code
func _ready():
    # Enable the processing function for this node
    set_process(true)
    # Set the limits to the paddle based on the screen size
    left_limit = get_viewport_rect().pos.x + (get_node("Sprite").get_texture().get_width() / 2)
    right_limit = get_viewport_rect().pos.x + get_viewport_rect().size.x - (get_node("Sprite").get_texture().get_width() / 2)

# The processing function
func _process(delta):
    var direction = 0
    if Input.is_action_pressed("move_left"):
        # If the player is pressing the left arrow, move to the left,
        # which means going in the negative direction of the X axis
        direction = -1
    elif Input.is_action_pressed("move_right"):
        # Same as above, but this time we go in the positive direction
        direction = 1

    # Create a movement vector
    var movement = Vector2(direction * speed * delta, 0)
    # Move the paddle using vector arithmetic
    set_pos(get_pos() + movement)

    # Here we clamp the paddle position so it does not go off the screen
    if get_pos().x < left_limit:
        set_pos(Vector2(left_limit, get_pos().y))
    elif get_pos().x > right_limit:
        set_pos(Vector2(right_limit, get_pos().y))

If you play the scene (using the top center bar or pressing F6), you can see the paddle and move it with the keyboard. You may find it too slow, but this will be addressed in part two of this article when we set up the game scene.

Up next

You now have a project set up and a paddle on the screen that can be controlled by the player. You also have some understanding of how the Godot Engine operates with its nodes, scenes, and scripts. In part two, you will learn how to add the ball and the destroyable bricks.

About the Author: George Marques is a Brazilian software developer who has been playing with programming in a variety of environments since he was a kid. He works as a freelance programmer for web technologies based on open source solutions such as WordPress and Open Journal Systems. He is also one of the regular contributors to the Godot Engine, helping to solve bugs and add new features to the software, while also providing solutions to the community for the questions they have.

DeepCube: A new deep reinforcement learning approach solves the Rubik’s cube with no human help

Savia Lobo
29 Jul 2018
4 min read
Humans have long excelled at games of all kinds, indoor and outdoor alike. Over recent years, however, machines using machine learning algorithms have increasingly been playing popular board games such as Go and chess, and beating human players. If you think machines are only good at these black-and-white board games, think again: one recent achievement is DeepCube, a machine that tackles a far more colorful puzzle, the Rubik's cube. The Rubik's cube is a challenging puzzle that has captivated people since childhood, and solving one is a brag-worthy accomplishment for most adults. A group of UC Irvine researchers has now developed a new algorithm, known as Autodidactic Iteration and used by DeepCube, which can solve a Rubik's cube with no human assistance.

The Erno Rubik's cube conundrum

The Rubik's cube, a popular three-dimensional puzzle, was developed by Erno Rubik in 1974. Rubik himself worked for a month to figure out the first algorithm to solve the cube. The UC Irvine researchers note: "Since then, the Rubik's Cube has gained worldwide popularity and many human-oriented algorithms for solving it have been discovered. These algorithms are simple to memorize and teach humans how to solve the cube in a structured, step-by-step manner." Once the cube became popular among mathematicians and computer scientists, the question of how to solve it with the fewest possible turns became mainstream; in 2014, it was proved that any scramble can be solved in at most 26 quarter turns. More recently, computer scientists have looked for ways to let machines solve the Rubik's cube. As a first step, they tried the same approach that had succeeded for Go and chess, but it did not work well for the Rubik's cube.

The approach: the Rubik's cube versus chess and Go

The algorithms used for Go and chess are given the rules of the game and then play against themselves, and the learning system is rewarded according to its performance at every step. This reward process is crucial, because it is what lets the machine distinguish a good move from a bad one; guided by it, the machine gradually learns to play well. For the Rubik's cube, by contrast, rewards are very hard to define: a random turn gives little indication of whether the new configuration is any closer to a solution, the number of possible turns is effectively unlimited, and stumbling onto the rewarded end state by chance is extremely rare. Chess and Go also have large search spaces, but each move can be evaluated and rewarded accordingly; this is not the case for the Rubik's cube. The UC Irvine researchers therefore found a way for the machine to create its own rewards: the Autodidactic Iteration method used by DeepCube.

Autodidactic Iteration: solving the Rubik's cube without human knowledge

DeepCube's Autodidactic Iteration (ADI) is a form of deep learning known as deep reinforcement learning (DRL). It combines classic reinforcement learning, deep learning, and Monte Carlo Tree Search (MCTS). When DeepCube is given an unsolved cube, it must decide whether a specific move improves on the existing configuration, which means it must be able to evaluate that move. Autodidactic Iteration starts from the finished cube and works backwards to find a configuration that is similar to the proposed move. Although this process is imperfect, deep learning helps the system figure out which moves are generally better than others.
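To make the idea of Autodidactic Iteration a little more concrete, here is a deliberately tiny Python sketch of the same training loop on a toy puzzle (an integer that must be brought back to zero) rather than a real cube. The value table stands in for DeepCube's neural network, and every name and number here is illustrative only, not taken from the paper:

import random

MOVES = [+1, -1]   # the toy puzzle's two "twists"
SOLVED = 0         # the solved state

value = {SOLVED: 0.0}   # stands in for the value network

def estimate(state):
    # Unknown states start with a pessimistic value
    return value.get(state, -10.0)

def adi_sweep(scramble_depth=5, samples=200, lr=0.5):
    for _ in range(samples):
        # 1. Generate training states by scrambling backwards from the solved state
        state = SOLVED
        for _ in range(random.randint(1, scramble_depth)):
            state += random.choice(MOVES)
        # 2. Build a target with a one-step lookahead: reward 1 for reaching
        #    the solved state, 0 otherwise, plus the value of the best child
        target = max((1.0 if state + m == SOLVED else 0.0) + estimate(state + m)
                     for m in MOVES)
        # 3. Nudge the estimate towards the target (stands in for a gradient step)
        value[state] = estimate(state) + lr * (target - estimate(state))

for _ in range(50):
    adi_sweep()

# States closer to the solved state end up with higher values
print({s: round(v, 1) for s, v in sorted(value.items())})

The point of the sketch is only the shape of the algorithm: training states are generated by scrambling backwards from the solved state, and the learning target for each state comes from a one-step lookahead rather than from a hand-crafted reward.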
Researchers trained a network using ADI for 2,000,000 iterations. They further reported, “The network witnessed approximately 8 billion cubes, including repeats, and it trained for a period of 44 hours. Our training machine was a 32-core Intel Xeon E5-2620 server with three NVIDIA Titan XP GPUs.” After training, the network uses a standard search tree to hunt for suggested moves for each configuration. The researchers in their paper said, “Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves — less than or equal to solvers that employ human domain knowledge.” Researchers also wrote, “DeepCube is able to teach itself how to reason in order to solve a complex environment with only one reward state using pure reinforcement learning.” Furthermore, this approach will have a potential to provide approximate solutions to a broad class of combinatorial optimization problems. To explore Deep Reinforcement Learning check out our latest releases, Hands-On Reinforcement Learning with Python and Deep Reinforcement Learning Hands-On. How greedy algorithms work Creating a reference generator for a job portal using Breadth First Search (BFS) algorithm Anatomy of an automated machine learning algorithm (AutoML)    


Learn computer vision applications in Open CV

Packt
07 Jun 2011
16 min read
  OpenCV 2 Computer Vision Application Programming Cookbook Over 50 recipes to master this library of programming functions for real-time computer vision         Read more about this book       OpenCV (Open Source Computer Vision) is an open source library containing more than 500 optimized algorithms for image and video analysis. Since its introduction in 1999, it has been largely adopted as the primary development tool by the community of researchers and developers in computer vision. OpenCV was originally developed at Intel by a team led by Gary Bradski as an initiative to advance research in vision and promote the development of rich, vision-based CPU-intensive applications. In this article by Robert Laganière, author of OpenCV 2 Computer Vision Application Programming Cookbook, we will cover: Calibrating a camera Computing the fundamental matrix of an image pair Matching images using random sample consensus Computing a homography between two images (For more resources related to the article, see here.) Introduction Images are generally produced using a digital camera that captures a scene by projecting light onto an image sensor going through its lens. The fact that an image is formed through the projection of a 3D scene onto a 2D plane imposes the existence of important relations between a scene and its image, and between different images of the same scene. Projective geometry is the tool that is used to describe and characterize, in mathematical terms, the process of image formation. In this article, you will learn some of the fundamental projective relations that exist in multi-view imagery and how these can be used in computer vision programming. But before we start the recipes, let's explore the basic concepts related to scene projection and image formation. Image formation Fundamentally, the process used to produce images has not changed since the beginning of photography. The light coming from an observed scene is captured by a camera through a frontal aperture and the captured light rays hit an image plane (or image sensor) located on the back of the camera. Additionally, a lens is used to concentrate the rays coming from the different scene elements. This process is illustrated by the following figure: Here, do is the distance from the lens to the observed object, di is the distance from the lens to the image plane, and f is the focal length of the lens. These quantities are related by the so-called thin lens equation: In computer vision, this camera model can be simplified in a number of ways. First, we can neglect the effect of the lens by considering a camera with an infinitesimal aperture since, in theory, this does not change the image. Only the central ray is therefore considered. Second, since most of the time we have do>>di, we can assume that the image plane is located at the focal distance. Finally, we can notice from the geometry of the system, that the image on the plane is inverted. We can obtain an identical but upright image by simply positioning the image plane in front of the lens. Obviously, this is not physically feasible, but from a mathematical point of view, this is completely equivalent. This simplified model is often referred to as the pin-hole camera model and it is represented as follows: From this model, and using the law of similar triangles, we can easily derive the basic projective equation: The size (hi) of the image of an object (of height ho) is therefore inversely proportional to its distance (do) from the camera which is naturally true. 
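If you want to get a feel for this relation numerically, here is a short Python/NumPy sketch of the basic projective equation; the focal length, object height, and distances are made-up values chosen purely for illustration:

import numpy as np

f = 0.035             # focal length in metres (a 35 mm lens), illustrative value
object_height = 1.8   # ho: height of the observed object, in metres
distances = np.array([2.0, 5.0, 10.0, 20.0])  # do: distances to the object, in metres

# Basic projective equation: hi = f * ho / do
image_heights = f * object_height / distances

for do, hi in zip(distances, image_heights):
    print(f"object at {do:4.1f} m -> image height {hi * 1000:5.2f} mm")

Doubling the distance halves the image height, which is exactly the inverse proportionality described above.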
This relation allows the position of the image of a 3D scene point to be predicted onto the image plane of a camera. Calibrating a camera From the introduction of this article, we learned that the essential parameters of a camera under the pin-hole model are its focal length and the size of the image plane (which defines the field of view of the camera). Also, since we are dealing with digital images, the number of pixels on the image plane is another important characteristic of a camera. Finally, in order to be able to compute the position of an image's scene point in pixel coordinates, we need one additional piece of information. Considering the line coming from the focal point that is orthogonal to the image plane, we need to know at which pixel position this line pierces the image plane. This point is called the principal point. It could be logical to assume that this principal point is at the center of the image plane, but in practice, this one might be off by few pixels depending at which precision the camera has been manufactured. Camera calibration is the process by which the different camera parameters are obtained. One can obviously use the specifications provided by the camera manufacturer, but for some tasks, such as 3D reconstruction, these specifications are not accurate enough. Camera calibration will proceed by showing known patterns to the camera and analyzing the obtained images. An optimization process will then determine the optimal parameter values that explain the observations. This is a complex process but made easy by the availability of OpenCV calibration functions. How to do it... To calibrate a camera, the idea is show to this camera a set of scene points for which their 3D position is known. You must then determine where on the image these points project. Obviously, for accurate results, we need to observe several of these points. One way to achieve this would be to take one picture of a scene with many known 3D points. A more convenient way would be to take several images from different viewpoints of a set of some 3D points. This approach is simpler but requires computing the position of each camera view, in addition to the computation of the internal camera parameters which fortunately is feasible. OpenCV proposes to use a chessboard pattern to generate the set of 3D scene points required for calibration. This pattern creates points at the corners of each square, and since this pattern is flat, we can freely assume that the board is located at Z=0 with the X and Y axes well aligned with the grid. In this case, the calibration process simply consists of showing the chessboard pattern to the camera from different viewpoints. Here is one example of a calibration pattern image: The nice thing is that OpenCV has a function that automatically detects the corners of this chessboard pattern. You simply provide an image and the size of the chessboard used (number of vertical and horizontal inner corner points). The function will return the position of these chessboard corners on the image. If the function fails to find the pattern, then it simply returns false: // output vectors of image pointsstd::vector<cv::Point2f> imageCorners;// number of corners on the chessboardcv::Size boardSize(6,4);// Get the chessboard cornersbool found = cv::findChessboardCorners(image, boardSize, imageCorners); Note that this function accepts additional parameters if one needs to tune the algorithm, which are not discussed here. 
There is also a function that draws the detected corners on the chessboard image with lines connecting them in sequence: //Draw the cornerscv::drawChessboardCorners(image, boardSize, imageCorners, found); // corners have been found The image obtained is seen here: The lines connecting the points shows the order in which the points are listed in the vector of detected points. Now to calibrate the camera, we need to input a set of such image points together with the coordinate of the corresponding 3D points. Let's encapsulate the calibration process in a CameraCalibrator class: class CameraCalibrator { // input points: // the points in world coordinates std::vector<std::vector<cv::Point3f>> objectPoints; // the point positions in pixels std::vector<std::vector<cv::Point2f>> imagePoints; // output Matrices cv::Mat cameraMatrix; cv::Mat distCoeffs; // flag to specify how calibration is done int flag; // used in image undistortion cv::Mat map1,map2; bool mustInitUndistort; public: CameraCalibrator() : flag(0), mustInitUndistort(true) {}; As mentioned previously, the 3D coordinates of the points on the chessboard pattern can be easily determined if we conveniently place the reference frame on the board. The method that accomplishes this takes a vector of the chessboard image filename as input: // Open chessboard images and extract corner pointsint CameraCalibrator::addChessboardPoints( const std::vector<std::string>& filelist, cv::Size & boardSize) { // the points on the chessboard std::vector<cv::Point2f> imageCorners; std::vector<cv::Point3f> objectCorners; // 3D Scene Points: // Initialize the chessboard corners // in the chessboard reference frame // The corners are at 3D location (X,Y,Z)= (i,j,0) for (int i=0; i<boardSize.height; i++) { for (int j=0; j<boardSize.width; j++) { objectCorners.push_back(cv::Point3f(i, j, 0.0f)); } } // 2D Image points: cv::Mat image; // to contain chessboard image int successes = 0; // for all viewpoints for (int i=0; i<filelist.size(); i++) { // Open the image image = cv::imread(filelist[i],0); // Get the chessboard corners bool found = cv::findChessboardCorners( image, boardSize, imageCorners); // Get subpixel accuracy on the corners cv::cornerSubPix(image, imageCorners, cv::Size(5,5), cv::Size(-1,-1), cv::TermCriteria(cv::TermCriteria::MAX_ITER + cv::TermCriteria::EPS, 30, // max number of iterations 0.1)); // min accuracy //If we have a good board, add it to our data if (imageCorners.size() == boardSize.area()) { // Add image and scene points from one view addPoints(imageCorners, objectCorners); successes++; } } return successes;} The first loop inputs the 3D coordinates of the chessboard, which are specified in an arbitrary square size unit here. The corresponding image points are the ones provided by the cv::findChessboardCorners function. This is done for all available viewpoints. Moreover, in order to obtain a more accurate image point location, the function cv::cornerSubPix can be used and as the name suggests, the image points will then be localized at sub-pixel accuracy. The termination criterion that is specified by the cv::TermCriteria object defines a maximum number of iterations and a minimum accuracy in sub-pixel coordinates. The first of these two conditions that is reached will stop the corner refinement process. 
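The same corner detection and sub-pixel refinement can be reproduced with OpenCV's Python bindings. The sketch below is a rough translation of the C++ snippets above rather than code from the book: the file names are placeholders, and it assumes a chessboard with 6x4 inner corners as in the example.

import cv2

board_size = (6, 4)  # number of inner corners per row and column
# Same termination criterion as in the C++ code: 30 iterations or 0.1 accuracy
criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS, 30, 0.1)

filelist = ["chessboard01.jpg", "chessboard02.jpg"]  # placeholder image names
good_views = 0

for name in filelist:
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if not found:
        continue
    # Refine the detected corners to sub-pixel accuracy
    corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
    # Draw the detected corners for visual inspection
    vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.drawChessboardCorners(vis, board_size, corners, found)
    cv2.imwrite(name.replace(".jpg", "_corners.jpg"), vis)
    good_views += 1

print("usable views:", good_views)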
When a set of chessboard corners has been successfully detected, these points are added to our vector of image and scene points: // Add scene points and corresponding image pointsvoid CameraCalibrator::addPoints(const std::vector<cv::Point2f>&imageCorners, const std::vector<cv::Point3f>& objectCorners) { // 2D image points from one view imagePoints.push_back(imageCorners); // corresponding 3D scene points objectPoints.push_back(objectCorners);} The vectors contains std::vector instances. Indeed, each vector element being a vector of points from one view. Once a sufficient number of chessboard images have been processed (and consequently a large number of 3D scene point/2D image point correspondences are available), we can initiate the computation of the calibration parameters: // Calibrate the camera// returns the re-projection errordouble CameraCalibrator::calibrate(cv::Size &imageSize){ // undistorter must be reinitialized mustInitUndistort= true; //Output rotations and translations std::vector<cv::Mat> rvecs, tvecs; // start calibration return calibrateCamera(objectPoints, // the 3D points imagePoints, // the image points imageSize, // image size cameraMatrix,// output camera matrix distCoeffs, // output distortion matrix rvecs, tvecs,// Rs, Ts flag); // set options} In practice, 10 to 20 chessboard images are sufficient, but these must be taken from different viewpoints at different depths. The two important outputs of this function are the camera matrix and the distortion parameters. The camera matrix will be described in the next section. For now, let's consider the distortion parameters. So far, we have mentioned that with the pin-hole camera model, we can neglect the effect of the lens. But this is only possible if the lens used to capture an image does not introduce too important optical distortions. Unfortunately, this is often the case with lenses of lower quality or with lenses having a very short focal length. You may have already noticed that in the image we used for our example, the chessboard pattern shown is clearly distorted. The edges of the rectangular board being curved in the image. It can also be noticed that this distortion becomes more important as we move far from the center of the image. This is a typical distortion observed with fish-eye lens and it is called radial distortion. The lenses that are used in common digital cameras do not exhibit such a high degree of distortion, but in the case of the lens used here, these distortions cannot certainly be ignored. It is possible to compensate for these deformations by introducing an appropriate model. The idea is to represent the distortions induced by a lens by a set of mathematical equations. Once established, these equations can then be reverted in order to undo the distortions visible on the image. Fortunately, the exact parameters of the transformation that will correct the distortions can be obtained together with the other camera parameter during the calibration phase. 
Once this is done, any image from the newly calibrated camera can be undistorted: // remove distortion in an image (after calibration)cv::Mat CameraCalibrator::remap(const cv::Mat &image) { cv::Mat undistorted; if (mustInitUndistort) { // called once per calibration cv::initUndistortRectifyMap( cameraMatrix, // computed camera matrix distCoeffs, // computed distortion matrix cv::Mat(), // optional rectification (none) cv::Mat(), // camera matrix to generate undistorted image.size(), // size of undistorted CV_32FC1, // type of output map map1, map2); // the x and y mapping functions mustInitUndistort= false; } // Apply mapping functions cv::remap(image, undistorted, map1, map2, cv::INTER_LINEAR); // interpolation type return undistorted;} Which results in the following image: As you can see, once the image is undistorted, we obtain a regular perspective image. How it works... In order to explain the result of the calibration, we need to go back to the figure in the introduction which describes the pin-hole camera model. More specifically, we want to demonstrate the relation between a point in 3D at position (X,Y,Z) and its image (x,y) on a camera specified in pixel coordinates. Let's redraw this figure by adding a reference frame that we position at the center of the projection as seen here: Note that the Y-axis is pointing downward to get a coordinate system compatible with the usual convention that places the image origin at the upper-left corner. We learned previously that the point (X,Y,Z) will be projected onto the image plane at (fX/Z,fY/Z). Now, if we want to translate this coordinate into pixels, we need to divide the 2D image position by, respectively, the pixel width (px) and height (py). We notice that by dividing the focal length f given in world units (most often meters or millimeters) by px, then we obtain the focal length expressed in (horizontal) pixels. Let's then define this term as fx. Similarly, fy =f/py is defined as the focal length expressed in vertical pixel unit. The complete projective equation is therefore: Recall that (u0,v0) is the principal point that is added to the result in order to move the origin to the upper-left corner of the image. These equations can be rewritten in matrix form through the introduction of homogeneous coordinates in which 2D points are represented by 3-vectors, and 3D points represented by 4-vectors (the extra coordinate is simply an arbitrary scale factor that need to be removed when a 2D coordinate needs to be extracted from a homogeneous 3-vector). Here is the projective equation rewritten: The second matrix is a simple projection matrix. The first matrix includes all of the camera parameters which are called the intrinsic parameters of the camera. This 3x3 matrix is one of the output matrices returned by the cv::calibrateCamera function. There is also a function called cv::calibrationMatrixValues that returns the value of the intrinsic parameters given a calibration matrix. More generally, when the reference frame is not at the projection center of the camera, we will need to add a rotation (a 3x3 matrix) and a translation vector (3x1 matrix). These two matrices describe the rigid transformation that must be applied to the 3D points in order to bring them back to the camera reference frame. Therefore, we can rewrite the projection equation in its most general form: Remember that in our calibration example, the reference frame was placed on the chessboard. 
Therefore, there is a rigid transformation (rotation and translation) that must be computed for each view. These are in the output parameter list of the cv::calibrateCamera function. The rotation and translation components are often called the extrinsic parameters of the calibration and they are different for each view. The intrinsic parameters remain constant for a given camera/lens system. The intrinsic parameters of our test camera obtained from a calibration based on 20 chessboard images are fx=167, fy=178, u0=156, v0=119. These results are obtained by cv::calibrateCamera through an optimization process aimed at finding the intrinsic and extrinsic parameters that will minimize the difference between the predicted image point position, as computed from the projection of the 3D scene points, and the actual image point position, as observed on the image. The sum of this difference for all points specified during the calibration is called the re-projection error. To correct the distortion, OpenCV uses a polynomial function that is applied to the image point in order to move them at their undistorted position. By default, 5 coefficients are used; a model made of 8 coefficients is also available. Once these coefficients are obtained, it is possible to compute 2 mapping functions (one for the x coordinate and one for the y) that will give the new undistorted position of an image point on a distorted image. This is computed by the function cv::initUndistortRectifyMap and the function cv::remap remaps all of the points of an input image to a new image. Note that because of the non-linear transformation, some pixels of the input image now fall outside the boundary of the output image. You can expand the size of the output image to compensate for this loss of pixels, but you will now obtain output pixels that have no values in the input image (they will then be displayed as black pixels). There's more... When a good estimate of the camera intrinsic parameters are known, it could be advantageous to input them to the cv::calibrateCamera function. They will then be used as initial values in the optimization process. To do so, you just need to add the flag CV_CALIB_USE_INTRINSIC_GUESS and input these values in the calibration matrix parameter. It is also possible to impose a fixed value for the principal point (CV_CALIB_ FIX_PRINCIPAL_POINT), which can often be assumed to be the central pixel. You can also impose a fixed ratio for the focal lengths fx and fy (CV_CALIB_FIX_RATIO) in which case you assume pixels of square shape.
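To tie the whole recipe together, here is a compact, self-contained sketch of the same pipeline using the Python bindings: detect corners in a set of views, calibrate, and undistort one image. It mirrors the C++ class above under the same assumptions (a 6x4 board, arbitrary square-size units, placeholder file names); it is an illustration, not the book's code.

import cv2
import numpy as np

board_size = (6, 4)
criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS, 30, 0.1)

# 3D corner positions in the chessboard reference frame, (i, j, 0), as in addChessboardPoints
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)

object_points, image_points = [], []
image_size = None

for name in ["chessboard01.jpg", "chessboard02.jpg"]:  # placeholder file names
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    image_size = gray.shape[::-1]  # (width, height)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if not found:
        continue
    corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
    object_points.append(objp)
    image_points.append(corners)

# Calibrate: returns the re-projection error, the intrinsic matrix and the distortion coefficients
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, image_size, None, None)
print("re-projection error:", rms)
print("intrinsic matrix:\n", camera_matrix)

# Undistort an image from the newly calibrated camera
img = cv2.imread("chessboard01.jpg")
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
cv2.imwrite("undistorted.jpg", undistorted)

If you already have a good estimate of the intrinsic parameters, flags such as cv2.CALIB_USE_INTRINSIC_GUESS or cv2.CALIB_FIX_PRINCIPAL_POINT can be passed to calibrateCamera here as well, matching the C++ flags discussed above.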


Implementing Proximal Policy Optimization (PPO) algorithm in Unity [Tutorial]

Natasha Mathur
12 Oct 2018
10 min read
ML-agents uses a reinforcement learning technique called PPO or Proximal Policy Optimization. This is the preferred training method that Unity has developed which uses a neural network. This PPO algorithm is implemented in TensorFlow and runs in a separate Python process (communicating with the running Unity application over a socket). In this tutorial, we look at how to implement PPO, a reinforcement learning algorithm used for training the ML agents in Unity. This tutorial also explores training statistics with TensorBoard. This tutorial is an excerpt taken from the book 'Learn Unity ML-Agents – Fundamentals of Unity Machine Learning'  by Micheal Lanham. Before implementing PPO, let's have a look at how to set a special unity environment needed for controlling the Unity training environment. Go through the following steps to learn how to configure the 3D ball environment for external. How to set up a 3D environment in Unity for external training Open the Unity editor and load the ML-Agents demo unityenvironment project. If you still have it open from the last chapter, then that will work as well. Open the 3DBall.scene in the ML-Agents/Examples/3DBall folder. Locate the Brain3DBrain object in the Hierarchy window and select it. In the Inspector window set the Brain Type to External. From the menu, select Edit | Project Settings | Player. From the Inspector window, set the properties as shown in the following screenshot: Setting the Player resolution properties From the menu, select File | Build Settings. Click on the Add Open Scene button and make sure that only the 3DBall scene is active, as shown in the following dialog: Setting the Build Settings for the Unity environment Set the Target Platform to your chosen desktop OS (Windows, in this example) and click the Build button at the bottom of the dialog. You will be prompted to choose a folder to build into. Select the python folder in the base of the ml-agents folder. If you are prompted to enter a name for the file, enter 3DBall. On newer versions of Unity, from 2018 onward, the name of the folder will be set by the name of the Unity environment build folder, which will be python.Be sure that you know where Unity is placing the build, and be sure that the file is in the python folder. At the time of writing, on Windows, Unity will name the executable python.exe and not 3DBall.exe. This is important to remember when we get to set up the Python notebook. With the environment built, we can move on to running the Basics notebook against the app. Let's see how to run the Jupyter notebook to control the environment. Running the environment Open up the Basics Jupyter notebook again; remember that we wanted to leave it open after testing the Python install.  Go through the following steps to run the environment. Ensure that you update the first code block with your environment name, like so: env_name = "python" # Name of the Unity environment binary to launch train_mode = True # Whether to run the environment in training or inference mode We have the environment name set to 'python' here because that is the name of the executable that gets built into the python folder. You can include the file extension, but you don't have to. If you are not sure what the filename is, check the folder; it really will save you some frustration. Go inside the first code block and then click the Run button on the toolbar. Clicking Run will run the block of code you currently have your cursor in. 
This is a really powerful feature of a notebook; being able to move back and forth between code blocks and execute what you need is very useful when building complex algorithms. Click inside the second code block and click Run. The second code block is responsible for loading code dependencies. Note the following line in the second code block: from unityagents import UnityEnvironment This line is where we import the unityagents UnityEnvironment class. This class is our controller for running the environment. Run the third code block. Note how a Unity window will launch, showing the environment. You should also notice an output showing a successful startup and the brain stats. If you encounter an error at this point, go back and ensure you have the env_name variable set with the correct filename. Run the fourth code block. You should again see some more output, but unfortunately, with this control method, you don't see interactive activity. We will try to resolve this issue in a later chapter. Run the fifth code block. This will run through some random actions in order to generate some random output. Finally, run the sixth code block. This will close the Unity environment. Feel free to review the Basics notebook and play with the code. Take advantage of the ability to modify the code or make minor changes and quickly rerun code blocks. Now that we know how to set up a 3D environment in Unity, we can move on to implementation of PPO. How to implement PPO in Unity The implementation of PPO provided by Unity for training has been set up in a single script that we can put together quite quickly. Open up Unity to the unityenvironment sample projects and go through the following steps: Locate the GridWorld scene in the Assets/ML-Agents/Examples/GridWorld folder. Double-click it to open it. Locate the GridWorldBrain and set it to External. Set up the project up using the steps mentioned in the previous section. From the menu, select File | Build Settings.... Uncheck any earlier scenes and be sure to click Add Open Scenes to add the GridWorld scene to the build. Click Build to build the project, and again make sure that you put the output in the python folder. Again, if you are lost, refer to the ML-Agents external brains section. Open a Python shell or Anaconda prompt window. Be sure to navigate to the root source folder, ml-agents. Activate the ml-agents environment with the following: activate ml-agents From the ml-agents folder, run the following command: python python/learn.py python/python.exe --run-id=grid1 --train You may have to use Python 3 instead, depending on your Python engine. This will execute the learn.py script against the python/python.exe environment; be sure to put your executable name if you are not on Windows. Then we set a useful run-id we can use to identify runs later. Finally, we set the --train switch in order for the agent/brain to also be trained. As the script runs, you should see the Unity environment get launched, and the shell window or prompt will start to show training statistics, as shown in the following screenshot of the console window: Training output generated from learn.py Let the training run for as long as it needs. Depending on your machine and the number of iterations, you could be looking at a few hours of training—yes, you read that right. As the environment is trained, you will see the agent moving around and getting reset over and over again. In the next section, we will take a closer look at what the statistics are telling us. 
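Before moving on to the statistics, one practical note: if you plan to compare several training runs in TensorBoard later, the learn.py command shown above can be queued from a small Python script instead of being typed by hand each time. The executable path and run IDs below are placeholders to adapt to your own build:

import subprocess

ENV_PATH = "python/python.exe"   # path to the built Unity environment (placeholder)
RUN_IDS = ["grid1", "grid2"]     # one identifier per experiment

for run_id in RUN_IDS:
    cmd = ["python", "python/learn.py", ENV_PATH, f"--run-id={run_id}", "--train"]
    print("launching:", " ".join(cmd))
    # Run the sessions one after another: only one Unity training
    # environment should be running at a time.
    subprocess.run(cmd, check=True)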
Understanding training statistics with TensorBoard Inherently, ML has its roots in statistics, statistical analysis, and probability theory. While we won't strictly use statistical methods to train our models like some ML algorithms do, we will use statistics to evaluate training performance. Hopefully, you have some memory of high school statistics, but if not, a quick refresher will certainly be helpful. The Unity PPO and other RL algorithms use a tool called TensorBoard, which allows us to evaluate statistics as an agent/environment is running. Go through the following steps as we run another Grid environment while watching the training with TensorBoard: Open the trainer_config.yaml file in Visual Studio Code or another text editor. This file contains the various training parameters we use to train our models. Locate the configuration for the GridWorldBrain, as shown in the following code: GridWorldBrain: batch_size: 32 normalize: false num_layers: 3 hidden_units: 256 beta: 5.0e-3 gamma: 0.9 buffer_size: 256 max_steps: 5.0e5 summary_freq: 2000 time_horizon: 5 Change the num_layers parameter from 1 to 3, as shown in the highlighted code. This parameter sets the number of layers the neural network will have. Adding more layers allows our model to better generalize, which is a good thing. However, this will decrease our training performance, or the time it takes our agent to learn. Sometimes, this isn't a bad thing if you have the CPU/GPU to throw at training, but not all of us do, so evaluating training performance will be essential. Open a command prompt or shell in the ml-agents folder and run the following command: python python/learn.py python/python.exe --run-id=grid2 --train Note how we updated the --run-id parameter to grid2 from grid1. This will allow us to add another run of data and compare it to the last run in real time. This will run a new training session. If you have problems starting a session, make sure you are only running one environment at a time. Open a new command prompt or shell window to the same ml-agents folder. Keep your other training window running. Run the following command: tensorboard --logdir=summaries This will start the TensorBoard web server, which will serve up a web UI to view our training results. Copy the hosting endpoint—typically http://localhost:6006, or perhaps the machine name—and paste it into a web browser. After a while, you should see the TensorBoard UI, as shown in the following screenshot: TensorBoard UI showing the results of training on GridWorld You will need to wait a while to see progress from the second training session. When you do, though, as shown in the preceding image, you will notice that the new model (grid2) is lagging behind in training. Note how the blue line on each of the plots takes several thousand iterations to catch up. This is a result of the more general multilayer network. This isn't a big deal in this example, but on more complex problems, that lag could make a huge difference. While some of the plots show the potential for improvement—such as the entropy plot—overall, we don't see a significant improvement. Using a single-layer network for this example is probably sufficient. We learned about PPO and its implementation in Unity. To learn more PPO concepts in Unity, be sure to check out the book Learn Unity ML-Agents – Fundamentals of Unity Machine Learning. 
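As a side note, the change made to trainer_config.yaml above can also be scripted, which is convenient when sweeping a hyperparameter such as num_layers across several run IDs. This is a sketch that assumes the PyYAML package is installed and the file layout shown earlier:

import yaml

with open("trainer_config.yaml") as f:
    config = yaml.safe_load(f)

for layers in (1, 3):
    # Adjust the GridWorld brain and write the file back out
    config["GridWorldBrain"]["num_layers"] = layers
    with open("trainer_config.yaml", "w") as f:
        yaml.safe_dump(config, f, default_flow_style=False)
    print(f"wrote num_layers={layers}; now train with --run-id=grid_layers{layers}")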
Implementing Unity game engine and assets for 2D game development [Tutorial] Creating interactive Unity character animations and avatars [Tutorial] Unity 2D & 3D game kits simplify Unity game development for beginners


Reactive Extensions: Ways to create RxJS Observables [Tutorial]

Sugandha Lahoti
03 Aug 2018
10 min read
Reactive programming is a paradigm where the main focus is working with an asynchronous data flow. Reactive Extensions allow you to work with asynchronous data streams. Reactive Extensions is a language-agnostic framework: it has implementations for several languages and platforms, such as RxJava, RxSwift, and so on. This makes learning Reactive Extensions (and functional reactive programming) really useful, as you can use it to improve your code on different platforms. One of these implementations, Reactive Extensions for JavaScript (RxJS), is a reactive streams library for JavaScript that can be used both in the browser and on the server side using Node.js. RxJS is a library for reactive programming using Observables. Observables provide support for passing messages between publishers and subscribers in your application. Observables offer significant benefits over other techniques for event handling, asynchronous programming, and handling multiple values. In this article, we will learn about the different types of observables in the context of RxJS and a few different ways of creating them. This article is an excerpt from the book, Mastering Reactive JavaScript, written by Erich de Souza Oliveira. RxJS lets you have even more control over the source of your data. We will learn about the different flavors of Observables and how we can better control their life cycle.

Installing RxJS

RxJS is divided into modules. This way, you can create your own bundle with only the modules you're interested in. However, we will always use the official bundle with all the contents from RxJS; by doing so, we'll not have to worry about whether a certain module exists in our bundle or not. So, let's follow the steps described here to install RxJS. To install it on your server, just run the following command inside a node project:

npm i rx@4.1.0 --save

To add it to an HTML page, just paste the following code snippet inside your HTML:

<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/4.1.0/rx.all.js"></script>

For those using other package managers, you can install RxJS from Bower or NuGet. If you're running inside a node program, you need to require the RxJS library in each JavaScript file where you want to use it. To do this, add the following line to the beginning of your JavaScript file:

var Rx = require('rx');

The preceding line will be omitted in all examples, as we expect you to have added it before testing the sample code.

Creating an observable

Here we will see a list of methods to create an observable from common event sources. This is not an exhaustive list, but it contains the most important ones. You can see all the available methods on the RxJS GitHub page (https://github.com/Reactive-Extensions/RxJS).

Creating an observable from iterable objects

We can create an observable from iterable objects using the from() method. An iterable in JavaScript can be an array (or an array-like object) or one of the other iterables added in ES6 (such as Set() and Map()). The from() method has the following signature:

Rx.Observable.from(iterable, [mapFunction], [context], [scheduler]);

Usually, you will pass only the first argument.
Others arguments are optional; you can see them here: iterable: This is the iterable object to be converted into an observable (can be an array, set, map, and so on) mapFunction: This is a function to be called for every element in the array to map it to a different value context: This object is to be used when mapFunction is provided scheduler: This is used to iterate the input sequence Don't worry if you don't know what a scheduler is. We will see how it changes our observables, but we will discuss it later in this chapter. For now, focus only on the other arguments of this function. Now let's see some examples on how we can create observables from iterables. To create an observable from an array, you can use the following code: Rx.Observable .from([0,1,2]) .subscribe((a)=>console.log(a)); This code prints the following output: 0 1 2 Now let's introduce a minor change in our code, to add the mapFunction argument to it, instead of creating an observable to propagate the elements of this array. Let's use mapFunction to propagate the double of each element of the following array: Rx.Observable .from([0,1,2], (a) => a*2) .subscribe((a)=>console.log(a)); This prints the following output: 0 2 4 We can also use this method to create an observable from an arguments object. To do this, we need to run from() in a function. This way, we can access the arguments object of the function. We can implement it with the following code: var observableFromArgumentsFactory = function(){ return Rx.Observable.from(arguments); }; observableFromArgumentsFactory(0,1,2) .subscribe((a)=>console.log(a)); If we run this code, we will see the following output: 0 1 2 One last usage of this method is to create an observable from either Set() or Map(). These data structures were added to ES6. We can implement it for a set as follows: var set = new Set([0,1,2]); Rx.Observable .from(set) .subscribe((a)=>console.log(a)); This code prints the following output: 0 1 2 We can also use a map as an argument for the from() method, as follows: var map = new Map([['key0',0],['key1',1],['key2',2]]); Rx.Observable .from(map) .subscribe((a)=>console.log(a)); This prints all the key-value tuples on this map: [ 'key0', 0 ] [ 'key1', 1 ] [ 'key2', 2 ] All observables created from this method are cold observables. As discussed before, this means it fires the same sequence for all the observers. To test this behavior, create an observable and add an Observer to it; add another observer to it after a second: var observable = Rx.Observable.from([0,1,2]); observable.subscribe((a)=>console.log('first subscriber receives => '+a)); setTimeout(()=>{ observable.subscribe((a)=>console.log('second subscriber receives => '+a)); },1000); If you run this code, you will see the following output in your console, showing both the subscribers receiving the same data as expected: first subscriber receives => 0 first subscriber receives => 1 first subscriber receives => 2 second subscriber receives => 0 second subscriber receives => 1 second subscriber receives => 2 Creating an observable from a sequence factory Now that we have discussed how to create an observable from a sequence, let's see how we can create an observable from a sequence factory. RxJS has a built-in method called generate() that lets you create an observable from an iteration (such as a for() loop). 
This method has the following signature: Rx.Observable.generate(initialState, conditionFunction, iterationFunction, resultFactory, [scheduler]); In this method, the only optional parameter is the last one. A brief description of all the parameters is as follows: initialState: This can be any object, it is the first object used in the iteration conditionFunction: This is a function with the condition to stop the iteration iterationFunction: This is a function to be used on each element to iterate resultFactory: This is a function whose return is passed to the sequence scheduler: This is an optional scheduler Before checking out an example code for this method, let's see some code that implements one of the most basic constructs in a program: a for() loop. This is used to generate an array from an initial value to a final value. We can produce this array with the following code: var resultArray=[]; for(var i=0;i < 3;i++){ resultArray.push(i) } console.log(resultArray); This code prints the following output: [0,1,2] When you create a for() loop, you basically give to it the following: an initial state (the first argument), the condition to stop the iteration (the second argument), how to iterate over the value (the third argument), and what to do with the value (block). Its usage is very similar to the generate() method. Let's do the same thing, but using the generate() method and creating an observable instead of an array: Rx.Observable.generate( 0, (i) => i<3, (i) => i+1, (i) => i ).subscribe((i) => console.log(i)); This code will print the following output: 0 1 2 Creating an observable using range () Another common source of data for observables are ranges. With the range() method, we can easily create an observable for a sequence of values in a range. The range() method has the following signature: Rx.Observable.range(first, count, [scheduler]); The last parameter in the following list is the only optional parameter in this method: first: This is the initial integer value in the sequence count: This is the number of sequential integers to be iterated from the beginning of the sequence scheduler: This is used to generate the values We can create an observable using a range with the following code: Rx.Observable .range(0, 4) .subscribe((i)=>console.log(i)); This prints the following output: 0 1 2 3 Creating an observable using period of time In the previous chapter, we discussed how to create timed sequences in bacon.js. In RxJS, we have two different methods to implement observables emitting values with a given interval. The first method is interval(). This method emits an infinite sequence of integers starting from one every x milliseconds; it has the following signature: Rx.Observable.interval(interval, [scheduler]); The interval parameter is mandatory, and the second argument is optional: interval: This is an integer number to be used as the interval between the values of this sequence scheduler: This is used to generate the values Run the following code: Rx.Observable .interval(1000) .subscribe((i)=> console.log(i)); You will see an output as follows; you will have to stop your program (hitting Ctrl+C) or it will keep sending events: 0 1 2 The interval() method sends the first value of the sequence after the given period of interval and keeps sending values after each interval. RxJS also has a method called timer(). This method lets you specify a due time to start the sequence or even generate an observable of only one value emitted after the due time has elapsed. 
It has the following signature: Rx.Observable.timer(dueTime, [interval], [scheduler]); Here are the parameters: dueTime: This can be a date object or an integer. If it is a date object, then it means it is the absolute time to start the sequence; if it is an integer, then it specifies the number of milliseconds to wait for before you could send the first element of the sequence. interval: This is an integer denoting the time between the elements. If it is not specified, it generates only one event. scheduler: This is used to produce the values. We can create an observable from the timer() method with the following code: Rx.Observable .timer(1000,500) .subscribe((i)=> console.log(i)); You will see an output that will be similar to the following; you will have to stop your program or it will keep sending events: 0 1 2 We can also use this method to generate only one value and finish the sequence. We can do this omitting the interval parameter, as shown in the following code: Rx.Observable .timer(1000) .subscribe((i)=> console.log(i)); If you run this code, it will only print 0 in your console and finish. We learned about various RxJS Observables and a few different ways of creating them. Read the book,  Mastering Reactive JavaScript, to create powerful applications using RxJs without compromising performance. (RxJS) Observable for Promise Users: Part (RxJS) Observable for Promise Users : Part 2 Angular 6 is here packed with exciting new features!
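As a closing illustration, the cold-observable behaviour shown earlier (each subscriber receives its own replay of the full sequence) is not specific to RxJS. Here is a minimal, dependency-free Python sketch of the same idea; it is a conceptual toy, not an RxJS equivalent:

class ColdObservable:
    """A cold observable re-runs its producer for every subscriber."""

    def __init__(self, producer):
        self._producer = producer  # a function taking an on_next callback

    def subscribe(self, on_next):
        # Each subscription starts the sequence from the beginning,
        # so late subscribers still receive every value.
        self._producer(on_next)


def from_iterable(iterable):
    # Works for lists; a generator would be exhausted after one subscription
    return ColdObservable(lambda on_next: [on_next(x) for x in iterable])


observable = from_iterable([0, 1, 2])
observable.subscribe(lambda x: print("first subscriber receives =>", x))
observable.subscribe(lambda x: print("second subscriber receives =>", x))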

How to use MapReduce with Mongo shell

Amey Varangaonkar
02 Mar 2018
8 min read
[box type="note" align="" class="" width=""]The following excerpt is taken from the book Mastering MongoDB 3.x authored by Alex Giamas. This book demonstrates the power of MongoDB to build high performance database solutions with ease.[/box] MongoDB is one of the most popular NoSQL databases in the world and can be combined with various Big Data tools for efficient data processing. In this article we explore interesting features of MongoDB, which has been underappreciated and not widely supported throughout the industry as yet - the ability to write MapReduce natively using shell. MapReduce is a data processing method for getting aggregate results from a large set of data. The main advantage is that it is inherently parallelizable as evidenced by frameworks such as Hadoop. A simple example of MapReduce would be as follows, given that our input books collection is as follows: > db.books.find() { "_id" : ObjectId("592149c4aabac953a3a1e31e"), "isbn" : "101", "name" : "Mastering MongoDB", "price" : 30 } { "_id" : ObjectId("59214bc1aabac954263b24e0"), "isbn" : "102", "name" : "MongoDB in 7 years", "price" : 50 } { "_id" : ObjectId("59214bc1aabac954263b24e1"), "isbn" : "103", "name" : "MongoDB for experts", "price" : 40 } And our map and reduce functions are defined as follows: > var mapper = function() { emit(this.id, 1); }; In this mapper, we simply output a key of the id of each document with a value of 1: > var reducer = function(id, count) { return Array.sum(count); }; In the reducer, we sum across all values (where each one has a value of 1): > db.books.mapReduce(mapper, reducer, { out:"books_count" }); { "result" : "books_count", "timeMillis" : 16613, "counts" : { "input" : 3, "emit" : 3, "reduce" : 1, "output" : 1 }, "ok" : 1 } > db.books_count.find() { "_id" : null, "value" : 3 } > Our final output is a document with no ID, since we didn't output any value for id, and a value of 6, since there are six documents in the input dataset. Using MapReduce, MongoDB will apply map to each input document, emitting key-value pairs at the end of the map phase. Then each reducer will get key-value pairs with the same key as input, processing all multiple values. The reducer's output will be a single key-value pair for each key. Optionally, we can use a finalize function to further process the results of the mapper and reducer. MapReduce functions use JavaScript and run within the mongod process. MapReduce can output inline as a single document, subject to the 16 MB document size limit, or as multiple documents in an output collection. Input and output collections can be sharded. MapReduce concurrency MapReduce operations will place several short-lived locks that should not affect operations. However, at the end of the reduce phase, if we are outputting data to an existing collection, then output actions such as merge, reduce, and replace will take an exclusive global write lock for the whole server, blocking all other writes in the db instance. If we want to avoid that we should invoke MapReduce in the following way: > db.collection.mapReduce( Mapper, Reducer, { out: { merge/reduce: bookOrders, nonAtomic: true } }) We can apply nonAtomic only to merge or reduce actions. replace will just replace the contents of documents in bookOrders, which would not take much time anyway. With the merge action, the new result is merged with the existing result if the output collection already exists. If an existing document has the same key as the new result, then it will overwrite that existing document. 
With the reduce action, the new result is processed together with the existing result if the output collection already exists. If an existing document has the same key as the new result, it will apply the reduce function to both the new and the existing documents and overwrite the existing document with the result. Although MapReduce has been present since the early versions of MongoDB, it hasn't evolved as much as the rest of the database, resulting in its usage being less than that of specialized MapReduce frameworks such as Hadoop. Incremental MapReduce Incremental MapReduce is a pattern where we use MapReduce to aggregate to previously calculated values. An example would be counting non-distinct users in a collection for different reporting periods (that is, hour, day, month) without the need to recalculate the result every hour. To set up our data for incremental MapReduce we need to do the following: Output our reduce data to a different collection At the end of every hour, query only for the data that got into the collection in the last hour With the output of our reduce data, merge our results with the calculated results from the previous hour Following up on the previous example, let's assume that we have a published field in each of the documents, with our input dataset being: > db.books.find() { "_id" : ObjectId("592149c4aabac953a3a1e31e"), "isbn" : "101", "name" : "Mastering MongoDB", "price" : 30, "published" : ISODate("2017-06-25T00:00:00Z") } { "_id" : ObjectId("59214bc1aabac954263b24e0"), "isbn" : "102", "name" : "MongoDB in 7 years", "price" : 50, "published" : ISODate("2017-06-26T00:00:00Z") } Using our previous example of counting books we would get the following: var mapper = function() { emit(this.id, 1); }; var reducer = function(id, count) { return Array.sum(count); }; > db.books.mapReduce(mapper, reducer, { out: "books_count" }) { "result" : "books_count", "timeMillis" : 16700, "counts" : { "input" : 2, "emit" : 2, "reduce" : 1, "output" : 1 }, "ok" : 1 } > db.books_count.find() { "_id" : null, "value" : 2 } Now we get a third book in our mongo_books collection with a document: { "_id" : ObjectId("59214bc1aabac954263b24e1"), "isbn" : "103", "name" : "MongoDB for experts", "price" : 40, "published" : ISODate("2017-07-01T00:00:00Z") } > db.books.mapReduce( mapper, reducer, { query: { published: { $gte: ISODate('2017-07-01 00:00:00') } }, out: { reduce: "books_count" } } ) > db.books_count.find() { "_id" : null, "value" : 3 } What happened here, is that by querying for documents in July 2017 we only got the new document out of the query and then used its value to reduce the value with the already calculated value of 2 in our books_count document, adding 1 to the final sum of three documents. This example, as contrived as it is, shows a powerful attribute of MapReduce: the ability to re-reduce results to incrementally calculate aggregations over time. Troubleshooting MapReduce Throughout the years, one of the major shortcomings of MapReduce frameworks has been the inherent difficulty in troubleshooting as opposed to simpler non-distributed patterns. Most of the time, the most effective tool is debugging using log statements to verify that output values match our expected values. In the mongo shell, this being a JavaScript shell, this is as simple as outputting using the console.log()function. Diving deeper into MapReduce in MongoDB we can debug both in the map and the reduce phase by overloading the output values. 
Troubleshooting MapReduce

Throughout the years, one of the major shortcomings of MapReduce frameworks has been the inherent difficulty in troubleshooting, as opposed to simpler, non-distributed patterns. Most of the time, the most effective tool is debugging using log statements to verify that output values match our expected values. In the mongo shell, this being a JavaScript shell, it is as simple as outputting values using the console.log() function. Diving deeper into MapReduce in MongoDB, we can debug both the map and the reduce phase by overloading the output values.

Debugging the mapper phase, we can overload the emit() function to test what the output key values are:

> var emit = function(key, value) {
    print("debugging mapper's emit");
    print("key: " + key + " value: " + tojson(value));
}

We can then call it manually on a single document to verify that we get back the key-value pair that we would expect:

> var myDoc = db.orders.findOne( { _id: ObjectId("50a8240b927d5d8b5891743c") } );
> mapper.apply(myDoc);

The reducer function is somewhat more complicated. A MapReduce reducer function must meet the following criteria:

It must be idempotent
It must be commutative
The order of values coming from the mapper function should not matter for the reducer's result
The reduce function must return the same type of result as the mapper function

We will dissect these requirements to understand what they really mean:

It must be idempotent: MapReduce by design may call the reducer multiple times for the same key with multiple values from the mapper phase. It also doesn't need to reduce single instances of a key, as they are just added to the set. The final value should be the same no matter the order of execution. This can be verified by writing our own "verifier" function forcing the reducer to re-reduce, or by executing the reducer many times:

reduce( key, [ reduce(key, valuesArray) ] ) == reduce( key, valuesArray )

It must be commutative: Again, because multiple invocations of the reducer may happen for the same key, if it has multiple values, the following should hold:

reduce( key, [ C, reduce(key, [ A, B ]) ] ) == reduce( key, [ C, A, B ] )

The order of values coming from the mapper function should not matter for the reducer's result: We can test that the order of values from the mapper doesn't change the output for the reducer by passing in documents to the mapper in a different order and verifying that we get the same results out:

reduce( key, [ A, B ] ) == reduce( key, [ B, A ] )

The reduce function must return the same type of result as the mapper function: Hand in hand with the first requirement, the type of object that the reduce function returns should be the same as the output of the mapper function.

(A small shell sketch that exercises these properties on our reducer appears at the end of this article.)

We saw how MapReduce is useful when implemented in a data pipeline. Multiple MapReduce commands can be chained to produce different results. An example would be aggregating data by different reporting periods (hour, day, week, month, year), where we use the output of each more granular reporting period to produce a less granular report.

If you found this article useful, make sure to check our book Mastering MongoDB 3.x to get more insights and information about MongoDB's vast data storage, management and administration capabilities.
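As promised above, here is a minimal shell sketch that exercises the idempotence and ordering properties on the reducer defined earlier in this article; the sample key and values are hypothetical:

> var values = [ 1, 2, 3 ];
> var once = reducer("someKey", values);                           // reduce everything in one pass
> var twice = reducer("someKey", [ reducer("someKey", values) ]);  // re-reduce an already reduced value
> once === twice                                                   // should print true (idempotent)
> reducer("someKey", [ 1, 2, 3 ]) === reducer("someKey", [ 3, 2, 1 ])   // order should not matter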


Scale your Django App: Gunicorn + Apache + Nginx

Jean Jung
11 Jan 2017
6 min read
One question when starting with Django is: "How do I scale my app?" Brandon Rhodes has answered this question in Foundations of Python Network Programming. Rhodes shows us different options, so in this post we will focus on the preferred and main option: Gunicorn + Apache + Nginx.

The idea of this architecture is to have Nginx as a proxy that delegates dynamic content to Gunicorn and static content to Apache. Since Django, by itself, does not handle static content, and Apache does it very well, we can take advantage of that. Below we will see how to configure everything.

Environment

Project directory: /var/www/myproject
Apache2
Nginx
Gunicorn

Project settings

STATIC_ROOT: /var/www/myproject/static
STATIC_URL: /static/
MEDIA_ROOT: /var/www/myproject/media
MEDIA_URL: /media/
ALLOWED_HOSTS: myproject.com

Gunicorn

Gunicorn is a Python, WSGI-compatible server, making it our first option when working with Django. It's possible to install Gunicorn from pip by running:

pip install gunicorn

To run the Gunicorn server, cd to your project directory and run:

gunicorn myproject.wsgi:application -b localhost:8000

By default, Gunicorn runs just one worker to serve the pages. If you feel you need more workers, you can start them by passing the number of workers to the --workers option. Gunicorn also runs in the foreground, so on a real server you will want to configure it as a service; that is not the focus of this post, but a minimal sketch is shown after the Apache section below.

Visit localhost:8000 in your browser and check that your project is working. You will probably see that your static files weren't accessible. This is because Django itself cannot serve static files, and Gunicorn is not configured to serve them either. Let's fix that with Apache in the next section. If your page does not work here, check whether you are using a virtualenv and whether it is activated for the running Gunicorn process.

Apache

Installing Apache takes some time and is not the focus of this post; additionally, a great majority of readers will already have Apache, so if you don't know how to install it, follow this guide. If you have already configured Apache to serve static content, this will be very similar to what you have done. If you have never done that, do not be afraid; it will be easy!

First of all, change Apache's listening port. On Apache2, edit /etc/apache2/ports.conf and change the line:

Listen 80

To:

Listen 8001

You can choose other ports too; just be sure to adjust the permissions on the static and media file directories to match what the Apache user needs. Create a file at /etc/apache2/sites-enabled/myproject.com.conf and add this content:

<VirtualHost *:8001>
    ServerName static.myproject.com
    ServerAdmin webmaster@localhost

    CustomLog ${APACHE_LOG_DIR}/static.myproject.com-access.log combined
    ErrorLog ${APACHE_LOG_DIR}/static.myproject.com-error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    DocumentRoot /var/www/myproject

    Alias /static/ /var/www/myproject/static/
    Alias /media/ /var/www/myproject/media/

    <Directory /var/www/myproject/static>
        Require all granted
    </Directory>

    <Directory /var/www/myproject/media>
        Require all granted
    </Directory>
</VirtualHost>

Be sure to replace whatever is needed to fit your project. Your project still does not work completely, though, because Gunicorn does not know about Apache, and we don't want it to know anything about that. This is where Nginx comes in, covered in the next section.
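Before moving on to Nginx, here is the minimal service sketch promised in the Gunicorn section. It is one possible way to keep Gunicorn running in the background using systemd; the unit name, user, and virtualenv path are assumptions you should adapt to your own setup:

# /etc/systemd/system/myproject-gunicorn.service
[Unit]
Description=Gunicorn for myproject
After=network.target

[Service]
User=www-data
WorkingDirectory=/var/www/myproject
ExecStart=/var/www/myproject/venv/bin/gunicorn myproject.wsgi:application -b 127.0.0.1:8000 --workers 3
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable and start it with systemctl enable myproject-gunicorn and systemctl start myproject-gunicorn.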
Nginx

Nginx is a very light and powerful web server. It is different from Apache in that it does not spawn a new process for every request, so it works very well as a proxy. As with Apache, installing Nginx is not covered here; follow this reference to learn how to install it.

Proxy configuration is very simple in Nginx; just create a file at /etc/nginx/conf.d/myproject.com.conf and put:

upstream dynamic {
    server 127.0.0.1:8000;
}

upstream static {
    server 127.0.0.1:8001;
}

server {
    listen 80;
    server_name myproject.com;

    # Root request handler to gunicorn upstream
    location / {
        proxy_pass http://dynamic;
    }

    # Static request handler to apache upstream
    location /static {
        proxy_pass http://static/static;
    }

    # Media request handler to apache upstream
    location /media {
        proxy_pass http://static/media;
    }

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    error_page 500 502 503 504 /50x.html;
}

This way, you have everything working on your machine! If you have more than one machine, you can dedicate machines to delivering static or dynamic content. The machine where Nginx runs is the proxy, and it needs to be visible from the Internet. The machines running Apache or Gunicorn can be visible only from your local network. If you follow this architecture, you can simply change the Apache and Gunicorn configurations to listen on the default ports, adjust the domain names, and set the Nginx configuration to route connections to the new domains.

Where to go now?

For more details on deploying Gunicorn with Nginx, see the Gunicorn deployment page. You may also want to see the Apache configuration page and the Nginx getting started page for more information about scalability and security.

Summary

In this post you saw how to configure Nginx, Apache, and Gunicorn to deliver your Django app behind a proxy, splitting your requests between Apache and Gunicorn. We also covered how to start more Gunicorn workers and where to find details about scaling each of the servers being used.

References

[1] PEP 3333 - WSGI
[2] Gunicorn Project
[3] Apache Project
[4] Nginx Project
[5] Rhodes, B. & Goerzen, J. (2014). Foundations of Python Network Programming. New York, NY: Apress.

About the author

Jean Jung is a Brazilian developer passionate about technology. Currently a System Analyst at EBANX, an international cross-border payment processor for Latin America, he's very interested in Python and Artificial Intelligence, specifically Machine Learning, Compilers, and Operating Systems. As a hobby, he's always looking for IoT projects with Arduino.


Reversing Android Applications

Packt
20 Mar 2014
8 min read
(For more resources related to this topic, see here.)

Android application teardown

An Android application is an archive file of the data and resource files created while developing the application. The extension of an Android application is .apk, meaning application package, which includes the following files and folders in most cases:

classes.dex (file)
AndroidManifest.xml (file)
META-INF (folder)
resources.arsc (file)
res (folder)
assets (folder)
lib (folder)

In order to verify this, we could simply unzip the application using any archive manager, such as 7-Zip, WinRAR, or any preferred application. On Linux or Mac, we could simply use the unzip command in order to show the contents of the archive package, as shown in the following screenshot:

Here, we have used the -l (list) flag in order to simply show the contents of the archive package instead of extracting it. We could also use the file command in order to see whether it is a valid archive package.

An Android application consists of various components, which together create the working application. These components are Activities, Services, Broadcast Receivers, Content Providers, Shared Preferences, and Intents. Before proceeding, let's have a quick walkthrough of what these different components are all about:

Activities: These are the visual screens which a user could interact with. These may include buttons, images, TextView, or any other visual component.

Services: These are the Android components which run in the background and carry out specific tasks specified by the developer. These tasks may include anything from downloading a file over HTTP to playing music in the background.

Broadcast Receivers: These are the receivers in the Android application that listen to incoming broadcast messages from the Android system, or from other applications present on the device. Once they receive a broadcast message, a particular action could be triggered depending on the predefined conditions. The conditions could range from receiving an SMS, to an incoming phone call, to a change in the power supply, and so on.

Shared Preferences: These are used by an application in order to save small sets of data for the application. This data is stored inside a folder named shared_prefs. These small datasets may include name-value pairs such as the user's score in a game and login credentials. Storing sensitive information in shared preferences is not recommended, as it may fall vulnerable to data stealing and leakage.

Intents: These are the components which are used to bind two or more different Android components together. Intents could be used to perform a variety of tasks, such as starting an action, switching activities, and starting services.

Content Providers: These are used to provide access to a structured set of data to be used by the application. An application can access and query its own data or the data stored in the phone using the Content Providers.

Now that we know the internals of an Android application and what an application is composed of, we can move on to reversing an Android application, that is, getting the readable source code and other data sources when we just have the .apk file with us.

Reversing an Android application

As we discussed earlier, Android applications are simply archive files of data and resources. Even so, we can't simply unzip the archive package (.apk) and get readable sources. For these scenarios, we have to rely on tools that will convert the bytecode (as in classes.dex) into readable source code.
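To make the archive checks mentioned above concrete, here is what they look like from a shell; the APK name is hypothetical and the exact file listing will vary per application:

$ file myapp.apk          # should identify the package as a ZIP archive
$ unzip -l myapp.apk      # list the contents: classes.dex, AndroidManifest.xml, resources.arsc, res/, META-INF/, ...
$ unzip myapp.apk -d myapp-extracted    # extract everything into a folder for closer inspection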
One of the approaches to converting bytecode to readable files is using a tool called dex2jar. The .dex file is the application's Java bytecode converted to Dalvik bytecode, making it optimized and efficient for mobile platforms. This free tool simply converts the .dex file present in the Android application to a corresponding .jar file. Please follow these steps:

Download the dex2jar tool from https://code.google.com/p/dex2jar/.
Now we can run it against our application's .dex file and convert it to the .jar format.
Go to the command prompt and navigate to the folder where dex2jar is located.
Run the d2j-dex2jar.bat file (on Windows) or the d2j-dex2jar.sh file (on Linux/Mac) and provide the application name and path as the argument. Here, in the argument, we could simply use the .apk file, or we could even unzip the .apk file and then pass the classes.dex file instead, as shown in the following screenshot:

As we can see in the preceding screenshot, dex2jar has successfully converted the .dex file of the application to a .jar file named helloworld-dex2jar.jar. Now, we can simply open this .jar file in any Java graphical viewer such as JD-GUI, which can be downloaded from its official website at http://jd.benow.ca/.

Once we download and install JD-GUI, we can go ahead and open it. It will look like the one shown in the following screenshot:

Here, we can now open the converted .jar file from the earlier step and see all the Java source code in JD-GUI. To open a .jar file, we simply navigate to File | Open. In the right-hand pane, we can see the Java sources and all the methods of the Android application.

Note that the recompilation process will give you an approximate version of the original Java source code. This won't matter in most cases; however, in some cases, you might see that some code is missing from the converted .jar file. Also, if the application developer is using some protection against decompilation, such as ProGuard, then when we decompile the application using dex2jar or Apktool, we won't see the exact source code; instead, we will see a bunch of different source files, which won't be an exact representation of the original source code.

Using Apktool to reverse an Android application

Another way of reversing an Android application is converting the .dex file to smali files. Smali is a file format whose syntax is similar to a language known as Jasmin. We won't be going in depth into the smali file format for now. For more information, take a look at the online wiki at https://code.google.com/p/smali/wiki/ in order to get an in-depth understanding of smali.

Once we have downloaded Apktool and configured it, we are all set to go further. The main advantage of Apktool over JD-GUI is that it is bidirectional. This means that if you decompile an application, modify it, and then recompile it back using Apktool, it will recompile perfectly and generate a new .apk file. dex2jar and JD-GUI can't do this, as they give approximate code rather than the exact code.

So, in order to decompile an application using Apktool, all we need to do is pass the .apk filename to the Apktool binary. Once decompiled, Apktool will create a new folder with the application name, in which all of the files will be stored. To decompile, we simply run apktool d [app-name].apk. Here, the d flag stands for decompile.
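For example, run against a hypothetical APK, the decompile step would look like this (the file and folder names are assumptions):

$ apktool d myapp.apk      # creates a myapp/ folder containing smali/, res/, AndroidManifest.xml, and so on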
In the following screenshot, we can see an app being decompiled using Apktool:

Now, if we go inside the smali folder, we will see a bunch of different smali files, which contain the code of the Java classes that were written while developing the application. Here, we can also open up a file, change values, and use Apktool to build it back again. To build a modified application from smali, we will use the b (build) flag in Apktool:

apktool b [decompiled folder name] [target-app-name].apk

However, in order to decompile, modify, and recompile applications, I would personally recommend using another tool called Virtuous Ten Studio (VTS). This tool offers similar functionality to Apktool, the only difference being that VTS presents it in a nice graphical interface, which is relatively easy to use. The only limitation of this tool is that it runs natively only on Windows. You can download VTS from the official download link, http://www.virtuous-ten-studio.com/. The following is a screenshot of the application decompiling the same project:

Summary

In this article, we covered some of the methods and techniques that are used to reverse Android applications.

Resources for Article:

Further resources on this subject:
Android Native Application API [Article]
Introducing an Android platform [Article]
Animating Properties and Tweening Pages in Android 3-0 [Article]

Multi-agents environments and adversarial self-play in Unity [Tutorial]

Natasha Mathur
19 Oct 2018
9 min read
There are various novel training strategies that we can employ with multiple agents and/or brains in an environment, from adversarial and cooperative self-play to imitation and curriculum learning. In this tutorial, we will look at how to build multi-agent environments in Unity as well as explore adversarial self-play. This tutorial is an excerpt taken from the book 'Learn Unity ML-Agents – Fundamentals of Unity Machine Learning'  by Micheal Lanham. Let's get started! Multi-agent environments A multi-agent environment consists of multiple interacting intelligent agents competing against each other, thereby, making the game more engaging.  It started out as just a fun experiment for game developers, but as it turns out, letting agents compete against themselves can really amp up training. There are a few configurations we can set up when working with multiple agents. The BananaCollector example we will look at, uses a single brain shared among multiple competing agents. Open up Unity and follow this exercise to set up the scene: Load the BananaCollectorBananaRL scene file located in the Assets/ML-Agents/Examples/BananaCollectors/ folder. Leave the Brain on Player; if you changed it, change it back. Run the scene in Unity. Use the WASD keys to move the agent cubes around the scene and collect bananas. Notice how there are multiple cubes responding identically. That is because each agent is using the same brain. Expand the RLArea object in the Hierarchy window, as shown in the following screenshot: Inspecting the RLArea and agents Notice the five Agent objects under the RLArea. These are the agents that will be training against the single brain. After you run the first example, you come back and duplicate more agents to test the effect this has on training. Switch the Brain to External. Be sure your project is set to use an external brain. If you have run an external brain with this project, you don't need any additional setup. There are also several environments in this example that will allow us to train with A3C, but for now, we will just use the single environment. Feel free to go back and try this example with multiple environments enabled. From the menu, select File | Build Settings.... Uncheck any other active scenes and make sure the BananaRL scene is the only scene active. You may have to use the Add Open Scene button. Build the environment to the python folder. Open a Python or Anaconda prompt. Activate ml-agents and navigate to the 'ml-agents' folder. Run the trainer with the following code: python python/learn.py python/python.exe --run-id=banana1 --train Watch the sample run. The Unity environment window for this example is large enough so that you can see most of the activities going on. The objective of the game is for the agents to collect yellow bananas while avoiding the blue bananas. To make things interesting, the agents are able to shoot lasers in order to freeze opposing agents. This sample is shown in the following screenshot: Banana Collectors multi-agent example running You will notice that the mean and standard deviation of reward accumulates quickly in this example. This is the result of a few changes in regards to reward values for one, but this particular example is well-suited for multi-agent training. Depending on the game or simulation you are building, using multi-agents with a single brain could be an excellent way to train. 
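To make the idea of several agents sharing one brain concrete, here is a purely illustrative Python sketch. It is not the ML-Agents API; the class names, the random stand-in policy, and the +1/-1 reward values are assumptions based on the description above:

import random

class SharedBrain:
    """One policy shared by every agent; a stand-in for the single external brain."""
    def __init__(self):
        self.experience = []

    def decide(self, observation):
        # A trained policy would go here; we just pick a random action.
        return random.choice(["forward", "left", "right", "laser"])

    def record(self, observation, action, reward):
        # Experience from all agents feeds the same policy.
        self.experience.append((observation, action, reward))

class CollectorAgent:
    """Toy agent whose world is reduced to which banana is nearest."""
    def observe(self):
        return {"nearest_banana": random.choice(["yellow", "blue", "none"])}

    def act(self, observation, action):
        if observation["nearest_banana"] == "yellow":
            return 1.0    # assumed reward for collecting a yellow banana
        if observation["nearest_banana"] == "blue":
            return -1.0   # assumed penalty for a blue banana
        return 0.0

brain = SharedBrain()
agents = [CollectorAgent() for _ in range(5)]   # five agent cubes, as in the scene

for step in range(100):
    for agent in agents:
        obs = agent.observe()
        action = brain.decide(obs)              # every agent queries the same brain
        reward = agent.act(obs, action)
        brain.record(obs, action, reward)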
Feel free to go back and enable multiple environments in order to train multiple agents in multiple environments using multiple A3C agents. Next, we will look at another example that features adversarial self-play using multiple agents and multiple brains. Adversarial self-play The last example we looked at is best defined as a competitive multi-agent training scenario where the agents are learning by competing against each other to collect bananas or freeze other agents out. Now, we will look at another similar form of training that pits agent vs. agent using an inverse reward scheme called Adversarial self-play. Inverse rewards are used to punish an opposing agent when a competing agent receives a reward. Let's see what this looks like in the Unity ML-Agents Soccer (football) example by following this exercise: Open up Unity to the SoccerTwos scene located in the Assets/ML-Agents/Examples/Soccer/Scenes folder. Run the scene and use the WASD keys to play all four agents. Stop the scene when you are done having fun. Expand the Academy object in the Hierarchy window. Select the StrikerBrain and switch it to External. Select the GoalieBrain and switch it to External. From the menu, select File | Build Settings.... Click the Add Open Scene button and disable other scenes so only the SoccerTwos scene is active. Build the environment to the python folder. Launch a Python or Anaconda prompt and activate ml-agents. Then, navigate to the ml-agents folder. Launch the trainer with the following code: python python/learn.py python/python.exe --run-id=soccor1 --train Watching the training session is quite entertaining, so keep an eye on the Unity environment window and the console in order to get a sense of the training progress. Notice how the brains are using an inverse reward, as shown in the following screenshot: Watching the training progress of the soccer agents The StrikerBrain is currently getting a negative reward and the GoalieBrain is getting a positive reward. Using inverse rewards allows the two brains to train to a common goal, even though they are self-competing against each other as well. In the next example, we are going to look at using our trained brains in Unity as internal brains. Using internal brains It can be fun to train agents in multiple scenarios, but when it comes down to it, we ultimately want to be able to use these agents in a game or proper simulation. Now that we have a training scenario already set up to entertain us, let's enable it so that we can play soccer (football) against some agents. Follow this exercise to set the scene so that you can use an internal brain: From the menu, select Edit | Project Settings | Player. Enter ENABLE_TENSORFLOW in the Scripting Define Symbols underneath Other Settings, as shown in the following screenshot: Setting the scripting define symbols for enabling TensorFlow Setting this will enable the internal running of TensorFlow models through TensorFlowSharp. Locate the Academy object and expand it to expose the StrikerBrain and GoalieBrain objects. Select the StrikerBrain and press Ctrl + D (Command+D on macOS) to duplicate the brain. Set the original StrikerBrain and GoalieBrain to use an Internal brain type. When you switch the brain type, make sure that the Graph Model under the TensorFlow properties is set to Soccer, as shown in the following screenshot: Checking that the Graph Model is set to Soccer Leave the new StrikerBrain(1) you just duplicated to the Player brain type. This will allow you to play the game against the agents. 
Expand the SoccerFieldsTwos->Players objects to expose the four player objects. Select the Striker(1) object and set its Brain to the StrikerBrain(1) player brain, as shown in the following screenshot: Setting the brain on the player cube This sets the agent (player) to use the Player brain type we duplicated. Press the Play button to run the game. Use the WASD keys to control the striker and see how well you can score. After you play for a while, you will soon start to realize how well the agents have learned. This is a great example and quickly shows how easily you can build agents for most game scenarios given enough training time and setup. What's more is that the decision code is embedded in a light TensorFlow graph that blazes trails around other Artificial Intelligence solutions. We are still not using new brains we have trained, so we will do that next. Using trained brains internally Here, we will use the brains we previously trained as agent's brains in our soccer (football) game. This will give us a good comparison to how the default Unity trained brain compares against the one we trained in our first exercise. We are getting to the fun stuff now and you certainly don't want to miss the following exercise where we will be using a trained brain internally in a game we can play: Open a File Explorer and open the 'ml-agents'/models/soccor1 folder. The name of the folder will match the run-id you used in the training command-line parameter. Drag the .bytes file, named python_soccer.bytes in this example, to the Assets/ML-Agents/Examples/Soccer/TFModels folder, as shown in the following screenshot: Dragging the TensorFlow model file to the TFModels folder Locate the StrikerBrain and set the Graph Model by clicking the target icon and selecting the python_soccor1  TextAsset, as shown in the following screenshot: Setting the Graph Model in the StrikerBrain The file is called a TextAsset, but it is actually a binary byte file holding the TensorFlow graph. Change the GoalieBrain to the same graph model. Both brains are included in the same graph. We can denote which brain is which with the Graph Scope parameter. Again, leave the player striker brain as it is. Press Play to run the game. Play the game with the WASD keys. The first thing you will notice is that the agents don't quite play as well. That could be because we didn't use all of our training options. Now would be a good time to go back and retrain the soccer (football) brains using A3C and other options we covered thus far. In this tutorial, we were able to play with several variations of training scenarios. We started by looking at extending our training to multi-agent environments that still used a single brain. Next, we looked at a variety of multi-agent training called Adversarial self-play, that allows us to train pairs of agents using a system of inverse rewards. If you found this post useful, and want to learn other methods such as imitation and curriculum learning, then be sure to check out the book  'Learn Unity ML-Agents – Fundamentals of Unity Machine Learning'. Implementing Unity game engine and assets for 2D game development [Tutorial] Creating interactive Unity character animations and avatars [Tutorial] Unity 2D & 3D game kits simplify Unity game development for beginners


Is Golang truly community driven and does it really matter?

Sugandha Lahoti
24 May 2019
6 min read
Golang, also called Go, is a statically typed, compiled programming language designed by Google. Golang is going from strength to strength, as more engineers than ever are using it at work, according to Go User Survey 2019. An opinion that has led to the Hacker News community into a heated debate last week: “Go is Google's language, not the community's”. The thread was first started by Chris Siebenmann who works at the Department of Computer Science, University of Toronto. His blog post reads, “Go has community contributions but it is not a community project. It is Google's project.” Chris explicitly states that the community's voice doesn't matter very much for Go's development, and we have to live with that. He argues that Google is the gatekeeper for community contributions; it alone decides what is and isn't accepted into Go. If a developer wants some significant feature to be accepted into Golang, working to build consensus in the community is far less important than persuading the Golang core team. He then cites the example of how one member of Google's Go core team discarded the entire Go Modules system that the Go community had been working on and brought in a relatively radically different model. Chris believes that the Golang team cares about the community and want them to be involved, but only up to a certain point. He wants the Go core team to be bluntly honest about the situation, rather than pretend and implicitly lead people on. He further adds, “Only if Go core team members start leaving Google and try to remain active in determining Go's direction, can we [be] certain Golang is a community-driven language.” He then compares Go with C++, calling the latter a genuine community-driven language. He says there are several major implementations in C++ which are genuine community projects, and the direction of C++ is set by an open standards committee with a relatively distributed membership. https://twitter.com/thatcks/status/1131319904039309312 What is better - community-driven or corporate ownership? There has been an opinion floating around developers about how some open source programming projects are just commercial projects driven mainly by a single company.  If we look at the top open source projects, most of them have some kind of corporate backing ( Apple’s Swift, Oracle’s Java, MySQL, Microsoft’s Typescript, Google’s Kotlin, Golang, Android, MongoDB, Elasticsearch) to name a few. Which brings us to the question, what does corporate ownership of open source projects really mean? A benevolent dictatorship can have two outcomes. If the community for a particular project suggests a change, and in case a change is a bad idea, the corporate team can intervene and stop changes. On the other hand, though, it can actually stop good ideas from the community in being implemented, even if a handful of members from the core team disagree. Chris’s post has received a lot of attention by developers on Hacker News who both sided with and disagreed with the opinion put forward. A comment reads, “It's important to have a community and to work with it, but, especially for a programming language, there has to be a clear concept of which features should be implemented and which not - just accepting community contributions for the sake of making the community feel good would be the wrong way.” Another comment reads, “Many like Go because it is an opinionated language. I'm not sure that a 'community' run language will create something like that because there are too many opinions. 
Many claims to represent the community, but not the community that doesn't share their opinion. Without clear leaders, I fear technical direction and taste will be about politics which seems more uncertain/risky. I like that there is a tight cohesive group in control over Go and that they are largely the original designers. I might be more interested in alternative government structures and Google having too much control only if those original authors all stepped down.” Rather than splitting between Community or Corporate, a more accurate representation would be how much market value is depending on those projects. If a project is thriving, usually enterprises will take good decisions to handle it. However, another but entirely valid and important question to ask is ‘should open source projects be driven by their market value?’ Another common argument is that the core team’s full-time job is to take care of the language instead of taking errant decisions based on community backlash. Google (or Microsoft, or Apple, or Facebook for that matter) will not make or block a change in a way that kills an entire project. But this does not mean they should sit idly, ignoring the community response. Ideally, the more that a project genuinely belongs to its community, the more it will reflect what the community wants and needs. Google also has a propensity to kill its own products. What happens when Google is not as interested in Golang anymore? The company could leave it to the community to figure out the governance model suddenly by pulling off the original authors into some other exciting new project. Or they may let the authors only work on Golang in their spare time at home or at the weekends. While Google's history shows that many of their dead products are actually an important step towards something better and more successful, why and how much of that logic would be directly relevant to an open source project is something worth thinking about. As a Hacker news user wrote, “Go is developed by Bell Labs people, the same people who bought us C, Unix and Plan 9 (Ken, Pike, RSC, et al). They took the time to think through all their decisions, the impacts of said decisions, along with keeping things as simple as possible. Basically, doing things right the first time and not bolting on features simply because the community wants them.” Another says, “The way how Golang team handles potentially tectonic changes in language is also exemplary – very well communicated ideas, means to provide feedback and clear explanation of how the process works.” Rest assured, if any major change is made to Go, even a drastic one such as killing it, it will not be done without consulting the community and taking their feedback. Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work. GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch State of Go February 2019 – Golang developments report for this month released