
How-To Tutorials - IoT & Hardware

152 Articles

The Arduino Mobile Robot

Packt
17 Dec 2014
8 min read
In this article by Marco Schwartz and Stefan Buttigieg, the authors of Arduino Android Blueprints, we are going to use most of the concepts we have learned to control a mobile robot via an Android app. The robot will have two motors that we can control, and also an ultrasonic sensor in the front so that it can detect obstacles. The robot will also have a BLE chip so that it can receive commands from the Android app. The following will be the major takeaways:

- Building a mobile robot based on the Arduino platform
- Connecting a BLE module to the Arduino robot

(For more resources related to this topic, see here.)

Configuring the hardware

We are first going to assemble the robot itself, and then see how to connect the Bluetooth module and the ultrasonic sensor. To give you an idea of what you should end up with, the following is a front-view image of the robot when fully assembled. The following image shows the back of the robot when fully assembled.

The first step is to assemble the robot chassis. To do so, you can watch the DFRobot assembly guide at https://www.youtube.com/watch?v=tKakeyL_8Fg. Then, you need to attach the different Arduino boards and shields to the robot. Use the spacers found in the robot chassis kit to mount the Arduino Uno board first. Then put the Arduino motor shield on top of that. At this point, use the screw header terminals to connect the two DC motors to the motor shield. This is how it should look at this point. Finally, mount the prototyping shield on top of the motor shield.

We are now going to connect the BLE module and the ultrasonic sensor to the Arduino prototyping shield. The following is a schematic diagram showing the connections between the Arduino Uno board (done via the prototyping shield in our case) and the components. Now perform the following steps:

1. First, we are going to connect the BLE module. Place the module on the prototyping shield. Connect the power supply of the module as follows: GND goes to the prototyping shield's GND pin, and VIN goes to the prototyping shield's +5V.
2. After that, you need to connect the different wires responsible for the SPI interface: SCK to Arduino pin 13, MISO to Arduino pin 12, and MOSI to Arduino pin 11. Then connect the REQ pin to Arduino pin 10. Finally, connect the RDY pin to Arduino pin 2 and the RST pin to Arduino pin 9.
3. For the URM37 module, connect the VCC pin of the module to Arduino +5V, GND to GND, and the PWM pin to the Arduino A3 pin. To review the pin order on the URM37 module, you can check the official DFRobot documentation at http://www.dfrobot.com/wiki/index.php?title=URM37_V3.2_Ultrasonic_Sensor_(SKU:SEN0001).

The following is a close-up image of the prototyping shield with the BLE module connected. Finally, connect the 7.4 V battery to the Arduino Uno board power jack. The battery is simply placed below the Arduino Uno board.

Testing the robot

We are now going to write a sketch to test the different functionalities of the robot, first without using Bluetooth. As the sketch is quite long, we will look at the code piece by piece. Before you proceed, make sure that the battery is always plugged into the robot.
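Before diving into the sketch, it may help to collect the wiring described above in one place. The following constants are only a summary sketch of the pin assignments listed in the connection steps; they are not part of the book's own sketch:

#define BLE_SCK   13  // SPI clock
#define BLE_MISO  12  // SPI data, module to Arduino
#define BLE_MOSI  11  // SPI data, Arduino to module
#define BLE_REQ   10
#define BLE_RDY   2   // must be an interrupt-capable pin on the Uno
#define BLE_RST   9
#define URM37_PWM A3  // ultrasonic sensor output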
Now perform the following steps:

1. The sketch starts by including the aREST library that we will use to control the robot via serial commands: #include <aREST.h>
2. Now we declare which pins the motors are connected to: int speed_motor1 = 6; int speed_motor2 = 5; int direction_motor1 = 7; int direction_motor2 = 4;
3. We also declare which pin the ultrasonic sensor is connected to: int distance_sensor = A3;
4. Then, we create an instance of the aREST library: aREST rest = aREST();
5. To store the distance data measured by the ultrasonic sensor, we declare a distance variable: int distance;
6. In the setup() function of the sketch, we first initialize the serial communications that we will use to communicate with the robot for this test: Serial.begin(115200);
7. We also expose the distance variable to the REST API, so we can access it easily: rest.variable("distance",&distance);
8. To control the robot, we are going to declare a whole set of functions that will perform the basic operations: going forward, going backward, turning on itself (left or right), and stopping. We will see the details of these functions in a moment; for now, we just need to expose them to the API: rest.function("forward",forward); rest.function("backward",backward); rest.function("left",left); rest.function("right",right); rest.function("stop",stop);
9. We also give the robot an ID and a name: rest.set_id("001"); rest.set_name("mobile_robot");
10. In the loop() function of the sketch, we first measure the distance from the sensor: distance = measure_distance(distance_sensor);
11. We then handle the requests using the aREST library: rest.handle(Serial);

Now, we will look at the functions for controlling the motors. They are all based on a function to control a single motor, where we need to set the motor pins, the speed, and the direction of the motor: void send_motor_command(int speed_pin, int direction_pin, int pwm, boolean dir) { analogWrite(speed_pin, pwm); // Set PWM control, 0 for stop, and 255 for maximum speed digitalWrite(direction_pin, dir); // dir sets the rotation direction of the motor (true or false means forward or reverse) }

Based on this function, we can now define the different functions to move the robot, such as forward: int forward(String command) { send_motor_command(speed_motor1,direction_motor1,100,1); send_motor_command(speed_motor2,direction_motor2,100,1); return 1; }

We also define a backward function, simply inverting the direction of both motors: int backward(String command) { send_motor_command(speed_motor1,direction_motor1,100,0); send_motor_command(speed_motor2,direction_motor2,100,0); return 1; }

To make the robot turn left, we simply make the motors rotate in opposite directions: int left(String command) { send_motor_command(speed_motor1,direction_motor1,75,0); send_motor_command(speed_motor2,direction_motor2,75,1); return 1; }

We also have a function to stop the robot: int stop(String command) { send_motor_command(speed_motor1,direction_motor1,0,1); send_motor_command(speed_motor2,direction_motor2,0,1); return 1; }

There is also a function to make the robot turn right, which is not detailed here; a possible sketch of it follows below.
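For completeness, here is what the two pieces the walkthrough leaves out might look like. This is a sketch only: right() simply mirrors left() with the motor directions swapped, and measure_distance() assumes the URM37 is in PWM output mode, where, according to DFRobot's documentation, every 50 microseconds of low pulse corresponds to 1 cm (check the wiki linked above for your module's exact protocol):

// Turning right: mirror of left(), with both motor directions swapped
int right(String command) {
  send_motor_command(speed_motor1, direction_motor1, 75, 1);
  send_motor_command(speed_motor2, direction_motor2, 75, 0);
  return 1;
}

// Distance helper (assumption: URM37 in PWM output mode, 50 us per cm)
int measure_distance(int pin) {
  unsigned long pulse = pulseIn(pin, LOW);  // width of the low pulse in microseconds
  return pulse / 50;                        // convert to centimeters
}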
We are now going to test the robot. Before you do anything, ensure that the battery is always plugged into the robot. This will ensure that the motors are not trying to get power from your computer's USB port, which could damage it. Also place some small support at the bottom of the robot so that the wheels don't touch the ground. This will ensure that you can test all the commands of the robot without the robot moving too far from your computer, as it is still attached via the USB cable.

Now you can upload the sketch to your Arduino Uno board. Open the serial monitor and type the following: /forward

This should make both the wheels of the robot turn in the same direction. You can also try the other commands to move the robot to make sure they all work properly. Then, test the ultrasonic distance sensor by typing the following: /distance

You should get back the distance (in centimeters) in front of the sensor: {"distance": 24, "id": "001", "name": "mobile_robot", "connected": true}

Try changing the distance by putting your hand in front of the sensor and typing the command again.

Writing the Arduino sketch

Now that we have made sure that the robot is working properly, we can write the final sketch that will receive the commands via Bluetooth. As the sketch shares many similarities with the test sketch, we are only going to see what is added compared to the test sketch.

We first need to include more libraries: #include <SPI.h> #include "Adafruit_BLE_UART.h" #include <aREST.h>

We also define which pins the BLE module is connected to: #define ADAFRUITBLE_REQ 10 #define ADAFRUITBLE_RDY 2 // This should be an interrupt pin; on the Uno, that's pin 2 or 3 #define ADAFRUITBLE_RST 9

We have to create an instance of the BLE module: Adafruit_BLE_UART BTLEserial = Adafruit_BLE_UART(ADAFRUITBLE_REQ, ADAFRUITBLE_RDY, ADAFRUITBLE_RST);

In the setup() function of the sketch, we initialize the BLE chip: BTLEserial.begin();

In the loop() function, we check the status of the BLE chip and store it in a variable: BTLEserial.pollACI(); aci_evt_opcode_t status = BTLEserial.getState();

If we detect that a device is connected to the chip, we handle the incoming request with the aREST library, which will allow us to use the same commands as before to control the robot: if (status == ACI_EVT_CONNECTED) { rest.handle(BTLEserial); }

You can now upload the code to your Arduino board, again making sure that the battery is connected to the Arduino Uno board via the power jack. You can now move on to the development of the Android application to control the robot.

Summary

In this article, we managed to create our very own mobile robot together with a companion Android application that we can use to control our robot. We achieved this step by step by setting up an Arduino-enabled robot and coding the companion Android application. It uses the BLE software and hardware of an Android physical device running Android 4.3 or higher.

Resources for Article:

Further resources on this subject: Our First Project – A Basic Thermometer [article] Hardware configuration [article] Avoiding Obstacles Using Sensors [article]

First Projects with the ESP8266

Packt
17 Oct 2016
9 min read
In this article by Marco Schwartz, author of Internet of Things with ESP8266, we will build some basic projects with the ESP8266. Now that the ESP8266 chip is ready to be used and you can connect it to your Wi-Fi network, these projects will help you understand the basics of the ESP8266. (For more resources related to this topic, see here.)

We are going to see three projects in this article: how to control an LED, how to read data from a GPIO pin, and how to grab the contents from a web page. We will also see how to read data from a digital sensor.

Controlling an LED

First, we are going to see how to control a simple LED. Indeed, the GPIO pins of the ESP8266 can be configured to realize many functions: inputs, outputs, PWM outputs, and also SPI or I2C communications. This first project will teach you how to use the GPIO pins of the chip as outputs.

The first step is to add an LED to our project. These are the extra components you will need for this project:

- 5mm LED (https://www.sparkfun.com/products/9590)
- 330 Ohm resistor (to limit the current in the LED) (https://www.sparkfun.com/products/8377)

The next step is to connect the LED with the resistor to the ESP8266 board. To do so, the first thing to do is to place the resistor on the breadboard. Then, place the LED on the breadboard as well, connecting the longest pin of the LED (the anode) to one pin of the resistor. Then, connect the other end of the resistor to the GPIO pin 5 of the ESP8266, and the other end of the LED to the ground. This is how it should look at the end.

We are now going to light up the LED by programming the ESP8266 chip. This is the complete code for this section:

// Import required libraries #include <ESP8266WiFi.h> void setup() { // Set GPIO 5 as output pinMode(5, OUTPUT); // Set GPIO 5 on a HIGH state digitalWrite(5, HIGH); } void loop() { }

This code simply sets the GPIO pin as an output, and then applies a HIGH state on it. The HIGH state means that the pin is active, and that positive voltage (3.3V) is applied on the pin. A LOW state would mean that the output is at 0V.

You can now copy this code and paste it in the Arduino IDE. Then, upload the code to the board using the instructions from the previous article. You should immediately see that the LED is lighting up. You can shut it down again by using digitalWrite(5, LOW) in the code. You could also, for example, modify the code so the ESP8266 switches the LED on and off every second, as sketched below.
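A minimal version of that blinking variant might look like this (a sketch only, assuming the same wiring on GPIO 5):

// Import required libraries
#include <ESP8266WiFi.h>

void setup() {
  // Set GPIO 5 as output
  pinMode(5, OUTPUT);
}

void loop() {
  digitalWrite(5, HIGH);  // LED on
  delay(1000);            // wait one second
  digitalWrite(5, LOW);   // LED off
  delay(1000);
}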
Reading data from a GPIO pin

As a second project in this article, we are going to read the state of a GPIO pin. For this, we will use the same pin as in the previous project. You can therefore remove the LED and the resistor that we used in the previous project. Now, simply connect this pin (GPIO 5) of the board to the positive power supply on your breadboard with a wire, therefore applying a 3.3V signal on this pin.

Reading data from a pin is really simple. This is the complete code for this part:

// Import required libraries #include <ESP8266WiFi.h> void setup(void) { // Start Serial (to display results on the Serial monitor) Serial.begin(115200); // Set GPIO 5 as input pinMode(5, INPUT); } void loop() { // Read GPIO 5 and print it on Serial port Serial.print("State of GPIO 5: "); Serial.println(digitalRead(5)); // Wait 1 second delay(1000); }

We simply set the pin as an input, then read the value of this pin, and print it out every second. Copy and paste this code into the Arduino IDE, then upload it to the board using the instructions from the previous article. This is the result you should get in the Serial monitor: State of GPIO 5: 1

We can see that the returned value is 1 (digital state HIGH), which is what we expected, because we connected the pin to the positive power supply. As a test, you can also connect the pin to the ground, and the state should go to 0.

Grabbing the content from a web page

As a last project in this article, we are finally going to use the Wi-Fi connection of the chip to grab the content of a page. We will simply use the www.example.com page, as it's a basic page largely used for test purposes. This is the complete code for this project:

// Import required libraries #include <ESP8266WiFi.h> // WiFi parameters const char* ssid = "your_wifi_network"; const char* password = "your_wifi_password"; // Host const char* host = "www.example.com"; void setup() { // Start Serial Serial.begin(115200); // We start by connecting to a WiFi network Serial.println(); Serial.println(); Serial.print("Connecting to "); Serial.println(ssid); WiFi.begin(ssid, password); while (WiFi.status() != WL_CONNECTED) { delay(500); Serial.print("."); } Serial.println(""); Serial.println("WiFi connected"); Serial.println("IP address: "); Serial.println(WiFi.localIP()); } int value = 0; void loop() { Serial.print("Connecting to "); Serial.println(host); // Use WiFiClient class to create TCP connections WiFiClient client; const int httpPort = 80; if (!client.connect(host, httpPort)) { Serial.println("connection failed"); return; } // This will send the request to the server client.print(String("GET /") + " HTTP/1.1\r\n" + "Host: " + host + "\r\n" + "Connection: close\r\n\r\n"); delay(10); // Read all the lines of the reply from server and print them to Serial while(client.available()){ String line = client.readStringUntil('\r'); Serial.print(line); } Serial.println(); Serial.println("closing connection"); delay(5000); }

The code is really basic: we first open a connection to the example.com website, and then send a GET request to grab the content of the page. Using the while(client.available()) code, we also listen for incoming data, and print it all inside the Serial monitor. You can now copy this code and paste it into the Arduino IDE. This is what you should see in the Serial monitor: it is basically the content of the page, in pure HTML code.

Reading data from a digital sensor

In this last section of this article, we are going to connect a digital sensor to our ESP8266 chip, and read data from it. As an example, we will use a DHT11 sensor that can be used to get ambient temperature and humidity. You will need to get this component for this section: the DHT11 sensor (https://www.adafruit.com/products/386).

Let's now connect this sensor to your ESP8266. First, place the sensor on the breadboard. Then, connect the first pin of the sensor to VCC, the second pin to pin #5 of the ESP8266, and the fourth pin of the sensor to GND. This is how it will look at the end. Note that here I've used another ESP8266 board, the Adafruit ESP8266 breakout board.

We will also use the aREST framework in this example, so it's easy for you to access the measurements remotely. aREST is a complete framework to control your ESP8266 boards remotely (including from the cloud), and we are going to use it several times in the article. You can find more information about it at the following URL: http://arest.io/. Let's now configure the board.
The code is too long to be inserted here, but I will detail the most important parts of it now. It starts by including the required libraries: #include "ESP8266WiFi.h" #include <aREST.h> #include "DHT.h"

To install those libraries, simply look for them inside the Arduino IDE library manager.

Next, we need to define which pin the DHT sensor is connected to: #define DHTPIN 5 #define DHTTYPE DHT11

After that, we declare an instance of the DHT sensor: DHT dht(DHTPIN, DHTTYPE, 15);

As earlier, you will need to insert your own Wi-Fi name and password inside the code: const char* ssid = "wifi-name"; const char* password = "wifi-pass";

We also define two variables that will hold the measurements of the sensor: float temperature; float humidity;

In the setup() function of the sketch, we initialize the sensor: dht.begin();

Still in the setup() function, we expose the variables to the aREST API, so we can access them remotely via Wi-Fi: rest.variable("temperature",&temperature); rest.variable("humidity",&humidity);

Finally, in the loop() function, we take the measurements from the sensor: humidity = dht.readHumidity(); temperature = dht.readTemperature();

It's now time to test the project! Simply grab all the code and put it inside the Arduino IDE. Also make sure to install the aREST Arduino library using the Arduino library manager. Now, put the ESP8266 board in bootloader mode, and upload the code to the board. After that, reset the board, and open the Serial monitor. You should see the IP address of the board being displayed.

Now, we can access the measurements from the sensor remotely. Simply go to your favorite web browser, and type: 192.168.115.105/temperature

You should immediately get the answer from the board, with the temperature being displayed: { "temperature": 25.00, "id": "1", "name": "esp8266", "connected": true }

You can of course do the same with humidity. Note that here we used the aREST API; you can learn more about it at http://arest.io/. Congratulations, you just completed your very first projects using the ESP8266 chip! Feel free to experiment with what you learned in this article, and start learning more about how to configure your ESP8266 chip.

Summary

In this article, we realized our first basic projects using the ESP8266 Wi-Fi chip. We first learned how to control a simple output, by controlling the state of an LED. Then, we saw how to read the state of a digital pin on the chip. Finally, we learned how to read data from a digital sensor, and actually grab this data using the aREST framework. Next, we will go right into the main topic and build our first Internet of Things project using the ESP8266.

Resources for Article:

Further resources on this subject: Sending Notifications using Raspberry Pi Zero [article] The Raspberry Pi and Raspbian [article] Working with LED Lamps [article]

3D Printing

Packt
18 Nov 2013
9 min read
(For more resources related to this topic, see here.)

Function

The software starts by taking a .stl or .obj file, along with all our settings, and converts it into GCode. Think of the GCode as an instruction set for our printer, which includes where to move, how fast to move, whether or not to extrude material, extruder temperature, lowering the platform, and so on. The following screenshot shows an example of GCode produced by the ReplicatorG slicing engine Skeinforge at the start of a print: Start of GCode

The slicing engine is what tells your printer what to make and exactly how to make it. The algorithm involved directly relates to print quality, and thus gets the most attention from developers. It's at this stage we can see the tradeoff between software and hardware: maybe the printer has the capability to print with greater resolution, but current software might only break the model down so far? Or perhaps it's the opposite, where the software can break down the model finer than the hardware is capable of producing? It turns out this is a major factor in what separates personal and commercial printers, and even a $200 and a $2,000 printer: precision. More precision costs more money in both hardware and software development. The hardware must be able to handle the precision calculated in software, and where it cannot, software solutions must be implemented in circumvention. It's for these reasons that algorithms improve by leaps and bounds with every software update, and why newly released printers outperform their predecessors.

To make it easier on the microcontroller in the printer, the GCode is converted into .s3g or .x3g code, which is essentially just optimized GCode. From here it is used to generate motor steps and direction pulses, which are sent to the motor controller and then to the motors. It's at this stage we realize that the process of 3D printing is just a handful of motors moving in a set pattern, combined with a heater to melt the plastic material. The magic of 3D printing happens behind the scenes, inside the slicing algorithm, in order to create those explicit patterns.
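Since the screenshot mentioned above is not reproduced here, a few illustrative lines of start-of-print GCode follow. These are typical commands with hypothetical values chosen for illustration, not taken from the screenshot:

M104 S230           ; set the extruder temperature to 230 C
G28                 ; home all axes
G92 E0              ; reset the extruded-length counter
G1 Z0.2 F1200       ; move to the first layer height
G1 X50 Y50 E5 F1800 ; move while extruding material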
MakerWare

You may be thinking, "This software is made by MakerBot and is intended for use with MakerBot printers; that sounds like the best option"; well, you are mostly correct. MakerWare is currently still in beta, but is released for general use. The software is always in revision, and has made major improvements in a very short period of time. Logically, it would make sense to use the software explicitly designed for use with your printer, but up until MakerWare v2.2, the ReplicatorG software had been superior. With all the improvements made in MakerWare's latest release, v2.3.1, I would argue that the MakerWare software surpasses ReplicatorG for use with a MakerBot 3D printer. This is one of the joys of being involved with MakerBot and the 3D printing environment: the products are always evolving and always improving. In a period of five years, MakerBot went from an idea to being acquired by Stratasys for $403 million. This speaks for how fast the industry is moving, and how fast the technology is advancing. For those interested, visit http://www.makerbot.com/blog/category/makerbot-software-updates/ to see a detailed description (lots of pictures) of the improvements in each MakerWare update.

ReplicatorG

You might be wondering why we would even consider the ReplicatorG software. The simple answer is that, with the release of MakerWare v2.3.1 and the purchase of a MakerBot, we wouldn't. The ReplicatorG software undoubtedly served as a building block for many of the features in MakerWare, and was the leading software for personal/hobbyist 3D printing for many years. The MakerWare software will meet all the needs of our designs, but if you are interested in learning more about open source 3D printing, I would suggest checking out this software. We have chosen to use MakerWare (v2.3.1) for the examples in this article, as this software is most tailored to our needs. Visit http://www.makerbot.com/makerware/ to download your own beta copy.

MakerWare options and settings

The first step after opening MakerWare is to add a model to the build platform by clicking on the Add button. Let's add Mr Jaws (by navigating to File | Examples | Mr_Jaws.stl) to our build platform. Once the model has been added, it needs to be selected by left-clicking it. This should highlight the model in yellow, as shown in the following screenshot: Mr Jaws is selected

Notice the buttons on the left-hand side, which are intuitively labeled Look, Move, Turn, and Scale. Clicking these buttons allows us to orient our model. Let's move Mr Jaws to the top-right corner, spin him 180 degrees in the Z-plane, and scale him to 110 percent. The result can be seen in the following screenshot: Mr Jaws is moved, rotated, and scaled

Once we are satisfied with the orientation, click on Make, located at the top of the screen, to open up the print options. Ensure that your model is sitting on the platform by hitting the On Platform button (by navigating to Move | On Platform); otherwise, you will print many layers of supporting material before finally reaching your part (if your part is floating above the platform), or you will damage your nozzle and platform (if your part is floating below the platform).

Print options

The default options are fairly straightforward, and will modify all the advanced options automatically. We will use the following table to describe each of the default options:

I want to: This gives the option of where to save the file, or to send it directly to your MakerBot if it's plugged in by USB.

Export For: This helps you select your MakerBot.

Material: This helps you select the print material; PLA is recommended.

Resolution: MakerWare has three quick-set profiles: Low, Standard, and High. These profiles reference the desired print resolution and directly control the Z-layer height. Remember that higher resolution requires longer print times.

Raft: A raft is a surface slightly larger than the part, which is built between the bottom of the part and the build platform. Rafts help reduce warping by having more surface area adhered to the build platform. Once a part has been printed, the raft is easily broken away. We will always be using rafts to help reduce errors in our models brought about by poor adhesion and warping.

Supports: Supports are used to support sharp overhangs. We will always have the support box checked, as the software will determine when and where to use supports and will not print them if they are not needed. This ensures we will never run into errors from floating layers.

By evaluating the advanced options, we are able to observe the result of changing the default options and gain a little more insight into the settings that influence our print. The following is a table describing these advanced options:

Profile: Profiles handle all the settings of a print. Profiles are a grouping of preselected options.
By default, there are three profiles (Low, Standard, and High), but you can create your own custom profiles.

Slicer: The options are between the MakerBot slicer and the Skeinforge slicer, and can be changed by creating a new profile. It is recommended that we use the MakerBot slicing engine because it has been optimized for use with MakerBot printers, as mentioned earlier.

Quality | Infill: Infill is the density of the object, measured as a percentage. By default, the amount of infill is low (less than 20 percent) to save both time and material. The slicing engine will automatically create a pattern for the infill (most commonly honeycomb).

Quality | Number of Shells: The number of shells represents the perimeter thickness of your model. One shell corresponds to one layer (which is approximately 0.4 mm, the width of the nozzle opening, in XY, and will depend on the next property, Quality | Layer Height, for the Z thickness). All the profiles default to two shells. If strength is a concern for your model, it is suggested to increase the number of shells rather than infill. Adding shells will also increase print times.

Quality | Layer Height: Layer height is the height of each individual cross-sectional layer. The MakerBot Replicator 2 is capable of heights as low as 0.1 mm. This corresponds to 10 layers for a model of 1 mm in height. Lowering the height will increase print times.

Temperature | Extruders: The extruder temperature defaults to 230 C. Greater temperatures can improve adhesion but may require slower printing. Every individual has their own "magic number" for temperature which they feel works best, but it's best to stay within a +/- 10 C range of the default. For our models we will be using the default 230 C.

Temperature | Build Plate: A heated build plate is only required if we are printing in ABS, in order to reduce warping. The default is 110 C, and slightly higher temperatures will improve adhesion but also risk greater warping upon cooling.

Speed | Speed while Extruding: Extruder speed and temperature are directly linked to one another; the speed needs to be slow enough to allow the layer currently being extruded to bond with the layer underneath. Greater speeds have the potential to reduce accuracy but will decrease print time. Be extremely cautious while modifying this parameter, as it takes experience to match increased speed and temperature properly.

Speed | Speed while Travelling: When not extruding, the extruder head is capable of faster travel. It is recommended to leave this parameter as set.

Preview before printing: This is an amazing tool and you should check it off every time. Preview before printing allows you to see an approximate material use and time estimate for printing the given model.

Summary

This article explained some basic steps to prepare a 3D model in MakerWare and get it ready for 3D printing.

Resources for Article:

Further resources on this subject: Design Tools and Basics [Article] Materials with Ogre 3D [Article] Importing 3D Formats into Away3D [Article]

Calling your fellow agents

Packt
04 Feb 2015
12 min read
In this article by Stefan Sjogelid, author of the book Raspberry Pi for Secret Agents, Second Edition, we will be setting up SIP Witch, adding softphones and connecting them together, and then running a softphone on the Pi itself. When you're out in the field and need to call in a favor from a fellow agent or report back to HQ, you don't want to depend on the public phone network if you can avoid it. Landlines and cell phones alike can be tapped by all sorts of shady characters and, to add insult to injury, you have to pay good money for this service. We can do better.

Welcome to the wonderful world of Voice over IP (VoIP). VoIP is a blanket term for any technology capable of delivering speech between two end users over IP networks. There are plenty of services and protocols out there that try to meet this demand, most of which force you to connect through a central server that you don't own or control. We're going to turn the Pi into the central server of our very own phone network. To aid us with this task, we'll deploy GNU SIP Witch: a peer-to-peer VoIP server that uses Session Initiation Protocol (SIP) to route calls between phones. While there are many excellent VoIP servers available (Asterisk, FreeSwitch, Yate, and so on), SIP Witch has the advantage of being very lightweight on the Pi because its only concern is connecting phones and not much else. (For more resources related to this topic, see here.)

Setting up SIP Witch

Once we have the SIP server up and running, we'll be adding one or more software phones, or softphones. It's assumed that the server and phones will all be on the same network. Let's get started!

Install SIP Witch using the following command: pi@raspberrypi ~ $ sudo apt-get install sipwitch

Just as the output of the previous command says, we have to define PLUGINS in /etc/default/sipwitch before running SIP Witch. Let's open it up for editing: pi@raspberrypi ~ $ sudo nano /etc/default/sipwitch

Find the line that reads #PLUGINS="zeroconf scripting subscriber forward" and remove the # character to uncomment the line. This directive tells SIP Witch that we want the standard plugins to be loaded. Next we'll have a look at the main SIP Witch configuration file: pi@raspberrypi ~ $ sudo nano /etc/sipwitch.conf

Note how some blocks of text are between <!-- and --> tags. These are comments in XML documents and are ignored by SIP Witch. Whatever changes you want to make, ensure they go outside of those tags. Now we're going to add a few softphone user accounts. It's up to you how many phones you'd like on your system, but each account needs a username, an extension (short phone number), and a password. Find the <provision> tag, make a new line, and add your users:

<user id="phone1"> <extension>201</extension> <secret>SecretSauce201</secret> <display>Agent 201</display> </user> <user id="phone2"> <extension>202</extension> <secret>SecretSauce202</secret> <display>Agent 202</display> </user>

The user ID will be used as a user/login name later from the softphones. In this default configuration, the extensions can be any number between 201 and 299. The secret is the password that will go together with the username on the softphones. We will look into a better way of storing passwords later in this article. Finally, the display string defines an identity to present to other phones when calling. One more thing that we need to configure is how SIP Witch should treat local names. This makes it possible to call a phone by user ID in addition to the extension.
Find the <stack> tag, make a new line, and add the following directive, but replace [IP address] with the IP address of your Pi: <localnames>[IP address]</localnames>

Those are all the changes we need to make to the configuration at the moment. Basic SIP Witch configuration for two phones

With our configuration in place, let's start up the SIP Witch service: pi@raspberrypi ~ $ sudo service sipwitch start

The SIP Witch server runs in the background and only outputs to a log file, viewable with this command: pi@raspberrypi ~ $ sudo cat /var/log/sipwitch.log

Now we can use the sipwitch command to interact with the running service. Type sipwitch for a list of all possible commands. Here's a short list of particularly handy ones:

- sudo sipwitch dump: Shows how the SIP Witch server is currently configured.
- sudo sipwitch registry: Lists all currently registered softphones.
- sudo sipwitch calls: Lists active calls.
- sudo sipwitch message [extension] "[text]": Sends a text message from the server to an extension. Perfect for sending status updates from the Pi through scripting.

Connecting the softphones

Running your own telecommunications service is kind of boring without actual phones to make use of it. Fortunately, there are softphone applications available for most common electronic devices out there. The configuration of these phones will be pretty much identical no matter which platform they're running on. This is the basic information that will always need to be specified when configuring your softphone application:

- User / Login name: phone1 or phone2 in our example configuration
- Password / Authentication: The user's secret in our configuration
- Server / Host name / Domain: The IP address of your Pi

Once a softphone is successfully registered with the SIP Witch server, you should be able to see that phone listed using the sudo sipwitch registry command. What follows is a list of verified, decent softphones that will get the job done.

Windows (MicroSIP)

MicroSIP is an open source softphone that also supports video calls. Visit http://www.microsip.org/downloads to obtain and install the latest version (MicroSIP-3.8.1.exe at the time of writing). Right-click on either the status bar in the main application window or the system tray icon to bring up the menu that lets you access the Account settings. Configuring the MicroSIP softphone for Windows

Mac OS X (Telephone)

Telephone is a basic open source softphone that is easily installed through the Mac App store. Configuring the Telephone softphone for Mac OS X

Linux (SFLphone)

SFLphone is an open source softphone with packages available for all major distributions and client interfaces for both GNOME and KDE. Use your distribution's package manager to find and install the application. Configuring the SFLphone GNOME client in Ubuntu

Android (CSipSimple)

CSipSimple is an excellent open source softphone available from the Google Play store. When adding your account, use the basic generic wizard. Configuring the CSipSimple softphone on Android

iPhone/iPad (Linphone)

Linphone is an open source softphone that is easily installed through the iPhone App store. Select I have already a SIP-account to go to the setup assistant. Configuring Linphone on the iPhone

Running a softphone on the Pi

It's always good to be able to reach your agents directly from HQ, that is, the Pi itself. Proving once again that anything can be done from the command line, we're going to install a softphone called Linphone that will make good use of your USB microphone.
This new softphone obviously needs a user ID and password just like the others. We will take this opportunity to look at a better way of storing passwords in SIP Witch.

Encrypting SIP Witch passwords

Type sudo sipwitch dump to see how SIP Witch is currently configured. Find the accounts: section and note how there's already a user ID named pi with extension 200. This is the result of a SIP Witch feature that automatically assigns an extension number to certain Raspbian user accounts. You may also have noticed that the display string for the pi user looks empty. We can easily fix that by filling in the full name field for the Raspbian pi user account with the following command: pi@raspberrypi ~ $ sudo chfn -f "Agent HQ" pi

Now restart the SIP Witch server with sudo service sipwitch restart and verify with sudo sipwitch dump that the display string has changed. So how do we set the password for this automatically added pi user? For the other accounts, we specified the password in clear text inside <secret> tags in /etc/sipwitch.conf. This is not the best solution from a security perspective if your Pi would happen to fall into the wrong hands. Therefore, SIP Witch supports specifying passwords in encrypted digest form. Use the following command to create an encrypted password for the pi user: pi@raspberrypi ~ $ sudo sippasswd pi

We can then view the database of SIP passwords that SIP Witch knows about: pi@raspberrypi ~ $ sudo cat /var/lib/sipwitch/digests.db

Now you can add digest passwords for your other SIP users as well, and then delete all <secret> lines from /etc/sipwitch.conf to be completely free of clear text.

Setting up Linphone

With our pi user account up and ready to go, let's proceed to set up Linphone. Linphone does actually have a graphical user interface, but we'll specify that we want the command-line-only client: pi@raspberrypi ~ $ sudo apt-get install linphone-nogtk

Now we fire up the Linphone command-line client: pi@raspberrypi ~ $ linphonec

You will immediately receive a warning that reads: Warning: Could not start udp transport on port 5060, maybe this port is already used. That is, in fact, exactly what is happening. The standard communication channel for the SIP protocol is UDP port 5060, and it's already in use by our SIP Witch server. Let's tell Linphone to use port 5062 with this command: linphonec> ports sip 5062

Next we'll want to set up our microphone. Use these three commands to list, show, and select what audio device to use for phone calls: linphonec> soundcard list linphonec> soundcard show linphonec> soundcard use [number]

For the softphone to perform reasonably well on the Pi, we'll want to make adjustments to the list of codecs that Linphone will try to use. The job of a codec is to compress audio as much as possible while retaining high quality. This is a very CPU-intensive process, which is why we want to use the codec with the least amount of CPU load on the Pi, namely PCMU or PCMA. Use the following command to list all currently supported codecs: linphonec> codec list

Now use this command to disable all codecs that are not PCMU or PCMA: linphonec> codec disable [number]

It's time to register our softphone to the SIP Witch server. Use the following command, but replace [IP address] with the IP address of your Pi and [password] with the SIP password you set earlier for the pi user: linphonec> register sip:pi@[IP address] sip:[IP address] [password]

That's all you need to start calling your fellow agents from the Pi itself.
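For example, with a Pi at 192.168.1.10 and a password of AgentHQsecret (both values assumed here purely for illustration; substitute your own), and with the phone1 account registered from another device, a quick test session could look like this:

linphonec> register sip:pi@192.168.1.10 sip:192.168.1.10 AgentHQsecret
linphonec> call phone1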
Type help to get a list of all commands that Linphone accepts. The basic commands are call [user id] to call someone, answer to pick up incoming calls, and quit to exit Linphone. All the settings that you've made will be saved to ~/.linphonerc and loaded the next time you start linphonec.

Playing files with Linphone

Now that you know the Linphone basics, let's explore some interesting features not offered by most other softphones. At any time (except during a call), you can switch Linphone into file mode, which lets us experiment with alternative audio sources. Use this command to enable file mode: linphonec> soundcard use files

Do you remember eSpeak from earlier in this chapter? While you rest your throat, eSpeak can provide its soothing voice to carry out entire conversations with your agents. If you haven't already got it, install eSpeak first: pi@raspberrypi ~ $ sudo apt-get install espeak

Now we tell Linphone what to say next: linphonec> speak english Greetings! I'm a Linphone, obviously.

This sentence will be spoken as soon as there's an established call. So you can either make an outgoing call or answer an incoming call to start the conversation, after which you're free to continue the conversation in Italian: linphonec> speak italian Buongiorno! Mi chiamo Enzo Gorlami.

Should you want a message to play automatically when someone calls, just toggle auto answer: linphonec> autoanswer enable

How about playing a pre-recorded message or some nice grooves? If you have a WAV or MP3 file that you'd like to play over the phone, it has to be converted to a suitable format first. A simple SoX command will do the trick: pi@raspberrypi ~ $ sox "original file.mp3" -c 1 -r 48000 playme.wav

Now we can tell Linphone to play the file: linphonec> play playme.wav

Finally, you can also record a call to file. Note that only the remote part of the conversation can be recorded, which makes this feature more suitable for leaving messages and such. Use the following command to record: linphonec> record message.wav

Summary

In this article, we set up our very own phone network using SIP Witch and connected softphones running on a wide variety of platforms, including the Pi itself.

Resources for Article:

Further resources on this subject: Our First Project – A Basic Thermometer [article] Testing Your Speed [article] Creating a 3D world to roam in [article]

Debugging Applications with PDB and Log Files

Packt
23 Sep 2015
13 min read
In this article by Dan Nixon, author of the book Getting Started with Python and Raspberry Pi, we will learn more about how to debug Python code using the Python Debugger (PDB) tool and how we can use the Python logging framework to make complex applications written in Python easier to debug when they fail. (For more resources related to this topic, see here.)

We will also look at the technique of unit testing and how the unittest Python module can be used to test small sections of a Python application to ensure that it is functioning as expected. These techniques are commonly used in applications written in other languages and are good skills to learn if you are often going to be developing applications.

The Python debugger

PDB is a tool that allows real-time debugging of running Python code. It can help to track down issues with the logic of a program to help find the cause of a crash or unexpected behavior. PDB can be launched with the following command: pdb2.7 do_calculation.py

This will open a new PDB shell, as shown in the following screenshot. We can use the continue command (which can be shortened to c) to execute the next section of the code until a breakpoint is hit. As we are yet to declare any breakpoints, this will run the script until it exits normally.

We can set breakpoints in the application, where the program will be stopped, and you will be taken back to the PDB shell in order to debug the control flow of the program. The easiest way to set a breakpoint is by giving a specific line in a file, for example: break Operation.py:7

This command will add a breakpoint on line 7 of Operation.py. When this is added, PDB will confirm the file and the line number. Now, when we run the application, we will see the program stop each time the breakpoint is reached. When a breakpoint is reached, we can resume the program using the c command.

When paused at a breakpoint, we can view the details of the local variables in the current scope. For example, in the breakpoint we have added, there is a variable named name, which we can see the value of by using the following command: p name

This outputs the value of the variable. When at a breakpoint, we can also get a stack trace of the functions that have been called so far. This is done using the bt command.

We can also modify the values of the variables when paused at a breakpoint. To do this, simply assign a value to the variable name as you would in a regular Python script: name = 'subtract'

In the following screenshot, this was used to change the first operation in the do_calculation.py script from add to subtract; the effect on the calculation is seen in the different result value. When at a breakpoint, we can also use the l command to see the current line the program is paused at.

We can also set up a series of commands to be executed when we hit a breakpoint. This can allow debugging to be automated to an extent, by automatically recording or modifying the values of the variables at certain points in the program's execution. This can be demonstrated using the following commands on a new instance of PDB with no breakpoints set (first, quit PDB using the q command, and then re-launch it): break Operation.py:7 commands p name c

This gives the following output.
Note that these commands are entered at a prompt prefixed with (com) rather than the usual (Pdb) prompt. This set of commands tells PDB to print the value of the name variable and continue execution when the last added breakpoint is hit.

Within PDB, you can also use the ? command to get a full list of the available commands and help on using them. Further information and full documentation on PDB is available at https://docs.python.org/2/library/pdb.html.

Writing log files

The next technique we will look at is having our application output a log file. This allows us to get a better understanding of what was happening at the time an application failed, which can provide key information for finding the cause of the failure, especially when the failure is being reported by a user of your application.

We will add some logging statements to the Calculator.py and Operation.py files. To do this, we must first add the import for the logging module (https://docs.python.org/2/library/logging.html) to the start of each Python file, which is simply: import logging

In the Operation.py file, we will add two logging calls in the evaluate function, as shown in the following code: def evaluate(self, a, b): logging.getLogger(__name__).info("Evaluating operation: %s" % (self._operation)) logging.getLogger(__name__).debug("RHS: %f, LHS: %f" % (a, b))

This will output two logging statements: one at the debug level and one at the information level. There are in total five unique levels at which messages can be output. In increasing severity, they are: debug(), info(), warning(), error(), critical(). Log handlers can be filtered to only process the log messages of a certain severity if required; we will see this in action later in this section.

The logging.getLogger(__name__) call is used to retrieve the Logger class for the current module (where the name of the module is given by the __name__ variable). By default, each module uses its own Logger class identified by the name of the module.
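As a quick standalone illustration of the severity levels and filtering just described (this sketch is not part of the calculator code):

import logging

# The handler is configured to discard anything below WARNING severity
logging.basicConfig(level=logging.WARNING)
log = logging.getLogger(__name__)

log.debug("hidden")    # below WARNING, filtered out
log.info("hidden")     # below WARNING, filtered out
log.warning("shown")
log.error("shown")
log.critical("shown")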
Next, we can add some debugging statements to the Calculator.py file in the same way. Here, we will add logging to the enter_value, enter_operation, evaluate, and all_clear functions, as shown in the following code snippet:

def enter_value(self, value): if len(self._input_list) > 0 and not isinstance(self._input_list[-1], Operation): raise RuntimeError("Must enter an operation next") logging.getLogger(__name__).info("Adding value: %f" % (value)) self._input_list.append(float(value))

def enter_operation(self, operation_name): if len(self._input_list) == 0 or isinstance(self._input_list[-1], Operation): raise RuntimeError("Must enter a value next") logging.getLogger(__name__).info("Adding operation: %s" % (operation_name)) self._input_list.append(Operation(operation_name))

def evaluate(self): logging.getLogger(__name__).info("Evaluating calculation") if len(self._input_list) % 2 == 0: raise RuntimeError("Input length mismatch") self._result = self._input_list[0] for idx in range(1, len(self._input_list), 2): operation = self._input_list[idx] next_value = self._input_list[idx + 1] logging.getLogger(__name__).debug("Next function: %f %s %f" % ( self._result, str(operation), next_value)) self._result = operation.evaluate(self._result, next_value) logging.getLogger(__name__).info("Result is: %f" % (self._result)) return self._result

def all_clear(self): logging.getLogger(__name__).info("Clearing calculator") self._input_list = [] self._result = 0.0

Finally, we need to configure a handler for the log messages. This is what will handle the messages sent by each logger and output them to a suitable destination; for example, the standard output or a file. We will configure this in the do_calculation.py file. First, we will configure a basic handler that will print all the log messages to the standard output so that they appear on the terminal. This can be achieved with the following code: logging.basicConfig(level=logging.DEBUG)

We will also add the following line to the end of the script. This is used to close any open log handlers and should be included at the very end of an application (the logging framework should not be used after calling this function): logging.shutdown()

Now, we can see the effects by running the script using the following command: python do_calculation.py

This will give an output to the terminal, as shown in the following screenshot. We can also have the log output written to a file instead of printed to the terminal, by adding a filename to the logger configuration. This helps to keep the terminal free of unnecessary information: logging.basicConfig(level=logging.DEBUG, filename='calc.log')

When executed, this will give no additional output other than the result of the calculation, but will have created an additional file, calc.log, which contains the log messages.
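If you would like the log messages to go both to the terminal and to a file at the same time, one possible approach (a sketch using the standard library's handlers, not taken from the book) is to attach two handlers to the root logger instead of calling basicConfig:

import logging

root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(logging.StreamHandler())          # messages to the terminal
root.addHandler(logging.FileHandler('calc.log'))  # messages to a file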
Unit testing

Unit testing is a technique for automated testing of small sections ("units") of code to ensure that the components of a larger application are working as intended, independently of each other. There are many frameworks for this in almost every language. In Python, we will be using the unittest module, as this is included with the language and is the most common framework used in Python applications.

To add unit tests to our calculator module, we will create an additional module in the same directory named test. Inside that will be three files: __init__.py (used to denote that a directory is a Python package), test_Calculator.py, and test_Operation.py. After creating this additional module, the structure of the code will be the same as shown in the following image.

Next, we will modify the test_Operation.py file to include a test case for the Operation class. As always, this will start with the required imports for the modules we will be using: import unittest from calculator.Operation import Operation

We will be creating a class, test_Operation, which inherits from the TestCase class provided by the unittest module. This contains the logic required to run the functions of the class as individual unit tests: class test_Operation(unittest.TestCase):

Now, we will define four tests to test the creation of a new Operation instance for each of the operations that are supported by the class. Here, the assertEqual function is used to test for equality between two variables; this determines if the test passes or not:

def test_create_add(self): op = Operation('add') self.assertEqual(str(op), 'add')

def test_create_subtract(self): op = Operation('subtract') self.assertEqual(str(op), 'subtract')

def test_create_multiply(self): op = Operation('multiply') self.assertEqual(str(op), 'multiply')

def test_create_divide(self): op = Operation('divide') self.assertEqual(str(op), 'divide')

In this test, we are checking that a ValueError is raised when an unknown operation is given to the Operation constructor. We will do this using the assertRaises function: def test_create_fails(self): self.assertRaises(ValueError, Operation, 'not_a_function')

Next, we will create four tests to ensure that each of the known operations evaluates to the correct result:

def test_add(self): op = Operation('add') result = op.evaluate(5, 2) self.assertEqual(result, 7)

def test_subtract(self): op = Operation('subtract') result = op.evaluate(5, 2) self.assertEqual(result, 3)

def test_multiply(self): op = Operation('multiply') result = op.evaluate(5, 2) self.assertEqual(result, 10)

def test_divide(self): op = Operation('divide') result = op.evaluate(5, 2) self.assertEqual(result, 2)

This will form the test case for the Operation class. Typically, the test file for a module should have the name of the module prefixed by test, and the name of each test function within a test case class should start with test.

Next, we will create a test case for the Calculator class in the test_Calculator.py file. This again starts by importing the required modules and defining the class: import unittest from calculator.Calculator import Calculator class test_Calculator(unittest.TestCase):

We will now add two test cases that test the correct handling of errors when operations and values are entered in the incorrect order. This time, we will use the assertRaises function to create a context to test for RuntimeError being raised. In this case, the error must be raised by any of the code within the context:

def test_add_value_out_of_order_fails(self): with self.assertRaises(RuntimeError): calc = Calculator() calc.enter_value(5) calc.enter_value(5) calc.evaluate()

def test_add_operation_out_of_order_fails(self): with self.assertRaises(RuntimeError): calc = Calculator() calc.enter_operation('add') calc.evaluate()

This test is to ensure that the all_clear function works as expected. Note that, here, we have multiple test assertions in the function, and all assertions have to pass for the test to pass:
def test_all_clear(self): calc = Calculator() calc.enter_value(5) calc.evaluate() self.assertEqual(calc.get_result(), 5) calc.all_clear() self.assertEqual(calc.get_result(), 0)

This test ensures that the evaluate() function works as expected and checks the output of a known calculation. Note, here, that we are using the assertAlmostEqual function, which ensures that two numerical variables are equal within a given tolerance, in this case 13 decimal places:

def test_evaluate(self): calc = Calculator() calc.enter_value(5.0) calc.enter_operation('multiply') calc.enter_value(2.0) calc.enter_operation('divide') calc.enter_value(5.0) calc.enter_operation('add') calc.enter_value(18.0) calc.enter_operation('subtract') calc.enter_value(5.0) self.assertAlmostEqual(calc.evaluate(), 15.0, 13) self.assertAlmostEqual(calc.get_result(), 15.0, 13)

These two tests will check that errors are handled correctly when the evaluate() function is called while there are values missing from the input or the input is empty:

def test_evaluate_failure_empty(self): with self.assertRaises(RuntimeError): calc = Calculator() calc.enter_operation('add') calc.evaluate()

def test_evaluate_failure_missing_value(self): with self.assertRaises(RuntimeError): calc = Calculator() calc.enter_value(5) calc.enter_operation('add') calc.evaluate()

That completes the test case for the Calculator class. Note that we have only used a small subset of the available test assertions over our two test classes. A full list of all the test assertions is available in the unittest module documentation at https://docs.python.org/2/library/unittest.html#test-cases.

Once all the tests are written, they can be executed using the following command in the directory containing both the calculator and tests directories: python -m unittest discover -v

Here, we have the unit test framework discover all the tests automatically (which is why following the expected naming convention of prefixing names with "test" is important). We also request verbose output with the -v parameter, which shows all the tests executed and their results.

Summary

In this article, we looked at how the PDB tool can be used to find faults in Python code and applications. We also looked at using the logging module to have Python code output a log file during execution, and how this can make debugging failures easier, as well as automated unit testing for portions of the application.

Resources for Article:

Further resources on this subject: Basic Image Processing [article] Remote Desktop to Your Pi from Everywhere [article] Scraping the Data [article]
Summary

In this article, we looked at how the PDB tool can be used to find faults in Python code and applications. We also looked at using the logging module to have Python code output a log file during execution, how this can make debugging failures easier, and at automated unit testing for portions of the application.

Resources for Article:

Further resources on this subject:

Basic Image Processing [article]
Remote Desktop to Your Pi from Everywhere [article]
Scraping the Data [article]

Sewable LEDs in Clothing

Packt
14 Aug 2017
16 min read
In this article by Jonathan Witts, author of the book Wearable Projects with Raspberry Pi Zero, we will use sewable LEDs and conductive thread to transform an item of clothing into a sparkling LED piece of wearable tech, controlled with our Pi Zero hidden in the clothing. We will incorporate a Pi Zero and battery into a hidden pocket in the garment and connect our sewable LEDs to the Pi's GPIO pins, so that we can write Python code to control the order and timings of the LEDs.

(For more resources related to this topic, see here.)

To deactivate the software running automatically, connect to your Pi Zero over SSH and issue the following command:

    sudo systemctl disable scrollBadge.service

Once that command completes, you can shut down your Pi Zero by pressing your off-switch for three seconds. Now let's look at what we are going to cover in this article.

What We Will Cover

In this article we will cover the following topics:

    What we will need to complete the project in this article
    How we will modify our item of clothing
    Writing a Python program to control the electronics in our modified garment
    Making our program run automatically

Let's jump straight in and look at what parts we will need to complete the project in this article.

Bill of parts

We will make use of the following things in the project in this article:

    A Pi Zero W
    An official Pi Zero case
    A portable battery pack
    An item of clothing to modify, e.g. a top or t-shirt
    10 sewable LEDs
    Conductive thread
    Some fabric the same color as the clothing
    Thread the same color as the clothing
    A sewing needle
    Pins
    6 metal poppers
    Some black and yellow colored cable
    Solder and a soldering iron

Modifying our item of clothing

So let's take a look at what we need to do to modify our item of clothing, ready to accommodate our Pi Zero, battery pack, and sewable LEDs. We will start by looking at creating our hidden pocket for the Pi Zero and the batteries, followed by how we will sew our LEDs into the top and design our sewable circuit. We will then solve the problem of connecting our conductive thread back to the GPIO holes on our Pi Zero.

Our hidden pocket

You will need a piece of fabric that is large enough to house your Pi Zero case and battery pack alongside each other, with enough spare to hem the pocket all the way round. If you have access to a sewing machine then this section of the project will be much quicker; otherwise you will need to do the stitching by hand.

The piece of fabric I am using is 18 x 22 cm, allowing for a 1 cm hem all around. Fold and pin the 1 cm hem, and then either stitch by hand or run it through a sewing machine to secure your hem. When you have finished, remove the pins.

You then need to turn your garment inside out and decide where you are going to position your hidden pocket. As I am using a t-shirt for my garment, I am going to position my pocket just inside at the bottom side of the garment, wrapping around the front and back so that it sits across the wearer's hip. Pin your pocket in place and then stitch it along the bottom, left, and right sides, leaving the top open. Make sure that you stitch this nice and firmly, as it has to hold the weight of your battery pack. The picture below shows you my pocket after being stitched in place.

When you have finished this, you can remove the pins and turn your garment the correct way round again.

Adding our sewable LEDs

We are now going to plan our circuit for our sewable LEDs and add them to the garment. I am going to run a line of 10 LEDs around the bottom of my top.
These will be wired up in pairs, so that we can control a pair of LEDs at any one time with our Pi Zero. You can position your LEDs however you want, but it is important that the circuits of conductive thread do not cross one another.

Turn your garment inside out again and mark with a washable pen where you want your LEDs to be sewn. As the fabric for my garment is quite light, I am going to just stitch them inside and let the LEDs shine through the material. However, if your material is heavier, then you will have to put a small cut where each LED will be, and then button-hole the cut so the LED can push through.

Start with the LED furthest from your hidden pocket. Once you have your LED in position, take a length of your conductive thread and over-sew the negative hole of your LED to your garment, trying to keep your stitches as small and as neat as possible, to minimize how much they can be seen from the front of the garment. You now need to stitch small running stitches from the first LED to the second; ensure that you use the same piece of conductive thread and do not cut it! When you get to the position of your LED, again over-sew the negative hole of your LED, ensuring that it is face down so that the LED shows through the fabric. As I am stitching my LEDs quite close to the hem of my t-shirt, I have made use of the hem to run the conductive thread in when connecting the negative holes of the LEDs, as shown in the following image:

Continue on, connecting each negative point of your LEDs to each other with a single length of conductive thread. Your final LED will be the one closest to your hidden pocket; continue with a running stitch until you are on your hidden pocket. Now take the male half of one of your metal poppers and over-sew this in place through one of the holes. You can now cut the length of conductive thread, as we have completed our common ground circuit for all the LEDs. Now stitch the three remaining holes with standard thread, as shown in this picture.

When cutting the conductive thread, be sure to leave a very short end. If two pieces of thread were to touch when we were powering the LEDs, we could cause a short circuit.

We now need to sew our positive connections to our garment. Start with a new length of conductive thread and attach the LED second closest to your hidden pocket. Again, over-sew it to the fabric, trying to ensure that your stitches are as small as possible so they are not very visible from the front of the garment. Now you have secured the first LED, sew a small running stitch to the positive connection on the LED closest to the hidden pocket. After securing this LED, stitch a running stitch so that it stops alongside the popper you previously secured to your pocket, and this time attach a female half of a metal popper in the same way as before, as shown in this picture:

Secure your remaining 8 LEDs in the same fashion, working in pairs away from the pocket, so that you are left with 1 male metal popper and 5 female metal poppers in a line on your hidden pocket. Ensure that the 6 different conductive threads do not cross at any point, that the six poppers do not touch, and that you have the positive and negative connections the right way round! The picture below shows you the completed circuit stitched into the t-shirt, terminating at the poppers on the pocket.

Connecting our Pi Zero

Now we have our electrical circuit, we need to find a way to attach our Pi Zero to each pair of LEDs and the common ground we have sewn into our garment.
We are going to make use of the poppers we stitched onto our hidden pocket for this. You have probably noticed that the only piece of conductive thread we attached a male popper to was the common ground thread for our LEDs. This is so that, when we construct our method of attaching the Pi Zero GPIO pins to the conductive thread, it will be impossible to connect the positive and negative threads the wrong way round! Another reason for using the poppers to attach our Pi Zero to the conductive thread is that the LEDs and thread I am using are both rated as OK for hand washing; your Pi Zero is not!

Take your remaining female popper and solder a length of black cable to it; about two and a half times the height of your hidden pocket should do the job. You can feed the cable through one of the holes in the popper, as shown in the picture, to ensure you get a good connection. For the five remaining male poppers, solder the same length of yellow cable to each popper. The picture below shows two of my soldered poppers.

Now connect all of your poppers to the other parts on your garment and carefully bend all the cables so that they all run in the same direction, up towards the top of your hidden pocket. Trim all the cables to the same length and then mark the top and bottom of each yellow cable with a permanent marker, so that you know which cable is attached to which pair of LEDs. I am marking the bottom yellow cable as number 1 and the top as number 5. We can now cut a length of heat shrink and cover the loose lengths of cable, leaving about 4 cm free to strip, tin, and solder onto your Pi. You can now heat the heat shrink with a hair dryer to shrink it around your cables.

We are now going to stitch together a small piece of fabric to attach the poppers to. We want to end up with a piece of fabric which is large enough to sew all six poppers to, and to stitch the uncovered cables down to. This will be used to detach the Pi Zero from our garment when we need to wash it, or just to remove our Pi Zero for another project. To strengthen up the fabric, I am doubling it over and hemming a long and a short side of the fabric to make a pocket. This can then be turned inside out and the remaining short side stitched over.

You now need to position these poppers onto your piece of fabric so that they are aligned with the poppers you have sewn into your garment. Once you are happy with their placement, stitch them to the piece of fabric using standard thread, ensuring that they are really firmly attached. If you like, you can also put a few stitches over each cable to ensure they stay in place too. Using a fine permanent marker, number both ends of the 5 yellow cables 1 through 5 so that you can identify each cable. Now push all 6 cables through about 12 cm of shrink wrap and apply some heat from a hair dryer until it shrinks around your cables. You can then strip and tin the ends of the six cables, ready to solder them to your Pi Zero.

Insert the black cable in the ground hole below GPIO 11 on your Pi Zero, and then insert the 5 yellow cables, sequentially 1 through 5, into the GPIO holes 11, 9, 10, 7, and 8, as shown in this diagram, again from the rear of the Pi Zero. When you are happy that all the cables are in the correct place, solder them to your Pi Zero and clip off any extra length from the front of the Pi Zero with wire snips. You should now be able to connect your Pi Zero to your LEDs by pressing all 6 poppers together.
To ensure that the wearer's body does not cause a short circuit with the conductive thread on the inside of the garment, you may want to take another piece of fabric and stitch it over all of the conductive thread lines. I would recommend that you do this after you have tested all your LEDs with the program in the next section.

We have now carried out all the modifications needed to our garment, so let's move on to writing our Python program to control our LEDs.

Writing Our Python Program

To start with, we will write a simple, short piece of Python just to check that all ten of our LEDs are working, and that we know which GPIO pin controls which pair of LEDs. Power on your Pi Zero and connect to it via SSH.

Testing Our LEDs

To check that our LEDs are all correctly wired up and that we can control them using Python, we will write this short program to test them. First move into our project directory by typing:

    cd WearableTech

Now make a new directory for this article by typing:

    mkdir Chapter3

Now move into our new directory:

    cd Chapter3

Next we create our test program, by typing:

    nano testLED.py

Then we enter the following code into Nano:

    #!/usr/bin/python3

    from gpiozero import LED
    from time import sleep

    pair1 = LED(11)
    pair2 = LED(9)
    pair3 = LED(10)
    pair4 = LED(7)
    pair5 = LED(8)

    for i in range(4):
        pair1.on()
        sleep(2)
        pair1.off()
        pair2.on()
        sleep(2)
        pair2.off()
        pair3.on()
        sleep(2)
        pair3.off()
        pair4.on()
        sleep(2)
        pair4.off()
        pair5.on()
        sleep(2)
        pair5.off()

Press Ctrl + o followed by Enter to save the file, and then Ctrl + x to exit Nano. We can then run our file by typing:

    python3 ./testLED.py

All being well, we should see each pair of LEDs light up for two seconds in turn, and the whole loop should repeat 4 times.

Our final LED program

We will now write the Python program which will control the LEDs in our t-shirt. This will be the program which we configure to run automatically when we power up our Pi Zero. So let's begin...

Create a new file by typing:

    nano tShirtLED.py

Now type the following Python program into Nano:

    #!/usr/bin/python3

    from gpiozero import LEDBoard
    from time import sleep
    from random import randint

    leds = LEDBoard(11, 9, 10, 7, 8)

    while True:
        for i in range(5):
            wait = randint(5, 10) / 10
            leds.on()
            sleep(wait)
            leds.off()
            sleep(wait)
        for i in range(5):
            wait = randint(5, 10) / 10
            leds.value = (1, 0, 1, 0, 1)
            sleep(wait)
            leds.value = (0, 1, 0, 1, 0)
            sleep(wait)
        for i in range(5):
            wait = randint(1, 5) / 10
            leds.value = (1, 0, 0, 0, 0)
            sleep(wait)
            leds.value = (1, 1, 0, 0, 0)
            sleep(wait)
            leds.value = (1, 1, 1, 0, 0)
            sleep(wait)
            leds.value = (1, 1, 1, 1, 0)
            sleep(wait)
            leds.value = (1, 1, 1, 1, 1)
            sleep(wait)
            leds.value = (1, 1, 1, 1, 0)
            sleep(wait)
            leds.value = (1, 1, 1, 0, 0)
            sleep(wait)
            leds.value = (1, 1, 0, 0, 0)
            sleep(wait)
            leds.value = (1, 0, 0, 0, 0)
            sleep(wait)

Now save your file by pressing Ctrl + o followed by Enter, then exit Nano by pressing Ctrl + x. Test that your program is working correctly by typing:

    python3 ./tShirtLED.py

If there are any errors displayed, go back and check your program in Nano. Once your program has gone through the three different display patterns, press Ctrl + c to stop the program from running.

We have introduced a number of new things in this program. Firstly, we imported a new GPIO Zero library item called LEDBoard.
LEDBoard lets us define a list of GPIO pins which have LEDs attached to them, and perform actions on the whole list rather than having to operate each LED one at a time. It also lets us pass a value to the LEDBoard object, indicating whether to turn the individual members of the board on or off.

We also imported randint from the random library. randint returns a random integer, and we can pass it the start and stop values of the range from which the random integer should be taken.

We then define three different display patterns and set each of them inside a for loop which repeats 5 times.
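To make the value semantics easier to see, here is a minimal sketch, assuming the same wiring as tShirtLED.py. It is not part of the project code; it just shows that each position in the tuple maps to the GPIO pins in the order they were passed to the LEDBoard constructor:

    #!/usr/bin/python3

    # Illustrative sketch only: each tuple position maps to one pair of LEDs.
    from gpiozero import LEDBoard
    from time import sleep

    leds = LEDBoard(11, 9, 10, 7, 8)

    leds.value = (1, 0, 0, 0, 1)   # light only the first and last pairs
    sleep(1)
    leds.on()                      # every pair on at once
    sleep(1)
    leds.off()                     # every pair off again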
Making our program start automatically

We now need to make our tShirtLED.py program run automatically when we switch our Pi Zero on.

First we must make the Python program we just wrote executable, by typing:

    chmod +x ./tShirtLED.py

Now we will create our service definition file, by typing:

    sudo nano /lib/systemd/system/tShirtLED.service

Now type the definition into it:

    [Unit]
    Description=tShirt LED Service
    After=multi-user.target

    [Service]
    Type=idle
    ExecStart=/home/pi/WearableTech/Chapter3/tShirtLED.py

    [Install]
    WantedBy=multi-user.target

Save and exit Nano by typing Ctrl + o followed by Enter, and then Ctrl + x. Now change the file permissions, reload the systemd daemon, and activate our service by typing:

    sudo chmod 644 /lib/systemd/system/tShirtLED.service
    sudo systemctl daemon-reload
    sudo systemctl enable tShirtLED.service

Now we need to test whether this is working, so reboot your Pi by typing sudo reboot. When your Pi Zero restarts, you should see your LED pattern start to display automatically. Once you are happy that it is all working correctly, press and hold your power-off button for three seconds to shut your Pi Zero down.

You can now turn your garment the right way round and install the Pi Zero and battery pack into your hidden pocket. As soon as you plug your battery pack into your Pi Zero, your LEDs will start to display their patterns, and you can safely turn it all off using your power-off button.

Summary

In this article we looked at making use of stitchable electronics and how we could combine them with our Pi Zero. We made our first stitchable circuit, found a way to connect our Pi Zero to this circuit, and controlled the electronic devices using the GPIO Zero Python library.

Resources for Article:

Further resources on this subject:

Sending Notifications using Raspberry Pi Zero [article]
Raspberry Pi LED Blueprints [article]
Raspberry Pi and 1-Wire [article]

AMD’s $293 million JV with Chinese chipmaker Hygon starts production of x86 CPUs

Natasha Mathur
10 Jul 2018
3 min read
Chinese chipmaker Hygon has begun production of China-made x86 processors named "Dhyana". These processors use AMD's Zen microarchitecture and are the result of the licensing deal between AMD and its Chinese partners. Hygon has started shipping the new "Dhyana" x86 CPUs.

According to official statements from AMD, it does not permit the selling of final chip designs to its partners in China; instead, it encourages its partners to design their own processors to suit the needs of the Chinese server market. This is part of China's effort to break its dependency on the foreign technology market.

In 2016, AMD announced that it was working on a joint project in China to develop processors. This provided AMD with $293 million in cash from Tianjin Haiguang Advanced Technology Investment Co. (THATIC). THATIC includes AMD as well as the Chinese Academy of Sciences.

What's interesting is that AMD managed to establish a license allowing Chinese processor manufacturers to develop and sell x86 processors, despite the fact that Intel was blocked from selling Xeon processors to China in 2015 by the Obama administration. This happened over concerns that the chips would help China's nuclear weapons programs.

Dhyana processors are currently focused on embedded applications. The part is a system on chip (SoC) rather than a socketed chip, but this design doesn't prevent the Dhyana processors from being used in high-performance or data center applications, which usually leverage Intel Xeon and other server processors. Linux kernel developers have stated that the x86 processors are very close in design to AMD's EPYC. In fact, moving the Linux kernel code for EPYC processors to the Hygon chips required fewer than 200 new lines of code, according to a report from Michael Larabel of Phoronix. The only differences between the two are the vendor IDs and family series.

Apart from AMD, there are other chip-producing ventures that China is heavily invested in. One such venture is Zhaoxin Semiconductor, which is working to manufacture x86 chips through a partnership with VIA. China is making continuous efforts to free the country from US interventions and to reshape its long-term processor market. Reports suggest that most of the x86 processors are 14nm chips, but there isn't much information available on the capabilities of the Dhyana family, and other details of their manufacturing characteristics are still unknown.
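To make the "vendor ID" point concrete, the snippet below is a small illustrative sketch (not taken from the kernel patches themselves) showing how software on Linux can tell the two parts apart: Hygon chips report the vendor string "HygonGenuine", where AMD parts report "AuthenticAMD".

    # Illustrative sketch only: read the CPU vendor ID on a Linux system.
    def cpu_vendor():
        with open('/proc/cpuinfo') as cpuinfo:
            for line in cpuinfo:
                if line.startswith('vendor_id'):
                    return line.split(':', 1)[1].strip()
        return None

    vendor = cpu_vendor()
    if vendor == 'HygonGenuine':
        print('Hygon Dhyana detected')
    elif vendor == 'AuthenticAMD':
        print('AMD processor detected')
    else:
        print('Other vendor:', vendor)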
Baidu releases Kunlun AI chip, China's first cloud-to-edge AI chip
Qualcomm announces a new chipset for standalone AR/VR headsets at Augmented World Expo

Raspberry Pi 4 is up for sale at $35, with 64-bit ARM core, up to 4GB memory, full-throughput gigabit Ethernet and more!

Vincy Davis
24 Jun 2019
5 min read
Today, the Raspberry Pi 4 model is up for sale, starting at $35. It has a 1.5GHz quad-core 64-bit ARM Cortex-A72 CPU, three memory options of up to 4GB, full-throughput gigabit Ethernet, dual-band 802.11ac wireless networking, two USB 3.0 and two USB 2.0 ports, complete compatibility with earlier Raspberry Pi products, and more. Eben Upton, Chief Executive at Raspberry Pi Trading, has said that "This is a comprehensive upgrade, touching almost every element of the platform." This is the first Raspberry Pi product available offline since the opening of their store in Cambridge, UK.

https://youtu.be/sajBySPeYH0

What's new in Raspberry Pi 4?

New Raspberry Pi silicon

Previous Raspberry Pi models are based on 40nm silicon. However, the new Raspberry Pi 4 is a complete re-implementation of BCM283X on 28nm. The power saving delivered by the smaller process geometry has enabled the use of the Cortex-A72 core, a 1.5GHz quad-core 64-bit ARM part. The Cortex-A72 core can execute more instructions per clock, yielding up to four times the performance of the Raspberry Pi 3B+, depending on the benchmark.

New Raspbian software

The new Raspbian software provides numerous technical improvements, along with an extensively modernized user interface and updated applications, including the Chromium 74 web browser. For Raspberry Pi 4, the Raspberry Pi team has retired the legacy graphics driver stack used on previous models and opted for the Mesa "V3D" driver. It offers benefits like OpenGL-accelerated web browsing and desktop composition, and also eliminates roughly half of the lines of closed-source code in the platform.

Raspberry Pi 4 memory options

For the first time, Raspberry Pi 4 offers a choice of memory capacities: 1GB, 2GB, and 4GB variants. All three variants of the new Raspberry Pi model have been launched. The entry-level Raspberry Pi 4 Model B is priced at $35, excluding sales tax, import duty, and shipping.

Additional improvements in Raspberry Pi 4

Power

Raspberry Pi 4 has USB-C as the power connector, which will support an extra 500mA of current, ensuring 1.2A for downstream USB devices, even under heavy CPU load.

Video

The previous type-A HDMI connector has been replaced with a pair of type-D HDMI connectors, so as to accommodate dual display output within the existing board footprint.

Ethernet and USB

The Gigabit Ethernet magjack has been moved to the top right of the board, simplifying the PCB routing. The 4-pin Power-over-Ethernet (PoE) connector is in the same location, so Raspberry Pi 4 remains compatible with the PoE HAT. The Ethernet controller on the main SoC is connected to an external Broadcom PHY, thus providing full throughput. USB is provided via an external VLI controller, connected over a single PCI Express Gen 2 lane, and providing a total of 4Gbps of bandwidth, shared between the four ports.

The Raspberry Pi 4 model uses LPDDR4 memory technology, with triple the bandwidth. It has also upgraded the video decode, 3D graphics, and display output to support 4Kp60 throughput. Onboard Gigabit Ethernet and PCI Express controllers have been added to address the non-multimedia I/O limitations of the previous devices.

Image Source: Raspberry Pi blog

New Raspberry Pi 4 accessories

Due to the connector and form-factor changes, Raspberry Pi 4 requires new accessories. The Raspberry Pi 4 has its own case, priced at $5. The team has also developed a suitable 5V/3A power supply, which is priced at $8 and is available in UK, European, North American, and Australian plug formats.
The Raspberry Pi 4 Desktop Kit is also available, priced at $120. While the earlier Raspberry Pi models will remain available in the market, Upton has mentioned that Raspberry Pi will continue to build these models as long as there's demand for them.

Users are ecstatic about the availability of Raspberry Pi 4, and many have already placed orders for it.

https://twitter.com/Morphy99/status/1143103131821252609

https://twitter.com/M0VGA/status/1143064771446677509

A user on Reddit comments, "Very nice. Gigabit LAN and 4GB memory is opening it up to a hell of a lot more use cases. I've been tempted by some of the Pi's higher-specced competitors like the Pine64, but didn't want to lose out on the huge community behind the Pi. This seems like the best of both worlds to me."

A user on Hacker News says, "Oh my! This is such a crazy upgrade. I've been using the RPI2 as my HTPC/NAS at my folks, and I'm so happy with it. I was itching to get the last one for myself. USB 3.0! Gigabit Ethernet! WiFi 802.11ac, BT 5.0, 4GB RAM! 4K! $55 at most?! What the!? How the??! I know I'm not maintaining decorum at Hacker News, but I am SO mighty, MIGHTY excited! I'm setting up a VPN to hook this (when I get it) to my VPS and then do a LOT of fun stuff back and forth, remotely, and with the other RPI at my folks."

Another comment reads, "This is absolutely great. The RPi was already exceptional for its price point, and this version seems to address the few problems it had (lack of Gigabit, USB speed and RAM capacity) and add onto it even more features. It almost seems too good to be true. Can't wait!"

Another user says, "I'm most excited about the modern A72 cores, upgraded hardware decode, and up to 4 GB RAM. They really listened and delivered what most people wanted in a next gen RPi."

For more details, head over to the Raspberry Pi official blog.

You can now install Windows 10 on a Raspberry Pi 3
Setting up a Raspberry Pi for a robot – Headless by Default [Tutorial]
Introducing Strato Pi: An industrial Raspberry Pi

How to build a sensor application to measure Ambient Light

Gebin George
01 May 2018
16 min read
In today's tutorial, we will look at how to build a sensor application to measure the ambient light.

Preparing our Sensor project

We will create a new Universal Windows Platform application project. This time, we call it Sensor. We can use the Raspberry Pi 3, even though we will only use the light sensor and motion detector (PIR sensor) in this project. We will also add the latest version of a new NuGet package, the Waher.Persistence.FilesLW package. This package will help us with data persistence. It takes our objects and stores them in a local object database. We can later load the objects back into memory and search for them. This is all done by analyzing the metadata available in the class definitions, so there's no need to do any database programming. Go ahead and install the package in your new project.

The Waher.Persistence.Files package contains similar functionality, but it performs data encryption and dynamic code compilation of object serializers as well. These features require .NET Standard v1.5, which is not compatible with the Universal Windows Platform in its current state. That is why we use the Light Weight version of the same library, which only requires .NET Standard 1.3. The Universal Windows Platform supports .NET Standard up to 1.4. For more information, visit https://docs.microsoft.com/en-us/dotnet/articles/standard/library#net-platforms-support.

Initializing the inventory library

The next step is to initialize the libraries we have just included in the project. The persistence library includes an inventory library (Waher.Runtime.Inventory) that helps with dynamic type-related tasks, as well as keeping track of available types, interfaces, and which ones have implemented which interfaces in the runtime environment. This functionality is used by the object database defined in the persistence libraries. The object database figures out how to store, load, search for, and create objects, using only their class definitions appended with a minimum of metadata in the form of attributes. So, one of the first things we need to do during startup is to tell the inventory environment which assemblies it and, by extension, the persistence library can use. We do this as follows:

    Log.Informational("Starting application.");

    Types.Initialize(
        typeof(FilesProvider).GetTypeInfo().Assembly,
        typeof(App).GetTypeInfo().Assembly);

Here, Types is a static class defined in the Waher.Runtime.Inventory namespace. We initialize it by providing an array of assemblies it can use. In our case, we include the assembly of the persistence library, as well as the assembly of our own application.

Initializing the persistence library

We then go on to initialize our persistence library. It is accessed through the static Database class, defined in the Waher.Persistence namespace. Initialization is performed by registering one object database provider. This database provider will then be used for all object database transactions. In our case, we register our local files object database provider, FilesProvider, defined in the Waher.Persistence.Files namespace:

    Database.Register(new FilesProvider(
        Windows.Storage.ApplicationData.Current.LocalFolder.Path +
        Path.DirectorySeparatorChar + "Data",
        "Default", 8192, 1000, 8192, Encoding.UTF8, 10000));

The first parameter defines a folder where database files will be stored. In our case, we store database files in the Data subfolder of the application local data folder. Objects are divided into collections.
Collections are stored in separate files and indexed differently, for performance reasons. Collections are defined using attributes in the class definition. Classes lacking a collection definition are assigned the default collection, which is specified in the second argument.

Objects are then stored in B-tree ordered files. Such files are divided into blocks into which objects are crammed. For performance reasons, the block size, defined in the third argument, should be correlated to the sector size of the underlying storage medium, which is typically a power of two. This minimizes the number of reads and writes necessary. In our example, we've chosen 8,192 bytes as a suitable block size.

The fourth argument defines the number of blocks the provider can cache in memory. Caching improves performance, but requires more internal memory. In our case, we're satisfied with a relatively modest cache of 1,000 blocks (about 8 MB).

Binary Large Objects (BLOBs), that is, objects that cannot efficiently be stored in a block, are stored separately in BLOB files. These are binary files consisting of doubly linked blocks. The fifth parameter controls the block size of BLOB files. The sixth parameter controls the character encoding to use when serializing strings. The seventh, and last, parameter is the maximum time the provider will wait, in milliseconds, to get access to the underlying database when an operation is to be performed.

Sampling raw sensor data

After the database provider has been successfully registered, the persistence layer is ready to be used. We now continue with the first step in acquiring the sensor data: sampling. Sampling is normally done using a short regular time interval. Since we use the Arduino, we get values as they change. While such values can be an excellent source for event-based algorithms, they are difficult to use in certain kinds of statistical calculations and error-correction algorithms. To set up the regular sampling of values, we begin by creating a Timer object from the System.Threading namespace, after the successful initialization of the Arduino:

    this.sampleTimer = new Timer(this.SampleValues,
        null, 1000 - DateTime.Now.Millisecond, 1000);

This timer will call the SampleValues method every thousand milliseconds, starting the next second. The second parameter allows us to send a state object to the timer callback method. We will not use this, so we let it be null. We then sample the values, as follows:

    private async void SampleValues(object State)
    {
        try
        {
            ushort A0 = this.arduino.analogRead("A0");
            PinState D8 = this.arduino.digitalRead(8);
            ...
        }
        catch (Exception ex)
        {
            Log.Critical(ex);
        }
    }

We define the method as asynchronous at this point, even though we still haven't used any asynchronous calls. We will do so later in this chapter. Since the method does not return a Task object, exceptions are not propagated to the caller. This means that they must be caught inside the method to avoid unhandled exceptions closing the application.

Performing basic error correction

Values we sample may include different types of errors, some of which we can eliminate in the code to various degrees. There are systematic errors and random errors. Systematic errors are most often caused by the way we've constructed our device, how we sample, how the circuit is designed, how the sensors are situated, how they interact with the physical medium and our underlying mathematical model, or how we convert the sampled value into a physical quantity.
Reducing systematic errors requires a deeper analysis that goes beyond the scope of this book. Random errors are errors that are induced stochastically and are often unbiased. They can be induced due to a lack of resolution or precision, by background noise, or through random events in the physical world. While background noise and the lack of resolution or precision in our electronics create noise in the measured input, random events in the physical world may create spikes. If something briefly flutters past our light sensor, it might register a short downwards spike, even though the ambient light did not change. You'll learn how to correct for both types of random errors.

Canceling noise

Since the digital PIR sensor already has error correction built into it, we will only focus on how to cancel noise from our analog light sensor. Noise can be canceled electronically, using, for instance, low-pass filters. It can also be cancelled algorithmically, using a simple averaging calculation over a short window of values.

The averaging calculation will increase our resolution, at the cost of a small delay in the output. If we perform the average over 10 values, we effectively gain one power of 10, or one decimal, of resolution in our output value. The value will be delayed 10 seconds, however. This algorithm is therefore only suitable for input signals that vary slowly, or where a quick reaction to changes in the input stimuli is not required. Statistically, the expected average value is the same as the expected value, if the input is a steady signal overlaid with random noise.

The implementation is simple. We need the following variables to set up our averaging algorithm:

    private const int windowSize = 10;
    private int?[] windowA0 = new int?[windowSize];
    private int nrA0 = 0;
    private int sumA0 = 0;

We use nullable integers (int?), to be able to remove bad values later. In the beginning, all values are null. After sampling the value, we first shift the window one step, and add our newly sampled value at the end. We also update our counters and sums. This allows us to quickly calculate the average value of the entire window, without having to loop through it each time:

    if (this.windowA0[0].HasValue)
    {
        this.sumA0 -= this.windowA0[0].Value;
        this.nrA0--;
    }

    Array.Copy(this.windowA0, 1, this.windowA0, 0, windowSize - 1);
    this.windowA0[windowSize - 1] = A0;
    this.sumA0 += A0;
    this.nrA0++;

    double AvgA0 = ((double)this.sumA0) / this.nrA0;
    int? v;

Removing random spikes

We now have a value that is 10 times more accurate than the original, in cases where our ambient light is not expected to vary quickly. This is typically the case if ambient light depends on the sun and weather. Calculating the average over a short window has an added advantage: it allows us to remove bad measurements, or spikes. When a physical quantity changes, it normally changes continuously, slowly, and smoothly. This will have the effect that roughly half of the measurements, even when the input value changes, will be on one side of the average value, and the other half on the other side. A single spike, on the other hand, especially in the middle of the window, if sufficiently large, will stand out alone on one side, while the other values remain on the other. We can use this fact to remove bad measurements from our window.
We define our middle position first:

    private const int spikePos = windowSize / 2;

We proceed by calculating the number of values on each side of the average, if our window is sufficiently full:

    if (this.nrA0 >= windowSize - 2)
    {
        int NrLt = 0;
        int NrGt = 0;

        foreach (int? Value in this.windowA0)
        {
            if (Value.HasValue)
            {
                if (Value.Value < AvgA0)
                    NrLt++;
                else if (Value.Value > AvgA0)
                    NrGt++;
            }
        }

If we only have one value on one side, and this value happens to be in the middle of the window, we identify it as a spike and remove it from the window. We also make sure to adjust our average value accordingly:

        if (NrLt == 1 || NrGt == 1)
        {
            v = this.windowA0[spikePos];

            if (v.HasValue)
            {
                if ((NrLt == 1 && v.Value < AvgA0) ||
                    (NrGt == 1 && v.Value > AvgA0))
                {
                    this.sumA0 -= v.Value;
                    this.nrA0--;
                    this.windowA0[spikePos] = null;

                    AvgA0 = ((double)this.sumA0) / this.nrA0;
                }
            }
        }
    }

Since we remove the spike when it reaches the middle of the window, it might pollute the average of the entire window up to that point. We therefore need to recalculate an average value for the half of the window where any spikes have been removed. This part of the window is smaller, so the resolution gain is not as big. Instead, the average value will not be polluted by single spikes. But we will still have increased the resolution by a factor of five:

    int i, n;

    for (AvgA0 = i = n = 0; i < spikePos; i++)
    {
        if ((v = this.windowA0[i]).HasValue)
        {
            n++;
            AvgA0 += v.Value;
        }
    }

    if (n > 0)
    {
        AvgA0 /= n;

(The if block opened here is closed at the end of the conversion snippet in the next section.)
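Since the windowed averaging and spike removal are the heart of this error correction, here is a short, language-agnostic sketch of the same logic in Python. It is purely illustrative, the project itself is in C#, but it can be handy for experimenting with the algorithm on recorded samples:

    # Illustrative Python sketch of the windowed average with spike removal.
    WINDOW_SIZE = 10
    SPIKE_POS = WINDOW_SIZE // 2

    window = [None] * WINDOW_SIZE   # entries are None until filled

    def add_sample(sample):
        """Shift the window, append the sample, and return a spike-corrected
        average over the oldest half of the window."""
        window.pop(0)
        window.append(sample)

        values = [v for v in window if v is not None]
        avg = sum(values) / len(values)

        # Count samples on each side of the average.
        n_lt = sum(1 for v in values if v < avg)
        n_gt = sum(1 for v in values if v > avg)

        # A value alone on one side, sitting in the middle slot, is a spike.
        mid = window[SPIKE_POS]
        if mid is not None and ((n_lt == 1 and mid < avg) or
                                (n_gt == 1 and mid > avg)):
            window[SPIKE_POS] = None

        # Average only the half of the window already checked for spikes.
        head = [v for v in window[:SPIKE_POS] if v is not None]
        return sum(head) / len(head) if head else avg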
Converting to a physical quantity

It is not sufficient for a sensor to have a numerical raw value of the measured quantity. It only tells us something if we know something more about the raw value. We must, therefore, convert it to a known physical unit. We must also provide an estimate of the precision (or error) the value has. A sensor measuring a physical quantity should report a numerical value, its physical unit, and the corresponding precision, or error, of the estimate.

To avoid creating a complex mathematical model that converts our measured light intensity into a known physical unit, which would go beyond the scope of this book, we convert it to a percentage value. Since we've gained a factor of five of precision using our averaging calculation, we can report two decimals of precision, even though the input value only has 1,024 levels (10 bits) and only contains one decimal of precision:

        double Light = (100.0 * AvgA0) / 1024;
        MainPage.Instance.LightUpdated(Light, 2, "%");
    }

Illustrating measurement results

The following image shows how our measured quantity behaves. The light sensor is placed in broad daylight on a sunny day, so it's saturated. Things move in front of the sensor, creating short dips. The thin blue line is a scaled version of our raw input A0. Since this value is event based, it is being reported more often than once a second. Our red curve is our measured, and corrected, ambient light value, in percent. The dots correspond to our second values. Notice that the first two spikes are removed and don't affect the measurement, which remains close to 100%. Only the larger dips affect the measurement. Also, notice the small delay inherent in our algorithm. It is most noticeable if there are abrupt changes.

If we, on the other hand, have a very noisy input, our averaging algorithm helps our measured value to stay more stable. Perhaps the physical quantity goes below some sensor threshold, and input values become uncertain. In the following image, we see how the floating average varies less than the noisy input.

Calculating basic statistics

A sensor normally reports more than the measured momentary value. It also calculates basic statistics on the measured input, such as peak values. It also makes sure to store measured values regularly, to allow its users to view historical measurements. We begin by defining variables to keep track of our peak values:

    private int? lastMinute = null;
    private double? minLight = null;
    private double? maxLight = null;
    private DateTime minLightAt = DateTime.MinValue;
    private DateTime maxLightAt = DateTime.MinValue;

We then make sure to update these after having calculated a new measurement:

    DateTime Timestamp = DateTime.Now;

    if (!this.minLight.HasValue || Light < this.minLight.Value)
    {
        this.minLight = Light;
        this.minLightAt = Timestamp;
    }

    if (!this.maxLight.HasValue || Light > this.maxLight.Value)
    {
        this.maxLight = Light;
        this.maxLightAt = Timestamp;
    }

Defining data persistence

The last step is to store our values regularly. Later, when we present different communication protocols, we will show how to make these values available to users. Since we will use an object database to store our data, we need to create a class that defines what to store. We start with the class definition:

    [TypeName(TypeNameSerialization.None)]
    [CollectionName("MinuteValues")]
    [Index("Timestamp")]
    public class LastMinute
    {
        [ObjectId]
        public string ObjectId = null;
    }

The class is decorated with a couple of attributes from the Waher.Persistence.Attributes namespace. The CollectionName attribute defines the collection in which objects of this class will be stored. The TypeName attribute defines whether we want the type name to be stored with the data. This is useful if you mix different types of classes in the same collection. We plan not to, so we choose not to store type names. This saves some space. The Index attribute defines an index. This makes it possible to do quick searches. Later, we will want to search historical records based on their timestamps, so we add an index on the Timestamp field. We also define an Object ID field. This is a special field that is like a primary key in object databases. We need it to be able to delete objects later.

You can add any number of indices, and any number of fields in each index. Placing a hyphen (-) before a field name makes the engine use descending sort order for that field.

Next, we define some member fields. If you want, you can use properties as well, if you provide both getters and setters for the properties you wish to persist. By providing default values, and decorating the fields (or properties) with the corresponding default value attributes, you can optimize storage somewhat. Only members with values different from the declared default values will then be persisted, to save space:

    [DefaultValueDateTimeMinValue]
    public DateTime Timestamp = DateTime.MinValue;

    [DefaultValue(0)]
    public double Light = 0;

    [DefaultValue(PinState.LOW)]
    public PinState Motion = PinState.LOW;

    [DefaultValueNull]
    public double? MinLight = null;

    [DefaultValueDateTimeMinValue]
    public DateTime MinLightAt = DateTime.MinValue;

    [DefaultValueNull]
    public double? MaxLight = null;

    [DefaultValueDateTimeMinValue]
    public DateTime MaxLightAt = DateTime.MinValue;

Storing measured data

We are now ready to store our measured data. We use the lastMinute field defined earlier to know when we pass into a new minute.
We use that opportunity to store the most recent value, together with the basic statistics we've calculated:

    if (!this.lastMinute.HasValue)
        this.lastMinute = Timestamp.Minute;
    else if (this.lastMinute.Value != Timestamp.Minute)
    {
        this.lastMinute = Timestamp.Minute;

We begin by creating an instance of the LastMinute class defined earlier:

        LastMinute Rec = new LastMinute()
        {
            Timestamp = Timestamp,
            Light = Light,
            Motion = D8,
            MinLight = this.minLight,
            MinLightAt = this.minLightAt,
            MaxLight = this.maxLight,
            MaxLightAt = this.maxLightAt
        };

Storing this object is very easy. The call is asynchronous and can be executed in parallel, if desired. We choose to wait for it to complete, since we will be making database requests after the operation has completed:

        await Database.Insert(Rec);

We then clear our variables used for calculating peak values, to make sure peak values are calculated within the next period:

        this.minLight = null;
        this.minLightAt = DateTime.MinValue;
        this.maxLight = null;
        this.maxLightAt = DateTime.MinValue;
    }

Removing old data

We cannot continue storing new values without also having a plan for removing old ones. Doing so is easy. We choose to delete all records older than 100 minutes. This is done by first performing a search, and then deleting objects that are found in this search. The search is defined by using filters from the Waher.Persistence.Filters namespace:

    foreach (LastMinute Rec2 in await Database.Find<LastMinute>(
        new FilterFieldLesserThan("Timestamp", Timestamp.AddMinutes(-100))))
    {
        await Database.Delete(Rec2);
    }

You can now execute the application and monitor how the MinuteValues collection is being filled.

We created a simple sensor app for the Raspberry Pi using C#. You read an excerpt from the book Mastering Internet of Things, written by Peter Waher. This book will help you augment your IoT skills with the help of engaging and enlightening tutorials designed for Raspberry Pi 3.

25 Datasets for Deep Learning in IoT
How IoT is going to change tech teams
How to run and configure an IoT Gateway

Meet the Coolest Raspberry Pi Family Member: Raspberry Pi Zero W Wireless

Packt Editorial Staff
16 Apr 2018
12 min read
In early 2017, the Raspberry Pi community announced a new board with a wireless extension. It is a highly promising board that allows everyone to connect their devices to the Internet, offering wireless functionality so that anyone can use their skills to develop their own software and hardware projects without extra cables and components. This board is the new toy of any engineer interested in the Internet of Things, security, automation, and more!

Comparing the new board with the Raspberry Pi 3 Model B, we can easily see that it is quite small, with many possibilities for the Internet of Things. But what is a Raspberry Pi Zero W, and why do you need it? In today's post, we will cover the following topics:

    Overview of the Raspberry Pi family
    Introduction to the new Raspberry Pi Zero W
    Distributions
    Common issues

Raspberry Pi family

As said earlier, the Raspberry Pi Zero W is the newest member of the Raspberry Pi family of boards. Over the years, the Raspberry Pi has evolved and become more user friendly, with endless possibilities. Let's have a short look at the rest of the family, so we can understand how the Pi Zero board differs.

Right now, the heavyweight board is the Raspberry Pi 3 Model B. It is the best solution for projects such as face recognition, video tracking, gaming, or anything else that is in demand. It is the 3rd generation of Raspberry Pi boards, after the Raspberry Pi 2, and has the following specs:

    A 1.2GHz 64-bit quad-core ARMv8 CPU
    802.11n Wireless LAN
    Bluetooth 4.1
    Bluetooth Low Energy (BLE)
    1GB RAM (like the Pi 2)
    4 USB ports
    40 GPIO pins
    Full HDMI port
    Ethernet port
    Combined 3.5mm audio jack and composite video
    Camera interface (CSI)
    Display interface (DSI)
    Micro SD card slot (now push-pull rather than push-push)
    VideoCore IV 3D graphics core

The next board is the Raspberry Pi Zero, on which the Zero W was based: a small, low-cost, low-power board able to do many things. The specs of this board are as follows:

    1GHz, single-core CPU
    512MB RAM
    Mini-HDMI port
    Micro-USB OTG port
    Micro-USB power
    HAT-compatible 40-pin header
    Composite video and reset headers
    CSI camera connector (v1.3 only)

At this point, we should not forget to mention that, apart from the boards mentioned earlier, there are several other modules and components, such as the Sense HAT or the Raspberry Pi Touch Display, which work great for advanced projects. The 7″ Touchscreen Monitor for Raspberry Pi gives users the ability to create all-in-one, integrated projects such as tablets, infotainment systems, and embedded projects. The Sense HAT is an add-on board for the Raspberry Pi, made especially for the Astro Pi mission. The Sense HAT has an 8×8 RGB LED matrix and a five-button joystick, and includes the following sensors:

    Gyroscope
    Accelerometer
    Magnetometer
    Temperature
    Barometric pressure
    Humidity

Stay tuned for more new boards and modules at the official website: https://www.raspberrypi.org/

Raspberry Pi Zero W

The Raspberry Pi Zero W is a small device that can be connected to an external monitor or TV and, of course, to the Internet. The operating system varies, as there are many distros on the official page, and almost every one is based on Linux. With the Raspberry Pi Zero W you have the ability to do almost everything, from automation to gaming! It is a small computer that allows you to easily write programs with the help of the GPIO pins and some other components, such as a camera. Its possibilities are endless!
Specifications

If you have bought a Raspberry Pi 3 Model B, you will be familiar with the Cypress CYW43438 wireless chip. It provides 802.11n wireless LAN and Bluetooth 4.0 connectivity. The new Raspberry Pi Zero W is equipped with that wireless chip as well. The specifications of the new board are as follows:

    Dimensions: 65mm × 30mm × 5mm
    SoC: Broadcom BCM2835 chip
    CPU: ARM11 at 1GHz, single core
    RAM: 512MB
    Storage: MicroSD card
    Video and Audio: 1080P HD video and stereo audio via mini-HDMI connector
    Power: 5V, supplied via micro USB connector
    Wireless: 2.4GHz 802.11n wireless LAN
    Bluetooth: Bluetooth Classic 4.1 and Bluetooth Low Energy (BLE)
    Output: Micro USB
    GPIO: 40-pin GPIO, unpopulated

Notice that all the components are on the top side of the board, so you can easily choose your case without any problems and keep the board safe. As far as the antenna is concerned, it is formed by etching away copper on each layer of the PCB. It may not be as visible as it is on other similar boards, but it works great and offers plenty of functionality.

Also, the product is limited to only one piece per buyer and costs $10. You can buy a full kit with a microSD card, a case, and some extra components for about $45, or choose the camera full kit, which contains a small camera component, for $55.

Camera support

Image processing projects such as video tracking or face recognition require a camera. The Raspberry Pi Zero W officially supports the camera module, which can easily be mounted at the side of the board using a cable, just like on the Raspberry Pi 3 Model B board. Depending on your distribution, you may need to enable the camera through the command line. More information about the usage of this module will be given in the project.

Accessories

When building projects with the new board, there are some other gadgets that you might find useful to work with. Following is a list of some crucial components. Notice that if you buy a Raspberry Pi Zero W kit, it includes some of them, so be careful and don't buy them twice:

    OTG cable
    powerHUB
    GPIO header
    microSD card and card adapter
    HDMI to miniHDMI cable
    HDMI to VGA cable

Distributions

The official site https://www.raspberrypi.org/downloads/ contains several distributions for downloading. The two basic operating systems that we will analyze below are RASPBIAN and NOOBS. Following, you can see how the desktop environment looks.

Both RASPBIAN and NOOBS allow you to choose from two versions: the full version of the operating system and the lite one. Obviously, the lite version does not contain everything that you might use, so if you intend to use your Raspberry Pi with a desktop environment, choose and download the full version. On the other hand, if you intend to just SSH in and do some basic stuff, pick the lite one. It's really up to you, and of course you can easily download anything you like again later and re-write your microSD card.

NOOBS distribution

Download NOOBS: https://www.raspberrypi.org/downloads/noobs/

NOOBS is aimed at new users without much knowledge of Linux systems and Raspberry Pi boards. As the official page says, it is really "New Out Of the Box Software". There are also pre-installed NOOBS SD cards that you can purchase from many retailers, such as Pimoroni, Adafruit, and The Pi Hut; of course, you can also download NOOBS and write your own microSD card. If you are having trouble with this distribution, take a look at the following links:

Full guide at https://www.raspberrypi.org/learning/software-guide/.
View the video at https://www.raspberrypi.org/help/videos/#noobs-setup.

The NOOBS operating system contains Raspbian, and it also provides a variety of other operating systems available to download.

RASPBIAN distribution

Download RASPBIAN: https://www.raspberrypi.org/downloads/raspbian/

Raspbian is the officially supported operating system. It can be installed through NOOBS, or by downloading the image file at the following link and going through the guide on the official website.

Image file: https://www.raspberrypi.org/documentation/installation/installing-images/README.md.

It comes with plenty of software pre-installed, such as Python, Scratch, Sonic Pi, Java, Mathematica, and more! Furthermore, distributions like Ubuntu MATE, Windows 10 IoT Core, or Weather Station are meant to be installed for more specific projects, like Internet of Things (IoT) or weather stations. To conclude, the right distribution to install really depends on your project and your expertise in Linux systems administration.

The Raspberry Pi Zero W needs a microSD card to host any operating system. You are able to write Raspbian, NOOBS, Ubuntu MATE, or any other operating system you like. So, all that you need to do is simply write your operating system to that microSD card. First of all, you have to download the image file from https://www.raspberrypi.org/downloads/, which usually comes as a .zip file. Once downloaded, unzip the zip file; the full image is about 4.5 gigabytes. Depending on your operating system, you have to use different programs:

    7-Zip for Windows
    The Unarchiver for Mac
    Unzip for Linux

Now we are ready to write the image to the microSD card. You can easily write the .img file to the microSD card by following one of the next guides, according to your system.

For Linux users, the dd tool is recommended. Before connecting your microSD card to your computer with your adaptor, run the following command:

    df -h

Now connect your card and run the same command again. You should see some new records. For example, if the new device is called /dev/sdd1, keep in mind that the card is at /dev/sdd (without the 1). The next step is to use the dd command and copy the image to the microSD card. We can do this with the following command:

    dd if=<path to your image> of=</dev/***>

Here, if is the input file (the image file of the distribution) and of is the output file (the microSD card). Again, be careful here and use only /dev/sdd, or whatever is yours, without any numbers. If you are having trouble with this, please use the full manual at the following link: https://www.raspberrypi.org/documentation/installation/installing-images/linux.md.
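As a concrete example, an invocation might look like the following. The image filename here is only hypothetical, and /dev/sdd is only an example device; substitute your own image file and double-check the device name first, because dd will overwrite whatever it is pointed at:

    sudo dd if=2017-07-05-raspbian-jessie.img of=/dev/sdd bs=4M
    sync

The bs=4M option simply copies in larger blocks to speed the write up, and sync makes sure all data is flushed to the card before you remove it.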
A good tool that can help you out with this job is GParted. If it is not installed on your system, you can easily install it with the following command:

    sudo apt-get install gparted

Then run sudo gparted to start the tool. It handles partitions very easily, and you can format, delete, or find information about all your mounted partitions. More information about dd can be found here: https://www.raspberrypi.org/documentation/installation/installing-images/linux.md

For Mac OS users, the dd tool is again recommended: https://www.raspberrypi.org/documentation/installation/installing-images/mac.md

For Windows users, the Win32DiskImager utility is recommended: https://www.raspberrypi.org/documentation/installation/installing-images/windows.md

There are several other ways to write an image file to a microSD card, so if you run into any kind of problem when following the guides above, feel free to use any other guide available on the Internet.

Now, assuming that everything is OK and the image is ready, you can gently plug the microSD card into your Raspberry Pi Zero W board. Remember that you can always confirm that your download was successful with the SHA-1 code. On Linux systems you can use sha1sum followed by the file name (the image) to print the SHA-1 code, which should, and must, be the same as the one at the end of the official page where you downloaded the image.

Common issues

Sometimes, working with Raspberry Pi boards can lead to issues. We have all faced some of them and hope to never face them again. The Pi Zero is so minimal that it can be tough to tell if it is working or not. Since there is no LED on the board, a quick check of whether it is working properly, or whether something went wrong, is handy.

Debugging steps

With the following steps, you will probably find its status:

    Take your board, with nothing in any slot or socket. Remove even the microSD card!
    Take a normal micro-USB to USB-A data/sync cable and connect one side to your computer and the other side to the Pi's USB port (not PWR_IN).

If the Zero is alive:

    On Windows, the PC will go "ding" for the presence of new hardware, and you should see BCM2708 Boot in Device Manager.
    On Linux, you will see an ID 0a5c:2763 Broadcom Corp message from dmesg.

Try running dmesg in a terminal before you plug in the USB cable, and again after. You will find a new record there. Output example:

    [226314.048026] usb 4-2: new full-speed USB device number 82 using uhci_hcd
    [226314.213273] usb 4-2: New USB device found, idVendor=0a5c, idProduct=2763
    [226314.213280] usb 4-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
    [226314.213284] usb 4-2: Product: BCM2708 Boot
    [226314.213] usb 4-2: Manufacturer: Broadcom

If you see any of the preceding, so far so good: you know the Zero's not dead.

microSD card issue

Remember that if you boot your Raspberry Pi and nothing works, you may have burned your microSD card wrongly. This means that your card may not contain the boot partition it should, and so it is not able to load the first boot files. That problem occurs when the distribution is burned to /dev/sdd1 and not to /dev/sdd, as it should be. This is a quite common mistake, and there will be no errors on your monitor. It will just not work!

Case protection

Raspberry Pi boards are electronics, and we never place electronics on metallic surfaces or near magnetic objects. These will affect the booting operation of the Raspberry Pi, and it will probably not work. So, a tip of advice: spend some extra money on the official Raspberry Pi case and protect your board from anything like that. There can also be many problems and issues when hanging your Raspberry Pi using tacks.

To summarize, we introduced the new Raspberry Pi Zero board along with the rest of its family, and gave a brief analysis of some extra components that are must-buys as well.

This article is an excerpt from the book Raspberry Pi Zero W Wireless Projects written by Vasilis Tzivaras. The Raspberry Pi has always been the go-to, lightweight ARM-based computer. This book will help you design and build interesting DIY projects using the Raspberry Pi Zero W board.

Introduction to Raspberry Pi Zero W Wireless
Build your first Raspberry Pi project

Amazon re:MARS Day 1 kicks off showcasing Amazon’s next-gen AI robots; Spot, the robo-dog and a guest appearance from ‘Iron Man’

Savia Lobo
06 Jun 2019
11 min read
Amazon's inaugural re:MARS event kicked off on Tuesday, June 4 at the Aria in Las Vegas. This four-day event is inspired by MARS, a yearly invite-only event hosted by Jeff Bezos that brings together innovative minds in machine learning, automation, robotics, and space to share new ideas across these rapidly advancing domains. re:MARS featured a lot of announcements revealing a range of robots, each engineered for a different purpose. These include helicopter drones for delivery, two robot dogs by Boston Dynamics, autonomous human-like acrobats by Walt Disney Imagineering, and much more. Amazon also revealed Alexa's new Dialog Modeling for Natural, Cross-Skill Conversations. Let us have a brief look at each of the announcements. Robert Downey Jr. announces 'The Footprint Coalition' project to clean up the environment using robotics Robert Downey Jr., popularly known as "Iron Man", provided one of the most exciting moments of re:MARS when he announced a new project called The Footprint Coalition, which aims to clean up the planet using advanced technologies. "Between robotics and nanotechnology we could probably clean up the planet significantly, if not entirely, within a decade," he said. According to Forbes, "Amazon did not immediately respond to questions about whether it was investing financially or technologically in Downey Jr.'s project." "At this point, the effort is severely light on details, with only a bare-bones website to accompany Downey's public statement, but the actor said he plans to officially launch the project by April 2020," Forbes reports. A recent United Nations report found that humans are having an unprecedented and devastating effect on global biodiversity, and researchers have found microplastics polluting the air, ocean, and soil. The announcement also comes at a time when the company itself is under fire for its policies around the environment and climate change. Additionally, Morgan Pope and Tony Dohi of Walt Disney Imagineering demonstrated their work on creating autonomous acrobats. https://twitter.com/jillianiles/status/1136082571081555968 https://twitter.com/thesullivan/status/1136080570549563393 Amazon will soon deliver orders using drones On Wednesday, Amazon unveiled a revolutionary new drone that will test-deliver toothpaste and other household goods starting within months. This drone is "part helicopter and part science-fiction aircraft" with built-in AI features and sensors that will help it fly robotically without threatening traditional aircraft or people on the ground. Gur Kimchi, vice president of Amazon Prime Air, said in an interview with Bloomberg, "We have a design that is amazing. It has performance that we think is just incredible. We think the autonomy system makes the aircraft independently safe." However, he refused to provide details on where the delivery tests will be conducted. Also, the drones have received a year's approval from the FAA to test the devices in limited ways that still won't allow deliveries. According to a Bloomberg report, "It can take years for traditional aircraft manufacturers to get U.S. Federal Aviation Administration approval for new designs and the agency is still developing regulations to allow drone flights over populated areas and to address national security concerns. The new drone presents even more challenges for regulators because there aren't standards yet for its robotic features".
Competitors to Amazon's unnamed drone include Alphabet Inc.'s Wing, which became the first drone to win FAA approval to operate as a small airline, in April. Also, United Parcel Service Inc. and drone startup Matternet Inc. began using drones to move medical samples between hospitals in Raleigh, North Carolina, in March. Amazon's drone is about six feet across, with six propellers that lift it vertically off the ground. It is surrounded by a six-sided shroud that protects people from the propellers and also serves as a high-efficiency wing, so that it can fly more horizontally like a plane. Once it gets off the ground, the craft tilts and flies sideways, the helicopter blades becoming more like airplane propellers. Kimchi said, "Amazon's business model for the device is to make deliveries within 7.5 miles (12 kilometers) from a company warehouse and to reach customers within 30 minutes. It can carry packages weighing as much as five pounds. More than 80% of packages sold by the retail behemoth are within that weight limit." According to the company, one of the things the drone has mastered is detecting utility wires and clotheslines. These have been notoriously difficult to identify reliably, and they pose a hazard for a device attempting to make deliveries in urban and suburban areas. To know more about these high-tech drones in detail, head over to Amazon's official blog post. Boston Dynamics' first commercial robot, Spot Boston Dynamics revealed its first commercial product, a quadrupedal robot named Spot. Boston Dynamics' CEO Marc Raibert told The Verge, "Spot is currently being tested in a number of "proof-of-concept" environments, including package delivery and surveying work." He also said that although there's no firm launch date for the commercial version of Spot, it should be available within months, certainly before the end of the year. "We're just doing some final tweaks to the design. We've been testing them relentlessly," Raibert said. These Spot robots are capable of navigating environments autonomously, but only when their surroundings have been mapped in advance. They can withstand kicks and shoves and keep their balance on tricky terrain, but they don't decide for themselves where to walk. These robots are simple to control; using a D-pad, users can steer the robot just like an RC car or a mechanical toy. A quick tap on the video feed streamed live from the robot's front-facing camera lets the user select a destination for it to walk to, and another tap lets the user assume control of a robot arm mounted on top of the chassis. With 3D cameras mounted on top, a Spot robot can map environments such as construction sites, identifying hazards and work progress. It also has a robot arm, which gives it greater flexibility and helps it open doors and manipulate objects. https://twitter.com/jjvincent/status/1136096290016595968 The commercial version will be "much less expensive than prototypes [and] we think they'll be less expensive than other peoples' quadrupeds", Raibert said. Here's a demo video of the Spot robot at the re:MARS event. https://youtu.be/xy_XrAxS3ro Alexa gets new dialog modeling for improved natural, cross-skill conversations Amazon unveiled new features in Alexa that will help the conversational agent answer more complex questions and carry out more complex tasks.
Rohit Prasad, Alexa vice president and head scientist, said, "We envision a world where customers will converse more naturally with Alexa: seamlessly transitioning between skills, asking questions, making choices, and speaking the same way they would with a friend, family member, or co-worker. Our objective is to shift the cognitive burden from the customer to Alexa." This new update to Alexa is a set of AI modules that work together to generate responses to customers' questions and requests. With every round of dialog, the system produces a vector (a fixed-length string of numbers) that represents the context and the semantic content of the conversation. "With this new approach, Alexa will predict a customer's latent goal from the direction of the dialog and proactively enable the conversation flow across topics and skills," Prasad says. "This is a big leap for conversational AI." At re:MARS, Prasad also announced the developer preview of Alexa Conversations, a new deep-learning-based approach for skill developers to create more natural voice experiences with less effort, fewer lines of code, and less training data than before. The preview allows skill developers to create natural, flexible dialogs within a single skill; upcoming releases will allow developers to incorporate multiple skills into a single conversation. With Alexa Conversations, developers provide: (1) application programming interfaces, or APIs, that provide access to their skills' functionality; (2) a list of entities that the APIs can take as inputs, such as restaurant names or movie times; (3) a handful of sample dialogs annotated to identify entities and actions and mapped to API calls. Alexa Conversations' AI technology handles the rest. "It's way easier to build a complex voice experience with Alexa Conversations due to its underlying deep-learning-based dialog modeling," Prasad said. To know more about this announcement in detail, head over to Alexa's official blog post. Amazon Robotics unveiled two new robots at its fulfillment centers Brad Porter, vice president of robotics at Amazon, announced two new robots: one code-named Pegasus and the other Xanthus. Pegasus, which is built to sort packages, is a 3-foot-wide robot equipped with a conveyor belt on top to drop the right box in the right location. "We sort billions of packages a year. The challenge in package sortation is, how do you do it quickly and accurately? In a world of Prime one-day [delivery], accuracy is super-important. If you drop a package off a conveyor, lose track of it for a few hours — or worse, you mis-sort it to the wrong destination, or even worse, if you drop it and damage the package and the inventory inside — we can't make that customer promise anymore," Porter said. Porter said Pegasus robots have already driven a total of 2 million miles and have reduced the number of wrongly sorted packages by 50 percent. Porter said that Xanthus represents the latest incarnation of Amazon's drive robot. Amazon uses tens of thousands of the current-generation robot, known as Hercules, in its fulfillment centers. Amazon also unveiled the Xanthus Sort Bot and the Xanthus Tote Mover. "The Xanthus family of drives brings innovative design, enabling engineers to develop a portfolio of operational solutions, all of the same hardware base through the addition of new functional attachments. We believe that adding robotics and new technologies to our operations network will continue to improve the associate and customer experience," Porter says.
To know more about these new robots, watch the video below: https://youtu.be/4MH7LSLK8Dk StyleSnap: AI-powered shopping Amazon announced StyleSnap, its latest move to promote AI-powered shopping. StyleSnap helps users pick out clothes and accessories. All they need to do is upload a photo or a screenshot of what they are looking for whenever they are unable to describe what they want. https://twitter.com/amazonnews/status/1136340356964999168 Amazon said, "You are not a poet. You struggle to find the right words to explain the shape of a neckline, or the spacing of a polka dot pattern, and when you attempt your text-based search, the results are far from the trend you were after." To use StyleSnap, just open the Amazon app, click the camera icon in the upper right-hand corner, select the StyleSnap option, and then upload an image of the outfit. After this, StyleSnap provides recommendations for similar outfits available to purchase on Amazon, which users can filter by brand, pricing, and reviews. Amazon's AI system can identify colors and edges, and then patterns like floral and denim. Using this information, its algorithm can then accurately pick a matching style. To know more about StyleSnap in detail, head over to Amazon's official blog post. Amazon Go trains cashierless store algorithms using synthetic data At re:MARS, Amazon shared more details about Amazon Go, the company's brand for its cashierless stores. The company said Amazon Go uses synthetic data to intentionally introduce errors into its computer vision system. Challenges that had to be addressed before opening queue-free stores included the need for vision systems that account for sunlight streaming into a store, very little tolerance for latency delays, and small amounts of data for certain tasks. Synthetic data is being used in a number of ways to power few-shot learning, improve AI systems that control robots, train AI agents to walk, or beat humans in games of Quake III. Dilip Kumar, VP of Amazon Go, said, "As our application improved in accuracy — and we have a very highly accurate application today — we had this interesting problem that there were very few negative examples, or errors, which we could use to train our machine learning models." He further added, "So we created synthetic datasets for one of our challenging conditions, which allowed us to be able to boost the diversity of the data that we needed. But at the same time, we have to be careful that we weren't introducing artifacts that were only visible in the synthetic data sets, [and] that the data translates well to real-world situations — a tricky balance." To know more about this news in detail, check out this video: https://youtu.be/jthXoS51hHA The Amazon re:MARS event is still ongoing and will have many more updates. To catch live updates from Vegas, visit Amazon's blog. World's first touch-transmitting telerobotic hand debuts at Amazon re:MARS tech showcase Amazon introduces S3 batch operations to process millions of S3 objects Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available

Creating Random Insults

Packt
28 Apr 2015
21 min read
In this article by Daniel Bates, the author of Raspberry Pi for Kids - Second edition, we're going to learn and use the Python programming language to generate random funny phrases such as Alice has a smelly foot! (For more resources related to this topic, see here.) Python In this article, we are going to use the Python programming language. Almost all programming languages are capable of doing the same things, but they are usually designed with different specializations. Some languages are designed to perform one job particularly well, some are designed to run code as fast as possible, and some are designed to be easy to learn. Scratch was designed for developing animations and games, and to be easy to read and learn, but it can be difficult to manage large programs with it. Python is designed to be a good general-purpose language. It is easy to read and can run code much faster than Scratch. Python is a text-based language. Using it, we type the code rather than arrange building blocks. This makes it easier to go back and change the pieces of code that we have already written, and it allows us to write complex pieces of code more quickly. It does mean that we need to type our programs accurately, though: there are no limits to what we can type, but not all text will form a valid program. Even a simple spelling mistake can result in errors. Lots of tutorials and information about the available features are provided online at http://docs.python.org/2/. Learn Python the Hard Way, by Zed A. Shaw, is another good learning resource, which is available at http://learnpythonthehardway.org. As an example, let's take a look at some Scratch and Python code, respectively, both of which do the same thing. Here's the Scratch code: The Python code that does the same job looks like this:

def count(maximum):
    value = 0
    while value < maximum:
        value = value + 1
        print "value =", value

count(5)

Even if you've never seen any Python code before, you might be able to read it and tell what it does. Both the Scratch and Python code count from 0 to a maximum value, and display the value each time. The biggest difference is in the first line. Instead of waiting for a message, we define (or create) a function, and instead of sending a message, we call the function (more on how to run Python code shortly). Notice that we include maximum as an argument to the count function. This tells Python the particular value we would like to use as the maximum, so we can use the same code with different maximum values. The other main differences are that we have while instead of forever if, and we have print instead of say. These are just different ways of writing the same thing. Also, instead of having a block of code wrap around other blocks, we simply put an extra four spaces at the beginning of a line to show which code is contained within a particular block. Python programming To run a piece of Python code, open Python 2 from the Programming menu on the Raspberry Pi desktop and perform the following steps: Type the previous code into the window, and you should notice that it can recognize how many spaces to start a line with. When you have finished the function block, press Enter a couple of times, until you see >>>. This shows that Python recognizes that your block of code has been completed and that it is ready to receive a new command. Now, you can run your code by typing in count(5) and pressing Enter. You can change 5 to any number you like and press Enter again to count to a different number.
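Incidentally, if you ever try this code under Python 3, the most visible change is that print becomes a function. A quick sketch of the same counting code in Python 3 syntax follows for reference; the rest of this article sticks to Python 2:

def count(maximum):
    value = 0
    while value < maximum:
        value = value + 1
        print("value =", value)  # print is a function in Python 3

count(5)

Later in the article, random.choice(adjectives.keys()) would also need to become random.choice(list(adjectives.keys())) in Python 3, because dictionary keys are no longer returned as a plain list there.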
We're now ready to create our program! The Raspberry Pi also supports Python 3, which is very similar to, but incompatible with, Python 2. You can check out the differences between Python 2 and Python 3 at http://python-future.org/compatible_idioms.html. The program we're going to use to generate phrases As mentioned earlier, our program is going to generate random, possibly funny, phrases for us. To do this, we're going to give each phrase a common structure, and randomize the word that appears in each position. Each phrase will look like: <name> has a <adjective> <noun> Where <name> is replaced by a person's name, <adjective> is replaced by a descriptive word, and <noun> is replaced by the name of an object. This program is going to be a little larger than our previous code example, so we're going to want to save it and modify it easily. Navigate to File | New Window in Python 2. A second window will appear, which starts off completely blank. We will write our code in this window, and when we run it, the results will appear in the first window. For the rest of the article, I will call the first window the Shell, and the new window the Code Editor. Remember to save your code regularly! Lists We're going to use a few different lists in our program. Lists are an important part of Python, and allow us to group together similar things. In our program, we want to have separate lists for all the possible names, adjectives, and nouns that can be used in our sentences. We can create a list in this manner:

names = ["Alice", "Bob", "Carol"]

Here, we have created a variable called names, which is a list. The list holds three items or elements: Alice, Bob, and Carol. We know that it is a list because the elements are surrounded by square brackets, and are separated by commas. The names need to be in quote marks to show that they are text, and not the names of variables elsewhere in the program. To access the elements in a list, we use the number which matches its position, but curiously, we start counting from zero. This is because if we know where the start of the list is stored, we know that its first element is stored at position start + 0, the second element is at position start + 1, and so on. So, Alice is at position 0 in the list, Bob is at position 1, and Carol is at position 2. We use the following code to display the first element (Alice) on the screen:

print names[0]

We've seen print before: it displays text on the screen. The rest of the code is the name of our list (names), and the position of the element in the list that we want, surrounded by square brackets. Type these two lines of code into the Code Editor, and then navigate to Run | Run Module (or press F5). You should see Alice appear in the Shell. Feel free to play around with the names in the list or the position that is being accessed until you are comfortable with how lists work. You will need to rerun the code after each change. What happens if you choose a position that doesn't match any element in the list, such as 10? Adding randomness So far, we have complete control over which name is displayed. Let's now work on displaying a random name each time we run the program. Update your code in the Code Editor so it looks like this:

import random

names = ["Alice", "Bob", "Carol"]
position = random.randrange(3)
print names[position]

In the first line of the code, we import the random module. Python comes with a huge amount of code that other people have written for us, separated into different modules.
Some of this code is simple, but makes life more convenient for us, and some of it is complex, allowing us to reuse other people's solutions for the challenges we face and concentrate on exactly what we want to do. In this case, we are making use of a collection of functions that deal with random behavior. We must import a module before we are able to access its contents. Information on the available modules can be found online at www.python.org/doc/. After we've created the list of names, we then compute a random position in the list to access. The name random.randrange tells us that we are using a function called randrange, which can be found inside the random module that we imported earlier. The randrange function gives us a random whole number less than the number we provide. In this case, we provide 3 because the list has three elements, and we store the random position in a new variable called position. Finally, instead of accessing a fixed element in the names list, we access the element that position refers to. If you run this code a few times, you should notice that different names are chosen randomly. Now, what happens if we want to add a fourth name, Dave, to our list? We need to update the list itself, but we also need to update the value we provide to randrange to let it know that it can give us larger numbers. Making multiple changes just to add one name can cause problems; if the program is much larger, we may forget which parts of the code need to be updated. Luckily, Python has a nice feature which allows us to make this simpler. Instead of a fixed number (such as 3), we can ask Python for the length of a list, and provide that to the randrange function. Then, whenever we update the list, Python knows exactly how long it is, and can generate suitable random numbers. Here is the code, updated to make it easier to change the length of the list:

import random

names = ["Alice", "Bob", "Carol"]
length = len(names)
position = random.randrange(length)
print names[position]

Here, we've created a new variable called length to hold the length of the list. We then use the len function (which is short for length) to compute the length of our list, and we give length to the randrange function. If you run this code, you should see that it works exactly as it did before, and it easily copes if you add or remove elements from the list. It turns out that this is such a common thing to do that the writers of the random module have provided a function which does the same job. We can use this to simplify our code:

import random

names = ["Alice", "Bob", "Carol", "Dave"]
print random.choice(names)

As you can see, we no longer need to compute the length of the list or a random position in it: random.choice does all of this for us, and simply gives us a random element of any list we provide it with. As we will see in the next section, this is useful, since we can reuse random.choice for all the different lists we want to include in our program. If you run this program, you will see that it works the same as it did before, despite being much shorter. Creating phrases Now that we can get a random element from a list, we've crossed the halfway mark to generating random sentences! Create two more lists in your program, one called adjectives and the other called nouns. Put as many descriptive words as you like into the first one, and a selection of objects into the second.
Here are the three lists I now have in my program:

names = ["Alice", "Bob", "Carol", "Dave"]
adjectives = ["fast", "slow", "pretty", "smelly"]
nouns = ["dog", "car", "face", "foot"]

Also, instead of printing our random elements immediately, let's store them in variables so that we can put them all together at the end. Remove the existing line of code with print in it, and add the following three lines after the lists have been created:

name = random.choice(names)
adjective = random.choice(adjectives)
noun = random.choice(nouns)

Now, we just need to put everything together to create a sentence. Add this line of code right at the end of the program:

print name, "has a", adjective, noun

Here, we've used commas to separate all of the things we want to display. The name, adjective, and noun are our variables holding the random elements of each of the lists, and "has a" is some extra text that completes the sentence. print will automatically put a space between each thing it displays (and start a new line at the end). If you ever want to prevent Python from adding a space between two items, separate them with + rather than a comma. That's it! If you run the program, you should see random phrases being displayed each time, such as Alice has a smelly foot or Carol has a fast car. Making mischief So, we have random phrases being displayed, but what if we now want to make them less random? What if you want to show your program to a friend, but make sure that it only ever says nice things about you, or bad things about them? In this section, we'll extend the program to do just that. Dictionaries The first thing we're going to do is replace one of our lists with a dictionary. A dictionary in Python uses one piece of information (a number, some text, or almost anything else) to search for another. This is a lot like the dictionaries you might be used to, where you use a word to search for its meaning. In Python, we say that we use a key to look for a value. We're going to turn our adjectives list into a dictionary. The keys will be the existing descriptive words, and the values will be tags that tell us what sort of descriptive words they are. Each adjective will be "good" or "bad". My adjectives list becomes the following dictionary. Make similar changes to yours.

adjectives = {"fast":"good", "slow":"bad", "pretty":"good", "smelly":"bad"}

As you can see, the square brackets from the list become curly braces when you create a dictionary. The elements are still separated by commas, but now each element is a key-value pair with the adjective first, then a colon, and then the type of adjective it is. To access a value in a dictionary, we no longer use the number which matches its position. Instead, we use the key with which it is paired. So, as an example, the following code will display "good", because "pretty" is paired with "good" in the adjectives dictionary:

print adjectives["pretty"]

If you try to run your program now, you'll get an error which mentions random.choice(adjectives). This is because random.choice expects to be given a list, but is now being given a dictionary. To get the code working as it was before, replace that line of code with this:

adjective = random.choice(adjectives.keys())

The addition of .keys() means that we only look at the keys in the dictionary; these are the adjectives we were using before, so the code should work as it did previously. Test it out now to make sure. Loops You may remember the forever and repeat code blocks in Scratch.
In this section, we're going to use Python's versions of these to repeatedly choose random items from our dictionary until we find one which is tagged as "good". A loop is the general programming term for this repetition; if you walk around a loop, you will repeat the same path over and over again, and it is the same with loops in programming languages. Here is some code which finds an adjective that is tagged as "good". Replace your existing adjective = line of code with these lines:

while True:
    adjective = random.choice(adjectives.keys())
    if adjectives[adjective] == "good":
        break

The first line creates our loop. It contains the while keyword, and a test to see whether the code should be executed inside the loop. In this case, we make the test True, so it always passes, and we always execute the code inside. We end the line with a colon to show that this is the beginning of a block of code. While in Scratch we could drag code blocks inside of the forever or repeat blocks, in Python we need to show which code is inside the block in a different way. First, we put a colon at the end of the line, and then we indent any code which we want to repeat by four spaces. The second line is the code we had before: we choose a random adjective from our dictionary. The third line uses adjectives[adjective] to look into the (adjectives) dictionary for the tag of our chosen adjective. We compare the tag with "good" using the double = sign (a double = is needed to make the comparison different from the single = case, which stores a value in a variable). Finally, if the tag matches "good", we enter another block of code: we put a colon at the end of the line, and the following code is indented by another four spaces. This behaves the same way as the Scratch if block. The fourth line contains a single word: break. This is used to escape from loops, which is what we want to do now that we have found a "good" adjective. If you run your code a few times now, you should see that none of the bad adjectives ever appear. Conditionals In the preceding section, we saw a simple use of the if statement to control when some code was executed. Now, we're going to do something a little more complex. Let's say we want to give Alice a good adjective, but give Bob a bad adjective. For everyone else, we don't mind if their adjective is good or bad. The code we already have to choose an adjective is perfect for Alice: we always want a good adjective. We just need to make sure that it only runs if our random phrase generator has chosen Alice as its random person. To do this, we need to put all the code for choosing an adjective within another if statement, as shown here:

if name == "Alice":
    while True:
        adjective = random.choice(adjectives.keys())
        if adjectives[adjective] == "good":
            break

Remember to indent everything inside the if statement by an extra four spaces. Next, we want a very similar piece of code for Bob, but we also want to make sure that the adjective is bad:

elif name == "Bob":
    while True:
        adjective = random.choice(adjectives.keys())
        if adjectives[adjective] == "bad":
            break

The only differences between this and Alice's code are that the name has changed to "Bob", the target tag has changed to "bad", and if has changed to elif. The word elif in the code is short for else if. We use this version because we only want to do this test if the first test (with Alice) fails.
This makes a lot of sense if we look at the code as a whole: if our random person is Alice, do something; else, if our random person is Bob, do something else. Finally, we want some code that can deal with everyone else. This time, we don't want to perform another test, so we don't need an if statement: we can just use else:

else:
    adjective = random.choice(adjectives.keys())

With this, our program does everything we wanted it to do. It generates random phrases, and we can even customize what sort of phrase each person gets. You can add as many extra elif blocks to your program as you like, so as to customize it for different people. Functions In this section, we're not going to change the behavior of our program at all; we're just going to tidy it up a bit. You may have noticed that when customizing the types of adjectives for different people, you created multiple sections of code which were almost identical. This isn't a very good way of programming, because if we ever want to change the way we choose adjectives, we will have to do it multiple times, and this makes it much easier to make mistakes or forget to make a change somewhere. What we want is a single piece of code which does the job we want it to do, and then the ability to use it multiple times. We call this piece of code a function. We saw an example of a function being created in the comparison with Scratch at the beginning of this article, and we've used a few functions from the random module already. A function takes some inputs (called arguments) and does some computation with them to produce a result, which it returns. Here is a function which chooses an adjective for us with a given tag:

def chooseAdjective(tag):
    while True:
        item = random.choice(adjectives.keys())
        if adjectives[item] == tag:
            break
    return item

In the first line, we use def to say that we are defining a new function. We also give the function's name and the names of its arguments in brackets. We separate the arguments by commas if there is more than one of them. At the end of the line, we have a colon to show that we are entering a new code block, and the rest of the code in the function is indented by four spaces. The next four lines should look very familiar to you; they are almost identical to the code we had before. The only difference is that instead of comparing with "good" or "bad", we compare with the tag argument. When we use this function, we will set tag to an appropriate value. The final line returns the suitable adjective we've found. Pay attention to its indentation. The line of code is inside the function, but not inside the while loop (we don't want to return every item we check), so it is only indented by four spaces in total. Type the code for this function anywhere above the existing code which chooses the adjective; the function needs to exist in the code before the place where we use it. In particular, in Python, we tend to place our code in the following order:

Imports
Functions
Variables
Rest of the code

This allows us to use our functions when creating the variables. So, place your function just after the import statement, but before the lists. We can now use this function instead of the several lines of code that we were using before. The code I'm going to use to choose the adjective now becomes:

if name == "Alice":
    adjective = chooseAdjective("good")
elif name == "Bob":
    adjective = chooseAdjective("bad")
else:
    adjective = random.choice(adjectives.keys())

This looks much neater!
Now, if we ever want to change how an adjective is chosen, we just need to change the chooseAdjective function, and the change will be seen in every part of the code where the function is used. Complete code listing Here is the final code you should have when you have completed this article. You can use this code listing to check that you have everything in the right order, or to look for other problems in your code. Of course, you are free to change the contents of the lists and dictionaries to whatever you like; this is only an example:

import random

def chooseAdjective(tag):
    while True:
        item = random.choice(adjectives.keys())
        if adjectives[item] == tag:
            break
    return item

names = ["Alice", "Bob", "Carol", "Dave"]
adjectives = {"fast":"good", "slow":"bad", "pretty":"good", "smelly":"bad"}
nouns = ["dog", "car", "face", "foot"]

name = random.choice(names)
#adjective = random.choice(adjectives)
noun = random.choice(nouns)

if name == "Alice":
    adjective = chooseAdjective("good")
elif name == "Bob":
    adjective = chooseAdjective("bad")
else:
    adjective = random.choice(adjectives.keys())

print name, "has a", adjective, noun

Summary In this article, we learned about the Python programming language and how it can be used to create random phrases. We saw that it shares lots of features with Scratch, but is simply presented differently. Resources for Article: Further resources on this subject: Develop a Digital Clock [article] GPS-enabled Time-lapse Recorder [article] The Raspberry Pi and Raspbian [article]

Getting Started with Electronic Projects

Packt
13 Jan 2015
7 min read
Welcome to my second book produced by the good folks at Packt Publishing LLC. This book is somewhat different from my other book in that, instead of one large project, it is a collection of several small, medium, and large projects. While the book is called Getting Started with Electronics Projects, I convinced the folks at Packt to let me write a book with several projects for the electronics hacker and experimenter groups. The first few projects do not even involve a BeagleBone, something which had my reviewers shaking their heads at first. So, what follows is a brief taste of what you can look forward to in this book. Before we go any further, I should explain who this book is for. If you are a software person who has never heated up a soldering iron before, you might want to practice a bit before attempting the more difficult assembly (electronics assembly, not assembly language programming) projects. If you are a hardware guy who just wants it to work out of the box, then I suggest you download the image and burn yourself a microSD card. If you feel adventurous, you can always play with the code sections. If you succeed in giving the kernel a heart attack (also known as a kernel panic), no worries. Just burn the image again. The book is divided into eight chapters and seven different projects. The first four don't involve a BeagleBone at all. (For more resources related to this topic, see here.) Chapter 1 – Introduction – Our First Project This chapter is for the hardware guys and the adventurous programmers. In this chapter, you will build your own infrared flashlight. If you can use a soldering iron and a solder sucker, you can build this project. IR flashlight Chapter 2 – Infrared Beacon In this chapter, we continue with the theme of infrared devices by building a somewhat more challenging project from a construction perspective. Files for the PCB are available for download from the Packt site, if you bought the book of course. What this beacon does is flash two infrared LEDs on and off at a rate that can be selected by the builder. The beacon is only visible when viewed through night-vision goggles or on a black-and-white video camera. IR beacon While it may not be obvious from the preceding image, the case is actually made from ABS water pipe I purchased from a local hardware store. I like ABS pipe because it is so easy to work with. Chapter 3 – Motion Alarm Once again, we will be using ABS pipe to construct a cool project. This time, we will be building a motion sensor. Most alarm sensors use some sort of Passive Infrared (PIR) sensor or a millimetre-wave radar to detect motion. This project uses a simple (cheap) mercury switch to detect motion. How you reset the alarm is a carefully guarded secret, so you will have to buy the book to learn the secret! Motion sensor Notice the ring at the right end of the tube? That is so you can hang it up like a Christmas ornament! As with the last chapter, the PCB files are available for download from the Packt site. Chapter 4 – Sound Card-based Oscilloscope This chapter uses a USB sound card connected to a PC, because the software I found appears to only run on a PC. If you can find a Mac version of the software, go for it. This project will work for Mac or Linux users too. By the way, I tested all of the software in this chapter on a Pentium 4 class machine running Windows XP, so here is an opportunity to recycle/repurpose that old PC you were going to junk!
Soundblaster oscilloscope The title of the chapter is somewhat misleading, because the project also includes plans for building a sound card-based audio signal generator. There are a number of commercial and freeware versions of software that take advantage of this hardware. Soundblaster software on PC There are a number of commercial software packages that have a freeware version available for download. The preceding screenshot shows one of the better ones I found running under Windows XP. Chapter 5 – Calibrated RF Source In this chapter, we will be building a clean, calibrated RF signal source. In addition to being of use to ham radio enthusiasts, it will also be used in the chapters that follow. Clean 50 MHz signal This is the first project that actually makes use of the BeagleBone Black. The BeagleBone is used to control a digitally controlled step attenuator. This allows us to output a calibrated signal level from our 50 MHz source. In addition to its use in the upcoming chapters, ham radio enthusiasts will once again no doubt appreciate a clean RF source with a calibrated output that is selectable in 0.5 dB steps. GUI running on BeagleBone Black Chapter 6 – RF Power Meter – Hardware In this chapter, we will be building an RF power meter capable of measuring RF power from 40 MHz to over 6 GHz. The circuit is based on the Linear Technology LTC5582 RMS power detector. The beauty of this device is that it outputs a DC voltage proportional to the RMS power it detects. There is no need for conversion, as there is with other detectors. RMS power is the AC power measured by your digital voltmeter when you have it set to AC. RF detector mounted on protoboard The connector near the notch in the protoboard allows the BeagleBone to both read the RF power and control the step attenuator mentioned earlier. Chapter 7 – RF Power Meter – Software In this chapter, we will be building a development system based on Ubuntu, using a docking station available from https://specialcomp.com/beaglebone/index.htm. This could be considered the "deluxe" version. It is also possible to complete the next two chapters using the debug port on the BeagleBone and a communications program like PuTTY. BeagleBone development system This configuration also contains the hardware to build a combination wired and wireless alarm system. More on that is in the following chapter. Chapter 8 – Creating a ZigBee Network of Sensors This is the longest and by far the most complex chapter in the book. In this chapter, we will learn how to configure XBee modules from Digi International Inc. using the XCTU Windows application. We will then build a standalone wireless alarm system. This alarm system will be based on hardware developed and presented in my previous book: http://www.packtpub.com/building-a-home-security-system-with-beaglebone/book If you purchased my previous book and built any of the alarm system hardware, you can also use it in this chapter to convert your wired alarm system to wireless! The following image is of the XBee module mounted on top of the alarm boards. Each wireless remote module has two alarm zone inputs and four isolated alarm outputs. Completed wireless alarm remote module Summary This book will hopefully have something of interest to a large variety of electronics enthusiasts, from hams to hackers. I would say that, as long as you have at least intermediate programming and construction skills, you should have no problem completing the projects in this book. All the projects use through-hole parts to make assembly easier.
Resources for Article: Further resources on this subject: Building robots that can walk [article] Beagle Boards [article] Protecting GPG Keys in BeagleBone [article]

Webcam and Video Wizardry

Packt
26 Jul 2013
13 min read
(For more resources related to this topic, see here.) Setting up your camera Go ahead, plug in your webcam and boot up the Pi; we'll take a closer look at what makes it tick. If you experimented with the dwc_otg.speed parameter to improve the audio quality during the previous article, you should change it back now by changing its value from 1 to 0, as chances are that your webcam will perform worse, or will not perform at all, because of the reduced speed of the USB ports. Meet the USB Video Class drivers and Video4Linux Just as the Advanced Linux Sound Architecture (ALSA) system provides kernel drivers and a programming framework for your audio gadgets, there are two important components involved in getting your webcam to work under Linux: The Linux USB Video Class (UVC) drivers provide the low-level functions for your webcam, which are in accordance with a specification followed by most webcams produced today. Video4Linux (V4L) is a video capture framework used by applications that record video from webcams, TV tuners, and other video-producing devices. There's an updated version of V4L called V4L2, which we'll want to use whenever possible. Let's see what we can find out about the detection of your webcam, using the following command:

pi@raspberrypi ~ $ dmesg

The dmesg command is used to get a list of all the kernel information messages that have been recorded since the system booted. What we're looking for, among this multitude of messages, is a notice from uvcvideo. Kernel messages indicating a found webcam In the previous screenshot, a Logitech C110 webcam was detected and registered with the uvcvideo module. Note the cryptic sequence of characters, 046d:0829, next to the model name. This is the device ID of the webcam, and it can be a big help if you need to search for information related to your specific model. Finding out your webcam's capabilities Before we start grabbing videos with our webcam, it's very important that we find out exactly what it is capable of in terms of video formats and resolutions. To help us with this, we'll add the uvcdynctrl utility to our arsenal, using the following command:

pi@raspberrypi ~ $ sudo apt-get install uvcdynctrl

Let's start with the most important part: the list of supported frame formats. To see this list, type in the following command:

pi@raspberrypi ~ $ uvcdynctrl -f

List of frame formats supported by our webcam According to the output for this particular webcam, there are two main pixel formats that are supported. The first format, called YUYV or YUV 4:2:2, is a raw, uncompressed video format, while the second format, called MJPG or MJPEG, provides a video stream of compressed JPEG images. Below each pixel format, we find the supported frame sizes and frame rates for each size. The frame size, or image resolution, will determine the amount of detail visible in the video. Three common resolutions for webcams are 320 x 240, 640 x 480 (also called VGA), and 1024 x 768 (also called XGA). The frame rate is measured in Frames Per Second (FPS) and will determine how "fluid" the video will appear. Only two different frame rates, 15 and 30 FPS, are available for each frame size on this particular webcam. Now that you know a bit more about your webcam, if you happen to be the unlucky owner of a camera that doesn't support the MJPEG pixel format, you can still follow along, but don't expect more than a slideshow of 320 x 240 images from your webcam. Video processing is one of the most CPU-intensive activities you can do with the Pi, so you need your webcam to help in this matter by compressing the frames first.
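If uvcdynctrl gives you trouble, the v4l2-ctl tool from the v4l-utils package can, to the best of our knowledge, produce a similar list of supported pixel formats, frame sizes, and frame rates; treat this as an optional alternative rather than the method used in the rest of this article:

pi@raspberrypi ~ $ sudo apt-get install v4l-utils
pi@raspberrypi ~ $ v4l2-ctl --list-formats-ext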
Capturing your target on film All right, let's see what your sneaky glass eye can do! We'll be using an excellent piece of software called MJPG-streamer for all our webcam capturing needs. Unfortunately, it's not available as an easy-to-install package for Raspbian, so we will have to download and build this software ourselves. Often, when we compile software from source code, the application we're building will want to make use of code libraries and development headers. Our MJPG-streamer application, for example, would like to include functionality for dealing with JPEG images and Video4Linux devices. Install the libraries and headers for JPEG and V4L by typing in the following command:

pi@raspberrypi ~ $ sudo apt-get install libjpeg8-dev libv4l-dev

Next, we're going to download the MJPG-streamer source code using the following command:

pi@raspberrypi ~ $ wget http://mjpg-streamer.svn.sourceforge.net/viewvc/mjpg-streamer/mjpg-streamer/?view=tar -O mjpg-streamer.tar.gz

The wget utility is an extraordinarily handy web download tool with many uses. Here, we use it to grab a compressed TAR archive from a source code repository, and we supply the extra -O mjpg-streamer.tar.gz to give the downloaded tarball a proper filename. Now we need to extract our mjpg-streamer.tar.gz file, using the following command:

pi@raspberrypi ~ $ tar xvf mjpg-streamer.tar.gz

The tar command can both create and extract archives, so we supply three flags here: x for extract, v for verbose (so that we can see where the files are being extracted to), and f to tell tar to use the file we specify as input, instead of reading from the standard input. Once you've extracted it, enter the directory containing the sources:

pi@raspberrypi ~ $ cd mjpg-streamer

Now type in the following command to build MJPG-streamer with support for V4L2 devices:

pi@raspberrypi ~/mjpg-streamer $ make USE_LIBV4L2=true

Once the build process has finished, we need to install the resulting binaries and other application data somewhere more permanent, using the following command:

pi@raspberrypi ~/mjpg-streamer $ sudo make DESTDIR=/usr install

You can now exit the directory containing the sources and delete it, as we won't need it anymore:

pi@raspberrypi ~/mjpg-streamer $ cd .. && rm -r mjpg-streamer

Let's fire up our newly built MJPG-streamer! Type in the following command, but adjust the values for resolution and frame rate to a moderate setting that you know (from the previous section) your webcam will be able to handle:

pi@raspberrypi ~ $ mjpg_streamer -i "input_uvc.so -r 640x480 -f 30" -o "output_http.so -w /usr/www"

MJPG-streamer starting up You may have received a few error messages saying Inappropriate ioctl for device; these can be safely ignored. Other than that, you might have noticed the LED on your webcam (if it has one) light up, as MJPG-streamer is now serving your webcam feed over the HTTP protocol on port 8080. Press Ctrl + C at any time to quit MJPG-streamer. To tune into the feed, open up a web browser (preferably Chrome or Firefox) on a computer connected to the same network as the Pi, and enter the following line into the address field of your browser, but change [IP address] to the IP address of your Pi. That is, the address in your browser should look like this: http://[IP address]:8080. You should now be looking at the MJPG-streamer demo pages, containing a snapshot from your webcam.
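If you would like MJPG-streamer to start automatically every time the Pi boots, one common approach on Raspbian of this era (our own suggestion, not something covered by this article) is to add the launch command to /etc/rc.local, just above the final exit 0 line, with a trailing & so that it runs in the background:

# In /etc/rc.local, before the final "exit 0" line
mjpg_streamer -i "input_uvc.so -r 640x480 -f 30" -o "output_http.so -w /usr/www" &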
MJPG-streamer demo pages in Chrome The following pages demonstrate the different methods of obtaining image data from your webcam: The Static page shows the simplest way of obtaining a single snapshot frame from your webcam. The examples use the URL http://[IP address]:8080/?action=snapshot to grab a single frame. Just refresh your browser window to obtain a new snapshot. You could easily embed this image into your website or blog by using the <img src = "http://[IP address]:8080/?action=snapshot"/> HTML tag, but you'd have to make the IP address of your Pi reachable on the Internet for anyone outside your local network to see it. The Stream page shows the best way of obtaining a video stream from your webcam. This technique relies on your browser's native support for decoding MJPEG streams and should work fine in most browsers except for Internet Explorer. The direct URL for the stream is http://[IP address]:8080/?action=stream. The Java page tries to load a Java applet called Cambozola, which can be used as a stream viewer. If you haven't got the Java browser plugin already installed, you'll probably want to steer clear of this page. While the Cambozola viewer certainly has some neat features, the security risks associated with the plugin outweigh the benefits of the viewer. The JavaScript page demonstrates an alternative way of displaying a video stream in your browser. This method also works in Internet Explorer. It relies on JavaScript code to continuously fetch new snapshot frames from the webcam, in a loop. Note that this technique puts more strain on your browser than the preferred native stream method. You can study the JavaScript code by viewing the page source of the following page: http://[IP address]:8080/javascript_simple.html The VideoLAN page contains shortcuts and instructions to open up the webcam video stream in the VLC media player. We will get to know VLC quite well during this article; leave it alone for now. The Control page provides a convenient interface for tweaking the picture settings of your webcam. The page should pop up in its own browser window so that you can view the webcam stream live, side by side, as you change the controls. Viewing your webcam in VLC media player You might be perfectly content with your current webcam setup and viewing the stream in your browser; for those of you who prefer to watch all videos inside your favorite media player, this section is for you. Also note that we'll be using VLC for other purposes further on in this article, so we'll go through the installation here. Viewing in Windows Let's install VLC and open up the webcam stream: Visit http://www.videolan.org/vlc/download-windows.html and download the latest version of the VLC installer package (vlc-2.0.5-win32.exe, at the time of writing). Install VLC media player using the installer. Launch VLC using the shortcut on the desktop or from the Start menu. From the Media drop-down menu, select Open Network Stream…. Enter the direct stream URL we learned from the MJPG-streamer demo pages (http://[IP address]:8080/?action=stream), and click on the Play button.
(Optional) You can add live audio monitoring from the webcam by opening up a command prompt window and typing in the following command:

"C:\Program Files (x86)\PuTTY\plink" pi@[IP address] -pw [password] sox -t alsa plughw:1 -t sox - | "C:\Program Files (x86)\sox-14-4-1\sox" -q -t sox - -d

Viewing in Mac OS X Let's install VLC and open up the webcam stream: Visit http://www.videolan.org/vlc/download-macosx.html and download the latest version of the VLC dmg package for your Mac model. The one at the top, vlc-2.0.5.dmg (at the time of writing), should be fine for most Macs. Double-click on the VLC disk image and drag the VLC icon to the Applications folder. Launch VLC from the Applications folder. From the File drop-down menu, select Open Network…. Enter the direct stream URL we learned from the MJPG-streamer demo pages (http://[IP address]:8080/?action=stream) and click on the Open button. (Optional) You can add live audio monitoring from the webcam by opening up a Terminal window (located in /Applications/Utilities) and typing in the following command:

ssh pi@[IP address] sox -t alsa plughw:1 -t sox - | sox -q -t sox - -d

Viewing on Linux Let's install VLC or MPlayer and open up the webcam stream: Use your distribution's package manager to add the vlc or mplayer package. For VLC, either use the GUI to Open a Network Stream or launch it from the command line with:

vlc http://[IP address]:8080/?action=stream

For MPlayer, you need to tag an MJPG file extension onto the stream, using the following command:

mplayer "http://[IP address]:8080/?action=stream&stream.mjpg"

(Optional) You can add live audio monitoring from the webcam by opening up a Terminal and typing in the following command:

ssh pi@[IP address] sox -t alsa plughw:1 -t sox - | sox -q -t sox - -d

Recording the video stream The best way to save a video clip from the stream is to record it with VLC and save it into an AVI file container. With this method, we get to keep the MJPEG compression while retaining the frame rate information. Unfortunately, you won't be able to record the webcam video with sound. There's no way to automatically synchronize audio with the MJPEG stream. The only way to produce a video file with sound would be to grab the video and audio streams separately and edit them together manually in a video editing application such as VirtualDub. Recording in Windows We're going to launch VLC from the command line to record our video: Open up a command prompt window from the Start menu by clicking on the shortcut or by typing in cmd in the Run or Search fields. Then type in the following command to start recording the video stream to a file called myvideo.avi, located on the desktop:

C:\> "C:\Program Files (x86)\VideoLAN\VLC\vlc.exe" http://[IP address]:8080/?action=stream --sout="#standard{mux=avi,dst=%UserProfile%\Desktop\myvideo.avi,access=file}"

As we've mentioned before, if your particular Windows version doesn't have a C:\Program Files (x86) folder, just erase the (x86) part from the path on the command line. It may seem like nothing much is happening, but there should now be a growing myvideo.avi recording on your desktop. To confirm that VLC is indeed recording, we can select Media Information from the Tools drop-down menu and then select the Statistics tab. Simply close VLC to stop the recording.
Recording in Mac OS X

We're going to launch VLC from the command line to record our video:

1. Open up a Terminal window (located in /Applications/Utilities) and type in the following command to start recording the video stream to a file called myvideo.avi, located on the desktop:

   $ /Applications/VLC.app/Contents/MacOS/VLC http://[IP address]:8080/?action=stream --sout='#standard{mux=avi,dst=/Users/[username]/Desktop/myvideo.avi,access=file}'

   Replace [username] with the name of the account you used to log in to your Mac, or remove the directory path to write the video to the current directory.

2. It may seem like nothing much is happening, but there should now be a growing myvideo.avi recording on your desktop. To confirm that VLC is indeed recording, we can select Media Information from the Window drop-down menu and then select the Statistics tab.
3. Simply close VLC to stop the recording.

Recording in Linux

We're going to launch VLC from the command line to record our video:

1. Open up a Terminal window and type in the following command to start recording the video stream to a file called myvideo.avi, located on the desktop:

   $ vlc http://[IP address]:8080/?action=stream --sout='#standard{mux=avi,dst=/home/[username]/Desktop/myvideo.avi,access=file}'

   Replace [username] with your login name, or remove the directory path to write the video to the current directory.
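If you do need sound with your footage, one workable approach, as mentioned earlier, is to capture the audio separately and line the two recordings up afterwards in an editor. The following is a minimal sketch for Mac or Linux that reuses the sox monitoring pipeline from the viewing sections, but writes the audio to a WAV file instead of playing it through your speakers (myaudio.wav is just an example name):

   $ ssh pi@[IP address] sox -t alsa plughw:1 -t sox - | sox -q -t sox - myaudio.wav

Start this command alongside the VLC recording, stop it with Ctrl + C when you close VLC, and then import myvideo.avi and myaudio.wav into your video editor to align them manually.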
Baking Bits with Yocto Project

Packt
07 Jul 2014
3 min read
(For more resources related to this topic, see here.)

Understanding the BitBake tool

The BitBake task scheduler started as a fork of Portage, which is the package management system used in the Gentoo distribution. However, nowadays the two projects have diverged a lot due to their different usage focuses. The Yocto Project and the OpenEmbedded Project are the best-known and most intensive users of BitBake, which remains a separate and independent project with its own development cycle and mailing list (bitbake-devel@lists.openembedded.org).

BitBake is a task scheduler that parses mixed Python and Shell Script code. The parsed code generates and runs tasks, which may form a complex dependency chain and are scheduled to allow parallel execution and maximize the use of computational resources. BitBake can be understood as a tool similar to GNU Make in some aspects.

Exploring metadata

The metadata used by BitBake can be classified into three major areas:

- Configuration (the .conf files)
- Classes (the .bbclass files)
- Recipes (the .bb and .bbappend files)

The configuration files define the global content, which is used to provide information and configure how the recipes will work. One common example of a configuration file is the machine file, which has a list of settings that describe the board hardware.

The classes are used by the whole system and can be inherited by recipes, according to their needs or by default. They define the system behavior and provide the base methods. For example, kernel.bbclass helps with the process of building, installing, and packaging the Linux kernel, independent of version and vendor.

The recipes and classes are written in a mix of Python and Shell Scripting code. They describe the tasks to be run and provide the information needed to allow BitBake to generate the required task chain. The inheritance mechanism, which permits a recipe to inherit one or more classes, is useful for reducing code duplication and easing maintenance. A Linux kernel recipe example is linux-yocto_3.14.bb, which inherits a set of classes, including kernel.bbclass. These conventions are BitBake's most commonly used aspects across all types of metadata (.conf, .bb, and .bbclass).

Summary

In this article, we studied the BitBake tool and the classification of its metadata.
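To make the recipe format more concrete, the following is a minimal, hypothetical recipe sketch in the spirit of the examples above. The file name (helloworld_1.0.bb), source file, and license checksum follow common OpenEmbedded conventions but are illustrative rather than taken from a real layer:

   # helloworld_1.0.bb -- a hypothetical, minimal recipe sketch.
   SUMMARY = "Minimal hello world application"
   LICENSE = "MIT"
   LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

   SRC_URI = "file://helloworld.c"
   S = "${WORKDIR}"

   # Tasks are plain shell functions; BitBake schedules them
   # (do_fetch, do_compile, do_install, ...) as a dependency chain.
   do_compile() {
       ${CC} ${CFLAGS} ${LDFLAGS} helloworld.c -o helloworld
   }

   do_install() {
       install -d ${D}${bindir}
       install -m 0755 helloworld ${D}${bindir}
   }

Variable assignments such as SRC_URI are the configuration side of the metadata, while the do_* shell functions are the tasks that BitBake parses and schedules.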