
Tech Guides

851 Articles

Tensorflow: Next Gen Machine Learning

Ariel Scarpinelli
01 Jun 2016
7 min read
Last November, Google open sourced its shiny Machine Intelligence package, promising a simpler way to develop deep learning algorithms that can be deployed anywhere, from your phone to a big cluster, without hassle. It can even take advantage of GPUs for better performance.

Let's give it a shot! First things first, let's install it:

# Ubuntu/Linux 64-bit, CPU only (the GPU-enabled version requires more dependencies):
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.7.1-cp27-none-linux_x86_64.whl

# Mac OS X, CPU only:
$ sudo easy_install --upgrade six
$ sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.7.1-cp27-none-any.whl

We are going to play with the well-known iris dataset, where we will train a neural network to take the dimensions of the sepals and petals of an iris plant and classify it as one of three types: Iris setosa, Iris versicolour, and Iris virginica. You can download the training CSV dataset from here.

Reading the Training Data

Because TensorFlow is prepared for cluster-sized data, it lets you define an input by feeding it a queue of filenames to process (think of MapReduce output shards). In our simple case, we are just going to hardcode the path to our only file:

import tensorflow as tf

def inputs():
    filename_queue = tf.train.string_input_producer(["iris.data"])

We then need to set up the Reader, which will work with the file contents. In our case, it's a TextLineReader that will produce a tensor for each line of text in the dataset:

    reader = tf.TextLineReader()
    key, value = reader.read(filename_queue)

Then we are going to parse each line into the feature tensor of each sample in the dataset, specifying the data types (in our case, they are all floats except the iris class, which is a string):

    # decode_csv will convert a Tensor from type string (the text line) into
    # a tuple of tensor columns with the specified defaults, which also
    # sets the data type for each column
    sepal_length, sepal_width, petal_length, petal_width, label = tf.decode_csv(
        value, record_defaults=[[0.0], [0.0], [0.0], [0.0], [""]])

    # we could work with each column separately if we wanted; but here we
    # simply want to process a single feature vector containing all the
    # data for each sample
    features = tf.pack([sepal_length, sepal_width, petal_length, petal_width])

Finally, in our data file the samples are sorted by iris type. That would hurt the model's performance and make it inconvenient to split the data between training and evaluation sets, so we are going to shuffle the data before returning it, using a tensor queue designed for that purpose. All the buffering parameters can be set to 1500 because that is the exact number of samples in the data, so the queue will hold it completely in memory. The batch size also sets the number of rows we pack into a single tensor so that operations can be applied to them in parallel:

    return tf.train.shuffle_batch([features, label],
                                  batch_size=100, capacity=1500,
                                  min_after_dequeue=100)

Converting the Data

The label field in the training dataset is a string holding one of the three possible values of the iris class. To make it friendly to the neural network output, we need to convert it to a three-column vector, one column per class, where the value should be 1 (100% probability) when the sample belongs to that class. This is a typical transformation you may need to apply to input data.
def string_label_as_probability_tensor(label):
    is_setosa = tf.equal(label, ["Iris-setosa"])
    is_versicolor = tf.equal(label, ["Iris-versicolor"])
    is_virginica = tf.equal(label, ["Iris-virginica"])
    return tf.to_float(tf.pack([is_setosa, is_versicolor, is_virginica]))

The Inference Model (Where the Magic Happens)

We are going to use a single-neuron network with a Softmax activation function. The variables (the learned parameters of our model) will only be the matrix of weights applied to the different features of each input sample:

# model: inferred_label = softmax(Wx + b)
# where x is the features vector of each data example
W = tf.Variable(tf.zeros([4, 3]))
b = tf.Variable(tf.zeros([3]))

def inference(features):
    # we need x as a single-row matrix for the multiplication
    x = tf.reshape(features, [1, 4])
    inferred_label = tf.nn.softmax(tf.matmul(x, W) + b)
    return inferred_label

Notice that we left the model parameters as variables outside the scope of the function. That is because we want to use those same variables both while training and when evaluating and using the model.

Training the Model

We train the model using backpropagation, trying to minimize cross entropy, which is the usual way to train a Softmax network. At a high level, this means that for each data sample we compare the output of the inference with the real value and calculate the error (how far off we are). We then use the error value to adjust the learned parameters in a way that minimizes that error. We also have to set the learning factor, which determines, for each sample, how much of the computed error is applied to correct the parameters. There has to be a balance between the learning factor, the number of learning loop cycles, and the number of samples we pack together in the same tensor batch: the bigger the batch, the smaller the factor and the higher the number of cycles.

def train(features, tensor_label):
    inferred_label = inference(features)
    cross_entropy = -tf.reduce_sum(tensor_label * tf.log(inferred_label))
    train_step = tf.train.GradientDescentOptimizer(0.001).minimize(cross_entropy)
    return train_step

Evaluating the Model

We are going to evaluate our model using accuracy, which is the ratio of cases where the network identifies the right iris class over the total number of evaluation samples.

def evaluate(evaluation_features, evaluation_labels):
    inferred_label = inference(evaluation_features)
    correct_prediction = tf.equal(tf.argmax(inferred_label, 1),
                                  tf.argmax(evaluation_labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    return accuracy

Running the Model

All that's left is to connect our graph and run it in a session, where the defined operations will actually consume the data. We also split our input data between training and evaluation at roughly 70%:30%, and run a training loop over it 1,000 times.

features, label = inputs()
tensor_label = string_label_as_probability_tensor(label)
train_step = train(features[0:69, 0:4], tensor_label[0:69, 0:3])
evaluate_step = evaluate(features[70:99, 0:4], tensor_label[70:99, 0:3])

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())

    # Start populating the filename queue.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    for i in range(1000):
        sess.run(train_step)

    print sess.run(evaluate_step)

    # should print 0 => setosa
    print sess.run(tf.argmax(inference([[5.0, 3.6, 1.4, 0.2]]), 1))
    # should be 1 => versicolor
    print sess.run(tf.argmax(inference([[5.5, 2.4, 3.8, 1.1]]), 1))
    # should be 2 => virginica
    print sess.run(tf.argmax(inference([[6.9, 3.1, 5.1, 2.3]]), 1))

    coord.request_stop()
    coord.join(threads)
    sess.close()

If you run this, it should print an accuracy value close to 1. This means our network classifies the samples correctly in almost 100% of the cases, and that we get the right answers for the manually supplied samples.

Conclusion

Our example was very simple, but TensorFlow actually allows you to do much more complicated things with similar ease, such as working with voice recognition and computer vision. It may not look much different from using any other deep learning or math package, but the key is the ability to run the expressed model in parallel. Google is aiming to create a mainstream DSL for expressing data algorithms focused on machine learning, and it may well succeed. For instance, although Google has not yet open sourced the distributed version of the engine, a tool capable of running TensorFlow-modeled graphs directly over an Apache Spark cluster was just presented at the Spark Summit, which shows that the community is interested in expanding its usage.

About the author

Ariel Scarpinelli is a senior Java developer at VirtualMind and a passionate developer with more than 15 years of professional experience. He can be found on Twitter at @triforcexp.


WebAssembly - Trick or Treat?

Prasad Ramesh
31 Oct 2018
1 min read
WebAssembly is a low-level language that works in a binary format close to machine code. It defines an AST (abstract syntax tree) in a binary format, but code can also be created and debugged in a plain text format. It made a popular appearance in many browsers last year and is catching on thanks to its ability to run heavier apps with speed in a browser window. Tools and languages are being built for it.

Why are developers excited about WebAssembly?

Developers are excited because it can potentially run heavy desktop games and applications right inside your browser window. As Mozilla shares plans to bring more functionality to WebAssembly, modern-day web browsing will become more robust. However, WASM, the language it uses, poses some security threats, because WASM binary applications cannot be checked for tampering. Some features are even being held back from WebAssembly until it is more secure against attacks like Spectre and Meltdown.


Responsive Web Design is Hard

Ed Gordon
29 Oct 2014
7 min read
Last week, I embarked on a quest to build my first website, one that would simultaneously deliver on two puns; I would "launch" my website with a "landing" page featuring a rocket sailing across the stars. On my journey, I learned SO much that it probably best belongs in a BuzzFeed list.

7 things only a web dev hack would know

1. "Position" is a thing no one on the Internet knows about. You change the attribute until it looks right, and hope no one breaks it.
2. The Z-index has a number randomly ascribed until the element goes where you want.
3. CSS animations are beyond my ability as someone who's never really written CSS before. So is parallax scrolling. So is anything other than 'width: x%'.
4. Hosting sites ring you. All the time. They won't leave you alone.
5. The more tabs you have open, the better you are as a person. Alt+Tab is the best keyboard hack ever.
6. Web development is 60% deleting things you once thought were integral to the design.
7. Responsive design is hard (more on that below).

So, I bought a site, jslearner.com (cool domain, right?), included the boilerplate Bootstrap CDN, and got to work.

Act I: Design, or, 'how to not stick to plan'

Web design starts with the design bit, right? My initial drawing, like all great designs, was done on the back of an envelope that contained relatively important information. (Author's note: I've now lost the envelope because I left it in the work scanner. Please can I get it back?!)

As you can clearly see from the previous image, I had a strong design aesthetic for the site, right from the off. The rocket (bottom left) was to travel along the line (line for illustration purposes only) and correct itself, before finally landing on a moon that lay across the bottom of the site. In a separate drawing, I'd also decided that I needed two rows consisting of three columns each, so that my rocket could zoom from bottom left to top right, and back down again. This will be relevant in about 500 words.

Confronting reality

I'm a terrible artist, as you can see from my hand-drawn rocket. I have no eye for design. After toying with trying to draw the assets myself, I decided to pre-buy them. The pack I got from Envato, however, came as a PNG and a file I couldn't open. So, I had to hack the PNG (puts on shades): I used Pixlr and magic-wanded the other planets away, so I was left with a pretty dirty version of the planet I wanted. After I had hand-painted the edges, I realised that I could just magic-wand the planet I wanted straight out. This wouldn't be the first 2 hours I wasted.

I then had to get my rocket in order. Another asset paid for, and this time I decided to try and do it professionally. I got Inkscape, which is baffling, and pressed buttons until my rocket looked like it had come to rest. After some tweaking, and after flipping the light sources around, I was ready to charge triumphantly on to the next stage of my quest; the fell beast of design was slain. Development was going to be the easy part. My rocket would soar across the page, against a twinkling backdrop, and land upon my carefully crafted assets.

Act II: Development, or, 'responsive design is hard'

My first test was to actually understand the Bootstrap column thingy… CSS transformations and animations would be taking a back seat in the rocket ship. These columns and rows were to hold my content. I added some rules to include the image of the planets and a background color of 'space blue' (that's a thing, I assure you). My next problem was that the big planet wasn't sitting at the bottom of the page.
Nothing I could do would rectify this. The number of open tabs kept increasing… This was where I learned the value of using the Chrome/Mozilla developer tools to write rules and see what works. Hours later, I figured out that 'fixed position' and '100% width' seemed to do the trick. At this point, the responsive element of the site was handling itself. The planets generally seemed to be fine when scaling up and down. So, the basic premise was set up. Now I just had to add the rocket. Easy, right?

Responsive design is really quite hard

When I positioned my rocket neatly on my planet – using % spacing, of course – I decided to resize the browser. It went literally everywhere. Up, down, to the side. This was bad. It was important to the integrity of my design for the rocket to sit astride the planet. The problem I was facing was that I just couldn't get the element to stay in the same place whilst also adjusting its size. Viewed on a 17-inch desktop, it looked like the rocket was stuck in mid-air. Not the desired effect.

Act III: Refactoring, or, 'sticking to plan == stupid results'

When I 'wireframed' my design (in pencil, on an envelope), for some reason I drew two rows. Maybe it's because I was watching TV whilst playing Football Manager. I don't know. Whatever the reason, the result of this added row was that when I resized, the moon stuck to its row and the rocket went up with the top of the browser. Responsive design is as much about solid structure as it is about fancy CSS rules. Realising this point would cost me hours of my life.

Back to the drawing board. After restructuring the HTML bits (copy/paste), I'd managed to get the rocket and moon into the same div class. But it was all messed up, again. Why, tiny moon? Why?! Again, I spent hours tweaking CSS styles in the browser until I had something closer to what I was looking for: rocket on moon, no matter the size. I feel like a winner, listen to the Knight Rider theme song, and go to bed.

Act IV: Epiphany, or, 'expectations can be fault tolerant'

A website containing four elements had taken me about 15 hours of work to make look 'passable'. To be honest, it's still not great, but it does work. Part of this is my own ignorance of speedier development workflows (design in the browser, use the magic wand, and so on). Another part of this was just how hard responsive design is. What I hadn't realised was how much of responsive design depends on clever structure and markup. I hadn't realised that this clever structure doesn't even start with HTML – for me, it started with a terrible drawing on the back of an envelope. The CSS part enables your 'things' to resize nicely, but without your elements in the right places, no amount of {z-position: -11049;} will make it work properly. It's what makes learning resources so valuable; time invested in understanding how to do it properly is time well spent. It's also why Bootstrap will help make my stuff look better, but will never on its own make me a better designer.


VR in Unity!

Raka Mahesa
25 May 2016
6 min read
If you're a Unity developer looking to get into VR development, then you're in luck, because Unity is definitely going all-in on the virtual reality front. With the recent announcements at Vision VR/AR Summit 2016, Unity has promised built-in support for all major virtual reality platforms: Oculus Rift (including GearVR), PlayStation VR, SteamVR, and Google Cardboard. Unity is also currently developing an experimental version of the Unity editor that can be used inside VR. Given that there is one game engine that supports most VR platforms, there has never been a better time to get your feet wet in VR development. So let's do exactly that; let's get started with VR using Unity. And don't worry; I'll be here to help you every step of the way.

A journey of a thousand miles begins with a single step, so let's do some simple stuff first. There are two basic topics that we're going to cover this time: head tracking and interaction in VR. While those sound pretty basic, they are the fundamental components of a virtual reality experience, so it's best to understand them from the beginning.

Requirements

There are a couple of requirements that need to be satisfied before we start. We're going to need:

- Unity 5.3 or above
- Windows 7 or above
- An Oculus Rift device (optional)
- Oculus Runtime 0.8.0.0 (only if you have an Oculus Rift)

While it's recommended to have an Oculus Rift ready when you're developing a VR app, it is not required.

Starting out

All right then, let's get started. I'm going to assume that you're already familiar with programming and Unity in general, so we won't go into too much detail on that. Let's make a new Unity 3D project. Once you've made a new project, you need to turn on VR support by going to Edit > Project Settings > Player. On the Other Settings tab, check the Virtual Reality Supported box. Don't forget to make sure that your selected build platform is PC, Mac, and Linux.

The next step is to create a scene and add some objects to it. Any objects will do, but in my case I just added a cube at the center of the scene. Then we create an empty GameObject and add (or move) the main camera as its child object. This part is important—the main camera has to be a child of another object because, when you're developing for VR, you cannot move the camera object by yourself. The VR system will track your head movement and adjust the camera position accordingly, overriding any change you make to the camera position. So, to move your camera, you have to modify the position of the camera's parent object instead of the camera itself.

Now, if you have an Oculus Rift at hand, you're set and ready to go! All you need to do is connect the device to your PC, and when you press play, your head movement will automatically be tracked and translated to the in-game camera, and you will be able to see your scene in VR on your Oculus Rift.

Don't fret if there's no Oculus Rift available for you to use. We'll just have to simulate head movement using this script (by Peter Koch from talesfromtherift). Copy that script into your project and attach it to your main camera. Now, if you play the project in the editor, you can rotate the camera freely by holding the Alt button and moving your cursor around.

Interaction

Okay, time to step things up a notch. It's not really an app if you can't interact with it, so let's inject some interactivity into the project. We'll add the most basic interaction in virtual reality—gazing.
To add gazing to our project, we're going to need this script. Copy it and attach it to the main camera. Now, when you're playing the project, if you look at a GameObject that has a Collider, the object will slowly turn red. If you click while you're looking at an object, that object will be teleported to a random position. Interaction is fun, isn't it?

Well, let's dive deeper into the script that enables that gazing interaction. Basically, at every frame, the script checks whether you're looking at an object by casting a ray forward from the camera and seeing whether it hits an object or not:

if (Physics.Raycast(new Ray(transform.position, transform.forward), out hit, GAZE_LENGTH))

When the cast ray hits an object, the script checks whether it's the same object as in the previous frame. If it is, a timer is updated and the script changes the color of the object according to that timer:

//Increase gaze time
mGazeTime += Time.deltaTime;
if (mGazeTime > MAX_GAZE_DURATION)
    mGazeTime = MAX_GAZE_DURATION;

//Recolor
float color = (MAX_GAZE_DURATION - mGazeTime) / MAX_GAZE_DURATION;
ColorObject(mObject, new Color(1f, color, color));

Interacting via clicks is just as simple. After the script has detected that there's an object, it checks whether the left mouse button has been clicked. If it has, the script moves the object to another position:

if (mObject != null && Input.GetMouseButtonUp(0))
{
    //Move object elsewhere
    float newX = Random.Range(-8f, 8f);
    float newY = Random.Range(-2f, 2f);
    float newZ = Random.Range(0f, 3f);
    mObject.transform.position = new Vector3(newX, newY, newZ);
}

Right now the script, while functional, is very basic. What if you want different objects to behave differently when they're being looked at? Or what if you want the script to interact with only certain objects? Well, I figure I'll leave all that to you as an exercise.

This concludes the beginning of our journey into VR development. While we only scratched the surface of the development process in this post, we've learned enough to actually make a functioning VR app or game. If you're interested in working on more VR projects, check out the Unity VR samples. They have a bunch of reusable code that you can use for your VR projects. Good luck!

About this author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.


The best tools to improve your development workflow

Antonio Cucciniello
01 Nov 2017
5 min read
For those web developers out there who are looking for tools that can help them become more productive and efficient, this list is for you. I will go through some of the basic tools you can take a look at in order to become a better web developer.

Text editors

First off, where do we write our code? That's right, in text editors. You need a text editor that you trust and one that you love. I have used a couple that I recommend you check out. Currently I am using Atom. It is a text editor that is minimal but can have plenty of features added to it. You may install various plugins that make things easier, or connect it to things like GitHub for your project's source control needs. Another text editor I use is Sublime Text. It is extremely similar to Atom, and it is actually a little faster to open. The only issue with this editor is that when you are using the free version, it asks you to donate or buy it every few times you save a file. This can get annoying but, nonetheless, it's still a very powerful editor. The main key here is to find something that you love. Stick with it, and learn the ins and outs of it. You want to become a pro with it. Knowing your text editor inside and out will greatly increase your productivity.

Source control

High on the list of must-have tools for web development, or even just development in general, is a form of source control. You need a place to back up your code and save it in multiple states. It also allows for better and easier collaboration between multiple people working in different branches. I recommend using Git and GitHub. The user interface is very friendly and the two integrate seamlessly. I have also used Subversion and AWS CodeCommit, but these did not leave as great an impression as GitHub did. Source control is very important, so make sure that you have selected it and use it for every project. It also doubles as a place to display your code over time.

Command line interfaces

This is not a specific tool per se, because it is already part of your computer. If you have a Mac or Linux machine, it is the terminal; if you have Windows, it is the command shell. I recommend learning as many of the commands as you can. You can do things like create files, create directories, delete files, edit files, and so much more. It allows you to be so much more productive. Think of this as your go-to tool for plenty of the things you end up doing. Pro tip for Windows users: I personally do not like the Windows Command Prompt nearly as much as the Unix one. Look into using Cygwin, which allows you to use Unix commands in a Windows command prompt.

Debugging

If you are doing anything in web development, you need to be able to debug your code. No one writes perfect code, and you will inevitably spend time fixing bugs. In order to reduce the time spent on bugs, you should look into a tool that can help you with debugging. Personally, if you are using Google Chrome, I suggest using Chrome DevTools. It allows you to set breakpoints, edit code, and manipulate page elements, as well as check all the CSS properties of the different HTML elements on the page. It is extremely powerful and can help you see what is happening in real time on the webpage while you debug.

HTTP client

I believe you need something like Postman to test HTTP requests against web services. It makes it extremely easy to test and create APIs. You can make all different types of requests, pass in whatever headers you want, and even see what the response looks like!
This is important for any developer who needs to make API requests.

So, there you have it. These are the best tools for web development, in my opinion. I hope this list has helped you get started in improving your web development workflow. Always be on the lookout for ways to improve your toolset as time goes on. You can always improve, so why not let these tools make it easier for you?

About the Author

Antonio Cucciniello is a software engineer from New Jersey with a background in C, C++, and JavaScript (Node.js). His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, and reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and on GitHub here: https://github.com/acucciniello.


Tips and tricks for troubleshooting and flying drones safely

Vijin Boricha
17 May 2018
5 min read
Your drone may develop problems when you fly it regularly, or when you have just started piloting one; this can come down to maintenance or accidents. So we need to troubleshoot our drone and fly it safely. In this article, we will look at a few common drone troubleshooting tips. This is an excerpt from Building Smart Drones with ESP8266 and Arduino, written by Syed Omar Faruk Towaha.

1. My drone tries to flip, or flips when I turn it on

This problem may occur for several reasons. Check the following:

- Have you calibrated your ESCs?
- Are your propellers balanced?
- Have you configured the radio properly?
- Are your ArduPilot's sensors working properly?
- Have you checked all the wire connections?
- Have you calibrated the drone frame?
- Have you added the wrong propellers to the wrong motors (for example, clockwise propellers to anticlockwise motors)?

I hope you can solve the problem now.

2. My motors spin but the drone does not fly or take off

This happens because the motors are not producing enough thrust to lift the drone. From the parameter list of the Mission Planner, change THR_MAX. Make sure it is between 800 and 1000. If THR_MAX is 800 and the drone still does not take off, change the parameter to above 800 and try flying again.

3. My drone moves in any direction

The drone drifts even though the transmitter stick is centered. To solve the problem, you must match the RC channel 1 and 2 values to the PWM values displayed on the Mission Planner. If they are not the same, this error will happen. To match them, open your Mission Planner, connect it via telemetry, go to the Advanced Parameter List, and change HS1_TRIM and HS2_TRIM. With the roll and pitch sticks centered, the RC1 and RC2 channels should show the same values you wrote for the HS1_TRIM and HS2_TRIM parameters. If the values are different, then calibrate your radio. The HS1 trim value must match the live roll value with the stick centered, and the HS2 trim value must match the pitch value with the stick centered. You must not use the radio trim for yaw. Make sure the center of gravity of the copter is dead center.

4. When I pitch or roll, the drone yaws

This can happen for several reasons. For brushless AC motors, you need to swap any two of the three wires connected to the ESC. This will change the motor's spinning direction. For brushless DC motors, you need to check whether the propellers are mounted properly, because brushless DC motors do not spin in the wrong direction unless the connection is faulty. Also check that the drone's booms are not twisted. Calibrating the compass and magnetometer will also help if there is no hardware problem.

5. GPS communication is lost

This happens because of a bad GPS signal. One thing you can do before launching the drone is find a spot where the GPS signal is strong, so that the drone can be set to return to home or return to launch if radio communication is lost. Before flying the drone, you may keep it disarmed for a couple of minutes in a spot with a strong GPS signal.

6. The radio system fails

To solve this issue, we can use a radio amplifier. Using a radio amplifier increases the signal strength. You can experience radio failure when there is even a minor obstruction between the drone and the receiver.

7. My drone's battery life is too short

When a drone is not in use, we should store the battery at room temperature with low humidity. High temperature and moisture will damage the chemical elements inside the battery cells.
This will result in a shorter battery life. For a LiPo battery, I would suggest using a balance charger.

8. Diagnosing drone problems using logs

For our ArduPilot, we used telemetry to connect the drone to our Mission Planner, so after the flight we can analyze the telemetry logs, known as tlogs. There is SiK radio telemetry, Bluetooth telemetry, XBee, and so on. Before going any further, let's see where we can find the data files and how we can download them:

- On the home screen, you will find the telemetry logs below the Flight Data panel. From there you can choose the graph type after loading the log.
- When you load the logs, you will be redirected to a folder where the tlogs are stored. Click any of them to load it. You can sort them by time so that you can be sure which data or log you need to analyze.
- You can also export your tlog data to a KML file for further analysis.

You can also see the 3D data of the flight path from the tlog files:

- Open the Mission Planner's Flight Data screen.
- Click on the Telemetry Log tab and click on the button marked Tlog>KML or Graph. A new window will appear.
- Click on the Create KML + GPX button. A .kmz and a .kml file will be created where the .tlog files are saved.
- In Google Earth, just drag and drop the .kmz file and you will see the 3D flight path.

You can see graphs of the tlog files by clicking Graph Log on the screen that appears after the Tlog>KML or Graph button has been clicked. From there you need to select the flight tlog, and a graph screen will appear. Check the data you need on that screen and you will see the graphs.

We have learned to diagnose drone issues through logs, and also to analyze graphs of that data to troubleshoot flight problems. To learn more about radio control calibration problems and before/after flight safety, see the book Building Smart Drones with ESP8266 and Arduino.

Neuroevolution: A step towards the Thinking Machine

Amarabha Banerjee
16 Oct 2017
9 min read
“I propose to consider the question - Can machines think?” - Alan Turing

The goal of AI research has always remained the same: create a machine that has human-like decision-making capabilities based on available information. This includes the machine's ability to analyze and process huge amounts of data and then draw meaningful inferences from it. Machine learning, deep learning, and other old and new paradigms in AI research are all attempts at imparting complex decision-making capabilities to machines or systems. Alan Turing's famous test for AI has set the standard over the years for what qualifies as a smart AI, that is, a thinking machine. The imitation game is about an AI or bot interacting with a human anonymously, in such a way that the human can't tell it's a machine. This not-so-trivial test has seen many adaptations over the years, like the modern-day Tokyo test. These tests set challenging boundaries that machines must cross to be considered capable of possessing intelligence. Neuroevolution, a theory a few decades old, remodeled in a modern format with the help of neural and deep neural networks, promises to challenge these boundaries and even break them. With neuroevolution, machines aim to solve complex problems on their own with satisfactory levels of accuracy, even though they do not know in advance how to achieve those results.

Neuroevolution: The Essence

“If a wild animal habitually performs some useless activity, natural selection will favor rival individuals who instead devote time to surviving and reproducing...Ruthless utilitarianism trumps, even if it doesn’t always seem that way.” - Richard Dawkins

This is the essence of neuroevolution, but the process itself is not as simple. Just like human evolution, in the beginning a set of algorithms works on a problem. The algorithms that show an inclination to solve the problem in the right way are selected for the next stage. They then undergo random minor mutations, that is, small logical changes in the inherent algorithm structure. Next, we check whether these changes enable the algorithms to achieve the same result with better accuracy or efficiency. The successful ones then move on to the next stage, with further mutations introduced. This is similar to how nature did the sorting for us, and humans evolved from a natural need to survive in unfamiliar situations. Since the concept uses neural networks, it has come to be known as neuroevolution. Neuroevolution, in the simplest terms, is the process of "descent with modification" by which machines/systems evolve and get better at solving the problems they were built for.

Backpropagation to DNN: The Evolution

Neural networks are made up of nodes. These nodes function like neurons in the human brain, which receive a set of inputs and generate a response based on the type, intensity, frequency, and so on of the stimuli. An algorithm can be viewed as a node. With backpropagation, the algorithm is modified in an iterative manner, where the error generated after each pass is fed back into the system. The algorithms (nodes) responsible for higher error contributions are identified and assigned less weight in the next pass. Thus, backpropagation is a way to assign appropriate weights to nodes by calculating the error contribution of each individual node. These nodes, when combined in different layers, form the structure of deep neural networks.
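To make the idea concrete, here is a minimal sketch of what a single node computes: a weighted sum of its inputs passed through an activation function. This is an illustrative example only; the sigmoid activation and the function and variable names are assumptions, not code from any particular framework.

import math

def node_output(inputs, weights, bias):
    # weighted sum of the incoming signals
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # squash the result into a response between 0 and 1 (assumed sigmoid activation)
    return 1.0 / (1.0 + math.exp(-total))

# a node with three inputs and hand-picked example weights
print(node_output([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1))

Backpropagation, as described above, is what adjusts the weights and bias of many such nodes according to how much each one contributed to the overall error.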
Deep neural networks have separate input and output layers, with a middle layer of hidden nodes that forms the core of the DNN. This hidden layer consists of multiple nodes like the one described above. As before, in each iteration the weights of the nodes are adjusted based on their accuracy. The number of iterations is a factor that varies for each DNN. As explained earlier, the system continues to improve on its own, without any external stimuli.

Now, where have we seen this before? Of course, this looks a lot like a simplified, miniature version of evolution! Unfit nodes are culled by reducing the weight they carry in the overall output, and the ones with favorable results are encouraged, just like in natural selection. The only thing missing is mutation and the ability to process mutation. This is where we introduce mutations into the successful algorithms and let them evolve on their own. Backpropagation in DNNs doesn't change the algorithm or its approach; it merely increases or decreases the algorithm's overall contribution to the desired result.

Forcing random mutations of neural and deep neural networks, and then letting these mutations take shape as the networks together try to solve a given problem, seems pretty straightforward. The point where everything starts getting messy is when different layers or neural networks start solving the given problem in their own pre-defined ways. One of two things may then happen:

- The neural networks contradict each other and stall the overall problem-solving process. The system as such cannot take any decision and becomes dormant.
- The neural networks reach some sort of agreement on a decision. The decision itself might be correct or incorrect.

Both scenarios present us with dilemmas: how to restart a stalled process, and how to achieve better decision-making capability. The solution to both situations lies in enabling the DNNs to rectify themselves, first by choosing the correct algorithms, and then by mutating them with the intention of letting them evolve and reach decisions of greater accuracy.

Here's a look at some popular implementations of this idea.

Neuroevolution in flesh and blood

Cutting-edge AI research giants like OpenAI, backed by Elon Musk, and Google DeepMind have taken the concept of neuroevolution and applied it to a bunch of deep neural networks. Both aim to evolve these algorithms in a way that lets the smarter ones survive and eventually create better and faster models and systems. Their approaches are, however, starkly different.

The Google implementation

Google's way is simple: it takes a number of algorithms, divides them into groups, and assigns one particular task to all of them. The algorithms that fare better at solving these problems are chosen for the next stage, much like the reward and punishment system in reinforcement learning. However, the difference here is that the faster algorithms are not just chosen for the next step; their models and parameters are also tweaked slightly, which is our way of introducing a mutation into the successful algorithms. These minor mutations then play out as the modified algorithms try to solve the given problem. Again, the better ones remain and the rest are culled. This way, the algorithms themselves find a way to perform better and better until they are reasonably close to the desired result; a sketch of this select-and-mutate loop follows.
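Here is a minimal sketch of that select-and-mutate loop in Python. It is not Google's or DeepMind's actual code; the toy fitness function, the mutation scale, and the population sizes are assumptions chosen only to keep the example self-contained and runnable.

import random

# toy stand-in for "a task": candidates are parameter lists, and fitness
# measures how close they get to an (assumed) target setting
TARGET = [0.3, -1.2, 2.5, 0.0]

def fitness(params):
    # higher is better: negative squared error against the target
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def mutate(params, scale=0.1):
    # the "minor mutation": a small random tweak to each parameter
    return [p + random.gauss(0, scale) for p in params]

# start from a random population of candidate algorithms/models
population = [[random.uniform(-3, 3) for _ in TARGET] for _ in range(50)]

for generation in range(200):
    # selection: keep the better-performing half
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    # mutation: each survivor spawns a slightly tweaked offspring
    population = survivors + [mutate(p) for p in survivors]

best = max(population, key=fitness)
print("best fitness after evolving:", fitness(best))

The same skeleton, with game scores as the fitness signal and a central master collecting them, is roughly the shape of the OpenAI approach described below.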
The most important advantage of this process is that the algorithms keep track of their evolution as they get smarter. A major limitation of Google's approach is that the time taken to perform these complex computations is very high, so results take time to show. Also, once the mutation kicks in, the algorithms' behavior is not controlled externally; quite literally, they can go berserk because of the resulting mutation, which means the process can fail even at an advanced stage.

The OpenAI implementation

Let's contrast this with OpenAI's master-worker approach to neuroevolution. OpenAI used a set of nearly 1,440 algorithms to play the game of Atari and submit their scores to a master algorithm. The algorithms with better scores were then chosen, given a mutation, and put back into the same process. In more abstract terms, the OpenAI method looks like this: a set of worker algorithms is given a certain complex problem to solve. The best scores are passed on to the master algorithm. The better algorithms are then mutated and set to perform the same tasks. The scores are again recorded and passed on to the master algorithm. This happens over multiple iterations. The master algorithm progressively eliminates the chance of failure, since it knows which algorithms to employ for a given problem. However, it does not know the road to success, as it has access only to the final scores and not to how those scores were achieved. The advantage of this approach is that better results are guaranteed, and there are no cases of decision conflict or the system stalling. The flip side is that the system only knows its way through the given problem; all the effort spent evolving the system into a better one will have to be repeated for a similar but different problem. The process is therefore cumbersome and lengthy.

The Future with Neuroevolution

Human evolution has taken millions of years to reach where we are today. Evolving AIs and enabling them to pass the Turing test, or to further make them smart enough to pass a university entrance exam, will require significant improvement over the current crop of AI. Amazon's Alexa and Apple's Siri are mere digital assistants. If we want AI-driven smart systems with seamless integration of AI into our everyday life, algorithms with evolutionary characteristics are a must. Neuroevolution might hold the secret to inventing smart AIs that can ultimately propel human civilization to greater heights of development and advancement.

“It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers...They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control." - Alan Turing


Hands on with Kubernetes

Ryan Richard
22 Jun 2015
6 min read
In February I wrote a high-level overview of the primary Kubernetes features. In this blog post, we'll actively use all of these features to deploy a simple two-tier application inside of a Kubernetes cluster. I highly recommend reading the intro blog before getting started.

Setup

The easiest way to deploy a cluster is to use Google Container Engine, which is available on your Google Compute Engine account. If you don't have an account, you may use one of the Getting Started guides available in the official GitHub repository. One of the great things about Kubernetes is that it functions almost identically regardless of where it's deployed, with the exception of some cloud provider integrations. I've created a small test cluster on GCE, which resulted in three instances being created. I've also added my public SSH key to the master node so that I can log in via SSH and use the kubectl command locally. kubectl is the CLI for Kubernetes, and you can also install it locally on your workstation if you prefer.

My demo application is a small Python-based app that uses Redis as a backend. The source is available here. It expects Docker-style environment variables to point to the Redis server, and it will purposely throw a 5XX status code if there are issues reaching the database.

Walkthrough

First, we're going to change the Kubernetes configuration to allow privileged containers. This is only being done for demo purposes and shouldn't be used in a production environment if you can avoid it. It is needed for the logging container we'll be deploying with the application.

SSH into the master instance and run the following commands to update the Salt configuration:

sudo sed -i 's/false/true/' /srv/pillar/privilege.sls
sudo salt '*' saltutil.refresh_pillar
sudo salt-minion

Reboot your non-master nodes to force the Salt changes. Once the nodes are back online, create a redis-master.yaml file on the master with the following content:

id: redis-master
kind: Pod
apiVersion: v1beta1
labels:
  name: redis-master
desiredState:
  manifest:
    version: v1beta1
    id: redis-master
    containers:
      - name: redis-master
        image: dockerfile/redis
        ports:
          - containerPort: 6379

I'm using a Pod as opposed to a replicationController since this is a stateful service, and it would not be appropriate to run multiple Redis nodes in this scenario. Once ready, instruct Kubernetes to deploy the container:

kubectl create -f redis-master.yaml
kubectl get pods

Create a redis-service.yaml with the following:

kind: Service
apiVersion: v1beta1
id: redis
port: 6379
selector:
  name: redis-master
containerPort: 6379

kubectl create -f redis-service.yaml
kubectl get services

Notice that I'm hardcoding the service port to match the standard Redis port of 6379. Making these match isn't required, as long as the containerPort is correct. Under the hood, creating a service causes a new iptables entry to be created on each node. The entries automatically redirect traffic to a local port where kube-proxy is listening. kube-proxy is in turn aware of where my redis-master container is running and will proxy connections for me. To prove this works, I'll connect to Redis via my local address (127.0.0.1:60863), which does not have Redis running, and I'll get a proper connection to my database, which is on another machine.

Seeing as that works, let's get back to the point at hand and deploy our application.
Write a demoapp.yaml file with the following content:

id: frontend-controller
apiVersion: v1beta1
kind: ReplicationController
labels:
  name: frontend-controller
desiredState:
  replicas: 2
  replicaSelector:
    name: demoapp
  podTemplate:
    labels:
      name: demoapp
    desiredState:
      manifest:
        id: demoapp
        version: v1beta3
        containers:
          - name: frontend
            image: doublerr/redis-demo
            ports:
              - containerPort: 8888
                hostPort: 80
          - name: logentries
            privileged: true
            command:
              - "--no-stats"
              - "-l"
              - "<log token>"
              - "-j"
              - "-t"
              - "<account token>"
              - "-a app=demoapp"
            image: logentries/docker-logentries
            volumeMounts:
              - mountPath: /var/run/docker.sock
                name: dockersock
                readOnly: true
        volumes:
          - name: dockersock
            source:
              hostDir:
                path: /var/run/docker.sock

In the above description, I'm grouping two containers based on my redis-demo image and the logentries image respectively. I wanted to show the idea of sidecar containers, which are containers deployed alongside the primary container and whose job is to support it. In the above case, the sidecar forwards logs to my logentries.com account, tagged with the name of my app. If you're following along, you can sign up for a free Logentries account to test this out. You'll need to create a new log and retrieve the log token and account token first. You can then replace <log token> and <account token> in the yaml file with your values.

Deploy the application:

kubectl create -f demoapp.yaml
kubectl get pods

If your cloud provider is blocking port 80 traffic, make sure to allow it directly to your nodes, and you should be able to see the app running in a browser once the pod status is "Running".

Co-locating Containers

Co-locating containers is a powerful concept worth spending some time talking about. Since Kubernetes guarantees that co-located containers run together, my primary container doesn't need to be aware of anything beyond running the application. In this case, logging is dealt with separately. If I want to switch logging services, I just need to redeploy the app with a new sidecar container that is able to send the logs elsewhere. Imagine doing this for monitoring, application content updates, and so on. You can really see the power of co-locating containers.

On a side note, the logentries image isn't perfectly suited to this methodology. It's designed such that you run one of these containers per Docker host, and it will forward all container logs upstream. It also requires access to the Docker socket on the host. A design better suited to the Kubernetes paradigm would be a container that only collects STDOUT and STDERR for the container it's attached to. The logentries image works for this proof of concept, though, and I can see errors in my account.

In closing, Kubernetes is fun to deploy applications into, especially if you start thinking about how best to leverage grouped containers. Most stateless applications will want to use a ReplicationController instead of a single Pod, and Services help tie everything together. For more Docker tutorials, insight and analysis, visit our dedicated Docker page.

About the Author

Ryan Richard is a systems architect at Rackspace with a background in automation and OpenStack. His primary role revolves around research and development of new technologies. He added the initial support for the Rackspace Cloud to the Kubernetes codebase. He can be reached at @rackninja on Twitter.


Virtual Reality for Developers: Cardboard, Gear VR, Rift, and Vive

Casey Borders
22 Jul 2016
5 min read
Right now is a very exciting time in the virtual reality space! We've already seen what mobile VR platforms have to offer with Google Cardboard and Samsung Gear VR. Now we're close to a full commercial release of the two biggest desktop VR contenders. Oculus started taking pre-orders in January for the Rift and plans to start shipping by the end of March. HTC opened pre-orders on March 1st for the Vive, which will ship in April.

Both companies are working to make it as easy as possible for developers to build games for their platforms. Oculus has offered Unity integration since the beginning with its first development kit (DK1), but stopped supporting OS X after its acquisition by Facebook, leaving the Mac runtime at version v0.5.0.1-beta. The Windows runtime is up to version v0.8.0.0-beta. You can download both of these, as well as a bunch of other tools, from their developer download site. HTC has teamed up with Valve, who is writing all of the software for the Vive, and it was announced at the Vision AR/VR Summit in February that official Unity integration is coming soon.

Adding basic VR support to your game is amazingly easy with the Oculus Unity package. From the developer download page, look under the Engine Integration heading and download the "Oculus Utilities for Unity 5" bundle. Importing that into your Unity project brings in everything you need to integrate VR into your game, as well as some sample scenes that you can use for reference while you're getting started. Looking under OVR > Prefabs, you'll find an OVRCameraRig prefab that works as a drop-in replacement for the standard Unity camera. This prefab handles retrieving the sensor data from the head-mounted display (HMD) and rendering the stereoscopic output. This lets you go from downloading the Unity package to viewing your game in the Oculus Rift in just a few minutes!

Virtual Reality Games

Virtual reality opens up a whole new level of immersion in games. It can make the player truly feel like they are in another world. It also brings with it some unique obstacles that you'll need to consider when working on a VR game.

The first and most obvious thing to consider is that the player's vision is going to be completely blocked by the head-mounted display. This means that you can't ask the player to type anything, and it's going to be extremely difficult and frustrating for them to use a large number of key bindings. Game controllers are a great way to get around this, since they have a limited number of buttons and are very tactile. If you are going to target PCs, then supporting a mouse and keyboard is a must; just try to keep the inputs to a reasonable number.

The User Interface

The second issue is the user interface. Screen-space UI is jarring in VR and can really damage a player's sense of immersion. Also, if it blocks out a large portion of the player's field of view, it can cause them to become nauseous, since it will remain static as they move their head around. A better way to handle this is to build the UI into your world. If you want to show the user how much ammo they have left, build a display into the gun. If your game requires users to follow a set path, try putting up signs along the way or painting the directions on the road. If you really need to keep some kind of information visible all the time, try to make it fit the theme of your world. For example, maybe your player has a helmet that projects a HUD like Tony Stark's Iron Man suit.
Player Movement

The last big thing to keep in mind when making VR-friendly games is player movement. The current hardware offerings allow for two levels of player movement. Google Cardboard, Samsung Gear VR, and the Oculus Rift allow for mostly stationary interaction. The head-mounted display will give you its yaw, pitch, and roll, but the player will remain mostly still. The Rift DK2 and the consumer version allow for some range of motion, as long as the head-mounted display stays within the field of view of its IR camera. This lets players lean in and get a closer look at things, but not much else. To allow the player to explore your game world, you'll still need to implement the same type of movement controls that you would for a regular non-VR game.

The HTC Vive offers full room-scale VR, which means that the player has a volume within which they have complete freedom to move around. The position and orientation of the head-mounted display and the controllers are given to you, so you can see where the player is and what they are trying to interact with. This comes with its own interesting problems, since the game world can be larger than the player's play space, and each person is going to have a different amount of room that they can dedicate to VR, so the volumes are going to be different for each player.

Above everything else, though, virtual reality is just a whole lot of fun! For developers, it offers a lot of new and interesting challenges, and for players, it allows you to explore worlds like you never could before. And with VR setups ranging from a few dollars of cardboard to an entire room-scale rig, there's something out there to satisfy just about anybody!

About the author

Casey Borders is an avid gamer and VR fan with over 10 years of experience in graphics development. He has worked on everything from military simulation to educational VR/AR experiences to game development. More recently, he has focused on mobile development.


Common Kafka Addons

Timothy Chen
11 Nov 2014
5 min read
Apache Kafka is one of the most popular choices for a durable, high-throughput messaging system. Kafka's protocol doesn't conform to any queue-agnostic standard protocol (such as AMQP), and it provides concepts and semantics that are similar to, but still different from, other queuing systems. In this post I will cover some common Kafka tools and add-ons that you should consider employing when using Kafka as part of your system design.

Data mirroring

Most large-scale production systems deploy to multiple data centers (or availability zones/regions in the cloud) to either avoid a SPOF (single point of failure) when a whole data center is brought down, or to reduce latency by serving systems closer to customers in different geo-locations. Having all Kafka clients read across data centers to access data as needed is quite expensive in terms of network latency, and it affects service performance. For the best throughput and latency, all services should ideally communicate with a Kafka cluster within the same data center. Therefore, the Kafka team built a tool called MirrorMaker, which is also employed in production at LinkedIn. MirrorMaker is an installed daemon that sets up a configured number of replication streams pulling from the source cluster into the destination cluster; it is able to recover from failures and records its state in ZooKeeper. With MirrorMaker you can set up Kafka clients that read/write from clusters in the same data center, while data from brokers in other clusters is replicated and polled asynchronously.

Auditing

Kafka often serves as a pub/sub queue between a frontend collecting service and a number of downstream services, including batching frameworks, logging services, and event processing systems. Kafka works really well with various downstream services because it holds no state for each client (which is impossible with AMQP), and it allows each consumer to consume data at different offsets of the same partition with high performance. Also, systems typically have not just one Kafka cluster but several. These clusters act as a pipeline, where a consumer of one Kafka cluster feeds into, say, a recommendation system that writes its output into another set of Kafka clusters.

One common need for a data pipeline is logging/auditing: ensuring that all of the data you produce at the source is reliably delivered to each stage and, if it is not, knowing what percentage of the data is missing. Kafka doesn't provide this functionality out of the box, but it can be added using Kafka directly. One implementation is to give each stage of your pipeline an ID and, in the producer code at each stage, write out the sum of the number of records in a configurable window, along with the stage ID, into a specific topic (that is, counts) at each stage of the pipeline.
For example, with a Kafka pipeline that consists of stages A -> B -> C, you could imagine simple code such as the following running at each stage to write out counts at a configured window:

producer.send(topic, messages);
sum += messages.count();
long now = System.currentTimeMillis();
if (now - lastAuditedAt >= WINDOW_MS) {
    lastAuditedAt = now;
    // Report this stage's running record count to the audit topic.
    auditing.send("counts", new Message(new AuditMessage(stageId, sum, lastAuditedAt).toBytes()));
}

At the very bottom of the pipeline, the counts topic holds the counts from every stage, and a custom consumer can pull in all of the count messages, partition them by stage, and compare the sums. The results at each window can also be graphed to show the number of messages flowing through the system. This is what is done at LinkedIn to audit their production pipeline; it has been suggested for a while that this be incorporated into Kafka itself, but that hasn't happened yet.

Topic partition assignments

Kafka is highly available, since it offers replication and lets users define the number of acknowledgments and the broker assignment of each partition's replicas. By default, if no assignment is given, replicas are assigned randomly. Random assignment might not be suitable, especially if you have stricter requirements on where those replicas should be placed. For example, if you are hosting your data in the cloud and want to withstand an availability zone failure, spreading each partition's replicas across more than one AZ is a good idea. Another example is rack awareness in your own data center. You can certainly build an extra tool that generates a specific replica assignment based on this kind of information; a minimal sketch of such a tool follows after this article.

Conclusion

The Kafka tools described in this post are some of the common tools and features that companies in the community employ, but depending on your system there may be other needs to consider. The best way to see whether someone has already implemented a similar feature as open source is to email the mailing list or ask on IRC (freenode #kafka).

About The Author

Timothy Chen is a distributed systems engineer at Mesosphere Inc. and The Apache Software Foundation. His interests include open source technologies, big data, and large-scale distributed systems. He can be found on GitHub as tnachen.
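As a follow-up to the topic partition assignment discussion above, here is a minimal, illustrative sketch of such a tool. The broker-to-rack mapping, topic name, and partition counts are hypothetical placeholders, and the assignment logic is deliberately simple; the output is a JSON document in the format accepted by Kafka's kafka-reassign-partitions.sh tool (via its --reassignment-json-file option), so treat this as a starting point rather than a production-ready strategy.

import json

# Hypothetical broker -> rack (or availability zone) layout; adjust to your cluster.
BROKER_RACKS = {1: "rack-a", 2: "rack-a", 3: "rack-b", 4: "rack-b", 5: "rack-c", 6: "rack-c"}
TOPIC = "events"          # hypothetical topic
NUM_PARTITIONS = 6
REPLICATION_FACTOR = 3    # should not exceed the number of racks for full rack-spreading

def rack_aware_assignment():
    # Group brokers by rack so each partition's replicas can span distinct racks.
    racks = {}
    for broker, rack in BROKER_RACKS.items():
        racks.setdefault(rack, []).append(broker)
    rack_ids = sorted(racks)
    partitions = []
    for p in range(NUM_PARTITIONS):
        replicas = []
        for r in range(REPLICATION_FACTOR):
            # Walk the racks round-robin, then pick a broker within the chosen rack.
            brokers_in_rack = racks[rack_ids[(p + r) % len(rack_ids)]]
            replicas.append(brokers_in_rack[p % len(brokers_in_rack)])
        partitions.append({"topic": TOPIC, "partition": p, "replicas": replicas})
    return {"version": 1, "partitions": partitions}

if __name__ == "__main__":
    # Feed this JSON to kafka-reassign-partitions.sh via --reassignment-json-file.
    print(json.dumps(rack_aware_assignment(), indent=2))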

The Future is Node

Owen Roberts
09 Sep 2016
2 min read
In the last few years we've seen Node.js explode onto the tech scene and go from strength to strength. In fact, the rate of adoption has been so great that the Node Foundation has mentioned that in the last year alone the number of developers using the server-side platform has grown by 100%, reaching a staggering 3.5 million users. Early adopters of Node have included Netflix, PayPal, and even Walmart.

The Node fanbase is constantly building new Node Package Manager (npm) packages to share among themselves. With React and Angular offering the perfect accompaniment to Node in modern web applications, along with a host of JavaScript tools like Gulp and Grunt that build on Node for easier development, Node has become an essential tool for the modern JavaScript developer, one that shows no signs of slowing down or being replaced.

Whether Node will still be around a decade from now remains to be seen, but with a hungry user base, thousands of user-created npm packages, and full-stack JavaScript becoming the cornerstone of most web applications, it's definitely not going anywhere anytime soon. For now, the future really is Node.

Want to get started learning Node? Or perhaps you're looking to give your skills a boost to ensure you stay on top? We've just released Node.js Blueprints, and if you want to see the true breadth of possibilities that Node offers, there's no better way to discover how to apply this framework in new and unexpected ways. But why wait? With a free Mapt account you can read the first 2 chapters for nothing at all! When you're ready to continue learning just what Node can do, sign up to Mapt to get unlimited access to every chapter in the book, along with the rest of our entire video and eBook range, at just $29.99 per month!

Progression of a Maker

Travis Ripley
30 Jun 2014
6 min read
There's a natural path for the education of a maker that takes place within techshops and makerspaces. It begins in the world of tools you may already know, like handheld tools or power tools, and quickly creeps into an unknown world of machines suited to bring any desire to fruition. At first, taking classes may seem like a huge investment, but the payback you receive from the knowledge is priceless. I can't even put a price on the payback I've earned from developing these maker skills, but I can tell you that the number of opportunities is overflowing. I know it doesn't sound like much, but the opportunities to grow and learn also increase your connections, and that's what helps you to create an enterprise.

Your options for education all depend upon what is available to you locally. As the ideology of technological dissonance grows culturally, it is influencing advances in open source and open hardware, and it has had a big impact on the global trend of creating incubators, startups, techshops, and makerspaces.

When I first began my education in the makerspace, I was worried that I'd never be able to learn it all. I started small by reading blogs and magazines, and eventually I decided to take a chance and sign up for a membership at our local makerspace: http://www.Makerplace.com. There I was given access to a variety of tools that would be too bulky and loud for my house and workspace, not to mention extremely out of my price range.

When I first started at the Makerplace, I was overwhelmed by the amount of technology available to me, and I was daunted by the degree of difficulty it would take to even use these machines. But you can only learn so much from videos and books; the real trial begins when you put that knowledge to work with hands-on experience. I was ready to get some experience under my belt.

The degree of difficulty varies, obviously, with experience and how well one grasps the concepts. I started by taking a class that offers a brief introduction to a topic and some guidance from an expert. After that, you learn on your own and you will break things, such as materials, end mills, electronic components, and lots of consumables (I do not condone breaking fingers, body parts, or huge expensive tools). This stage is key, because once you understand what can and will go wrong, you'll undeniably want more training from an expert. And as the saying goes, "practice makes perfect," which is the key to mastery.

As you progress, it will become apparent which classes need to come next. The best place to start is learning the software necessary to develop your tangible goods. For those of you who are interested, I will list the tools in the order I learned them, starting from ground zero.

I suggest the first tools to learn are the laser, waterjet, and plasma CNC cutters, as they can precisely cut shapes out of sheet-type material. The laser is the easiest to learn, and can be used not only to cut but also to engrave wood, acrylics, metal, and other sheet materials. Most likely the makerspaces and hackerspaces you have access to will have one available. The waterjet and plasma CNC machines will depend upon the workshop, since they require more room, along with vapor and fume containment equipment. The next set of tools, which come with a bigger learning curve, are the multi-axis CNC mills, routers, and the conventional mill and lathe.
CNC (Computer Numerical Control) is the automation of machine tools. These processes of controlled material removal are today collectively known as subtractive manufacturing: you take unfinished workpieces made of materials such as metals, plastics, ceramics, and wood and create 2D/3D shapes, which can be made into tools or finished as tangible objects. The CNC routers follow the same process, but they work on sheet materials such as plywood, MDF, and foam.

The first time I took a tour of the Makerplace, these machines looked intimidating. They were big, they were loud, and I had no clue what they were used for. It wasn't until I gained further insight into manufacturing that I understood how valuable these tools are. The learning curve is steeper here, since there are multiple moving parts and operations happening at once. I took the CNC fundamentals class, which was required before operating any of these machines, and then completed the conventional mill and lathe classes before moving on to the CNC machines. I suggest the steps in this order, since understanding the conventional process plays an integral role in how you design your parts to be machined on the CNC machines. I found out the hard way why end mills are called consumables, as I scrapped many parts and broke many end mills. This is a great skill to understand, as it directly complements additive processes such as 3D printing.

Once you have a grasp of the basics of automated machinery, the next step is to learn the welding and plasma cutting equipment and the metal forming tools. This skill opens many possibilities and opportunities to makers, such as making and customizing frames, chassis, and jigs. Along the way you will also learn how to use the metal forming tools to create and craft three-dimensional shapes from thin-gauge sheet metal.

And last but not least, depending on how far you want to take your learning, there are the tools driven by large air compressors, such as bead blasters and paint sprayers, which require constant pressure and sit alongside the metal forming category. There is also high-temperature equipment, such as furnaces, ovens, and acrylic sheet benders, and my personal new favorite, the vacuum formers, which bend and form plastic into complex shapes.

With all of these new skills under my belt, a network of like-minded individuals, and a passion for knowledge in manufacturing and design, I was able to produce and create products at a pro level, which totally changed my career. Whatever your curious intentions may be, I encourage you to take on a new challenge, such as learning manufacturing skills, and you will be guaranteed a transformative look at the world around you, from consumer to maker.

About the Author

Travis Ripley is a designer/developer. He enjoys developing products with composites, woods, steel, and aluminum, and has been immersed in the Maker community for over two years. He also teaches game development at the University of California, Los Angeles. He can be found @travezripley.

5 data science tools that will matter in 2018

Richard Gall
12 Dec 2017
3 min read
We know your time is valuable, so let's focus on what matters. We've written about the trends and issues that are going to matter in data science, but here you can find 5 data science tools that you need to pay attention to in 2018. Read our 5 things that matter in data science in 2018 here.

1. TensorFlow

Google's TensorFlow has been one of the biggest library hits of 2017. It's arguably done a lot to make machine learning more accessible than ever before. That means more people actually building machine learning and deep learning algorithms, and the technology moving beyond the domain of data professionals and into other fields. So, if TensorFlow has passed you by, we recommend you spend some time exploring it. It might just give your skill set the boost you're looking for. Explore TensorFlow content here.

2. Jupyter

Jupyter isn't a new tool, sure. But it's so crucial to the way data science is done that its importance can't be overstated, and that importance will only grow as pressure is placed on data scientists and analysts to communicate and share data in ways that empower stakeholders in a diverse range of roles and departments. It's also worth mentioning its relationship with Python: we've seen Python go from strength to strength throughout 2017, showing no signs of letting up, and the close relationship between the two will only serve to make Jupyter more popular across the data science world. Discover Jupyter eBooks and videos here.

3. Keras

In a year when deep learning has captured the imagination, it makes sense to include both libraries helping to power it. It's a close call between Keras and TensorFlow as to which deep learning framework is 'better' - ultimately, like everything, it's about what you're trying to do. This post explores the difference between Keras and TensorFlow very well - the conclusion is ultimately that while TensorFlow offers more 'control', Keras is the library you want if you simply need to get up and running. Both libraries have had a huge impact in 2017, and we're only going to be seeing more of them in 2018. Learn Keras. Read Deep Learning with Keras.

4. Auto SkLearn

Automated machine learning is going to become incredibly important in 2018. As pressure mounts on engineers and analysts to do more with less, tools like Auto SkLearn will be vital in reducing some of the 'manual labour' of algorithm selection and tuning.

5. Dask

This one might be a little unexpected. We know just how popular Apache Spark is when it comes to distributed and parallel computing, but Dask represents an interesting competitor that's worth watching throughout 2018. Its high-level API integrates exceptionally well with Python libraries like NumPy and pandas; it's also much more lightweight than Spark, so it could be a good option if you want to avoid building out a weighty big data tech stack (see the short sketch below). Explore Dask in the latest edition of Python High Performance.
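To make the Dask point concrete, here is a minimal sketch of its dataframe API. The file pattern and column names (data-*.csv, category, value) are hypothetical placeholders; the point is that the code reads like pandas while Dask splits the work into chunks that are only computed, in parallel, when compute() is called.

import dask.dataframe as dd

# Lazily treat many CSV files as one logical dataframe; nothing is loaded yet.
df = dd.read_csv("data-*.csv")  # hypothetical file pattern

# The same groupby/aggregation you would write in pandas...
result = df.groupby("category")["value"].mean()

# ...but the work only happens, chunk by chunk and in parallel, on compute().
print(result.compute())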

Pieter Abbeel et al on how to improve the exploratory behaviour of Deep Reinforcement Learning algorithms

Sugandha Lahoti
14 Feb 2018
4 min read
The paper "Parameter Space Noise for Exploration" proposes parameter space noise as an efficient solution for exploration, a big problem in deep reinforcement learning. The paper is authored by Pieter Abbeel, Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor, Richard Y. Chen, Xi Chen, Tamim Asfour, and Marcin Andrychowicz. Pieter Abbeel has been a professor at UC Berkeley since 2008 and was a Research Scientist at OpenAI (2016-2017). He is one of the pioneers of deep reinforcement learning for robotics, including learning locomotion and visuomotor skills, and his current research focuses on robotics and machine learning with a particular focus on deep reinforcement learning, meta-learning, and AI safety.

Deep reinforcement learning combines deep learning with reinforcement learning to create artificial agents that achieve human-level performance across many challenging domains. This article will talk about one of Pieter's accepted research papers in this field at the 6th annual ICLR conference, scheduled for April 30 - May 3, 2018.

Improving the exploratory behavior of Deep RL algorithms with Parameter Space Noise

What problem is the paper attempting to solve?

This paper is about the exploration challenge in deep reinforcement learning (RL) algorithms. The main purpose of exploration is to ensure that the agent's behavior does not converge prematurely to a local optimum. Enabling efficient and effective exploration is difficult, since it is not directed by the reward function of the underlying Markov decision process (MDP). A large number of methods have been proposed to tackle this challenge in high-dimensional and/or continuous-action MDPs. These methods increase the exploratory behavior of the algorithms through the addition of temporally-correlated noise or through the addition of parameter noise. Their main limitation is that they are either only proposed and evaluated for the on-policy setting with relatively small and shallow function approximators, or they disregard all temporal structure and gradient information.

Paper summary

This paper proposes adding noise to the parameters of a deep network (parameter space noise) when taking actions in deep reinforcement learning, in order to encourage exploration. The effectiveness of this approach is demonstrated through empirical analysis across a variety of reinforcement learning algorithms (DQN, DDPG, and TRPO). It answers the following questions:

Do existing state-of-the-art RL algorithms benefit from incorporating parameter space noise?
Does parameter space noise aid in exploring sparse reward environments more effectively?
How does parameter space noise exploration compare against evolution strategies for deep policies with respect to sample efficiency?

Key Takeaways

The paper describes a method that shows parameter space noise to be a conceptually simple yet effective replacement for traditional action space noise such as ε-greedy and additive Gaussian noise. The work shows that parameter perturbations can successfully be combined with contemporary on- and off-policy deep RL algorithms such as DQN, DDPG, and TRPO, and often result in improved performance compared to action noise. The experiments also suggest that using parameter noise allows solving environments with very sparse rewards, in which action noise is unlikely to succeed.
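To make the idea concrete, here is a minimal, illustrative sketch (not the authors' implementation; the policy accessors and parameter layout are hypothetical) of how parameter space noise differs from action space noise: the policy's weights are perturbed once and the perturbed policy then acts deterministically for a whole episode, instead of fresh noise being added to every individual action.

import numpy as np

def perturb(params, sigma):
    # Add Gaussian noise to every weight tensor of a policy, represented
    # here as a plain dict of numpy arrays.
    return {name: w + sigma * np.random.randn(*w.shape) for name, w in params.items()}

# Sketch of a rollout loop using parameter space noise (hypothetical policy API):
# original = policy.get_params()
# policy.set_params(perturb(original, sigma=0.1))
# trajectory = run_episode(env, policy)   # perturbed weights stay fixed all episode
# policy.set_params(original)             # restore before the gradient update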
Parameter space noise is a viable and interesting alternative to action space noise, which is still the de facto standard in most reinforcement learning applications.

Reviewer feedback summary

Overall Score: 20/30
Average Score: 6.66

The reviewers were pleased with the paper, describing it as a simple exploration strategy that is empirically effective. The paper was found to be clear and well written, with thorough experiments across deep RL domains. The authors have also released open-source code along with their paper for reproducibility, which the reviewers appreciated. However, a common trend among the reviews was that the authors overstated their claims and contributions, and the reviewers called out some statements in particular (e.g., the discussion of evolution strategies versus RL). They also felt that the paper lacked a strong justification for the method beyond it being empirically effective and intuitive.

Data Science Is the New Alchemy

Erol Staveley
18 Jan 2016
7 min read
Every day I come into work and sit opposite Greg. Greg (in my humble opinion) is a complete badass. He directly turns information that we've had hanging around for years and years into actual currency. Single-handedly, he generates more direct revenue than any other individual in the business. When we were shuffling seating positions not too long ago (we now have room for that standing desk I've always wanted ❤), we were afraid to turn off his machine for fear of losing thousands upon thousands of dollars. I remember somebody saying "guys, we can't unplug Skynet". Nobody fully knows how it works. Nobody except Greg. We joked that by turning off his equipment, we'd ruin Greg's on-the-side Bitcoin mining gig that he was probably running off the back of the company network. We then all looked at one another in a brief moment of silence. We were all thinking the same thing: it wouldn't surprise any of us if Greg was actually doing this. We wouldn't know any better.

To many, what Greg does is like modern-day alchemy. In reality, Greg is a data scientist, an increasingly crucial role that helps businesses deliver more meaningful, relevant interactions with their customers. I like to think of data scientists as new-age alchemists, who wield keyboards instead of perfectly choreographed vials and alembics.

This week, find out how to become a data alchemist with R. Save 50% on some of our top titles... or pick up any 5 for $50! Find them all here!

Content might have been king a few years back. Now, it's data, and everybody wants more of it, along with the people who can actually make sense of it all. By surveying 20,000 developers, we found out just how valuable these roles are to businesses of all shapes and sizes. Let's take a look.

Every Kingdom Needs an Alchemist

Even within quite a technical business, Greg's work lends a fresh perspective on what other developers want from our content. Putting the value of direct revenue generation to one side, the insight we've derived from purchasing patterns and user behaviour is incredibly valuable. We're constantly challenging our own assumptions and spending more time looking at what our customers are actually doing. We're not alone in taking this increasingly data-driven approach.

In general, the highest data science salaries are paid by large enterprises. This isn't too surprising, considering that's where the real troves of precious data reside. At such scale, the aggregation and management of data alone can warrant the recruitment of specialised teams. On average, though, SMEs are not too far behind when it comes to how much they're willing to pay for top talent.

[Chart: Average salary by company size.]

Apache Spark was a particularly important focus going forward for folks in the Enterprise segment. What's clear is that data science isn't just for big businesses any more. It's for everybody, and we can see that in the growth of data-related roles at SMEs. We're paying more attention to data because it represents the actions of our customers, but also because we've simply got more of it lying around all over the place. Irrespective of company size, the range of industries we captured (and classified) was colossal. It seems like everybody needs an alchemist these days.

They Double as Snake Charmers

When supply is low and demand is high in a particular job market, we almost always see people move to fill the gap. It's a key driver of learning. After all, if you're trying to move to a new role, you're likely to be developing new skills.
It's no surprise that Python is the go-to choice for data science. It's an approachable language with some great introductory resources out there on the market, like Python for Secret Agents. It also has a fantastic ecosystem of data science libraries and documentation that can help you get up and running quite quickly.

[Chart: Percentage of respondents who said they used a given technology.]

When looking at roles in more detail, you see strong patterns between the technologies used. For example, those using Python were most likely to also be using R. Diving deeper into the data, you start to notice a lot of crossover between certain segments and the relationships between particular technologies within them. For example, the financial sector was more likely to use R, and it also paid (on average) higher salaries to those with a more diverse technical background.

Alchemists Have Many Forms

Back at a higher level, what was really interesting is the natural technology groupings that started to emerge around four very distinct 'types' of data alchemist. "What are they?", I hear you ask.

The Visualizers
Those who bring data to life. They turn what would otherwise be a spreadsheet or a copy-and-paste pie chart into delightful infographics and informative dashboards. Welcome to the realm of D3.js and Tableau.

The Wranglers
The SME all-stars. They aggregate, clean, and process data with Python whilst leveraging the functionality of libraries like pandas to their full potential. A jack of all trades, master of all.

The Builders
Those who use Hadoop and other open source tools to deploy and maintain large-scale data projects. They keep the world running by building robust, scalable data platforms.

The Architects
Those who harness the might of the enterprise toolchain. They co-ordinate large-scale Oracle and Microsoft deployments, the sheer scale of which would break the minds of mere mortals.

Download the Full Report

With 20,000 developers taking part overall, our most recent data science survey contains plenty of juicy information about real-world skills, salaries, and trends. You can find it at Packtpub.com.

In a Land of Data, the Alchemist is King

We used to have our reports delivered in Excel. Now we have them as notebooks on Jupyter. If it really is a golden age for developers, data scientists must be having a hard time keeping their inboxes clear of all the recruitment spam. What's really interesting going forward is that the volume of information we have to deal with is only going to increase. Once IoT really kicks off and wearables become more commonly accepted (the sooner the better if you're Apple), businesses of all sizes will find dealing with data overload to be a key growing pain, regardless of industry.

Plenty of web services and platforms are already popping up, promising to deliver 'actionable insight' to everybody who can spare the monthly fees. This is fine for standardised reporting and metrics like bounce rate and conversion, but not so helpful if you're working with a product that's unique to you. Greg's work doesn't just tell us how we can improve our SEO. It shows us how we can make our products better without having to worry about internal confirmation bias. It helps us better serve our customers. That's why present-day alchemists like Greg are heroes.