Tech Guides

The War on Data Science: Python versus R

Akram Hussain
30 Jun 2014
7 min read
Data science

The relatively new field of data science has taken the world of big data by storm. Data science gives valuable meaning to large sets of complex and unstructured data, with a focus on concepts like data analysis and visualization. Meanwhile, machine learning, a valuable concept from the field of artificial intelligence, has been adopted by organizations and is becoming a core area for many data scientists to explore and implement. To carry out these tasks, data scientists need powerful languages. R and Python currently dominate the field, but which is better, and why?

The power of R

R offers a broad, flexible approach to data science. As a programming language, R focuses on allowing users to write algorithms and computational statistics for data analysis, and it can be very rewarding to those who are comfortable using it. One of R's greatest benefits is its ability to integrate with other languages like C++, Java, and C, and with tools such as SPSS, Stata, and Matlab. Its rise to prominence as the most powerful language for data science was supported by its strong community and the more than 5,600 packages available. However, R is very different from other languages; it's not as easily applicable to general programming (not to say it can't be done). The same strengths that let R communicate with every data analysis platform limit its usefulness outside that category. Game development, web development, and so on are all achievable, but there's just no benefit to using R in these domains. R is also difficult to adopt, with a steep learning curve, even for those who have experience with statistical tools like SPSS and SAS.

The violent Python

Python is a high-level, multi-paradigm programming language. It has emerged as one of the more promising languages of recent times thanks to its easy syntax and interoperability with a wide variety of ecosystems. More interestingly, Python has caught the attention of data scientists over the years: thanks to its object-oriented features and very powerful libraries, it has become the go-to language for data science, with many arguing it has taken over R. However, like R, Python has its flaws. One drawback is speed: Python is a slow language, and speed is fundamental to data science. Python is very good as a general programming language, but it's a bit of a jack of all trades and master of none. Unlike R, it doesn't focus purely on data analysis, though it has impressive libraries for carrying out such tasks.

The great battle begins

In comparing the two languages, we will go over four fundamental areas of data science and discuss which is better: data mining, data analysis, data visualization, and machine learning.

Data mining: As mentioned, data mining is one of the key components of data science. R seems to win this battle; in the 2013 Data Miners Survey, 70% of the roughly 1,200 data miners who participated reported using R for data mining. However, it could be argued that you wouldn't really use Python to "mine" data, but rather use the language and its libraries for data analysis and for developing data models.

Data analysis: R and Python both boast impressive packages and libraries. Python's NumPy, Pandas, and SciPy libraries are very powerful for data analysis and scientific computing.
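To give a flavour of that stack, here is a minimal Pandas sketch (the CSV file and column names are invented for illustration):

    import pandas as pd

    # Load a hypothetical CSV of sensor readings into a DataFrame.
    df = pd.read_csv("readings.csv")

    # Vectorized, NumPy-backed operations: fill gaps with the mean,
    # then summarize per sensor, all without writing an explicit loop.
    df["reading"] = df["reading"].fillna(df["reading"].mean())
    summary = df.groupby("sensor")["reading"].agg(["mean", "std", "count"])
    print(summary)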
R, on the other hand, is different in that it doesn't just offer a few packages; the whole language is built around analysis and computational statistics. An argument could be made that Python is faster than R for analysis and cleaner for coding over data sets. What I've noticed, though, is that Python excels at the programming side of analysis, whereas for statistical and mathematical programming R is a lot stronger thanks to its array-oriented syntax. The winner here is debatable: for mathematical analysis, R wins, but for general analysis and for writing clean statistical code closer to machine learning, I would say Python wins.

Data visualization: The "cool" part of data science. The phrase "a picture paints a thousand words" has never been truer than in this field. R boasts its ggplot2 package, which allows you to write impressively concise code that produces stunning visualizations. However, Python has Matplotlib, a 2D plotting library that is equally impressive, with which you can create anything from bar charts and pie charts to error charts and scatter plots. The general consensus is that R's ggplot2 gives data models a more professional look and feel. Another one for R.

Machine learning: It knows the things you like before you do. Machine learning is one of the hottest things to hit the world of data science, and companies such as Netflix, Amazon, and Facebook have all adopted it. Machine learning uses complex algorithms and data patterns to predict user likes and dislikes, making it possible to generate recommendations based on a user's behaviour. Python has a very impressive library, Scikit-learn, to support machine learning; it covers everything from clustering and classification to building your very own recommendation systems. R, on the other hand, has a whole ecosystem of packages specifically created for machine learning tasks. Which is better for machine learning? I would say Python's strong libraries and OOP syntax might have the edge here.
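To make the Scikit-learn point concrete, here is a minimal clustering sketch. It uses the library's bundled iris data, so nothing here comes from the article itself:

    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris

    # Group the classic iris measurements into three clusters.
    X = load_iris().data
    model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

    print(model.labels_[:10])       # cluster assigned to the first ten samples
    print(model.cluster_centers_)   # the learned cluster centres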
One to rule them all

On the surface, the two languages seem evenly matched across the majority of data science tasks; where they really differentiate depends on an individual's needs and what they want to achieve. And there is nothing stopping data scientists from using both. Because R is compatible with other languages and tools, its rich packages can be used within a Python program via RPy (R from Python). For example, you might carry out data analysis in the IPython environment with NumPy and SciPy, yet use R's ggplot2 package to represent the data visually: the best of both worlds. An interesting theory that has been floating around for some time is to integrate R into Python as a data science library; data scientists would then have one awesome place providing R's strong data analysis and statistical packages alongside all of Python's OOP benefits. Whether this will happen remains to be seen.

The dark horse

We have explored both Python and R and discussed their individual strengths and flaws for data science. As mentioned earlier, they are the two most popular and dominant languages in this field. However, a new, emerging language called Julia might challenge both in the future. Julia is a high-performance language that essentially tries to solve the problem of speed for large-scale scientific computation. It is expressive and dynamic, it's as fast as C, it can be used for general programming (though its focus is scientific computing), and it is easy and clean to use. Sounds too good to be true, right?
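As a footnote to the interoperability argument above, here is roughly what the "best of both worlds" workflow can look like using rpy2, the current successor to the RPy package mentioned earlier. This is a sketch that assumes rpy2, R, and R's ggplot2 are installed; the data is invented:

    import rpy2.robjects as robjects

    # Hand a Python list to R as a numeric vector...
    robjects.globalenv["x"] = robjects.FloatVector([1.2, 3.4, 2.2, 5.6, 4.1])

    # ...then let R do what R does best: statistics and ggplot2 graphics.
    robjects.r("""
        library(ggplot2)
        ggsave("hist.png", qplot(x))
    """)
    print(robjects.r("summary(x)"))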

Why Gamification is Changing Everything

Julian Ursell
30 Jun 2014
5 min read
'Must keep streak going.' That is the sound of someone on a twenty-plus kill streak in Titanfall (probably me). It's also the sound of someone on a 20-day streak learning JavaScript on Codecademy. Gamification is an increasingly popular way to structure, and make enjoyable, the way people engage with learning, business, and even the most routine aspects of lifestyle. Prominent businesses are developing applications that apply game mechanics and rules to a variety of scenarios, building in incentives and rewards for users to strive toward, whether intrinsic or carrying a practical benefit in the real world.

A brilliant example of gamification is Codecademy, which teaches new coders a programming language through a motivational system of badges and streaks that keeps learners hooked and incentivized to continue. I'm currently learning Spanish with the language learning app Duolingo, which uses gamification to measure and motivate learning progression: streaks, experience (XP) points, and 'checkpoints' structure the experience and enhance retention. Learners can unlock bonus skills by acquiring hearts, earned by answering all of a lesson's questions correctly. When I'm on a streak, Duolingo sends notifications to my phone to keep it up, and compels me to reinforce my learning with refresher (called 'strengthen') lessons. It's extraordinarily effective as a fun, fulfilling educational experience, and I can positively say that I am retaining much of what I have learnt.

Gamification has been rolled out at several high-profile companies, including Nike, Starbucks, and Microsoft (who used gamification for staff appraisals!), and in recent years it has increasingly been considered as a solution for a number of important business concerns, whether easing the pain of unpalatable training sessions or driving customer and community engagement with a product. Nike+ is a shining example of gamification on a grand, successful scale. Built on the idea of Nike Fuel points, it rewards consistent exercise and activity with trophies and personal benchmarks, offers the option to set individual challenges, and lets you compete with friends on a community leaderboard. On the one (cynical) hand, it's powerful, effective marketing that engages users in Nike's virtual community and generates revenue through sales of the FuelBand (a wristband that tracks the wearer's movements), running shoes (from 2006 to 2009 Nike increased its share of the running shoe market from 47% to 61%), and other merchandise, all without rewarding exercisers with anything of physical value (instead they're treated to celebratory animations). On the other hand, there are obvious benefits to accruing Fuel points, as doing so means undertaking consistent, healthy exercise, with the positive reinforcement of earning trophies, setting new performance goals, and recording statistics about calories burned, distance covered, and time spent exercising. All of this is integrated socially: friends can see exactly what kind of activity you've undertaken, creating a connected sphere of collective competition. As a statistics junkie, the ability to constantly valorize my exercise and visualize the impact of the hours I'm putting in is even more reason to keep burning down the treads on my Nikes.
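The mechanics involved are usually simpler than they feel. A streak, for instance, is just a count of consecutive active days; here is a minimal Python sketch of the idea, purely illustrative and not taken from any of the apps mentioned:

    from datetime import date, timedelta

    def current_streak(activity_dates, today=None):
        """Count consecutive active days ending today (or yesterday)."""
        days = set(activity_dates)
        day = today or date.today()
        if day not in days:              # no activity yet today: streak ended yesterday
            day -= timedelta(days=1)
        streak = 0
        while day in days:
            streak += 1
            day -= timedelta(days=1)
        return streak

    # Three consecutive days of lessons give a streak of 3.
    lessons = [date(2014, 6, 28), date(2014, 6, 29), date(2014, 6, 30)]
    print(current_streak(lessons, today=date(2014, 6, 30)))   # prints 3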
What gamified apps offer is a way to accommodate the seemingly natural inclination of humans to structure and conceptualise challenges according to game-like logic. Whether gamification is employed to drive marketing and business, to engineer customer and community participation, or to encourage learning, it has proved a versatile approach (I'm trying to avoid calling it a business methodology) to solving different problems in the real world via games. Gamification won't be for everyone, and we would assume it depends in part on a degree of investment from the user (gamer); we might also ask whether the user is engaging with the game or with the actual subject at hand. But the beauty of it is that games are so appealing and intuitive to modern generations that users don't have to be coerced to engage with, and enjoy, gamified applications. The charge of 'mandatory enjoyment' may be levelled at it, but I've never heard someone adamantly refuse to play a game. If anything, it makes the pill of dull company training much easier to swallow.

Whatever scepticism some may have about the Gamification of Things, we should appreciate it as a validation of the value of games: it says something very positive about the way we can implement game mechanics in the real world, and businesses are treating it seriously as a strategy for application development. Gamification is being tested and considered as a solution in a huge array of situations, from project management to the actual deployment of applications. For example, the cloud deployment platform Engine Yard introduced gamification to increase users' contributions to the community, rewarding users for helping other customers. There are even gamification startups offering platforms (Gamification-as-a-Service?) for building game mechanics into new or existing enterprise applications; just imagine a gamified tech startup! It won't be successful or suitable for every application or domain, but there has been enough demonstrable success to show that, when implemented correctly, gamified applications are hugely productive for both the user and the business that mobilises them. As long as developers are not building in gamification for gamification's sake, and the mechanics are intelligently thought out with clever incentive systems in place, we may see even greater incorporation of it in the future. As with any great game, the focus needs to be on the gameplay as much as the outcome, so that applications benefit both the player and the business in equal measure.

Pixar for All: RenderMan Made Free to Everyone

Julian Ursell
30 Jun 2014
3 min read
Render me excited. Following the announcement that Pixar will be making available a non-commercial version of its 3D visual effects and rendering software, there's a resounding buzz among the creative populace about the opportunity to play around in the RenderMan sandbox. Just reflect on that: you get to use the technology Pixar used to make Toy Story, WALL-E, and Monsters, Inc., the technology responsible for the vast majority of the incredible visual trickery of modern cinema. The thought of being able to recreate the astonishing visual environments and landscapes produced by Pixar's cutting-edge rendering software, recognized so unmistakably around the world, is a mouth-watering prospect. I'm looking forward to messing around with the software and producing several poor man's versions of Pixar's most famous films. I'll also make Cars into a good film. (Pixar is mobilizing lawyers right now to make us redact this.)

It's not just the general availability that has people animated about RenderMan. Along with the free-to-all announcement came details of an overhaul of the software that vastly enhances its rendering mechanics and capabilities. RIS is the fast new rendering architecture under the hood; it specifically enhances global illumination and ray-traced scenes that work with heavy geometry. The classic rendering architecture, REYES, also remains available, giving artists the option to work with either. It's a wonderful bonus that at the same time it has been made freely available, RenderMan has also been supercharged, giving amateur (and professional) visual effects artists an immensely powerful palette with which to develop animation projects. If you're someone who's never used animation software before, it's probably like being given the keys to a Ferrari without having a driving license. And let's be honest, that's not an opportunity anyone would pass up, right?

Threaded into this, the price of the commercial version of RenderMan has been slashed, which may tempt users who have developed animations with the free version to go a step further and purchase the paid product for industrial distribution. If you have the inclination (and the expertise), you too could be producing the visual effects that have revolutionized cinema over the past 25 years or so. Okay, maybe I'm selling Pixar's line for them, but this is a hugely progressive move for aspiring animators, who will jump at the opportunity to experiment with technology that is still blazing a trail. It's potentially a gateway for people with creative talent and flair to showcase their abilities and get into the animation and visual effects industry, whether for cinema, television, or advertising. I've registered for a free license for when it's made available in August. I may never be a VFX wizard, but I am, like a vast number of others, nonetheless piqued with intrigue at the opportunity to try my hand with RenderMan, even if the end product is a mangled Ferrari.

Soldering: Tips and Tricks for Makers

Clare Bowman
30 Jun 2014
5 min read
Although solderless breadboards give makers an easy way to build functioning circuits, the builds are only really reliable if they aren't handled too heavily. For example, in our first post we talked about building a Weather Cube as a sensory tool for occupational therapists. The breadboard circuit secured inside the foam cube might survive fairly well, but in any highly physical wearable application it would be easy for a single wire to be pulled out of the circuit, causing it to fail at a vital moment. In this post, we detail how we soldered our Weather Cube project, plus provide timesaving and pain-saving tips born of trial and error (and one burnt finger). If you have little or no experience working with stripboards, it could be worth practicing your skills before starting.

Important safety warning

Protective equipment such as safety glasses should always be worn, and you should have first aid equipment available whenever you are working with metal, including melting solder, hacksawing, and spot-cutting copper board.

Before you begin soldering your project, you will need the following:

- A soldering iron (the iron becomes extremely hot, so take care not to touch the tip with your hands)
- Solder (usually made of tin and lead)

Soldering a stripboard for a Weather Cube

First, cut your stripboard (some people call it veroboard, but it's the same thing). Do this by laying the stripboard horizontally with the copper side facing you. Count 25 points in from the right-hand side of the stripboard and draw a line from top to bottom. Use a G-clamp to secure the stripboard to a solid surface, and then cut along the line with a junior hacksaw; starting with just downward strokes will help you keep on track initially. You could also cut the top two rails off if you want your project to be as small as possible, or color the top two rails to remind yourself not to count these holes. Then, follow these steps:

1. Count six spaces from the right side. Draw a line from the top to the bottom of the board on the copper side.
2. Count seven spaces from the line you've just drawn, and draw a line from top to bottom again.
3. Count a further six spaces and once again draw a line from top to bottom.
4. Spot cut these lines. Spot cutting involves twisting a dedicated spot cutter into parts of the copper where you want a gap in the copper rails.
5. Flip the stripboard over so that the copper side is facing down, and clip it onto the soldering station holder.

For convenience, we recommend using exactly the same component positions as the breadboard build. It's useful to keep a tested breadboard version of the layout nearby; you can use it as a reference for component positions on the stripboard version as you build, to help ensure you don't introduce errors.

Soldering a piezo

A piezo is a small sensor device used by makers to convert pressure and force into an electrical charge. These sensors are also very delicate and can easily come apart; if one does, you will have to re-solder it. To solder the piezo back together, follow these steps:

1. Strip the end of the wire by approximately 4 mm.
2. Twist the wire strands together into one piece of wire.
3. Tin the wire by coating the exposed strands with a little solder.
4. Either push the wire into a hole on the same rail or, if the wire has come detached at the piezo end, solder it back onto the piezo. Don't leave the soldering iron on the piezo element for too long, as you could damage it.
Conclusion

Soldering gives projects greater robustness, allowing them to be handled without easily falling apart. With these steps, we hope to have provided you with some of the tips and tricks you need to successfully solder your inventions.

About the authors

Clare Bowman enjoys hacking playful interactive installations and co-designing digitally fabricated consumer products. She has exhibited projects at Maker Faire UK, the Victoria and Albert Museum, FutureEverything, and Curiosity Collective gallery shows. Recent work includes "Sands Everything", an interactive hourglass installation interpreting Shakespeare's Seven Ages of Man soliloquy through gravity-controlled animated grains.

Cefn Hoile sculpts open source hardware and software, and supports others doing the same. Drawing on 10 years of experience in R&D for a multinational technology company, he works as a public domain inventor and as an innovation catalyst and architect of bespoke digital installations and prototypes. He is a founder member of the CuriosityCollective.org digital arts group and a regular contributor to open source projects and not-for-profits. Cefn is currently completing a PhD in Digital Innovation at Highwire, University of Lancaster, UK.

A Maker's Journey into 3D printing

Travis Ripley
30 Jun 2014
14 min read
If you've visited any social media outlets, you've probably come across a never-ending list of new words and terms: the Internet of Things, technological dissonance, STEM, open source, tinkerer, maker culture, constructivism, DIY, fabrication, rapid prototyping, techshop, makerspace, 3D printers, Raspberry Pi, wearables, and more. These terms are typically used to describe a maker, or have something to do with maker culture. Follow along to learn about my particular journey into the maker culture, specifically in the 3D printing space.

The rise of the maker culture

Maker culture is on the rise. This is a culture that thrives at the intersection of technology and innovation at the informal, social, peer-led level. The interactions of skilled people driven to share their knowledge with others, develop new pathways, and create solutions for current problems have built a new community. I am proud to say that I am a maker-tinkerer (or that I have some form of motivated ADHD that drives me to engage in engineering-oriented pursuits). My journey started at ground zero while studying 3D design and development.

A maker's journey

I knew there was more I could do with my knowledge of rendering the three-dimensional surface of an object. Early on, however, I only thought about extending that knowledge for entertainment purposes, such as video games. I didn't understand the power of having this knowledge and the way it could help create real-world solutions. Then I came across an issue of Make Magazine and it changed my mental state overnight: I had to create tangible things. Now that I had the information to send me in the right direction, I needed an outlet. An industry friend mentioned a local hackerspace, known as Deezmaker, which was holding informational workshops about 3D printing, so I signed up for an introductory class. I had no clue what I was getting myself into as I crossed that first threshold, but by that evening I was versed in topics I had thought were beyond my mental capabilities. I was hooked. The workshop consisted of part lecture and part hands-on material. I learned that you can't just start using a 3D printer; you need some basic understanding of the manufacturing process, such as understanding that layers of material must be successfully laid down before moving on to the next stage. Being the curious, impatient, and overly enthusiastic man-child that I am, this was the most difficult part, as I couldn't wait to engage in this new world.

3D printing

Almost two years later, I am fully immersed in the world of 3D printing. I currently have a 3D printer at home (almost obsolete by today's standards) and access to multiple printers at a local techshop/makerspace known as MakerPlace here in San Diego, CA. I use this technology regularly, since I have changed direction in my career as a 3D artist, moving towards manufacturing engineering and rapid prototyping. I am currently attending a Machine Technology/Engineering program at San Diego City College (for more info on the best machining program in the country, visit http://www.JCbollinger.com). The benefit of using 3D printers is rapidly producing iterations of prototypes for my clientele: most people feel more reassured by tangible, solid objects, and are more likely to trust you as a designer when they have them.
I feel that having this access also helps me complete more jobs successfully, given that turnaround times for updates can be as little as a few hours rather than days or weeks (depending on size and scale). Currently I have a few recurring clients who want frequent updates; by showing them my progress, the iterations are fewer and I can move on to the next project without hesitation, because we can see design updates rapidly and minimize flaws and failures. I produce prototypes for all industries: toys, robotics, vehicles, and so on. Think of it as producing solutions: how can you make something better or simpler? Taking on such challenges has its benefits, as with each new design job you have all these tangible objects to look at and examine. As a hobbyist, the technology has made it easy to reproduce new or even obsolete items. For example, I love Transformers, but plastic does two things very well: it breaks, and it gets lost. I came across a forum where people were distributing the model files for the arm extrusions that break (no one likes gluing), so I printed the parts that had been missing for decades, rebuilt the armature that had for so long been displaced, and then, like magic, I felt like I was six years old again with a perfectly working Transformer.

Here are a few things I've learned along the way. 3D printing is also known as additive manufacturing: the process of producing three-dimensional objects in which successive layers of material are extruded under computer-controlled equipment that is fed information from 3D models, which in turn come from a data source that processes the information into machine language. The plastic extrusion technology that is now slowly becoming more popular is known as Fused Deposition Modeling (FDM). This process was developed in the early 1990s for job production, mass production, rapid prototyping, product development, and distributed manufacturing. The principle of FDM is that material is laid down in layers. There are many other processes, such as Selective Heat Sintering (SHS), Selective Laser Sintering (SLS), Stereolithography (SLA), and Plaster-Based 3D Printing (PP), to name a few, but we will keep it simple here and go over the FDM process, as most printers at the hobbyist level use it. The FDM process significantly affected roles within the production and manufacturing industries, with practitioners wearing multiple hats as engineer, designer, and operator, as growth made the technology affordable to an array of industrial fields. By contrast, CNC machining, a subtractive manufacturing process, has naturally been incorporated to work alongside this development. The influence of this technology on the industrial and manufacturing industries exposed them to new methods of production at exponential rates, for example automation. For the home-use and hobbyist market, the 3D printers produced by the open source/open hardware initiative stem directly or indirectly from the RepRap.org project, a free to low-cost desktop 3D printer that is self-replicating. That being said, you can thank them for starting this revolution. By getting involved in this community you benefit everyone, spreading the spark that will continue to create new developments in manufacturing and consumer technology.
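A toy example makes the "layers" principle concrete. Real slicing software also computes extrusion amounts, speeds, temperatures, infill, and supports, but stripped to its core, the G-code that drives an FDM printer is just layer after layer of movement commands. Here is a hypothetical Python sketch with invented dimensions:

    LAYER_HEIGHT = 0.2   # millimetres per layer
    SIDE = 20.0          # a 20 mm square part

    def square_layer(z):
        """Tool-path moves for one square perimeter at height z."""
        corners = [(0, 0), (SIDE, 0), (SIDE, SIDE), (0, SIDE), (0, 0)]
        moves = [f"G1 Z{z:.2f} ; step up to the next layer"]
        moves += [f"G1 X{x:.1f} Y{y:.1f}" for x, y in corners]
        return moves

    gcode = ["G28 ; home all axes"]
    for layer in range(1, 6):   # just the first five layers
        gcode += square_layer(layer * LAYER_HEIGHT)
    print("\n".join(gcode))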
The FDM process can be done with a multitude of materials; the two most popular options at this time are PLA (polylactic acid) and ABS (acrylonitrile butadiene styrene). Both have pros and cons depending on your model's structure, the future use of the print, and client requests; understanding the fundamental differences between the two will help you choose one over the other or, if you own a printer with two extruders, decide how to combine them. With two extruders, PVA (polyvinyl alcohol) is often used as the support material, because unlike PLA or ABS, which require cleanup when used for supports, PVA is water soluble: soak your print in warm water and the support structures dissolve away.

PLA is a strong, biodegradable plastic derived from renewable resources such as cornstarch and sugarcane. It is more resistant to UV rays than ABS (so you will not see fading in your prints), and it sticks better than any other material to the surface of your hotplate (minimal warping), which is a huge advantage. It prints at around 180°C; it can ooze, and if your nozzle is loaded it will drip, which also means that leaving a print in your car on a hot day may cause damage.

ABS is stronger than PLA but non-biodegradable; it is a synthetic polymer whose acrylonitrile monomer is produced from propylene and ammonia. It has more rigidity than PLA but is also more flexible, and it is colorfast (it will hold its color for years). It prints at around 220°C, and because it is amorphous it has no true melting point, so a heated bed is needed; warping can and will occur, usually because the bed is not hot enough (at least 80°C) or the Z axis is not calibrated correctly.

Printer options

For the hobbyist maker, there are a few 3D printer options to consider. Depending on your skill level, needs, budget, and commitments, there is a printer out there for you.

The least expensive, smallest, and most straightforward printer on the market is the Printrbot Simple Maker's 3D printer. Retailing at $349.99, it comes as a kit that includes the bare necessities to get started, and it is capable of printing a 4-inch cube. You can also purchase it already assembled for a little extra. The kit and PLA filament are available at www.makershed.com.

The 3D printer I started on, personally own, and recommend is the Afinia H480. Retailing at $1,299.99, it offers the easiest setup right out of the box: it comes fully assembled, has a heated platform to aid adhesion and reduce the chance of warping, and can print up to a 5-inch cube. It also comes loaded with its own native 3D software for manipulating your .STL files, and it has an automated utility to calibrate the build platform against the printhead, automatically generating any support material and the "raft" that forms the base support for your prints. There is much more to it, but as I said, I recommend it for beginners; it too is available through www.makershed.com.

For the hobbyist or semi-professional who wants more, consider the next generation in 3D printing, the MakerBot Replicator. It is quick and efficient.
Retailing at $2,899.00, this machine has an extremely high layer resolution and an LCD display, and if you run out of filament (ABS/PLA) there is no need to start over; it will alert you via computer or smartphone that a replacement is needed. There are many other types of 3D printer available, with options including open source and open hardware designs, different filament types, delta-style mechanics, single or double extruders, and so on. My main suggestion is to try before you buy, either at a local hackerspace or a local Maker Faire. It's a worthwhile investment that pays for itself.

Choosing your tools

Before you begin, it's also important to choose your design tools, and there is a multitude of cost-effective and free software out there to get you started. The 3D printing process has a required "tool chain" that must be followed in order to complete the process, roughly broken down into three parts:

1. CAD (Computer-Aided Design): tools used to design 3D parts for printing. There are very few interchangeable CAD file formats, sometimes referred to as parametric files. The most widely used interchangeable mesh format is .STL (stereolithography); this format is the most important, as it is the one used by CAM tools.
2. CAM (Computer-Aided Manufacturing): tools handling the intermediate step of translating CAD files into a machine-friendly format.
3. Firmware for electronics: what runs the onboard electronics of the printer. This is the closest to actual programming, via a process known as cross compiling.

Here are my best picks for each category, drawn from FLOSS (free/libre/open source software). FLOSS CAD tools such as OpenSCAD, FreeCAD, and HeeksCAD for the most part create parametric files that represent parts or assemblies in terms of CSG (Constructive Solid Geometry), essentially a tree of Boolean operations performed on primitive shapes such as cubes, spheres, cylinders, and pyramids. These are modified numerically and with great precision, the geometry being a mathematical representation no matter how far you zoom in or out. Another category of CAD tool represents parts as 3D polygon meshes and is mostly used for special effects in movies or video games (CG). These tools are also a little more user friendly; examples are Autodesk Maya and Autodesk 3ds Max (both subscription/retail products). There are also open source and free options of this kind, such as Autodesk 123D, Google SketchUp, and Blender; I suggest the latter options, since they are free, user friendly, and much easier to learn, their features being narrowed down strictly to producing 3D meshes. If you need more precision, look at OpenSCAD (my favorite), as it was created directly for making physical objects rather than for game design or animation. OpenSCAD is easy to learn, with a simple interface; it is powerful and cross-platform; and there are many examples you can use, along with strong community support.

Next, you'll need to convert your 3D masterpiece (.stl) into a machine-friendly format known as G-code, a process also known as "slicing". You're going to need CAM software to produce the "tool paths", which is the next stop in the tool chain. Most of the slicing software available is open source.
Some examples are Slic3r (the most popular, with an ease of use recommended for beginners), Skeinforge (dated, but still one of the best), Cura, and MatterSlice. There is also great closed source slicing software out there, one in particular being KISSlicer, whose pro version supports multi-extruder printing.

The next stop after slicing is software known as a G-code interpreter, which breaks down each line of the code into electronic signals, and a G-code sender, which sends those signals to the motors on the printer to tell them how to move. This software is usually linked directly to an EMC (Electronic Machine Controller), which controls the printer, or to an integrated hardware interface with a built-in G-code interpreter, which loads the G-code directly from a memory card (SD card/USB). The last stop is the firmware, which controls the electronics on board the printer. For the most part, the CPUs that control these machines are simple microcontrollers, usually Arduino-based, and they are compiled using the Arduino IDE. This process may sound time-consuming, but once you go through the tool chain a few times it becomes second nature, just like driving a car with a manual transmission.

Where to go from here?

When I finished my first hackerspace workshop, I had been assimilated into a culture that I was not only benefiting from personally, but one I could share my knowledge with and contribute to. I have received far more from my journey as a maker than from any previous endeavor. To anyone who is curious and mechanically inclined (or not), and who believes they have an answer to a problem, I challenge you. I challenge you to make the leap into this culture: join a hackerspace, attend a Maker Faire, and enrich your life and the lives of others.

About the author

Travis Ripley is a designer/developer. He enjoys developing products with composites, woods, steel, and aluminum, and has been immersed in the maker community for over two years. He also teaches game development at the University of California, Los Angeles. He can be found @travezripley.

Making for the Greater Good

Clare Bowman
30 Jun 2014
7 min read
Occupational therapists (OTs) work with individuals to achieve increased participation in their desired occupations, be it in work, self-care, or leisure activities. The cross-collaboration between OTs and the maker community, a group of technology-based do-it-yourself hobbyists, is a space with much potential that should be explored further. In this blog post, we explore one such collaboration: a Weather Cube case study.

The Weather Cube was originally built for individuals with severe learning difficulties in an environmental awareness group who experienced problems with their sensory integration (SI); inefficient sensory processing in an individual may result in sensory integration dysfunction. The cube stimulates the user's imagination and increases understanding of weather. Discussions can be started around the different weather elements, and stimuli can be introduced: for example, a fan can give the impression of wind, or water can be dripped onto the service user's hands to convey the feeling of wetness. The cube's sound files and images can be changed to suit different individuals and groups.

Building the Weather Cube

Each side of the large foam Weather Cube is stenciled with a different meteorological icon and associated with relevant weather sounds. By turning the face of the cube, the user can hear sounds and associate them with images. Each icon is assigned a unique sound file; as the cube is picked up, the sound file linked with the upward-facing plane is wirelessly triggered. Housed inside the Weather Cube is a Shrimp, a DIY circuit (see shrimping.it for further information).

Sourcing the prototyping materials

To source the hardware for the Weather Cube we used what we call 'Shrimping', a strategy for sourcing and openly documenting the interactive physical computing kits we create to support UK learners. We call it Shrimping out of loyalty to our humble hometown of Morecambe, an area so famous for its shrimps that the soccer team is named after them! Shrimping is based on sourcing, testing, and documenting the cheapest possible components and modules direct from the manufacturers and wholesalers who serve professional electronics engineers and integrators. After prototyping a project, we provide free, easy-to-follow build graphics, instructions, and sourcing information online, enabling others to prepare their own project kits direct from wholesalers at substantially below retail prices, especially when purchasing in volume. In this section we outline the benefits and problems of sourcing your own parts direct.

Make circuits like an engineer

Wholesale component suppliers do not operate with the hobbyist in mind, but their products are incredibly cheap and, with just a bit of community-maintained documentation, can be used like Lego bricks, brought together in different combinations to prototype and deploy a variety of educational and entertaining devices. Constructing devices on breadboards and stripboards helps makers develop substantial prototyping skills and understand the pathway that professional device inventors use. With these skills and materials you can personalize a circuit to meet your own specific needs, which is nearly impossible with a printed circuit board, and once complete, you can use the working circuit as a reference for moving towards full-scale manufacturing of printed circuit boards. However, for many people, the main benefit of this approach is price.
For hobbyists, Shrimping makes it cost-effective to deploy large numbers of experimental projects. For classrooms and hackspaces, it becomes feasible to donate kits for learners to adopt and personalize, which would be prohibitive with prefabricated microcontroller boards from hobbyist suppliers.

Shrimp vs. Arduino

The programs that run on the Arduino Uno microcontroller board will run on the Shrimp too. The Shrimp has the full set of input and output pins of an Uno, meaning makers can use the circuit to replicate the many thousands of community-documented Arduino projects. However, it is built from the bare minimum of components, making it roughly one tenth the cost of an official Arduino board. In the Weather Cube, we decided to attach a Shrimp circuit to a Raspberry Pi. Relative to the Shrimp, the Raspberry Pi is geared more towards power- and processor-heavy multimedia and desktop applications, and for physical computing projects a Pi always needs some kind of interfacing circuit attached, which can itself be quite expensive. The Shrimp therefore has strengths complementary to the Raspberry Pi: its low cost, its ability to attach directly to sensors and actuators, its low power draw, and its ability to run software in real time.

The computing capabilities of an official Arduino board come from the ATMEGA328 chip at its center, and the Shrimp is essentially the manufacturer's reference circuit from the data sheet, laid out on a breadboard. Unfortunately, a special program called an Arduino bootloader needs to be copied onto an ATMEGA before it can be programmed from the Arduino IDE, which means you can't use wholesale ATMEGAs without an extra preparation step. Using online auction sites you can buy a chip with the Arduino bootloader already added, and once you have one Arduino-compatible chip, you can use it to bootload more chips using a special Arduino sketch called Optiloader.
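Before moving on to the breakout modules, here is a concrete illustration of that Shrimp-plus-Pi division of labour: a hypothetical Python sketch of the Raspberry Pi side of the Weather Cube. The original build's code is not published here, so the face IDs, port name, and sound filenames below are all invented:

    import serial    # pyserial
    import pygame

    # Map each face ID the Shrimp reports to a weather sound file.
    SOUNDS = {"1": "rain.wav", "2": "wind.wav", "3": "thunder.wav",
              "4": "birds.wav", "5": "hail.wav", "6": "storm.wav"}

    pygame.mixer.init()

    # Once paired, the HC-06 typically shows up as a serial port, e.g. /dev/rfcomm0.
    link = serial.Serial("/dev/rfcomm0", 9600, timeout=1)

    while True:
        face = link.readline().decode().strip()   # e.g. "3" when a new face turns up
        if face in SOUNDS:
            pygame.mixer.Sound(SOUNDS[face]).play()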
Breakout modules

In addition to the Shrimp on a breadboard, we used three breakout modules and one further sensor component, a piezoelectric transducer. The breakout modules needed are: a CP2102 USB-to-UART module for wired programming and communication, an HC-06 module for wireless Bluetooth communication, and an ADXL345 accelerometer module for sensing the orientation of our wearable sensors. The codes CP2102, HC-06, and ADXL345 actually refer to small 'surface mount' components with tiny connections intended to be mounted industrially onto printed circuit boards; these components cannot be inserted into a breadboard or easily connected to for prototyping. For this reason, various suppliers provide 'breakout modules' which make the connections available as large pins with 0.1 inch (2.54 mm) separation, suitable for insertion into a breadboard or wiring with female header cables. The components themselves are quite cheap, and breakout boards are fairly simple to engineer, so prices remain low. This also means different suppliers end up making similar-looking breakout boards with different pin sequences and labeling. Breakout boards have the same fundamental capabilities, because they 'break out' the same pins from the same component, so if you wire to the correctly labeled pins, changes to the layout should not normally make much difference. One major exception, sadly, is the transmit and receive pins on UART modules. Some UARTs label their pins according to their role, describing whether they transmit (TX) or receive (RX) data; others label their pins to describe which pins on the communicating device to attach to, so a transmitting pin is actually labeled RX, and a receiving pin TX.

As you can see, there is a lot of potential for the maker community to collaborate with health professionals (and others) to design projects for the greater good. And by sourcing wholesale prototyping materials, makers can cheaply test and document their projects and invent personalized circuits. So, if you are a maker, we urge you to get out and partner with your community; your imagination is limitless.

About the authors

Clare Bowman enjoys hacking playful interactive installations and co-designing digitally fabricated consumer products. She has exhibited projects at Maker Faire UK, the Victoria and Albert Museum, FutureEverything, and Curiosity Collective gallery shows. Recent work includes "Sands Everything", an interactive hourglass installation interpreting Shakespeare's Seven Ages of Man soliloquy through gravity-controlled animated grains.

Cefn Hoile sculpts open source hardware and software, and supports others doing the same. Drawing on 10 years of experience in R&D for a multinational technology company, he works as a public domain inventor and as an innovation catalyst and architect of bespoke digital installations and prototypes. He is a founder member of the CuriosityCollective.org digital arts group and a regular contributor to open source projects and not-for-profits. Cefn is currently completing a PhD in Digital Innovation at Highwire, University of Lancaster, UK.

Micro Transactions: Gamer's Bane?

Ed Bowkett
23 Jun 2014
6 min read
I play a lot of games (who doesn't these days?). I've grown from my Nintendo 64, where I considered myself a pro at Mario Kart, through to the PS4, but my pride and joy is my PC: lovingly upgraded through the years, my Steam library getting steadily bigger (thanks to Gabe) through the endless sales. Yet, increasingly, I've found myself wanting to be transported back to the days of the Nintendo 64. The reason? Micro transactions in games.

Now, as we get into the grittiness of this blog, a few confessions and a definition. Micro transactions are small sums of money, usually spent online and in-game, on the purchase of virtual goods. Now, on to the confession part: I've partaken in micro transactions in the past. Back at university, when my gaming addiction was all about World of Tanks, gold ammo was all the rage, so I calmly handed over my hard-earned student loan money and paid for ammo without really thinking about what I was doing. More recently, with the release of Hearthstone, I was intrigued by the legendary cards and how to build decks, and generally just wanted to bypass the whole process of slowly working my way towards awesome decks. So I purchased a few packs and boosted some of my decks.

Do I consider these pay-to-win elements? With World of Tanks, I sympathised with those who accused others of paying to win by using premium ammo. At the higher levels, though, gold ammo becomes not only valuable but necessary: everyone at that stage is using it, so ultimately you are doing yourself a disservice by not buying it. This isn't really a good argument, however, as people shouldn't feel it's necessary to buy gold ammo, yet they are forced to because others do.

With Hearthstone, the assumption that it is pay to win is, in my view, wrong. You can obtain the same cards freely by leveling up and by winning packs through the Arena game mode. While you can 'speed up' how many decks you can construct and how quickly you climb the levels in ranked mode, others can eventually obtain the cards for free, so there's not really an element of pay to win in my view. Coupled with the daily quests (win five games with Mage, and so on), it's quite feasible to earn enough gold each month for four and a half packs a week, which translates to 18 packs a month and 216 packs a year: a huge pool of cards from which to build decks. Another benefit of the system is the ability to disenchant cards and craft new ones to add to your decks. Hearthstone can be considered pay to win insofar as the mechanism to pay for more cards exists, but paying is not the only way to win; there are basic cards that can take out the more difficult cards, so it strikes a great balance in my view. While the player who paid for extra cards has an initial advantage, the player who works through the daily quests and levels up will eventually catch up. Pay to win is not a permanent thing, and nor should it be.

Pay to win, while a cash cow for most game companies, does nothing but attract negative feedback to games, and if it were permanent, the benefits of playing would be greatly diminished. One of the great pay-to-win protests I was involved with was in Eve Online during the Monoclegate scandal of 2011, which opened my eyes to the number of companies adopting micro transactions and an element of pay to win in their games. Don't get me wrong.
If you want to fork out money to make a character look aesthetically pleasing, that's fine; it's your money. If a game lets you pay $50 for damage-increasing ammo, that's a whole different kettle of fish: it diminishes both the enjoyment of the game and the value of strategy and effort. What made the 2011 revolt so great is that the entire universe of Eve became united against pay to win and changed the direction of the gaming company, CCP. For me, more players across the gaming world need to do this, as ultimately it's the gamers that companies should be focussed on, not the cash cow that micro transactions offer.

The rise of micro transactions in games has, in my opinion, accompanied the rise of RMTing or RWTing (Real Money Trading or Real World Trading), in which a website offers in-game currency for various games (in Eve, for example, it is ISK) at a set price, against the EULA of the game. To combat this, CCP introduced PLEX (Pilot's License Extension), an in-game item that you can both purchase on partner websites and trade for ISK. Runescape did something similar, and the amount of RWTing decreased. So there is a way of using micro transactions sensibly that benefits everyone and goes some way towards stopping people bending the system illegally.

Ultimately, there are ways of having micro transactions in games without slanting the game towards pay to win. For me, Hearthstone has the balance just right. Sure, the initial purchase of packs gives a head start, but there are opportunities to catch up, and with no card being truly OP (overpowered), as there are always ways to counter the cards placed down, the pay-to-win element is greatly diminished, though not removed. With protests like the summer of 2011 in Eve, micro transactions will continue to be an issue, and it's an area in which gaming companies need to tread carefully. While I appreciate that companies need to make money, they should be aware that pay to win is not the way forward; there needs to be balance, and careful consideration of the consequences.

GDC 2014: A Synopsis

Ed Bowkett
22 Jun 2014
5 min read
GDC 2014 came and went with something of a miniature Big Bang for game developers. Whether it was new updates to game engines or new VR peripherals, this GDC had plenty of sparks, and I am truly excited about the future of game development. In this blog I'm going to cover the main announcements that caught my attention and why I'm excited about them.

The clash of the Titans

Without a shadow of a doubt, the announcement out of GDC 2014 most appealing to me was the arrival of updates to the three main game engines, Unity, Unreal, and CryEngine, all within a short timeframe. All three introduced distinctive pricing models, which will be covered in a separate blog post, but it was like having a second Christmas, particularly for me, with my strong interest in this area both as a hobbyist and in my current role working on game development books. All three offered a long list of changes and massive updates to various parts of their engines, and at some point in the future I hope to dabble in all three and offer some insight into which I preferred and why.

The advancement of the hobbyist developer

Not to be outdone by the big three, smaller tools announced various new features: GameMaker announced a partnership to develop on the PlayStation 4, and Construct 2 announced a similar deal with the Wii U (admittedly before GDC). These are hugely significant for me. Support for the new consoles in tools aimed primarily at the hobbyist in us all opens up a massive market for potential indie developers and for those just trying game development for fun, with the added benefit of the console ecosystem! It means my dream of the game studio I created in Game Dev Tycoon can finally come true.

Would you like a side of immersion with your games?

I might as well be honest here: VR and I don't get along. Not in the sense that we broke up after a long relationship and are no longer speaking; more in the sense that I just don't get it. It probably also has something to do with my motion sickness, but that's less fun. In all seriousness, though, I have no doubt that VR will revolutionize gaming in a big way. From what we've seen with games such as EVE: Valkyrie, VR has a unique opportunity to take gaming beyond the screen, and for the masses of people out there who love video games, that can only be a positive thing. With Sony announcing Project Morpheus, Oculus Rift releasing a new headset, and Microsoft expressing strong interest in developing a headset of its own, the area will only continue to expand, and competition is not a bad thing. The one question I have is whether VR can move beyond the current gimmick of a large, bulky headset and become a tour de force in the gaming community.

Consoles reaching out to indie developers

GDC has always focussed on indie games and development, and this year was no exception, but the love didn't come only from the traditional PC scene. The consoles are beginning to cotton on that indie games are much loved and indeed highly played, and as a result 2014 was the year the main console makers announced efforts to bring more indie games to their platforms while trying to draw more indie developers to their respective consoles. Sony, for example, introduced PhyreEngine at GDC 2013, but plans to extend support further through the partnerships, mentioned earlier in this article, with GameMaker: Studio and MonoGame.
Through these tools and their promotion, Sony hopes to improve relations with indie developers and encourage them to use the Sony ecosystem. Nintendo made a similar announcement, introducing the Nintendo Web Framework, and stressed that it would be willing to help get indie games promoted and marketed properly. These announcements are both significant and positive for the future of game development: from my view, indie games are only going to increase in popularity, and having the ecosystems available for people to develop on the popular consoles can only be a good thing. It will allow those who are not on an expensive budget, or working for an AAA studio, to create games and reach a wider audience. That, I believe, is the ambition of Sony and Nintendo.

So there you have it: the big announcements that grabbed my attention at GDC. While I could have mentioned the Amazon Fire TV and further announcements from Valve, or gone into depth on specific peripherals, I felt an overview of what was announced at GDC was better; analysis of these announcements can be covered in more depth at a later stage. What is evident from this blog, and from GDC 2014 in general, is that game development is an extremely healthy area that is continuously being pushed to its limits and constantly innovated. As an avid fan of games and a mere newbie at game development, this excites me and keeps me interested.

How was GDC 2014 for you? Any issues you thought I should have included? Let me know!

You Want a Job in Publishing? Please Set Fire to this Unicorn in C#.

Sarah
22 Jun 2014
5 min read
Total immersion in a tech puzzle that's over your head. That's part of the Packt induction policy. Even those in an editorial role are expected to do battle with some code to get a feel for why we make the books we make. When I joined the commissioning team I'd heard rumours that the current task involved frontend web development. Now, English major I may be, but I've been building sites since the first time my rural convent school took away our paper keyboards and let us loose on the real thing. (True story.) I apprenticed in frames and div.gifs thankfully lost in the Geocities extinction event. CSS or Java? Maybe Sass or jQuery? I was smug. I was on this.

Assignment: "This is Unity. Build a game. You have four days."

Hang on. What?

The last time I wrote a computer game it was a text adventure in TADS. It turns out amateur game dev technology has moved on somewhat since then. There's nothing like an open brief in a technology you've never even installed before to make you feel cool and in control in a new job. But that was the point, of course. Four days to read, Google, innovate, or pray one's way out of what business types like to call a "pain point". So this is a quick précis of my 32-hour journey from mortifying ignorance to relative success. No, I didn't become the next Flappy Bird millionaire. But I wrote a game, I learned some C#, and I gained a new appreciation for how valuable the guidance of a good author can be as part of the vast toolkit we now have at our fingertips when learning new software.

My completed game had a complicated narrative.

Day one: deciding what kind of game to make

"Make a game" is a really mean instruction. (Yes, boss. I'm standing by that.) "Make an FPS." "Make a pong clone." "Make a game where you're a Tetris block trapped in a Beckett play." All of these are problems to be solved. But "I want to make a game" is about as clear a motivation as "I want to be rich". There are a lot of missing steps there. And it can lead to delusions of scale. Four whole days? I'll write an MMO! I wasted a morning on daydreaming before panicking at lunch and deciding on a side-scroller, on the reasonable logic that those have a beginning, a middle, and an end. I knew from the start that I didn't just want to copy and paste, but I also knew that I couldn't afford to be too precious about my plans before learning the tool. The sheer volume of information on Unity out there is overwhelming. Eventually I started with a book on 2D games in Unity. By the end of the day, I had a basic game. It wasn't my game, but I'd learned enough along the way to start thinking about what I could do with Unity.

Day two: learning the code

By mid-morning of day two I'd hit a block. I don't know C#. I've never programmed in C. But if I wanted to do this properly, I was going to have to write my own code. Terry Norton wrote us a book on learning C# with Unity. For me, a day spent working with one clear voice explaining the core concepts before I experimented on my own was exactly what I needed. Day two was given over to learning how to build a state machine from Norton's book. State machines give you strong and flexible control over the narrative of a game. If nothing else ever comes from this whole exercise, that is a genuinely cool thing to be able to do in Unity. Eight hours later I had a much better feel for what the engine could do and how the language worked.
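For the curious, here is roughly what the pattern looks like. To be clear, this is my own bare-bones sketch, not code from Norton's book: an interface for states, a manager that delegates Unity's Update call to whichever state is active, and a menu state that hands over to a play state.

using UnityEngine;

// Each state owns the behaviour for one phase of the game (menu, play, game over...)
public interface IGameState
{
    void Enter();
    void StateUpdate(); // named StateUpdate to avoid clashing with MonoBehaviour.Update
}

// Attach this to an empty GameObject; it runs exactly one state at a time
public class StateManager : MonoBehaviour
{
    private IGameState activeState;

    void Start()
    {
        SwitchState(new MenuState(this));
    }

    public void SwitchState(IGameState newState)
    {
        activeState = newState;
        activeState.Enter();
    }

    void Update()
    {
        if (activeState != null)
        {
            activeState.StateUpdate();
        }
    }
}

// Waits on the menu until the player presses space, then starts play
public class MenuState : IGameState
{
    private readonly StateManager manager;
    public MenuState(StateManager manager) { this.manager = manager; }

    public void Enter() { Debug.Log("Entered menu"); }

    public void StateUpdate()
    {
        if (Input.GetKeyDown(KeyCode.Space))
        {
            manager.SwitchState(new PlayState());
        }
    }
}

public class PlayState : IGameState
{
    public void Enter() { Debug.Log("Game on"); }
    public void StateUpdate() { /* gameplay logic lives here */ }
}

Adding a pause screen or a game-over state later never touches the rest of the loop, which is exactly the kind of control over a game's narrative that made it feel worth the effort.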
Day three: everything is terrible

Day three is the Wednesday of which I do not speak. Where did the walls go? Everything is pink. Why don't you go left when I tell you to go left? And let's not even get started on my abortive attempts to simultaneously learn to model game objects in Maya. For one hour it looked like all I had to show for the week's work was a single grey sphere that would roll off the back of a platform and then fall through infinite virtual space until my countdown timer triggered a change of state and it all began again. This was an even worse game than the Beckett-Tetris idea.

Day four: bringing it together

Even though day three was a nightmare, there was a structure to the horror. Because I'd spent Monday learning the interface and Tuesday building a state machine, I had some idea of where the problems lay and what the solutions might look like, even if I couldn't solve them alone. That's where the brilliance of online tech communities comes in, and the Unity community is pretty special. To my astonishment, step by step, I fixed each element with the help of my books, the documentation, and Unity Answers. I ended up with a game that worked.

Day five: a lick of paint

I cheated, obviously. On Friday I came in early and stole a couple of extra hours to swap my polygons for some sketched sprites and add some splash pages. Now my game worked and it was pretty. Check it out:

Green orbs increase speed, red orbs slow you down, and the waterfall douses the flame. Complex.

It works. It's playable. If I have the time, there's room to extend it to more levels. I even incidentally learned some extra skills along the way, like animating the sprites properly and adding particle streams for extra flair. Bursting with pride, I showed it to our Category Manager for Game Dev. He showed me Unicorn Dash. That game is better than my game. Well, you can't win 'em all.

Why Phaser is a Great Game Development Framework

Alvin Ourrad
17 Jun 2014
5 min read
You may have heard about the Phaser framework, which is fast becoming popular and is considered by many to be the best HTML5 game framework out there at the moment. Follow along in this post, where I will go into some detail about what makes it so unique.

Why Phaser?

Phaser is a free open source HTML5 game framework that allows you to make fully fledged 2D games in a browser with little prior knowledge about either game development or JavaScript for the browser in general. It was built and is maintained by a UK-based HTML5 game studio called Photon Storm, directed by Richard Davey, a very well-known Flash developer and now full-time HTML5 game developer. His company uses the framework for all of its games, so the framework is updated daily and is thoroughly tested. The fact that the framework is updated daily might sound like a double-edged sword, but now that Phaser has reached its 2.0 version, there won't be any changes that break compatibility, only new features. This means you can download Phaser and be fairly sure that your code will work in future versions of the framework.

Phaser is beginner friendly!

One of the main strengths of the framework is its ease of use, and this is probably one of the reasons why it has gained such momentum in such a short amount of time (the framework is just over a year old). Phaser abstracts away all of the complicated math that is usually required to make a game by providing you with more than just game components. It lets you skip the stage of wondering how to implement a given feature and what level of calculus it requires. With Phaser, everything is simple.

For instance, say you want to shoot something using a sprite or the mouse cursor. Whether it is for a space invader or a tower defense game, here is what you would normally have to do to your bullet object (the following example uses pseudo-code and is not tied to any framework):

var speed = 50;
var vectorX = mouseX - bullet.x;
var vectorY = mouseY - bullet.y;

// if you were to shoot a target, not the mouse
vectorX = targetSprite.x - bullet.x;
vectorY = targetSprite.y - bullet.y;

var angle = Math.atan2(vectorY, vectorX);
bullet.x += Math.cos(angle) * speed;
bullet.y += Math.sin(angle) * speed;

With Phaser, here is what you would have to do:

var speed = 50;
game.physics.arcade.moveToPointer(bullet, speed);
// if you were to shoot a target:
game.physics.arcade.moveToObject(bullet, target, speed);

The fact that the framework was used in a number of games during the latest Ludum Dare (a popular Internet game jam) reflects this ease of use. There were about 60 Phaser games at Ludum Dare, and you can have a look at them here.

To get started with learning Phaser, take a look at the Phaser examples, where you'll find over 350 playable examples. Each example includes a simple demo explaining how to do specific things with the framework, such as creating particles, using the camera, tweening elements, animating sprites, using the physics engine, and so on. A lot of effort has been put into these examples, and they are all maintained, with new ones constantly added by the creator and the community.
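To show how little glue those two lines need around them, here is a minimal, self-contained sketch of a Phaser 2.x game that fires a bullet towards the pointer on every click. The 'bullet' asset key and file path are placeholders of my own, not from any official example:

// Minimal Phaser 2.x sketch: click to fire a bullet towards the pointer
var game = new Phaser.Game(800, 600, Phaser.AUTO, '', { preload: preload, create: create });

function preload() {
    // Placeholder asset - swap in any small image you have
    game.load.image('bullet', 'assets/bullet.png');
}

function create() {
    // Arcade is Phaser's lightweight default physics system
    game.physics.startSystem(Phaser.Physics.ARCADE);

    game.input.onDown.add(function () {
        // Spawn a bullet in the centre and let Phaser handle the vector math
        var bullet = game.add.sprite(game.world.centerX, game.world.centerY, 'bullet');
        game.physics.arcade.enable(bullet);
        game.physics.arcade.moveToPointer(bullet, 300);
    });
}

All of the trigonometry from the pseudo-code above is hidden behind that single moveToPointer call.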
Phaser doesn't need any additional dependencies

When using a framework, you will usually need an external device library, another for math and physics calculations, a time management engine, and so on. With Phaser, everything is provided. It gives you a very exhaustive device class that you can use to detect the browser's capabilities; it is integrated into the framework, used extensively internally, and used in games to manage scaling.

Yeah, but I don't like the physics engine…

Physics engines are usually a major feature in a game framework, and that is a fair point, since physics engines often have their own vocabulary and their own ways of dealing with and measuring things, and it's not always easy to switch from one to another. The physics engines were a really important part of the Phaser 2.0 release. As of today, there are three physics engines fully integrated into Phaser's core, with the possibility of creating a custom build of the framework to avoid bloated source code. A physics management module was also created for this release. It dramatically reduces the pain of making your own or an existing physics engine work with the framework. This was the main goal of the feature: to make the framework physics-agnostic.
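To make that concrete, here is a rough sketch (again Phaser 2.x, inside the same kind of create callback as in the skeleton above, with a placeholder 'crate' sprite key) of switching from the default Arcade system to the heavier P2 engine, plus a taste of the device class mentioned earlier:

function create() {
    // Start the full-featured P2 system instead of the default Arcade one
    game.physics.startSystem(Phaser.Physics.P2JS);

    // 'crate' is a placeholder key - load it in preload() first
    var crate = game.add.sprite(200, 200, 'crate');
    game.physics.p2.enable(crate); // give the sprite a P2 physics body

    // The built-in device class can inform decisions such as scaling on mobile
    if (!game.device.desktop) {
        game.scale.scaleMode = Phaser.ScaleManager.SHOW_ALL;
    }
}

The rest of your game code stays the same; only the system you start and the enable call change, which is the payoff of the physics-agnostic design.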
Conclusion

Photon Storm has put a lot of effort into its framework, and as a result it has become widely used by both hobbyists and professional developers. The HTML5 game developers forum is always full of new topics, and the community is very helpful as a whole. I hope to see you there.

Internet of Things or Internet of Thieves

Oli Huggins
07 Jan 2014
3 min read
While the Internet of Things (IoT) sounds like some hipster start-up from the valley, it is in actual fact sweeping the technology world as the next big thing, and is the topic of conversation (and perhaps development) among the majority of the major-league tech titans. Simply put, the IoT is the umbrella term for IP-enabled everyday devices with the ability to communicate over the Internet. Whether that is your fridge transmitting temperature readings to your smartphone, or your doorbell texting you once it has been rung, anything with power (and even some things without) can be hooked up to the World Wide Web and accessed anywhere, anytime.

This will of course have a huge impact on consumer tech, with every device under the sun being designed to work with your smartphone or PC, but what's worrying is how all of this is going to be kept secure. While there are a large number of industry-leading brands we can all trust (sometimes), there is an even bigger number of companies shipping devices out of China at extremely low production (and quality) costs. This prompts the question: if a company's mantra is low-cost products and mass sales, does it have the time, money (or care) to maintain an experienced security team and the infrastructure to ensure these devices are secure? I'm sure you know the answer to that question.

Unconvinced? How about the TRENDnet cams back in 2012? The basic gist was that a flaw in the latest firmware meant that if you added /anony/mjpg.cgi to the end of one of the cams' IP addresses, you would be left with a live stream from the IP camera. Scary stuff (and some funny stuff), but this was a huge mistake made by what seems to be a fairly legitimate company. Imagine this on a much larger scale, with many more devices, developed by much more dubious companies. Want a more up-to-date incident? How about a hacker gaining access to a Foscam IP camera that a couple was using to watch over their child, and screaming "Wake up, baby! Wake up, baby!" I'll leave you to read more about that.

With the suggestion that by 2020 anywhere between 26 and 212 billion devices will be connected to the Internet, this opens up an unimaginable number of attack vectors, which will be abused by the black hats among us. Luckily, chip developers such as Broadcom have seen the payoff here by developing chips with a security infrastructure designed for wearable tech and the IoT. The new BCM20737 SoC provides "Bluetooth, RSA encryption and decryption capabilities, and Apple's iBeacon device detection technology", adding another layer of security that will be of interest to most tech developers. Whether the cost of such technology will appeal to all, though, is another thing altogether; low-cost tech developers will just not bother.

Now, the threat of someone hacking your toaster and burning your toast is not something you would worry about, but imagine healthcare implants or home security being given the IoT treatment. I'm not sure I'd want someone taking control of my pacemaker or having a skeleton key to my house! Security is one of the major barriers to total adoption of the IoT, but it is also the only barrier that can be jumped over and forgotten about by less law-abiding companies. If I were to give anyone any advice before "connecting", it would be to spend your money wisely, don't go cheap, and avoid putting yourself in compromising situations around your IoT tech.