
Tech Guides

851 Articles

WebGL in Games

Alvin Ourrad
05 Mar 2015
5 min read
In this post I am not going to show you any game engine, framework, or library. This is a more general write-up that aims to give you an overview of the technology that powers some of those frameworks: WebGL.

Introduction

Back in 2011, 3D in the browser was not really a thing outside of the realm of Flash, and websites didn't make much use of the canvas element like they do today. That year, the Khronos Group started an initiative called WebGL: a project to create a royalty-free, standard, cross-browser API implementing OpenGL ES 2.0. Even though the canvas element can only draw 2D primitives, it is actually possible to render 3D graphics at a decent speed with it. By making clever use of perspective and a lot of optimizations, Mr.doob with THREE.js managed to create a 3D canvas renderer, which quite frankly offers stunning results, as you can see here and there. But even though canvas can do the job, its speed and level of hardware acceleration are nothing compared to what WebGL benefits from, especially when you take into account the browsers on lower-end devices such as our mobile phones. Fast-forward in time: when Apple officially announced support for WebGL in mobile Safari in iOS 8, the main goal was reached, since most recent browsers were now able to use this 3D technology natively.

Can I have 3D?

It's very likely that you can now. There are still some graphics cards that were not made to support WebGL, but global support is very good these days. If you are interested in learning how to make 3D graphics in the browser, I recommend you do some research about a library called THREE.js. This library has been around for a while and is usually what most people choose to get started with, as it is just a 3D library and nothing more. If you want to interact with the mouse, or create a bowling game, you will have to use some additional plugins and/or libraries.
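To give you a feel for what that looks like, here is a minimal THREE.js sketch - assuming the three.js script is already loaded on the page - that spins a cube in front of the camera:

    // Set up a scene, a camera, and a WebGL renderer attached to the page.
    var scene = new THREE.Scene();
    var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
    var renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    // A one-unit green cube with a basic (unlit) material.
    var cube = new THREE.Mesh(
      new THREE.BoxGeometry(1, 1, 1),
      new THREE.MeshBasicMaterial({ color: 0x00ff00 })
    );
    scene.add(cube);
    camera.position.z = 5;

    // Render loop: rotate the cube a little on every animation frame.
    (function animate() {
      requestAnimationFrame(animate);
      cube.rotation.x += 0.01;
      cube.rotation.y += 0.01;
      renderer.render(scene, camera);
    })();

A dozen lines of JavaScript and the browser is doing hardware-accelerated 3D - that is the appeal in a nutshell.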
3D in the gaming landscape

As the support and awareness around WebGL started rising, some entrepreneurs and companies saw it as a way to create a business or wanted to take part in this 3D adventure. As a result, several products are available to you if you want to delve into 3D gaming.

PlayCanvas

This company likes saying that they have re-created "Unity in the browser", which is not far from the truth, really. Their in-browser editor is very complete and mimics the entity-component system that exists in Unity. However, I think the best thing they have created among their products is their real-time collaboration feature: it allows you to work on a project with a team and instantly updates the editor and the visuals for everyone currently viewing it. The whole engine was also open sourced a few months ago, which has given us beautiful demos like this one: http://codepen.io/playcanvas/pen/ctxoD. Feel free to check out their website and give their editor a try: https://playcanvas.com

Goo technology

Goo technology is an environment that encompasses a 3D engine (the Goo engine), an editor, and a development environment. Goo Create is also a very nicely designed 3D editor in the browser. What I really like about Goo is their cartoony mascot, "Goon", who appears in a lot of their demos and branding and adds a lot of fun and humanity to them. Have fun watching this little dude in his adventures and learn more about the company here: http://www.goocreate.com

Babylon.js

I wasn't sure if this one was worth including. Babylon.js is a competitor to THREE.js, created by Microsoft, that doesn't want to be "just a rendering engine" but adds some useful components out of the box, such as camera controls, a physics engine, and some audio capabilities. Babylon is relatively new and definitely not as battle-tested as THREE.js, but its creators have built a set of tools that help you get started with it that I like, namely the playground and the shader editor.

2D?

There is a major point that I haven't mentioned yet: WebGL has been used in more 2D games than you might imagine. After all, there is no reason why 2D games shouldn't have this level of hardware acceleration. The first studios to use WebGL for their 2D needs were Rovio and ZeptoLab, for the JavaScript ports of their respective multi-million-dollar hits, Angry Birds and Cut the Rope. When pixi.js came out, a lot of people started using it for their games, and Phaser, the major HTML5 game framework, uses it too.

Play!

This is the end of this post. I hope you enjoyed it and that you want to get started with these technologies. There is no time to waste -- it's all in your hands.

About the author

Alvin Ourrad is a web developer fond of the web and the power of open standards. A lover of open source, he likes experimenting with interactivity in the browser. He currently works as an HTML5 game developer.


A Quick Video Introduction to ReactJS

Simon Højberg
27 Feb 2015
1 min read
Get your first taste of React.js - which some are saying is going to beat out the Angular JavaScript framework - with this great introductory tutorial video. Expert author Simon Højberg takes us through the basics of React.js by showing us how to perform a classic 'Hello, World' with the library that's going to revolutionize UIs.

Sample Code

You can find the sample code on Simon's GitHub repository.

About The Author

Simon Højberg is a Senior UI Engineer at Swipely in Providence, RI. He is the co-organizer of the Providence JS Meetup group and a former JavaScript instructor at Startup Institute Boston. He spends his time building functional user interfaces with JavaScript and hacking on side projects like cssarrowplease.com. Simon recently co-authored "Developing a React Edge."

Learn how to set up testing in React.js with our article. If you want to explore the entire JavaScript ecosystem, and find out how ReactJS fits into it, visit our JavaScript page for our latest titles and free content.


An Introduction to PhoneGap

Robi Sen
27 Feb 2015
9 min read
This is the first of a series of posts that will focus on using PhoneGap, the free and open source framework for creating mobile applications with web technologies such as HTML, CSS, and JavaScript, which will come in handy for game development. In this first article, we will introduce PhoneGap and build a very simple Android application using PhoneGap, the Android SDK, and Eclipse. In a follow-on article, we will look at how you can use PhoneGap and PhoneGap Build to create iOS, Android, BlackBerry, and other apps from the same web source code. In future articles, we will dive deeper into the various tools and features of PhoneGap that will help you build great mobile applications that perform and function just like native applications.

Before we get into setting up and working with PhoneGap, let's talk a little bit about what PhoneGap is. PhoneGap was originally developed by a company called Nitobi, which was purchased by Adobe in 2011. When Adobe acquired PhoneGap, it donated the project's code to the Apache Software Foundation, which renamed the project Apache Cordova. Apache Cordova is the core code base that Adobe PhoneGap draws from. While both tools are similar and open source, PhoneGap has additional capabilities to integrate tightly with Adobe's enterprise products, along with a paid-for Enterprise version offering greater Adobe product integration, management tools, and full support and training. Finally, Adobe offers PhoneGap Build, a free web-based service that greatly simplifies building Cordova/PhoneGap projects, especially for those needing to build for many devices. We will look at PhoneGap Build in a future post.

Getting Started

For this post, to save space, we are going to jump right into getting started with PhoneGap and Android, and spend a minimal amount of time on other configurations. To follow along, you need to install Node.js, PhoneGap, Apache Ant, Eclipse, the Android Developer Tools (ADT) for Eclipse, and the Android SDK. We'll be using Windows 8.1 for development in this post, but the instructions are similar regardless of the operating system. Installation guides for any major OS can be found at each of the links provided for the tools you need to install.

Eclipse and the Android SDK

The easiest way to install the Android SDK and the Android ADT for Eclipse is to download the Eclipse ADT bundle here. Just downloading the bundle and unpacking it to a directory of your choice will include everything you need to get moving. If you already have Eclipse installed on your development machine, go to this link instead, which will let you download the SDK and the Android Development Tools along with instructions on how to integrate the ADT into Eclipse. Even if you already have Eclipse, I would recommend downloading the Eclipse ADT bundle and installing it into its own unique environment, since the ADT plugin can sometimes have conflicts with other Eclipse plugins.

Making sure Android tooling is set up

One thing you will need to do, whether you use the Eclipse ADT bundle or not, is make sure that the Android tools are added to your path. This is because PhoneGap uses the Android Development Tools and Android SDK to build and compile the Android application.
The easiest way to make sure everything is added to your path is to edit your environment variables. To do that, search for "Edit Environment" and select Edit the system environment variables. This will open the System Properties window. From there, select Advanced and then Environment Variables, as shown in the next figure. Under System Variables, select Path and then Edit. Now you need to add sdk\platform-tools and sdk\tools to your path, as shown in the next figure. If you used the Eclipse ADT bundle, your SDK directory should be of the form C:\adt-bundle-windows-x86_64-20131030\sdk. If you cannot find your Android SDK, search for your ADT. In our case, the two directory paths we add to the Path variable are C:\adt-bundle-windows-x86_64-20131030\sdk\platform-tools and C:\adt-bundle-windows-x86_64-20131030\sdk\tools. Once you're done, select OK, but don't exit the Environment Variables screen just yet, since we will need it again when installing Ant.

Installing Ant

PhoneGap makes use of Apache Ant to help build projects. Download Ant from here and make sure to add its bin directory to your path. It is also good to set the ANT_HOME environment variable. To do that, create a new variable in the Environment Variables screen under System Variables called ANT_HOME and point it to the directory where you installed Ant. For more detailed instructions, you can read the official install guide for Apache Ant here.

Installing Node.js

Node.js is a development platform built on Chrome's JavaScript runtime engine that can be used for building large-scale, real-time, server-based applications. Node.js provides a lot of the command-line tools for PhoneGap, so to install PhoneGap, we first need Node.js. Unix, OS X, and Windows users can find installers as well as source code on the Node.js download site. For this post, we will be using the Windows 64-bit installer, which you should be able to double-click and install. Once you're done installing, you should be able to open a command prompt, type npm --version, and see something like this:

Installing PhoneGap

Once you have Node.js installed, open a command line and type npm install -g phonegap. Node will now download and install PhoneGap and its dependencies, as shown here:

Creating an initial project in PhoneGap

Now that you have PhoneGap installed, let's use the command-line tools to create an initial PhoneGap project. First, create a folder where you want to store your project. Then, to create a basic project, all you need to do is type phonegap create mytestapp, as shown in the following figure. PhoneGap will now build a basic project with a deployable app. Go to the directory you are using for your project's root directory. You should see a directory called mytestapp, and if you open that directory, you should see something like the following:

Now look under platforms/android and you should see something like what is shown in the next figure, which is the directory structure that PhoneGap made for your Android project. Make sure to note the assets directory, which contains the HTML and JavaScript of the application, and the cordova directories, which contain the code that ties Android's APIs to PhoneGap/Cordova's API calls.
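As a quick orientation inside that assets directory: the app's JavaScript waits for Cordova's deviceready event before touching any device APIs. Here is a minimal sketch of that pattern (the function name and log message are just illustrative):

    // Wait for PhoneGap/Cordova to finish initializing before using device APIs.
    document.addEventListener('deviceready', onDeviceReady, false);

    function onDeviceReady() {
        // From this point on it is safe to call PhoneGap APIs.
        console.log('PhoneGap is ready');
    }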
Now let's import the project into Eclipse. Open Eclipse, select Create a New Project, and choose Android Project from Existing Code. Browse to your project directory, select the platforms/android folder, and select Finish, like this:

You should now see the mytestapp project, but you may see a lot of little red X's and warnings about the project not building correctly. To fix this, all you need to do is clean and build the project again, like so: right-click on the project directory and select Properties. In the resulting dialog, select Android from the navigation pane. For the project build target, select the highest Android API level you have installed and click OK. Then select Clean from the Project menu. This should correct all the errors in the project; if it does not, you may need to select Build again if the project does not build automatically.

Now you can finally launch your project. To do this, right-click on the HelloWorld project and select Run as and then Android application. You may be warned that you do not have an Android Virtual Device, and Eclipse will launch the AVD manager for you. Follow the wizard and set up an AVD image for your API. You can do this by selecting Create in the AVD manager and copying the values you see here:

Once you have built the image, you should be able to launch the emulator. You may have to right-click on the HelloWorld directory again and select Run as and then Android application. Select your AVD image, and Eclipse will launch the Android emulator and push the HelloWorld application to the virtual image. Note that this can take up to 5 minutes! In a later post, we will look at deploying to an actual Android phone, but for now, the emulator will be sufficient. Once the Android emulator has started, you should see the Android phone home screen. You will have to click-and-drag on the home screen to open it, and you should see the phone launch pad with your PhoneGap HelloWorld app. If you click on it, you should see something like the following:

Summary

That probably seemed like a lot of work, but now that you are set up to work with PhoneGap and Eclipse, you will find that the workflow is much faster when we start to build a simple application. In this post, you learned how to set up PhoneGap, how to build a simple application structure, how to install and set up Android tooling, and how to integrate PhoneGap with the Eclipse ADT. In the next post, we will get into making a real application, look at how to update and deploy code, and see how to push your applications to a real phone.

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus-year career in technology, engineering, and research has led him to work on cutting-edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as Under Armour, Sony, Cisco, IBM, and many others to help build new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.


5 Rising Stars (you may not have been watching) for 2015

Sarah C
16 Feb 2015
3 min read
At the end of the year, we ran through the things that made the biggest difference for front-end developers in 2014. But what about the future? In 2015, what's the state of play for some of the most important new (and old) technology in web development, and what is there to look forward to? Here's the rundown of some projects in which we're invested for the coming twelve months.

Node/io.js

At the end of last year, Node forked due to a difference of opinion on how to run things. Some thought this spelled disaster. In our opinion? Node's a great project that's only going to get better. The io.js branch will add innovation, whether the projects continue to exist in parallel or merge again downstream. (The first official release in January already looks meaty.) Perhaps even more important than the tech itself, the future of Node/io.js will add to the annals of open-source history. If the two projects can reach an amiable consensus, we'll have a great exemplar for ethical open-source and enterprise interdependence.

Neo4j

Neo4j is the datastore that stole the show for anyone trying to work with social data. Neo4j's graph database fundamentally changes the way we think about and use relationship modelling. This isn't a blip or a quaint hobby - the kind of information that graph databases can deliver is changing the way we use the web. Neo4j's devs are anticipating that a quarter of all enterprises will be using tech like theirs within three years. This year they're investing the $20m they just raised in increasing adoption, and we're expecting to see plenty of developers investing their time in learning the ropes.

Angular 2.0

The news on Angular 2.0 is so big, we've got a whole entire post on it. Go see what Ed G has to say about one of the biggest things to hit JavaScript since jQuery.

PHP

About once every minute somebody stops at my desk, crosses the street, or books a round ticket from Australia to tell me PHP is dead. They're entirely wrong. In fact, in 2015 we should be getting our first look at PHP 7. (We're skipping 6; best not to ask.) 2015 is set to be a good year for PHP. Now that the specifications are in place, the devs are ready to roll out new features for a new generation of PHP: there'll be a whole new core for a faster, more modern take on the classic language.

GruntJS

A smaller player on this field, but one that's been punching way above its weight for some time - will 2015 give us GruntJS 1.0? Grunt is a JavaScript task runner. It doesn't even have its own Wikipedia page right now, and yet we're hearing nothing but enthusiasm for how much it helps with workflow. Grunt built a better mousetrap, and the world has started beating a path to its door. Now Grunt's developers are working on making everything more modular, with easier integration and dependency management. We're as excited for their future as they are.
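If you haven't met Grunt yet, a Gruntfile is plain JavaScript. Here is a minimal sketch - the task, file paths, and plugin choice (grunt-contrib-concat) are illustrative, not prescriptive:

    // Gruntfile.js - a minimal, illustrative configuration.
    module.exports = function (grunt) {
      grunt.initConfig({
        concat: {
          dist: {
            src: ['src/*.js'],   // hypothetical source files
            dest: 'dist/app.js'  // hypothetical build output
          }
        }
      });

      // Assumes grunt-contrib-concat has been installed via npm.
      grunt.loadNpmTasks('grunt-contrib-concat');
      grunt.registerTask('default', ['concat']);
    };

Running grunt then concatenates everything in src/ into a single file - the kind of small workflow win that has people so enthusiastic.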


2014: The Year of the Hackathon to Remember

Julian Ursell
13 Feb 2015
4 min read
2014 was the year Kim Jong-un watched you undress through your laptop's web camera. Well, not quite, but at times it was almost as worrying. It did see some big plays from the black hats as they set out to pillage, obstruct, and generally embarrass everyone from major corporations to entire countries, as well as hacktivists intervening as crusaders of social justice. The hacks ranged from petty DDoS attacks to politically charged hacking threats to all-out sex offences. There was also cross-fighting between hackers. This very real phenomenon has been the bane of security professionals' existence, permeating the international consciousness in perhaps the most prominent hacking 'wave' in recent memory. One can't deny the position of power the wily hacker possesses right now, and we saw this in many different ways throughout the last year. Was there really a grand context behind it all? Let's look back at the events of 2014, and what they meant, if anything at all.

It wasn't exactly a hack, but the Heartbleed vulnerability in the OpenSSL security library was one of the major spooks of the year, prompting a hysteria of password changes and security experts on the breakfast news. Several major websites and applications using an implementation of OpenSSL were affected to varying degrees, including Facebook, Instagram, Netflix, and Gmail, although what it amounted to erred more often than not on cautionary advice rather than ultimatums on password changes, as many sites rapidly rolled out security patches. The majority seemed to have experienced no serious security breaches or malicious activity, seemingly catching the bug before hacker groups could really go to town. However, perhaps the strongest underlining to the whole debacle was the resonance it had in the open versus closed source security software debate. That a vulnerability lurked within the code of OpenSSL for two years was a hugely embarrassing oversight, bookended by the flood of attacks on servers made possible by the Shellshock bug at the end of the year. In the immediate future there will be a long, hard look at open source security, ensuring that the way in which the software is developed is in itself secure, and weighing up a greater potential interaction between open source and corporate funding.

August was a turbulent month for hacking, for hugely different reasons and on separate parts of the spectrum (it ain't just black and white, right?). The celebrity hacking scandal affectionately dubbed 'the Fappening' was responsible for the theft and leak of explicit media belonging to several well-known celebrities, and was a big kick in the teeth for Apple's cloud storage service, iCloud. The internet was awash with panic as well as guidance about securing iCloud, putting the scare into people that malicious hackers could reach past the security mechanisms of technological corporations as sophisticated as Apple. It was also the month where hacktivism played a powerful role in real-world, unfolding events, as Anonymous intervened in the tense stand-off in Ferguson, USA, following the shooting of Michael Brown. As is Anonymous' typical modus operandi, they threatened the police with the release of sensitive information to the public (a method known as doxing), should they not reveal the officer responsible for the killing.
However, in the pursuit of social, moral, and political justice, Anonymous had to deal with a splinter in its own ranks, as a member was found to have misidentified the officer, forcing the group to swiftly denounce the loose cannon and its misinformation. Last year we again saw hacking wielded as an activist vessel in defence of justice, demonstrating how cyberspace has become a significant dimension in real-world events.

Finally, on to Christmas. Lizard Squad had their fun making Xbox and PlayStation gamers cry (subsequently triggering a war with Anonymous), but the obvious big story was the furore over Sony's The Interview, as its depiction of the North Korean leader's demise wasn't taken with the light-hearted grace that I'm sure was previously shown for Kim Jong-il's even-handed representation in Team America. Sufficiently terrified by a threat in broken English, and following the overture of one of the worst corporate network hacks in history, Sony backed down and pulled the film, then partially reneged by making it available through VOD, even prompting some to suggest the whole thing was a deliberate conspiracy (which was of course a whole load of hash).

Anonymous, the Guardians of Peace, Lizard Squad: 2014 was the year the hackers really pushed all the buttons and got (for the most part) what they wanted. How the world deals with the black hats, the white hats, the hacktivists, and the trouble-makers in the future will be intriguing for sure.


Understanding Kubernetes: Google’s Open Docker Orchestration Engine

Ryan Richard
13 Feb 2015
6 min read
In April 2014 the first DockerCon took place to a packed house. It became clear that Docker had the right recipe to become a game changer, but one thing was missing: orchestration. Many companies were attempting to answer the question, "How do I run hundreds or thousands of containers across my infrastructure?" A number of solutions emerged that week: Kubernetes from Google, geard from Red Hat, fleet from CoreOS, Deis, Flynn, ad infinitum. Even today there are well over 20 open source solutions for this problem, but one has emerged as an early leader: Kubernetes (kubernetes.io). Besides being built by Google, it has a few features that make it the most interesting solution: pods, labels, and services. We'll review these features in this blog.

Along with the entire Docker ecosystem, Kubernetes is written in Go, open source, and under heavy development. As of today, it can be deployed on GCE, Rackspace, VMware, Azure, AWS, DigitalOcean, Vagrant, and others with scripts located in the official repository (https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster). Deploying Kubernetes is generally done via SaltStack, but there are a number of deployment options for CoreOS as well.

Kubernetes Paradigms

Let's take a look at pods, labels, and services.

Pods

Pods are the primary unit that Kubernetes will schedule into your cluster. A pod may consist of one or more containers. If you define more than one container, they are guaranteed to be co-located on a system, allowing you to share local volumes and networking between the containers. Here is an example of a pod definition with one container running a website, presumably with an application already in the image. (These specs are from the original API, which is under heavy development and will change.)

<code - json>
{
  "id": "mysite",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "mysite",
      "containers": [{
        "name": "mysite",
        "image": "user/mysite",
        "cpu": 100,
        "ports": [{
          "containerPort": 80
        }]
      }]
    }
  },
  "labels": {
    "name": "mysite"
  }
}
</code - json>

In reality you probably want more than one of these containers running, in case of a node failure or to help with load. This is where the ReplicationController paradigm comes in. It allows a user to run multiple replicas of the same pod. Data is not shared between replicas; instead, it allows many instances of a pod to be scheduled in the cluster.

<code - json>
{
  "id": "mysiteController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 2,
    "replicaSelector": {"name": "mysite"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "mysiteController",
          "containers": [{
            "name": "mysite",
            "image": "user/mysite",
            "cpu": 100,
            "ports": [{"containerPort": 80}]
          }]
        }
      },
      "labels": {"name": "mysite"}
    }
  },
  "labels": {"name": "mysite"}
}
</code - json>

In the above template we took the same pod but converted it to a ReplicationController. The "replicas" directive says that we want two of these pods running at all times. Increasing the number of containers is as simple as raising the replica value.

Labels

Conceptually, labels are similar to standard metadata tags, except that they are arbitrary key/value pairs. If you want to label your pod "environment: staging" or "name: redis-slave" or both, go right ahead. Labels are primarily used by services to build powerful internal load-balancing proxies, but can also be used to filter output from the API.
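As an illustrative fragment (not a complete pod definition), a pod carrying both of the labels mentioned above would simply declare:

<code - json>
"labels": {
  "environment": "staging",
  "name": "redis-slave"
}
</code - json>

A service or API query can then select on either key independently - for example, everything labelled "environment: staging", regardless of name.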
Services

Services are user-defined "load balancers" that are aware of container locations and their labels. When a user creates a service, a proxy is created on the Kubernetes nodes that will seamlessly proxy to any container that has the selected labels assigned.

<code - json>
{
  "id": "mysite",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 10000,
  "selector": {
    "name": "mysite"
  },
  "labels": {
    "name": "mysite"
  }
}
</code - json>

This is a basic example that creates a service listening on port 10000, which will proxy to any pod that fulfills the "selector" requirement of "name: mysite". If you have one container running, it will get all the traffic; if you have three, they will each receive traffic. If you grow or shrink the number of containers, the proxies will be aware and balance accordingly. Not all of these concepts are unique to Kubernetes, but it brings them together seamlessly. The future is also interesting for Kubernetes because it can act as a broker to the cloud provider for your containers. Need a static IP for a special pod? It could get that from the cloud provider. Need another server for additional resources? It could provision one and add it to the cluster.

Google Container Engine

It wasn't a far stretch to see that if this project was successful, Google would run it as a service for their cloud. Indeed, they've announced Google Container Engine, based on Kubernetes (https://cloud.google.com/container-engine/). This also marks the first time Google has built a tool in the open and productized it. A successful product may mean that we see more day-one open source projects from Google, which is certainly intriguing.

AWS and Docker Orchestration

Amazon announced their container orchestration service at re:Invent, and this blog wouldn't be complete without a quick comparison between the two. Amazon allows you to co-locate multiple Docker containers on a single host, but the similarities with Kubernetes stop there. Their container service is proprietary, which isn't a surprise. They're using links to connect containers on the same host, but there is no mention of smart proxies inside the system. There isn't a lot of integration with the rest of the AWS services (for example, load balancing) yet, but I expect that to change pretty quickly.

Summary

In this post, we touched on why Kubernetes exists, why it's a unique leader in the pack, a bit on its paradigms, and finally a quick comparison with the AWS EC2 Container Service. The EC2 Container Service will get a lot of attention, but in my opinion Kubernetes is the Docker orchestration technology to beat right now, especially if you value open source. If you're wondering which direction Docker is heading, make sure to keep an eye out for Docker Host and Docker Cluster. Lastly, I hope you recognize that we are at the beginning stages of a new deployment and operational paradigm that leverages lightweight containers. Expect this space to change and evolve rapidly.

For more Docker tutorials and even more insight and analysis, visit our dedicated Docker page - find it here.

About the author

Ryan Richard is a systems architect at Rackspace with a background in automation and OpenStack. His primary role revolves around research and development of new technologies. He added the initial support for the Rackspace Cloud into the Kubernetes codebase. He can be reached at @rackninja on Twitter.

What I want to happen in Game Development in 2015

Ed Bowkett
11 Feb 2015
5 min read
2014 came and went with some great announcements, some not-so-good times, and some events that should just remain in 2014. It was another good time for good games being released, but also a time when rushed games were a common theme. This blog will focus specifically on what I want to see heading into 2015. Again, it's an opinion piece, so feel free to leave your own views in the comments section below.

Indie scene matures

Of course indie games will always be their own genre and be called indie games; anything that's not made by a major studio has this moniker attached to it. 2014, possibly even earlier, sparked the 'big bang' of indie developers: one- and two-person studios creating, quite frankly, better games than the major studios. Games like Spelunky, Papers, Please, and FTL were huge in 2014. This trend will continue to grow in 2015, but it will also mark a shift away from indie being the cool younger brother of AAA games towards indie becoming the new mainstream. Sure, you're still going to get Call of Duty 100 and FIFA 20,000, but you will start to see indie games permeating all genres of the gaming industry. We saw it last year; we'll see it become much more established this year. And quite frankly, is this a bad trend? More choice forces studios to actually decide which games to care about.

Stable launch of games

2014 was a year of broken games, Assassin's Creed Unity being the most obvious one. Bearing in mind this game cost around $60 on release day, and most people would either pre-order it so they could load it up on launch night or buy it digitally, in which case you can't return it, what we got was a very messy game with so many bugs that Quality Assurance must have taken an extended holiday before the game was released. It wasn't just Assassin's Creed, though; many flagship games had similar issues. Take Halo: The Master Chief Collection, which had so many multiplayer issues that it became unplayable, with wait times to get a game far too long. Given that Halo prides itself on its multiplayer, this severely affects people's confidence in the studios. In 2015 this needs to stop. $60 is not a cheap amount for the typical user. Large studios need to prove that bugs of this magnitude will eventually be a thing of the past. I'm not saying games will never have bugs; I'm saying the type and amount of bugs in each of the games mentioned was far too large, and suggests sloppiness and a lack of care for the consumer. In 2015 the focus should be on getting things right again.

Intelligent gaming

With the arrival of the next-generation systems, the PS4 and Xbox One, we expected great things. What we got was good: better graphics and good frames per second, but not games that were really taxing. Well, we did get one, but it was too late in the year to count towards anything. Shadow of Mordor's Nemesis system had Nemeses, AI characters generated uniquely through the game, each with their own personality and different outcomes depending on the player's success. This is unique, and we need more of it. I expect this to make great strides in 2015. It could also come in the form of the hardware released next year: we are already making great strides in virtual reality, and it just needs a little bit more to really sell. Possibly intelligent gaming can be attached to this new advent of gaming.

Gaming community matures

This article has been a lot about the need for maturing.
2014 showed the gaming community in the worst of lights. Whichever side you may fight for, harassment in any workplace is just wrong. I am, of course, talking about GamerGate. Briefly, the movement stemmed from a response to an alleged breach of 'ethics in games journalism'. Whether that happened or not is not the discussion of this segment. What is the discussion is how some people found it necessary to harass, spread misinformation, and generally promote death threats. This engulfed the gaming community, and it is a continuing issue moving into 2015. Lessons must be learned from what happened in 2014, and for all the damage and anger that GamerGate brought, positives have to be taken from the way the majority of the community came together and united against the minority. 2015 will see the community continue to mature, and hopefully GamerGate will be consigned to history as a black mark on an otherwise incredibly talented community.

Fundamentally, I want awesome games to play. I want the game developer community to keep doing what they've done for years: make the best games they can and show passion for each one. I want developers to keep making development tools available to the community so people can try to create their own games. This is what I want. Pretty selfish, aren't I?


The Year that was: Hardware 2014

Ed Bowkett
11 Feb 2015
5 min read
Hardware underwent some pretty big changes in 2014; this blog will focus on what I believe were the most significant. Bear in mind, like my previous blogs, this is purely opinion; feel free to counter it with anything you found equally important.

1) Internet of Things continues to be a thing

Don't get me wrong, I love the fact that I have a wristband that tracks the amount of exercise I do (not enough, apparently) and records my sleep patterns. I like the idea that if I get a certain type of coffee machine, I can receive notifications telling me it is going to brew me a fresh cup at exactly 13:37. That's all cool. However, I felt that in 2014 the Internet of Things was just for 'really neat' things. It felt gimmicky. Whilst I am aware of the IoT beginning to have effects in the health system, 2014 was not the breakout year for this. Besides, when this does happen, is the phrase Internet of Things even appropriate? The phrase itself sounds gimmicky. In my view, when the IoT matures to the point where it affects every element of society, it becomes less about the internet of 'things' and more about the internet of 'everything'. With Gartner reporting the Internet of Things to be at the peak of inflated expectations, we have some way to go before the IoT reaches the desirable stage.

2) Wearables

Wearables became such a thing last year. That sounds like a moan; partly it is. I have, as mentioned above, a wristband, a Fitbit, that tells me how much exercise I've done and how much sleep I've had, and I love it. When wearables were first announced, they were a great way of selling technology in the form of bettering your health, and for a time I lapped up most of the wearables coming out. Yet the more the year progressed, the more it became impossible to filter which wearables were actually useful and which actually benefited your health. This isn't an argument against competition in the market - competition is healthy - it's more an observation that, as a consumer, working out which wearables are beneficial and will help you achieve your long-term goals has become a lot more cloudy and difficult to ascertain. Yet wearables appear here to stay, and when I have self-tying shoes, I guess we'll have become fully assimilated with technology.

3) Drones

Consumer drones exploded onto the scene in 2014. No longer an area exclusively held by the NSA, drones and quadcopters are increasingly flown by hobbyists, and they are becoming easier to obtain and cheaper to buy. As a result, issues have arisen over both airspace and privacy concerns, bearing in mind that these drones can be adapted to carry cameras and video recorders. Whilst I am all in favor of hobbyists creating things - after all, inventions come about this way - there need to be limits on where these drones can be used.

4) 3D Printing

Another area of the hardware market that received much attention, but ultimately the majority of us are waiting on a more affordable price, and really to figure out why we as consumers would want a 3D printer. This might be a slightly biased viewpoint, admittedly, given that the only 3D printing I have experienced is at conferences, where the cost of a printer was astronomical. Yet I've not seen evidence of 3D printing being really useful to the masses. As Gartner points out, it's just coming down from the 'peak of inflated expectations', so it has some way to go before it gets to the stage where the mass market adopts it.
At present, it is too hobbyist and in a price band too far. That's not to say what 3D printers can do isn't awesome; it just feels too gimmicky for the price tag.

5) Apple Watch

No hardware blog looking at the highlights of 2014 would be complete without the announcement of the Apple Watch. Announced in September, this was Apple's entrance onto the already congested wearables market. Priced at £300, this is certainly not a cheap wearable, but nonetheless we should expect the same quality as previous Apple products. It comes with a new SDK, WatchKit, which allows developers to design apps for the device. The major downside? You have to have an iPhone to be able to use an Apple Watch. We've worked out the calculations here, and that basically puts you into a commitment of around £1,140 for the privilege of remaining locked into an ecosystem (based on a minimum £35-a-month contract for 24 months with an iPhone). Frankly, I cannot justify that cost, particularly when there are alternatives out there which are far better value (for example, the Pebble watch is priced at £99.99 and the Motorola Moto 360 at £199.99). I'll probably still get one though, just because the quality of Apple products is so high.

So there you have it: my top five choices for the year that was, 2014, in hardware. What are your choices? Do you agree?


Minecraft: The Programmer's Sandbox

Aaron Mills
30 Jan 2015
6 min read
If you are familiar with gaming or the Java programming language, you've almost certainly heard of Minecraft. This extremely popular game has captured the imagination of a generation. The premise of the game is simple: you are presented with a near-infinitely explorable world built out of textured one-meter cubes. Modifying the landscape around you is simple and powerful. There are things to craft, such as weapons and mechanisms. There are enemies to fight or hide from, animals to tame, and crops to farm. However, the game doesn't actually provide you with any kind of goal or story beyond what you define for yourself. This makes Minecraft the perfect example of a Sandbox Game, if not the golden standard. But more than that, it has also become a Sandbox for people who like to write code. So let us take a moment and delve into why this is so and what it means for Minecraft to have become "The Programmer's Sandbox".

Originally the product of one man, Markus "Notch" Persson, Minecraft is written entirely in Java. The choice of Java as the language has helped define Minecraft in many ways. On the surface, we have the innate portability that Java provides. But when you dig deeper, Java opens up a whole new realm of possibilities, largely because of the inherent ease with which Java applications can be inspected, decompiled, and modified. This means that any part of the code can be changed in any way, allowing us to rewrite the game as we desire. This has led to a large and vibrant modding community, perhaps even the largest such community ever to exist.

The Minecraft modding community would not be what it is today without the herculean efforts of several groups of people, since the raw code isn't particularly modding-friendly: it's obfuscated and not very extensible in ways that let mods exist side by side. But the efforts of teams such as the Mod Coder Pack (MCP) and Forge have changed that. Today, getting started with Minecraft modding is as simple as downloading Forge, running a one-line setup command (gradlew setupDecompWorkspace eclipse), and pointing your IDE at the resulting folder. From there you can dive straight into the code and create a mod that will be compatible with the vast majority of all other mods. And this opens up realms of possibilities for anyone with an interest in seeing their own creations become part of a vibrant explorable world. It is this desire that has driven the community to innovate and design the tools to let anyone just jump into Minecraft modding and get their feet wet in minutes. As an example, here is a simple mod that I have created that adds a block to Minecraft.
This is simple, but it will give you an idea of what an example looks like:

    package com.example.examplemod;

    import cpw.mods.fml.common.Mod;
    import cpw.mods.fml.common.Mod.EventHandler;
    import cpw.mods.fml.common.event.FMLInitializationEvent;
    import cpw.mods.fml.common.registry.GameRegistry;
    import net.minecraft.block.Block;
    import net.minecraft.block.BlockStone;

    @Mod(modid = ExampleMod.MODID, version = ExampleMod.VERSION)
    public class ExampleMod {
        public static final String MODID = "examplemod";
        public static final String VERSION = "1.0";

        @EventHandler
        public void init(FMLInitializationEvent event) {
            // Create a stone-based block, give it a name and texture, and register it
            Block simpleBlock = new BlockStone()
                    .setBlockName("simpleBlock")
                    .setBlockTextureName("examplemod:simpleBlock");
            GameRegistry.registerBlock(simpleBlock, "simpleBlock");
        }
    }

And here is a figure showing the block from the mod in Minecraft:

The Minecraft modding community consists of a wide range of people, from self-taught programmers to industry code experts. The reason that such a wide range of modders exists is that the code is both accessible enough for the novice and flexible enough for the expert. Adding a new decorative block can be done with just a few simple lines of code, but mods can also become major projects with line counts in the tens or even hundreds of thousands. So whether this is your first time writing code or you are a Java Guru, you can quickly and easily bring your creations to life in the sandbox world of Minecraft.

People have created all kinds of crazy new things for Minecraft: Massive Toroidal Fusion Reactors, Force-fields, ICBMs, Arcane Magic Runes, Flying Magic Carpets, Pipes for pumping fluids around, Zombie Apocalypse Mini-Games, and even entirely new dimensions with giant Bosses and quests and loot. You can even find a mod that lets you visit the Moon. There really is no limit to what you can add to Minecraft. In many cases, people have taken elements from other game genres and incorporated them into the game: RPG Leveling Systems, Survival Horror Adventures, FPS Shooters, and more. These are just some examples of things that people have actually added to the game. The simplicity and flexibility of the game make this possible.

There are several factors that make Minecraft a particularly accessible game to mod. For one, the art assets are all fairly simple. You don't need HD textures or high-poly models; the game's art style intentionally avoids these. It instead opts for pixel art and blocky models. So even if you are a genius coder with no real skills in textures and modeling, it's still possible to make something that looks good and fits into the game. But the reverse is also true: if you are a great artist but your coding skills are weak, you can still create awesome decorative blocks. And if you need help with code, there are dozens, if not hundreds, of Open Source mods to learn from and copy.

So yes, Minecraft may be a fun Sandbox game by itself. But if you are the type of person who wants to get your hands a bit dirty, it opens up a whole realm of possibilities - a realm where you are no longer limited by the vision of the game's creators but can make your own vision a reality. This is the true beauty of Minecraft: it really can be whatever you want it to be.

About the Author

Aaron Mills was born in 1983 and lives in the Pacific Northwest, which is a land rich in lore, trees, and rain. He has a Bachelor's Degree in Computer Science and studied at Washington State University Vancouver.
He is best known for his work on the Minecraft mod Railcraft, but has also contributed significantly to the Minecraft mods Forestry and Buildcraft, as well as making some contributions to the Minecraft Forge project.


Fab Lab: worldwide labs for digital fabrication

Michael Ang
30 Jan 2015
7 min read
Looking for somewhere to get started using 3D printing, laser cutting, and digital fabrication? The worldwide Fab Lab network provides access to digital fabrication tools based on the ideas of open access, DIY, and knowledge sharing. The Fab Lab initiative was started by MIT's Center for Bits and Atoms and now boasts affiliated labs in many countries.

I first visited Fab Lab Berlin for their "Build your own 3D printer" workshop. Over the course of one weekend I assembled an i3 Berlin 3D printer from a kit, working alongside the kit's designers and other people interested in building their own printers. Assembling a 3D printer can be a daunting task, but working as a group made it fast and fun - the lab provides access not just to tools but to a community of people excited about digital fabrication. I spoke with Wolf Jeschonnek, founder of Fab Lab Berlin. Wolf spent three months visiting Fab Labs and maker spaces in the United States (he blogged his adventure) before returning to Berlin to start his own Fab Lab.

For someone that's not familiar, what is a Fab Lab?

The original idea by MIT was an outreach project to show the general public what the Center for Bits and Atoms was doing, which was digital fabrication. And instead of doing an exhibition or a website, they put the most important and most commonly used digital fabrication tools that they were using for research into a space and made them accessible to anyone. That's the whole idea, basically. [Fab Labs follow a simple charter of rules and generally use a shared core set of machines and software so that projects and experience can be shared between the labs. They also provide free or in-kind access to the lab at least part of the time each week.]

Machines offered at Fab Lab Berlin

You could say the core idea of a Fab Lab is open access to tools for digital fabrication and the sharing of knowledge and experience.

Yes, it's an open research and development lab, basically. Every university, every art or architecture university, has a fab lab. There's not much difference in the equipment, but it's not accessible - in most of the universities I experienced, it's not even accessible to the students. They're very strict about rules and safety and so on. So the innovation is to make those kinds of labs accessible to anyone who's interested.

It seems like another big difference is that you get hands-on access to the machines. You get to operate the laser cutter - you get to operate the CNC router.

Yes, and also to pass that hands-on knowledge on to the next person who wants to use the laser cutter. In our lab we do a lot of introduction workshops, but you can as well just find someone who is willing to show you how the laser cutter works. We have a couple of formal training opportunities for beginners, but after that we don't do much advanced training; it all happens by people passing their knowledge on to someone else, or figuring it out themselves. In the beginning I was the expert for every machine, because I chose them and knew which machines to buy. By now I'm not the expert at all. If you have a very specific question about technology, I'm not the person to ask anymore, because the community is much better.

What are some of your favorite projects that have come out of Fab Lab Berlin?

I really like the i3 Berlin 3D printer, because we started together with them, and I could see the development of the machine over the last year and a half. I also built one myself. It was the first machine that we built in the Fab Lab.
Bram de Vries with several iterations of his i3 Berlin 3D printer

There's a steadicam rig for a camera that two people built - the CAMBAL by SeeYa Film. It's a very sophisticated motor-driven steadicam rig. It's very steady because the motors actively control the movements. Then there is a project that wasn't really developed in Fab Lab Berlin, but it also started with an i3 Berlin workshop. One guy who came and built an i3 Berlin used that printer to design and make a DIY laser cutter. It's called Mr. Beam. If you look at the Mr. Beam, you see lots of details that he took from the printer and transferred to the laser cutter. He used the printer that he made himself to build the laser cutter, and it was very successful on Kickstarter.

CAMBAL camera stabilizer developed at Fab Lab Berlin. Dig that carbon fibre and milled aluminum!

Do you use open source tools in the Fab Lab? Is there a preference for that?

We try to use as much open source as possible, because we try to enable people to use their own computers to work. It makes a lot of sense to use free software, because obviously you don't have to pay for it and everybody can install it on their own computers. We try to use open source as much as possible, but sometimes, if you do more advanced stuff, it makes sense to use something else.

What kind of people do you find using the lab?

That's very hard to summarize, because it's very broad. There are doctors and lawyers and computer engineers, but also teachers, or just people who are interested in how a 3D printer works. The age range is between 6 and 70, I would guess. There are some professionals who use the lab and the infrastructure to work on professional projects, and there are also people who are very professional in other fields and do very high-level projects, but for a hobby. The reasons why people come to the lab are very different. We also get a lot of interest from large companies and corporations who are interested in how innovation works in an environment like this. It's been a very good mix of people and companies so far.

Rapid Robots workshop by Niklas Roy / School of Machines, Making & Make-Believe

Do you have any advice for people who are coming to a Fab Lab for the first time?

One piece of advice is to bring something to make, because otherwise it's very boring. It can still be interesting to talk to people, but if you have a project and you've already prepared something, in most cases you can find someone who can help you. Then you can make something, and it's a very different experience compared to just watching. When I toured the United States [visiting Fab Labs] I brought a project - a vertical-axis wind turbine. Before I went, I really thought hard about a project that I could do. Most people that hang around in a Fab Lab are people who are interested in making stuff. Also talking and sharing knowledge, but the core activity is really making things. If you bring something that's interesting, it's also a very good starter for conversation.

Thanks, Wolf!

The Fab Foundation maintains a worldwide list of Fab Labs. Fab Lab Berlin has open lab hours each week as well as a number of introductory workshops to get you started with digital fabrication. Most other labs have similar programs. If there isn't a lab near you, you could be like Wolf and start your own!

About the author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world.
His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical worlds by translating simple 3D models into physical structures.

Promising DevOps Projects

Julian Ursell
29 Jan 2015
3 min read
The DevOps movement is currently driving a wave of innovation in technology, contributing to the development of powerful systems and software development architectures, as well as generating a transformation in "systems thinking". Game changers like Docker have revolutionized the way systems engineers, administrators, and application developers approach their jobs, and there is now a concerted push to iterate on the new paradigms that have emerged. The crystallization of container virtualization methods is producing a different perspective on service infrastructures, enabling a modularity and granularity not so imaginable a decade ago. Powerful configuration management tools such as Chef, Puppet, and Ansible allow infrastructure to be defined literally as code. The flame of innovation is burning brightly in this space, and the concept of the "DevOps engineer" is becoming a reality, not the idealistic myth it appeared to be before. Now that DevOps engineers know roughly where they're going, a feverish development drive is gathering pace, with projects looking to build upon the flagship technologies that contributed to the initial spark. The next few years are going to be fascinating in terms of seeing how the DevOps foundations laid down will be built upon moving forward.

The major foundation of modern DevOps development is the refinement of the concept and implementation of containerization. Docker has demonstrated how containers can be leveraged to host, run, and deploy applications, servers, and services in an incredibly lightweight fashion, abstracting resources by isolating parts of the operating system in separate containers. The sea change in thinking this has created has been resounding. Still, a particular challenge for DevOps engineers working at scale with containers is developing effective orchestration services. Enter Kubernetes (apparently meaning "helmsman" in Greek), the project open sourced by Google for the orchestration and management of container clusters. The value of Kubernetes is that it works alongside Docker, building beyond simply booting containers to allow a finer degree of management and monitoring. It utilizes units called "pods" that facilitate communication and data sharing between Docker containers and the grouping of application-specific containers. The Docker project has actually taken the orchestration service Fig under its wing for further development, but there are a myriad of ways in which containers can be orchestrated. Kubernetes illustrates how the wave of DevOps-oriented technologies like Docker is driving large-scale companies to open source their own solutions and contribute to the spirit of open source collaboration that underlines the movement.

The influence of DevOps can also be seen in the reappraisal of operating system architectures. CoreOS, for example, is a Linux distribution designed with scale, flexibility, and lightweight resource consumption in mind. It hosts applications as Docker containers and makes the development of large-scale distributed systems easier by being "natively" clustered, meaning it is adapted naturally for use over multiple machines. Under the hood it offers powerful tools, including fleet (CoreOS' cluster orchestration system) and etcd for service discovery and information sharing between cluster nodes.
The Docker project has actually taken the orchestration service Fig under its wing for further development, but there are myriad ways in which containers can be orchestrated. Kubernetes illustrates how the wave of DevOps-oriented technologies like Docker is driving large-scale companies to open source their own solutions, contributing to the spirit of open source collaboration that underlines the movement.

The influence of DevOps can also be seen in the reappraisal of operating system architectures. CoreOS, for example, is a Linux distribution that has been designed with scale, flexibility, and lightweight resource consumption in mind. It hosts applications as Docker containers, and makes the development of large-scale distributed systems easier by being “natively” clustered, meaning it is adapted naturally for use over multiple machines. Under the hood it offers powerful tools, including fleet (CoreOS' cluster orchestration system) and etcd for service discovery and information sharing between cluster nodes.

A tool to watch out for in the future is Terraform (built by the same team behind Vagrant), which offers at its core the ability to build infrastructures with combined resources from multiple service providers, such as DigitalOcean, AWS, and Heroku, describing this infrastructure as code with an abstracted configuration syntax (there's a small taste of it below). It will be fascinating to see whether Terraform catches on and becomes opened up to a greater mass of major service providers. Kubernetes, CoreOS, and Terraform all convey the immense development pull generated by the DevOps movement, one that looks set to roll on for some time yet.
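As a parting taste of that configuration syntax, here is a hypothetical sketch mixing two providers in one description — the region, AMI ID, and app name are placeholders, not working values:

# one infrastructure, two providers
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-123456"    # placeholder machine image
  instance_type = "t2.micro"
}

resource "heroku_app" "frontend" {
  name   = "my-frontend-app"      # placeholder app name
  region = "us"
}

Running the plan-then-apply cycle against a file like this is what “infrastructure as code” means in practice: the description is the source of truth, and the tool converges real resources towards it.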

Try Something New Today – ReactJS

Sarah C
28 Jan 2015
5 min read
Sometimes it seems like AngularJS is the only frontend game in town. There are reasons for that. It's sophisticated, it's a game-changer for web design that makes things better right now, and the phenomenal rate of adoption has also led to all kinds of innovative integrations. However, when all you have is a directive, every problem starts to look like an ng-, as they say. Now and again, we all like the novelty of change. As the first signs of spring emerge from the snow, our techie fancies lightly turn to thoughts of components. So, like a veritable Sam-I-Am, let me press you to try something you may not have tried before.* Today's the day to take ReactJS for a test drive.

So what's the deal with React, then?

ReactJS was developed by Facebook, then open sourced last year. Lately, it's been picking up speed. Integrations have improved, and now that Facebook have also open sourced Flux, we're hearing a lot of buzz about what React can do for your UI design. (Flux is an application pattern. You can read more about its controller-free philosophy at Facebook's GitHub page.) Like so many things, React isn't quite a framework and it isn't quite a library. Where React excels is in generating UI components that refresh with data changes. With disarming modesty, React gets the smallest server-side data changes to the browser quickly, without having to re-render anything except the part of the display that needs to change. Here's a quick run through of React's most pleasing features. (ReactJS also has a good sense of humour, and enjoys long walks along the beach at sunset.)

Hierarchical components

ReactJS is built around components: the new black of web dev. Individual components bundle together the markup and logic as handy reusable treats. Everyone has their own style when developing their apps, but React's feel and rhythm encourage you to think in components. React's components are also hierarchical – you can nest them and have them inherit properties and state. There are those who are adamant that this is the future of all good web-app code.

Minimal re-rendering

Did you catch my mention of 'state' up there? React components can have state. Let the wars begin right now about that, but it brings me to the heart of React's power. React reacts. Any change triggers a refresh, but with minimal re-rendering. With its hierarchical components, React is smart enough to only ever refresh and supply new display data to the part of the component that needs it, not the entire thing. That's good news for speed and overhead.
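Here's a minimal sketch of state in action, written in the same createClass style as the official tutorials (the component and handler names are my own invention, not React's):

var Counter = React.createClass({
  getInitialState: function() {
    return { count: 0 };
  },
  handleClick: function() {
    // setState triggers a re-render of this component only
    this.setState({ count: this.state.count + 1 });
  },
  render: function() {
    return (
      <button onClick={this.handleClick}>
        Clicked {this.state.count} times
      </button>
    );
  }
});

Click the button and React updates the label – just the label, nothing around it.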
Speedy little virtual DOM

In fact, ReactJS is light in every sense. And it owes a lot of its power to its virtual DOM. Rather than plug into the DOM directly, React renders every change into a virtual DOM and then compares it against the current DOM. If it sees something that needs to be changed in the view, React gets to work on changing just that part, leaving everything else untouched.

Fun to write

React mixes HTML and JavaScript, so you can refer to HTML elements right there inside your <script>. Yes, okay, that's 'fun' for a given value of fun. The kind where dorks get a little giddy about pleasing syntax. But we're all in this together, so we might as well accept ourselves and each other. For example, here's a simple component rendering from an official tutorial:

// tutorial1.js
var CommentBox = React.createClass({
  render: function() {
    return (
      <div className="commentBox">
        Hello, world! I am a CommentBox.
      </div>
    );
  }
});

React.renderComponent(
  <CommentBox />,
  document.getElementById('content')
);

This is JSX syntax, which React uses instead of defining templates within a string. Pretty, right?

Reactive charts and pictures

With luck, at this point, your coffee has kicked in and you're beginning to think about possible use cases where React might be a shiny new part of your toolkit. Obviously, React's going to be useful for anything with lots of real-time activity. As a frontend for a chat client, streaming news, or a dashboard, it's got obvious powers. But think a little further and you'll see a world of other possibilities. React can also handle SVG for graphics and charts, with the potential to create dynamic and malleable visualisations even without D3.

SEO

One last-but-not-least selling point: web apps built with this framework don't scare the Google spiders. Because everything's passed to the client side and into the DOM having already had its shoes shined by the virtual DOM, it's very easy to make apps legible for search engines as well as for people, allowing your stored data to be indexed and boosting your SEO by reflecting your actual content.

Give it a shot and do some experimenting. Have you had any wins or unexpected problems with React? Or are you thinking of giving it a whirl for your next app? We're going to try it out for some in-house data viz, and may possibly even report back. What about you?

*Do not try ReactJS with a goat on a boat without taking proper safety precautions. (With a mouse in a house is fine and, indeed, encouraged.)

An introduction to React - Part 2 (video)

Simon Højberg
14 Jan 2015
1 min read
Sample Code

You can find the sample code on Simon's GitHub repository.

About The Author

Simon Højberg is a Senior UI Engineer at Swipely in Providence, RI. He is the co-organizer of the Providence JS Meetup group and a former JavaScript instructor at Startup Institute Boston. He spends his time building functional user interfaces with JavaScript, and hacking on side projects like cssarrowplease.com. Simon recently co-authored "Developing a React Edge."

A short introduction to Gulp - Part 2 (video)

Maximilian Schmitt
12 Jan 2015
1 min read
About The Author

Maximilian Schmitt is a full-time university student with a passion for web technologies and JavaScript. Tools he uses daily for development include gulp, Browserify, Node.js, AngularJS, and Express. When he's not working on development projects, he shares what he has learned through his blog and the occasional screencast on YouTube. Max recently co-authored "Developing a gulp Edge". You can find all of Max's projects on his website at maximilianschmitt.me.

Angular, Responsive, and MEAN - how 2014 changed front-end development

Sarah C
09 Jan 2015
4 min read
Happy New Year, Web Devians. We've just finished off quite a year for web technologies, haven't we? 2014 was characterised by a growth in diversity – nowadays there's an embarrassment of riches when it comes to making the most of CSS and JavaScript. We're firmly past the days when jQuery was considered fancy. This year it wasn't a question of whether we were using a framework – instead we've mostly been tearing our hair out trying to decide which one fits where. But whether you're pinning your colours to Backbone or Angular, Node or PHP, there have been some clear trends in how the web is changing. Here's Packt's countdown of the top seven ways web tech has grown this year. If you weren't thinking about these things in 2014, then it might be time to get up to speed before 2015 overtakes you!

Angular

We saw it coming in 2013, but in 2014 Angular basically ate everything. It's the go-to framework for a subset of JavaScript projects that we're going to refer to here as ["All Projects Ever"]. This is a sign of the times for where front-end development is right now. The single-page web application is now the heart of the new internet, which is deep, reactive, and aware. 2014 may go down as the year we officially moved the party to the client side.

Responsive Web Design

Here at Packt we've seen a big increase in people thinking about responsive design right from the beginning of their projects, and no wonder. In 2014 mobile devices crossed the line and outstripped traditional computers as the main way in which people browse the web. We glimpse the web now through many screens in a digital hall of mirrors. The sites we built in 2014 had to be equally accessible whether users were on IE8 at the library, or tweeting from their Android while base jumping.

The MEAN stack

2014 put to rest for good the idea that JavaScript was a minor-league language that just couldn't hack it on the back end. In the last twelve months MEAN development has shown us just how streamlined and powerful Node can be when harnessed with front-end JavaScript and JSON data storage. MongoDB, Express, Angular, and Node had their break-out year in 2014 as the hottest band in web dev.
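If you haven't tried the stack yet, the E and N parts fit in a handful of lines. A minimal sketch, assuming Express 4 is installed (the route and port here are invented for illustration):

var express = require('express');
var app = express();

// a single JSON endpoint – the M and A of MEAN would sit either side of this
app.get('/api/greeting', function(req, res) {
  res.json({ message: 'Hello from the MEAN stack' });
});

app.listen(3000, function() {
  console.log('Listening on port 3000');
});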
Data visualisation

Did you know that all the knowledge available in the whole world before 1800 compresses to fewer bytes than Twitter streams in a minute? Actually, I just made that up. But it is true that we are generating and storing data at an increasingly hectic rate. When it comes to making visual sense of it, web tech has had a big role to play. D3 continued to hold its own as one of the most important tools in web development this year. We've all been thinking visually about charts and infographics. Which brings us to…

Flat design

The internet we built in 2014 was flat and stripy, and it's wonderful. Google's unveiling of Material Design at this year's I/O conference cemented the trend we'd all been seeing. Simple vector graphics, CSS animations, and a mature code-based approach to visuals have swept the scene. There are naysayers of course (and genuine questions about accessibility, which we'll be blogging about next year) but overall this aesthetic feels mature. Like moments in traditional architecture, 2014 felt like a year in which we cemented a recognisable design era.

Testing and build tools

Yes, we know. The least fun part of JavaScript – testing it and building, rebuilding, rebuilding. Chances are though that if you were involved in any large-scale web development this year you've now got a truly impressive Bat-utility belt of tools to work with. From Yeoman, to Gulp or Grunt, to Jasmine, to PhantomJS, updates have made everything a little more sophisticated.

Cross-platform hybrid apps

For decades we've thought about HTML/CSS/JavaScript as browser languages. With mobile technology, though, we've broadened our thinking, and bit by bit JS has leaked out of the browser. When you think about it, our phones and tablets are full of little browser-like mutants, gleefully playing with servers and streaming data while downplaying the fact that their grandparents were Netscape and IE6. This year the number of hybrid mobile apps – and their level of sophistication – has exploded. We woke up to the fact that going online on mobile devices can be repackaged in all kinds of ways while still using web tech to do all the heavy lifting.

All in all, it's been an exciting year. Happy New Year, and here's to our new adventures in 2015!