5 more 2D Game Engines I didn’t consider

Ed Bowkett
09 Jan 2015
4 min read
In my recent blog, I covered five game engines that you can use to create 2D games. The response in the comments and on social media was encouraging, but it also pointed out other 2D game engines. Having briefly looked at these, I thought it would be a good idea to list the alternatives here, so in this blog we will cover five more game engines you can use to create 2D games. 2D games are appealing for a wide range of reasons: they're great for the indie game scene, they're great for learning the fundamentals of game development, and they're a fun place to start coding. I've thrown in some odd ones that you might not have considered before, and remember, this isn't a definitive list! Just my thoughts!

LÖVE2D

LÖVE2D is a 2D game framework that you can use to make 2D games using Lua, a lightweight scripting language. It runs on Windows, Linux, and Mac and costs nothing to use. The code is easy enough to pick up, though it helps to learn Lua as well. Once you get over that, games can be created with ease, and Mario and Snake clones have become ever popular with this engine. It has support for Box2D physics, networking, and user-created plugins. The possible downside to LÖVE2D is that it is desktop-only; however, as a way to learn how to program games, it is a good starting point.

Libgdx

A puzzle game in Libgdx

Libgdx is a game development framework written in Java. It's cross-platform, which is a major plus when developing games, and can be deployed across Windows, Linux, and Mac. It's also free, which is a benefit to aspiring game developers. It has third-party support for tools such as Spine and Nextpeer, along with Box2D physics and rendering through OpenGL. Example projects include puzzle games, tower defense games, and platformers. Extremely fun for indie developers and hobbyists. Just learn Java...

GameSalad

Creating a game using GameSalad

Similar to its rivals, Construct 2 and GameMaker, GameSalad is a game engine aimed at non-programmers, using a drag-and-drop system. A further benefit of GameSalad is that it doesn't require any programming knowledge; instead you use Actors that define the rules and behaviors of game objects. It's cross-platform, which is another big plus; however, to unlock the full capabilities of cross-platform development you need to pay $299 a year. That feels excessive: while GameSalad is good for hobbyists and beginner game developers, the cost-value of the product isn't that great for what it does. Still, you can try it for free, and it has the same qualities as the other engines here.

Stencyl

Stencyl is a game engine that is free for Flash (other platforms need to be paid for) and is another good alternative to the other drag-and-drop game engines. It again supports multiple platforms and offers shaders, an actor system, animations, and support for iOS 8. The cost isn't too bad either: the cheaper option, with the ability to publish on web and desktop, is priced at $99 a year, and Studio at $199 a year.

V-Play

A basic 2D game using V-Play

V-Play is a game development tool I actually know very little about, but it was pitched by our customers as a good alternative. V-Play appears to be component based, written in JavaScript and QML (the Qt markup language), and cross-platform; it even appears to have plugins for monetizing your game and analytics for assessing it. It also supports touch input, includes level design, and uses Box2D as its physics engine. I know this is only a brief overview of what V-Play does and offers; as I've not really come across it before, I don't know too much about it. Possibly one to write a blog on in the future!

This blog was to show off the other frameworks, proposed by our readers, that I had not considered in my previous blog. It shows that there are always more options out there; it all depends on what you want, how much you want to spend, and the quality you expect. These are all valid choices and have opened me up to game development tools I'd not tinkered with before, so Christmas should be fun for me!

2014: You Want A Container With That?

Julian Ursell
09 Jan 2015
4 min read
2014 Roundup: Networking, Virtualization and Cloud

2014 has been a year of immense movement in the networking, virtualization and cloud space. DevOps, infrastructure-as-code, containerisation, configuration management, lightweight Linux, hybrid cloud: all of these concepts carry a certain gravitas right now and will do even more so as we enter 2015. Let's take a look at the major developments over the past year, and what they could mean heading into the immediate future.

DevOps continues to transform software development and systems programming, with the rise of configuration management tools such as Ansible and SaltStack, and the expansion of the incumbent config management tools, Chef and Puppet (Puppet looks set to release version 4.0 some time early next year, and announced its new Puppet Server project at PuppetConf recently). HashiCorp, prolific in the DevOps space as the creator of Vagrant, has intriguingly unified all five of its open source projects under the umbrella of an all-in-one DevOps tool it has anointed Atlas. With the emergence of DevOps-oriented technologies geared towards transforming infrastructure into code and automating literally everything, developers and engineers have begun to approach projects from a broader perspective, speaking the new universal language of the DevOps engineer.

Up in the clouds, the arena of competition among vendors has centred on the drive to develop hybrid solutions that enable enterprises to take advantage of the heft behind open source platforms such as OpenStack while combining public and private cloud environments. We've seen this with Red Hat's tuning of its own enterprise version of OpenStack for hybrid setups, and most recently with the announcement that Microsoft and Accenture are re-energising their alliance by combining their cloud technologies to provide a comprehensive super-stack platform comprising Windows Azure, Server, System Center and the Accenture Cloud Platform. Big plays indeed.

If there is a superstar this year, it has to be Docker. It has been the organising metaphor for the development of a myriad of exciting new technologies and a good deal of conceptual rethinking, as well as a flurry of development announcements on both open source and enterprise fronts. The eventual release of Docker 1.0 was greeted with fanfare and rapture at DockerCon this June, after about a year of development and buzz, and surrounding the release has been a parallel drive to iterate on the lessons learned and create complementary technologies and platforms. It is largely due to Docker that we've started to see the emergence of new Linux operating system architectures that are lightweight and purpose-built for massively scaled distributed computing, such as CoreOS and Red Hat's prototype, Project Atomic. Both leverage Docker containers. The team behind CoreOS is even looking to rival Docker with its prototype container runtime, Rocket, which promises an alternative approach to containerisation that, they argue, returns to the simplicity of the original manifesto for Docker containers as a composable, modular building block with a carefully designed specification. Google open sourced its Docker container orchestration tool, Kubernetes, and even Microsoft has jumped quickly onto the Docker train, with the development of a compatible Docker client for the next release of Windows Server.

A year ago, Mitchell Hashimoto coined the term 'FutureOps' for a vision of 'immutable infrastructures': servers pre-built from images, replacing the need for continual configuration and replacement, enabling lightning-fast server deployments, and providing automatic recovery mechanisms capable of detecting and anticipating server failures. Some consider this the height of idealism and would argue that systems can never be immutable; but whether it is an achievable reality or not, the changes and developments of the last year seem to inch closer to making it happen. Docker is part of this big picture, whatever shape it may take. 2014 was the year it made big waves, the magnitude of which will undoubtedly continue into 2015 and beyond.

Transparency and NW.js

Adam Lynch
07 Jan 2015
3 min read
Yes, NW.js does support transparency, although it is disabled by default. One way to enable transparency is to set the transparent property in your application's manifest, like this:

{
  "name": "my-app",
  "main": "index.html",
  "description": "My app",
  "version": "0.0.1",
  "window": {
    "transparent": true
  }
}

Transparency will then be enabled for the main window of your application from the start. Now, it's play time. Try giving the page's body a transparent or semi-transparent background color and any children an opaque background color in your CSS, like this:

body {
  background: transparent; /* or background: rgba(255, 255, 255, 0.5); */
}

body > * {
  background: #fff;
}

I could spend all day doing this.

Programmatically enabling transparency

The transparent option can also be passed when creating a new window:

var gui = require('nw.gui');

var newWindow = gui.Window.open('other.html', {
  position: 'center',
  width: 600,
  height: 800,
  transparent: true
});

newWindow.show();

Whether you're working with the current window or another window you've just spawned, transparency can be toggled programmatically per window on the fly, thanks to the Window API:

newWindow.setTransparent(true);
console.log(newWindow.isTransparent); // true

The window's setTransparent method allows you to enable or disable transparency, and its isTransparent property contains a Boolean indicating whether it's enabled right now.

Support

Unfortunately, there are always exceptions. Transparency isn't supported at all on Windows XP or earlier. In some cases it might not work on later Windows versions either, including when accessing the machine via Microsoft Remote Desktop or with some unusual themes or configurations. On Linux, transparency is supported if the window manager supports compositing. Aside from this, you'll also need to start your application with a couple of arguments. These can be set in your app's manifest under chromium-args:

{
  "name": "my-app",
  "main": "index.html",
  "description": "My app",
  "version": "0.0.1",
  "window": {
    "transparent": true
  },
  "chromium-args": "--enable-transparent-visuals --disable-gpu"
}

Tips and noteworthy side-effects

It's best to make your app frameless if it will be semi-transparent; otherwise it will look a bit strange. This depends on your use case, of course. Strangely, enabling transparency for a window on Mac OS X will make its frame and toolbar transparent (see the screenshot of a transparent window frame on Mac OS X in the original post). Between the community and the developers behind NW.js, there isn't yet agreement on whether windows with transparency enabled should have a shadow like windows typically do. At the time of writing, if transparency is enabled in your manifest, for example, your window will not have a shadow, even if all its content is completely opaque.

Click-through

NW.js even supports clicking through your transparent app to whatever is behind it on your desktop. This is enabled by adding a couple of runtime arguments to your chromium-args in your manifest, namely --disable-gpu and --force-cpu-draw:

{
  "name": "my-app",
  "main": "index.html",
  "description": "My app",
  "version": "0.0.1",
  "window": {
    "transparent": true
  },
  "chromium-args": "--disable-gpu --force-cpu-draw"
}

As of right now, this is only supported on Mac OS X and Windows, and it only works with non-resizable frameless windows, although there may be exceptions depending on the operating system. One other thing to note is that click-through will only be possible on areas of your app that are completely transparent. If the target element of the click, or an ancestor, has a background color (even one that is just 1% opaque in the alpha channel), the click will not go through your application to whatever is behind it.
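As a practical note on the support caveats above, the snippet below is a minimal sketch (not from the original post) of enabling transparency defensively at runtime and falling back to an opaque look when it isn't available. Only setTransparent and isTransparent come from the Window API described earlier; the try/catch guard and the opaque-fallback CSS class are assumptions made for illustration.

var gui = require('nw.gui');
var win = gui.Window.get(); // the current window

function enableTransparencyIfPossible() {
  try {
    // may silently do nothing (or fail) on platforms without compositing support
    win.setTransparent(true);
  } catch (err) {
    console.warn('Could not enable transparency:', err);
  }

  if (!win.isTransparent) {
    // hypothetical CSS hook: give the body a solid background instead
    document.body.classList.add('opaque-fallback');
  }
}

enableTransparencyIfPossible();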
About the Author

Adam Lynch is a Teamwork Chat product lead and senior software engineer at Teamwork. He can be found on Twitter as @lynchy010.

Relations In Backbone

Andrew Burgess
05 Jan 2015
7 min read
In this Backbone tutorial from Andrew Burgess, author of Backbone.js Blueprints, we'll be looking at two key extensions that you can use when working with models and collections. As you will see, both give more rigidity to what is an incredibly flexible framework. This can be extremely valuable when you want a reliable way to perform certain front-end web development tasks.

Relations In Backbone

Backbone is an extremely flexible front-end framework. It is very unopinionated, as frameworks go, and can be bent to build anything you want. However, for some things you want your framework to be a little more opinionated and give you a reliable way to perform some operation. One of those things is relating models and collections. Sure, a collection is a group of models; but what if you want to relate them the other way: what if you want a model to have a "child" collection? You could roll your own implementation of model associations, or you could use one of the Backbone extension libraries made expressly for this purpose. In this article, we'll look at two extension options: Backbone Associations and Backbone Relational.

Backbone Associations

We'll use the example of an employer with a collection of employees. In plain Backbone, you might have an Employer model and an Employees collection, but how can we relate an Employer instance to an Employees instance? For starters, let's create the Employee model:

var Employee = Backbone.Model.extend({});

Notice that we are extending the regular Backbone.Model, not a special "class" that Backbone Associations gives us. However, we'll use one of its special classes next:

var Employer = Backbone.AssociatedModel.extend({
  relations: [{
    type: Backbone.Many,
    key: 'employees',
    relatedModel: Employee
  }]
});

The Employer is an extension of the Backbone.AssociatedModel class. We give it a special property: relations. It's an array, because a model can have multiple associations, but we'll just give it one for now. There are several properties that we could give a relation object, but only three are required. The first is type: it must be either Backbone.Many (if we are creating a 1:N relation) or Backbone.One (if we are creating a 1:1 relation). The second required parameter is key, which is the name of the property that will appear as the collection on the model instance. Finally, we have relatedModel, which is a reference to the model class. Now, we can create Employee instances:

var john = new Employee({ name: "John" }),
    jane = new Employee({ name: "Jane" }),
    paul = new Employee({ name: "Paul" }),
    kate = new Employee({ name: "Kate" });

Then, we can create an Employer instance and relate the Employee instances to it:

var boss = new Employer({
  name: 'Winston',
  employees: [john, jane, paul, kate]
});

Notice that we've used the special relation key name, employees. Even though we've assigned a regular array, it will be converted to a full Backbone.Collection object behind the scenes. This is great, because now you can use collection-specific functions, like this:

boss.get('employees').pluck('name'); // ['John', 'Jane', 'Paul', 'Kate']

We can even use some special syntax with the get method to get properties on our nested models:

boss.get('employees[0].name'); // 'John'
boss.get('employees[3].name'); // 'Kate'

Unfortunately, relations with Backbone Associations are a one-way thing: there's no relation going from Employee back to Employer. We can, of course, set an attribute on our model instances:

john.set({ employer: boss });

But there's nothing special about this. We can make it an association, however, if we change our Employee from a regular Backbone.Model class to a Backbone.AssociatedModel class:

var Employee = Backbone.AssociatedModel.extend({
  relations: [{
    type: Backbone.One,
    key: 'employer',
    relatedModel: 'Employer'
  }]
});

Two things are different about this relation. First, it's a one-to-one (1:1) relation, so we use the type Backbone.One. Second, we make relatedModel a string instead of a reference to the Employer object; this is necessary because Employer comes after Employee in our code and hence can't be referenced directly at this point; the model class will be looked up later, when it's needed. Now, we'll still have to set our employer attribute on Employee models, like so:

john.set({ employer: boss });

The difference now is that we can use the features Backbone Associations provides us with, like nested getting:

john.get('employer.name'); // Winston

One more thing: I mentioned that the array attribute with the special key name becomes a collection object. By default, it's a generic Backbone.Collection. However, if you want to make your own collection class with some special features, you can add a collectionType property to the relation:

var EmployeeList = Backbone.Collection.extend({ model: Employee });

var Employer = Backbone.AssociatedModel.extend({
  relations: [{
    type: Backbone.Many,
    collectionType: EmployeeList,
    key: 'employees',
    relatedModel: Employee
  }]
});

Backbone Relational

Backbone Relational has very similar syntax to Backbone Associations; however, I prefer this library because it makes our relations two-way affairs from the beginning. First, both models are required to extend Backbone.RelationalModel:

var Employee = Backbone.RelationalModel.extend({});

var EmployeeList = Backbone.Collection.extend({ model: Employee });

var Employer = Backbone.RelationalModel.extend({
  relations: [{
    type: Backbone.HasMany,
    key: 'employees',
    relatedModel: Employee,
    collectionType: EmployeeList,
    reverseRelation: {
      key: 'employer'
    }
  }]
});

Notice that our Employer class has a relations attribute. The key, relatedModel, and type attributes are required and perform the same duties as their Backbone Associations counterparts. Optionally, the collectionType property is also available. The big difference with Backbone Relational (and the reason I prefer it) is the reverseRelation property: we can use this to make the relationship act two ways. We're giving it a single property here, a key: this value will be the attribute given to model instances on the other side of the relationship. In this case, it means that Employee model instances will have an employer attribute. We can see this in action if we create our employer and employees, just as we did before:

var john = new Employee({ name: "John" }),
    jane = new Employee({ name: "Jane" }),
    paul = new Employee({ name: "Paul" }),
    kate = new Employee({ name: "Kate" });

var boss = new Employer({
  name: 'Winston',
  employees: [john, jane, paul, kate]
});

And now, we have a two-way relationship. Based on the above code, this part should be obvious:

boss.get('employees').pluck('name'); // ['John', 'Jane', 'Paul', 'Kate']

But we can also do this:

john.get('employer').get('name'); // Winston

Even though we never gave the john instance an employer attribute, Backbone Relational did it for us, because we gave the object on one side of the relationship knowledge of the connection. We could have done it the other way as well, by giving the employees an employer:

var boss = new Employer({ name: 'Winston' });

var john = new Employee({ name: "John", employer: boss }),
    jane = new Employee({ name: "Jane", employer: boss }),
    paul = new Employee({ name: "Paul", employer: boss }),
    kate = new Employee({ name: "Kate", employer: boss });

boss.get('employees').pluck('name'); // ['John', 'Jane', 'Paul', 'Kate']

It's this immediate two-way connection that makes me prefer Backbone Relational. But both libraries have other great features, so check out Backbone Associations and Backbone Relational in full before making a decision.

I've found that the best way to get better at using Backbone is to really understand what's going on behind the curtain, to get a feel for how Backbone "thinks." With this in mind, I wrote Backbone.js Blueprints. As you build seven different web applications, you'll learn how to use and abuse Backbone to the max.

About the Author

Andrew Burgess is primarily a JavaScript developer, but he dabbles in as many languages as he can find. He's written several eBooks on Git, JavaScript, and PHP, as well as Backbone.js Blueprints, published by Packt Publishing. He's also a web development instructor at Tuts+, where he produces videos on everything from JavaScript to the command line.

FreeCAD: Open Source Design on the Bleeding Edge

Michael Ang
31 Dec 2014
5 min read
Are you looking for software for designing physical objects for 3D printing or physical construction? Computer-aided design (CAD) software is used extensively in engineering when designing objects that will be physically constructed. Programs such as Blender or SketchUp can be used to design models for 3D printing, but there's a catch: it's quite possible to design models that look great onscreen but don't meet the "solid object" requirements of 3D printing. Since CAD programs are targeted at building real-world objects, they can be a better fit for designing things that will exist not just on the screen but in the physical world.

A 3D-printable, servo-controlled Silly-String trigger by sliptonic

FreeCAD distinguishes itself by being open source, cross-platform, and designed for parametric modeling. Anyone is free to download or modify FreeCAD, and it works on Windows, Mac, and Linux. With parametric modeling, it's possible to go back and change parameters in your design and have the rest of your design update. For example, if you design a project box to hold your electronics project and decide it needs to be wider, you could change the width parameter and the box would automatically update. FreeCAD allows you to design using its visual interface and also offers complete control via Python scripting.

Changing the size of a hole by changing a parameter

I recommend Bram De Vries' FreeCAD tutorials on YouTube to help you get started with FreeCAD. The FreeCAD website has links to download the software and a getting started guide. FreeCAD is under heavy development (by a small group of individuals), so expect to encounter a little strangeness from time to time, and save often! If you're used to software developed by a large and well-compensated engineering team, you may be surprised that certain features are missing, but on the other hand it's really quite amazing how much FreeCAD offers in software that is truly free. You might find a few gaping holes in functionality, but you also won't find any features that are locked away until you go "Premium".

If you didn't think I was geeky enough for loving FreeCAD, let me tell you my favorite feature: everything is scriptable using Python. FreeCAD is primarily written in Python, and you have access to a live Python console while the program is running (View->Views->Python console) that you can use to interactively write code and immediately see the results. Scripting in FreeCAD isn't through some limited programming interface, or with a limited programming language: you have access to pretty much everything inside FreeCAD using standard Python code. You can script repetitive tasks in the UI, generate new parts from scratch, or even add whole new "workbenches" that appear alongside the built-in features in the FreeCAD UI.

Creating a simple part interactively with Python

There are many example macros to try. One of my favorites allows you to generate an airfoil shape from online airfoil profiles. My own Polygon Construction Kit (Polycon) is built inside FreeCAD. The basic idea of Polycon is to convert a simple polygon model into a physical object by creating a set of 3D-printed connectors that can be used to reconstruct the polygon in the real world. The process involves iterating over the 3D model and generating a connector for each vertex of the polygon. Then each connector needs to be exported as an STL file for the 3D printing software. By implementing Polycon as a FreeCAD module, I was able to leverage a huge amount of functionality related to loading the 3D model, generating the connector shapes, and exporting the files for printing. FreeCAD's UI makes it easy to see how the connectors look and to make adjustments to each one as necessary. Then I can export all the connectors as well-organized STL files, all by pressing one button! Doing this manually instead of in code could literally take hundreds of hours, even for a simple model.

FreeCAD is developed by a small group of people and is still in the "alpha" stage, but it has the potential to become a very important tool in the open source ecosystem. FreeCAD fills the need for an open source CAD tool the same way that Blender and GIMP do for 3D graphics and image editing. Another open source CAD tool to check out is OpenSCAD. This tool lets you design solid 3D objects (the kind we like to print!) using a simple programming language. OpenSCAD is a great program: its simple syntax and interface are a great way to start designing solid objects using code and thinking in "X-Y-Z". My first implementation of Polycon used OpenSCAD, but I eventually switched over to FreeCAD since it offers the ability to analyze shapes as well as create them, and Python is much more powerful than OpenSCAD's programming language.

If you're building 3D models to be printed or are just interested in trying out computer-aided design, FreeCAD is worth a look. Commercial offerings are likely to be more polished and reliable, but FreeCAD's parametric modeling, scriptability, and cross-platform support in an open source package are quite impressive. It's a great tool for designing objects to be built in the real world.

About the Author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical realms by constructing real-world objects from simple 3D models. He is one of the organizers of Art Hack Day, an event for hackers whose medium is art and artists whose medium is tech.

What Did Big Data Deliver In 2014?

Akram Hussein
30 Dec 2014
5 min read
Big Data has always been a hot topic, and in 2014 it came into its own. 'Big data' has developed, evolved and matured to give significant value to 'business intelligence'. However, there is much more to big data than meets the eye. Understanding enormous amounts of unstructured data is not easy by any means; yet once that data is analysed and understood, organisations have started to value its importance. 'Big data' has helped create a number of opportunities, ranging from new platforms, tools and technologies, to improved economic performance in different industries, to the development of specialist skills, job creation and business growth. Let's do a quick recap of 2014 and what big data has offered to the tech world from the perspective of a tech publisher.

Data Science

The term 'data science' has admittedly been around for some time, yet in 2014 it received a lot more attention thanks to the demands created by big data. Looking at data science from a tech publisher's point of view, it's a concept that has been rapidly adopted, with potential for greater levels of investment and growth. To address the needs of big data, data science has been split into four key categories: data mining, data analysis, data visualization and machine learning. Equally, there are important topics that fit in between those, such as data cleaning (munging), which I believe takes up the majority of a data scientist's time. The number of jobs for data scientists has exploded in recent times and will continue to grow; according to global management firm McKinsey & Company, there will be a shortage of 140,000 to 190,000 data scientists due to the continued rise of big data, and the role has been described as the 'sexiest job of the 21st century'.

Real-time Analytics

The competitive battle in big data throughout 2014 focused on how fast data could be streamed to achieve real-time performance. Real-time analytics' most important feature is instant access: querying data as soon as it comes through. The concept is applicable to different industries and supports the growth of new technologies and ideas. Live analytics are most valuable to social media sites and marketers in order to provide actionable intelligence. Likewise, real-time data is becoming increasingly important with the phenomenon known as the Internet of Things. The ability to make decisions instantly and plan outcomes in real time is more possible now than ever before, thanks to the development of technologies like Spark and Storm, and NoSQL databases like Apache Cassandra, which enable organisations to rapidly retrieve data with fault-tolerant performance.

Deep Learning

Machine learning (ML) became the new black and is in constant demand by many organisations, especially new startups. However, even though machine learning is gaining adoption and an improved appreciation of its value, the concept of deep learning seems to be the one that really pushed on in 2014. Granted, both ML and deep learning have been around for some time; we are looking at the topics in terms of current popularity levels and adoption in tech publishing. Deep learning is a subset of machine learning that refers to the use of artificial neural networks composed of many layers. The idea is based around a complex set of techniques for finding information to generate greater accuracy of data and results. The value gained from deep learning is that the information (from hierarchical data models) helps AI systems move towards greater efficiency and accuracy, learning to recognize and extract information by themselves, unsupervised! The popularity of deep learning has seen large organisations invest heavily; Google's acquisition of DeepMind for $400 million and Twitter's purchase of Madbits are just a few of the high-profile investments among many. Watch this space in 2015!

New Hadoop and Data Platforms

Hadoop, best associated with big data, changed its batch processing approach from MapReduce to what's better known as YARN towards the end of 2013 with Hadoop 2. MapReduce demonstrated the value and benefits of large-scale, distributed processing. However, as big data demands increased and more flexibility, multiple data models and visual tools became requirements, Hadoop introduced YARN to address these problems. YARN stands for 'Yet Another Resource Negotiator'. In 2014, the emergence and adoption of YARN allowed users to carry out multiple workloads, such as streaming, real-time and generic distributed applications of any kind (YARN handles and supervises their execution!), alongside the MapReduce model. The biggest trend I've seen with the change in Hadoop in 2014 is this transition from MapReduce to YARN. The real value in big data and data platforms is the analytics, and in my opinion that will be the primary point of focus and improvement in 2015.

Rise of NoSQL

NoSQL, also interpreted as 'Not Only SQL', has exploded, with a wide variety of databases coming to maturity in 2014. NoSQL databases have grown in popularity thanks to big data. There are many ways to look at stored data, but it is very difficult to process, manage, store and query huge sets of messy, complex and unstructured data. Traditional SQL systems just wouldn't allow that, so NoSQL was created to offer a way to look at data with no restrictive schemas. The emergence of graph, document, wide-column and key-value store databases has shown no slowdown, and the growth continues to attract a higher level of adoption. However, NoSQL seems to be taking shape and settling on a few major players, such as Neo4j, MongoDB and Cassandra. Whatever 2015 brings, I am sure it will be faster, bigger and better!

Why Google Dart Will Never Win The Battle For The Browser

Ed Gordon
30 Dec 2014
5 min read
This blog is not about programming languages so much as it's about products and what makes good products (or more specifically, why good products sometimes don't get used). I won't talk about the advantages or disadvantages of the syntax or how they work as programming languages, but I will talk about the product side. We can all have an opinion on that, right?

Real people use Dart. Really.

I think we've all seen a recent growth in the number of adopters of 'compile to JavaScript' languages, TypeScript and Dart being the primary ones, with an honourable mention to CoffeeScript for trying before most others. Asana just switched their hundreds of thousands of lines of JS code to TypeScript. I know that apps like Blosom are swapping out the JS-y bits of their code piece by piece. The axiom of my blog is that these things offer real developers (which I'm not) real advantages, right now. They're used because they are good products. They add productivity to a user base that is famously short on time and always working to tight deadlines. They take away no functionality of JavaScript (or very little, for the pedants out there), but you get all the added benefits that the creators deigned to add. And for a select few, they can be a good choice. For online applications where a product lifespan may be 5 years or less, worries about code support for the next 20 years (anyone who still uses Perl) melt away. They aren't doing this because it's hipster; they're doing it because it works for them, and that's cool. I dig that. They will never, however, "ultimately… replace JavaScript as the lingua franca of web development".

Just missed the bull's eye

The main issue from a product perspective is that they are, by design, a direct response to the perceived shortcomings of JavaScript. Their value, and their destiny as products, is to be used by people who have struggled with JavaScript. Is there anyone in the world who learned Dart before they learned JavaScript? They are linked to JavaScript in a way that limits their potential to that of JavaScript. If Dart is the Mercedes-Benz of web languages (bear with me), then JavaScript is just "the car" (that is, all cars). If you want to drive over the Alps, you can choose the comfort of a Merc if you can afford it, but it's always going to be a car: four wheels that take you from point to point. You don't solve the problems of "the car" by inventing a better car. You replace it by creating something completely different. This is why, perhaps, they struggle to see any kind of adoption over the long term. Google Trends can be a great proxy for market size and adoption, and as you can see, "compile-to" languages just don't seem to be able to hold ground over a long period of time. After an initial peak of interest, the products tend to plateau or grow at a very slow rate. People aren't searching for information on these products because, in their limited capacity as 'alternatives to JavaScript', they offer no long-term benefit to the majority of developers who write JavaScript. They have dedicated fans and loyal users, but that base is limited to a small number of people. They are a 'want' product. No one needs them. People want the luxury of static typing, but you don't need it. People want cleaner syntax, but don't need it. But people need JavaScript. For "compile-to" languages to ever be more than niche players, they need to transition from a 'want' product to a 'need' product. It's difficult to do that when your product also 'needs' the thing that you're trying to outdo.

Going all out

In fact, all the 'compile to' tools, languages and libraries have a glass ceiling that becomes pretty visible from their Google Trends charts. Compare this to a Google language that IS its own product, Google Go, and we can see stark differences. Google Go is a language that offers an alternative to Python (and more: it's a fully featured programming language), but it's not even close to being Python. It can be used independently of Python. Could you imagine if Google Go said, "We have this great product, but you can only use it in environments that already use Python. In fact, it compiles to Python. Yay."? This could work initially, but it would stink for the long-term viability of Go as a product able to grow organically, create its own ecosystem of tools and dedicated users, and carve out its own niche in which it thrives. Being decoupled from another product allows it to grow.

A summary of sorts

That's not to say that JavaScript is perfect. It itself actually started as a language designed to coat-tail the fame of Java (albeit a very different language). And when there are so many voices trying to compete with it, it becomes apparent that not all is well with the venerable king of the web. ECMAScript 6 (and 7, 8, 9 ad infinitum) will improve on it and make it more accessible, eventually incorporating into it the 'differences' that set things like Dart and TypeScript apart, and pulling the carpet from under their feet. It will remain the lingua franca of the web until someone creates a product that is not beholden to JavaScript and not limited to what JavaScript can, or cannot, do. Dart will never win the battle for the browser. It is a product that many people want, but few actually need.

Big Data Is More Than Just a Buzz Word!

Akram Hussain
16 Dec 2014
4 min read
We all agree big data sounds cool (well, I think it does!), but what is it? Put simply, big data is the term used to describe massive volumes of data. We are thinking of data along the lines of terabytes, petabytes, and exabytes in size. In my opinion, that's as simple as it gets when thinking about the term "big data."

Despite this simplicity, big data has been one of the hottest, and most misunderstood, terms in the Business Intelligence industry recently; every manager, CEO, and director is demanding it. However, once the realization sets in on just how difficult big data is to implement, they may be scared off! The real reason behind the "buzz" was all the new benefits that organizations could gain from big data. Yet many overlooked the difficulties involved, such as:

- How do you get that level of data?
- If you do, what do you do with it?
- Cultural change is involved, and most decisions would be driven by data. No decision would be made without it.
- The cost and skills required to implement and benefit from big data.

The concept was misunderstood initially; organisations wanted data but failed to understand what they wanted it for and why, even though they were happy to go on the chase.

Where did the buzz start?

I truly believe Hadoop is what gave big data its fame. Initially developed at Yahoo, used in-house, and then open sourced as an Apache project, Hadoop served a true market need for large-scale storage and analytics. Hadoop is so well linked to big data that it's become natural to think of the two together. The graphic above demonstrates the similarities in how often people searched for the two terms. There's a visible correlation (if not causation). I would argue that "buzz words" in general (or trends) don't take off before the technology that allows them to exist does. If we consider buzz words like "responsive web design", it needed the right CSS rules; "IoT" needed Arduino and Raspberry Pi; and likewise "big data" needed Hadoop. Hadoop was on the rise before big data had taken off, which supports my theory. Platforms like Hadoop allowed businesses to collect more data than they could have conceived of a few years ago. Big data grew as a buzz word because the technology supported it.

After the data comes the analysis

However, the issue still remains of collecting data with no real purpose, which ultimately yields very little in return; in short, you need to know what you want and what your end goal is. This is something that organisations are slowly starting to realize and appreciate, represented well by Gartner's 2014 Hype Cycle. Big data is currently in the "Trough of Disillusionment," which I like to describe as "the morning after the night before." This basically means that realisation is setting in, and the excitement and buzz of big data has come down to something akin to shame and regret.

The true value of big data can be categorised into three areas: data types, speed, and reliance. By this we mean: the larger the data, the more difficult it becomes to manage the types of data collected, which will be messy, unstructured, and complex; the speed of analytics is crucial to growth and on-demand expectations; and a reliable infrastructure is at the core of sustainable efficiency. Big data's actual value lies in processing and analyzing complex data to help discover, identify, and make better-informed, data-driven decisions. Likewise, big data can offer a clear insight into strengths, weaknesses, and areas of improvement by discovering crucial patterns for success and growth. However, this comes at a cost, as mentioned earlier.

What does this mean for big data?

I envisage that the invisible hand of big data will be ever present. Even though devices are getting smaller, data is increasing at a rapid rate. When the true meaning of big data is appreciated, it will genuinely turn from a buzz word into something that smaller organisations might be reluctant to adopt. In order to implement big data, they will need to appreciate the need for structural change, the costs involved, the skill levels required, and an overall shift towards a data-driven culture. To gain the maximum efficiency from big data, and to appreciate that it's more than a buzz word, organizations will have to be very agile and accept the risks in order to benefit from the levels of change.

What is Juju?

Wayne Witzel
15 Dec 2014
7 min read
Juju is a service orchestration system. It works with a wide variety of providers, including your own machines, and attempts to abstract away the underlying specifics of a provider so you can focus on your service deployments and how they relate to each other. Juju does that using something called a charm, which we will talk about in this post. Using a simple configuration file, you set up a provider. For this post, we will be using the local provider, which doesn't require any special setup in the configuration file but does depend on some local system packages being installed. In Part 2 of this two-part blog series, we will be using the EC2 provider, but for now we will stick with the defaults. Both posts in this series assume you are using Ubuntu 14.04 LTS.

What is a charm?

A charm is a set of files that tell Juju how a service can be deployed and managed. Charms define properties that a service might have; for example, a MySQL charm knows it provides a SQL database, and a WordPress charm knows it needs a SQL database. Since this information is encoded in the charm itself, Juju is able to fully manage this relation between the two charms. Let's take a look at just how this works.

Using Juju

First, you need to install Juju:

sudo add-apt-repository ppa:juju/stable
sudo apt-get update && sudo apt-get install juju-core juju-local

Once you have Juju installed, you need to create the base configuration for Juju:

juju generate-config

Now that you have done that, let's switch to our local environment. The local environment is a convenient way to test out different deployments with Juju without needing to set up any of the actual third-party providers like Amazon, HP, or Microsoft. The local environment will use LXC containers for each of the service deployments.

juju switch local

Bootstrapping

First, we want to prepare our local environment to deploy services to it using charms. To do that, we do what is called bootstrapping. The juju bootstrap command will generate some configuration files and set up and start the Juju state machine. This is the machine that handles all of the orchestration commands for your Juju deployment:

juju bootstrap

Now that we have our Juju environment up and running, we can start issuing commands. First, let's take a look at our state machine's details. We can see this information using the status command:

juju status

I will be prompting you to use this command throughout this post. It will help to let you know when services are deployed and ready.

Deploying

At this point, you can begin deploying services using charms:

juju deploy wordpress

You can check on the status of your WordPress deployment using the previously mentioned juju status command. Juju also logs details about the creation of machines and the deployment of services to those machines. In the case of the local environment, those logs live at /var/log/juju-USERNAME-local. This can be a great place to find detailed information should you encounter any problems, and it will provide you with a general overview of the commands that have been run. Continue by installing a database that we will later tell our WordPress installation to use:

juju deploy mysql

Once your deployments have completed, your juju status command will output something very similar to this:

environment: local
machines:
  "0":
    agent-state: started
    agent-version: 1.20.6.1
    dns-name: localhost
    instance-id: localhost
    series: trusty
    state-server-member-status: has-vote
  "1":
    agent-state: started
    agent-version: 1.20.6.1
    dns-name: 10.0.3.196
    instance-id: wwitzel3-local-machine-1
    series: precise
    hardware: arch=amd64
  "2":
    agent-state: started
    agent-version: 1.20.6.1
    dns-name: 10.0.3.79
    instance-id: wwitzel3-local-machine-2
    series: trusty
    hardware: arch=amd64
services:
  mysql:
    charm: cs:trusty/mysql-4
    exposed: false
    relations:
      cluster:
      - mysql
      db:
      - wordpress
    units:
      mysql/0:
        agent-state: started
        agent-version: 1.20.6.1
        machine: "2"
        public-address: 10.0.3.79
  wordpress:
    charm: cs:precise/wordpress-25
    exposed: true
    relations:
      db:
      - mysql
      loadbalancer:
      - wordpress
    units:
      wordpress/0:
        agent-state: started
        agent-version: 1.20.6.1
        machine: "1"
        open-ports:
        - 80/tcp
        public-address: 10.0.3.196

The important detail to note from the status output is that the agent-state parameters for both MySQL and WordPress read started. This is generally the sign that the deployment was successful and all is well.

Relations

Now that we've deployed our instances of WordPress and MySQL, we need to inform WordPress about our MySQL instance so it can use it as its database. In Juju, these links are called relations. Charms expose relation types that they provide or that they can use. In the case of our sample deployment, MySQL provides the db relation, and WordPress needs something that provides a db relation. Add the relation with the following command:

juju add-relation mysql wordpress

The WordPress charm has what are called hooks. In the case of our add-relation command shown previously, the WordPress instance will run its relation-changed hook, which performs the basic setup for WordPress, just as if you had gone through the steps of the install script yourself. You will want to use the juju status command again to check on the status of the operation. You should notice that the WordPress instance's status has gone from started back to installed. This is because it is now running the relation-changed hook on the installation. It will return to the started status once this operation is complete.

Exposing

Finally, we need to expose our WordPress installation so people can actually visit it. By default, most charms you deploy with Juju will not be exposed. This means they will not be accessible outside of the local network they are deployed to. To expose the WordPress charm, we issue the following command:

juju expose wordpress

Now that we have exposed WordPress, we can visit our installation and continue the setup process. You can use juju status again to find the public address of your WordPress installation. Enter that IP address into your favorite browser and you should be greeted with the WordPress welcome page asking you to finish your installation.

Logging

Juju provides you with system logs for all of the machines under its control. For a local provider, you can view these logs in /var/log/juju-username-local/; for example, my username is wwitzel3, so my logs are at /var/log/juju-wwitzel3-local. In this folder, you will see individual log files for each machine as well as a combined all-machines.log file, which is an aggregation of all the machines' log files. Tailing the all-machines.log file is a good way to get an overview of the actions that Juju is performing after you run a command. It is also great for troubleshooting should you run into any issues with your Juju deployment.

Up next

So there you have it: a basic overview of Juju and a simple WordPress deployment. In Part 2 of this two-part blog series, I will cover a production deployment use case using Juju and Amazon EC2 by setting up a typical Node.js application stack. This will include HAProxy, multiple load-balanced Node.js units, a MongoDB cluster, and an ELK stack (Elasticsearch, Logstash, Kibana) for capturing all of the logging.

About the Author

Wayne Witzel III resides in Florida and is currently working for Slashdot Media as a Senior Software Engineer, building backend systems using Python, Pylons, MongoDB, and Solr. He can be reached at @wwitzel3.

A short introduction to Gulp (video)

Maximilian Schmitt
12 Dec 2014
1 min read
About the Author

Maximilian Schmitt is a full-time university student with a passion for web technologies and JavaScript. Tools he uses daily for development include gulp, Browserify, Node.js, AngularJS, and Express. When he's not working on development projects, he shares what he has learned through his blog and the occasional screencast on YouTube. Max recently co-authored "Developing a gulp Edge". You can find all of Max's projects on his website at maximilianschmitt.me.

Five Benefits of .NET Going Open Source

Ed Bowkett
12 Dec 2014
2 min read
By this point, I'm sure almost everyone has heard the news about Microsoft's decision to open source the .NET framework. This blog will cover what the benefits of this decision are for developers and what it means. Remember, this is just an opinion, and I'm sure there are differing views out there in the wider community.

More variety

People no longer have to stick with Windows to develop .NET applications. They can choose between operating systems, and this doesn't lock developers down. It makes the market more competitive and, ultimately, opens .NET up to a wider audience. The primary advantage of this announcement is that .NET developers can build more apps to run in more places, on more platforms. It means a more competitive marketplace, and it opens developers up to one of the largest and fastest-growing operating systems in the world, Linux.

Innovate .NET

Making .NET open source allows the code to be revised and rewritten. This will have dramatic outcomes for .NET, and it will be interesting to see what developers do with the code as they continually look for new functionality in .NET.

Cross-platform development

The ability to cross-develop for different operating systems is now massive. Previously, this was only available through the Mono project and Xamarin. With Microsoft looking to add more Xamarin tech to Visual Studio, this will be an interesting development to watch moving into 2015.

A new direction for Microsoft

By opening up .NET as open source software, Microsoft seems to have adopted a more "developer-friendly" approach under its new CEO, Satya Nadella. That's not to say the previous CEO ignored developers, but being more open as a company, and changing its view on open source, has allowed Microsoft to reach out to communities more easily and quickly. Take the recent deal Microsoft made with Docker, and it looks like Microsoft is heading in the right direction in terms of closing the gap between the company and developers.

Acknowledgement of other operating systems

When .NET first came around, around 2002, the entire world ran on Windows; it was the dominant operating system, certainly for the mass audience. Today, that simply isn't the case: you have Mac OS X, you have Linux; there is much more variety. As a result, by going open source, .NET has acknowledged that Windows is no longer the number one option in workplaces.

Beginner Bitcoin Project - Network Visualizer

Alex Leishman
12 Dec 2014
9 min read
This post will give you a basic introduction to the bitcoin protocol by guiding you through how to create a simple, real-time visualization of transactions in the bitcoin network. Bitcoin is easy to understand on the surface, but very complex when you get into the details. The explanations in this guide are simplified to make the content accessible to people unfamiliar with bitcoin. In-depth documentation can be found at https://bitcoin.org.

Overview of the bitcoin network

Bitcoin is a public P2P payment network and ledger enabling people or machines to securely transfer value over the Internet without trusting a third party. The tokens used for this value exchange are called bitcoins (lowercase "b"). A bitcoin is divisible to eight decimal places. Currently one bitcoin has a market value of about $350. Bitcoins "sit" at a bitcoin address, just like money "sits" in a bank account. A bitcoin address is a public identifier like a bank account number. In order to send bitcoins from one address to another, you must prove ownership of the sending address by signing the transaction with the private key of the sending address. This private key is like the PIN or password to your bank account. Every bitcoin address has a unique corresponding private key. The amount of bitcoins in each of the existing addresses is stored in a public ledger called the blockchain. The blockchain holds the history of all valid/accepted transactions sent through the bitcoin network. These transactions are what we will be visualizing.

To create a live visualization of the network we must connect to a bitcoin node or set of nodes. Nodes are servers in the bitcoin network responsible for propagating and relaying transactions. It's important to note that not all transactions sent into the bitcoin network are valid, and most nodes will not relay an invalid transaction. Therefore, although a single node will eventually see any transaction accepted by the network, it will not see many of the spam or malicious transactions, because most other nodes will not propagate them. There are some very well connected nodes (super nodes), or clusters of nodes, that can provide a more comprehensive view of the live state of the network. Blockchain.info (https://blockchain.info) operates a super node and allows developers free access to its data through both REST and WebSockets APIs. We will be using their WebSockets API for this project.

Let's get coding. You can see the finished project code here: https://github.com/leishman/btc_network_visualizer. First we will create a basic index.html file with jQuery included and our main.js file required.
The only HTML element we need is a <div> container:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
  <title></title>
  <meta name="description" content="">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet" href="css/main.css">
</head>
<body>
  <h1>Bitcoin Network Visualizer</h1>

  <!-- Visualization Container -->
  <div class="vizContainer js-visualize"></div>

  <!-- Load jQuery from the jQuery CDN, falling back to a local copy -->
  <script src="https://code.jquery.com/jquery-1.11.1.min.js"></script>
  <script>window.jQuery || document.write('<script src="js/vendor/jquery-1.11.1.min.js"><\/script>')</script>

  <!-- Link main.js, where our visualization code is located -->
  <script src="js/main.js"></script>
</body>
</html>

Let's throw some styling in there:

/* gray container for visualization */
.vizContainer {
  width: 100%;
  height: 500px;
  background: #EEE;
}

/* outer element for bubble */
.txBubble {
  position: absolute;
}

/* inner element for bubble */
.txBubbleInner {
  width: 100%;
  height: 100%;
  position: relative;
  background: green;
  border-radius: 50%;
  -webkit-animation: expand 1s ease-in-out;
}

/* box displaying information about a transaction on click */
.toolTip {
  width: auto;
  height: 40px;
  padding: 0 5px;
  background: #AAA;
  border-radius: 4px;
  color: white;
  position: absolute;
}

/* words in tooltip */
.toolTip p {
  margin: 0;
  line-height: 40px;
}

/* Animations */
@-webkit-keyframes expand {
  0% {
    width: 0;
    height: 0;
    left: 50%;
    top: 50%;
  }
  100% {
    width: 100%;
    height: 100%;
    left: 0;
    top: 0;
  }
}

To get started, we must establish a WebSocket connection with blockchain.info. You can view their documentation [here](https://blockchain.info/api/api_websocket). The following code illustrates how to set up the WebSockets connection, subscribe to all unconfirmed (new) transactions in the network, and log the data to the console:

///////////////////////////////////////////
/// Connect with Blockchain Websockets API
///////////////////////////////////////////

// create new websocket object using a secure connection (wss)
var blkchainSocket = new WebSocket('wss://ws.blockchain.info/inv');

// once the socket connection is established
blkchainSocket.onopen = function(event) {
  var subMessage;

  // message to subscribe to all unconfirmed transactions
  subMessage = '{"op":"unconfirmed_sub"}';

  // send message to subscribe
  blkchainSocket.send(subMessage);
}

// callback to execute when a message is received
blkchainSocket.onmessage = function(event) {
  // parse the returned string into a JSON object
  var txData = JSON.parse(event.data);

  // log data to console
  console.log(txData);
}

If you run this code, you should see a live stream of transactions in your console. Let's take a look at the data we receive:

{
  "op":"utx",
  "x":{
    "hash":"427937e561d2ab6236014d92509a1a872eec327de2b9f6d84cfcbce8af2db935",
    "vin_sz":1,
    "vout_sz":2,
    "lock_time":"Unavailable",
    "size":259,
    "relayed_by":"127.0.0.1",
    "tx_index":68494254,
    "time":1415169009,
    "inputs":[
      {
        "prev_out":{
          "value":29970360,
          "addr":"17L4qAEjVbKZ99iXJFhWMzRWv3c8LHrUeq",
          "type":0
        }
      }
    ],
    "out":[
      {
        "value":3020235,   // output 1, 3020235 Satoshis going to address 1DTnd....
        "addr":"1DTnd8vwpJT3yo64xZST5Srm9Z5JEWQ6nA",
        "type":0
      },
      {
        "value":26940125,  // output 2, 26940125 Satoshis going to address 17L4....
        "addr":"17L4qAEjVbKZ99iXJFhWMzRWv3c8LHrUeq",
        "type":0
      }
    ]
  }
}

There is a lot of information here. The only thing we will concern ourselves with is the output, which specifies the amount and destination of the coins being sent in the transaction.
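Before building any visuals, it can help to confirm that we can pull what we need out of each message. The helper below is a minimal sketch based only on the message format shown above; the function name summarizeOutputs and the commented-out wiring into onmessage are our own illustration, not part of the Blockchain.info API:

// Input: a parsed message object from the WebSocket (shaped like the JSON above)
// Output: the total value in Satoshis and the list of receiving addresses
function summarizeOutputs(txData) {
  var outputs = txData.x.out,
      total = 0,
      addresses = [];

  // each output carries a value (in Satoshis) and a destination address
  for (var i = 0; i < outputs.length; i++) {
    total += outputs[i].value;
    addresses.push(outputs[i].addr);
  }

  return { totalSatoshis: total, addresses: addresses };
}

// Example usage inside the onmessage callback from the previous snippet:
// blkchainSocket.onmessage = function(event) {
//   var summary = summarizeOutputs(JSON.parse(event.data));
//   console.log(summary.totalSatoshis + ' Satoshis -> ' + summary.addresses.join(', '));
// };

The same output-summing logic appears inside the visualize function below, where the total is used to size each bubble.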
It is important to note that a single transaction can send various bitcoin amounts to multiple addresses. The output specifies the destinations and amounts of bitcoin sent by a transaction. Often a transaction has at least two outputs, where one of them sends bitcoins to a receiver and the other sends bitcoins as change back to the sender. The details of this are beyond the scope of this guide, but you can read more about it here.

Now that we have the data, we can visualize it. Let's make a function called visualize that can be passed data sent from the API and display it on the screen:

/////////////////////////
/// Visualize the data
/////////////////////////

// Input: JSON data object representing a new Bitcoin transaction from the Blockchain.info API
// Result: Append a circle (bubble) to the DOM with a size proportional to the transaction value
function visualize(data) {
  // declare variables
  var r, outputs, txDot, vizHeight, vizWidth, vizContainter, dot,
      randTop, randLeft, txBubble,
      txVal = 0,
      valNorm = 10000000;

  // query DOM for viz container
  vizContainter = $('.js-visualize');

  // get height and width of viz container
  vizHeight = vizContainter.height();
  vizWidth = vizContainter.width();

  // get the outputs of the transaction
  outputs = data.x.out;

  // sum all outputs to calculate the total value of the Tx
  for(var i = 0; i < outputs.length; i++){
    txVal += outputs[i].value;
  }

  // calculate radius of circle to display based on Tx value
  r = (txVal / valNorm) / 2;

  // generate random position
  randTop = randomInt(vizHeight) + 88;
  randLeft = randomInt(vizWidth) - r;

  // set min and max sizes for radius (r)
  if(r < 5) {
    r = 5;
  } else if(r > 100) {
    r = 100;
  }

  // create HTML elements to use as bubble
  txBubble = $('<div class="txBubble"><div class="txBubbleInner"></div></div>')
    .css({'top': randTop, 'left': randLeft, 'width': r, 'height': r})
    .attr('data-txvalue', txVal);

  // add bubble element to DOM
  dot = vizContainter.append(txBubble);
}

This code creates a new element styled as a circle for each new transaction received. The size of the circle represents the total amount of bitcoins moved in the transaction. We set minimum and maximum bounds on the radius to prevent invisible or oversized circles from appearing.

We can then add a tooltip to each bubble to display the total value of the transaction. The values returned by blockchain.info are expressed in Satoshis (named after the mysterious creator of bitcoin, Satoshi Nakamoto). One bitcoin is equal to 100 million Satoshis, so we have a function called satoshi2btc to do the conversion for us:

/////////////////////////////////////////
/// Create a tooltip to display Tx data
//////////////////////////////////////////

// Input: event, passed by the calling function.
//   showTooltip() acts as a callback function when a bubble is clicked;
//   see $(document).on('click', '.txBubble', showTooltip); below
// Result: Display a tooltip with the transaction value (in BTC) represented by the bubble
function showTooltip(event) {
  // declare variables
  var addrs, value, tooltip, xCoord, yCoord;

  // get value of tx stored as data attribute
  value = $(this).data('txvalue');

  // get coordinates of user's click
  xCoord = event.clientX;
  yCoord = event.clientY;

  // remove other tooltips to ensure only 1 is displayed at a time
  $('.toolTip').remove();

  // create a tooltip and position it at user's click
  tooltip = $('<div class="toolTip"></div>')
    .css({'top': yCoord, 'left': xCoord})
    .html('<p>' + satoshi2btc(value) + ' BTC</p>');

  // add tooltip to DOM
  $('.js-visualize').append(tooltip);
}

// define random integer function
// randomInt(5) will return a number from 0 to 4
function randomInt(range) {
  return Math.floor(Math.random() * range);
}

// convert Satoshis to BTC
// There are 100,000,000 Satoshis in 1 Bitcoin
function satoshi2btc(val) {
  return val / 100000000;
}

/////////////////////////////////////////
/// Bind Tooltip event on document load
//////////////////////////////////////////

// bind showTooltip function on DOM load
$(function() {
  $(document).on('click', '.txBubble', showTooltip);
});

In summary, we used the blockchain.info WebSockets API to create a live JavaScript visualization of the transactions in the bitcoin network, and we used our new understanding of the bitcoin protocol to visually represent the value of each transaction. This is just the tip of the iceberg and a great way to get your feet wet with bitcoin development.

About the Author

Alex Leishman is a software engineer who is passionate about bitcoin and other digital currencies. He works at MaiCoin.com where he is helping to build the future of money.

article-image-top-4-business-intelligence-tools
Ed Bowkett
04 Dec 2014
4 min read
Save for later

Top 4 Business Intelligence Tools

Ed Bowkett
04 Dec 2014
4 min read
With the boom in data analytics, Business Intelligence has taken center stage in recent years, and as a result a number of Business Intelligence (BI) tools have appeared. These tools let a business obtain a reliable set of data faster and more easily, and use it to set business objectives. Below is a list of the more prominent tools, with the advantages and disadvantages of each.

Pentaho

Pentaho was founded in 2004 and offers, among others, a suite of open source BI applications under the name Pentaho Business Analytics. It comes in two editions, enterprise and community. It allows easy access to data and even easier ways of visualizing that data, from a variety of sources including Excel and Hadoop, and it covers almost every platform, from mobile, Android, and iPhone through to Windows and even the web. With the pros come cons, however: the Pentaho Metadata Editor is difficult to understand, and the documentation offers few solutions for this tool (which is a key component). Also, compared with the other tools mentioned below, the advanced analytics in Pentaho need improving. That said, because it is open source, it is continually improving.

Tableau

Founded in 2003, Tableau also offers a range of suites, focusing on three products: Desktop, Server, and Public. Benefits of Tableau over other products include its ease of use and a simple UI built around drag-and-drop tools, which allows pretty much anyone to use it. Creating a highly interactive dashboard that draws on various data sources is simple and quick. To sum up, Tableau is fast. Incredibly fast! There are relatively few cons when it comes to Tableau, but some automated features you would usually expect in other suites aren't offered for most processes and uses here.

Jaspersoft

Another open source suite, Jaspersoft ships with a number of data visualization, data integration, and reporting tools. Added to its small licensing cost, this makes Jaspersoft justifiably one of the leaders in this area. It can be used with a variety of databases, including Cassandra, CouchDB, MongoDB, Neo4j, and Riak. Other benefits include ease of installation, and the functionality of Jaspersoft's tools is better than that of most competitors on the market. However, the documentation has been criticized for not helping customers dive deeper into Jaspersoft, and if you customize it, customer service can no longer assist you if it breaks. Given the functionality and the ability to extend it, these cons seem minor.

Qlikview

Qlikview is one of the oldest Business Intelligence tools on the market, having been around since 1993. It has many features and, as a result, many pros and cons, including some already mentioned for the previous suites. Among its advantages, Qlikview takes very little time to implement and is incredibly quick; quicker than Tableau in this regard! It also has 64-bit in-memory processing, which is among the best on the market. Qlikview offers good data mining tools, a rich feature set (having been on the market for a long time), and a visualization function. These aspects make it much easier to deal with than other tools, and the learning curve is relatively small. On the downside, while Qlikview is easy to use, Tableau is seen as the better suite for analyzing data in depth. Qlikview also has difficulty integrating map data, which other BI tools handle better.

This list is not definitive! It lays out some open source tools that companies and individuals can use to help them analyze data and prepare business performance KPIs. There are other tools used by businesses, including Microsoft BI tools, Cognos, MicroStrategy, and Oracle Hyperion. I've chosen to explore BI tools that are quick to use out of the box and that are incredibly popular and growing in usage.
article-image-deep-neural-networks-bridging-between-theory-and-practice
Sancho McCann
02 Dec 2014
4 min read
Save for later

Deep neural networks: Bridging between theory and practice

Sancho McCann
02 Dec 2014
4 min read
Recently, Packt signed up to offer print and ebook bundling through BitLit so that our readers can easily access their books in any format. BitLit is an innovative app that allows readers to bundle their books retroactively. Instead of relying on receipts, BitLit uses computer vision to identify print books by their covers and a reader by their signature. All you need to bundle a book with BitLit is a pen, your smartphone, and the book. Packt is really excited to have partnered with BitLit to offer bundling to our readers. We've asked BitLit's Head of R&D, Sancho McCann, to give our readers a deeper dive into how BitLit uses pre-existing research on deep neural networks.

Deep neural networks: Bridging between theory and practice

What do Netflix recommendations, Google's cat video detector, and Stanford's image-to-text system all have in common? A lot of training data, and deep neural networks.

This won't be a tutorial about how deep neural networks work. There are already excellent resources for that (this one by Andrej Karpathy, for example). But even with a full understanding of how deep neural nets work, and even if you can implement one, bridging the gap between a prototype implementation and a production-ready system may seem daunting. The code needs to be robust, flexible, and optimized for the latest GPUs. Fortunately, this work has already been done for you. This post describes how to take advantage of that pre-existing work.

Software

There is a plethora of deep neural network libraries available: Caffe, CUDA-Convnet, Theano, and others. At BitLit, we have selected Caffe. Its codebase is actively developed and maintained. It has an active community of developers and users. It has a large library of layer types and allows easy customization of your network's architecture. It has already been adapted to take advantage of NVIDIA's cuDNN, if you happen to have it installed. cuDNN is "a GPU-accelerated library of primitives for deep neural networks". This library provides optimized versions of core neural network operations (convolution, rectified linear units, pooling), tuned to the latest NVIDIA architectures. NVIDIA's benchmarking shows that Caffe accelerated by cuDNN is 1.2-1.3x faster than the baseline version of Caffe.

In summary, the tight integration of NVIDIA GPUs, CUDA, cuDNN, and Caffe, combined with the active community of Caffe users and developers, is why we have selected this stack for our deep neural network systems.

Hardware

As noted by Krizhevsky et al. in 2012, "All of our experiments suggest that our results can be improved simply by waiting for faster GPUs…" This is still true today. We use both Amazon's GPU instances and our own local GPU server. When we need to run many experiments in parallel, we turn to Amazon. This need arises when performing model selection: to determine how many neural net layers to use, how wide each layer should be, and so on, we run many experiments in parallel to determine which network architecture produces the best results. Then, to fully train (or later, retrain) the selected model to convergence, we use our local, faster GPU server.

[Selecting the best model via experimentation.]

Amazon's cheapest GPU offering is their g2.2xlarge instance. It contains an NVIDIA Kepler GK104 (1534 CUDA cores). Our local server, with an NVIDIA Tesla K40 (2880 CUDA cores), trains about 2x as quickly as the g2.2xlarge instance. NVIDIA's latest offering, the K80, is again almost twice as fast, benchmarked on Caffe.
If you’re just getting started, it certainly makes sense to learn and experiment on an Amazon AWS instance before committing to purchasing a GPU that costs several thousand dollars. The spot price for Amazon’s g2.2xlarge instance generally hovers around 8 cents per hour. If you are an academic research institution, you may be eligible for NVIDIA’s Academic Hardware Donation program. They provide free top-end GPUs to labs that are just getting started in this field.

It’s not that hard!

To conclude, it is not difficult to integrate a robust and optimized deep neural network in a production environment. Caffe is well supported by a large community of developers and users. NVIDIA realizes this is an important market and is making a concerted effort to be a good fit for these problems. Amazon’s GPU instances are not expensive and allow quick experimentation.

Additional Resources

Caffe Example: Training on MNIST
NVIDIA Academic Hardware Request

About the Author

Sancho McCann (@sanchom) is the Head of Research and Development at BitLit Media Inc. He has a Ph.D. in Computer Vision from the University of British Columbia.

article-image-art-hack-day
Michael Ang
28 Nov 2014
6 min read
Save for later

Art Hack Day

Michael Ang
28 Nov 2014
6 min read
Art Hack Day is an event for hackers whose medium is art and artists whose medium is tech. A typical Art Hack Day event brings together 60 artist-hackers and hacker-artists to collaborate on new works in a hackathon-style sprint of 48 hours leading up to a public exhibition and party. The artworks often demonstrate the expressive power of new technology, radical collaboration in art, or a critical look at how technology affects society. The technology used is typically open, and sharing source code online is encouraged.

[Hacking an old reel-to-reel player for Mixtape. Photo by Vinciane Verguethen.]

As a participant (and now an organizer) I've had the opportunity to take part in three Art Hack Day events. The spirit of intense creation in a collaborative atmosphere drew me to Art Hack Day. As an artist working with technology it's often possible to get bogged down in the technical details of realizing a project. The 48-hour hackathon format of Art Hack Day gives a concrete deadline to spur the process of creation and is short enough to encourage experimentation. When the exhibition of a new work is only 48 hours away, you've got to be focused and solve problems quickly. Going through this experience with 60 other people brings an incredible energy.

Each Art Hack Day is based around a theme. Some examples include "Lethal Software", "afterglow", and "Disnovate".

The Lethal Software art hack took place in San Francisco at Gray Area. The theme was inspired by the development of weaponized drones, pop-culture references like Robocop and The Terminator, and software that fights other software (e.g. spam vs spam filters). Artist-hackers were invited to create projects engaging with the theme that could be experienced by the public in person and online. Two videogame remix projects included KillKillKill!!!, where your character would suffer remorse after killing the enemy, and YODO Mario (You Only Die Once), where the game gets progressively glitched out each time Mario dies and the second player gets to move the holes in the ground in an attempt to kill Mario. DroneML presented a dance performance using drones, and Cake or Death? (the project I worked on) repurposed a commercial drone into a CupCake Drone that delivered delicious pastries instead of deadly missiles.

[A video game character shows remorse in KillKillKill!!!]

The afterglow Art Hack Day in Berlin, part of the transmediale festival, posed a question relating to the ever-increasing amount of e-waste and the overabundance of collected data: "Can we make peace with our excessive data flows and their inevitable obsolescence? Can we find nourishment in waste, overflow and excess?" Many of the projects reused discarded technology as source material. PRISM: The Beacon Frame caused controversy when a technical contractor thought the project seemed closer to the NSA PRISM surveillance project than an artistic statement and disabled it. The Art Hack Day version of PRISM demonstrated how easily cellular phone connections can be hijacked: festival visitors coming near the piece would receive mysterious text messages such as "Welcome to your new NSA partner network". With the show just blocks away from the German parliament and recent revelations of NSA spying, the piece seemed particularly relevant.

[A discarded printer remade into a video game for PrintCade]

Disnovate was hosted by Parsons Paris as part of the inauguration of their MFA Design and Technology program.
Art Hack Day isn't shy about examining the constant drive for innovation in technology, and even the hackathon format that it uses: "Hackathons have turned into rallies for smarter, cheaper and faster consumption. What role does the whimsical and useless play in this society? Can we evaluate creation without resorting to conceptions of value? What worldview is implied by the language of disruption; what does it clarify and what does it obscure?"

Many of the works in this Art Hack Day had a political or dystopian statement to make. WAR ZONE recreated historical missile launches inside Google Earth, giving a missile's-eye view of the trajectory from launch site to point of impact. The effect was both mesmerizing and terrifying. Terminator Studies draws connections between the fictional Terminator movie and real-world developments in the domination of machines and surveillance. Remelt literally recast technology into a primitive form by melting down aluminum computer parts and forming them into Bronze Age weapons, evoking the fragility of our technological systems and our often warlike nature. On a more light-hearted note, Drinks At The Opening Party presented a table of empty beer bottles. As people took pictures of the piece using a flash, a light sensor would trigger powerful shaking of the table that would actually break the bottles. Trying to preserve an image of the bottles would physically destroy them.

[Edward Snowden gets a vacation in Paris as Snowmba. Photo by Luca Lomazzi.]

The speed with which many of these projects were created is testament to the abundance of technology that is available for creative use. Rather than using technology in pursuit of "faster, better, more productive", artist-hackers are looking at the social impacts of technology and its possibilities for expression and non-utilitarian beauty. The collaborative and open atmosphere of Art Hack Day gives rise to experimentation and new combinations of ideas.

Technology is one of the most powerful forces shaping global society. The consummate artist-hacker uses technology in a creative way for social good. Art Hack Day provides an environment for these artist-hackers and hacker-artists to collaborate and share their results with the public. You can browse through project documentation and look for upcoming Art Hacks on the Art Hack Day website or via @arthackday on Twitter.

Project credits

Mixtape by John Nichols, Jenn Kim, Khari Slaughter and Karla Durango
KillKillKill!!! by bigsley and Martiny
DroneML by Olof Mathé, Dean Hunt, and Patrick Ewing
YODO Mario (You Only Die Once) by Tyler Freeman and Eric Van
Cake or Death? by Michael Ang, Alaric Moore and Nicolas Weidinger
PRISM: The Beacon Frame by Julian Oliver and Danja Vasiliev
PrintCade by Jonah Brucker-Cohen and Michael Ang
WAR ZONE by Nicolas Maigret, Emmanuel Guy and Ivan Murit
Terminator Studies by Jean-Baptiste Bayle
Remelt by Dardex
Drinks At The Opening Party by Eugena Ossi, Caitlin Pickall and Nadine Daouk
Snowmba by Evan Roth

About the Author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical realms by constructing real-world objects from simple 3D models. He is a participant and sometimes organizer of Art Hack Day, an event for hackers whose medium is art and artists whose medium is tech.