NPM and Distribution Path Length Problems

Adam Lynch
07 Dec 2015
6 min read
You might have been unfortunate enough to learn that Windows has a 256 character limit on file paths. You could've run into this problem locally or on end users' machines. There's no real workaround, but there are preventive measures you can take. Even if you haven't hit it, feel free to take pleasure from reading my horror story.

NPM

Neither this problem nor our solution is exclusive to Node.js, but a lot of the victims of the path length problem were probably running Node.js on Windows. Windows users know that they often get left out in the cold by npm package maintainers, but even the design of npm itself is a problem on Windows from the get-go. npm stores your dependencies (listed in your package.json) in a node_modules directory. If those dependencies have dependencies of their own, they're stored in their own node_modules directory (i.e. your-project/node_modules/a/node_modules/b/) and so on, recursively. It's nice, but in hindsight it's obviously incompatible with Windows's path length limit.

Delete, delete, delete

Most people have probably been lucky enough to only come across this problem when trying to delete dependencies, when Windows complains that the path is too long. A simple way around this is to take a module deep down in your dependency graph (i.e. node_modules/a/node_modules/b/node_modules/c/.../node_modules/h/) in Windows Explorer and move it somewhere closer to the root (e.g. node_modules/) to cut the file path down before trying to delete it again. This has to be repeated for every culprit. There are also some tools which can help; I've noticed that you can delete really long paths while using 7-Zip File Manager to browse files.

Runtime errors

If you've run into actual bugs caused by this, you could find a module halfway down the dependency graph and add it as a dependency to your project so it will be installed under the top-level node_modules and not a node_modules directory n levels deep. Make sure to install the correct version and test thoroughly. There are also a few Node modules out there which "flatten" your dependency graph. The downside to these modules is that if there is a conflict (package A depends on version 1.0.0 of package Z and package B depends on version 3.2.1 of package Z), then the latest version of the module (package Z) is used, which could be problematic. So be careful.

Can't npm fix this?

You might see people reference Windows APIs (which support long paths) as a possible fix, but it is very unlikely this will be fixed in npm. npm dedupe should help with this too, but it's not reliable in my experience.

Yes, they can! This has been fixed as of npm 3.0.0 (yet to be released). Your dependencies will now be installed maximally flat. Insofar as is possible, all of your dependencies, and their dependencies, and THEIR dependencies will be installed in your project's node_modules folder with no nesting. You'll only see modules nested underneath one another when two (or more) modules have conflicting dependencies. Excuse me... *dances*. Mind you, it's a bit late for me. Unless npm 3 also ships with a time machine. More about that in a bit.

Manually checking for exceedingly long paths

Up until now, I've had to routinely check for long paths using Path Length Checker (on Windows), but a manual check is not good enough as stuff can still slip through the net.
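To get a feel for what such a check involves, here's a minimal, hypothetical Node sketch of a manual sweep; the node_modules root and the 256-character limit are assumptions you'd adjust for your own project:

```js
// check-path-length.js - a rough sketch of a manual path length check,
// assuming a 256-character limit and a local node_modules directory.
var fs = require('fs');
var path = require('path');

var MAX_LENGTH = 256;

function walk(dir) {
    fs.readdirSync(dir).forEach(function (entry) {
        var full = path.join(dir, entry);
        if (full.length > MAX_LENGTH) {
            console.error('Path too long (' + full.length + '): ' + full);
        }
        if (fs.statSync(full).isDirectory()) {
            walk(full); // recurse into nested node_modules directories
        }
    });
}

walk(path.resolve('node_modules'));
```

A script like this suffers from the same problem as any manual check, though: you have to remember to run it.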
Introducing gulp-path-length

So there's a simple Gulp plugin to help with this: gulp-path-length. You could use it like this in a Gulp task:

```js
var gulp = require('gulp');
var pathLength = require('gulp-path-length');

gulp.task('default', function(){
    gulp.src('./example/path/to/directory/**', {read: false})
        .pipe(pathLength());
});
```

If all is well, nothing will happen. If you have a path exceeding 256 characters, the Gulp task will stop and an error will reveal the offending path. This is really fast either way, as Gulp doesn't need to read the contents of the files. The limit can be changed with a parameter, i.e. .pipe(pathLength({ maxLength: 50 }));. This is fine if it's just for you locally, but there are bigger fish to fry.

Distributed long paths

What if there are multiple developers working on your project? What if a developer is using Mac OS X or Linux? There could easily be false positives. It's one thing having issues locally or within a team; it's a whole other thing to have path length problems in production on end users' machines. I've had that pleasure myself with Teamwork Chat, which we built on top of NW.js. NW.js is basically Node.js and Chromium mashed together to allow you to create desktop apps from Web apps. Any NW.js app can access all of Node's core modules and any other modules you've installed from npm, for example. Therefore the npm-Windows path length issue applies here too. Depending on how long the end user's username was, the user might've seen something like this when they tried to launch Teamwork Chat: a dummy application. None of our app code is executed. This means no error reports and no way the app could even auto-update once a patch was released. As a maintainer of nw-builder, I know we're not the only ones who have faced this problem.

Is there anything we can do? Once the code is shipped, it's too late. Luckily in my case, we have a rough idea where the files will exist on end users' machines thanks to our Windows installer. This is where gulp-path-length's rewrite option comes in. It can be used like this to simulate path lengths:

```js
var gulp = require('gulp');
var pathLength = require('gulp-path-length');

gulp.task('default', function(){
    gulp.src('./example/path/to/directory/**', {read: false})
        .pipe(pathLength({
            rewrite: {
                match: './example/path/to/directory/',
                replacement: 'C:\\Users\\a-long-username-here\\AppData\\BlahBlahBlah\\'
            }
        }));
});
```

So it doesn't matter where you are on your filesystem or which operating system you're using; it will test the length of files in a given directory as if they were in a hypothetical directory on Windows. You could run this before you ship your code, but we've added this to a compilation build step so we catch it as early as possible. If a Mac developer adds really long paths to the project (like an npm dependency which depends on a chain of lodash modules), they'll see right away that this will break stuff for some Windows users. For good measure, we also run it in a continuous integration step.

About the author

Adam Lynch is a Teamwork Chat Product Lead & Senior Software Engineer at Teamwork. He can be found on Twitter @lynchy010.

AngularJS 2.0 is a tempest we should all embrace

Ed Gordon
19 Nov 2015
5 min read
2016 will be the year of AngularJS 2.0 and it's going to be awesome. AngularJS has been a known quantity to Packt for about 4 years, and has been around for 6. In the last 24 months, we've really seen it gain massive adoption amongst our user base. Conferences are held in its name. It will come as no surprise that it's one of our best-selling topics. Thousands of apps have been deployed and created with it. People do, in fact, love it. So the decision to rewrite the entire project seems odd. A lot has been written about this already from developers who know their stuff. Some are for it, some against it, and some are a little more balanced. For a technically reasoned article, Rob Eisenberg's blog about AngularJS 2.0 is the best of many I've read. For one that quotes Shakespeare, read on. At Packt I've been the commissioning editor on a fair number of products. You may remember me from such hits as MEAN Web Development and Mastering D3.js. While I may not have the developer nous, creating a product is the same process whether it is a good framework or a good book. And part of this process is understanding when you've got a good product, and when you've got a good product that needs ripping up and starting over.

What's past is prologue

AngularJS's design was emergent from increased adoption. It started life as a tool to help designers throw up a quick online form. It was an internal tool at Google. They didn't realise that every Joe Web Developer would be using it to power their client's not-so-SEO-friendly bespoke applications. It's the equivalent of what would happen if people started using this blog as a template for all future blogs. I'd enjoy it for the first few years, living the blogosphere high-life, then people would start moaning to me, and I would hate it. I'd have to start again, for my own health as much as for the health of the millions of bloggers who were using my formatting to try and contain their vastness. So we're agreed that they need to change things. Good.

Oh brave new world / That has such features in't

Many frameworks change things whilst maintaining backwards compatibility. WordPress is a famous example of doing everything possible to avoid introducing breaking changes at any major update. The result is, by now, a pretty bloated application that, much like Angular, started out serving a very different purpose to how it now finds itself being deployed. It's what gave rise to smaller, lighter-weight platforms like Ghost, designed purely for blogging. AngularJS, however, is not an example of developers maintaining backwards compatibility. It takes pleasure in starting over. In fact, you can just about rip up your old Angular apps now. It's for your own good. By starting from a clean slate, the Angular team have the chance to design AngularJS into what it should be rather than what it ended up being. It may not make sense to the developers who are using Angular 1.x at the moment, but to be frank, Google doesn't care. It cares about good products. It's planning a product that will endeavour to remain relevant into the future, rather than spending its time trying to patch up something that was the result of rushed 2010 thinking. Part of this attempt at continued relevance is TypeScript. TypeScript extends the capabilities of ES6; moving to AngularJS 2.0 before ES7 is released means that TypeScript is recommended to make the most of what Angular offers. This is a big move, but it's an attempt at moving the capabilities forward.
Doing something is always preferable to doing nothing. The other headline act, related to the ES6 features, is the move to make Angular compatible with Web Components. Web Components will redefine what web development means, in time, and making sure that their framework is on hand to help deliver them safely to developers is again a smart product decision. The temporary pain of the rewrite will be rewarded by increased ease of use and longevity for the developers and clients who build and consume AngularJS applications. There are a whole host more features: a move to mobile-first design, which I understand; lots of technical and syntax improvements, which I don't; increased performance; and plenty more too. Every decision is being made to make Angular a better product for everyone who uses it.

Gentle breath of yours my sails / Must fill, or else my project fails

AngularJS 2.0 has been a divisive figure in the web development world. I've been at Packt for three years and can't remember a time when the team behind such a popular and well-used technology completely ripped up everything they had and started again. It will set a precedent in software that will shape the future, either way it 'goes down'. What we should focus on is that this wholesale change is designed to make the product better - not just now, but into the future - and that decision should be applauded. It's not unheard of for Google to stop/start/abandon high-profile projects (cough Google Glass cough), but they should be recognised nonetheless for their dedication in trying to make this a more accessible and useful platform long term. Ultimately, though, it will be the users who decide if they won or lost. The team are bringing a different project in the hope that people see its advantages, but no matter the intent, a product is only useful if its consumers find it useful. Through our 'gentle breath', the Angular project will fly or fail. Let's embrace it.

Knowledge in Motion

Sam Wood
17 Nov 2015
3 min read
Since releasing their first video training title in March 2013, Packt Publishing have been working hard on producing high quality video learning. Now, to celebrate their 150th video title, Packt are inviting everyone to try out learning in motion with a massive 80% discount sale across all their video products.

Why learn from video?

• See example projects in action
• Write your code along with your instructor
• Master a technology in just a few hours

Why does Packt believe in video learning?

Packt wants to show the world how to put software to work. When surveyed, Packt's video users said that they most prized the practicality of video, and the ability to watch a project working visually. Video offers the chance for learners to write their code along with their instructor, to really go hands-on when getting to grips with a new technology. In just a few hours, video lets its viewers gain a level of technical mastery - whether pushing their skills to the limit, or diving into something completely new.

What does Packt Video offer?

• Courses on Web Development, Big Data, App Dev, Game Dev, and more
• Over 350 hours of video training spread across 150+ courses
• Expert authors from companies such as Google, IBM, and Yahoo

Time is valuable - so all of Packt Video is built around the twin concepts of Curation and Concision. Expert authors and editors work hard to ensure each video is focused on the key information on each subject, taught in the most efficient and informative manner possible. All Packt video courses are authored by experts from across the globe - all with working professional experience as top consultants, trainers, or lead developers at some of the world's most innovative organisations. From Python to JavaScript, 3D Printing to Data Visualization, Packt Video encompasses a huge variety of subjects. Packt are proud to know that 9/10 developers would buy a Packt video course again, and 8/10 would recommend Packt's video courses to their friends and colleagues.

Get started with Video Week

• 80% off all courses for this week only
• Free courses available to view for this week only with a Packt login

Want to try out Packt Video and start skilling up with new and dynamic learning? For this week only, all video courses are on offer at an amazing 80% discount. Top courses are available to view for free in our PacktLib web platform for anyone with a Packt account - check them out now!

Diving into Juju

Adam Israel
17 Nov 2015
4 min read
In my last post, I introduced you to Juju and talked about how it could help you. Now I'd like to walk you through some real world examples of Juju in action.

Workloads

Bundles are used to represent workloads. Bundles can be simple, like the Wordpress example in the previous post, or complex.

OpenStack

If you're not familiar with OpenStack, it is a collection of services for managing a cloud computing platform. Think Infrastructure as a Service (IaaS). I've heard horror stories from people who've spent months trying to deploy and configure OpenStack. Several different tools for automating an OpenStack deployment have been developed, and a video from the October 2015 OpenStack Summit compares them (with Juju a strong favorite). How easy is it to install OpenStack? It's as easy as this:

```
$ juju quickstart openstack-base
```

Sit back and wait, and soon you'll have an OpenStack environment with Keystone, Glance, Ceph, Nova-compute and more, ready for testing.

Big Data

This is the area I'm personally most excited about. Big Data solutions are all aimed at taking some of the complexity out of analysing huge datasets. The base of these workloads is Apache Hadoop, bundled with tools like MapReduce, Tez, Hive, Pig and Storm. Drink from the Twitter firehose with Flume. Crunch open data from your favorite city or government to spot trends in voter turnout, or track neighborhood gentrification or crime rates.

Containers

Containers are the new hot thing, but there's no reason why you can't use Juju to orchestrate the deployment of them. Docker? No problem. Kubernetes? Juju does that, too. There are advantages to containerizing your application. It gives you a nice layer of isolation, and the addition of container networking with Flannel makes it even more powerful. Juju steps in to complement the benefits of a container by offering a way to manage and scale them in the cloud. As a developer, you can write a Dockerfile to launch your application, and use the Docker charm to deploy it.

Things at scale

For a typical development workflow, you may only need the bare minimum of machines to run your application. Once you deploy to the cloud for production use, you're going to need the ability to scale your application. For example, if your database is running slow, you can easily scale up:

```
$ juju add-unit mysql -n 3
```

This would add three new units to MySQL and configure replication and failover, things that are complicated and often fragile to do by hand.

Benchmarking

The cloud offers a dizzying array of hardware options. Spinning rust or SSD. Lots of memory, or CPU, or both. 1 or 10 Gigabit networking. You can speculate about which options are best suited for your application, but even the most well-informed of guesses can be wrong when put to practice. Benchmarking provides the ability to exercise a service in order to evaluate its performance, and to collect hardware and software statistics to monitor how your workload is performing. Maybe you want to test your database under load, or stress your web application, or identify potential bottlenecks. Could it be disk or network I/O slowing you down? Is it poorly optimized database queries? This is the tool you'll want to use to answer those questions. Workloads are complex things, with many moving parts. Like the Hydra, bottlenecks are a shifting target; strike down one and two more rise to take its place. In order to tune workloads, I've gone hunting for blog posts or white papers showing best practices for the services I use.
I'm often frustrated, though, because all the pretty graphs in the world don't help me if I can't replicate the results. It leads to a trust issue: sure, it ran fast for you, but how do I recreate it? Benchmarking's focus on repeatable, reliable testing means that you can run benchmarks over and over again and expect to see similar results. You can then make adjustments to your hardware or software, repeat the benchmark and compare the results. That effort can then be distilled into best practices that anyone using or deploying a service can benefit from.

Conclusions

Juju is a robust devops tool, reducing the complexity of cloud development and orchestration. Its growing community of users and contributors, including IBM, Intel, Microsoft, Cisco and China Telecom, means it's going to be around for a long time. Dig deeper into the best DevOps solutions with a look at the best command line tools in our article - read it now!

About the author

Adam Israel has worn many hats over the past twenty years, from help desk to Chief Technical Officer, from point of sale software to search engines and ad server platforms. He currently works at Canonical, Ltd as a Software Engineer, with a focus on cloud development and operations.

An introduction to Juju

Adam Israel
16 Nov 2015
5 min read
You've finished that application you've been working on for weeks or months and you're ready to show it to the world. Now you log in to your favorite cloud, launch instances, and set up monitoring, security groups and firewalls. All of that is well and good, but it's tedious work. You have to learn how to use all of your cloud's proprietary bits and services. If you decide to move to a different cloud later, you'll have to learn that provider's quirks and possibly rewrite portions of your application that depended on cloud-specific services. Juju solves that problem (and more). Juju is a cloud orchestration and service modeling toolkit. With it, you can deploy your application to clouds like Amazon, Azure, Google Compute Engine (GCE), Joyent or Digital Ocean, or to your own bare metal via MAAS. Best of all, Juju does this in a repeatable, reliable fashion with a few simple commands.

Why should you care?

Being in DevOps means being agile. It provides the fast iteration of writing code, testing code, and deploying code. And Juju embraces the DevOps philosophy by taking the tedious and time-consuming tasks and making them nimble. In the past, I managed dozens of servers with a set of bash scripts wrapped around ssh and rsync to deploy updated code, and manually managed database and memcached clusters as well as load balancers. Later, we upgraded to a Puppet and Kickstart workflow. Each method worked, and was an improvement on the previous, but neither was spectacular. I wish I'd known about Juju at the time. I would have spent way less time deploying and more time coding.

The anatomy of Juju

A Charm is a structured collection of files that describe what you're installing, how to install it, its configuration options and what other service(s) it speaks to.

```
.
├── actions
│   ├── backup
│   └── benchmark
├── hooks
│   ├── config-changed
│   ├── install
│   ├── start
│   ├── stop
│   └── upgrade-charm
├── config.yaml
└── metadata.yaml
```

Actions are scripts that run against your application. Not all charms have them, but those that do usually encapsulate administrative tasks. They can also run benchmarks, to analyze the performance of your application, for example, across different clouds or hardware configurations in order to find the best balance between cost and performance.

Hooks are things that run in response to something happening. They are also idempotent, meaning that they can be run multiple times with the same result. The standard set of hooks includes:

• The install hook is the first executed. As the name implies, it installs any software needed to run a program.
• The start and stop hooks start or stop your application.
• The config-changed hook is executed any time one of the options defined in config.yaml is changed.
• The upgrade-charm hook handles updates to the charm or your application.

As a user, these hooks are executed for you when certain events happen. As a developer, there are a series of best practices to help you write charms that fit the above model.

Relationships

Relationships define the services your application interacts with. When a relationship is joined, each side exchanges information. This handshake may include credentials, hostnames and ports, filesystem locations and more. Hooks, like the ones above, are executed when the state of a relationship changes. For example, adding a relation between your application and a database would fire the database-relation-joined event, which would provide you with the host name, database name, and credentials to use. The database would receive the host name of your application, allowing it to set ACLs to secure itself. This removes the need for editing configuration files by hand or by script. A simplistic example of relations is Wordpress. In its metadata.yaml it declares that it requires a database, and can optionally use memcached; visualized in the Juju GUI, the tiles represent deployed services, and the lines between them represent the relationships. To achieve this model, we run the commands:

```
$ juju deploy mysql --constraints="cpu-cores=8 mem=32G"
Added charm "cs:trusty/mysql-29" to the environment.
$ juju deploy memcached
Added charm "cs:trusty/memcached-11" to the environment.
$ juju deploy wordpress
Added charm "cs:trusty/wordpress-3" to the environment.
$ juju add-relation wordpress mysql
$ juju add-relation wordpress memcached
$ juju expose wordpress
```

Scalability

As cool as charms and relationships are, you see what they can really do when it comes to scalability. In the above example, you can simply add machines to the Wordpress charm, which will automatically configure load balancing between each machine.

Bundles

So far, we've talked about charms and relationships. The real magic happens when you put them all together to create a model of your workload. We can create a bundle.yaml file that describes the services to deploy, any configuration options that should be changed from their defaults, and each service's relation to the others.
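A bundle for the Wordpress model above might look something like the hypothetical sketch below; the exact schema, envelope name and charm revisions depend on your Juju version, so treat it as an illustration rather than a definitive listing:

```yaml
# bundle.yaml - a hypothetical sketch of the Wordpress model described above
my-workload:
  services:
    wordpress:
      charm: "cs:trusty/wordpress-3"
      num_units: 1
    memcached:
      charm: "cs:trusty/memcached-11"
      num_units: 1
    mysql:
      charm: "cs:trusty/mysql-29"
      num_units: 1
      constraints: "cpu-cores=8 mem=32G"
  relations:
    - ["wordpress", "mysql"]
    - ["wordpress", "memcached"]
```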
With a bundle file in hand, the whole model deploys in a single command:

```
$ juju quickstart bundle.yaml
```

What's next

In my next post, we'll explore real world examples of these concepts, including how to take the pain out of deploying OpenStack or Big Data solutions.

About the author

Adam Israel has worn many hats over the past twenty years, from help desk to Chief Technical Officer, from point of sale software to search engines and ad server platforms. He currently works at Canonical, Ltd as a Software Engineer, with a focus on cloud development and operations.

You down with OOP?

Liz Tom
11 Nov 2015
7 min read
I just did a quick Google search; I'm not the first one to think I'm clever. But I couldn't think of a better name. So, a little background on me: I'm still a beginner and I'm still learning a lot. I've been a Software Developer for about a year now and some basic concepts are just starting to become more clear to me. So I'd look at this as a very beginner-level intro to OOP, and hopefully you'll be so interested by what I have to say that you'll explore further!

What is OOP?

OOP stands for Object-Oriented Programming. Why would you want to learn about this concept? If done correctly, it makes code more maintainable in the long run. It's easier to follow and easier for another dev to jump into the project, even if that other dev is yourself in the future.

Huh? What's the Difference?

Object-Oriented Programming vs Functional vs Procedural. I've always understood each of these concepts by themselves, but not how they fit in with each other or what the advantages of choosing one type over the others are. These are not the only programming paradigms out there, but they are very popular paradigms. Before we can really dive into OOP, it's important to understand how it differs from other programming paradigms.

• Object-Oriented: Object-oriented programming is a type of abstraction done by using classes and objects. Classes define a data structure and can send and receive messages and also manipulate data. Objects are the instances of these classes.
• Functional: Functional programming is a set of functions. These functions take input and produce output; ideally there is no internal state that would affect the output for a given input. This means no matter what input you put in, you should always receive the same output.
• Procedural: Procedural is probably what you first learned. This involves telling the computer a set of steps that it should execute.

Polymorphism

It's morphin' time! I've heard this word a lot but I was too nervous to ask what it meant. It's okay, I'm here to help you out. This is a concept found in Object-Oriented Programming; it's a way to have similar objects act differently. The classic example is using a class Animal. You can have many different animals and they can all have the method speak. However, cats and dogs will say different things when they speak. So dog.speak() would result in 'woof' while cat.speak() would result in 'meow'.

Inheritance

Objects can inherit features from other objects. If you have an object that shares a lot of similar functionality with another object but varies slightly, you might want to use inheritance. But don't abuse inheritance; just because you can do it doesn't mean this is something you should be using everywhere. OOP is used to help make more maintainable code, and if you have classes inheriting all over the place from each other, it ends up making your code less maintainable.

Encapsulation

Encapsulation is a fancy word describing private methods and variables. These methods and variables should only be available to the class they belong to, which helps to make sure that you aren't doing anything with a different class that will have unintended results. While in Python there is no actual privacy, there is a convention to help prevent you from using something you shouldn't be touching. The convention is prepending variable names with an underscore.

Python!

So now you have a little background on the basics of OOP. What does this look like in the real world? I'll use Python as an example in this case.
Python is a multi-paradigm language that is great for object-oriented programming, and it's fairly easy to read, so even if you haven't seen any Python in your life before, you should be able to get a handle on what's going on. So we're going to make a zoo. Zoos have animals, so in order to create an animal, let's start with an Animal class.

```python
class Animal():
    def __init__(self, name):
        self.name = name
```

All we've done here is make an Animal class and set the name. What can differ between animals besides their name? Number of legs? Color? The food they eat? There are so many things that might be different, but let's just pick a few. Don't worry too much about syntax. If you like the look of Python, I recommend checking it out, because Python is fun to write!

```python
class Animal():
    def __init__(self, name, legs, color, food):
        self.name = name
        self.legs = legs
        self.color = color
        self.food = food
```

Now I can make any animal I want.

```python
dog = Animal('dog', 4, 'black', 'dog food')
cat = Animal('cat', 4, 'orange', 'cat food')
```

But maybe we should take care of these animals? All of the animals at this zoo happen to get hungry based on the number of legs they have and how long it's been since their last feeding. When their hunger levels reach an 8 on a scale of 1-10 (10 being the hungriest they've ever felt in their life!), we'll feed them! First we need to allow each animal to store its hunger value.

```python
class Animal():
    def __init__(self, name, legs, color, food, hunger):
        self.name = name
        self.legs = legs
        self.color = color
        self.food = food
        self.hunger = hunger
```

Now we can add a method (a name for a function that belongs to a class) to the class to see if we should feed the animal, and if we need to feed it, we'll feed it.

```python
class Animal():
    def __init__(self, name, legs, color, food, hunger):
        self.name = name
        self.legs = legs
        self.color = color
        self.food = food
        self.hunger = hunger

    def time_to_feed(self, hours_since_meal):
        self.hunger = 0.3 * (hours_since_meal * self.legs) + self.hunger
        if self.hunger >= 8:
            print('Time to feed the ' + self.name + ' ' + self.food + '!')
        else:
            print(self.name + ' is full.')

dog = Animal('dog', 4, 'brown', 'dog food', 8)
dog.time_to_feed(8)
cat = Animal('cat', 4, 'pink', 'cat food', 2)
cat.time_to_feed(2)
```

OK, but why is this good? Well, let's say I didn't do this in an object-oriented manner. I could do it this way:

```python
dogName = 'dog'
dogLegs = 4
dogColor = 'brown'
dogFood = 'dog food'
dogHunger = 8

catName = 'cat'
catLegs = 4
catColor = 'pink'
catFood = 'cat food'
catHunger = 2
```

Before, when I wanted to add a hunger level, I just needed to add more parameters to my __init__ method. Now I need to make sure I'm adding in parameters to each object individually. I'm already tired. I could put those all in arrays, but then I'm relying on the order of multiple arrays to always stay in the order I expect them in. Then I create the same time_to_feed function. But now it's not clear what time_to_feed is being used for. A future developer joining this project might have a hard time figuring out that you meant this for your animals. I hope you enjoyed this little introduction to Object-Oriented Programming. If you want to jump into learning more JavaScript, why not start by finding out the difference between mutability and immutability? Read on now.

About the author

Liz Tom is a Creative Technologist at iStrategyLabs in Washington D.C. Liz's passion for full stack development and digital media makes her a natural fit at ISL.
Before joining iStrategyLabs, she worked in the film industry doing everything from mopping blood off of floors to managing budgets.
Typography 101

Owen Roberts
06 Nov 2015
8 min read
Let's talk about fonts for a bit. When you're working on a site, how much time do you actually spend on them? Probably not too long; why bother with something like letters when you've got to make sure your work is perfectly responsive, after all? Fonts are actually pretty essential and aren't really given enough credit for how important they are; after all, would you use Comic Sans on a business site? Of course not. The wrong font can reduce readability, put customers off, and generally give your site an unprofessional look up there with a garish color scheme. You want to avoid that at all costs. A quick search online shows us there are around 200,000 different fonts available for download. That's a lot to choose from – way too many when you think about it. Can you think of 200,000 different situations where a different font is absolutely needed? Where do you even begin?! To help you out and give you a bit of a fighting chance, in this article I'll take you through the very basics of typography and hopefully get you thinking about fonts so you can start looking at them in more detail. Constantly use Times New Roman? Here's where to start moving away from that!

Types of Fonts

With 200,000 fonts you're going to have a hard time collecting them into strict classifications, but it's generally agreed that there are 4 key groups most can fit into: Serif, Sans-Serif, Script, and Decorative. There are a few more groups we could touch upon, like Blackletter (think stereotypical Ye Olde Butcherede Englishe) and Calligraphic (which is partially connected to, but distinct enough from, Script to be classed on its own), but these aren't used as much anymore and are more of a novelty these days. Now, let's look at each of the 4 in turn briefly and see what makes them up, when to use them, and more importantly, when not to.

Serif Fonts (e.g. Times New Roman)

"Serifs" are the small lines attached to the end of each stroke in letters and symbols; some people refer to them as "feet". This font family is very much the ur-example, first used in stone inscriptions as far back as Roman times. The style of a Serif font feels incredibly formal, with a lot of impact when used correctly; as such they're better for business projects or for something that will be read in print. Classic examples of a Serif font are Times New Roman, Garamond, and Centaur. As they were designed to be used in the physical world, the transition to computers has created a few problems specific to Serif fonts. Most of these problems stem from the actual serifs themselves – as the number of pixels per inch on a screen is around 100, serifs can appear too large, too small, or just disappear, which makes reading them a hassle for some people depending on the type of device they use. If you really want to use a Serif font online then try some of the more modern variations – Georgia was specifically designed to get around these limitations and look good on low resolution screens, for example!

Sans-Serif Fonts (e.g. Calibri)

If Serif fonts are known by the feet attached to every stroke, then Sans-Serifs are the opposite: lacking any sort of feet at all gives them an informal and clear feel. Sans-Serif is currently the most popular font choice on the internet, and it's easy to see why. Sans do not have to worry about the problems Serif fonts face with computers and so can be seen easily on any screen regardless of resolution or type.
It's also really helpful when designing advertisements, as it can easily be read in small sizes from a long distance compared to the rest. This, combined with their informal feel, means that they suit many sites without too much effort. Everyone's favorite font Comic Sans is an example of a Sans-Serif, so now you know why everyone loves to use it so much! For a font that doesn't evoke pure rage whenever someone sees it, though, try the classic Arial, Helvetica, or Verdana for your responsive projects. The only real problem with Sans-Serif is that, as far as font types go, it's very much the "Master of None" style. Don't be afraid to use them all over the place, but when looking to create some impact you might want to try pairing a body of Sans-Serif with a Serif as a header to create a tasteful contrast!

Script Fonts (e.g. Kunstler Script)

Script fonts are designed to look like cursive handwriting. If you picture in your mind a handwritten invitation then you've probably got a good idea about what to expect. The golden rule to using Script fonts is usually "don't", because they're so incredibly difficult to read; ever struggled to understand someone else's handwriting? That's exactly what Script fonts are like most of the time. There is, however, at least one place you can use them if the planets align right – the title. If the brand's character calls for it then they can be effective; they add a touch of class when used sparingly… but when used incorrectly they can make a mess of your work. If you really want to give them a try, have a look at Blackadder ITC or Lucida Handwriting.

Decorative Fonts (e.g. Westwood LET)

The most diverse category by far, as there is very little that really interlinks each one with another. A lecturer I once knew referred to them as "WordArt for professionals", and that's a pretty apt way to describe their use – adverts, announcements, posters, and billboards. They can range from cheesy cloud-shaped letters to graffiti-inspired words. Never use Decorative fonts for main bodies of text. The less they're used, the better, to ensure they keep the impact they offer on the page. Unless you're working on something with a theme like a circus, something Art Nouveau inspired, or a website aimed at children, you're probably best off avoiding them altogether really. Want to try one? Stencil is available on most word processors and gives you a good idea of what this style is all about.

Things to Keep in Mind

So we have our 4 font types, but can we use them all in one project? Of course not! Here are a few tips to keep in mind now you know the difference between each type of font:

• Only use 2-3 types of font per project. Treat your fonts like a hierarchy, with the main choice being complemented by the others.
• Never change font mid-sentence or paragraph. Ever.
• When combining Serif and Sans-Serif, use Serif as the heading and Sans for the main text.
• Try to avoid combining fonts that look too similar. If you squint and can't tell much difference then it's better to change one to make them stand out more.
• Context is important; a font that doesn't fit your work is like suddenly TYPING IN ALL CAPS FOR NO REASON.

Another thing to keep in mind is that while fonts can have a huge impact on the feel of your work, they're only one part of a whole. Don't slack off on the rest of your design either! While you're here, why not grab the second edition of our fan favourite Responsive Web Design with HTML5 and CSS3? It lives up to the hype and then some! Already got it?
Well, there are a few other titles at the bottom of this page for you to browse too!

Where to Next?

So there we go: an overview of the 4 main font types and their uses. Hopefully you got something out of that and are interested in finding out more. I may come back to this topic one day, but for now I'd recommend moving out of your comfort zone and testing out different fonts with your own eyes. If you're looking for something specific to look into, then why not look into what certain fonts can do to draw readers to certain sections, or even how they can be used to subliminally influence emotions? A strategic font choice is just as effective as any design decision in that regard! Or you can do what I do, and just start using Helvetica. Helvetica's great.

Modern Go Development

Xavier Bruhiere
06 Nov 2015
8 min read
The Go language indisputably generates a lot of discussion. Bjarne Stroustrup famously said: "There are only two kinds of languages: the ones people complain about and the ones nobody uses." Many developers indeed share their usage retrospectives and the flaws they came to hate: no generics, no official tool for vendoring, built-in methods that break the rules Go's creators want us to endorse. The language ships with a bunch of principles and a strong philosophy. Yet, the Go Gopher is making its way through companies. AWS is releasing its Go SDK, Hashicorp's tools are written in Go, and so are serious databases like InfluxDB or Cockroach. The language doesn't fit everywhere, but its concurrency model, its cross-platform binary format, and its lightning speed are powerful features. For the curious reader, Texlution digs deeper on Why Golang is doomed to succeed. It is also intended to be simple. However, one should gain a clear understanding of the language's conventions and data structures before producing efficient code. In this post, we will carefully set up a Go project to introduce a robust starting point for further development.

Tooling

Let's kick off the work with some standard Go project layout. New toys in town try to rethink the way they are organized, but I like to keep things simple as long as it just works. Assuming familiarity with the Go installation and GOPATH mess, we can focus on the code's root directory.

```
➜ code tree -L 2
.
├── CONTRIBUTING.md
├── CHANGELOG.md
├── Gomfile
├── LICENCE
├── main.go
├── main_test.go
├── Makefile
├── shippable.yml
├── README.md
├── _bin
│   ├── gocov
│   ├── golint
│   ├── gom
│   └── gopm
└── _vendor
    ├── bin
    ├── pkg
    └── src
```

To begin with, README.md, LICENCE and CONTRIBUTING.md are the usual important documents for any code expected to be shared or used. Especially with open source, we should care about and clearly state what the project does, how it works and how one can (and cannot) use it. Writing a changelog is also a smart step in that direction.

Package manager

The package manager is certainly a huge matter of discussion among developers. The community was left to build upon the go get tool and many solutions have arisen to bring deterministic builds to Go code. While most of them are good enough tools, Godep is the most widely used, but Gom is my personal favorite:

Simplicity, with explicit declaration and tags:

```
# Gomfile
gom 'github.com/gin-gonic/gin', :commit => '1a7ab6e4d5fdc72d6df30ef562102ae6e0d18518'
gom 'github.com/ogier/pflag', :commit => '2e6f5f3f0c40ab9cb459742296f6a2aaab1fd5dc'
```

Dependency groups:

```
# Gomfile (continuation)
group :test do
  # testing libraries
  gom 'github.com/franela/goblin', :commit => 'd65fe1fe6c54572d261d9a4758b6a18d054c0a2b'
  gom 'github.com/onsi/gomega', :commit => 'd6c945f9fdbf6cad99e85b0feff591caa268e0db'
  gom 'github.com/drewolson/testflight', :commit => '20e3ff4aa0f667e16847af315343faa39194274a'
  # testing tools
  gom 'golang.org/x/tools/cmd/cover'
  gom 'github.com/axw/gocov', :commit => '3b045e0eb61013ff134e6752184febc47d119f3a'
  gom 'github.com/mattn/goveralls', :commit => '263d30e59af990c5f3316aa3befde265d0d43070'
  gom 'github.com/golang/lint/golint', :commit => '22a5e1f457a119ccb8fdca5bf521fe41529ed005'
  gom 'golang.org/x/tools/cmd/vet'
end
```

A self-contained project:

```
# install gom binary
go get github.com/mattn/gom

# ... write Gomfile ...

# install production and development dependencies in `./_vendor`
gom -test install
```

We just declared and bundled full requirements under the project's root directory.
This approach plays nicely with trendy containers.

```
# we don't even need Go to be installed
# install tooling in ./_bin
mkdir _bin && export PATH=$PATH:$PWD/_bin
docker run --rm -it --volume $PWD/_bin:/go/bin golang go get -u -t github.com/mattn/gom

# assuming the same Gomfile as above
docker run --rm -it --volume $PWD/_bin:/go/bin --volume $PWD:/app -w /app golang gom -test install
```

An application can quickly rely on a significant number of external resources. Dependency managers like Gom offer a simple workflow to avoid breaking-change pitfalls - a widespread curse in our fast paced industry.

Helpers

The ambitious developer in love with productivity can complete their toolbox with powerful editor settings, an automatic fixer, a Go repl, a debugger, and so on. Despite being young, the language comes with a growing set of tools helping developers to produce healthy codebases.

Code

With basic foundations in place, let's develop a micro server powered by Gin, an impressive web framework I have had great experience with. The code below highlights common best practices one can use as a starter.

```go
// {{ Licence informations }}
// {{ build tags }}

// Package {{ pkg }} does ...
//
// More specifically it ...
package main

import (
    // built-in packages
    "log"
    "net/http"

    // third-party packages
    "github.com/gin-gonic/gin"
    flag "github.com/ogier/pflag"
    // project packages placeholder
)

// Options stores cli flags
type Options struct {
    // Addr is the server's binding address
    Addr string
}

// Hello greets incoming requests.
// Because exported identifiers appear in godoc, they should be documented correctly.
func Hello(c *gin.Context) {
    // follow HTTP REST good practices with an adequate http code and a json-formatted response
    c.JSON(http.StatusOK, gin.H{"hello": "world"})
}

// Handler maps endpoints with callbacks
func Handler() *gin.Engine {
    // gin's default instance provides logging and crash recovery middlewares
    router := gin.Default()
    router.GET("/greeting", Hello)
    return router
}

func main() {
    // parse command line flags
    opts := Options{}
    flag.StringVar(&opts.Addr, "addr", ":8000", "server address")
    flag.Parse()

    if err := Handler().Run(opts.Addr); err != nil {
        // exit with a message and a status code of 1 on errors
        log.Fatalf("error running server: %v\n", err)
    }
}
```

We're going to take a closer look at two important parts this snippet is missing: error handling and interfaces' benefits.

Errors

One tool we could have mentioned above is errcheck, which checks that you checked errors. While it sometimes produces cluttered code, Go's error handling strategy enforces rigorous development:

• When justified, use errors.New("message") to provide a helpful output.
• If one needs custom arguments to produce a sophisticated message, use fmt.Errorf("math: square root of negative number %g", f).
• For even more specific errors, let's create new ones:

```go
type CustomError struct {
    arg  int
    prob string
}

// Usage: return -1, &CustomError{arg, "can't work with it"}
func (e *CustomError) Error() string {
    return fmt.Sprintf("%d - %s", e.arg, e.prob)
}
```

Interfaces

Interfaces in Go unlock many patterns. In the golden age of components, we can leverage them for API composition and proper testing. The following example defines a Project structure with a Database attribute.
```go
type Database interface {
    Write(string, string) error
    Read(string) (string, error)
}

type Project struct {
    db Database
}

func main() {
    db := backend.MySQL()
    project := &Project{db: db}
}
```

Project doesn't care about the underlying implementation of the db object it receives, as long as that object implements the Database interface (i.e. implements the Read and Write signatures). Meaning, given a clear contract between components, one can switch MySQL and Postgres backends without modifying the parent object. Apart from this separation of concerns, we can mock a Database and inject it to avoid heavy integration tests. Hopefully this tiny, carefully written snippet does not hide too many horrors, and we're going to build it with confidence.

Build

We didn't adopt a Test Driven Development style, but let's catch up with some unit tests. Go provides a full-featured testing package, but we are going to level up the game thanks to a complementary combo. Goblin is a thin framework featuring behavior-driven development, close to the awesome Mocha for node.js. It also features an integration with Gomega, which brings us fluent assertions. Finally, testflight takes care of managing the HTTP server for pseudo-integration tests.

```go
// main_test.go
package main

import (
    "testing"

    "github.com/drewolson/testflight"
    . "github.com/franela/goblin"
    . "github.com/onsi/gomega"
)

func TestServer(t *testing.T) {
    g := Goblin(t)

    // special hook for gomega
    RegisterFailHandler(func(m string, _ ...int) { g.Fail(m) })

    g.Describe("ping handler", func() {
        g.It("should return ok status", func() {
            testflight.WithServer(Handler(), func(r *testflight.Requester) {
                res := r.Get("/greeting")
                Expect(res.StatusCode).To(Equal(200))
            })
        })
    })
}
```

This combination allows readable tests to produce readable output. Given the crowd of developers who scan tests to understand new code, we added interesting value to the project. It would certainly attract even more kudos with a green test suite. The following pipeline of commands tries to validate clean, bug-free, code-smell-free, future-proof and coffee-maker code.

```
# lint the whole project package
golint ./...

# run tests and produce a cover report
gom test -covermode=count -coverprofile=c.out

# make this report human-readable
gocov convert c.out | gocov report

# push the result to https://coveralls.io/
goveralls -coverprofile=c.out -repotoken=$TOKEN
```

Conclusion

Countless posts conclude this way, but I'm excited to state that we merely scratched the surface of proper Go coding. The language exposes flexible primitives and unique characteristics one will learn the hard way, one experimentation after another. Being able to trade a single binary against a package repository address is one such example, like JavaScript support. This article introduced methods to kick-start Go projects, manage dependencies, organize code, and offered guidelines and a testing suite. Tweak this opinionated guide to your personal taste, and remember to write simple, testable code.

About the author

Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin and the guitar. You can reach him at @XavierBruhiere.

Basic Game Engine Patterns that Make Game Development Simple

Daan van Berkel
04 Nov 2015
10 min read
The phrase "Do not reinvent the wheel" is often heard when writing software. It definitely makes sense not to spend your time on tasks that others have already solved. But reinventing the wheel has some real merit: it teaches you a lot about the problem, especially what decisions need to be made when solving it. So in this blog post we will reinvent a game engine to learn what is under the hood of most game development tools.

StopWatch

We are going to learn about game engines by creating a game from scratch. The game we will create is a variant of a stopwatch game. You will need to press the spacebar for a fixed amount of time. The object is to come as close as you can to a target time. You can play the finished game to get a feeling for what we are about to create.

Follow along

If you want to follow along, download StopWatch-follow-along.zip and extract it in a suitable location. If you now open index.html in your browser you should see a skeleton of the game.

Create the Application

We will get things started by adding an application.js. This file will be responsible for starting the game. Open the directory in your favorite editor and open index.html. Add the following script tag before the closing body-tag.

```html
<script src="js/application.js"></script>
```

This references a JavaScript file that does not exist yet. Create a directory js below the root of the project and create the file js/application.js with the following content:

```js
(function(){
    console.log('Ready to play!');
})();
```

This sets up an immediately invoked function expression that creates a scope to work in. If you reload the StopWatch game and open the developer tools, you should see Ready to play! in the console.

Create the Library

The application.js sets up the game, so we had better create something to set up. In index.html, above the reference to js/application.js, refer to js/stopwatch.js:

```html
<script src="js/stopwatch.js"></script>
<script src="js/application.js"></script>
```

stopwatch.js will contain our library that deals with all the game related code. Go ahead and create it with the following content:

```js
(function(stopwatch){

})(window.stopwatch = window.stopwatch || {});
```

The window.stopwatch = window.stopwatch || {} makes sure that a namespace is created. Just to make sure we have wired everything up correctly, change application.js so that it checks the stopwatch namespace is available.

```js
(function(){
    if (!stopwatch) {
        throw new Error('stopwatch namespace not found');
    }
    console.log('Ready to play!');
})();
```

If all goes well you should still be greeted with Ready to play! in the browser.

Creating a Game

Something should be responsible for keeping track of game state. We will create a Game object for this. Open js/stopwatch.js and add the following code, inside the immediately invoked function expression.

```js
var Game = stopwatch.Game = function(seconds){
    this.target = 1000 * seconds; // in milliseconds
};
```

This creates a constructor that accepts a number of seconds that will serve as the target time in the game.

Creating a Game object

The application.js is responsible for all the setup, so it should create a game object. Open js/application.js and add:

```js
var game = new stopwatch.Game(5);
window.game = game;
```

The first line creates a new game with a target of five seconds. The last line exposes it so we can inspect it in the console. Reload the StopWatch game in the browser and type game in the console. It should give you a representation of the game we just created.

Creating a GameView

Having a Game object is great, but what use is it to us if we cannot view it?
We will create a GameView for that purpose. We would like to show the target time in the game view, so go ahead and add the following line to index.html, just below the h1-tag.

```html
<div id="stopwatch"><label for="target">Target</label><span id="target" name="target">?</span></div>
```

This will create a spot for us to place the target time in. If you refresh the StopWatch game, you should see "Target: ?" in the window. Just like we created a Game, we are going to create a GameView. Head over to js/stopwatch.js and add:

```js
var GameView = stopwatch.GameView = function(game, container){
    this.game = game;
    this.container = container;
    this.update();
};

GameView.prototype.update = function(){
    var target = this.container.querySelector('#target');
    target.innerHTML = this.game.target;
};
```

The GameView constructor accepts a Game object and a container to place the game in. It stores these arguments and then calls the update method. The update method searches within the container for the tag with id target and writes the value of game.target into it.

Creating a GameView object

Now that we have created a GameView, we had better hook it up to the game object we already created. Open js/application.js and change it to:

```js
var game = new stopwatch.Game(5);
new stopwatch.GameView(game, document.getElementById('stopwatch'));
```

This will create a GameView object with the game object and the div-tag we just created. If you refresh the StopWatch game, the question mark will be substituted with the target time of 5000 milliseconds.

Show Current Time

Besides the target time, we would also like to show the current time, i.e. the time that is ticking away towards the target. This is quite similar to the target, with a slight twist: instead of a property we are using a getter-method. In index.html add a line for the current time.

```html
<label for="current">Current</label><span id="current" name="current">?</span>
```

In js/stopwatch.js, right after the constructor, add:

```js
Game.prototype.current = function(){
    return 0;
};
```

Finally, change the update-method of GameView to also update the current state.

```js
var target = this.container.querySelector('#target');
target.innerHTML = this.game.target;
var current = this.container.querySelector('#current');
current.innerHTML = this.game.current();
```

Refresh the StopWatch game to see the changes.

Starting & Stopping the Game

We would like the current time to start ticking when we press the spacebar and stop ticking when we release the spacebar. For this we are going to create start and stop methods on the Game. We also need to keep track of whether the game has already started or stopped, so we begin by initializing those properties in the constructor. Change the Game constructor, found in js/stopwatch.js, to initialize started and stopped properties.

```js
this.target = 1000 * seconds; // in milliseconds
this.started = false;
this.stopped = false;
```

Next, add start and stop methods that record the time when the game was started and stopped.

```js
Game.prototype.start = function(){
    if (!this.started) {
        this.started = true;
        this.startTime = new Date().getTime();
        this.time = this.startTime;
    }
};

Game.prototype.stop = function(){
    if (!this.stopped) {
        this.stopped = true;
        this.stopTime = new Date().getTime();
    }
};
```

At last, we can change the current method to use the start and stop times.

```js
if (this.started) {
    return (this.stopped ? this.stopTime : this.time) - this.startTime;
}
return 0;
```

If you now refresh the StopWatch game, we can test the functionality in the console tab.
The following excerpt demonstrates this:

Ready to play!
> game.start();
< undefined
> // wait a few seconds
> game.stop();
< undefined
> game.current();
< 7584 // depends on how long you wait

Update the GameView

You might have noticed that even though the current time of the game changed, the GameView did not reflect this. I.e. after running the above excerpt, the StopWatch window still shows zero for the current time. Let's create a game loop that continuously updates the view. In order to achieve this we need to assign the GameView object to a variable and update it inside the game loop. Change js/application.js accordingly:

var view = new stopwatch.GameView(game, document.getElementById('stopwatch'));

function loop(){
  view.update();

  requestAnimationFrame(loop);
}
loop();

This uses the requestAnimationFrame function to schedule the next run of the loop. If you now refresh the StopWatch game and rerun the excerpt above, the current time should be updated in the view.

Update the Game

Even though the GameView is updated when we start and stop the game, it still does not show the current time while it is ticking. Let's remedy this. The current method of the Game depends on the time property, but this property is never updated. Create a tick method on Game in js/stopwatch.js that updates the time property:

Game.prototype.tick = function(){
  this.time = new Date().getTime();
};

and call it in the game loop in js/application.js:

function loop(){
  game.tick();
  view.update();

  requestAnimationFrame(loop);
}

Refreshing the game and rerunning the excerpt will update the current time while it ticks.

Connect User Input

Manipulating the Game object from the console is fine for checking that the game works, but it is not very usable. We will change that. The Game will process user input: it will start when the spacebar is pressed and stop when the spacebar is released again. We can make this happen by registering listeners for keydown and keyup events. When an event is triggered the listeners get called and can inspect the event to check which key was pressed, as demonstrated by the following code in js/application.js:

document.body.addEventListener('keydown', function(event){
  if (event.keyCode == 32 /* space */) {
    game.start();
  }
});
document.body.addEventListener('keyup', function(event){
  if (event.keyCode == 32 /* space */) {
    game.stop();
  }
});

Try this out in the browser and you will be able to control the game with the spacebar.

Birds Eye View

Let's take a step back, see what we have achieved, and identify the key parts.

- Game State: we created a Game that is responsible for keeping track of all the details of the game.
- Game View: next we created a GameView that is responsible for presenting the Game to the player.
- Game Loop: the game loop continuously triggers the Game View to render itself, and it updates the Game.
- User Input: input like time, key presses or mouse movement is transformed into game controls that change the game state.

These are the four key ingredients of every game. Every game engine provides the means to create, manipulate and manage all these aspects. Although there are local variations in how game engines achieve this, it all boils down to these four parts.

Summary

We took a peek under the hood of game engines by creating a game and identifying what is common to all games: game state to keep track of the game, a game view to present it to the players, user input to control the game, and a game loop to breathe life into the game.
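As a recap, here is a minimal, self-contained sketch of how the four ingredients fit together in a generic engine. The names used here (state, render, loop) are illustrative, not part of the StopWatch code above:

// Game state: a plain object keeping track of the game.
var state = { startTime: null, elapsed: 0, running: false };

// User input: events are translated into changes of the game state.
document.body.addEventListener('keydown', function(){
  if (!state.running) {
    state.running = true;
    state.startTime = Date.now();
  }
});
document.body.addEventListener('keyup', function(){
  state.running = false;
});

// Game view: presents the current state to the player.
function render(){
  document.title = (state.elapsed / 1000).toFixed(1) + 's';
}

// Game loop: updates the state and triggers the view, over and over.
function loop(){
  if (state.running) {
    state.elapsed = Date.now() - state.startTime;
  }
  render();
  requestAnimationFrame(loop);
}
loop();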
About the author

Daan van Berkel is an enthusiastic software craftsman with a knack for presenting technical details in a clear and concise manner. Driven by the desire to understand complex matters, Daan is always on the lookout for innovative uses of software.

article-image-10-predictions-tech-2025

10 Predictions for Tech in 2025

Richard Gall
02 Nov 2015
4 min read
Back to the Future Day last month got us thinking – what will the world look like in 2025? And what will technology look like? We’ve pulled together our thoughts into one listicle packed with predictions – please don’t hold us to them…

1. Everything will be streamed – all TV will be streamed through the internet. Every new TV will be smart, which means applications will become a part of the furniture in our homes. Not only will you be able to watch just about anything you can imagine, you’ll also be able to play any game you want.

2. The end of hardware – with streaming dominant, hardware will become less and less significant. You’ll simply need a couple of devices and you’ll be able to do just about anything you want. With graphene flooding the market, these devices will also be more efficient than anything we’re used to today – graphene batteries could make consumer tech last for weeks on a single charge.

3. Everything is hardware – hardware as we know it might be dead, but the Internet of Things will take over every single aspect of everyday life, essentially transforming everyday objects into hardware. From fridges to pavements, even the most quotidian artefacts will be connected to a large network.

4. Everything will be in the cloud – our stream-only future means we’re going to be living in a world where the cloud reigns supreme. You can begin to see how everything fits together – from the Internet of Things to the decline in personal hardware, everything will become dependent on powerful and highly available distributed systems.

5. Microservices will be the dominant form of cloud architecture – there are a number of ways we could build distributed systems and harness cloud technology, but if 2015 is anything to go by, microservices are likely to become the dominant way in which we deploy and manage applications in the cloud. This movement towards modular and independent units – or individual ‘services’ – will not simply be the agile option, but will be the obvious go-to choice for anyone managing applications in the cloud. Even in 2015, you would have to have a good reason to go back to the old, monolithic way of doing things…

6. Apple and Google rule the digital world – sure, this might not be much of a prediction given the present state of affairs, but it’s difficult to see how anyone can challenge the two global tech giants. Their dominance is likely to increase, not decline. This means every aspect of our interaction with software – as consumers or developers – will be dictated by their commercial interests in 2025. Of course, one of the more interesting subplots over the next ten years will be whether we see a resistance to this standardization. Perhaps we might even see a resurgence of a more radical and vocal Open Source culture.

7. Less Specialization, Democratization of Development – even if our experience of software is defined by huge organizations like Google and Apple, it’s also likely that development will become much simpler. Web components have already done this (just take a look at React and Ember), which means JavaScript web development might well morph into something accessible to all. True, this might mean more mediocrity across the web – but it’s not like we’re all going to be building the same Geocities sites we were making in 2002…

8. Client and Server collapse on each other – this follows on from the last prediction. The reason we’re going to see less specialization in development is that the very process of development will no longer be siloed. We’ll all be building complete apps that simply connect to APIs somewhere in the cloud. Isomorphic codebases will become standard – whether this means we will still be using Node.js is another matter…

9. We’ll be living in a ‘Post-Big Data’ World – the Big Data revolution is over – it’s permanently taken root in every aspect of our lives. By 2025 data will have become so ‘Big’, largely due to the Internet of Things, that we’ll have to start thinking of better ways to deal with it and, of course, understand it. If we don’t, we’re going to be submerged in oceans of dirty data.

10. iOS will become sentient – iOS 30, the 2025 iteration of iOS, will become self-aware and start making decisions for humanity. I welcome this wholeheartedly, never having to decide what to eat for dinner ever again.

Special thanks to Ed Gordon, Greg Roberts, Amey Varangaonkar and Dave Barnes for their ideas and suggestions. Let us know your predictions for the future of tech – tweet us @PacktPub or add your comments below. What’s the state of tech today? And what can we expect over the next 12 months? Check out our Skills and Salary Reports to find out.

article-image-openstack-liberty-meets-mitaka-love-tokyo

OpenStack Liberty meets Mitaka; Love in Tokyo

Fahad Siddiqui
26 Oct 2015
5 min read
Since July 2010, when OpenStack was founded by Rackspace and NASA, it has gone beyond boundaries and is one of the most outstanding open source projects around. Would you believe OpenStack helps CERN’s staff store and manage their massive amounts of data? For the last five years, Stackers have contributed their time and expertise to build a reliable and ready-to-use cloud infrastructure. Today more than 1,000 companies and 30,000 individuals from 172 countries are part of the project.

Cloud is a vernacular term. Most people are on the cloud, but not all end users fully understand exactly what the cloud is. It is a challenge which OpenStack helps to address every day, and today it fuels the passion and imagination of a community of thousands of developers around the globe.

The OpenStack summit takes place every six months to communicate and discuss the new innovations and elements for the next software release. With only a day left until the OpenStack Tokyo Summit, users, developers, cloud prophets and enterprises are excited to discuss the new release and the future of cloud computing. The summit is a four-day event, starting on the 27th of October, where thousands of Stackers will be involved in hundreds of talks, workshops and lots of networking. The summit is organized with two audiences in mind: the main conference is dedicated to the vast majority of attendees - new-to-advanced users - while the design summit is for the technical contributors and operators who have contributed to the Mitaka release cycle. For those who don't already know, the OpenStack Foundation names its software releases in alphabetical order; the last update, which was announced this July, is called "Liberty".

The last summit was held in Vancouver, where more than 6,000 attendees from 967 companies and 55 countries participated. Giants like Red Hat, HP, VMware, Dell and IBM joined startups like Mirantis, SwiftStack, and Bluebox to make the world a more cloudy place. Highlights, keynotes, presentations, breakout and breakthrough session videos can be found here.

This time the theme is "OpenStack-Powered Planet", which reflects a more responsible, advanced and secure infrastructure with OpenStack Cloud. Speakers from Bitnami, Huawei, NEC, Comcast, Metacloud and Fujitsu will discuss the power of a global network of public and private clouds, and OpenStack as a combined engine for advanced cloud computing technologies. New entrants in the Asia Pacific cloud market will be highlighted in visionary keynotes and breakout sessions, further underlining the community's advancement in cloud infrastructure. Reliability, security and improvement of the application ecosystem, and next-gen users will be highlighted to support the theme. Insightful breakout sessions will focus on software-led networking and recent progress on the OpenStack Neutron project! Attendees will also get a chance to gain insight into OpenStack's influence and development in areas such as the Internet of Everything (IoE), next-gen telecom networks and the emerging Network Function Virtualisation (NFV).

A few of the best talks and sessions which everyone is excited about include those by Amadeus, AppFormix, CERN, City Network, Comcast, Deutsche Telekom, eBay, GMO Internet, Huawei, Intel, Lithium Technologies, Kirin, NTT, PayPal, SKT, TubeMogul, WalmartLabs, Workday and Yahoo! These companies will give presentations, share case studies, and conduct hands-on workshops and collaborative design sessions to help users Skill Up their knowledge and expertise with OpenStack.
A number of Packt authors will also be present at the event, some of whom are speaking at and conducting working sessions. Find out more below:

James Denton, Principal Network Architect for Rackspace and author of Learning OpenStack Networking (Neutron), first and second edition, will talk about a few uncommon issues with Nova and Neutron. Participants will learn some basic troubleshooting procedures, including tips, tricks, and processes of elimination. You can find more about the session here.

Egle Sigler, Principal Architect on the Private Cloud team at Rackspace, OpenStack Foundation Board member, and co-author of OpenStack Cloud Computing Cookbook - Third Edition, will be busy on the first two days of the event, giving talks and presentations and leading the work group sessions. At the event’s opening session, she will co-present the keynote about the use case of OpenStack at Yahoo! JAPAN. Later, she will be involved in a discussion on diversity in the OpenStack community and will also address a few basic as well as critical issues about DefCore. On the second day of the event she will dive deep into DefCore to discuss the latest issues with OpenStack interoperability. You can find more about the sessions here.

Sriram Subramanian, Director of Software Engineering at Juniper Networks and co-author of the latest OpenStack Networking Cookbook, will talk about ways of improving firewall performance and enhancing OpenStack FWaaS. This session will include a demo of the work in progress. You can find more about his session here.

Arthur Berezin, Director of Product at GigaSpaces and author of Production Ready OpenStack - Recipes for Successful Environments, will give a talk about how to build cloud-native microservices with hybrid workloads on OpenStack. On the second day of the event, he will discuss ways of migrating enterprise applications into OpenStack, and on the third day he will go deep into hybrid cloud orchestration on OpenStack. You can find more about his sessions here.

If you’re going to OpenStack Summit Tokyo, we hope you have a great time. Keep an eye on #LoveOpenStackTokyo for our take on the conference as well as exclusive offers and content. Also please share your thoughts and tales about the event – even if you’re not there – using the hashtag #LoveOpenStackTokyo. Every day one random tweet with the hashtag #LoveOpenStackTokyo will be selected, and the winner will get an eBook by one of the above-mentioned speakers (authors).

Find more OpenStack tutorials and content on our dedicated page - visit it here.

article-image-one-second-website-10x-your-site-performance

The One Second Website: 10x your site performance

Dave Barnes
20 Oct 2015
5 min read
Last year, Patrick Hamann gave a talk for Google Developers on Breaking News at 1000ms. It lays out how Patrick and his team built a one-second web site for the Guardian, improving performance almost ten times. I learned a lot from the talk, and I’ve summarized it below. Here’s the video, and you can browse the slides here.

Web speed has come to a head recently. Facebook’s Instant Articles put speed on everyone’s radar. A news page takes 8 seconds to load, and that puts people off clicking links. Like many others, I couldn’t quite believe things had got this bad. We have fast broadband and wifi. How can a 1,000-word article take so long? So there’s a lot of discussion around the problem, but Patrick’s talk lays out many of the solutions. Here are the keys I took from it.

The problem

Sites are slow, really slow. 8 seconds is normal. And yet, people really care about speed. It’s a user’s second most important feature, right after “easy to find content”. In fact, if it takes more than a second for a page to respond, people start to assume the site is broken. If most pages take more than a second, people start to assume the web is broken. And we wonder why 91% of mobile impressions are in apps, not the web.

The budget

Patrick set a hard budget for page loads of 1 second, and measured everything against that. This is his BHAG — make the web site nearly 10x faster. But once the goal is clear, people have a habit of finding solutions. The harder the goal, the more radical the solutions people will find. Modest goals lead to modest problem solving. Next time you want to improve something, set a 10x goal, get serious about it — and let everybody’s ingenuity loose on the solution.

The solution

Patrick and his team’s radical solutions revolved around four key principles.

Deliver core content first

There’s a lot of stuff on a news article page, but what we really want to see is the article content. Patrick’s team got serious about the really important stuff, creating a ‘swim lane’ system. The important stuff — the core article content — was put into a fast lane, loaded first, and then the rest built around it. This made the goal more doable: the whole page didn’t need to load in 1000ms. If the core content loaded in one second people could read it, and by the time they had, the rest of the page would be ready. (Even the flimsiest Guardian article will take more than a second to read!)

Core content should render within 1000ms

Here’s the problem: to get content to the reader in 1000ms you have only 400ms to play with, because the basic network overhead takes 600ms over a good 3G connection. So to really supercharge speed, the Guardian inlined the critical CSS. For the Guardian, the critical CSS is the article formatting; the rest can come a bit later. The new site uses JavaScript to download, store, and load CSS on demand rather than leaving that decision to the browser.

(Slide from: https://speakerdeck.com/patrickhamann/breaking-news-at-1000ms-front-trends-2014)

Every feature must fail gracefully

Fonts are a recognizable part of the Guardian brand, important despite the overhead. But not that important. The new design fails decisively and fast when it’s right to do so:

(Slide: decision tree — fallback vs. custom font.)

The really clever bit of the whole setup is the font JSON. Instead of downloading several font binaries, just one JSON request downloads all the fonts in base64 encoding. This means some overhead in file size, but it replaces several requests with just one cacheable object. A nice trick, and one you can use yourself: they made webfontjson an open source project.
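The same download-store-inject pattern serves both the on-demand CSS and the font JSON. Here is a minimal sketch of the idea; the endpoint /assets/fonts.json, the { "css": ... } response shape, and the storage key are assumptions for illustration, not the Guardian’s actual implementation:

// Fetch @font-face CSS (with base64-encoded fonts) once, cache it in
// localStorage, and inject it on later visits without any network request.
// URL, JSON shape, and storage key are hypothetical.
function injectCss(css) {
  var style = document.createElement('style');
  style.textContent = css;
  document.head.appendChild(style);
}

function loadFonts() {
  var cached = null;
  try {
    cached = localStorage.getItem('font-css');
  } catch (e) { /* storage unavailable; fall through to the network */ }

  if (cached) {
    injectCss(cached); // repeat visit: zero extra requests
    return;
  }

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/assets/fonts.json');
  xhr.onload = function () {
    if (xhr.status === 200) {
      var css = JSON.parse(xhr.responseText).css;
      try { localStorage.setItem('font-css', css); } catch (e) {}
      injectCss(css);
    }
    // On failure, do nothing: the page keeps its fallback system font,
    // which is exactly the "fail gracefully" behavior described above.
  };
  xhr.send();
}

loadFonts();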
Every request should be measured

The final pillar really comes down to knowing your shit. Graph and measure EVERYTHING that affects your performance and your time-to-render budget. In addition to the Guardian’s internal analytics platform, Ophan, Patrick uses SpeedCurve to monitor and report on performance against a set of benchmarks over time.

Sum up

For everyone: big improvements come from BIG GOALS and ingenious solutions. Be ambitious and set a budget/goal that gives great customer benefit, then work towards it.

For web developers:

- Performance is a requirement. Everybody has to treat it as a priority from day one.
- Take the one-second web site challenge. Make that your budget, and measure, optimize, repeat.
- Make the core content download first, and render it in the fast lane. Then build the rest around the outside.

Now if that whetted your appetite, watch the video. Especially if you’re more involved in web dev, I’m sure you’ll learn a lot more from it than I did! What techniques do you use to 10x your site’s performance?

From 11th to 17th April save up to 70% on some of our very best web development products. It's the perfect opportunity to explore - and learn - the tools and frameworks that can help you unlock greater performance and build even better user experiences. Find them here.

article-image-pep-8-beautiful-code-and-tyranny-guidelines

PEP 8, beautiful code, and the tyranny of guidelines

Dave Barnes
20 Oct 2015
4 min read
I’m no Python developer, but I learned a great deal from Raymond Hettinger’s PyCon 2015 presentation, Beyond PEP 8: Best practices for beautiful intelligible code. It could just as well have been called ‘the danger of standards’.

For those not in the know (and I wasn’t), PEP 8 is a set of style guidelines for writing presentable Python code. Hettinger takes the view that it’s good, but not perfect. But even so, its mere existence and misuse is enough to cause all kinds of problems.

Bad rules and minor atrocities

The first problem with any set of guidelines is the most obvious: no set of guidelines short enough to write down will get it right in every situation. The weakest part of PEP 8 is its rule on line lengths. 79 characters is just too short for many purposes, especially with indentation, and especially when writing debugging scripts. But once the rules exist, and your code is judged against them, you’d better follow them. So how will you get your code down to 79 characters per line? There are plenty of options, all PEP 8 compliant, and they all make the code worse:

- Use shorter variable names.
- Break the line in some arbitrary place.
- Switch to two-space indentation instead of four-space.

Hettinger calls these ‘minor atrocities’ — expedient little sins that make the world worse. There are two problems at work here. First, 79 characters is just not enough for a lot of purposes. These days, 90 characters might be better. But beyond that, any absolute number is dangerous. Developers need to remember that lines above a certain length are pushing it (whether 80 or 90 characters). But they should know that sometimes a good long line is better than two bad short ones. As George Orwell said about his own guidelines: break any of them sooner than do anything outright barbarous.

Getting PEP 8'ed

Once a standard exists, there’s a great temptation to impose that standard on other people’s code arbitrarily. You might even believe you’re doing the world a favor. The PEP 8 standards warn against this with a quote from Ralph Waldo Emerson: “A foolish consistency is the hobgoblin of little minds”. But that doesn’t stop eager PEP 8ers diving in and complyifying other developers’ code. Once we know how to do something — and we believe it’s useful and productive — we’ll be tempted to jump in and do it wherever we can. Thus once somebody knows how, they can’t help but perpetrate PEP 8. With any guideline: follow it yourself sooner than impose it on others.

Missing the gorilla

But by far the biggest danger of guidelines is that they can distract us from what really matters. We can pay great attention to following the guidelines and miss the most important things. If the rules of PEP 8, or any guideline, become our criteria, then we limit our judgment — our perception — to the issues covered by the guideline. As an example, Hettinger PEP 8ifies a chunk of Python code, making substantial improvements to its readability. But not one person in the audience notices the real issue: this is not really good Python code, it’s a Java approach copied into Python. Once that issue is seen, it’s addressed… and the code quality and readability is transformed, to the point where even I can understand it (and can also understand the appeal of Python, because it uses Python’s unique features).

Being Pythonic

Behind all this is a simple principle. Good Python code is Good. Python. Code. It isn’t good Java code or good C code. It has to be judged against the standard of what good Python looks like.
This is perhaps the key to quality Python and quality everything. Quality is not in adherence to a long list of guidelines. It comes from having a clear idea of what good is, and bringing your work as close to that idea/ideal as you can.

Finishing

One other thing I learned from the video: how to end a talk if you overrun. Raymond handles the MC with such grace, and you’ll have to watch the video to the end to learn from that.

article-image-intro-meteor-js-full-stack-developers

Intro to Meteor for JS full-stack developers

Ken Lee
14 Oct 2015
9 min read
If you are like me, a JavaScript full-stack developer, your choices of technology might be limited when dealing with modern app/webapp development. You could choose a MEAN stack (MongoDB, Express, AngularJS, and Node.js) and learn all four of these technologies in order to mix and match, or employ a ready-made framework like DerbyJS. However, none of them provides the one-stop-shop experience of Meteor, which stands out among the few on the canvas.

What is Meteor?

Meteor is an open-source "platform" (more than a framework) in pure JavaScript that is built on top of Node.js, communicating via the DDP protocol and leveraging MongoDB as data storage. It provides developers with the power to build a modern app/webapp that is equipped with production-ready, real-time (reactive), and cross-platform (web, iOS, Android) capabilities. It was designed to be easy to learn, even for beginners, so we can focus on developing business logic and user experience rather than getting bogged down in the nitty-gritty of each technology's learning curve.

Your First Real-time App: Vote Me Up!

Below, we will look at how to build one reactive app with Meteor in 30 minutes or less.

Step 1: Installation (3-5 mins)

For OS X or Linux developers, head over to the terminal and install the official release from Meteor.

$ curl https://install.meteor.com/ | sh

For Windows developers, please download the official installer here.

Step 2: Create an app (3-5 mins)

After we have Meteor installed, we can now create a new app simply by:

$ meteor create voteMeUp

This will create a new folder named voteMeUp under the current working directory. Check under the voteMeUp folder -- we will see that three files and one folder have been created.

voteMeUp/
  .meteor/
  voteMeUp.html
  voteMeUp.css
  voteMeUp.js

.meteor is for internal use. We should not touch this folder. The other three files are obvious enough even for beginners -- the HTML markup, the stylesheet, and one JavaScript file that make up the barebones structure of web/webapp development.

The default app structure tells us that Meteor gives us freedom over folder structure. We can organise any files/folders we feel are appropriate, as long as we don't step on the special folder names Meteor is looking at. Here, we will be using a basic folder structure for our app. You can visit the official documentation for more info on folder structure and file load order.

voteMeUp/
  .meteor/
  client/
    votes/
      votes.html
      votes.js
    main.html
  collections/
    votes.js
  server/
    presets.js
    publications.js

Meteor is a client-database-server platform. We will be writing code for client and server independently, communicating through the reactive DB driver APIs, publications, and subscriptions. For a brief tutorial, we just need to pay attention to the behaviour of these folders:

- Files in the client/ folder will run on the client side (the user's browser)
- Files in the server/ folder will run on the server side (the Node.js server)
- Files in any other folder, e.g. collections/, will run on both client and server

Step 3: Add some packages (< 3 mins)

Meteor is driven by an active community; developers around the world are creating reusable packages that complement app/webapp development. This is also why Meteor is well-known for rapid prototyping. For brevity’s sake, we will be using one package from Meteor: underscore. Underscore is a JavaScript library that provides us with some useful helper functions, and this package provided by Meteor is a subset of the original library.
$ meteor add underscore

There are a lot of useful packages around; some are well maintained and documented, developed by seasoned web developers around the world. Check them out:

- Iron Router/Flow Router, used for application routing
- Collection2, used for automatic validation on insert and update operations
- Kadira, a monitoring platform for your app
- Twitter Bootstrap, a popular frontend framework by Twitter

Step 4: Start the server (< 1 min)

Start the server simply by:

$ meteor

Now we can visit the site http://localhost:3000. Of course you will be staring at a blank screen! We haven't written any code yet. Let's do that next.

Step 5: Write some code (< 20 mins)

As you start to write the first line of code, you will notice that the browser page reloads by itself the moment you save the file. Thanks to the built-in hot code push mechanism, we don't need to refresh the page manually.

Database Collections

Let's start with the database collection(s). We will keep our app simple; we just need one collection, votes, which we will put in collections/votes.js like this:

Votes = new Mongo.Collection('votes');

All files in the collections/ folder run on both the client and the server side. When this line of code is executed, a mongo collection will be established on the server side. On the client side, a minimongo collection will be established. The purpose of minimongo is to reimplement the MongoDB API against an in-memory JavaScript database. It is like a MongoDB emulator that runs inside our client browser.

Some preset data

We will need some data to start working with. We can put this in server/presets.js. These are just some random names, with a vote count of 0 to start with.

if (Votes.find().count() === 0) {
  Votes.insert({ name: "Janina Franny", voteCount: 0 });
  Votes.insert({ name: "Leigh Borivoi", voteCount: 0 });
  Votes.insert({ name: "Amon Shukri", voteCount: 0 });
  Votes.insert({ name: "Dareios Steponas", voteCount: 0 });
  Votes.insert({ name: "Franco Karl", voteCount: 0 });
}

Publications

Since this is for educational purposes, we will publish (Meteor.publish()) all the data to the client side in server/publications.js. You most likely would not do this for a production application. Planning publications is one major step in Meteor app/webapp development: we don't want to publish too little or too much data to the client. Just enough data is what we always keep an eye out for.

Meteor.publish('allVotes', function() {
  return Votes.find();
});
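To make "just enough data" concrete, here is a minimal sketch of what a more selective publication could look like. The publication name topVotes and the limit of ten are illustrative assumptions, not part of the Vote Me Up app:

// Publish only the fields the client needs, capped at the ten highest
// vote counts. The name 'topVotes' and the limit are hypothetical.
Meteor.publish('topVotes', function() {
  return Votes.find({}, {
    fields: { name: 1, voteCount: 1 },
    sort: { voteCount: -1 },
    limit: 10
  });
});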
Subscriptions

Once we have the publication in place, we can start to subscribe to it by the name shown above, allVotes. Meteor provides template-level subscriptions, which means we can subscribe to the publication when a template is created and get unsubscribed when the template is destroyed. We will put our subscription in client/votes/votes.js, using this.subscribe() inside the onCreated callback, which runs when the template named votes is being created.

Template.votes.onCreated(function() {
  this.subscribe('allVotes');
});

The votes template, put in client/votes/votes.html, is some simple markup such as the following:

<template name="votes">
  <h2>All Votes</h2>
  <ul>
    {{#each sortedVotes}}
      <li>{{name}} ({{voteCount}}) <button class="btn-up-vote">Up Vote</button></li>
    {{/each}}
  </ul>
  <h3>Total votes: {{totalVotes}}</h3>
</template>

If you are curious about that markup with {{ and }}, enter Meteor Blaze, a powerful library for creating live-updating templates on the client side. Similar to AngularJS and React, Blaze serves as the default front-end templating engine for Meteor, but it is simpler to use and easier to understand.

The Main Template

There must be somewhere to start our application. client/main.html is the place to kick off our template(s).

<body>
  {{> votes}}
</body>

Helpers

In order to show all of our votes we will need some helper functions. As you can see from the previous template, we need {{#each sortedVotes}}, where a loop should happen and print out the names and their votes in sorted order, and {{totalVotes}}, which is supposed to show the total vote count. We will put this code into the same file we have previously worked on, client/votes/votes.js. The complete code should be:

Template.votes.onCreated(function() {
  this.subscribe('allVotes');
});

Template.votes.helpers({
  'sortedVotes': function() {
    return Votes.find({}, { sort: { voteCount: -1 } });
  },
  'totalVotes': function() {
    var votes = Votes.find();
    if (votes.count() > 0) {
      return _.reduce(votes.fetch(), function(memo, obj) {
        return memo + obj.voteCount;
      }, 0);
    }
  }
});

Sure enough, the helpers return all of the votes, sorted in descending order (the larger number on top), and the sum of the votes (reduce is a function provided by underscore). This is all we need to show the vote listing. Head over to the browser, and you should see the listing on-screen!

Events

In order to make the app useful and reactive we need an event to update the listing on the fly when someone votes on the names. This can be done easily by binding an event to the 'Up Vote' button. We will add the event handler in the same file, client/votes/votes.js:

Template.votes.onCreated(function() {
  this.subscribe('allVotes');
});

Template.votes.helpers({
  'sortedVotes': function() {
    return Votes.find({}, { sort: { voteCount: -1 } });
  },
  'totalVotes': function() {
    var votes = Votes.find();
    if (votes.count() > 0) {
      return _.reduce(votes.fetch(), function(memo, obj) {
        return memo + obj.voteCount;
      }, 0);
    }
  }
});

Template.votes.events({
  'click .btn-up-vote': function() {
    Votes.update({ _id: this._id }, { $inc: { voteCount: 1 } });
  }
});

This new event handler does a quick and dirty update on the Votes collection, by the field _id. Each event handler has this pointing to the current data context -- the {{#each}} in the template introduces a new context -- so this._id returns the _id of the record that was clicked.

Step 6: Done. Enjoy your first real-time app!

You can now visit the site with different browsers/tabs open side by side. Action in one will trigger the reactive behavior in the other. Have fun voting!

Conclusion

By now, we can see how easily we can build a fully functional real-time app/webapp using Meteor. "With great power comes great responsibility" (pun intended), and proper planning/structuring of our app/webapp is of the utmost importance once we are empowered by these technologies. Use it wisely and you can improve both the quality and performance of your app/webapp. Try it out, and let me know if you are sold.

Resources:

- Meteor official site
- Meteor official documentation
- Meteor package library: Atmosphere
- Discover Meteor

Want more JavaScript content? Look no further than our dedicated JavaScript page.

About the Author

Ken Lee is the co-founder of Innomonster Pte. Ltd. (http://innomonster.com/), a specialized website/app design & development company based in Singapore.
He has eight years of experience in web development, and is passionate about front-end and JS full-stack development. You can reach him at [email protected].

article-image-testing-unity

Testing with Unity

Travis Scott
14 Oct 2015
6 min read
What is Testing?

Traditionally in software development, testing plays an integral role in both the maintainability and quality of the product. Of course, in game development, user acceptance testing is performed frequently. Each time you play the game you check whether your newly added creation works the way you intended -- this is user acceptance testing. While this is great, game development testing frequently centers only on "user"-centric testing principles. Two testing levels that are often left out are unit and integration testing.

We're going to focus on unit testing, as that's frequently the first step of automated testing. Unit testing, for our purposes, is an automated test which verifies the functionality of a specific section of code. There is some debate over whether this should be a method, or whether the test should have an even smaller scope. I understand both arguments, and personally agree with the second, yet usually find myself resorting to the first. The reason for this is the nature of a unit test. Our goal is to give our test an input and expect a certain output or result. The simplest way to give an input and get an output? Methods.

So up to this point, why have games not utilized testing as much as enterprise-level software? Well, many large game companies have already taken this route, while small indie teams frequently don't have the budget, or the longevity, to do it. But by testing from the start of your project, and using TDD (Test-Driven Development) processes, you'll find testing becomes a natural starting point for a new feature. So how do we do this?

Unity Test Tools

Unity has released an official testing library. For the sake of this demo, we'll be using standard C# for development. The testing library can be found in the Asset Store, under Unity Test Tools. Once you have added the package to your project, you can begin writing and running your tests.

Example Test

To write the tests we're going to start with a basic script. Let's make a new script and call it Cat. The file will have the following code:

public class Cat {
    private int lives;

    public Cat() {
        this.lives = 9;
    }

    public void useLife() {
        if (lives > 0) {
            lives--;
        }
    }

    public int getLives() {
        return lives;
    }

    public bool IsAlive() {
        return lives > 0;
    }
}

Nice and simple. We can use one of the cat's lives, get the current number of lives, and check whether our cat is alive. Now, in traditional TDD, we should have written the test first. Since I'm assuming you as a reader may never have seen testing, I prefer that you understand what we're testing first. So our goal here is to make sure our logic is working as expected. No matter how simple the logic may seem, we as programmers know that we still make many mistakes. Because we do make mistakes, writing the test before the logic can be a great benefit to us. That way, you won't let a small bug last in your code. The earlier we can catch a bug, the less of a problem it is.

So let's take a look at a test object. To create a test, create an Editor folder in your assets. Inside the Editor folder create a C# file called CatTest.cs.
Inside, we will write:

using NUnit.Framework;

[TestFixture]
[Category("Testing Cat")]
public class CatTest {
    Cat cat;

    [SetUp]
    public void Init() {
        cat = new Cat();
    }

    [Test]
    public void DoesUsingLifeReduceTheCatsLivesByOne() {
        int currentLives = cat.getLives();
        cat.useLife();
        Assert.AreEqual(currentLives - 1, cat.getLives());
    }

    [Test]
    public void ReducingACatsLivesBelowZeroHasNoImpact() {
        EmptyCatLives();
        cat.useLife();
        Assert.AreEqual(0, cat.getLives());
    }

    [Test]
    public void CatIsAliveWhileItHasLives() {
        Assert.IsTrue(cat.IsAlive());
    }

    [Test]
    public void ReducingCatsLivesToZeroWillMakeIsAliveFalse() {
        EmptyCatLives();
        Assert.IsFalse(cat.IsAlive());
    }

    // Helper: drain all of the cat's lives.
    public void EmptyCatLives() {
        while (cat.getLives() > 0) {
            cat.useLife();
        }
    }
}

Finally, to run our tests, there should be a Unit Test Tools dropdown in the navigation bar at the top of the Unity editor. Select it and choose Run Unit Tests. Once the small window opens, click play.

An important piece of any project's testing is its method names. These names are frequently used to describe the test's purpose. This is actually a great way to introduce a new team member to the project, as the names give new members a way to walk through your code step by step, understanding what the purpose of its functions is.

In our test, the [TestFixture] and [Category] attributes define our script's tests. The [SetUp] method will be run before each test. We use it to initialize our Cat instance, so we know each test starts off with a clean instantiation of Cat. The four methods marked with [Test] are our tests. They verify that the code written in our script returns what we expect it to.

For example, assume a new team member comes to our project and believes that a cat truly loses two lives at a time. Out of excitement, they change your code without even consulting you. Assuming they run the tests as expected, you or this new team member will be notified that something isn't right: the numbers don't line up to what they should be. You've corrected this bug before it's even gotten dangerous.

In the enterprise world, it's the norm for many team members to come and go on projects. Games are no exception, and by having safeguards like unit and integration testing, you'll be able to catch those small bugs before they get too far. I recommend further reading on Unity tests, and testing in general, as this just scratched the surface!

About the Authors

Denny is a Mobile Application Developer at Canadian Tire Development Operations. While working, Denny regularly uses Unity to create in-store experiences, but also works on other technologies like Famous, Phaser.IO, LibGDX, and CreateJS when creating game-like apps. He also enjoys making non-game mobile apps, but who cares about that, am I right?

Travis is a Software Engineer, living in the bitter region of Winnipeg, Canada. His work and hobbies include game development with Unity or Phaser.IO, as well as mobile app development. He can enjoy a good video game or two, but only if he knows he'll win!