How to Handle AWS Through the Command Line

Felix Rabe
04 Jan 2016
8 min read
Are you a developer running OS X or Linux who would like to give Amazon AWS a shot? And do you prefer the command line over fancy GUIs? Read on and you might have your first AWS-provided, Ubuntu-powered Nginx web server running in 30 to 45 minutes. This article will guide you from just having OS X or Linux installed on your computer to a running virtual server on Amazon Web Services (AWS), completely controlled via its Command Line Interface (CLI). If you are just starting out with Amazon AWS, you'll be able to benefit from the AWS Free Tier for 12 months. This tutorial only uses resources available via the AWS Free Tier. (Disclaimer: I am not affiliated with Amazon in any other way than as a user.)

Required skills: A basic knowledge of the command line (shell) and web technologies (SSH, HTTP) is all that's needed. Open your operating system's terminal to follow along.

What are Amazon Web Services (AWS)? Amazon AWS is a collection of services based on Amazon's infrastructure that Amazon provides to the general public. These services include computing resources, file storage, databases, and even crowd-sourced manual labor. See http://aws.amazon.com/products/ for an overview of the provided services.

What is the Amazon AWS Command Line Interface (CLI)? The AWS CLI tool enables you to control all operational aspects of AWS from the command line. This is a great advantage for automating processes and for people (like me) with a preference for textual user interfaces.

How to create an AWS account Head over to https://aws.amazon.com/ and sign up for an account. This process will require you to have a credit card and a mobile phone at hand. Then come back and read on.

How to generate access keys Before the AWS CLI can be configured for use, you need to create a user with the required permissions and download that user's access keys (AWS Access Key ID and AWS Secret Access Key) for use in the AWS CLI. In the AWS Console (https://console.aws.amazon.com/), open your account menu in the upper right corner and click on "Security Credentials". If a dialog pops up for you, just dismiss it for now by clicking "Continue to Security Credentials". Then, in the sidebar on the left, click on "Groups". Create a group (e.g. "Developers") and attach the policy "AmazonEC2FullAccess" to it. Then, in the sidebar on the left, click on "Users". Create a new user, and then copy or download the security credentials to a safe place. You will need them soon. Click on the new user, then "Add User to Groups" to add the user to the group you've just created. This gives the user (and the keys) the required capabilities to manipulate EC2.

Install AWS CLI via Homebrew (OS X) (Linux users can skip to the next section.) On OS X, Homebrew provides a simple way to install other software from the command line and is widely used. Even though the AWS CLI documentation recommends installation via pip (the Python package manager), I chose to install AWS CLI via Homebrew as it is more common. AWS CLI on Homebrew might lag behind a version compared to pip, though. Open the Terminal application and install Homebrew by running:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

The installation script will guide you through the necessary steps to get Homebrew set up. Once finished, install AWS CLI using:

brew install awscli
aws --version

If the last command successfully shows you the version of the AWS CLI, you can continue on to the section about configuring AWS CLI.
Install AWS CLI via pip (Linux) On Debian or Ubuntu Linux, run:

sudo apt-get install python-pip
sudo pip install awscli
aws --version

On Fedora, run:

sudo yum install python-pip
sudo pip install awscli
aws --version

If the last command successfully shows you the version of the AWS CLI, you can continue on with the next section.

Configure AWS CLI and Run Your First Virtual Server Run aws configure and paste in the credentials you've received earlier:

$ aws configure
AWS Access Key ID [None]: AKIAJXIXMECPZBXKMK7A
AWS Secret Access Key [None]: XUEZaXQ32K+awu3W+I/qPyf6+PIbFFORNM4/3Wdd
Default region name [None]: us-west-1
Default output format [None]: json

Here you paste the credentials you've copied or downloaded above, for "AWS Access Key ID" and "AWS Secret Access Key". (Don't bother trying the values given in the example, as I've already changed the keys.) We'll use the region "us-west-1" here. If you want to use another one, you will have to find an equivalent AMI (HD image, "Ubuntu Server 14.04 LTS (HVM), SSD Volume Type", ID ami-df6a8b9b in region "us-west-1") with a different ID for your region. The output formats available are "json", "table" and "text", and can be changed for each individual AWS CLI command by appending the --output <format> option. "json" is the default and produces pretty-printed (though not key-sorted) JSON output. "table" produces a human-readable presentation. "text" is a tab-delimited format that is easy to parse in shell scripts.

Help on AWS CLI The AWS CLI is well documented on http://aws.amazon.com/documentation/cli/, and man pages for all commands are available by appending help to the end of the command line:

aws help
aws ec2 help
aws ec2 run-instances help

Amazon EC2 Amazon EC2 is the central piece of AWS. The EC2 website says: "Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud." In other words, EC2 provides the capability to run virtual machines connected to the Internet. That's what will make our Nginx run, so let's make use of it. To do that, we need to enable SSH networking and generate an SSH key for logging in.

Setting Up the Security Group (Firewall) Security Groups are virtual firewalls. To make a virtual machine accessible, it is associated with one (or more) security groups. A security group defines which ports are open and to what IP ranges. Without further ado:

aws ec2 create-security-group --group-name tutorial-sg --description "Tutorial security group"
aws ec2 authorize-security-group-ingress --group-name tutorial-sg --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name tutorial-sg --protocol tcp --port 80 --cidr 0.0.0.0/0

This creates a security group called "tutorial-sg" with ports 22 and 80 open to the world. To confirm that you have set it up correctly, you can then run:

aws ec2 describe-security-groups --group-names tutorial-sg --query 'SecurityGroups[0].{name:GroupName,description:Description,ports:IpPermissions[*].{from:FromPort,to:ToPort,cidr:IpRanges[0].CidrIp,protocol:IpProtocol}}'

The --query option is a great way to filter through AWS CLI JSON output. You can safely remove the --query option from the aws ec2 describe-security-groups command to see the JSON output in full. The AWS CLI documentation has more information about AWS CLI output manipulation and the --query option.
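As an aside (not part of the original tutorial): if you'd rather script this kind of inspection from Python than from the shell, the boto3 SDK exposes the same EC2 calls and returns plain dictionaries you can filter with ordinary Python instead of --query expressions. A minimal sketch, assuming boto3 is installed (pip install boto3) and the same credentials set up with aws configure:

import boto3

# Reuses the credentials and region configured with `aws configure`.
ec2 = boto3.client("ec2", region_name="us-west-1")

# Equivalent of `aws ec2 describe-security-groups --group-names tutorial-sg`.
group = ec2.describe_security_groups(GroupNames=["tutorial-sg"])["SecurityGroups"][0]

# The same filtering the --query expression performs, written as plain Python.
summary = {
    "name": group["GroupName"],
    "description": group["Description"],
    "ports": [
        {
            "from": perm.get("FromPort"),
            "to": perm.get("ToPort"),
            "cidr": perm["IpRanges"][0]["CidrIp"] if perm["IpRanges"] else None,
            "protocol": perm["IpProtocol"],
        }
        for perm in group["IpPermissions"]
    ],
}
print(summary)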
Generate an SSH Key To actually log in via SSH, we need an SSH key:

aws ec2 create-key-pair --key-name tutorial-key --query 'KeyMaterial' --output text > tutorial-key.pem
chmod 0400 tutorial-key.pem

Run Your First Instance (Virtual Machine) on EC2 Finally, we can run our first instance on AWS! Remember that the image ID "ami-df6a8b9b" is specific to the region "us-west-1". In case you wonder about the size of the disk, this command will also create a new 8 GB disk volume based on the size of the specified disk image:

instance=$(aws ec2 run-instances --image-id ami-df6a8b9b --count 1 --instance-type t2.micro --security-groups tutorial-sg --key-name tutorial-key --query 'Instances[0].InstanceId' --output text) ; echo Instance: $instance

This shows you the IP address and the state of the new instance in a nice table:

aws ec2 describe-instances --instance-ids $instance --query 'Reservations[*].Instances[*].[InstanceId,PublicIpAddress,State.Name]' --output table

Install Nginx and Open Your Shiny New Website And now we can log in to the new instance to install Nginx:

ipaddr=$(aws ec2 describe-instances --instance-ids $instance --query 'Reservations[0].Instances[0].PublicIpAddress' --output text) ; echo IP: $ipaddr
ssh -i tutorial-key.pem ubuntu@$ipaddr sudo apt-get update && ssh -i tutorial-key.pem ubuntu@$ipaddr sudo apt-get install -y nginx

If you now open the website at $ipaddr in your browser (OS X: `open http://$ipaddr`, Ubuntu: `xdg-open http://$ipaddr`), you should be greeted with the "Welcome to nginx!" message. Success! :)

Cleaning Up When you want to stop your instance again:

aws ec2 stop-instances --instance-ids $instance

To start the instance again, substitute start for stop (caution: the IP address will probably change), and to completely remove the instance (including the volume), substitute terminate for stop.

Resources
Amazon Web Services: http://aws.amazon.com/
AWS CLI documentation: http://aws.amazon.com/documentation/cli/
AWS CLI EC2 reference: http://docs.aws.amazon.com/cli/latest/reference/ec2/index.html
Official AWS Tutorials: http://docs.aws.amazon.com/gettingstarted/latest/awsgsg-intro/gsg-aws-tutorials.html
AWS Command Line Interface: http://aws.amazon.com/cli/
Homebrew: http://brew.sh/

About the author Felix Rabe is a developer who develops in Go and deploys in Docker. He can be found on Github and Twitter @felixrabe.

It's Time to Get Rid of That White Space

Owen Roberts
31 Dec 2015
7 min read
It’s been a few years since Responsive Web first burst onto the scene as a revolution in how we developed and designed websites. Gone are the days of websites that looked the same no matter what device or resolution you were using; replaced by reactive containers, grids, a few images, and white. Lots and lots of white. Ever since joining Packt I’ve started to look more and more critically at how every website I visit is designed, and the biggest common thread I’ve seen is the constant light grey or white that makes up many of the biggest websites on the web.

What to Avoid Let’s use Reddit’s new beta mobile design as an example. Reddit has always been pretty minimalist when it comes to design, but the current beta can be a nightmare to use at times. Have a go for yourself and check a few of the posts on the front page – buttons are far too small to be of use to most fingers, and I’ve often found myself clicking through to a video when I’ve wanted to read the comments. And look at all that white space! There have been more than a few times where I’ve clicked on something expecting it to take me to the article only to find that, actually, I’m meant to click slightly to the left. This is probably one of the biggest problems with white space, and for a website like Reddit, which you might visit several times a day for several different topics, it can add up to a lot of frustration for customers. Obviously there are reasons for a white background sometimes – it’s what we’re used to, it’s easy to read large blocks of text with, it works with practically everything – why wouldn’t you use it?! Well, even setting aside the fact that some people naturally struggle with reading large chunks of text on a white background, in 2014 mobile officially passed the desktop as the average user’s device of choice; having your website stand out on mobile while also looking good is more important than ever in the world of responsive web. New customers can get put off by the slightest thing, and in today’s web a bland website can be taken as a warning sign. This is a huge shame, because it doesn’t take much to let your site stand out and shine against the rest in the world of mobile. Luckily, I’ve noticed that a few developers are finally starting to put their own spin on the common responsive design we’ve seen for the last 3 years. In this article I’ll show you a few favorite twists some larger companies have done this year and hopefully inspire you to experiment with ways to make your sites stand out so much more with very little effort!

Words Aren’t Enough Heavy use of images was once frowned upon for mobile sites, but as savvy mobile users start getting bigger data limits we’re seeing an increase in richer mobile sites filled with auto-playing videos, gifs, and more. White space was great for saving data when data limits were a serious concern, but with contracts offering high to unlimited data allowances we can be a bit more relaxed when it comes to our limits. When used right, pictures are far more effective than just reams and reams of words after all. So, let's look at 3 examples of websites I've used recently that really stood out to me and see how they effectively cut down on white space. GAME’s website is filled to the brim with color and carefully chosen ads – in fact, 3/4ths of the front page is just ads with a few other links to back it up.
This makes sense from the developer's standpoint, as a shopping site needs to do two key things:
- Draw those who do not visit the site regularly to the biggest deals in order to get them to order the big bundles and the newest (and therefore most expensive) products.
- Get the customers exactly where they want to be with as few clicks as possible. The more links a user has to go through to get to what they need, the more likely they are to just give up and leave, probably to head to Amazon, which is what you want to avoid.

The simple presentation of this page really works for it. It's colorful and really helps draw the eye to where the designers want you to go. I really like the lack of words outside of the different categories on this page as well – all this color, all these images are used to create a professional looking site that doesn't use a lot of mobile data and doesn't rely on a heavy amount of white everywhere.

Deliveroo is a food delivery service for restaurants, and that really comes through in its presentation. If you've ever used takeout apps before, they can come across very differently depending on where you look (as an experiment, check the website designs and copy for Domino's, Pizza Hut, and Papa John's; they've each got a distinct tone); Deliveroo's design sidesteps this, and helps all the restaurants featured on the site appear equally professional. This continues through the design scheme as well: subtle blues and a background made up mostly of professional looking dishes help entice the customer while also being simple to set up. One thing I really like about this site is that the only white space found on the page, the forms, actually helps draw the customer immediately to them so they can get started straight away. White space is a tool, just like everything else you have in a toolbox, and making the best use of it is important.

Nando's, everyone’s favorite meme of 2015, has probably one of the best mobile website designs I’ve seen. Every aspect of the website is aimed to look good and keeps the company aesthetic baked in – material design is in full force too. The initial page still manages to be minimalist, consisting of a swipe ad and a few navigation buttons helpfully fixed to the top of the current screen. The best part is that with a simple tap of the menu button, everything a hungry soul needs replaces the page without a fresh load – it’s quick, painless, and dynamic. Why is this great? Simply put, it manages to capture the spirit of minimalism without looking bland. The font color naturally meshes well with the background making it easier on the eyes, everything is where it needs to be for easily flicking between sections, and it’s a real joy to mess around with.

Final Thoughts I think it’s important to stress again that the typical use of bare containers on a white background isn’t a bad thing by itself, but we’re entering a time where other businesses and developers are now looking to catch the eye of everyone they can, and the way to do that these days is through a powerful mobile site. You don’t have to be a designer or spend a lot of money to make a few simple changes that really bring your site to the next level. Give it a shot and see what mixing things up does to your site!

Docker has turned us all into sysadmins

Richard Gall
29 Dec 2015
5 min read
Docker has been one of my favorite software stories of the last couple of years. On the face of it, it should be pretty boring. Containerization isn't, after all, as revolutionary as most of the hype around Docker would have you believe. What's actually happened is that Docker has refined the concept, and found a really clear way of communicating the idea. Deploying applications and managing your infrastructure doesn't sound immediately 'sexy'. After all, it was data scientist that was proclaimed the sexiest job of the twenty-first century; sysadmins hardly got an honorable mention. But Docker has, amazingly, changed all that. It's started to make sysadmins sexy… And why should we be surprised? If a sysadmin's role is all about delivering software, managing infrastructure, maintaining it and making sure it performs for the people using it, then it's vital (if not obviously sexy). A decade ago, when software architectures were apparently immutable and much more rigid, the idea of administration wasn't quite so crucial. But now, in a world of mobile and cloud, where technology is about mobility as much as it is about stability (in the past, tech glued us to desktops; now it's encouraging us to work in the park), sysadmins are crucial.

Tools like Docker are crucial to this. By letting us isolate and package applications in their component pieces we can start using software in a way that's infinitely more agile and efficient. Where once the focus was on making sure software was simply 'there,' waiting for us to use it, it's now something that actively invites invention, reconfiguration and exploration. Docker's importance to the 'API economy' (which you're going to be hearing a lot more about in 2016) only serves to underline its significance to modern software. Not only does it provide 'a convenient way to package API-provisioning applications', but it also 'makes the composition of API-providing applications more programmatic', as this article on InfoWorld has it. Essentially, it's a tool that unlocks and spreads value. Can we, then, say the same about the humble sysadmin? Well yes – it's clear that administering systems is no longer simply a matter of organization or robust management, but a business-critical role that can be the difference between success and failure. However, what this paradigm shift really means is that we've all become sysadmins. Whatever role we're working in, we're deeply conscious of the importance of delivery and collaboration. It's not something we expect other people to do, it's something that we know is crucial. And it's for that reason that I love Docker – it's being used across the tech world, a gravitational pull bringing together disparate job roles in a way that's going to become more and more prominent over the next 12 months. Let's take a look at just two of the areas in which Docker is going to have a huge impact.

Docker in web development Web development is one field where Docker has already taken hold. It's changing the typical web development workflow, arguably making web developers more productive. If you build in a single container on your PC, that container can then be deployed and managed anywhere. It also gives you options: you can build different services in different containers, or you can build a full-stack application in a single container (although Docker purists might say you shouldn't).
In a nutshell, it's this ability to separate an application into its component parts that underlines why microservices are fundamental to the API economy. It means different 'bits' – the services – can be used and shared between different organizations. Fundamentally though, Docker bridges the difficult gap between development and deployment. Instead of having to worry about what happens once it has been deployed, when you build inside a container you can be confident that you know it's going to work – wherever you deploy it. With Docker, delivering your product is easier (essentially, it helps developers manage the 'ops' bit of DevOps, in a simpler way than tackling the methodology in full), which means you can focus on the specific process of development and optimizing your products.

Docker in data science Docker's place within data science isn't quite as clearly defined or fully realised as it is in web development. But it's easy to see why it would be so useful to anyone working with data. What I like is that with Docker, you really get back to the 'science' of data science – it's the software version of working in a sterile and controlled environment. This post provides a great insight into just how great Docker is for data – admittedly it wasn't something I had thought that much about, but once you do, it's clear just how simple it is. As the author puts it: 'You can package up a model in a Docker container, go have that run on some data and return some results - quickly. If you change the model, you can know that other people will be able to replicate the results because of the containerization of the model.'

Wherever Docker rears its head, it's clearly a tool that can be used by everyone. However you identify – web developer, data scientist, or anything else for that matter – it's worth exploring and learning how to apply Docker to your problems and projects. Indeed, the huge range of Docker use cases is possibly one of the main reasons that Docker is such an impressive story – the fact that there are thousands of other stories all circulating around it. Maybe it's time to try it and find out what it can do for you?

Level Up Your Company's Big Data With Resource Management

Timothy Chen
24 Dec 2015
4 min read
Big data was once one of the biggest technology hypes: tons of presentations and posts talked about how the new systems and tools allowed large and complex data to be processed in ways traditional tools weren't able to. While Big data was at the peak of its hype, most companies were still getting familiar with the new data processing frameworks such as Hadoop, and new databases such as HBase and Cassandra. Fast forward to now: Big data is still a popular topic, lots of companies have already jumped on the Big data bandwagon, and many are already moving past first-generation Hadoop to evaluate newer tools such as Spark and newer databases such as Firebase, NuoDB or MemSQL. But most companies also learn from running all of these tools that deploying, operating and planning capacity for them is very hard and complicated. Although over time lots of these tools have become more mature, they are still usually running in their own independent clusters. It's also not rare to find multiple Hadoop clusters in the same company, since multi-tenancy isn't built into many of these tools and you run the risk of a few non-critical big data jobs overloading the cluster.

Problems running independent Big data clusters There are a lot of problems when you run a lot of these independent clusters. One of them is monitoring and visibility: all of these clusters have their own management tools, and integrating them with the company's shared monitoring and management tools is a huge challenge, especially when onboarding yet another framework with another cluster. Another problem is multi-tenancy. Although having independent clusters partly solves the problem, another org's job can still overtake a whole shared cluster. It also doesn't solve the problem of a bug in a Hadoop application simply using up all the available resources, and the pain of debugging this is horrific. Another problem is utilization: a cluster is usually not 100% utilized, and all of these instances running in Amazon or in your datacenter are just racking up bills while doing no work. There are more major pain points that I don't have time to get into.

Hadoop v2 The Hadoop developers and operators saw this problem, and in the second generation of Hadoop they developed a separate resource management tool called YARN: a single management framework that manages all of the resources in the cluster, enforces the resource limits of jobs, integrates security into the workload, and even optimizes the workload by automatically placing jobs closer to the data. This solves a huge problem when operating a Hadoop cluster, and also allows all of the Hadoop clusters to be consolidated into one cluster, since it gives finer-grained control over the workload and improves the efficiency of the cluster.

Beyond Hadoop Now, with the vast number of Big data technologies growing in the ecosystem, there is a need for a common resource management layer among all of the tools; without a single resource management system across all the frameworks we run back into the same problems mentioned before. Also, when all these frameworks are running under one resource management platform, a lot of options for optimization and resource scheduling become possible. Here are some examples of what could be possible with one resource management platform: With one resource management platform, the platform can understand the entire cluster workload and the available resources, and can automatically resize and scale up and down based on workloads across all these tools.
It can also resize jobs according to priority. The cluster is able to detect under-utilization from other jobs and offer the slack resources to Spark batch jobs without impacting your very important workloads from other frameworks, maintaining the same business deadlines and saving a lot more cost. In the next post I'll continue by covering Mesos, which is one such resource management system, and how the upcoming features in Mesos make the optimizations I mentioned possible. For more Big Data tutorials and analysis, visit our dedicated Hadoop and Spark pages.

About the author Timothy Chen is a distributed systems engineer and entrepreneur. He works at Mesosphere and can be found on Github @tnachen.

Level Up Your Company's Big Data with Mesos

Timothy Chen
23 Dec 2015
5 min read
In my last post I talked about how using a resource management platform can allow your Big Data workloads to be more efficient with fewer resources. In this post I want to continue the discussion with a specific resource management platform, which is Mesos.

Introduction to Mesos Mesos is an Apache top-level project that provides an abstraction over your datacenter resources and an API to program against these resources to launch and manage your workloads. Mesos is able to manage your CPU, memory, disk, ports and other resources that the user can custom-define. Every application that wants to use resources in the datacenter to run tasks talks with Mesos and is called a scheduler. It uses the scheduler API to receive resource offers, and each scheduler can decide to use the offer, decline the offer to wait for future ones, or hold on to the offer for a period of time to combine the resources. Mesos ensures fairness amongst multiple schedulers so no one scheduler can overtake all the resources. So how do your Big data frameworks benefit specifically by using Mesos in your datacenter?

Autopilot your Big data frameworks The first benefit of running your Big data frameworks on top of Mesos – which abstracts away resources and provides an API to program against your datacenter – is that it allows each Big data framework to manage itself with minimal human intervention. How does the Mesos scheduler API provide self-management to frameworks? First we should understand a little bit more about what the scheduler API allows you to do. The Mesos scheduler API provides a set of callbacks whenever the following events occur: new resources available, task status changed, slave lost, executor lost, scheduler registered/disconnected, etc. By reacting to each event with the Big data framework's specific logic, frameworks can deploy, handle failures, scale and more. Using Spark as an example, when a new Spark job is launched it launches a new scheduler that waits for resources from Mesos. When new resources are available it deploys Spark executors to those nodes automatically, provides Spark task information to these executors, and communicates the results back to the scheduler. When, for some reason, a task is terminated unexpectedly, the Spark scheduler receives the notification and can automatically relaunch that task on another node and attempt to resume the job. When a machine crashes, the Spark scheduler is also notified and can relaunch all the executors from that node on other available resources. Moreover, since the Spark scheduler can choose where to launch the tasks, it can also choose the nodes that provide the most data locality to the data it is going to process. It can also choose to deploy the Spark executors in different racks to have higher availability if it's a long-running Spark streaming job. As you can see, programming against an API allows lots of flexibility and self-management for the Big data frameworks, and saves a lot of the manual scripting and automation that otherwise needs to happen.

Manage your resources among frameworks and users When there are multiple Big data frameworks sharing the same cluster, and each framework is shared by multiple users, providing a good policy around ensuring the important users and jobs get executed becomes very important. Mesos allows you to specify roles, where multiple frameworks can belong to a role.
Mesos then allows operators to specify weights among these roles, so that fair sharing is enforced by Mesos and resources are provided according to the weights specified. For example, one might provide 70% of resources to Spark and 30% of resources to general tasks with the weighted roles in Mesos. Mesos also allows reserving a fixed amount of resources per agent for a specific role. This ensures that your important workload is guaranteed to have enough resources to complete its work. There are more features coming to Mesos that also help multi-tenancy. One feature is called Quota, which ensures that a certain amount of resources is reserved over the whole cluster instead of per agent. Another feature is called dynamic reservation, which allows frameworks and operators to reserve a certain amount of resources at runtime and unreserve them once they're no longer necessary.

Optimize your resources among frameworks Using Mesos also boosts your utilization, by allowing multiple tasks from different frameworks to use the same cluster instead of running separate clusters. There are a number of features currently being worked on that will boost utilization even further. The first feature is called oversubscription, which uses the tasks' runtime statistics to estimate the amount of resources that is not being used by these tasks, and offers these resources to other schedulers so more of the cluster is actually utilized. The oversubscription controller also monitors the tasks, and when a task is being hurt by the shared resources, it kills the tasks running on the slack resources so the original workload is no longer affected. Another feature is called optimistic offers, which allows multiple frameworks to compete for resources. This helps utilization by allowing faster scheduling, and gives the Mesos scheduler more inputs to choose how to best schedule its resources in the future. As you can see, Mesos allows your Big data frameworks to be self-managed and more efficient, and enables optimizations that are only possible by sharing the same resource management. If you're curious how to get started, you can look at the Mesos website or the Mesosphere website, which provides even simpler tools to use your Mesos cluster. Want more Big Data tutorials and insight? Both our Spark and Hadoop pages have got you covered.

About the author Timothy Chen is a distributed systems engineer and entrepreneur. He works at Mesosphere and can be found on Github @tnachen.
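As a closing aside (not part of the original article): to make the callback-driven scheduler model described above a little more concrete, here is a minimal, illustrative Python sketch. It is not the real Mesos API – the class and method names below are hypothetical stand-ins for the events listed earlier (resource offers, task status changes, lost agents) – but it shows the shape of the logic a framework scheduler implements.

# Illustrative only: a skeleton of the event-driven logic a Mesos-style
# framework scheduler implements. Names are hypothetical, not the real API.
class MiniScheduler:
    def __init__(self, tasks):
        self.pending = list(tasks)   # tasks waiting for resources
        self.running = {}            # task name -> agent id

    def resource_offers(self, offers):
        # Called when new resources are available; accept or decline each offer.
        for offer in offers:
            if self.pending and offer["cpus"] >= 1 and offer["mem_mb"] >= 512:
                task = self.pending.pop(0)
                self.running[task] = offer["agent_id"]
                print("Launching", task, "on", offer["agent_id"])
            else:
                print("Declining offer from", offer["agent_id"])

    def status_update(self, task, state):
        # Called when a task's status changes; requeue failed or lost tasks.
        if state in ("FAILED", "LOST"):
            self.running.pop(task, None)
            self.pending.append(task)

    def agent_lost(self, agent_id):
        # Called when an agent (slave) is lost; requeue everything it ran.
        for task, agent in list(self.running.items()):
            if agent == agent_id:
                self.status_update(task, "LOST")

# Example usage with made-up offers:
sched = MiniScheduler(tasks=["spark-task-1", "spark-task-2"])
sched.resource_offers([{"agent_id": "agent-A", "cpus": 2, "mem_mb": 2048}])
sched.status_update("spark-task-1", "FAILED")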

Python Web Development Frameworks: Django or Flask?

Owen Roberts
22 Dec 2015
5 min read
I love Python; I’ve been using it for close to three years now, after a friend gave me a Raspberry Pi they had grown bored with. In the last year I’ve also started to seriously get into web development for my own personal projects, but juggling all these different languages can sometimes get a bit too much for me; so this New Year I’ve promised myself I’m going to get into the world of Python web development. Python web dev has exploded in the last year. Django has been around for a decade now, but with long term support and the wealth of improvements that we’ve seen to the framework in just the last year it’s really reaching new heights of popularity. Not only Django, but Flask’s rise to fame has meant that writing a web page doesn’t have to involve reams and reams of code either! Both these frameworks are about cutting down on time spent coding without sacrificing quality, but which one do you go for? In this blog I’m going to show you the best bundles you need to get started with taking Python to the world of the web, with titles I've been recommended - and at only $5 per eBook, hopefully this little hamper list inspires you to give something new a try for 2016! So, first of all which do you start with, Django or Flask? Let’s have a look at each and see what they can do for you.

Route #1: Django So the first route to enter the world of Python web dev is Django, also touted as “the web framework for perfectionists with deadlines”. Django is all about clean, pragmatic design and getting to your finished app in as little time as possible. Having been around the longest, it's also got a great amount of support, meaning it's perfect for larger, more professional projects. The best way to get started is with our Django By Example or Learning Django Web Development titles. Both have everything you need to take the first steps in the world of web development in Python; taking what you already know and applying it in new ways. The By Example title is great as it works through 4 different applications to see how Django works in different situations, while the Learning title is a great supplement for learning the key features that need to be used in every application. Now that the groundwork has been laid, we need to build upon that. With Django we've got to catch up with 10 years of experience and community secrets fast! Django Design Patterns and Best Practices is filled with some of the community's best hacks and cheats to get the most out of developing Django, so if you're a developer who likes to save time and avoid mistakes (and who doesn't?!) then this book is the perfect desk companion for any Django lover. Finally, to top everything off and prepare us for the next steps in the world of Django, why not try a new paradigm with Test-Driven Development with Django? I'm honestly one of those developers that hates having to test right at the end, so being able to peel down a complex critical task into layers throughout just makes more sense to me.

Route #2: Flask Flask has exploded in popularity in the last year and it's not hard to see why – with its focus on as minimal code as possible, Flask is perfect for developers who are looking to get a quick web page up, as well as those who just hate having to write mountains of code when a single line will do. As an added bonus the creators of the framework looked at Django and took on board feedback from that community as well, so you get the combined force of two different frameworks at your fingertips.
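To give a feel for just how little code Flask asks of you before we get to the book recommendations, here is a minimal illustrative sketch (not taken from any of the titles below; the route and message are made up) of a complete Flask application:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # A complete, working page in a handful of lines.
    return "Hello from Flask!"

if __name__ == "__main__":
    # Starts the development server on http://127.0.0.1:5000 by default.
    app.run(debug=True)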
Flask is easy to pick up, but difficult to master, so having a good selection of titles to help you along is the best way to get involved in this new world of Python web dev. Learning Flask Framework is the logical first step for getting into Flask. Released last month, it's come heartily recommended as the all-in-one first stop to getting the most out of Flask. Want to try a different way to learn though? Well, the Learning Flask video is a great supplement to the learning title; it shows us everything we need to start building our first Flask sites in just under 2 hours – almost as quickly as it takes the average Flask developer to build their own sites. The Flask Framework Cookbook is the next logical step as a desktop companion for someone just starting their own projects. Having over 80 different recipes to get the most out of the framework is essential for those dipping their feet into this new world without worrying about losing everything. Finally, Flask Blueprints is something a little different, and is especially good for getting the most out of Flask. Now, if you're serious about learning Flask you're likely to get everything you need quickly, but the great thing about the framework is how you apply it. The different projects inside this title make sure you can make the most of Flask's best features for every project you might come across! Want to explore more Python? Take a look at our dedicated Python page. You'll find our latest titles, as well as even more free content.

Smart Learning is Medium Agnostic

Richard Gall
22 Dec 2015
4 min read
What’s the best way to learn? This is a question that’s been plaguing people for centuries. Currently educationalists and publishers are the ones responsible for the troublesome waters of this age-old problem, but it’s worth remembering it’s nothing new. What’s more, this question is bound up with technological development – just as the Gutenberg printing press made knowledge more accessible and shareable, changing the way people learn, back in the 15th century, many saw it as something satanic, its ability to reproduce identical copies of text regarded as witchcraft. Today, witchcraft is something we can only dream of – we’re all searching for the next Gutenberg Press, trying to find the best way to reconfigure and package the expansive and disorienting effects of the internet into something that is simple, accessible and ‘sticky’ (we really need a better word). But in all this effort to find the solution to our problems we keep forgetting that there’s no single answer – what we really want are options. The metaphor ‘tailor-made’ is perhaps a little misleading – when you buy a tailor made suit, it has been created to fit you. But we can’t think of learning like that. However tempting it is to say ‘I’m a visual learner’, it’s never entirely true. It depends on what you’re learning, what mood you’re in, how you’re approaching a topic and a whole range of other reasons. When it comes to software and tech, the different ways in which we might want to learn are more pronounced. Some days we might really want to watch a video tutorial to see code in action; maybe ‘visual learners’ will most appreciate video courses, but even those that favour a classic textbook sometimes want to be shown rather than told what to do. Similarly, if we know we need a comprehensive journey through a new idea, a new technology then we might invest in a course (if you can’t convince your boss to cough up the cash, that is…). In particular, in terms of strategic and leadership skills, people are still willing to invest big money to undergo training, even in an age of online courses and immediate information – indeed, these face-to-face, ‘irl’ courses are more important than ever before. But we also know that a book is a reliable resource that you can constantly return to. You can navigate it however you like – read it from back to front, start in the middle, tear out the pages and stick them on your ceiling so you can read them while you’re lying down – the choice is yours. But what about eBooks? True, you probably shouldn’t stick it to your ceiling, but you can still navigate it, annotate it, and share snippets on social media. You can also carry loads of them around – so if, like me, you’re indecisive, you can remain confident that an entire library of books is safe in your bag should you ever need one – whether you need a quick programming solution or an amusing anecdote. But even then, that’s not the end of it. ‘Learning’ is often seen as a very specific thing. It sounds like LinkedIn wisdom, but it’s true that learning happens everywhere – from the quick blogs that give you an insight on what’s important to the weighty tomes that contain everything you need to know about Node.js Design Patterns and using Python to build machine learning models. Today, then, it’s all about navigating these different ways of learning. 
It’s about being aware of what’s important and making sure you’re using the resources that are going to help you not only solve a problem or get the job done, but also to think better and to become more creative. There’s no one ‘right way’ to learn – smart learners are always open to new experiences and are medium agnostic.  If you don’t know where to start download and read our free Skill Up Year in Review report. Covering the trends that defined tech in 2015, it also looks ahead to the future, providing you with a useful learning roadmap.

Eight Things Developers Last Spent $5 On

Sam Wood
21 Dec 2015
1 min read
In our Skill Up Year in Review survey, we asked developers what they last spent $5 on. Here's what a few of them said.

1. Coffee Developers are machines that turn caffeine into code: over 300 surveyed developers last spent $5 on a coffee.
2. Apps and Games "I swear, I only spent $5 on Candy Crush Saga purchases!"
3. A Casino Chip We're confident that spending $5 to skill up your tech knowledge is a better way to get rich than gambling. ;)
4. A Cat We couldn't find a picture of a cat on the internet; please accept this pug GIF instead.
5. Cat Pajamas We managed to find a cat picture! We assume that 'Cat Pajamas' means pajamas to be worn by cats.
6. Bearings for an R2D2 droid $5 is a better price than you'll be charged at Tosche Station, that's for sure. (Also, I've now fulfilled my obligatory Christmas 2016 Star Wars reference.)
7. Homeworld We assume this is the game, but we're charmed by the idea of being able to buy interplanetary real estate for $5.
8. Skilling Up! Over 800 developers last spent their $5 on a book, course, or some other learning resource. Developers never really do stop learning!

NW.js: The App and Shortcut APIs

Adam Lynch
18 Dec 2015
5 min read
The NW.js GUI library provides an "App" API, which contains a variety of methods and properties, some of which are essential to pretty much any app, and some of which have more obscure use cases. You can access the API as follows:

var gui = require('nw.gui');
gui.App.quit();

As you can see from the example, the App API contains a quit method, which will kill your application. gui.App.argv, gui.App.dataPath, and gui.App.manifest are properties containing an array of arguments passed to your application when it was executed, the application's data path in the user's directory, and an object representing your app's JSON manifest, respectively. gui.App.dataPath is typically a directory with the name you gave as the name property in your app manifest, located in the current user's "AppData/Local/" directory on Windows, ~/Library/Application Support/ on Mac OS X, or in ~/.config/ on Linux.

var gui = require('nw.gui');
gui.App.on('open', function(command){
  gui.App.closeAllWindows();
});

The App API gives us two events we can listen for: open and reopen. The open event is fired when someone opens a file with your application, i.e. from the command line like: myapp a.txt. The function passed will receive the entire command (myapp a.txt) as the only argument. The reopen event is exclusive to Mac OS X and is fired when the user clicks the dock icon for your app while it is already running. Also used in the example is the gui.App.closeAllWindows method, which could come in handy if your app contains multiple windows. Other methods out of scope for this post include ones for getting and setting proxy configuration, editing cross-origin policies, setting where crash dumps get written to when NW.js itself crashes, and forcing a crash in the browser or renderer.

Keyboard shortcuts This is also where you'll find methods to add or remove "global hot keys", i.e. keyboard shortcuts. To add a shortcut, you could do something like this:

var gui = require('nw.gui');
var shortcut = new gui.Shortcut({
  key : "V",
  active : function() {
    console.log("Shortcut: " + this.key + " pressed.");
  },
  failed : function(msg) {
    // Error adding / parsing the key
    console.log(msg);
  }
});
gui.App.registerGlobalHotKey(shortcut);

With the above code, any time the user presses the V key, the active callback is called. We create a new Shortcut instance (another piece of the NW.js GUI library) and pass it to gui.App.registerGlobalHotKey. The failed callback is called if there was a problem adding or parsing the key option. This is useful for development because there are some peculiar restrictions on what can be passed as the key option. The key option has to contain exactly one "key" (no more, no less) and can contain zero or more "modifiers". A "key" is one of the following: A-Z, 0-9, Comma, Period, Home, End, PageUp, PageDown, Insert, Delete, Arrow keys (Up, Down, Left, Right) and the Media Keys (MediaNextTrack, MediaPlayPause, MediaPrevTrack, MediaStop). The supported "modifiers" are: Ctrl, Alt, and Shift. Strangely, it was decided that on Mac OS X Ctrl would bind to Command instead, intentionally. I find this very strange as it seems you cannot bind any shortcuts which use the Ctrl key on a Mac because of this. Hopefully in a future version Ctrl will map to Ctrl, Command will be supported, and it would be up to the user to check the current platform and bind to the correct keys accordingly. For clarity, here are a few example key bindings:
- A: Valid.
- A+B: Fails; you're not allowed to have multiple "keys".
- Alt+Shift+T: Valid.
- Ctrl+B: Valid but maps to Command+B on Mac OS X.

It's not recommended to bind a shortcut to just one "key" like the A key, as it'll block usage of that key for other applications while your app is running or until your app "unregisters" it.

Unbinding a shortcut The API is pretty symmetric in that you can call gui.App.unregisterGlobalHotKey to remove or unbind a shortcut. You do have to pass the Shortcut instance again though, which is a bit awkward. Thankfully, you can also pass a new Shortcut instance with just the key option as a workaround. So either of the last two lines here would work:

var gui = require('nw.gui');
var shortcut = new gui.Shortcut({
  key : 'V',
  active : function() {
    console.log('Shortcut: ' + this.key + ' pressed.');
  },
  failed : function(msg) {
    console.log(msg);
  }
});
gui.App.registerGlobalHotKey(shortcut);

gui.App.unregisterGlobalHotKey(shortcut); // option 1
gui.App.unregisterGlobalHotKey({key: 'V'}); // option 2

Events The Shortcut instance emits active and failed events as well as accepting them as options. You could use either or both if you'd like. Here's an unrealistic example:

var gui = require('nw.gui');
var shortcut = new gui.Shortcut({
  key : 'V',
  active : function() {
    console.log('ACTIVE: Constructor option');
  },
  failed : function(msg) {
    console.log('FAILED (Constructor option): ' + msg);
  }
});

shortcut.on('active', function(){
  console.log('ACTIVE: Event listener');
});
shortcut.on('failed', function(msg){
  console.log('FAILED (Event listener): ' + msg);
});

gui.App.registerGlobalHotKey(shortcut);

System-wide These shortcuts are system-wide and will be called even if your app isn't focused. If you'd like to have shortcuts which do something only when your app is focused, then you could check if the app is focused in the active callback using the Window API we'll cover later.

Summary and Alternative This article provides a quick look at the App API, but if you really need to bind to keys which aren't supported by this API, or if you can't use this API because your app will be used in both NW.js and on the Web, then you could use a JavaScript library to bind your shortcuts. These will not be "global" or "system-wide" though.

About The Author Adam Lynch is a TeamworkChat Product Lead & Senior Software Engineer at Teamwork. He can be found on Twitter @lynchy010.

5 Things That Defined Tech in 2015

Richard Gall
18 Dec 2015
5 min read
2015 has been the year that the future of tech has become more clearly defined. For us at Packt, it’s been a year of reflection and analysis. We’ve been finding out more about the lives of our customers, looking at what’s driven their careers and technical expertise over the last decade, and looking ahead towards the challenges not only of 2016 but also the decade ahead. Our Skill Up Skills and Salary Reports were at the centre of this, and have given us a fresh perspective on the lives that we build around the tech we use every day. But we certainly don’t want to stop learning about what makes you tick – and what keeps you up at night. That’s why we’ve ended the year with our very first Year in Review. It’s a chance for us to join the dots between the year that’s been and gone and the one that’s just ahead. So what was important? Take a look at some of the key findings, and then read the full report yourself.

Python, Python, Python What better place to begin than with Python. Our end of year survey found that not only was Python the fastest growing topic of 2015 – being the most adopted programming language on the planet – it is also going to be the fastest growing topic of 2016. Surely this alone underlines that it is now the language of software par excellence. That’s not to say it’s superior to other languages – there are, of course, plenty of reasons not to use Python – but its versatility and ease of use mean it’s an accessible route into the software world, a solution to a vast range of contemporary problems, from building websites to analysing data.

Specialized Programming Languages If Python has reached into just about every corner of the programming world, a curious counterpoint is the emergence of more specialized programming languages. In many ways, these languages share a number of Python’s distinctive characteristics. Languages like Go (which was far and away the winner when it came to languages people wanted to learn), Julia and Clojure all offer a level of control, their clear and expressive syntax perfect for complex problem solving and high performance. As the programming world becomes obsessed with speed and performance, it’s clear that these languages will be sticking around for some time.

Bigger, Better, Faster Data The next generation of Big Data and Data Science is already here. You probably already knew that – but the topics that emerged from our survey results indicate the direction in which the world of data is heading. Deep Learning was the most notable trend on the agenda for 2016. Clearly, now that Machine Learning has become embedded in our everyday lives (professional and personal), 2016 is going to be all about creating even more sophisticated algorithms that can produce detailed levels of insight that would have been unimaginable a few years ago. Alongside this – perhaps as a corollary to this next-level machine learning – is the movement towards rapid, or even real-time, Big Data processing. Tools such as Apache Kafka, Spark and Mesos all point towards this trend, all likely to become key tools in our Big Data infrastructures not only over the next 12 months but also over the next few years.

Internet of Things might Finally Become a Reality Even 12 months ago the Internet of Things looked like little more than a twinkle in the eye of a futurologist.
Today it has finally taken form – we’re not there yet, but it’s starting to look like something that’s going to have a real impact not only on the way we work, but also the way we understand software’s relationship to the world around us. The growth of wearables, and the rise of applications connected to real-world objects (we love home automation), are the first step towards a world that is entirely connected. It’s important to remember this is going to have a huge impact on everyone working in tech – from the developers creating applications to the analysts charged with harnessing this second explosion of data. The Future of JavaScript Many web developers we spoke to listed AngularJS as the most useful things they learned in 2015 – many more also said they planned on learning it in 2016. The impact of the eagerly-awaited Angular 2.0 remains to be seen, but it’s likely that the best way to prepare for the next generation of Angular is by getting to grips with Angular now! It would be unwise to see Angular’s dominance in isolation – it’s the growth of full-stack development that’s been crucial in 2015, and something that is going to shape the next 12 months in web development. Node.js featured as a key topic for many of our customers, highlighting that innovation in web development appears to be driven by tools that provide new ways of working with JavaScript. Although Node and Angular have a real hold when it comes to JavaScript, we should also pay attention to newer frameworks like React.js and Meteor. These are frameworks that are tackling the complexity and heft of today’s websites and applications through radical simplicity – if you’re a web developer, you cannot afford to ignore them. Download our Year in Review and explore our key findings in more detail. Then start exploring the topics and tools that you need to learn by taking advantage of our huge $5 offer!

Trick Question: What is DevOps?

Michael Herndon
10 Dec 2015
7 min read
An issue that plagues DevOps is the lack of a clear definition. A Google search displays results that state that DevOps is empathy, culture, or a movement. There are also derivations of DevOps like ChatOps and HugOps.

A lot of speakers mentions DEVOPS but no-one seemed to have a widely agreed definition of what DEVOPS actually means. — Stephen Booth (@stephenbooth_uk) November 19, 2015

Proposed Definition of DevOps: "Getting more done with fewer meetings." — DevOps Research (@devopsresearch) October 12, 2015

The real head-scratchers are the number of job postings for DevOps Engineers and the number of certifications for DevOps that are popping up all over the web. The job title Software Engineer is contentious within the technology community, so the job title DevOps engineer is just begging to take pointless debates to a new level. How do you create a curriculum and certification course that has any significant value on an unclear subject? For a methodology that places an emphasis on people, empathy, and communication, it falls woefully short of heeding its own advice and values. On any given day, you can see the meaning debated in blog posts and tweets.

My current understanding of DevOps and why it exists DevOps is an extension of the agile methodology that is hyper-focused on bringing customers extraordinary value without compromising creativity (development) or stability (operations). DevOps comes from the two merged worlds of Development and Operations. Operations in this context includes all aspects of IT, such as system administration, maintenance, etc. Creation and stability are naturally at odds with each other. The ripple effect is a good way to explain how these two concepts have friction. Stability wants to keep the pond from becoming turbulent and causing harm. Creation leads to change, which can act as a random rock thrown into the water sending ripples throughout the whole pond, leading to undesired side effects that cause harm to the whole ecosystem. DevOps seeks to leverage the momentum of controlled ripples to bring about effective change without causing enough turbulence to impact the whole pond negatively. The natural friction between these two needs often drives a wedge between development and operations. Operations worry that a product update may include broken functionality that customers have come to depend on, and developers worry that sorely needed new features may not make it to customers because of operations' resistance to change. Instead of taking polarizing positions, DevOps is focused on blending those two positions into a force that effectively provides value to the customer without compromising the creativity and stability that a product or service needs to compete in an ever-evolving world.

Why is a clear singular meaning needed for DevOps? The understanding of a meaning is an important part of sending a message. If an unclear word is used to send a message, then the word risks becoming noise and the message risks being misinterpreted or not interpreted at all. Without a clear singular meaning, you risk losing the message that you want people to hear. In technology, I see messages get drowned in noise all the time.

The problem of multiple meanings In communication theory, noise is anything that interferes with understanding. Noise is more than just the sounds of static, loud music, or machinery. Creating noise can be as simple as using obscure words to explain a topic or providing an unclear definition that muddles the comprehension of a given subject.
DevOps suffers from too much noise, and that noise increases people's uncertainty about the word. After reading a few posts on DevOps, each with its own declaration of the essence of DevOps, DevOps becomes confusing. DevOps is empathy! DevOps is culture! DevOps is a movement! Because of noise, DevOps seems to stand for multiple ideas plus agile operations, without any prioritization or definitive context. OK, so which is it? Is it one of those or is it all of them? Which idea is the most important?

Furthermore, these ideas can cause friction, as not everyone shares the same view on these topics. DevOps is supposed to reduce friction between naturally opposing groups within a business, not create more of it. People can get behind making more money and working fewer hours by strategically providing customers with extraordinary value. Once you start going into things that people consider personal, people can start to feel excluded for not wanting to mix the two topics, and thus you diminish the reach of the message that you once had.

When writing about empathy, one should practice empathy and consider that not everyone wants to be emotionally vulnerable in the workplace. Forcing people to be emotionally vulnerable, or to fit a certain mold for culture, can cause people to shut down. I would argue that all businesses need people who are capable of empathy to argue on behalf of the customer and other employees, but it's not a requirement that all employees are empathetic. At the other end of the spectrum, you need people who are not empathetic to make hard and calculating decisions.

One last point on empathy: I've seen people write about empathy and users in a way that should have been about the psychology of users or something else entirely. Empathy is strictly understanding and sharing the feelings of another. It doesn't cover physical needs or intellectual ones, just the emotional. So another issue with crossing multiple topics into one definition is that you risk damaging two topics at once.

This doesn't mean people should avoid writing about these topics. Each topic stands on its own merit. Each topic deserves its own slate. Empathy and culture are causes that any business can take up without adopting DevOps. They are worth writing about; just make sure that you don't mix messages and confuse people. Stick to one message.

Write to the lowest common denominator

Another aspect of noise is using wording that is a barrier to understanding a given definition.

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support. - the agile admin

People coming from outside the world of agile and development are going to have a hard time piecing together the meaning of a definition like that. What my mind hears when reading something like that is the same sound a teacher makes in Charlie Brown. Blah, blah blah, blah! Be kind to your readers. When you want them to remember something, make it easy to understand and memorable.

Write to appeal to all personality styles

In marketing, you're taught to write to appeal to 4 personality styles: driver, analytical, expressive, and amiable. Getting people to work together in the workplace also requires appealing to these four personality types. There is a need for a single definition of DevOps that appeals to the 4 personality styles or, at the very least, refrains from being a barrier to entry.
If you need to persuade a person with a driver type of personality, but the definition includes language that invokes an automatic no, then people who want to use DevOps are at a disadvantage. Give people every advantage you can for them to adopt DevOps.

It's time for a definition of DevOps

One of the main points of communication is to reduce uncertainty. It's hypocritical to introduce a word that touches upon the importance of communication without giving it a definite meaning, and then expect people to take it seriously while the definition constantly changes. It's time that we have a singular definition for DevOps, so that people who use it for the hiring process, certifications, and marketing can do so without the risk of the message being lost or co-opted into something that it is not.

About the author

Michael Herndon is the head of DevOps at Solovis, creator of badmishka.co, and all around mischievous nerdy guy.

Rediscover Tech and Rethink Innovation

Richard Gall
10 Dec 2015
5 min read
What’s technology for? How often do we actually ask that question? Very rarely, I’d wager. So many minutes and web pages are dedicated to discussing ‘disruption’ and ‘innovation’ that it’s easy to lose sight of what we’re trying to achieve as tech professionals. Yes, we always want things to be ‘better’, but what exactly does that mean? Usually ‘better’ means ‘faster’ or ‘more efficient.’ Again, I’d ask the same question – why do we want things to be faster? And, ok, it sounds stupid, but why might we want things to be more efficient?

These sorts of questions are perhaps facetious, but they highlight just how easy it is to lose sight of what makes tech fun, inspiring and exciting. Indeed, it’s useful to note that a lot of the innovations and trends we see across the tech world – from microservices to web components to data lakes – are driven by the relentless hand of capital. It’s all about speed and scale; being able to manage bigger deployments, more users, larger sets of data, without losing performance and without requiring more resources – human or otherwise.

But let’s move away from that for a moment. It’s only when we start to innovate by what we in Britain like to call ‘mucking about’ that we can start to rethink exactly what technology is for. True, it might well be the case that technology isn’t really for anything, but isn’t that remarkable enough? Because once we recognise that the tools and technologies that define our everyday lives – both leisure and work – don’t have to be the way they are, or used the way they are, it’s then that we can begin to rediscover and reinvent what we might do.

It’s against a world where just about all technological advancement fits into a Daft Punk schema of ‘harder, better, faster, stronger’ that a movement towards play and creation has started to develop. Maker culture is perhaps the most obvious demonstration of this, where the emphasis is on the relationship between inventive technologies and the crushing reality of meatspace. 3D printing and robotics take their place alongside more traditional forms of craft, as a new kind of innovation – rooted in exploration, rather than acceleration – takes hold.

Go to a ‘FabLab’, one of the community workshops that have come to define modern maker culture and ‘digital fabrication’, and you’ll see this exploration in practice. Here you’ll be able to access a huge range of tools, from 3D printers and laser cutters to sewing machines. In a FabLab you can build whatever you want – you can learn from others and share ideas, maybe even materials and tools if you’re nice. Innovation here isn’t driven by changing business demands and market pressures – it’s all about creativity.

But it’s not just about maker culture – the DIY ethos is catching on in the mainstream. Just look at the huge popularity of the Raspberry Pi and other microboards. The very fact that the Raspberry Pi bridges the gap between adults and children says a lot about how we might characterise it – it is at once a ‘toy’ and a tool. We don’t have to decide which one it is – what’s great is that it’s both at the same time. This means that it’s something we can play with. We can use it to try out new ideas, experiment with different projects – it’s only once you go through that process of play and invention that you can begin to unlock true innovation. This isn’t an innovation that responds to the demands of the marketplace, but instead an innovation that is motivated by curiosity and the sheer joy of experimentation.
It’s a little corny, but a good way to think about the value of maker culture and creative hardware like the Raspberry Pi is by recalling John Keating’s words in Dead Poets Society. “We don’t read and write poetry because it’s cute… we read and write poetry because we’re members of the human race,” he says. What does this mean? Essentially, it means that we write poetry ‘just because’. We do it simply because we want to.

Playing with your Raspberry Pi is like writing poetry – you do it simply because you want to. And how often can we say that about technology? Yes, we’re curious and we want to learn new things, but this curiosity is nevertheless informed by our careers, business strategies – whatever lingers over you as you spend your day working. It’s only when you step back and start to ‘play’ that the tech world takes on a new complexion, one filled with possibility and new routes of innovation. So why not make time to do just that?

Don't forget to download and read our Year in Review, to revisit the last 12 months in the tech world and find out what's set to be important for 2016.

Manage your apps without losing your mind

Michael Herndon
10 Dec 2015
6 min read
Setting up a new computer, reinstalling Windows, updating software, or managing the apps on multiple computers is a hassle. Even after the creation of app stores, like the Windows Store, finding and managing the right apps for your Windows desktop is still a tedious chore. Work is piling up, and the last thing you need to worry about is hunting down and reinstalling your favorite apps after a recent computer crash. Ain't nobody got time for that.

Thankfully there is already a solution for your app management woes, called package managers. It's not the most intuitive name, but the software gets the job done. If you're wondering why you haven't heard of package managers, they are a poorly marketed tool. Even some software developers that I have met over the years are unaware of their existence. All of which begs the question: what is a package manager?

What is a package manager?

It's a system of applications that controls the installation, updates, and removal of packages on platforms such as Windows and OS X. Packages can bundle one or more applications, or parts of applications, and include the scripts to manage these items. You could think of it as an app store that includes the apps your app store forgot to add for your desktop.

The Linux world has had package managers like apt-get and yum for years. Macs have a package manager called Homebrew. For Windows, there is a tool called Chocolatey. The name is a play on words ("chocolatey NuGet") that acknowledges that the package manager is built on top of a Microsoft technology called NuGet. It can install applications like Evernote, OneDrive, OneNote, and Google Chrome from one place.

What are the benefits of package managers?

The benefits of package managers are repeatability, speed, and scale. Repeatability: a package can be installed on the same machine multiple times, or on multiple machines. Speed: a package manager makes it easy to find, download, and install software within a few keystrokes. Scale: a package manager can be used to apply the same software across multiple computers, from a home network to a whole company. It can also install multiple packages at once.

Let us assume that a company wants to install Google Chrome, Paint.NET, and custom company apps. The apps require specific installations of Java and .NET. A package manager can be used to install all of those items. The company will have to create its own packages for its custom apps, and those packages will need to specify dependencies on Java and .NET. When the package manager detects a required dependency, the dependency is installed before the app is. The package manager can apply a required list of programs to all the machines in an organization. The package manager is also capable of updating the applications. (A short, hedged sketch of what such a provisioning run might look like as a script appears below, just before the installation options are described.)

For the remainder of the article, I will focus on using Chocolatey as the package manager for Windows.

Get Chocolatey

To run Chocolatey, you need a program called PowerShell. PowerShell exists on Windows 7 and above, or Windows Server 2008 and above. Open PowerShell from the start menu. Go to chocolatey.org and copy the installation code, iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1')). Paste the code into the blue shell.

PS C:> iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))

Press enter. This will install Chocolatey on your machine.

Customize the Chocolatey install

Chocolatey has two configuration options for installation.
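Before covering those two options, here is the provisioning sketch promised earlier: a minimal, illustrative PowerShell script that bootstraps Chocolatey and then installs a list of applications in one pass. It is a hedged example, not a tested rollout — the package IDs (googlechrome, jre8, dotnet4.5) are assumptions about what is available on chocolatey.org, and your own custom company packages would simply be added to the same list once you have created them.

# Hedged sketch: provision a machine with Chocolatey and a standard app list.
# Assumes PowerShell is running as administrator and the package IDs below
# exist on chocolatey.org; swap in whatever your team actually needs.
iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))
# If choco is not found immediately after the bootstrap, open a new PowerShell window first.
$apps = 'googlechrome', 'jre8', 'dotnet4.5'
foreach ($app in $apps) {
    choco install $app -y
}

Running the same script on every machine is what delivers the repeatability and scale described above. With that picture in mind, back to the two configuration options.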
$env:ChocolateyInstall will instruct Chocolatey to install itself to the specified path. The default is c:\ProgramData\chocolatey. $env:ChocolateyBinRoot will instruct Chocolatey to install folder-based programs, like PHP or MySQL, at the given location. The default is c:\tools.

For example, let us assume the company wishes to customize Chocolatey's install. It wants Chocolatey to live in the folder c:\opt\chocolatey, and it wants the bin root folder to be c:\opt\chocolatey as well.

Type or copy $env:ChocolateyInstall = "c:\opt\chocolatey"; into PowerShell and hit enter.

PS C:> $env:ChocolateyInstall = "c:\opt\chocolatey";

Type or copy $env:ChocolateyBinRoot = "c:\opt\chocolatey"; into PowerShell and hit enter.

PS C:> $env:ChocolateyBinRoot = "c:\opt\chocolatey";

Copy the Chocolatey install code into PowerShell and then hit enter.

PS C:> iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))

Use Chocolatey

After Chocolatey's installation finishes, you can search for packages from the command line.

List all programs with GoogleChrome in the name
To list packages, you use the list command. Type choco list GoogleChrome into PowerShell and hit enter.

PS C:> choco list GoogleChrome

List all packages
Type choco list into PowerShell and hit enter.

PS C:> choco list

List all packages and write them to a file
Type choco list > c:\users\mhern\Desktop\packages.txt into PowerShell. Replace "mhern" with the name of your user folder on Windows. Hit enter.

PS C:> choco list > c:\users\mhern\Desktop\packages.txt

Install Chrome
To install a package like GoogleChrome, you use the install command. Type choco install GoogleChrome -y into PowerShell and hit enter. The flag -y means that you accept the license agreement for the software that you are installing.

PS C:> choco install GoogleChrome -y

Install multiple programs
Type choco install php mysql -y into PowerShell and hit enter. This will install PHP and MySQL onto your Windows machine. PHP and MySQL will be installed into c:\tools by default.

PS C:> choco install php mysql -y

Upgrade a program
To upgrade a program with Chocolatey, you use the upgrade command. In older versions of Chocolatey it was the update command. Type choco upgrade GoogleChrome -y into PowerShell and hit enter. Though you really do not need to upgrade GoogleChrome this way, as it will update itself.

PS C:> choco upgrade GoogleChrome -y

Upgrade all programs
Type choco upgrade all -y into PowerShell and hit enter.

PS C:> choco upgrade all -y

Upgrade Chocolatey
Type choco upgrade chocolatey into PowerShell and hit enter.

PS C:> choco upgrade chocolatey

Uninstall Chrome
To uninstall a package, use the uninstall command. Type choco uninstall GoogleChrome into PowerShell and hit enter.

PS C:> choco uninstall GoogleChrome

Further reading
Now you should be able to find and install applications with ease. If you want to investigate Chocolatey further, I suggest reading through the wiki and familiarizing yourself with PowerShell, including running PowerShell as administrator.

Disclosure
I supported Chocolatey's Kickstarter campaign.

About the author
Michael Herndon is the head of DevOps at Solovis, creator of badmishka.co, and all around mischievous nerdy guy.

Retro-Modding: Rebuilding the past with consumer micro-computers

Sarah C
08 Dec 2015
4 min read
Micro-controllers lend themselves to all kinds of projects. From home automation to robotics, the long-predicted Internet of Things is finally flashing and beeping its way into our hobby time. Among the most satisfying uses of these tiny boards is rebuilding the great computer milestones of the past. Projects that took teams of expert developers years to create not so very long ago can now be reproduced with a few weeks of tinkering. They’re fun, they’re nostalgic, they’re a great way to learn, and they make fantastic gifts.

The number of single-chip boards and micro-computers available is growing all the time, and they are becoming ever more affordable. (We here at Packt are excited for the Banana Pi, Galileo, and HummingBoard.) For now, there are three main options if you want to get to work on fixing the past for yourself.

Speed dating with the three big players:

Arduino is a single circuit board with a microcontroller. It’s tiny, it’s light, it’s great for custom firmware – use it to power wearables, LED displays, remote-control devices, or an army of tiny robot slaves.

Raspberry Pi is a micro-computer – unlike Arduino, it runs an operating system. Since its memory lives on an SD card, you can build multiple projects for one board and just swap its brains out whenever you want. Raspberry Pi’s support for audio and video makes it great for retro-gaming, media projects, and a wealth of other projects dreamed up by its active user base.

BeagleBone is another micro-computer like the Raspberry Pi. It’s got a powerful processor and all the connectors in the universe. (Seriously – you can connect anything to it. Twice. The Borg probably run on BeagleBone.) It’s a great fit for powering home automation, robotics projects, or your very own Rube Goldberg-esque media player.

Once you’ve chosen your hardware, there are thousands of options for repurposing all those bags of old electronics that you were keeping just in case 1996 rolled back around.

Rebuild an old toy

Remember when ComQuest was the cutting edge of the toy catalog? Or Speak and Say devices? They’ve long since been outstripped by better tech, but for those of us who grew up with them they still hold a certain power. You never forget that first sense of mystery – how does it work? How does it speak? Why can’t I teach it to call my sister names? Now you can. With an old case and an ARM board, you can hack your own childhood as much as you like.

Hand-held gaming

A common project with micro-computers is to refurbish or rebuild an old Game Boy. With a Raspberry Pi, an emulator, and an old case (or a 3D printer and a sense of adventure) you can make your own Game Boy. Higher resolution, better colours, clearer sound, and any peripherals you like are optional. (You could even go all out and build a Game Boy with a Sega emulator.)

Stylish USB keyboards

Nowadays the Commodore 64 is slightly less powerful than the remote control for your TV or the thermostat in your fridge. But with a little modification, it makes for a nostalgic and surprisingly comfortable USB keyboard.

Computer cases

Less about the coding, and more about crafting and perhaps a little soldering – but the robust plastics of old devices can make for great protective casing for the delicate and exposed circuitry of most ARM boards. One seller on Etsy is even 3D printing replicas of the Apple II to make Raspberry Pi cases.

Arcade games

Think big. A single Raspberry Pi can power your very own arcade machine with hundreds of games and no need for quarters.
Unless you want to incorporate a coin slot – then you can certainly do that too. How you build the case is up to you – sprayed metal, polished mahogany, or cardboard and poster paints are all Raspberry Pi compatible.

Beautiful old radios

For years before the age of disposable plastics, radios and televisions were designed to be part of your house’s furniture. Scouting around the right markets or websites, you can find some truly beautiful broken electronics. Repairing them with original parts is costly, when it’s even possible. But internet radios are some of the simplest projects around for ARM boards. There’s no reason why you can’t combine the aesthetic best of the mid-twentieth century with a state-of-the-art interior, for less than the cost of a hipster knock-off transistor radio.

4 Reasons Why OpenStack Tokyo proved that OpenStack has Come of Age

Richard Gall
07 Dec 2015
6 min read
If OpenStack had always appeared to be the trendy cloud solution, acclaimed and praised yet never quite taking hold of the software world’s popular imagination, the OpenStack Summit in Tokyo at the end of October marked its maturity. While it may have spent the past few years defining and setting the standard for what modern organizations could do with cloud from the fringes, as 2015 draws to a close there’s no doubting that it is now a core part of the mainstream. This can only be good for an organization that has its sights set on becoming the key player in the market, but it also means new responsibilities and changing expectations. Maturity often means dealing with the crushing reality of ‘real life’, but OpenStack proved in Tokyo that this doesn’t have to mean you become boring… Here are 4 reasons why the OpenStack Summit proved that OpenStack has now come of age.

OpenStack Certification

The announcement that the OpenStack Foundation, the non-profit organization that drives the OpenStack project, is going to set up certified training for cloud admins is a distinctive mark of maturity that is likely to cement OpenStack’s position within the market. Mark Collier, the foundation’s COO, explains: "As OpenStack matures and enters bigger and bigger markets… what people typically want to do is really start to take the software and put it to use… They just want to operate it - and so that's where we see the biggest impact in terms of skills.”

The certification signals that the organization is trying to address a difficult and challenging reality – that there is a talent gap of knowledgeable and experienced cloud admins. Of course, tackling this will be crucial to OpenStack both expanding and consolidating its use. Collier’s suggestion that this certification (which is due to be rolled out in 2016) is the first of many emphasises that OpenStack isn’t drawing away from its versatility as a cloud platform, but is instead harnessing and refining it, so users can have more confidence and greater purpose.

Project Navigator

Project Navigator neatly follows on from the OpenStack Foundation’s certification, as it is part of the same thematic trend – OpenStack’s movement towards giving users more control over their software. Essentially, it will help users identify their key needs and direct them towards the products and services that suit them best. Built on a wealth of user data about which types of projects are built on which software, Project Navigator delivers really useful information in one accessible interface/dashboard. Whoever uses it will be able to see what other people are doing, and can then base their own decisions on a wider consensus of what works with what.

Project Navigator demonstrates that OpenStack is acutely aware of the huge range of its users’ needs and, indeed, the potentially confusing scope of possibilities that OpenStack offers. Just as the certification provides a way of defining best practices and emphasising the core features of OpenStack from different users’ perspectives, Project Navigator similarly helps to define the different ways in which OpenStack can be used. But what’s most impressive about the project is that it’s managed to retain the open source values of openness and creativity. As Collier explained, because Project Navigator is driven by user data “we're not really making a judgment call. It's more just a reflection of where the market is”.
What we have, then, is a platform that speaks to the concerns of high-level technology strategists making key organizational decisions: it doesn’t dictate how something should be done, but instead simply outlines what people are doing now.

OpenStack Liberty

OpenStack Liberty was at the centre of October’s Summit. If the Summit represents a watershed moment in OpenStack’s lifespan, the 12th version of the cloud solution expertly demonstrates and underlines that the organization is listening to users and committed to tackling the key challenges that lie ahead. Adding role-based access to Neutron (OpenStack’s networking project, often described by critics as unnecessarily complex) and Heat will, as ZDNet put it, “provide fine controls over security settings at all levels of the network and API.” The issue of scalability, too, appears to be being tackled by Liberty, with the new version of cells set to become “the default way in which Nova is deployed”. Essentially, scaling will simply become a case of adding new cells to the single cell that makes up a Nova instance. (If you want a comprehensive look at what’s new in Liberty, Mirantis helpfully run through every single new feature, which you can read here.)

Liberty has been described as a move towards a ‘Big Tent’ model, whereby projects are brought together to become part of a more coherent whole. As Jonathan Bryce, the OpenStack Foundation’s Executive Director, explains, "With the Big Tent shift, it has allowed people within the OpenStack community to select different focus areas, so we're seeing a lot of innovation." Again, this move lets OpenStack emphasise the sheer range of possibilities on offer while still putting forward a singular vision. Indeed, perhaps Bryce is being a little disingenuous – yes, it’s about letting people focus on what they want, but it’s likely that over time innovation will be driven by people looking at how different projects intersect and work together in new ways.

Going International – OpenStack is growing outside of the U.S.

It’s significant that this October’s summit was held in Tokyo. Although the OpenStack user base is predominantly located in North America, with 44% of users based there, it’s worth noting that 28% of users are based in Asia, with Europe on 22%. Clearly, there is a lot of room for growth in these areas, and announcements such as the training certification and Project Navigator, positioned alongside improvements to the core OpenStack offering, have been designed to do exactly that.

Yahoo Japan provides a great case study of OpenStack being used on a large scale outside of the U.S. Yahoo claim to be running more than 50,000 virtual machines on OpenStack – as Mark Collier points out, this means that there is just “a team of six running 10 billion page views”. It’s true that a single success story shouldn’t necessarily be taken as evidence of some wider trend – but the fact that OpenStack is so interested in talking about it provides a clear indication that the foundation is looking for new stories to promote the project, which will help it reach out and engage new people.

For a comprehensive look at OpenStack, pick up the latest edition of OpenStack Cloud Computing Cookbook today. Packed with more than 110 recipes, it helps you properly get to grips with the platform so you can harness the opportunities it creates for more productive, collaborative and efficient working.