Tech Guides

Disruption for the Rest of Us

Erol Staveley
25 Jun 2015
8 min read
Whilst the ‘Age of Unicorns’ might sound like some terrible MS-DOS text adventure (oh how I miss you, Hugo), right now there is at least one new $1B startup created every month in the US. That’s all very well and good, but few people actually seem to stop and think about what this huge period of technical innovation means for everyday developers. You know, the guys and girls who actually build the damn stuff. Turns out, there are a lot of us.

As startups focus on disruption and big business focuses on how they can take the good parts to implement within their own monolithic organizational structures, both sides are paying through the nose for good talent. Supply is low, demand is high, and that’s a good place to be if you’re a skilled developer. But what does being a skilled developer really mean?

Defining Skill

Foregoing the cliché definition of ‘skill’ from some random dictionary, skill in a development sense is almost another way of saying flexibility. Many (but not all) of the ‘good’ developers I’ve met actually associate strongly with calling themselves engineers – we like to understand how things work, and we like to solve problems. Anything between a problem and a solution is a means to an end, and it’s not always just about using what we already know or are most comfortable with. At the best of times this can be wonderfully creative and rewarding, and yet it can be soul-crushingly irritating when you hit a brick wall.

Brick walls are why tutorials exist. It’s why StackOverflow exists. It’s why many find it hard to initially switch to functional programming, and why seamlessly moving from one framework to another is the underlying promise that most books use in their promotional copy. However you spin it, flexibility is an essential part of being a good developer (or engineer, if that’s what you prefer).

The problem is that this mental flexibility is actually an incredibly rare competency to have by default. The skill of learning in and of itself takes practice. Your ability to absorb new information entirely depends on your level of exposure to new thoughts and ideas. Paul Graham’s piece on Why You Weren’t Meant to Have a Boss articulates this better than I can (alongside some other key themes about personal development):

"I was talking recently to a founder who considered starting a startup right out of college, but went to work for Google instead because he thought he'd learn more there. He didn't learn as much as he expected. Programmers learn by doing, and most of the things he wanted to do, he couldn't—sometimes because the company wouldn't let him, but often because the company's code wouldn't let him. Between the drag of legacy code, the overhead of doing development in such a large organization, and the restrictions imposed by interfaces owned by other groups, he could only try a fraction of the things he would have liked to. He said he has learned much more in his own startup, despite the fact that he has to do all the company's errands as well as programming, because at least when he's programming he can do whatever he wants."

This isn’t to say that working in a big organization entirely limits your openness to new ideas; it just makes it harder to express them 9-to-5 in the context of your on-paper role. There are always exceptions to the rule though - the BBC is a great example of a large organization that embraces new technologies and frameworks at a pace that would put many startups to shame.
Staying Updated

It’s hard to keep up with every framework-of-the-month, but in trying to do so you’re making a commitment to stay at the top of your game. Recruiters exploit this to full effect – they’ll frequently take a list of up-and-coming technologies used by an employer and scour LinkedIn and GitHub to identify leads. But we don’t just use new frameworks and languages for the sake of it. Adding an arbitrary marker on LinkedIn doesn’t prove that I deeply understand the benefits or downsides of a particular technology, or when it might be best not to use it at all. That understanding comes from experimentation, from doing.

So why experiment with something new in the first place? It’s likely that there will be something to it – it might be technically impressive, or help us get from point A to point B faster and more efficiently. It helps us achieve something. It’s not enough to just passively be aware of what’s hot and then skim the documentation. It’s very hard to stay motivated and generate real personal value doing that. To keep up with the rate of technical innovation you need a real interest in your field, and a passion for solving complex problems. After all, software development really is about creative problem solving. That individual drive and creativity is what employers want to see above all else, hands down.

Funnily enough, we also want to see this sort of thinking from our employers. Turns out we care about how much we’re paid - after all, Apple won’t just give us free iPhones (not yet anyway, Taylor). It just so happens that because supply is low, we can also afford to put making a difference as a priority. Even if we assume ‘making a difference’ is an aspiration you’d align with whilst taking a survey, the relatively minimal gap from salary is a significant indicator of how picky we can afford to be.

Startups want to change the world and disrupt how established businesses work, whilst having a strong cross-functional alignment towards a legitimate, emotionally coherent vision. That fundamental passion aligns very well with developers who want to ‘make a difference’, and who also have a strong level of individual drive and creativity. Larger businesses that don’t predominantly operate in the technology sector will have a much harder time cultivating that image.

This way of thinking is just another part of the harsh disconnect between startup culture and the rest of society. If you’re not working in technology, it’s hard to understand the private buses, the beanbag chairs, the unlimited holiday policies – all things intended to set startups apart and attract talent that’s in high demand. All those perks exist specifically to attract talented engineers. If JavaScript, Python, or let’s even say C++ were common everyday ‘second languages’, things would be very different.

Change is Coming

It’s not hard to identify this deficit in technical skills. You can see it starting to be addressed in government schemes like Code for America. In the UK, England is about to make programming a required part of the curriculum for ages 5-16 (with services delivered by Codecademy). In a decade the number of people in the job market with strong programming skills will have grown exponentially, specifically because we all collectively recognize the shortage of good engineering talent.
As the pool of readily available developer talent increases, recruitment will be less about the on-paper qualification or just having a computer science background - it’ll be about what you’ve built, what excites you, or what your open source contributions look like. You can already see these questions emerging as the staple of many technical interviews. Personal growth and learning will be expected in order to stay current, not just treated as a nice-to-have on top of your Java repertoire. And we won’t be able to be as picky, because there will be more of us around :).

Skilling Up

If that sounded a little like scaremongering, then good. We’re in a job market bubble right now, but the pop will be slow and gradual, not immediate (so maybe we’re in a deflating balloon?). Like any market where demand is high and supply is low, there will eventually be a period where things normalize. The educational infrastructure to support this is being built rapidly, and the increasing availability of great learning content (both free and premium) is only going one way. Development is more accessible than ever before, and you can pretty much learn the basics of most languages now without spending a penny.

When we’re talking about being a skilled developer in a professional market, it’s not going to be about what technologies you’re comfortable with or what books you’ve read. It’s going to be about what you’ve built using those technologies and resources. There will always be a market for creative problem solvers; the trick is becoming one of them.

So what’s the key to keeping on top of the job market? Dust off that Raspberry Pi you’ve had in your desk drawer, get back into that side project you’ve let atrophy on GitHub - just get out there and build things. Learn by doing, and flex those creative, problem-solving neurons. And if you happen to need a hand? We’ll probably have a Packt book on it. Shameless plug, right?

During June we surveyed over 20,000 IT professionals to find out what technologies they are currently using and plan to learn in the next 12 months. Find out more in our Skill Up industry reports.

Hands on with Kubernetes

Ryan Richard
22 Jun 2015
6 min read
In February I wrote a high level overview of the primary Kubernetes features. In this blog post, we’ll actively use all of these features to deploy a simple 2-tier application inside of a Kubernetes cluster. I highly recommend reading the intro blog before getting started.

Setup

The easiest way to deploy a cluster is to use the Google Container Engine, which is available on your Google Compute Engine account. If you don’t have an account, you may use one of the available Getting Started guides in the official Github repository. One of the great things about Kubernetes is that it will function almost identically regardless of where it’s deployed, with the exception of some cloud provider integrations.

I’ve created a small test cluster on GCE, which resulted in three instances being created. I’ve also added my public SSH key to the master node so that I may log in via SSH and use the kubectl command locally. kubectl is the CLI for Kubernetes, and you can also install it locally on your workstation if you prefer.

My demo application is a small python based app that leverages redis as a backend. The source is available here. It expects Docker style environment variables to point to the redis server and will purposely throw a 5XX status code if there are issues reaching the database.

Walkthrough

First we’re going to change the Kubernetes configuration to allow privileged containers. This is only being done for demo purposes and shouldn’t be used in a production environment if you can avoid it. This is for the logging container we’ll be deploying with the application.

SSH into the master instance and run the following commands to update the salt configuration:

    sudo sed -i 's/false/true/' /srv/pillar/privilege.sls
    sudo salt '*' saltutil.refresh_pillar
    sudo salt-minion

Reboot your non-master nodes to force the salt changes. Once the nodes are back online, create a redis-master.yaml file on the master with the following content:

    id: redis-master
    kind: Pod
    apiVersion: v1beta1
    labels:
      name: redis-master
    desiredState:
      manifest:
        version: v1beta1
        id: redis-master
        containers:
          - name: redis-master
            image: dockerfile/redis
            ports:
              - containerPort: 6379

I’m using a Pod as opposed to a replicationController since this is a stateful service and it would not be appropriate to run multiple redis nodes in this scenario. Once ready, instruct Kubernetes to deploy the container:

    kubectl create -f redis-master.yaml
    kubectl get pods

Create a redis-service.yaml with the following:

    kind: Service
    apiVersion: v1beta1
    id: redis
    port: 6379
    selector:
      name: redis-master
    containerPort: 6379

    kubectl create -f redis-service.yaml
    kubectl get services

Notice that I’m hard coding the service port to match the standard redis port of 6379. Making these match isn’t required so long as the containerPort is correct. Under the hood, creating a service causes a new iptables entry to be created on each node. The entries will automatically redirect traffic to a port locally where kube-proxy is listening. Kube-proxy is in turn aware of where my redis-master container is running and will proxy connections for me. To prove this works, I’ll connect to redis via my local address (127.0.0.1:60863), which does not have redis running, and I’ll get a proper connection to my database, which is on another machine.

Seeing as that works, let’s get back to the point at hand and deploy our application.
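Before writing that definition, a quick aside on the client side of this picture. The sketch below is not the actual demo app source; it is a minimal Python illustration of how a frontend container might locate the redis backend and surface an error when the database is unreachable, assuming the third-party redis-py client and Kubernetes/Docker-style environment variable names such as REDIS_SERVICE_HOST and REDIS_SERVICE_PORT (both assumptions made purely for illustration).

    # Minimal sketch (not the actual demo app source): locate a redis backend via
    # Docker/Kubernetes-style environment variables and surface an error when it is down.
    import os

    import redis  # assumes the redis-py client is installed (pip install redis)

    # Service-style variables injected into the container; localhost is a local-testing fallback.
    redis_host = os.environ.get("REDIS_SERVICE_HOST", "127.0.0.1")
    redis_port = int(os.environ.get("REDIS_SERVICE_PORT", "6379"))

    client = redis.StrictRedis(host=redis_host, port=redis_port)

    def handle_request():
        """Increment and return a hit counter, mimicking a simple 2-tier frontend."""
        try:
            hits = client.incr("hits")
            return 200, "Hello! I have been seen %d times." % hits
        except redis.exceptions.ConnectionError:
            # Mirror the demo app's described behaviour: a 5XX when redis is unreachable.
            return 503, "Cannot reach the redis backend."

    if __name__ == "__main__":
        status, body = handle_request()
        print(status, body)

Because the service hard codes port 6379 and kube-proxy handles the redirection, a client written along these lines never needs to know which node actually runs the redis-master pod. With that in mind, let's define the application itself.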
Write a demoapp.yaml file with the following content:

    id: frontend-controller
    apiVersion: v1beta1
    kind: ReplicationController
    labels:
      name: frontend-controller
    desiredState:
      replicas: 2
      replicaSelector:
        name: demoapp
      podTemplate:
        labels:
          name: demoapp
        desiredState:
          manifest:
            id: demoapp
            version: v1beta3
            containers:
              - name: frontend
                image: doublerr/redis-demo
                ports:
                  - containerPort: 8888
                    hostPort: 80
              - name: logentries
                privileged: true
                command:
                  - "--no-stats"
                  - "-l"
                  - "<log token>"
                  - "-j"
                  - "-t"
                  - "<account token>"
                  - "-a app=demoapp"
                image: logentries/docker-logentries
                volumeMounts:
                  - mountPath: /var/run/docker.sock
                    name: dockersock
                    readOnly: true
            volumes:
              - name: dockersock
                source:
                  hostDir:
                    path: /var/run/docker.sock

In the above description, I’m grouping 2 containers based on my redis-demo image and the logentries image respectively. I wanted to show the idea of sidecar containers, which are containers deployed alongside the primary container and whose job is to support the primary container. In the above case, the sidecar forwards logs to my logentries.com account, tagged with the name of my app. If you’re following along, you can sign up for a free logentries account to test this out. You’ll need to create a new log and retrieve the log token and account token first. You can then replace the <log token> and <account token> in the yaml file with your values.

Deploy the application:

    kubectl create -f demoapp.yaml
    kubectl get pods

If your cloud provider is blocking port 80 traffic, make sure to allow it directly to your nodes, and you should be able to see the app running in a browser once the pod status is “Running”.

Co-locating Containers

Co-locating containers is a powerful concept worth spending some time talking about. Since Kubernetes guarantees co-located containers are run together, my primary container doesn’t need to be aware of anything beyond running the application. In this case logging is dealt with separately. If I want to switch logging services, I just need to redeploy the app with a new sidecar container that is able to send the logs elsewhere. Imagine doing this for monitoring, application content updates, and so on. You can really see the power of co-locating containers.

On a side note, the logentries image isn’t perfectly suited for this methodology. It’s designed such that you should run one of these containers per Docker host, and it will forward all container logs upstream. It also requires access to the Docker socket on the host. A better design for the Kubernetes paradigm would be a container that only collects STDOUT and STDERR for the container it’s attached to. The logentries image works for this proof of concept though, and I can see errors in my account.

In closing, Kubernetes is fun to deploy applications into, especially if you start thinking about how best to leverage grouped containers. Most stateless applications will want to leverage the ReplicationController instead of a single pod, and services help tie everything together. For more Docker tutorials, insight and analysis, visit our dedicated Docker page.

About the Author

Ryan Richard is a systems architect at Rackspace with a background in automation and OpenStack. His primary role revolves around research and development of new technologies. He added the initial support for the Rackspace Cloud into the Kubernetes codebase. He can be reached at @rackninja on Twitter.

8 NoSQL Databases Compared

Janu Verma
17 Jun 2015
5 min read
NoSQL, or non-relational, databases are increasingly used in big data and real-time web applications. These databases are non-relational in nature and they provide a mechanism for the storage and retrieval of information that is not tabular. There are many advantages to using a NoSQL database:

- Horizontal scalability
- Automatic replication (using multiple nodes)
- Loosely defined or no schema (a huge advantage, if you ask me!)
- Sharding and distribution

Recently we were discussing the possibility of changing our data storage from HDF5 files to some NoSQL system. HDF5 files are great for storage and retrieval purposes. But now, with huge data coming in, we need to scale up, and the hierarchical schema of HDF5 files is not very well suited for all sorts of data we are using. I am a bioinformatician working on data science applications to genomic data. We have genomic annotation files (GFF format), genotype sequences (FASTA format), phenotype data (tables), and a lot of other data formats. We want to be able to store data in a space and memory efficient way, and the framework should also facilitate fast retrieval.

I did some research on the NoSQL options and prepared this cheat-sheet. It will be very useful for someone thinking about moving their storage to non-relational databases. Also, data scientists need to be very comfortable with the basic ideas of NoSQL DB's. In the course Introduction to Data Science by Prof. Bill Howe (University of Washington) on Coursera, NoSQL DB's formed a significant part of the lectures. I highly recommend the lectures on these topics and this course in general. This cheat-sheet should also assist aspiring data scientists in their interviews.

Some options for NoSQL databases:

Membase: This is a key-value type database. It is very efficient if you only need to quickly retrieve a value according to a key. It has all of the advantages of memcached when it comes to the low cost of implementation. There is not much emphasis on scalability, but lookups are very fast. It has a JSON format with no predefined schema. The weakness of using it for important data is that it's a pure key-value store, and thus is not queryable on properties.

MongoDB: If you need to associate a more complex structure, such as a document, to a key, then MongoDB is a good option. With a single query, you are going to retrieve the whole document, and that can be a huge win. However, using these documents like simple key/value stores would not be as fast and as space-efficient as Membase. Documents are the basic unit. Documents are in JSON format with no predefined schema. This makes integration of data easier and faster.

Berkeley DB: It stores records in key-value pairs. Both key and value can be arbitrary byte strings, and can be of variable lengths. You can put native programming language data structures into the database without converting to a foreign record first. Storage and retrieval are very simple, but the application needs to know what the structure of a key and a value is in advance; it can't ask the DB. It offers simple data access services, no limit to the data types that can be stored, and no special support for binary large objects (unlike some others).

Berkeley DB vs. MongoDB: Berkeley DB has no partitioning while MongoDB supports sharding. MongoDB has some predefined data types like float, string, integer, double, boolean, date, and so on. Berkeley DB is a key-value store and MongoDB stores documents. Both are schema free.
Berkeley DB has no support for Python, for example, although there are many third-party libraries.

Redis: If you need more structures like lists, sets, ordered sets and hashes, then Redis is the best bet. It's very fast and provides useful data structures. It just works, but don't expect it to handle every use-case. Nevertheless, it is certainly possible to use Redis as your primary data store. It is less suited to distributed scalability, though; it optimizes high performance lookups at the cost of no longer supporting relational queries.

Cassandra: Each key has values as columns, and columns are grouped together into sets called column families. Thus each key identifies a row of a variable number of elements. A column family contains rows and columns. Each row is uniquely identified by a key, and each row has multiple columns. Think of a column family as a table, with each key-value pair being a row. Unlike an RDBMS, different rows in a column family don't have to share the same set of columns, and a column may be added to one or multiple rows at any time. It is a hybrid between a key-value and a column-oriented database, and has a partially defined schema. It can handle large amounts of data across many servers (clusters), and is fault-tolerant and robust. Example: it was originally written by Facebook for Inbox search, where it was later replaced by HBase.

HBase: It is modeled after Google's Bigtable DB. The ideal use for HBase is in situations when you need improved flexibility, great performance and scaling, and have Big Data. The data structure is similar to Cassandra, where you have column families. It is built on Hadoop (HDFS), and can do MapReduce without any external support. It is very efficient for storing sparse data, and big data (2 billion rows) is easy to deal with. Example: a scalable email/messaging system with search.

HBase vs. Cassandra: HBase is more suitable for data warehousing and large scale data processing and analysis (such as indexing the web as in a search engine), and Cassandra is more apt for real time transaction processing and the serving of interactive data. Cassandra is more write-centric and HBase is more read-centric. Cassandra has multi-data center support, which can be very useful.

Resources

- NoSQL explained
- Why NoSQL
- Big Table

About the Author

Janu Verma is a Quantitative Researcher at the Buckler Lab, Cornell University, where he works on problems in bioinformatics and genomics. His background is in mathematics and machine learning and he leverages tools from these areas to answer questions in biology.

K-Means Clustering

Janu Verma
15 Jun 2015
6 min read
Clustering is one of the most important data mining and machine learning techniques. Clustering is a procedure for discovering groups of closely related elements in a dataset. Many times we want to cluster the data into categories, such as grouping similar users, modeling user behavior, identifying species of Irises, categorizing news items, classifying textual documents, and more. One of the most common clustering methods is K-Means, which is a simple iterative method to partition the data into K clusters.

Algorithm

Before we apply K-means to cluster data, it is required to express the data as vectors. In most cases, the data is given as a matrix of type [nSamples, nAttributes], which can be thought of as nSamples vectors each with a dimension of nAttributes. There are certain cases where some work has to be done to render the data into linear algebraic language, for example:

A corpus of textual documents - We compute the term frequency of a text document in the corpus as a vector of dimension = (vocabulary of the corpus), where the coefficient of each dimension is the frequency in the document of the word corresponding to the dimension:

    document1 = freq(word_1), freq(word_2), ....., freq(word_n)

There are other choices for creating vectors from text documents, such as TFIDF vectors, binary vectors and more.

If I'm trying to cluster my Twitter friends, I can represent each friend as a vector:

    number of followers, number of friends, number of tweets, count of favorite tweets

After we have vectors representing data points, we will cluster these data vectors into K clusters using the following algorithm:

1. Initialize the procedure by randomly selecting K vectors as cluster centroids.
2. For each vector, compute its Euclidean distance to each of the centroids and assign the vector to its closest centroid.
3. When all of the objects have been assigned, recalculate the centroids as the mean (average) of all members of the cluster.
4. Repeat the previous two steps until convergence – when the clusters no longer change.

You can also choose other distance measures such as cosine similarity, Pearson correlation, Manhattan distance, and so on.

Example

We'll do a cluster analysis of the wine dataset. This data contains 13 chemical measurements on 178 Italian wine samples. The data is taken from the UCI Machine Learning Repository. I'll use R to do this analysis, but it can very easily be done in other programming languages such as Python. We'll use the R package rattle, which is a GUI for data mining in R. We use rattle only to access data from the UCI Machine Learning Repository.

    # install rattle
    install.packages("rattle")
    library(rattle)
    # load data
    data(wine)
    # what does the data look like
    head(wine)

      Type Alcohol Malic  Ash Alcalinity Magnesium Phenols Flavanoids Nonflavanoids Proanthocyanins Color  Hue Dilution Proline
    1    1   14.23  1.71 2.43       15.6       127    2.80       3.06          0.28            2.29  5.64 1.04     3.92    1065
    2    1   13.20  1.78 2.14       11.2       100    2.65       2.76          0.26            1.28  4.38 1.05     3.40    1050
    3    1   13.16  2.36 2.67       18.6       101    2.80       3.24          0.30            2.81  5.68 1.03     3.17    1185
    4    1   14.37  1.95 2.50       16.8       113    3.85       3.49          0.24            2.18  7.80 0.86     3.45    1480
    5    1   13.24  2.59 2.87       21.0       118    2.80       2.69          0.39            1.82  4.32 1.04     2.93     735
    6    1   14.20  1.76 2.45       15.2       112    3.27       3.39          0.34            1.97  6.75 1.05     2.85    1450

The first column contains the types of the wine. We will use K-Means as a learning model to predict the types:

    # remove the first column
    input <- scale(wine[-1])

A drawback of K-means clustering is that we have to pre-decide on the number of clusters.
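Before continuing with the R walkthrough, it may help to see the algorithm steps written out directly in code. The following is a minimal NumPy sketch of the assign/update loop described above (Python rather than the R used in this article, with random data standing in for a real scaled feature matrix); a real analysis would normally rely on a tuned implementation such as R's kmeans() or scikit-learn's KMeans.

    # Minimal, illustrative K-means loop in NumPy following the steps described above.
    import numpy as np

    def kmeans(X, k, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        # 1. Initialize by picking k random samples as the starting centroids.
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # 2. Assign each vector to its closest centroid (Euclidean distance).
            distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = distances.argmin(axis=1)
            # 3. Recalculate each centroid as the mean of its assigned members.
            #    (Empty clusters are not handled; this is a teaching sketch.)
            new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
            # 4. Repeat until convergence, i.e. the centroids no longer change.
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return labels, centroids

    # Toy usage with random data as a stand-in for a scaled [nSamples, nAttributes] matrix.
    X = np.random.default_rng(1).normal(size=(150, 4))
    labels, centroids = kmeans(X, k=3)
    print(np.bincount(labels))  # samples per cluster

With that picture in mind, the rest of this article sticks with R.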
First, we'll define a function to compute the optimal number of clusters by looking at the within-cluster sum of squares for different numbers of clusters:

    wssplot <- function(data, nc=15, seed=1234){
      wss <- (nrow(data)-1)*sum(apply(data,2,var))
      for (i in 2:nc){
        set.seed(seed)
        wss[i] <- sum(kmeans(data, centers=i)$withinss)}
      plot(1:nc, wss, type="b", xlab="Number of Clusters",
           ylab="Within groups sum of squares")}

Now we'll compute the optimal number of clusters using the wssplot function we defined above:

    pdf("Number of Clusters.pdf")
    wssplot(input)
    dev.off()

The plot shows the within-groups sum of squares vs. the number of clusters extracted. The sharp decrease from 1 to 3 clusters (with little decrease after) suggests a 3-cluster solution. This shows that the optimal number of clusters is 3. Now we will cluster the data into 3 clusters using the kmeans() function in R:

    set.seed(1234)
    # Clusters
    fit <- kmeans(input, 3, nstart=25)

Let's plot the clusters:

    # Plot the Clusters
    require(graphics)
    plot(input, col=fit$cluster)
    points(fit$centers, col=1:3, pch = 8, cex = 2)

We can also visualize the clustered data more transparently using the R package ggplot2:

    # ggplot visual
    df <- data.frame(input)
    df$cluster <- factor(fit$cluster)
    centers <- as.data.frame(fit$centers)
    require(ggplot2)
    ggplot(data=df, aes(x=Alcohol, y=Malic, color=cluster)) + geom_point()

The sizes of the 3 clusters can be computed as:

    # Size of the Clusters
    size <- fit$size
    size
    >>> [1] 62 65 51

Thus we have three clusters of wines of size 62, 65 and 51. The means of the columns (chemicals) for each of the clusters can be computed using the aggregate function:

    # Means of the columns for the Clusters
    mean_columns <- aggregate(input, by=list(fit$cluster), FUN=mean)
    mean_columns

Let's now measure how good this clustering is. We can use K-Means as a predictive model to assign new data points to one of the 3 clusters. First, we should check how well this assignment works for the training set. A metric for this evaluation is called cross tabulation: a table comparing the type assigned by clustering to the original values.

    # Measuring How Good is the Clustering
    # Cross Tabulation: a table comparing type assigned by clustering and original values
    cross <- table(wine$Type, fit$cluster)
    cross
    >>>
         1  2  3
      1 59  0  0
      2  3 65  3
      3  0  0 48

This shows that the clustering gives a pretty good prediction. Let's fit cluster centers to each observation:

    fit_centers <- fitted(fit)
    # Residue
    residue <- input - fitted(fit)

Finally, assign to each observation its corresponding cluster:

    mydata <- data.frame(input, fit$cluster)
    write.table(mydata, file="clustered_observations.csv", sep=",", row.names=F, col.names=T, quote=F)

The full code is available here.

Further Reading

- K-Means in python
- Clustering Twitter friends
- Vectors from text data

Want more Machine Learning tutorials and content? Visit our dedicated Machine Learning page here.

About the Author

Janu Verma is a Quantitative Researcher at the Buckler Lab, Cornell University, where he works on problems in bioinformatics and genomics. His background is in mathematics and machine learning and he leverages tools from these areas to answer questions in biology.

An Introduction to Service Workers

Sebastian Müller
05 Jun 2015
7 min read
The shiny new Service Workers provide powerful features such as a scriptable cache for offline support, background syncing, and push notifications in your browser. Just like Shared Workers, a Service Worker runs in the background, but it can even run when your website is not actually open. These new features can make your web apps feel more like native mobile apps.

Current Browser Support

As of writing this article, Service Workers are enabled in Chrome 41 by default. But this does not mean that all features described in the W3 Service Workers Draft are fully implemented yet. The implementation is in the very early stages and things may change. In this article, we will cover the basic caching features that are currently available in Chrome. If you want to use Service Workers in Firefox 36, it is currently a flagged feature that you must enable manually. In order to do this, type “about:config” into your URL field and search for “service worker” in order to set the “dom.serviceWorkers.enabled” setting to true. After a restart of Firefox, the new API is available for use.

Let’s get started - Registering a Service Worker

index.html:

    <!DOCTYPE html>
    <html>
    <head>
      <meta charset="utf-8" />
      <title>ServiceWorker Demo</title>
    </head>
    <body>
      <script>
        if ('serviceWorker' in navigator) {
          navigator.serviceWorker.register('my-service-worker.js')
            .then(function(registration) {
              console.log('yay! serviceWorker installed!', registration);
            }, function(error) {
              console.log('something went wrong!:', error);
            });
        }
      </script>
    </body>
    </html>

To register a Service Worker, we call navigator.serviceWorker.register() with a path to our Service Worker file. Due to security reasons, it is important that your service worker file is located at the top level relative to your website. Paths like ‘scripts/my-service-worker.js’ won’t work. The register method returns a Promise, which is fulfilled when the installation process is successful. The promise can be rejected if you, e.g., have a syntax error in your Service Worker file. Cool, so let’s review what a basic Service Worker that lives in the ‘my-service-worker.js’ file might look like.

A basic Service Worker

my-service-worker.js:

    this.addEventListener('install', function(event) {
      console.log('install event!');
      // some logic here...
    });

In our Service Worker file, we can register event listeners for several events that are triggered by the browser. In our case, we listen for the ‘install’ event, which is triggered when the browser sees the Service Worker the first time. Later on, we will add some code to the ‘install’ event listener to make our web app offline ready. For now, we add a simple ‘console.log’ message to be able to check that our event listener function was called.

Now when you open the index.html file in your browser (important: you need to be running a webserver to serve these two files), you should see a success log message in the Chrome developer tools console. You might wonder why the ‘install event!’ log message from the service worker file is not showing up in the console. This is due to the fact that all Service Workers are running in a separate thread. Next, we will cover how to debug Service Workers.

In Chrome, you can open the URL “chrome://serviceworker-internals” to get a list of Service Workers registered in your browser. When you visit this page right after you’ve visited the index.html, you should see a worker with the installation status ‘ACTIVATED’ and running status ‘RUNNING’. Then you will know that everything went fine.
After a while, the worker should be in the running status ‘STOPPED’. This is due to the fact that Chrome completely handles the lifetime of a Service Worker. You have no guarantee how long your service worker runs after the installation, for example.

After digging into the basics of installing Service Workers, we clearly have no advantages yet. So let’s take a look at the offline caching features of Service Workers next.

Make your Web apps Offline Ready

Let’s face it: the Cache Manifest standard to make your apps offline ready has some big disadvantages. It’s not possible to script the caching mechanism in any way. You have to let the browser handle the caching logic. With Service Workers, the browser gives you the moving parts and lets you handle the caching stuff the way you want. So let’s dive into the basic caching mechanisms that Service Workers provide.

Let’s get back to our index.html file and add an image named ‘my-image.png’ to the body that we want to have available when we are offline:

    <body>
      <img src="my-image.png" alt="my image" />
      …
    </body>

Now that we have an image in our index.html, let’s extend our existing service worker to cache our image ‘my-image.png’ and our index.html for offline usage:

    // (1.)
    importScripts('./cache-polyfill.js');

    // this event listener gets triggered when the browser sees the ServiceWorker the first time
    this.addEventListener('install', function(event) {
      console.log('install!');
      // (2.)
      event.waitUntil(
        caches.open('my-cache')
          .then(function(cache) {
            console.log('cache opened');
            // (3.)
            return cache.addAll([
              '/',
              '/my-image.png'
            ]);
          })
      );
    });

    this.addEventListener('fetch', function(event) {
      // (4.)
      event.respondWith(
        caches.match(event.request).then(function(response) {
          if (response) {
            // (5.)
            console.log('found response in cache for:', event.request.url);
            return response;
          }
          // (6.)
          return fetch(event.request);
        })
      );
    });

1. We use a global function available in the Service Worker context called ‘importScripts’ that lets us load external scripts for using libraries and other stuff in our Service Worker’s logic. As of writing this article, not all caching APIs are implemented in the current version of Chrome. This is why we are loading a cache polyfill that adds the missing API that is needed for our application to work in the browser.
2. In our install event listener, we use the waitUntil method from the provided event object to tell the browser with a promise when the installation process in our Service Worker is finished. The provided promise is the return value of the caches.open() method that opens the cache with the name ‘my-cache’.
3. When the cache has been opened successfully, we add the index.html and our image to the cache. The browser pre-fetches all defined files and adds them to the cache. Only when all requests have been successfully executed is the installation step of the service worker finished and the Service Worker can be activated by the browser.
4. The event listener for the event type ‘fetch’ is called when the browser wants to fetch a resource, e.g., an image. In this listener, you have full control of what you want to send as a response with the event.respondWith() method.
5. In our case, we open up the cache used in the install event to see if we have a cached response for the given request. When we find a matching response for the given request, we return the cached response. With this mechanism, you are able to serve all cached files, even if you are offline or the webserver is down.
6. If we have no cached response for the given request, the browser will handle the fetching of the uncached file with the shiny new fetch API.

To see if the cache is working as intended, open the index.html file in your browser and shut down your web server afterward. When you refresh your page, you should get the index.html and the my-image.png file out of the Service Worker cache. With these few lines of Service Worker code, you have implemented a basic offline-ready web application that caches your index.html and your image file. As already mentioned, this is only the beginning of Service Workers, and many more features like push notifications will be added this year.

About the Author

Sebastian Müller is Senior Software Engineer at adesso AG in Dortmund, Germany. He spends his time building Single Page Applications and is interested in JavaScript Architectures. He can be reached at @Sebamueller on Twitter and as SebastianM on Github.

A Brief History of Minecraft Modding

Aaron Mills
03 Jun 2015
7 min read
Minecraft modding has been around since nearly the beginning. During that time it has gone through several transformations or “eras." The early days and early mods looked very different from today. I first became involved in the community during mid-Beta, so everything that happened before then is second hand knowledge. A great deal has been lost to the sands of time, but the important stops along the way are remembered, as we shall explore.

Minecraft has gone through several development stages over the years. Interestingly, these stages also correspond to the various “eras” of Minecraft modding. Minecraft Survival was first experienced as Survival Test during Classic, then again in the Indev stage, which gave way to Infdev, then to Alpha and Beta before finally reaching Release.

But before all that was Classic. Classic was released in May of 2009 and development continued into September of that year. Classic saw the introduction of Survival and Multiplayer. During this period of Minecraft’s history, modding was in its infancy. On the one hand, server modding thrived during this stage with several different server mods available. (These mods were the predecessors to Bukkit, which we will cover later.) Generally, the purpose of these mods was to give server admins more tools for maintaining their servers. On the other hand, however, client side mods, ones that add new content, didn’t really start appearing until the Alpha stage.

Alpha was released in late June of 2010, and it would continue for the rest of the year. Prior to Alpha came Indev and Infdev, but there isn’t much evidence of any mods during that time period, possibly because of the lack of Multiplayer in Indev and Infdev. Alpha brought the return of Multiplayer, and during this time Minecraft began to see its first simple client mods. Initially it was just simple modification of existing content: adding support for higher resolution textures, new arrow types, bug fixes, compass modifications, etc. The mods were simple and small.

This began to change, though, beginning with the creation of the Minecraft Coder Pack, which was later renamed the Mod Coder Pack, commonly known as MCP. (One of the primary creators of MCP, Michael “Searge” Stoyke, now actually works for Mojang.) MCP saw its first release for Alpha 1.1.2_01 sometime in mid 2010. Despite being easily decompiled, Minecraft code was also obfuscated. Obfuscation is when you take all the meaningful names and words in the code and replace them with non-human readable nonsense. The computer can still make sense of it just fine, but humans have a hard time. MCP resolved this limitation by applying meaningful names to the code, making modding significantly easier than ever before.

At the same time, but developing completely independently, was the server mod hMod, which gave some simple but absolutely necessary tools to server admins. However, hMod was in trouble as the main dev was MIA. This situation eventually led to the creation of Bukkit, a server mod designed from the ground up to support “plugins” and do everything that hMod couldn’t do. Bukkit was created by a group of people who were also eventually hired by Mojang: Nathan 'Dinnerbone' Adams, Erik 'Grum' Broes, Warren 'EvilSeph' Loo, and Nathan 'Tahg' Gilbert. Bukkit went on to become possibly the most popular Minecraft mod ever created. Many in fact believe its existence is largely responsible for the popularity of online Minecraft servers. However, it would remain largely incompatible with client side mods for some time.
Not to be left behind, the client saw another major development late in the year: Risugami’s ModLoader. ModLoader was transformational. Prior to the existence of ModLoader, if you wanted to use two mods, you would have to manually merge the code, line by line, yourself. There were many common tasks that couldn’t be done without editing Minecraft’s base code, things such as adding new blocks and items. ModLoader changed that by creating a framework where simple mods could hook into ModLoader code to perform common tasks that previously required base edits. It was simple, and it would never really expand beyond its original scope. Still, it led modding into a new era.

Minecraft Beta, what many call the “Golden Age” of modding, was released just before Christmas in 2010 and would continue through 2011. Beta saw the rise of many familiar mods that are still recognized today, including my own mod, Railcraft. Also IndustrialCraft, Buildcraft, Redpower, and Better than Wolves all saw their start during this period. These were major mods that added many new blocks and features to Minecraft. Additionally, the massive Aether mod, which recently received a modern reboot, was also released during Beta. These mods and more redefined the meaning of “Minecraft mods”. They existed on a completely new scale, sometimes completely changing the game.

But there were still flaws. Mods were still painful to create and painful to use. You couldn’t use IndustrialCraft and Buildcraft at the same time; they just edited too many of the same base files. ModLoader only covered the most common base edits, barely touching the code, and not enough for a major mod. Additionally, to use a mod, you still had to manually insert code into the Minecraft jar, a task that turned many players off of modding.

Seeing that their mods couldn’t be used together, the creators of several major mods launched a new project. They would call it Minecraft Forge. Started by Eloraam of Redpower and SpaceToad of Buildcraft, it would see rapid adoption by many of the major mods of the time. Forge built on top of ModLoader, greatly expanding the number of base hooks and allowing many more mods to work together than was previously possible. This ushered in the true “Golden Age” of modding, which would continue from Beta and into Release.

Minecraft 1.0 was released in November of 2011, heralding Minecraft’s “official” release. Around the same time, client modding was undergoing a shift. Many of the most prominent developers were moving on to other things, including the entire Forge team. For the most part, their mods would survive without them, but some would not. Redpower, for example, ceased all development in late 2012. Eloraam, SpaceToad, and Flowerchild would hand the reins of Forge off to LexManos, a relatively unknown name at the time. The “Golden Age” was at an end, but it was replaced by an explosion of new mods, and modding became even more popular than ever.

The new Forge team, consisting mainly of LexManos and cpw, would bring many new innovations to modding. Eventually they even developed a replacement for Risugami’s ModLoader, naming it ForgeModLoader and incorporating it into Forge. Users would no longer be required to muck around with Minecraft’s internals to install mods. Innovation has continued to the present day, and mods for Minecraft have become too numerous to count. However, the picture for server mods hasn’t been so rosy. Bukkit, the long dominant server mod, suffered a killing blow in 2014.
Licensing conflicts developed between the original creators and maintainers, largely revolving around who “owned” the project after the primary maintainers resigned. Ultimately, one of the most prolific maintainers used a technicality to invalidate the rights of the project to use his code, effectively killing the entire project. A replacement has yet to develop, leaving the server community limping along on increasingly outdated code.

But one shouldn’t be too concerned about the future. There have been challenges in the past, but nearly every time a project died, it was soon replaced by something even better. Minecraft has one of the largest, most vibrant, and most mainstream modding communities ever to exist. It’s had a long and varied history, and this has been just a brief glimpse into that heritage. There are many more events, both large and small, that have helped shape the community. May the future of Minecraft continue to be as interesting.

About the Author

Aaron Mills was born in 1983 and lives in the Pacific Northwest, which is a land rich in lore, trees, and rain. He has a Bachelor's Degree in Computer Science and studied at Washington State University Vancouver. He is best known for his work on the Minecraft mod Railcraft, but has also contributed significantly to the Minecraft mods Forestry and Buildcraft, as well as making some contributions to the Minecraft Forge project.

Introduction to Sklearn

Janu Verma
16 Apr 2015
7 min read
This is an introductory post on scikit-learn, where we will learn basic terminology and functionality of this amazing Python package. We will also explore basic principles of machine learning and how machine learning can be done with sklearn.

What is scikit-learn (sklearn)?

scikit-learn is a Python framework for machine learning. It has an efficient implementation of various machine learning and data mining algorithms. It is easy to use and accessible to everybody – open source, and a commercially usable BSD license. Data scientists love Python, and most scientists in the industry use this as their data science stack:

    numpy + pandas + sklearn

Dependencies

- Python (>= 2.6)
- numpy (>= 1.6.1)
- scipy (>= 0.9)
- matplotlib (for some tasks)

Installation

Mac:

    pip install -U numpy scipy scikit-learn

Linux:

    sudo apt-get install build-essential python-dev python-setuptools python-numpy python-scipy libatlas-dev libatlas3gf-base

After you have installed sklearn and all its dependencies, you are ready to dive further.

Input data

Most machine learning algorithms implemented in sklearn expect the input data in the form of a numpy array of shape [nSamples, nFeatures]. nSamples is the number of samples in the data. Each sample is an observation or an instance of the data. A sample can be a text document, a picture, a row in a database or a csv file – anything you can describe with a fixed set of quantitative traits. nFeatures is the number of features or distinct traits that describe each sample quantitatively. Features can be real-valued, boolean or discrete. The data can be very high dimensional, such as with hundreds of thousands of features, and it can be sparse, such as when most of the feature values are zero.

Example

As an example, we will look at the Iris dataset, which comes with sklearn and every other ML package that I know of!

    from sklearn.datasets import load_iris
    iris = load_iris()
    input = iris.data
    output = iris.target

What are the numbers of samples and features in this dataset? Since the input data is a numpy array, we can access its shape using the following:

    nSamples = input.shape[0]
    nFeatures = input.shape[1]
    >> nSamples = 150
    >> nFeatures = 4

This dataset has 150 samples, where each sample has 4 features. Let's look at the names of the target output:

    iris.target_names
    >> array(['setosa', 'versicolor', 'virginica'], dtype='|S10')

To get a better idea of the data, let's look at a sample:

    input[0]
    >> array([5.1, 3.5, 1.4, 0.2])

    output[0]
    >> 0

The data is given as a numpy array of shape (150, 4), which consists of the measurements of physical traits for three species of irises. The features include:

- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm

The target values {0, 1, 2} denote three species:

- Setosa
- Versicolour
- Virginica

Here is the basic idea of machine learning. The basic setting for a supervised machine learning model is as follows: we have a labeled training set, such as samples with known values of a target, and we are given an unlabeled testing set, such as samples for which the target values are unknown. The goal is to build a model that trains on the labeled data to predict the output for the unlabeled data. Supervised learning is further broken down into two categories: classification and regression. In classification, the target value is discrete; in regression, the target value is continuous.
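As a quick aside, since the stack mentioned earlier is numpy + pandas + sklearn, it can be convenient to drop these same arrays into a pandas DataFrame for inspection before doing any modeling. The snippet below is a minimal sketch of that idea; pandas is purely optional here and is not required by sklearn.

    # Optional: inspect the iris data as a pandas DataFrame before modeling.
    import pandas as pd
    from sklearn.datasets import load_iris

    iris = load_iris()
    df = pd.DataFrame(iris.data, columns=iris.feature_names)
    df["species"] = [iris.target_names[t] for t in iris.target]

    print(df.head())                      # first few samples with named feature columns
    print(df["species"].value_counts())   # 50 samples of each species

With the data understood, we can move on to building models.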
There are various machine learning methods that can be used to build a supervised learning model, for example decision trees, k-nearest neighbors, SVM, linear and logistic regression, random forests, and more. I won't talk about these methods and their differences in this post. Instead, I will give an illustration of using sklearn for predictive modeling with a regression and a classification model.

Iris Example continued (Classification)

We saw that the data is a numpy array of shape (150, 4) consisting of measurements of physical traits for three iris species.

Goal

The task is to build a machine learning model to predict the species of a sample given the values of the features. We will split the iris set into a training and a test set. The model will be built on the training set and evaluated on the test set. Before we do that, let's look at the general outline of a machine learning model in sklearn.

Outline of sklearn models

The basic outline of a sklearn model is given by the following pseudocode:

    input = labeled data
    X_train = input.features
    Y_train = input.target
    algorithm = sklearn.ClassImplementingTheAlgorithm(parameters of the algorithm)
    fitting = algorithm.fit(X_train, Y_train)
    X_test = unlabeled set
    prediction = algorithm.predict(X_test)

Here, as before, the labeled training data is in the form of a numpy array with X_train as the array of feature values and Y_train as the corresponding target values. In sklearn, different machine learning algorithms are implemented as classes, and we will choose the class corresponding to the algorithm we want to use. Each class has a method called fit, which fits the input training data to estimate the parameters of the algorithm. Now, with these estimated parameters, the predict method computes the estimated value of the target for the test examples.

sklearn model on iris data

Following the general outline of the sklearn model, we will now build a model on the iris data to predict the species:

    from sklearn.datasets import load_iris
    iris = load_iris()
    X = iris.data
    Y = iris.target

    from sklearn import cross_validation
    X_train, X_test, Y_train, Y_test = cross_validation.train_test_split(X, Y, test_size=0.4)

    from sklearn.neighbors import KNeighborsClassifier
    algorithm = KNeighborsClassifier(n_neighbors=5)
    fitting = algorithm.fit(X_train, Y_train)
    prediction = algorithm.predict(X_test)

The iris data set is split into a training and a test set using the cross_validation module from sklearn. 60% of the iris data formed the training set and the remaining 40% formed the test set. The cross_validation split picks training and test examples randomly. We used the K-nearest neighbor algorithm to build this model. There is no reason for choosing this method other than simplicity. The prediction of the sklearn model is a label from {0, 1, 2} for each of the test cases. Let's check how well this model performed:

    from sklearn.metrics import accuracy_score
    accuracy_score(Y_test, prediction)
    >> 0.97

Regression

We will discuss the simplest example of fitting a line through the data.
    # Create some simple data
    import numpy as np
    np.random.seed(0)
    X = np.random.random(size=(20, 1))
    y = 3 * X.squeeze() + 2 + np.random.normal(size=20)

    # Fit a linear regression to it
    from sklearn.linear_model import LinearRegression
    model = LinearRegression(fit_intercept=True)
    model.fit(X, y)
    print("Model coefficient: %.5f, and intercept: %.5f" % (model.coef_, model.intercept_))
    >> Model coefficient: 3.93491, and intercept: 1.46229

    # model prediction
    X_test = np.linspace(0, 1, 100)[:, np.newaxis]
    y_test = model.predict(X_test)

Thus we get the values of the target (which are continuous). We gave a simple model based on sklearn's implementation of the K-nearest neighbor algorithm and linear regression. You can try other models; the Python code will be much the same for most of the methods in sklearn, except for a change in the name of the algorithm. Discover more Machine Learning content and tutorials on our dedicated Machine Learning page.

About the Author

Janu Verma is a Quantitative Researcher at the Buckler Lab, Cornell University, where he works on problems in bioinformatics and genomics. His background is in mathematics and machine learning and he leverages tools from these areas to answer questions in biology. He holds a Masters in Theoretical Physics from the University of Cambridge in the UK, and he dropped out of the mathematics PhD program (after 3 years) at Kansas State University. He has held research positions at the Indian Statistical Institute – Delhi, the Tata Institute of Fundamental Research – Mumbai and the JN Center for Advanced Scientific Research – Bangalore. He is a voracious reader and an avid traveler. He hangs out at the local coffee shops, which serve as his office away from office. He writes about data science, machine learning and mathematics at Random Inferences.

The best Angular yet - New Features in AngularJS 1.3

Sebastian Müller
16 Apr 2015
5 min read
AngularJS 1.3 was released in October 2014 and it brings with it a lot of new and exciting features and performance improvements to the popular JavaScript framework. In this article, we will cover the new features and improvements that make AngularJS even more awesome.

Better Form Handling with ng-model-options

The ng-model-options directive added in version 1.3 allows you to define how model updates are done. You use this directive in combination with ng-model.

Debounce for Delayed Model Updates

In AngularJS 1.2, with every key press, the model value was updated. With version 1.3 and ng-model-options, you can define a debounce time in milliseconds, which will delay the model update until the user hasn’t pressed a key in the configured time. This is mainly a performance feature to save $digest cycles that would normally occur after every key press when you don’t use ng-model-options:

    <input type="text" ng-model="my.username" ng-model-options="{ debounce: 500 }" />

updateOn - Update the Model on a Defined Event

An alternative to the debounce option inside the ng-model-options directive is updateOn. This updates the model value when the given event name is triggered. This is also a useful feature for performance reasons:

    <input type="text" ng-model="my.username" ng-model-options="{ updateOn: 'blur' }" />

In our example, we only update the model value when the user leaves the form field.

getterSetter - Use getter/setter Functions in ng-model

app.js:

    angular.module('myApp', []).controller('MyController', ['$scope', function($scope) {
      var myEmail = '[email protected]';
      $scope.user = {
        email: function email(newEmail) {
          if (angular.isDefined(newEmail)) {
            myEmail = newEmail;
          }
          return myEmail;
        }
      };
    }]);

index.html:

    <div ng-app="myApp" ng-controller="MyController">
      current user email: {{ user.email() }}
      <input type="email" ng-model="user.email" ng-model-options="{ getterSetter: true }" />
    </div>

When you set getterSetter to true, Angular will treat the referenced model attribute as a getter and setter method. When the function is called with no parameter, it’s a getter call and AngularJS expects that you return the current assigned value. AngularJS calls the method with one parameter when the model needs to be updated.

New Module - ngMessages

The new ngMessages module provides features for cleaner error message handling in forms. It’s a feature that is not contained in the core framework and must be loaded via a separate script file.

index.html:

    …
    <body>
      ...
      <script src="angular.js"></script>
      <script src="angular-messages.js"></script>
      <script src="app.js"></script>
    </body>

app.js:

    // load the ngMessages module as a dependency
    angular.module('myApp', ['ngMessages']);

The first version contains only two directives for error message handling:

    <form name="myForm">
      <input type="text" name="myField" ng-model="myModel.field" ng-maxlength="5" required />
      <div ng-messages="myForm.myField.$error" ng-messages-multiple>
        <div ng-message="maxlength">
          Your field is too long!
        </div>
        <div ng-message="required">
          This field is required!
        </div>
      </div>
    </form>

First, you need a container element that has an “ng-messages” directive with a reference to the $error object of the field you want to show error messages for. The $error object contains all validation errors that currently exist. Inside the container element, you can use the ng-message directive for every error type that can occur. Elements with this directive are automatically hidden when no validation error for the given type exists.
getterSetter - Use getter/setter Functions in ng-model

app.js:

angular.module('myApp', [])
  .controller('MyController', ['$scope', function($scope) {
    var myEmail = 'sebastian@example.com';
    $scope.user = {
      email: function email(newEmail) {
        if (angular.isDefined(newEmail)) {
          myEmail = newEmail;
        }
        return myEmail;
      }
    };
  }]);

index.html:

<div ng-app="myApp" ng-controller="MyController">
  current user email: {{ user.email() }}
  <input type="email" ng-model="user.email" ng-model-options="{ getterSetter: true }" />
</div>

When you set getterSetter to true, Angular treats the referenced model attribute as a combined getter and setter method. When the function is called with no parameter, it is a getter call and AngularJS expects you to return the currently assigned value. AngularJS calls the method with one parameter when the model needs to be updated.

New Module - ngMessages

The new ngMessages module provides features for cleaner error message handling in forms. It is not contained in the core framework and must be loaded via a separate script file.

index.html:

<body>
  ...
  <script src="angular.js"></script>
  <script src="angular-messages.js"></script>
  <script src="app.js"></script>
</body>

app.js:

// load the ngMessages module as a dependency
angular.module('myApp', ['ngMessages']);

The first version of the module contains only two directives for error message handling:

<form name="myForm">
  <input type="text" name="myField" ng-model="myModel.field" ng-maxlength="5" required />
  <div ng-messages="myForm.myField.$error" ng-messages-multiple>
    <div ng-message="maxlength">
      Your field is too long!
    </div>
    <div ng-message="required">
      This field is required!
    </div>
  </div>
</form>

First, you need a container element with an ng-messages directive that references the $error object of the field you want to show error messages for. The $error object contains all validation errors that currently exist. Inside the container element, you can use the ng-message directive for every error type that can occur. Elements with this directive are automatically hidden when no validation error for the given type exists. When you set the ng-messages-multiple attribute on the element carrying the ng-messages directive, all validation error messages are displayed at the same time.

Strict-DI Mode

AngularJS provides multiple ways to use the dependency injection mechanism in your application. One way is not safe to use when you minify your JavaScript files. Let's take a look at this example:

angular.module('myApp', [])
  .controller('MyController', function($scope) {
    $scope.username = 'JohnDoe';
  });

This example works perfectly in the browser as long as you do not minify the code with a JavaScript minifier like UglifyJS or the Google Closure Compiler. The minified code of this controller might look like this:

angular.module('myApp', [])
  .controller('MyController', function(a) {
    a.username = 'JohnDoe';
  });

When you run this code in your browser, you will see that your application is broken. Angular cannot inject the $scope service anymore because the minifier changed the function parameter name. To prevent this type of bug, you have to use the array syntax:

angular.module('myApp', [])
  .controller('MyController', ['$scope', function($scope) {
    $scope.username = 'JohnDoe';
  }]);

When this code is minified by your tool of choice, AngularJS still knows what to inject, because the provided string '$scope' is not rewritten by the minifier:

angular.module('myApp', [])
  .controller('MyController', ['$scope', function(a) {
    a.username = 'JohnDoe';
  }]);

With the new Strict-DI mode, developers are forced to use the array syntax; an exception is thrown when they don't. To enable Strict-DI mode, add the ng-strict-di directive to the element that carries the ng-app directive:

<html ng-app="myApp" ng-strict-di>
  <head>
  </head>
  <body>
    ...
  </body>
</html>

IE8 Browser Support

AngularJS 1.2 had built-in support for Internet Explorer 8 and up. Now that the global market share of IE8 has dropped, and because supporting the browser takes a lot of time and extra code, the team decided to drop support for the browser that was released back in 2009.

Summary

This article shows only a few of the new features added in AngularJS 1.3. To learn about all of the new features, read the changelog file on GitHub or check out the AngularJS 1.3 migration guide.

About the Author

Sebastian Müller is Senior Software Engineer at adesso AG in Dortmund, Germany. He spends his time building Single Page Applications and is interested in JavaScript architectures. He can be reached at @Sebamueller on Twitter and as SebastianM on GitHub.

The Year that was: 2014 in Game Development

Ed Bowkett
01 Apr 2015
6 min read
This blog will focus on the year that was: the top five important events to come out of game development and their implications for the wider community. Bear in mind this is my opinion, but feel free to share other events you found equally important.

1) AAA game engines become more freely available

As I mentioned in one of my first blogs of 2014, the Game Developers Conference this year was pretty spectacular for one reason. The three industry-standard game engines (from Crytek, Epic Games and Unity) all announced major updates. Unreal introduced a price point of $19 a month to gain full access to their AAA engine. Crytek too introduced an exceptional price point: $10 a month for their amazing engine, CryEngine. At these prices, budding game developers can finally develop awesome games, and tools that were once studio-only have now reached consumers like you and me. This seismic change can only be positive for the games industry.

2) VR moved forward… kind of

As linked above in my first blog, I confessed I didn't get Virtual Reality. This is mostly down to my genetics (I get motion sickness), but also because every time a VR headset is announced, it seems to get more hype than feels necessary. Take the Oculus Rift. People got excited for it and stated that both the headset and VR in general would revolutionize the way we play games. As soon as Oculus Rift was purchased for a pretty handsome sum by Facebook, the critics and naysayers came out in force. If Virtual Reality is set to be a big thing, and in my opinion it will be, then criticism should not be focused on whoever is developing the headset, but on the technology itself. Don't abandon a technology in its infancy because a social network you don't particularly like has decided VR is worth investing in. Wait and give it a chance to mature. If you insist that the Oculus Rift is not the way forward because, heaven forbid, it has the Zuckerberg Curse on it, there are alternatives like Project Morpheus from Sony, the Archos VR headset and Samsung's VR headset.

3) Microsoft buys Mojang

Minecraft is a classic of a game: endless hours poured into constructing buildings and structures, ranging from castles to the USS Enterprise, as well as exploring a seemingly endless world. What also made Minecraft so special was that it wasn't a large studio that made it; it was an indie developer, Markus 'Notch' Persson, who lovingly created the game. So it came as a bit of a surprise when Microsoft bought Mojang for a reported $2.5 billion. While little has happened since the purchase (it's only been three months), it feels quite shrewd from Microsoft, bearing in mind that sales of its flagship console, the Xbox One, have been struggling against its competitor, the PS4. Buying such an iconic studio as Mojang, and gaining its fanbase, gives Microsoft a platform to begin growing its gamer base again.

4) Quality of games decreasing

This section is going to be a bit of a rant, for which I apologise. Assassin's Creed Unity was a highly anticipated game. Priced at a pricey $60, you would expect, as a hard-working gamer, to get a high-quality game on release. Sadly this wasn't the case. There were so many glitches, particularly in the PC version, that I felt quite embarrassed I had forked out $60 for it. While a day-one patch did arrive, it shook my confidence in buying games at full price again. I would rather wait for a legendary Steam sale, by which point the game has already been through several patches. While I accept that sometimes glitches just happen, the pressure of the price to the consumer, combined with the other alternatives gamers can enjoy, means that a game needs to be of the highest quality when released. The fact that it wasn't shakes consumer confidence and lessens the enjoyment. Plus, not everyone has the internet speeds to download monstrous patches. It also beggars belief that none of the glitches were noticed during internal testing. Having tested games briefly in 2008, I had many a great time glitching on various pieces of terrain, but it was reported and the glitch fixed. That so many glitches went unnoticed says more about the internal testing than it does about the time constraints of getting the game out on time. Nonetheless, Assassin's Creed is a hugely popular franchise; thousands of gamers purchased it, and unfortunately (or fortunately, if you are Ubisoft) the market has spoken: we will continue to purchase buggy games.

5) GamerGate

I pondered whether to include this at all, given I've not blogged about it in the past and have done my best to avoid discussing it in any of my blogs or tweets. The truth of the matter is that it has to be mentioned. For those that don't know, GamerGate was sparked by an accusation that female game developers slept with game reviewers to get more favorable reviews. From this spawned a wide-ranging debate that can be interpreted by some as a necessary conversation for the gaming industry to have, and by others as a cesspit of toxicity and harassment. The argument itself has mutated so much that, as it stands, I no longer know what it stands for. During its short life it has stood for free speech and video game journalism ethics, and carried connotations of racism, homophobia and a troll-feeding pit. Sadly, what it has succeeded in doing is showing the video games industry in a negative light. This is a big shame for an industry still arguably in its infancy. I hope that 2015 will bring back what makes the gaming community so special and we can get back to what it should be about: making great art that is loved by all. It will be interesting to see how the gaming industry grows from GamerGate, and whether it can.

Encryption in the Cloud: An Overview

Robi Sen
31 Mar 2015
9 min read
In this post we will look at how to secure the data in your AWS solution using encryption (if you need a primer on encryption, here is a good one). We will also look at some of the services from AWS and third-party vendors that will help you not only encrypt your data, but also take care of more problematic issues such as managing keys.

Why Encryption

Whether it's Intellectual Property (IP) or simply user names and passwords, your data is important to you and your organization, so keeping it safe matters. Although hardening your network, operating systems, access management and other controls can greatly reduce the chance of being compromised, the cold hard reality is that, at some point in your company's existence, that data will be compromised. Assuming that you will be compromised is one major reason to encrypt data. Another is the likelihood of accidental or deliberate inappropriate data access and leakage by employees which, depending on what studies you look at, is perhaps the largest cause of data exposure. Regardless of the reason or vector, you never want to expose important data unintentionally, and for this reason encrypting your sensitive information is fundamental to basic security.

Three states of data

Generally we classify data as having three distinct states:

- Data at rest, such as data in files on a drive or data in a database
- Data in motion, such as web requests going over the Internet via port 80
- Data in use, which is generally data in RAM or data being used by the CPU

In general, the most at-risk data is data at rest and data in motion, both of which are reasonably straightforward to secure in the cloud, although their implementation needs to be carefully managed to maintain strong security.

What to encrypt and what not to

Most security people would love to encrypt anything and everything all the time, but encryption creates numerous real or potential problems. The first is that encryption is often computationally expensive and can consume CPU resources, especially when you're constantly encrypting and decrypting data. Indeed, this has been one of the main reasons why vendors like Google did not encrypt all search traffic until recently. Another reason people often do not apply encryption widely is that it creates potential system administration and support issues since, depending on the encryption approach you take, you can create complex issues for managing your keys. Even the simplest encryption systems, such as encrypting a whole drive with a single key, require strong key management to be effective. This can add expense and resource costs, since organizations have to implement human and automated systems to manage and control keys. While there are many more reasons people do not widely implement encryption, the reality is that you usually have to make determinations on what to encrypt. Most organizations follow a process for deciding what to encrypt along the following lines:

1. What data must be private? This might be Personally Identifiable Information, credit card numbers, or the like that is required to be private for compliance reasons such as PCI or FISMA.

2. What level of sensitivity is this data? Some data, such as PII, often has federal data security requirements that are dictated by the industry you are in. For example, in health care, HIPAA requirements dictate the minimum level of encryption you must use (see here for an example). Other data might require further encryption levels and controls.

3. What is the data's value to my business? This is a tricky one. Many companies decide they need little to no encryption for data they assume is not important, such as their users' email addresses. Then they get compromised, their users are spammed and have their identities stolen, potentially causing real legal damages to the company or destroying its reputation. Depending on your business and your business model, even if you are not required to encrypt your data, you may want to in order to protect your company, its reputation or the brand.

4. What is the performance cost of using a specific encryption approach and how will it affect my business?

These high-level steps will give you a sense of what you should or need to encrypt, and how to encrypt it. Item 4 is especially important: while it might be nice to encrypt all your data with 4096-bit keys, this will most likely create too high a computational load, and too much of a bottleneck, for any high-transaction application, such as an e-commerce store, to be practical. This takes us to our next topic, which is choosing encryption approaches.

Encryption choices in the cloud for Data at Rest

Generally there are two major choices to make when encrypting data, especially data at rest:

1. Encrypt only key sensitive data such as logins, passwords, social security numbers and similar data.
2. Encrypt everything.

As we have pointed out, while encrypting everything would be nice, there are a lot of potential issues with it. In some cases, however, such as backing up data to S3 or Glacier for long-term storage, it might be a total no-brainer. More typically, though, numerous factors weigh in. Another choice you have to make with cloud solutions is where you will do your encryption. This needs to be influenced by your specific application requirements, business requirements, and the like. When deploying cloud solutions you also need to think about how you interact with your cloud system. While you might be using a secure VPN from your office or home, you need to think about encrypting your data on the client systems that interact with your AWS-based system. For example, if you upload data to your system, don't just trust SSL: make sure you use the same level of encryption on your home or office systems as you use on AWS. AWS allows you to use server-side encryption, client-side encryption, or server-side encryption with your own keys that you manage on the client. The ability to use your own keys is an important and recent feature, since various federal and business security standards require you to maintain possession of your own cryptographic keys. That said, managing your own keys can be difficult to do well. AWS offers some help with Hardware Security Modules through CloudHSM. Another route is the multiple vendors that offer services to help you manage enterprise key management, such as CloudCipher.
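To make the client-side option just discussed a little more concrete, here is a rough Node.js sketch that encrypts a file locally with the built-in crypto module before it is handed to any upload call. The file names are placeholders and the key handling is deliberately simplified for illustration; in practice the key would come from a KMS, an HSM such as CloudHSM, or a key-management service, never from source code.

// Illustrative only: encrypt a file locally before uploading it anywhere.
const crypto = require('crypto');
const fs = require('fs');

function encryptFile(inputPath, outputPath, key) {
  const iv = crypto.randomBytes(16);                        // fresh IV for every file
  const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
  const plaintext = fs.readFileSync(inputPath);
  // Store the IV alongside the ciphertext; the IV is not secret.
  const ciphertext = Buffer.concat([iv, cipher.update(plaintext), cipher.final()]);
  fs.writeFileSync(outputPath, ciphertext);
}

const key = crypto.randomBytes(32);                         // 256-bit key (example only)
encryptFile('customers.csv', 'customers.csv.enc', key);
// The .enc file can now be uploaded (for example to S3); only ciphertext leaves the client.

Because only the ciphertext and the non-secret IV ever leave the client, even a compromised bucket exposes nothing readable without the key you control.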
Data in Motion

Depending on your application and its users, you may need to send sensitive data to your AWS instances without being able to encrypt it on the client side first. Examples include creating a membership for your site, where you want to protect the user's password, or an e-commerce transaction, where you want to protect credit card and other information. In these cases, instead of using regular HTTP, you want to use the HTTP Secure protocol, HTTPS. HTTPS makes use of SSL/TLS, an encryption protocol for data in motion, to encrypt data as it travels over the network. While HTTPS can affect the performance of web servers or network applications, its benefits usually far outweigh the negligible overhead it creates. Indeed, AWS makes extensive use of SSL/TLS to protect network traffic between you and AWS and between the various AWS services. You should therefore protect any data in motion with a reputable SSL certificate. Also, if you are new to using SSL for your application, you should strongly consider reviewing OWASP's excellent cheat sheet on SSL. Finally, as stated earlier, don't just trust SSL when sharing sensitive data. The best practice is to hash or encrypt any and all sensitive data whenever possible, since attackers have sometimes compromised SSL security.
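As one small example of that practice, passwords in particular should only ever be stored as salted, slow hashes. A sketch using Node.js's built-in crypto module and PBKDF2 might look like the following; the iteration count and output length are illustrative rather than a recommendation:

// Illustrative only: store a salted, slow hash of the password, never the password itself.
const crypto = require('crypto');

function hashPassword(password) {
  const salt = crypto.randomBytes(16);
  // 100,000 iterations of PBKDF2-SHA-256; tune the count to your hardware.
  const hash = crypto.pbkdf2Sync(password, salt, 100000, 32, 'sha256');
  return { salt: salt.toString('hex'), hash: hash.toString('hex') };
}

function verifyPassword(password, stored) {
  const hash = crypto.pbkdf2Sync(password, Buffer.from(stored.salt, 'hex'), 100000, 32, 'sha256');
  return crypto.timingSafeEqual(hash, Buffer.from(stored.hash, 'hex'));
}

const record = hashPassword('correct horse battery staple');
console.log(verifyPassword('correct horse battery staple', record)); // true

Because the hash is one-way, a stolen database or an intercepted backup does not directly reveal the original passwords.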
Data in Use

Data in use encryption, the encryption of data while it is being used in RAM or by the CPU, is generally a special case that is mostly ignored in modern hosted applications, because it is very difficult and often not considered worth the effort for systems hosted on-premises. Cloud vendors like AWS, however, create special considerations for customers, since the cloud vendor controls, and has physical access to, the hardware your system runs on. This can potentially allow a malicious actor with access to that hardware to circumvent data encryption by reading a system's physical memory to steal encryption keys, or to steal data that sits in plain text in memory. As of 2012, the Cloud Security Alliance has recommended the use of encryption for data in use as a best practice; see here. For this reason, a number of vendors have started offering data in use encryption specifically for cloud systems like AWS. This should be considered only for systems or applications with the most extreme security requirements, such as national security. Companies like PrivateCore and Vaultive currently offer services that allow you to encrypt your data even from your service provider.

Summary

Encryption and its proper use is a huge subject, and we have only been able to touch on it lightly. Implementing encryption is rarely easy, yet AWS takes much of the difficulty out of it by providing a number of services for you. That being said, understanding what your risks are, how encryption can help mitigate them, what specific types of encryption to use, and how they will affect your solution requires continued study. To help you with this, some useful reference material is provided below.

Encryption References

- OWASP: Guide to Cryptography
- OWASP: Password Storage Cheat Sheet
- OWASP: Cryptographic Storage Cheat Sheet
- Best Practices: Encryption Technology
- Cloud Security Alliance: Implementation Guidance, Category 8: Encryption
- AWS Security Best Practices

From the 4th to the 10th of April, join us for Cloud Week: save 50% on our top cloud titles or pick up any 5 for just $50! Find them here.

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting-edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as UnderArmour, Sony, CISCO, IBM, and many others to help build out new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.

Make the Most of Your Data with Syncfusion

Daniel Jebaraj
26 Mar 2015
3 min read
Syncfusion, Inc., a developer solutions company, recently formed a partnership with PacktPub that enables each company to share the other's valuable assets with different communities of learners. Both are committed to helping all types of developers stay up to date on the latest technologies, and believe that as programming evolves, so should your learning experience. With e-books from PacktPub and software from Syncfusion, you have access to the comprehensive information and flexible tools you need to keep up with changing developer environments. Here, Syncfusion Vice President Daniel Jebaraj provides some insight into the company's latest endeavors in the field of big data processing and analysis.

This has been an exciting year for Syncfusion. In addition to maintaining our quarterly release cycle for our Essential Studio suite of .NET and JavaScript components, we also launched our first-ever data science offerings for Windows developers: the Syncfusion Big Data Platform and Essential Predictive Analytics.

The term "big data" is thrown around a lot these days, but what does it really mean? For us, it means a shift that enables businesses to use the data they already collect every day to their advantage. Basic customer data, such as the amount spent on each purchase, the number of times an item is purchased, and how frequently, can help business leaders make valuable predictions about future market trends.

We recently had the opportunity to prototype a custom big data solution for Rudolph Technologies, a company that provides software and equipment for several manufacturing industries, including semiconductor, LED, and flat panel display systems. The company needed to manage vast amounts of data to analyze its expanding portfolio without losing productivity. The Syncfusion solution enabled the company to store its data in an Apache Hadoop-based data warehouse, making it available for more effective analysis in the form of batch processing. A set of the data was then replicated in Apache Cassandra, and Syncfusion implemented a backend service that enables data to be served to any device, including mobile devices.

As we noted in a recent CodeProject article, using our Big Data Platform can quickly help you make the most of your data without ever leaving the comfortable and familiar Windows environment. You have complete access to Hadoop and can use tools like Sqoop, Pig, and Hive to analyze your information in one convenient, easy-to-use interface. In addition, you'll save money on deploying, composing, and executing jobs regardless of the environment you're working in.

If you're interested in learning more about what data science can do for your company, PacktPub has published some great resources on the subject, including Practical Data Science Cookbook and Hadoop Beginner's Guide. If you're already familiar with R or Hadoop, check out Big Data Analytics with R and Hadoop for tips on how to integrate the two.

About the Author

Daniel Jebaraj joined Syncfusion in 2001. As vice president, Daniel leads Syncfusion's product development while actively engaging with customers and overseeing product release cycles. He holds a master's degree in Industrial Engineering from Clemson University.

Minecraft Modding Experiences and Starter Advice

Martijn Woudstra
18 Mar 2015
6 min read
For three years now, I have tried to help a lot of people enjoy Minecraft in a different way. One specific thing I want to talk about today is add-ons for Minecraft. This article covers my personal experience, as well as some advice and tips for people who want to start developing and bring their own creativity to life in Minecraft.

We all know the game Minecraft, where you live in a blocky world and can create literally everything you can imagine. So what could possibly be more fun than making the Empire State Building? Inventing new machines, formulating new spells, designing factories and automatic builders, for starters! However, as most of you probably know already, these features are not present in the Minecraft world, and this is where people like me jump in. We are mod developers. We are the people who bring these awesome things to life and change the game in ways that make Minecraft even more enjoyable. Although all of this might seem easy to do, it actually takes a lot of effort and creativity. Let me walk you through the process.

Community feedback is priority number one. You can't have a fun mod if nobody else enjoys it. Sometimes I read an article on a forum about a really good idea, and then I get to work! However, just like traditional game development, a small idea posted on a forum must be fully thought through. People who come up with the ideas usually don't think of everything when they post them. You must think about things such as how to balance a given idea with vanilla Minecraft. What do you want your creation to look like? Do you need to ask for help from other authors? All of these things are essential steps for making a good modification to the amazing Minecraft experience that you crave.

You should start by writing down all of your ideas and concepts. A workflow chart helps to make sure the most essential things are done first and the details happen later. Usually I keep all of my files in Google Drive, so I can share them with others.

In my opinion, the actual modding phase is the coolest part, but it takes the longest amount of time. If you want to create something new and innovative, you might soon realize it is something you've never worked with before, which can be hard to create. For example, for a simple feature such as making a block spin automatically, you could easily work for two hours just to create the basic movements. This is where experience kicks in. When you make your first modification, you might bump into the smallest problems. These little problems kept me down for quite a long time. It can be a stressful process, but don't give up! Luckily for me, there were a lot of people in the Minecraft modding community who were kind enough to help me out through the early stages of my development career. At this moment I have reached a point where my experience allows most problems to be solved easily, so mod development has become a lot more fun. I even decided to join a modding team, and I took over as lead on that project. Our final mod turned out to be amazing. A little later, I started a tutorial series together with a good friend of mine, for people who wanted to start with the amazing art of making Minecraft mods. This tutorial series was quite a success, with 7,000 views on the website and almost 2,000 views on YouTube. I do my best to help people take their first steps into this amazing community by making tutorials, writing articles about my experiences, and describing my ideas on how to get into modding.

What I noticed right away is that people tend to go too fast in the beginning. Minecraft is written in Java, a programming language. I have spoken to some people who didn't even know this, and yet were trying to make a mod. Unfortunately, life doesn't work like that. You need to know the basics of the language before you can use it properly. Therefore, my first piece of advice is to learn the basics of Java. There are hundreds of tutorials online that can teach you what you need to know. Personally, that's how I learned Java too!

Next up is to get involved in the community. Minecraft Forge is basically a bridge between the standard Minecraft experience and the limitless possibilities of a modded Minecraft game. Minecraft Forge has a wide range of modders who definitely do not mind giving you some advice or helping out with problems. Another good way to learn quickly is to team up with someone. Ask around on the forums for a teacher, or someone just as dedicated as you, and work together on a project you both want to truly bring to life. Start making a dummy mod, and help each other when you get stuck. No two people tackle a task the same way, and perhaps you can absorb some good habits from your teammate. When I did this, I learned a thousand new ways to write pieces of code I would never have thought of on my own.

The last and most important thing I want to mention in this post is to always have fun doing what you're doing. If you're having a hard time enjoying modding, take a break. Modding will pull you back if you really want it again. I am speaking from personal experience.

About the author

Martijn Woudstra lives in a city called Almelo in the Netherlands. Right now he is studying Clinical Medicine at the University of Twente. He learned Java programming about three years ago. For over a year he has been making Minecraft mods, which required him to learn how to translate Java into the API used to make the mods. He enjoys teaching others how to mod in Minecraft, and along with his friend Wuppy, has created the Orange Tutorial site (http://orangetutorial.com). The site contains tutorials, high-quality videos and understandable code. It is a must-see resource if you are interested in crafting Minecraft mods.

Is 2015 the Year of Deep Learning?

Akram Hussain
18 Mar 2015
4 min read
The new phenomenon to hit the world of 'Big Data' seems to be 'Deep Learning'. I've read many articles and papers where people question whether there's a future for it, or if it's just a buzzword that will die out like many a term before it. Likewise, I have seen people who are genuinely excited and truly believe it is the future of Artificial Intelligence; the one solution that can greatly improve the accuracy of our data and the development of systems. Deep learning is currently a very active research area. It is by no means established as an industry standard, but rather one that is picking up pace and brings a strong promise of being a game changer when dealing with raw, unstructured data.

So what is Deep Learning?

Deep learning is a concept conceived from machine learning. In very simple terms, we think of machine learning as a method of teaching machines (using complex algorithms to form neural networks) to make improved predictions of outcomes based on patterns and behaviour in initial data sets. Deep learning goes a step further. The idea is based around a set of techniques used to train machines (neural networks) to process information with levels of accuracy nearly equivalent to that of a human eye. Deep learning is currently one of the best providers of solutions to problems in image recognition, speech recognition, object recognition and natural language processing. There is a growing number of libraries available in a wide range of languages (Python, R, Java) and frameworks such as Caffe, Theano, darch, H2O, Deeplearning4j and DeepDist.

How does Deep Learning work?

The central idea is the deep neural network. Deep neural networks take traditional neural networks (or artificial neural networks) and stack them on top of one another to form layers arranged in a hierarchy. Deep learning allows each layer in the hierarchy to learn more about the qualities of the initial data. To put this in perspective: the output of the first layer becomes the input of the second layer. The same filtering process is applied a number of times, until the level of accuracy allows the machine to identify its goal as accurately as possible. It is essentially a repeated process that keeps refining the initial dataset.

Here is a simple example of deep learning. Imagine a face. We as humans are very good at making sense of what our eyes show us, all the while doing it without even realising. We can easily make out one's face shape, eyes, ears, nose, mouth and so on. We take this for granted and don't fully appreciate how difficult (and complex) it can get when writing programs for machines to do what comes naturally to us. The difficulty for machines in this case is pattern recognition: identifying edges, shapes, objects and so on. The aim is to develop these deep neural networks by increasing and improving the number of layers, training each network to learn more about the data to the point where (in our example) it is equal to human accuracy.
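To make the "output of one layer is the input of the next" idea concrete, here is a deliberately tiny JavaScript sketch of a forward pass through two stacked layers. The weights and inputs are made up and nothing is learned here; it only illustrates how layers chain together:

// Toy illustration of stacked layers: each layer's output feeds the next.
// Weights are arbitrary; a real network would learn them from data.
const sigmoid = x => 1 / (1 + Math.exp(-x));

// One fully connected layer: weights is an array of rows, one row per output unit.
function layer(inputs, weights, biases) {
  return weights.map((row, i) =>
    sigmoid(row.reduce((sum, w, j) => sum + w * inputs[j], biases[i]))
  );
}

const input = [0.8, 0.2, 0.5];                                                  // e.g. raw pixel intensities
const hidden = layer(input, [[0.4, -0.6, 0.9], [0.1, 0.8, -0.3]], [0.0, 0.1]);  // layer 1: simple features
const output = layer(hidden, [[1.2, -0.7]], [0.05]);                            // layer 2: combines them
console.log(output); // a single score between 0 and 1

A real deep network would have many more units and layers, and its weights would be learned from data rather than written by hand.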
What is the future of Deep Learning?

Deep learning seems to have a bright future for sure. It is not a new concept; I would actually argue it is now practical rather than merely theoretical. We can expect to see the development of new tools, libraries and platforms, and even improvements to current technologies such as Hadoop, to accommodate the growth of deep learning. However, it may not be all smooth sailing. Deep learning is still a very difficult and time-consuming subject to master, especially when trying to optimise networks as datasets grow larger and larger; surely they will be prone to errors? Additionally, the hierarchy of networks formed will surely have to be scaled up for larger, more complex and data-intensive AI problems.

Nonetheless, the popularity of deep learning has seen large organisations invest heavily: Yahoo, Facebook, Google's $400 million acquisition of DeepMind, and Twitter's purchase of Madbits are just a few of the high-profile investments among many. 2015 really does seem like the year deep learning will show its true potential.

Prepare for the advent of deep learning by ensuring you know all there is to know about machine learning with our article. Read 'How to do Machine Learning with Python' now. Discover more Machine Learning tutorials and content on our dedicated page. Find it here.

Chromebots: Increasing Accessibility for New Makers

David Resseguie
18 Mar 2015
5 min read
Something special happens when a kid (or adult) makes an LED blink on their own for the first time. Once new programmers realize that they can control the world around them, their minds are opened to a whole new world of possibilities. DIY electronics and programming are more accessible than ever with the introduction of the Arduino and, more recently, open source programming frameworks like Johnny-Five for building Nodebots (JavaScript-powered robots!). But there are still some basic configuration and dependency requirements that can be roadblocks for new users. Our goal as a community should be to simplify the process and develop tools that help users get to their "aha" moment faster. Chris Williams, author of the popular node-serialport library used by the Nodebots community, summarized this goal as: "Reduce the time to awesome."

Johnny-Five does a fantastic job of abstracting away many of the complexities of interacting with Arduinos, sensors, and actuators. But its use still depends on things like installing a particular firmware (Firmata) on the Arduino and setting up a proper Node.js environment for running the user's applications. These requirements are often a stumbling block for those who are just learning electronics and/or programming. So how do we simplify the process further and help new users get to "awesome" faster? Enter Chromebots.

Chromebots is an open source Chrome application that rolls up all the requirements for building Nodebots into a simple interface that can run on any desktop, laptop, or even the Chromebooks that are becoming popular in classrooms. The Chromebots application combines firmata.js, a browser serialport implementation, and all the Node.js dependencies you need to get started building Nodebots right away. It even uses a new JavaScript-based Arduino binary loader to install Firmata for you. There is nothing else to install and no special configuration required. Let's see just how easy it is to get started.

1) Install Chromebots

First, you need to install the "Johnny-Five Chrome" application from the Chrome web store. Once installed, you can launch the Chromebots application via the "Apps" icon in the bookmarks bar of Chrome, or via the Chrome App Launcher installed to your taskbar (Windows) or Dock (Mac). You'll be presented with the main Chromebots window.

2) Connect your Arduino

Plug in your Arduino UNO (or compatible board) via USB and click the blue refresh button next to the Port selection box. The Chromebots app will automatically detect which serial port is assigned to your Arduino. Depending on which operating system you are using, it will be something like "COM3" or "/dev/tty.usbmodem1411". If you aren't sure which port is the correct one to choose, simply unplug the Arduino, refresh the list, then plug it back in and see which one shows up new.

3) Install Firmata

If you haven't already installed Firmata on your Arduino (or just aren't sure), click the "Install Firmata" button. The TX/RX lights will flash briefly on your Arduino, and then the process is complete.

4) Add an LED to pin 13

For our first sample program, we'll just blink an LED. The easiest way to do this is to insert an LED directly into the Arduino. The longer lead on the LED is positive and connects to pin 13. The shorter, negative lead is inserted into ground (GND) next to pin 13.

5) Run your Johnny-Five program

Now you're ready to run your first program! By default, the Chromebots app starts out with a sample Johnny-Five program that waits for a connection to the Arduino, defines an LED on pin 13, and calls its blink() function.
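That default sample is essentially the classic Johnny-Five blink program. For reference, a standalone Node.js version of the same idea looks roughly like this; inside the Chromebots app the five variable and a board/io instance are already provided for you, so the require call and the exact Board setup shown here may differ:

// The classic Johnny-Five "hello world": wait for the board, then blink pin 13.
const five = require('johnny-five');
const board = new five.Board();    // Chromebots pre-wires the board/io for you

board.on('ready', function () {
  const led = new five.Led(13);    // the LED inserted into pin 13 and GND
  led.blink(100);                  // toggle the LED every 100 ms
});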
Click the "Run" button and the LED you plugged into pin 13 will start blinking rapidly. And that's it. You're now ready to explore the power of Johnny-Five to build your own Nodebot!

The Chromebots app makes several variables available for your use. The "five" variable is the standard Johnny-Five library. The "io" variable represents the Firmata instance for the board. jQuery ("$") and lodash ("_") are also available as convenience libraries.

So what next? I recommend trying a few of the Johnny-Five example programs to get you started with understanding how the framework is used. Note that if you'd like access to the JavaScript console for debugging purposes, there's one additional step you need to take to enable debugging inside a packaged Chrome application. Inside Chrome, enter the following into the address bar: "chrome://flags". Find the option for "Enable debugging for packed apps" and turn it on. Restart your browser (including the Chromebots app) and you can then right-click inside Chromebots and select the "Inspect Element" option in the menu to gain access to the standard Chrome Developer Tools.

Now build something awesome and then share it with the Nodebots community! I can't wait to see what you create.

About the author

David Resseguie is a member of the Computational Sciences and Engineering Division at Oak Ridge National Laboratory and lead developer for Sensorpedia. His interests include human computer interaction, Internet of Things, robotics, data visualization, and STEAM education. His current research focus is on applying social computing principles to the design of information sharing systems. He can be found on Twitter @Resseguie.

What I want to happen in Hardware - 2015

Ed Bowkett
18 Mar 2015
5 min read
Apple Watch

Set for a tentative release in spring 2015, the Apple Watch is being billed as 'the wearable to watch out for in 2015'. I can certainly see why: it's a product made by Apple. At £300, though, it's not a cheap wearable. Nonetheless, we should expect the same high quality as with all other Apple products. The cost is the only thing putting me off, along with being tied into an iPhone contract, but apart from that, the Apple Watch is set to be one of the most highly sought-after pieces of hardware in 2015.

Wearables

Wearables aren't going to go away. They continued their march in 2014; the market got swamped by similar products that essentially all did the same thing, just under different names and at varying prices. Judging by the conclusion of CES 2015, this is set to continue. Don't get me wrong: if these wearables aid in improving your health and wellbeing, then I am all for them. However, when there are 'advances' in technology such as the self-tying shoe, you have to question how far, or rather how ridiculous, it is getting. Still, 2015 is set to continue with wearables. I'm not anti-wearables; I have a Fitbit and have considered getting others. I just question when it becomes too much. Call it 'wearable weariness'. I'm also aware I will likely be in the minority.

Virtual Reality

2014 also sparked quite possibly the biggest acquisition in hardware: the purchase of Oculus Rift by Facebook for a reported $2 billion, which shows a strong commitment from Facebook towards the future development of Virtual Reality. 2014 was all about the headsets, for example Project Morpheus from Sony and Samsung's own VR headset; basically, Virtual Reality is here to stay. I'm expecting huge things from this area in 2015, although I'm not expecting perfection, if that makes sense. Virtual Reality for games will take a leap forward, and for gamers and hobbyists alike it will continue to fascinate, but it will still be in an infant state. While I love the fact that all these companies are finally becoming aware of the popularity of Virtual Reality, they also need to work together to ensure VR becomes a thing rather than an ambition. As such, while I would love Virtual Reality to be a reality in 2015, I think that would be a push. Of course, I don't know why I'd love it to happen, considering I suffer from motion sickness, but I can dream, I guess.

Steam Machine

A little over a year ago, Steam Machines were announced. These were set to be 'living room' PCs from Valve. A year later and we're still waiting. But are we? I mean, a 'living room' PC is basically just a computer, right? One that happens to be in your living room? With the delay, Valve's partners have gone ahead and published their own versions of Steam Machines; Alienware, for instance, has released the Alienware Alpha. I'm expecting further partners to release their versions, and I'm expecting customisation to be heavily marketed. I hope that Valve also announces and innovates on its own machine and encourages cheap but game-ready computers. That's the future, and that's what I want for 2015.

Internet of Things

Finally, the Internet of Things. In another blog I suggested this should really be classed as the Internet of Everything, because that is what it's becoming. Technology has permeated everything. The number of devices I can now reach over Wi-Fi or Bluetooth and put to work for me is extraordinary. For example, there is a coffee machine that brews a coffee for me if I send it a text. I mean, that's great, but is it really needed? Or am I just being a grumpy grouch? Similarly, there are devices that monitor my mood (I don't know how) and then translate that into the lighting. Again, do I need that? I'm viewing this area with caution but also with grudging respect. Of course I need a text-operated coffee machine. Of course I need a device that tracks my sleep patterns and tells me what to do to improve them. Of course I need that wristband that tells me I don't do enough exercise. I need all the nagging machines to tell me that! On a serious note, technology reaching all walks of life can only be a good thing, but we need to ensure that our functionality as human beings isn't eroded. Sounds scarily like Terminator, I know, but I'd like to make myself a good cup of coffee once in a while. Maybe once a month? I'm sure there are other hardware moments to watch out for in 2015, including the evolution of ARM boards and the ever-decreasing size of mobile phones. So what are you anticipating?