Tech Guides

What’s next after Angular? Take the Meteor challenge!

Sarah C
28 Nov 2014
4 min read
This month the Meteor framework hit version 1.0. We’ve been waiting to see this for a while here at Packt, and have definitely not been disappointed. Meteor celebrated their launch with a bang – Meteor Day saw old hands and n00bs from around the globe gather together to try out the software and build new things. You might have experienced the reverberations across the Web. Was it a carefully crafted and clever bit of marketing? Obviously. But in Meteor’s case, we can forgive a little fanfare.

Maybe you’re jaded and worn out by the barrage of new tools for web development. You should make an exception for Meteor. Maybe JavaScript isn’t your thing, and you don’t have any interest in working with Node on the backend. You should make an exception for Meteor. I’m not trying to shill anything here – every resource I’ll mention in the course of this post is entirely free. I just think the Meteor web application stack is something special.

Why does Meteor matter for a modern Web?

If you haven’t come across it before, Meteor is a full-stack JavaScript framework for the modern Web. It’s agnostic about how you want to structure your app – MVC, MVVM, MVW, stick everything in one folder with filenames such as TestTemplate(2).js – hey, man, you do you! As long as you keep your client and server concerns separate (there are special built-in rules for the client, server, and public folders to help it do its synchronous magic), Meteor won’t judge. The framework’s clarion cry is that creating application software should be radically simple.

We all know that the Web looks different now than it did even a couple of years ago. The app is queen. Single-page web apps have made the Internet programmatic and reactive. The proliferation of mobile apps redefining the online path between customers and businesses is moving us even further away from treating the Internet as a static point of reference. “Pages” are a less and less accurate metaphor for the visualization of our shared digital realm. Today’s Internet is deep, receptive, active, and aware.

Given that, it’s hard to argue against making JavaScript app development simpler. Simple doesn’t mean shoddy, or hacky. It comes from thinking about the Web as it exists now and making the right demands of a framework. Meteor.js lives its philosophy – a multi-user, real-time web app can be put together in a couple of hours, with time to spare for pretty UI design and to window-shop for packages. Don’t believe me? Try it out for yourself!

Throwing down the gauntlet

Originally, we wanted to run a Meteor challenge for the staff here in our Birmingham offices. The winner would have gotten something sweet – perhaps an extra turn on the water slide or an exemption from her turn feeding the Packt scorpions. Alas, in the end the obligation to get on with our actual jobs (helping you guys learn software) got in the way of making this happen. So I’m outsourcing the challenge to you, dear reader. Your mission:

- Download Meteor 1.0.
- Prototype an app.
- Use the time left over to feel pleased with yourself.

You get extra credit if:

- The app has a particular appeal for book lovers (like us!), or
- It contains a good pun.

If you’re a Linux or Mac user, you can get started right away. If you’re on Windows, you’ll need to use a virtual environment, either in your browser or using something like Vagrant. Don’t worry, the Meteor site has tutorials to get you started in a trice. After that, you can check out all kinds of great learning resources made available by the devs and the community.

Get started with the official docs and tutorial, then move on to more hardcore tips and tricks at BulletProof Meteor. The more aurally inclined and those of you who like to code while you drive might prefer to check out the Meteor Podcast. (Please do not code while you drive! – The Legal Team.) When you get stuck, hit up the community on the G+ group. Or browse MeteorHelp for a collation of other sources of information.

Most importantly, let me know how you get on with it! We’re excited to see what you come up with. Do you see yourself making Meteor part of your workflow in future? Check out our JavaScript Tech Page for more insight into Meteor and full-stack JS development.
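To give a taste of the “radically simple” claim, here is a minimal sketch of a real-time Meteor 1.0 app. The collection and template names are my own illustration (it assumes a matching <template name="feed"> in the HTML), and it leans on the autopublish and insecure packages that ship with a fresh Meteor project:

// messages.js – one file, shared by client and server
Messages = new Mongo.Collection('messages');

if (Meteor.isClient) {
  // Reactive helper: the template re-renders whenever the data changes,
  // on every connected client, with no extra wiring
  Template.feed.helpers({
    messages: function () {
      return Messages.find({}, { sort: { createdAt: -1 } });
    }
  });

  // Insert a document straight from the client; Meteor syncs it to the
  // server and out to other clients automatically
  Template.feed.events({
    'submit form': function (event) {
      event.preventDefault();
      Messages.insert({ text: event.target.text.value, createdAt: new Date() });
      event.target.text.value = '';
    }
  });
}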

Why Uber was wrong (and I’m right…)

Ed Gordon
28 Nov 2014
5 min read
I recently stumbled across an old Uber blog post that offended me. It was a puerile clickbait article that claimed to be able to identify the areas in a city that were more prone to engage in one-night stands. It also offended me with the assumptions it made about the data it presented to support its claim. This blog isn’t about the group of people it calls “RoGers” (“Ride of Glory-ers”); it’s about how, when you go looking for results, you can find whatever data you need. If you want to read it in its full glory, the original Uber article can be found here.

So. Some facts. People who have one-night stands (according to Uber):

- Often do it on a Friday and Saturday
- Leave after 10pm and return within 4-6 hours
- In San Francisco, do it from the highlighted locations – the darker areas are where the proportion of RoGers outweighs the proportion of non-RoGers

From the A–B locations and a somewhat artificial timeframe, we’re led to believe that we’re looking at the sinners of San Francisco. Let’s have a look at an alternate reality where there are fewer people having Uber-fuelled sex. My theory is this: young people go out. They go out after the hours of 10pm, and return before 6am. They go out to drink within a few miles of their home. They do this, as all good employees do, on a Friday and Saturday night because, well, who likes working with a hangover? I will call them ReGs (Regular Everyman Guys/Girls).

Locating my demographic

To establish where people actually live, I took a sample of 91,000 apartment listings from DataSF and ran it through Google Maps Engine, which lets n00bs like me create maps. Google Maps Engine only lets me do 500 rows of data for free, so it’s limited, but you can see that most people live in north-eastern San Francisco. This will come as no surprise to people who live there, but as I’ve only ever seen San Francisco in films (Homeward Bound et al.), I thought it prudent to prove it. Basically, we can establish that people live where Uber say they live. Lock the doors, this is going to be a wild one.

Who’s living here though?

I think the maximum age of decency for a 6-hour drinking bender is probably about 33, so I needed to know that a large portion of the RoGer area was full of young people occupying these apartments. Finding an age map of a city was really difficult, but after Googling “San Francisco Age Map” I found one at http://synthpopviewer.rti.org/. The blue represents ages 15–34; red is 55–64. Young people live in San Francisco! Who knew? More specifically, the “heat map” areas seem to match up nicely to the Uber data.

But where do they go?!

A city full of young people. What do they do at night? Are they really RoGers? There’s an article from GrowthHackers that says the no. 1 reason for Uber use (and subsequent growth) is “Restaurants and Nightlife”. It seems like a reasonable assumption that people want to drink rather than drive, so I mapped out the restaurants in San Fran (hoping that restaurants = bars and clubs too). Again, there’s a clear grouping around similar areas.

Young people live in San Francisco. They are surrounded by restaurants and bars. I know from my own experience – with the body of 27,000 Birmingham students, and as a worker in my mid-20s – that most go out on a Friday and Saturday night, that they do it after 10pm (normally about 11pm), and that they return at around 3am. They aren’t going out for Rides of Glory; they’re going out to practice expressive dance until the early hours.

What it all means

My narrative still smells about right, doesn’t it? But I’m ignoring that half of the “young people” in my sample can’t drink, I’m assuming that the people who can drink actually go out at night, and I’m assuming that my restaurant map also represents bars and nightclubs. The data about apartment listings was basically pointless. And the same can be said for the data on the RoGers of Uber. We’re told that because a young city, full of workers and students, takes trips between 10pm and 6am, they’re all playing away. It’s an analysis as full of assumptions as my own. Uber knew what they wanted (more clicks) before they came to their conclusion.

When you do this in the real world, it can lead to big mistakes. Data-driven decision making isn’t a half-and-half approach. If you choose that path, you must be dedicated to it – get all the possible relevant data points, and allow people who know what these data points mean to come up with conclusions from them.

When you ask a question before you get the data, you end up with what you want. In this scenario, I ended up with ReGs. Uber ended up with RoGers. I think I’m more correct than they are, because their conclusion is stupid, but we’re both likely to be wrong in the end. We both went into the big world of data with a question (what would make a good blog?) and ended up with clouded judgment. If you were investing the future of your company on the basis of clouded data, the implications would be rather bigger than a clickbait blog. Next time, I’ll get the data first and then let that tell me what will make a good blog.

Common Kafka Addons

Timothy Chen
11 Nov 2014
5 min read
Apache Kafka is one of the most popular choices for a durable and high-throughput messaging system. Kafka's protocol doesn't conform to any queue-agnostic standard protocol (that is, AMQP), and it provides concepts and semantics that are similar to, but still different from, other queuing systems. In this post I will cover some common Kafka tools and add-ons that you should consider employing when using Kafka as part of your system design.

Data mirroring

Most large-scale production systems deploy to multiple data centers (or availability zones/regions in the cloud), either to avoid a SPOF (Single Point of Failure) when a whole data center is brought down, or to reduce latency by serving systems closer to customers at different geo-locations. Having Kafka clients read across data centers to access data as needed is quite expensive in terms of network latency, and it affects service performance. For Kafka to deliver the best throughput and latency, all services should ideally communicate with a Kafka cluster within the same data center. Therefore, the Kafka team built a tool called MirrorMaker, which is also employed in production at LinkedIn. MirrorMaker is an installed daemon that sets up a configured number of replication streams that pull from a source cluster into a destination cluster; it is able to recover from failures and records its state in Zookeeper. With MirrorMaker you can set up Kafka clients that read and write only against clusters in their own DC, while data is replicated asynchronously from the other clusters in the background.

Auditing

Kafka often serves as a pub/sub queue between a frontend collecting service and a number of downstream services, including batching frameworks, logging services, and event processing systems. Kafka works really well with various downstream services because it holds no state for each client (which is impossible for AMQP), and it allows each consumer to consume data at a different offset of the same partition with high performance. Also, systems typically have not just one Kafka cluster but several, acting as a pipeline where a consumer of one Kafka cluster feeds into, say, a recommendation system that writes its output into another set of Kafka clusters.

One common need in such a data pipeline is logging/auditing: ensuring that all of the data you produce at the source is reliably delivered into each stage and, if it is not, knowing what percentage of the data is missing. Kafka doesn't provide this functionality out of the box, but it can be added using Kafka directly. One implementation is to give each stage of your pipeline an ID and, in the producer code at each stage, write the count of records produced in a configurable window into a specific topic (say, counts), along with the stage ID.

For example, with a Kafka pipeline that consists of stages A -> B -> C, you could imagine simple code such as the following to write out counts at a configured window:

producer.send(topic, messages);
sum += messages.count();
lastUpdatedAt = System.currentTimeMillis();

if (lastUpdatedAt - lastAuditedAt >= WINDOW_MS) {
  lastAuditedAt = System.currentTimeMillis();
  auditing.send("counts", new Message(new AuditMessage(stageId, sum, lastAuditedAt).toBytes()));
}

At the very bottom of the pipeline, the counts topic will hold the aggregate counts from each stage, and a custom consumer can pull in all of the count messages, partition them by stage, and compare the sums. The results at each window can also be graphed to show the number of messages flowing through the system. This is what is done at LinkedIn to audit their production pipeline, and incorporating it into Kafka itself has been suggested for a while, but that hasn't happened yet.

Topic partition assignments

Kafka is highly available, since it offers replication and allows users to define the number of acknowledgments and the broker assignment of each partition's replicas. By default, if no assignment is given, it's random. Random assignment might not be suitable, especially if you have stronger requirements on how these data replicas are placed. For example, if you are hosting your data in the cloud and want to withstand an availability zone failure, placing replicas across more than one AZ would be a good idea. Another example would be rack awareness in your data center. You can definitely build an extra tool that generates a specific replica assignment based on all of this information.

Conclusion

The Kafka tools described in this post are some of the common tools and features companies in the community employ, but depending upon your system there might be other needs to consider. The best way to find out whether someone has implemented a similar feature as open source is to email the mailing list or ask on IRC (freenode #kafka).

About the author

Timothy Chen is a distributed systems engineer at Mesosphere Inc. and a contributor to The Apache Software Foundation. His interests include open source technologies, big data, and large-scale distributed systems. He can be found on GitHub as tnachen.

Top 5 NoSQL Databases

Akram Hussain
31 Oct 2014
4 min read
NoSQL has seen a sharp rise in both adoption and migration from the tried and tested relational database management systems. The open source world has accepted it with open arms, which wasn’t the case with large enterprise organisations that still prefer and require ACID-compliant databases. However, as there are so many NoSQL databases, it’s difficult to keep track of them all! Let’s explore the most popular and most distinctive ones available to us:

1 - Apache Cassandra

Apache Cassandra is an open source NoSQL database. Cassandra is a distributed database management system that is massively scalable. An advantage of using Cassandra is its ability to manage large amounts of structured, semi-structured, and unstructured data. What makes Cassandra more appealing as a database system is its ability to ‘scale horizontally’, and it’s one of the few database systems that can process data in real time while delivering high performance and maintaining high availability. The mixture of a column-oriented database with a key-value store means that not all rows require the same columns, but the columns are grouped, which is what makes them look like tables. Cassandra is perfect for ‘mission critical’ big data projects, as Cassandra offers ‘no single point of failure’ if a data node goes down.

2 - MongoDB

MongoDB is an open source schemaless NoSQL database system; its unique appeal is that it’s a ‘document database’ as opposed to a relational database. This basically means it’s a ‘data dumpster’ that’s free for all. The added benefit of using MongoDB is that it provides high performance, high availability, and easy scalability (auto-sharding) for large sets of unstructured data in JSON-like documents. MongoDB is the ultimate opposite of the popular MySQL: MySQL data has to be read in rows and columns, which has its own set of benefits with smaller sets of data.

3 - Neo4j

Neo4j is an open source NoSQL ‘graph-based database’. Neo4j is the frontrunner of the graph-based model. As a graph database, it manages and queries highly connected data reliably and efficiently. It allows developers to store data more naturally from domains such as social networks and recommendation engines. The data collected from sites and applications is initially stored in nodes, which are then represented as graphs.

4 - Hadoop

Hadoop is easy to mistake for a NoSQL database, due to its ecosystem of tools for big data. It is in fact a framework for distributed data storage and processing, designed to handle huge amounts of data while limiting financial and processing-time overheads. Hadoop includes a database known as HBase, which runs on top of HDFS and is a distributed, column-oriented data store. HBase is better known as a distributed storage system for Hadoop nodes, which are then used to run analytics with the use of MapReduce v2, also known as YARN.

5 - OrientDB

OrientDB has been included as a wildcard! It’s a very interesting database and one that has everything going for it, but it has always been in the shadows of Neo4j. Orient is an open source NoSQL hybrid graph-document database that was developed to combine the flexibility of a document database with the complexity of a graph database (Mongo and Neo4j all in one!). With the growth of complex and unstructured data (such as social media), relational databases were not able to handle the demands of storing and querying this type of data. Document databases were developed as one solution; visualizing data through nodes was another. Orient has combined both into one, which sounds awesome in theory but might be very different in practice! Whether the hybrid approach works and is adopted remains to be seen.
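To make the ‘document database’ idea from the MongoDB section concrete, here is roughly what schemaless storage looks like in the mongo shell – the collection and fields are invented for illustration:

// Two documents in the same collection, with no shared schema required
db.books.insert({ title: "NoSQL Distilled", tags: ["nosql", "databases"] });
db.books.insert({ title: "Graph Databases", pages: 224, inStock: true });

// Query by a field that only some documents have; matching a scalar
// against an array field returns documents whose array contains it
db.books.find({ tags: "nosql" });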

Data on the Move: The Growing Frontier of Mobile Forensics

Julian Ursell
31 Oct 2014
5 min read
"The autopsy report details that the victim was wearing a Google Glass at the time of death." "So it looks like we're through the looking glass on this one!" "Be respectful detective, a man just died." CSI: Miami-esque exchange aside, the continual advancements made in wearable smart technologies, such as the Google Glass, smart watches, and other peripherals mean the expertise and versatility of professional analysts working in the digital forensics space will face ever greater challenges in the future. The original innovation of smartphones steepened the learning curve for forensic investigators and analysts, who have been required to adapt to the rapid development of mobile systems approaching the computing power and intelligence of desktop computers. Since then, this difficulty has only escalated with the constant iteration of new mobile hardware capabilities and updates to mobile operating systems. The velocity at which mobile technology updates makes it a nightmare for analysts to keep up to speed with system architectures (whether Android, iOS, Windows, or Blackberry) so they have the ability to forensically examine devices in a range of critical, sometimes criminal, investigations. That’s even before considering knock-off phones and those that may have been on the wrong end of a baseball bat. For forensic experts, the art of data extraction is an imperative one to master, as crucial evidence lies in the artefacts stored on devices, and encompasses common system files such as texts, emails, call logs, pictures, videos, web histories, passwords, PINs, and unlock patterns, but also less typical objects stored on third-party applications. Geolocation data, timestamps, and user accounts can all provide key evidence to working out the what, where, when, how, why for an investigation. "Perishable" or anonymous messaging services such as Snapchat and Whisper add another dimension to the discoverability of data that is intended to be temporary or anonymous (although Whisper has come under fire recently for storing confidential data, contrary to the application’s anonymity promise). In cases where app data has been "destroyed" or anonymised, forensic technicians need to extract deleted data through manual decoding and even piece together the evidence, Columbo-style, to unravel the perpetrators and the crime. The sophistication of numerous third-party applications and the types of data they are capable of storing adds a considerable degree of complexity and demands a lot in terms of forensic method and data analysis. Mobile forensics is a developing discipline, and with the rise of smart wearables, there is yet another dimension for analysts to get to grips with in the future. The smartwatch is still in the infancy stage of sophistication and adoption among consumers, but the impending release of the Apple Watch, along with the already available Samsung Gear and Pebble Steel ranges indicate that the market is going to expand in the next few years, and this makes it likely that smartwatches will become another addition in the digital (mobile) forensics space. The interesting kink in smartwatch technology is the paired interface they must share with phones, as the devices must effectively be synced in order to function, so that the watch receives notifications (texts, calls) pushed from the phone. 
The event logs stored on both devices when phone and watch interact may prove to be an important forensic artefact should they ever be the cause of investigation, and while right now, native apps on smartwatches are on the limited side (contacts, calendar, media, weather), greater sophistication in the realm of smartwatch apps cannot be far away. A hugely intriguing layer for mobile forensics is brought by the Google Glass and its array of functionalities, as once it eventually becomes globally available it will become an important device for analysts to understand how to image and pull apart. The Glass can be used for typical smartphone activities, such as sending messages, making calls, taking pictures, and social media interaction, but it's the ability to enable on-the-fly navigation and translation out in the real world, along with voice commanded Google search and access to real-time information updates through Google Now that make it particularly fascinating from a forensics standpoint. Even considering the familiarity experts will have with Android systems, the unique properties of the Glass in its use of voice commands and the search and geospatial information it collects will potentially provide crucial artefacts in investigations. Examiners will need to know how to pull voice command event logs and parse timeline data, recover deleted visual data, analyse GPS usage and locations, and even determine when in time a Glass was on or off. A student in digital forensics has even begun attempting to forensically examine the Glass. At this point in time, Glass wearers are those select few chosen for the Explorer beta program, but we should fully expect—when the device becomes completely publically available—for it to become popular enough for it to make another significant addition to the field of smart device forensics. Apparently Google Glass carriers are split into two camps—‘Explorers’ and ‘Glassholes’. Whatever the persuasion, forensic investigators may be required to look through a glass, darkly, sooner than they think.

Python Data Stack

Akram Hussain
31 Oct 2014
3 min read
The Python programming language has grown significantly in popularity and importance, both as a general programming language and as one of the most advanced providers of data science tools. There are 6 key libraries every Python analyst should be aware of, and they are:

1 - NumPy

Also known as Numerical Python, NumPy is an open source Python library used for scientific computing. NumPy gives you both speed and higher productivity using arrays and matrices. This basically means it's super useful when analyzing basic mathematical data and calculations. It was one of the first libraries to push the boundaries for Python in big data. The benefit of using something like NumPy is that it takes care of all your mathematical problems with useful functions that are cleaner and faster to write than normal Python code. This is all thanks to its similarities with the C language.

2 - SciPy

Also known as Scientific Python, SciPy is built on top of NumPy and takes scientific computing to another level. It's an advanced form of NumPy that allows users to carry out tasks such as solving differential equations, special functions, optimization, and integration. SciPy can be viewed as a library that saves time by providing predefined complex algorithms that are fast and efficient. However, there is such a plethora of SciPy tools that they might confuse users more than help them.

3 - Pandas

Pandas is a key data manipulation and analysis library in Python. Pandas' strengths lie in its ability to provide rich data functions that work amazingly well with structured data. There have been a lot of comparisons between pandas and R packages due to their similarities in data analysis, but the general consensus is that it is very easy for anyone using R to migrate to pandas, as it supposedly brings together the best features of R and Python programming.

4 - Matplotlib

Matplotlib is a visualization powerhouse for Python programming, and it offers a large library of customizable tools to help visualize complex datasets. Providing appealing visuals is vital in the fields of research and data analysis. Python's 2D plotting library is used to produce plots and make them interactive with just a few lines of code. The plotting library additionally offers a range of graphs including histograms, bar charts, error charts, scatter plots, and much more.

5 - scikit-learn

scikit-learn is Python's most comprehensive machine learning library and is built on top of NumPy and SciPy. One of the advantages of scikit-learn is the all-in-one approach it takes: it contains various tools to carry out machine learning tasks, such as supervised and unsupervised learning.

6 - IPython

IPython makes life easier for Python developers working with data. It's a great interactive web notebook that provides an environment for exploration with prewritten Python programs and equations. The ultimate goal behind IPython is improved efficiency thanks to high performance, allowing scientific computation and data analysis to happen concurrently using multiple third-party libraries.

Continue learning Python with a fun (and potentially lucrative!) way to use decision trees. Read on to find out more.

5 2D Game Engines you might not have considered

Ed Bowkett
30 Oct 2014
5 min read
In this blog we will cover 5 game engines you can use to create 2D games. 2D games are very appealing for a wide range of reasons. They’re great for the indie game scene, they’re a great way to learn the fundamentals of game development, and they’re a fun place to start coding. I’ve thrown in some odd ones that you might not have considered before – and remember, this isn’t a definitive list, just my thoughts!

Construct 2

[Image: Construct 2 start page]

Construct 2 is fantastically simple to get into, thanks to its primary focus on people with no programming experience and its drag-and-drop editor that allows the quick creation of games. It is an HTML5-based game editor and is fast and easy to learn for novices, with the ability to create certain kinds of games, such as platformers and shooters, very quickly. Added to this, the behavior system is very simple and rapid to use, so Construct 2 is an easy choice for newcomers getting to grips with their first game. Furthermore, with the announcement that Construct 2 is now supported on the Wii U, along with a booming indie game market, Construct 2 has become an appealing engine to use, particularly for the non-programmer user base. However, one potential downside for aspiring game developers is the cost of Construct 2, coming in at a pricey £79.99.

Pygame

[Image: A basic game in Pygame]

I mention this game library simply because Python is just so goddamned awesome and, with the number of people using Python currently, it deserves a mention. With an increasing number of games being created using Pygame, it’s possibly an area of interest for those who haven’t considered it in the past. It’s really easy to get started, it’s cross-platform, and – have I said it’s made in Python? As far as negatives go, performance-wise Python isn’t the greatest when it comes to large games, but this shouldn’t affect 2D development. It’s also free and open source. You just need to learn Python…

GameMaker

[Image: 'Hotline Miami' screenshot made in GameMaker]

Similar to Construct 2, GameMaker uses a simple drag-and-drop system. It primarily uses 2D graphics, but also allows limited use of 3D graphics. It has its own scripting language, Game Maker Language (GML for short), and is compatible with Windows, Mac, Ubuntu, Android, iOS, Windows Phone, and Tizen. A wealth of titles has been created with GameMaker, including the bestsellers Hotline Miami and Spelunky. GameMaker makes creating animations and sounds very simple – you can produce these within minutes, leaving you free to focus on other things, such as possibly making your game multiplayer. Also, as mentioned above, the ease of exporting to multiple platforms is a great plus for developers wanting to expand their audience quickly. Again, a potential downside is the cost of GameMaker, which comes in at $49.99.

Cocos2d/Cocos2d-x

[Image: Cocos2d-x game]

Cocos2d is an open source framework, with Cocos2d-x being a branch within it. It allows developers to exploit their existing C++, Lua, and JavaScript knowledge, and its ability to deploy cross-platform to Android, Windows Phone, Mac, and Windows saves significant cost and time for many developers. Many of the top-grossing mobile games are made with Cocos2d-x, and many of the leading game studios create games using it. With its breadth of languages, usability, and functionality, Cocos2d-x really is a no-brainer. A potential downside for users is that it is entirely library-based, so everything happens in code: in order to set up scenes, everything has to be written by the developer. However, it is free, which is always a bonus.

Unity

[Image: A basic 2D game using Unity]

A curveball to end this list. Everyone has heard of Unity and the great 3D games you can create with it. However, with 4.3, it introduced a huge opportunity to develop 2D games on Unity’s game engine. Having used it, I found the ability to quickly set up a 2D environment and the ease of creating a basic game with professional tools very appealing. Coupled with the performance of Unity, it ensures that your games will be polished, and developers will be able to learn a bit of C# should they wish. It comes at a cost, either $1,500 up front or $75 a month (that’s if you want to go professional – it’s free to dabble with), which for some is a stretch, and it is easily the most expensive option on this list.

This list provides a personal view on what I consider great 2D frameworks to get to grips with. It is not a definitive list; there are others out there that also do the job that these do. However, I hope I have provided some balance and some alternatives that people might not have considered before.

Read 'Tappy Defender - Building the home screen' to start developing your own game in Android Studio.

5 Alternative Microboards to Raspberry Pi

Ed Bowkett
30 Oct 2014
4 min read
This blog will show you five alternative boards to the Raspberry Pi that are currently on the market and that you might not have considered before, as they aren’t as well known. There are others out there, but these are the ones that I’ve either dabbled in or have researched and am excited about.

Hummingboard

[Figure 1: Hummingboard]

The Hummingboard has been argued to be more powerful than a Raspberry Pi, and the numbers certainly do seem to support this: 1 GHz vs 700 MHz, and more RAM in the available models, varying from 512MB to 1GB. What’s even better with the Hummingboard is the ability to take out the CPU and memory module should you need to upgrade them in the future. It also allows you to run many open source operating systems such as Debian, XBMC, and Android. However, it is also more costly than a Raspberry Pi, coming in at $55 for the 512MB model and a pricey $100 for the 1GB model. That said, I feel the performance per cost is worth it, and it will be interesting to see what the community does with the Hummingboard.

Banana Pi

[Figure 2: Banana Pi]

Whilst some people might look at the name of the Banana Pi and assume it is a clone of the famous Raspberry Pi, it’s actually even better. With 1GB of RAM and a dual-core processor running at 1 GHz, it’s even more powerful than its namesake (albeit still a fruit). It includes an Ethernet port, a micro-USB port, and a DSI for graphics, and can run Android, Ubuntu, and Debian, as well as Raspberry Pi and Cubieboard images. If you are seeking to upgrade from a Raspberry Pi, this is quite possibly the board to go for. It will set you back around $50 but, again, when you consider the performance you get for the price, this is a great deal.

Cubieboard

[Figure 3: Cubieboard]

The Cubieboard has been around for a couple of years now, so it can be considered an early-adoption board. Nonetheless, the Cubieboard is very powerful: it runs a 1 GHz processor, has an extra infrared sensor (which makes it good for use as a media center), and also comes with a SATA port. One compelling point the Cubieboard has, along with its performance, is its cost: it comes in at just $49. Considering the Raspberry Pi sells at $35, this is not much of a price leap, and it gives you much more zing for your bucks. Initially, of course, Arduino and Raspberry Pi had huge communities whereas Cubieboard didn’t; however, this is changing, and hence the Cubieboard deserves a mention.

Intel Galileo

[Figure 4: Intel Galileo]

Arduino was one of the first boards to be sold to the mass market. Intel took this and developed their own boards, which led to the birth of the Intel Galileo. Arduino-certified, this board combines Intel technology with ready-made expansion cards (shields) as well as Arduino libraries. The Galileo can be programmed from OS X, Windows, and Linux. However, a real negative for the Galileo is its performance, coming in at just 400 MHz. This, combined with its $70 cost, makes it one of the weakest in terms of price-performance on this list. However, if you want to develop on Windows with the relative safety of Arduino libraries, this is probably the board for you.

Raspberry Pi Pad

OK, OK. I know this isn’t strictly a microboard. However, the Raspberry Pi Pad was announced on the 21st of October, and it’s a pretty big deal. Essentially, it’s a touchscreen display that will run on a Raspberry Pi – so you can, in essence, build a Raspberry Pi tablet. That’s pretty impressive and awesome at the same time. I think this will be the thing to watch out for in 2015, and it will be cool to see what the community makes of it.

This blog covered alternative microboards that you might not have considered before. It threw in a curveball at the end and generally tried to suggest different boards beyond the usual Raspberry Pi, Beaglebone, and Arduino.

About the author

Ed Bowkett is Category Manager of Game Development and Hardware at Packt Publishing. When not imagining what the future of games will be in 5 years’ time, he is usually researching how to further automate his home using the latest ARM boards.

Responsive Web Design is Hard

Ed Gordon
29 Oct 2014
7 min read
Last week, I embarked on a quest to build my first website, one that would simultaneously deliver on two puns: I would “launch” my website with a “landing” page of a rocket sailing across the stars. On my journey, I learned SO much that it probably best belongs in a BuzzFeed list.

7 things only a web dev hack would know

1. “Position” is a thing no one on the Internet knows about. You change the attribute until it looks right, and hope no one breaks it.
2. The Z-index has a number randomly ascribed until the element goes where you want.
3. CSS animations are beyond my ability as someone who’s never really written CSS before. So is parallax scrolling. So is anything other than ‘width: x%’.
4. Hosting sites ring you. All the time. They won’t leave you alone.
5. The more tabs you have open, the better you are as a person.
6. Alt+Tab is the best keyboard hack ever.
7. Web development is 60% deleting things you once thought were integral to the design.

So, I bought a site, jslearner.com (cool domain, right?), included the boilerplate Bootstrap CDN, and got to work.

Act I: Design, or, ‘how to not stick to the plan’

Web design starts with the design bit, right? My initial drawing, like all great designs, was done on the back of an envelope that contained relatively important information. (Author’s note: I’ve now lost the envelope because I left it in the work scanner. Please can I get it back?!)

As you can clearly see from that drawing, I had a strong design aesthetic for the site right from the off. The rocket (bottom left) was to travel along the line (line for illustration purposes only) and correct itself, before finally landing on a moon that lay across the bottom of the site. In a separate drawing, I’d also decided that I needed two rows consisting of three columns each, so that my rocket could zoom from bottom left to top right, and back down again. This will be relevant in about 500 words.

Confronting reality

I’m a terrible artist, as you can see from my hand-drawn rocket. I have no eye for design. After toying with drawing the assets myself, I decided to buy them. The pack I got from Envato, however, came as a PNG and a file I couldn’t open. So, I had to hack the PNG (puts on shades): I used Pixlr and magic-wanded the other planets away, leaving me with a pretty dirty version of the planet I wanted. After I had hand-painted the edges, I realised that I could just have magic-wanded the planet I wanted straight out. This wouldn’t be the last 2 hours I wasted.

I then had to get my rocket in order. Another asset paid for, and this time I decided to try to do it professionally. I got Inkscape, which is baffling, and pressed buttons until my rocket looked like it had come to rest. After some tweaking and flipping the light sources around, I was ready to charge triumphantly on to the next stage of my quest; the fell beast of design was slain. Development was going to be the easy part. My rocket would soar across the page, against a twinkling backdrop, and land upon my carefully crafted assets.

Act II: Development, or, ‘responsive design is hard’

My first test was to actually understand the Bootstrap column thingy… CSS transformations and animations would be taking a back seat in the rocket ship. These columns and rows were to hold my content. I added some rules to include the image of the planets and a background color of ‘space blue’ (that’s a thing, I assure you). My next problem was that the big planet wasn’t sitting at the bottom of the page. Nothing I could do would rectify this. The number of open tabs was increasing…

This was where I learned the value of using the Chrome/Mozilla developer tools to write rules and see what works. Hours later, I figured out that ‘fixed position’ and ‘100% width’ seemed to do the trick (see the sketch at the end of this post). At this point, the responsive element of the site was handling itself. The planets generally seemed to be fine when scaling up and down. So, the basic premise was set up. Now I just had to add the rocket. Easy, right?

Responsive design is really quite hard

When I had positioned my rocket neatly on my planet – using % spacing, of course – I decided to resize the browser. The rocket went literally everywhere. Up, down, to the side. This was bad. It was important to the integrity of my design for the rocket to sit astride the planet. The problem I was facing was that I just couldn’t get the element to stay in the same place whilst also adjusting its size. Viewed on a 17-inch desktop, it looked like the rocket was stuck in mid-air. Not the desired effect.

Act III: Refactoring, or, ‘sticking to the plan == stupid results’

When I ‘wireframed’ my design (in pencil on an envelope), for some reason I drew two rows. Maybe it’s because I was watching TV whilst playing Football Manager. I don’t know. Whatever the reason, the result of this added row was that when I resized, the moon stuck to its row, and the rocket went up with the top of the browser. Responsive design is as much about solid structure as it is about fancy CSS rules. Realising this point would cost me hours of my life.

Back to the drawing board. After restructuring the HTML bits (copy/paste), I’d managed to get the rocket and moon into the same div class. But it was all messed up again. Why tiny moon? Why?! Again, I spent hours tweaking CSS styles in the browser until I had something closer to what I was looking for: rocket on moon, no matter the size. I felt like a winner, listened to the Knight Rider theme song, and went to bed.

Act IV: Epiphany, or, ‘expectations can be fault tolerant’

A website containing four elements had taken me about 15 hours of work to make look ‘passable’. To be honest, it’s still not great, but it does work. Part of this is down to my own ignorance of speedier development workflows (design in the browser, use the magic wand, and so on). Another part of it is just how hard responsive design is. What I hadn’t realised was how much of responsive design depends on clever structure and markup. This clever structure doesn’t even start with HTML – for me, it started with a terrible drawing on the back of an envelope. The CSS part enables your ‘things’ to resize nicely, but without your elements in the right places, no amount of {z-index: -11049;} will make it work properly. It’s what makes learning resources so valuable; time invested in understanding how to do it properly is time well spent. It’s also why Bootstrap will help make my stuff look better, but will never on its own make me a better designer.
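And here is the promised sketch – a reconstruction of the kind of rule that finally pinned the moon to the bottom of the viewport, not the site's actual stylesheet:

/* Keep the moon glued to the bottom edge at any browser size */
.moon {
  position: fixed;  /* the 'fixed position' part */
  bottom: 0;
  left: 0;
  width: 100%;      /* the '100% width' part */
}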

Exploring Module Development in AngularJS

Patrick Marabeas
29 Oct 2014
5 min read
This started off as an article about building a simple ScrollSpy module. Simplicity got away from me, however, so I'll focus on some of the more interesting bits and pieces that make this module tick! You may wish to have the completed code with you as you read this to see how it fits together as a whole – as well as the missing code and logic.

Modular applications are those that are "composed of a set of highly decoupled, distinct pieces of functionality stored in modules" (Addy Osmani). By having loose coupling between modules, the application becomes easier to maintain and functionality can be easily swapped in and out. As such, the functionality of our module will be strictly limited to the activation of one element when another is deemed to be viewable by the user. Linking, smooth scrolling, and other features that navigation elements might have won’t be covered.

Let's build a ScrollSpy module!

Let's start by defining a new module. Using a chained sequence rather than declaring a variable for the module is preferable, so you don't pollute the global scope. This also saves you when other modules have used the same var.

'use strict';

angular.module('ngScrollSpy', []);

I'm all about making modules that are dead simple to implement for the developer. We don’t need superfluous parents, attributes, and controller requirements! All we need is:

- A directive (scrollspyBroadcast) that sits on each content section and determines whether it's been scrolled to (active and added to the stack) or not.
- A directive (scrollspyListen) that sits on each navigation (or whatever) element and listens for changes to the stack – triggering a class if it is the current active element.
- A factory (SpyFactory) to deal with the stack (adding to it, removing from it, and broadcasting change).

The major issue with a ScrollSpy module (particularly in Angular) is dynamic content. We could use MutationObservers – but they aren't widely supported, and polling is just bad form. Let's just leverage scrolling itself to update element positions. We could also take advantage of $rootScope.$watch to watch for any digest calls received by $rootScope, but it hasn't been included in the version this article will link to.

To save every single scrollspyBroadcast directive from calculating the document height and window positions/heights, another factory (PositionFactory) will deal with these changes. This will be done via a scroll event in a run block.

[Image: a basic visualization of how the module's pieces interact]

Adding module-wide configuration

By using value, provider, and config blocks, module-wide configuration can be implemented without littering our view with data attributes, having a superfluous parent wrapper, or the developer needing to alter the module file.

The value block acts as the default configuration for the module:

.value('config', {
  'offset': 200,
  'throttle': true,
  'delay': 100
})

The provider block allows us to expose an API for application-wide configuration. Here we are exposing config, which the developer will be able to set in a config block:

.provider('scrollspyConfig', function() {
  var self = this;
  this.config = {};
  this.$get = function() {
    var extend = {};
    extend.config = self.config;
    return extend;
  };
  return this;
});

The user of the ScrollSpy module can now implement a config block in their application. The scrollspyConfig provider is injected into it (note: the injected name requires "Provider" on the end), giving the user access to manipulate the module's configuration from their own codebase.

theDevelopersFancyApp.config(['scrollspyConfigProvider', function(scrollspyConfigProvider) {
  scrollspyConfigProvider.config = {
    offset: 500,
    throttle: false,
    delay: 100
  };
}]);

The value and provider blocks are injected into the necessary directive – config being extended with the application settings (scrollspyConfig.config):

.directive('scrollspyBroadcast', ['config', 'scrollspyConfig', function(config, scrollspyConfig) {
  return {
    link: function() {
      angular.extend(config, scrollspyConfig.config);
      console.log(config.offset); // 500
      ...

Updating module-wide properties

It wouldn't be efficient for all directives to calculate generic values such as the document height and the position of the window. We can put this functionality into a service, inject it into a run block, and have it call for updates upon scrolling.

.run(['PositionFactory', function(PositionFactory) {
  PositionFactory.refreshPositions();
  angular.element(window).bind('scroll', function() {
    PositionFactory.refreshPositions();
  });
}])

.factory('PositionFactory', [ function(){
  return {
    'position': [],
    'refreshPositions': function() {
      this.position.documentHeight = // logic
      this.position.windowTop = // logic
      this.position.windowBottom = // logic
    }
  }
}])

PositionFactory can now be injected into the required directive:

.directive('scrollspyBroadcast', ['config', 'scrollspyConfig', 'PositionFactory', function(config, scrollspyConfig, PositionFactory) {
  return {
    link: function() {
      console.log(PositionFactory.position.documentHeight); // 1337
      ...

Using original element types

<a data-scrollspy-listen>Some text!</a>
<span data-scrollspy-listen>Some text!</span>
<li data-scrollspy-listen>Some text!</li>
<h1 data-scrollspy-listen>Some text!</h1>

These should all be valid. The developer shouldn't be forced to use a specific element when using the scrollspyListen directive, nor should the view fill with superfluous wrappers to allow the developer to retain their original elements. Fortunately, the template property can take a function (which takes two arguments, tElement and tAttrs). This gives access to the element prior to replacement. In this example, transclusion could also be replaced by using element[0].innerText instead. This would remove the added child span that gets created.

.directive('scrollspyListen', ['$timeout', 'SpyFactory', function($timeout, SpyFactory) {
  return {
    replace: true,
    transclude: true,
    template: function(element) {
      var tag = element[0].nodeName;
      return '<' + tag + ' data-ng-transclude></' + tag + '>';
    },
    ...

Show me all of it!

The completed codebase can be found over on GitHub. The version at the time of writing is v3.0.0.

About the Author

Patrick Marabeas is a freelance frontend developer who loves learning and working with cutting-edge web technologies. He spends much of his free time developing Angular modules, such as ng-FitText, ng-Slider, and ng-YouTubeAPI. You can follow him on Twitter @patrickmarabeas.

Change the World with Laziness - The Case for Building Tiny Business Websites

Sarah C
26 Sep 2014
5 min read
Most businesses don’t have websites... Seriously, it’s true. More than half of all small businesses have no dedicated web space, and those small businesses make up the vast majority of all enterprises. Despite the talented companies offering affordable services for businesses. Despite every high-traffic business blog and magazine haranguing, cajoling, or gawping in dismay at how stupid this looks to anyone who knows anything about the modern customer. Despite it all, adoption remains staggeringly low.

I could link you to lots of statistics on this, but I won’t. I don’t need to – you already know. We’ve all been there. Where’s the nearest fishmonger? Is the florist open after five? Maybe you need a dohicky to make the washing machine connect to the whatsit, so you take a 45-minute round trip to the big hardware store. Later you find out that there was a Google-dodging shop selling the whatsit-dohickies two minutes away from your house. (Go on, ask me how I spent my weekend.)

Why is there still such a huge disparity between how customers and businesses behave? Well, let’s look at it from the other side. What can you – yes, you, person with techy acumen – do to help local businesses in a global and virtual world? I’m beginning to think we could change the world in a lunch break simply by being lazier.

First, think small - really small

There are a lot of fantastic sites out there offering hassle-free website solutions for medium and large businesses. But chances are the store or service you’re going out of your mind trying to track down is really tiny. One-man-with-a-van kind of tiny. The number of employees in your average small business in America? Probably three. They’re busy. They have a full workflow already. So why aren’t we offering them the bare-bones solutions they need?

Lose the backend

A lot of boxed solutions offer a simple CMS, even in their most basic standard sites. And I’ll own up to it myself – when my sister needed a website for her start-up and turned to me because I “know e-mail stuff”, I groused, complained, and did my sisterly duty with a quick WordPress setup. But here’s the thing – to somebody already out of their element, a CMS is effort to learn and work to maintain. It becomes a hassle, and then a point of guilt and resentment. Very quickly it’s a bug, not a feature, as the site’s year-round Christmas greeting remains trapped in a mire of forgotten logins. A lot of businesses know they need a website. What we forget to tell them is that securing their own domain with a basic single page is better than nothing – at least until they’re ready to level up.

Embrace the human database

E-commerce software offers fantastic options for stock control and listing services. Seriously – it’d make you weep with pride how awesome the things developers have created for business websites are. Carousels, lightboxes, stock-tracking, integration with ordering systems: web developers are so damn clever. Be proud, be inspired. Now put all that aside and embrace the fact that small businesses are more likely to succeed running “LoisDB”. “Lois” is the woman who has worked there since the start. She answers the phones. She knows where they put that stock that they had to move because it was blocking the door. Lois doesn’t scale and has terrible latency issues around lunchtime. But on the other hand, she’s ahead of the game on natural-language recognition and ad-hoc querying. Ditch the database and make Lois part of your design plan. Which takes us to:

The single most important element of any tiny business website

When you cut through it all, there’s really only one indispensable element of a tiny business website: a button that phones the shop. It’s the work of a minute to make a responsive button that will ring a business from your mobile device, and yet it is the simplest way to gain all the information you need without fiddling around with any clunky UI or anachronistic Christmas greetings. If you’ve got an extra thirty seconds to spare, you could even add a “callto” option for Skype.

Let Google do the rest of the work for you

Okay, there may be one other crucial element for a bricks-and-mortar store – a map. So add it to the front (and possibly only) page. But use the Google Maps API and let the search engine that let you down so bitterly in the first place do the hard work. As an extra bonus, Google will also turn up any Twitter feed or Facebook account the business might be running on the side in the same search. Maybe that’s close enough without any need to integrate them at all.

The idea of such bad practice might bring you out in hives. It’s not a replacement for good websites. But it’s a way of on-boarding the stubbornly intractable with a bare minimum of effort on everyone’s part. Later we can stir ambitions with words like SEO and dynamic content. For now, if those with the talent and skill were sometimes willing to do a patchy job, we might change the world for the benefit of all customer-kind*.

* Me
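For the record, that indispensable call button needs nothing more than standard tel: (and, for the Skype option, callto:) links. A sketch, with a placeholder number:

<!-- Tapping this on a phone dials the shop; the number is a placeholder -->
<a href="tel:+441134960000">Call us now</a>

<!-- The optional Skype variant -->
<a href="callto:+441134960000">Call us on Skype</a>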

What 6 Months with an Open Source 3D Printer Taught Me

Michael Ang
26 Sep 2014
7 min read
3D printing is certainly a hot topic today, and having your own printer at home is becoming increasingly popular. There are a lot of options to choose from, and in this post I'll talk about why I chose to go with an open source 3D printer instead of a proprietary pre-built one, and what my experience with the printer has been. By sharing my 6 months of experience I hope to help you decide which kind of printer is best for you. My Prusa i3 Berlin 3D printer after 6 months Back in 2006 I had the chance to work with a 3D printer when the thought of having a 3D printer at home was mostly a fantasy. The printer in question was made by Stratasys, at the Eyebeam Art+Tech center in New York City. That printer cost upwards of $30,000—not exactly something to have at your house! The idea of doing something wrong with the printer and having to call a technician in to fix it was also a little intimidating. (My website has some of my early experiments with 3D printing.) Flash forward to today and there are literally dozens (or probably hundreds) of 3D printer designs available on the market. The designs range from high-end printers that can print plastic with embedded carbon fiber, to popular designs from MakerBot and DIY kits on eBay. One of the first low-cost 3D printers was the RepRap. The goal of the RepRap project is to create a self-replicating machine, where the parts for the machine can be fabricated by the machine itself. In practice this means that many of the parts of a RepRap-style 3D printer are actually printed on a RepRap printer. Most people who build RepRap printers start with a kit and then assemble the printer themselves. If the idea of a self-replicating machine sounds interesting, then RepRap may be for you. RepRap is now more of a philosophy and community than any specific printer. Once you assemble your printer you can make changes and upgrades to the machine by printing yourself new parts. There are certainly some challenges to building your own printer, though, so let's look at some of the advantages and disadvantages of going with an open source printer (building from a kit) versus a pre-packaged printer. Advantages of a pre-assembled commercial printer: Should print right out of the box Less tinkering needed to get good prints Each printer of a particular model is the same, making it easier to get support Advantages of an open source (RepRap-style) kit: Typically cheaper than pre-built Learn more about how the printer works Easier to make changes to the machine, and complete plans are available Easier to experiment with, for example different printing materials Disadvantages to pre-assembled: Making changes may void your warranty Typically more expensive May be locked into specific software or filament Disadvantages of open source: Can take a lot of work to get good prints Potentially lots of decisions to make, not pre-packaged May spend as much time on the machine as actually printing Technical differences aside, the idea of being part of an open source community based on the freedom to share knowledge and designs was really appealing. With that in mind I had a look at different open source 3D printer designs and capabilities. Since the RepRap designs are open source, anyone can modify them and create a "new" printer. In the end I settled on a variation of the Prusa i3 RepRap printer that is designed in Berlin, where I live. The process of getting a RepRap printer working can be challenging, because there's so much to learn at first. 
The Prusa i3 Berlin can be ordered as a kit with everything needed to build the printer, together with a workshop where you build the printer with the machine's designers over the course of a weekend. Two days to build a working 3D printer from a pile of parts? Yes, it can be done!

Most of the parts in the printer kit

Building the printer at the workshop saved an incredible amount of time. Questions like "does this look tight enough?" and "how does this part fit in here?" were answered on the spot. There are very active forums for RepRap printers with lots of people willing to help diagnose problems, but a few questions with even a one-day turnaround time quickly add up. By the end of the two days my printer was fully assembled and actually printed out a little plastic robot! This was pretty satisfying, knowing that the printer had started the weekend as a bundle of parts.

Quite a lot of wires

Assembling the plastic extruders

Thus began my 6-month (so far) adventure in 3D printing. It has been an awesome and at times frustrating journey. I mainly bought my printer to create connectors for my Polygon Construction Kit (Polycon). I'm printing connectors that assemble with some rods to make structures much larger than could be printed in one piece. My printer has been working well for that, but the main issue has been reliability and the need for continual tweaking. Instead of just "hitting print" there is a constant struggle to keep everything lined up and printing smoothly. Printing on my RepRap is a lot more like baking a soufflé than ordering a burger.

Completed printer in my studio

Some highlights of the journey so far:

- Printing parts strong enough to assemble some of my Polycon sculptures and show them at an art show in Berlin
- Designing my own accessories for the printer and having them downloaded more than 1,000 times on Thingiverse (not bad for some rather specialized tools)
- Printing upgrades for the printer, based on the continually updated source files
- Being able to get replacement parts at the hardware store when one of the long threaded rods in the printer wore out

Sculpture with 3D printed connectors. Image courtesy of Lehrter Siebzehn.

And the lowlights:

- Never quite knowing if a print is going to complete successfully (though this can be a problem with many printers)
- Having enough trouble getting my first extruder working reliably for long prints that I haven't had time to get dual-extrusion prints working

Accessory I designed for calibrating the printer, which I then shared with others

As time goes on and I keep working on the printer, it's slowly getting more reliable, and I'm able to do more complicated prints without constant intervention. The learning process has been valuable too: I'm now able to look at basically every part of the machine and understand exactly what it's supposed to do. Once you really understand how a 3D printer works, you start to wonder what kinds of upgrades are possible, or what other kinds of machines you could design.

Printed upgrade parts

A pre-packaged printer makes a lot of sense if you're mostly interested in printing things. The learning process of building your own printer can either be interesting or a frustrating obstacle, depending on your point of view. When you look at a print from your RepRap printer, it's incredible to consider that it is all built off the contributions and shared knowledge of a large community. If you're not just interested in making things, but in making things that make things, then a RepRap printer might be for you!
Upgraded printer with polygon sculpture

About the author: Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical worlds by translating simple 3D models into physical structures.

New Languages: Brave New World

Akram Hussain
26 Sep 2014
6 min read
The tech world has seen a number of languages emerge, grow, and become super popular, but equally it has seen its fair share of failures and things that make you ask yourself "just why?" We initially had the dominant set of languages introduced to us many years ago (in the 80s), which are still popular and widely used; these include C++, C, Fortran, Erlang, Perl, SQL, Objective-C, and so on. There is nothing to suggest these languages will die out completely or even lose their market share, but the world of programming really came to life in the 90s, in an era known as the "Internet age", when a new set of languages came to the party.

During this period a set of "special" languages emerged, and I personally would go as far as to say they revolutionized the way we program. These were languages like JavaScript, Python, Java, R, Haskell, Ruby, and PHP. What's more interesting is that you still see a huge demand for these languages on the market (even after 20 years!), and you certainly wouldn't categorize them as new; so why are they still so popular? Has the tech market stalled? Have developers not moved on? Do we have everything we need from these languages? And what's next for the future?

The following image helps explain the introduction and growth of these languages, in terms of use and adoption; it's based on RedMonk's analysis, which compares the popularity of Stack Overflow tags and GitHub repositories:

This graph shows a number of languages, with movement from left to right being positive. It's apparent that the languages introduced in the 90s are at the forefront of programming; there are even a few from the 80s, which supports my earlier statement that older languages don't die out but seem to improve and adapt over time.

However, with time and the ever-changing tech market, new demands always arise, and where there are problems, there are developers with solutions. Over the past few years we have seen the emergence of new programming languages. Interestingly they seem to be very similar to the older ones, but they have that extra oomph that makes them so popular. I would like to introduce you to a set of languages that may be around for many years to come and may even shape the tech world in the future. Could they be the next generation?

Scala is a multi-paradigm programming language that supports both object-oriented and functional programming. It runs on the JVM and is used to build applications for it. Scala has seen increased adoption from Java developers thanks to its flexibility and its first-class support for functional programming. Could Scala replace Java?

Go, introduced by Google, is a statically-typed programming language with syntax similar to C. It has been compared to, and seen as a viable alternative to, major languages such as Java and C++. Where Go differs is its inherent support for concurrent programming: tasks execute independently and communicate with each other, whether they run on a single processor or across multiple cores.

Swift is Apple's new programming language, unveiled in June 2014. It is set to replace Objective-C as the lingua franca for developing apps for Apple operating systems. As a multi-paradigm language, it has expressive features familiar to those used to working with modern functional languages, while also keeping the object-oriented features of Objective-C.

F# is a multi-paradigm programming language that encompasses object-oriented features but is predominantly focused on functional programming. F# was developed by Microsoft as an alternative to C#, touted as a language that can do everything C# can, but better. The language was primarily designed with data-driven concepts in mind. One of the greatest benefits of using F# is its interoperability with other .NET languages: code written in F# can work with parts of an application written in C#.

Elixir is a functional programming language that leverages features of the Erlang VM and has syntax similar to Ruby. Elixir offers concurrency, high scalability, and fault tolerance, enabling higher levels of productivity and extensibility while maintaining compatibility with Erlang's tools and ecosystem.

Clojure is a dynamic, general-purpose programming language that runs on the Java Virtual Machine, offering interactive development with the speed and reliable runtime of the JVM. It takes advantage of Java libraries, services, and all of the resources of the JVM ecosystem.

Dart, introduced by Google, is a pure object-oriented language with C-style syntax. Developers look at Dart as a JavaScript competitor that offers simplicity, flexibility, better performance, and security. Dart was designed for web development and for scaling complex web applications.

Julia is an expressive and dynamic multi-paradigm language. It's as fast as C and can be used for general programming. It is a high-level language with syntax similar to MATLAB and Fortran. Julia is predominantly used in the field of data science, and it's one to keep an eye on, as it's tipped to rival R and Python in the future.

D is another multi-paradigm programming language that allows developers to write "elegant" code. There's demand for D because it's pitched as a genuine improvement over C++ while still offering all of C++'s benefits. D can be seen as a solution for developers who build half their application in Ruby or Python and then use C++ to deal with the bottlenecks: D lets you have the benefits of both of those languages.

Rust is another multi-paradigm systems language, developed by Mozilla, that has been touted as a valuable alternative to C++. Rust combines strong support for concurrent programming and low-level control with super-fast performance, making it ideal for high-level projects. Rust's type system minimizes memory errors, a problem that is common in C++ and a frequent source of leaks. For the moment Rust isn't designed to replace C++ but to improve on its flaws, yet with future advancements you never know…

From this list of new languages, it's clear that the majority of them were created to solve issues with previous languages. They are all variations on familiar languages, refined to meet the modern developer's needs. There has been an increase in support for functional languages, and there has also been a steep rise in multi-paradigm features, which suggests a need for flexible programming. Whether we're looking at the new "class" of languages for the future remains to be seen, but one thing is for sure: they were designed to make a difference in an increasingly new and brave tech world.

Arduino Yún - Welcome to the Internet of Things

Michael Ang
26 Sep 2014
6 min read
Arduino is an open source electronics platform that makes it easy to interface with sensors, lights, motors, and much more from a small standalone board. The Arduino Yún combines a standard Arduino micro-controller with a tiny Linux computer, all on the same board! The Arduino micro-controller is perfectly suited to interfacing with hardware like sensors and motors, while the Linux computer makes it easy to get onto the Internet and perform more intensive tasks. The combination really is the best of both worlds. This post will introduce the Arduino Yún and give you some ideas of the possibilities it opens up.

The key to the Yún is that it has two separate processors on the board. The first provides the normal Arduino functions, using an ATmega32u4 micro-controller. This processor is perfect for running "low-level" operations like driving timing-sensitive LED light strips or interfacing with sensors. The second processor is an Atheros AR9331 "system on a chip" of the kind typically used in WiFi access points and routers. The Atheros runs a version of Linux derived from OpenWrt and has built-in WiFi that lets it connect to a WiFi network or act as an access point. The Atheros is pretty wimpy by desktop standards (a 400MHz processor and 64MB of RAM), but it has no problem downloading web pages or accessing an SD card, for example: two tasks that would otherwise require extra hardware and be challenging for a standard Arduino board.

One selling point for the Arduino Yún is that the integration between the two processors is quite good, and you program the Yún using the standard Arduino IDE (currently you need the latest beta version). You can program the Yún by connecting it to your computer with a USB cable, but much more exciting is to program it over the air, via WiFi! When you plug the Yún into a USB power supply it will create a WiFi network with a name like "Arduino Yun-90A2DAF3022E". Connect to this network with your computer and you will be connected to the Yún! You'll be able to access the Yún's configuration page by going to http://arduino.local in your web browser, and you should be able to reprogram the Yún from the Arduino IDE by selecting the network connection in Tools > Port.

There's a new access point in town

Being able to reprogram the board over WiFi is already worth the price of admission for certain projects. I made a sound-reactive hanging light sculpture, and it was invaluable to be able to adjust and "dial in" the program inside the sculpture while it was hanging in the air.

Look ma, no wires! Programming over the air

The Bridge library for the Arduino Yún is used to communicate between the two processors. A number of examples using Bridge are provided with the Arduino IDE. With Bridge you can do things like control the pins on the Arduino from a webpage. For example, loading http://myArduinoYun.local/arduino/digital/13/1 in your browser could turn on the built-in LED. You can also use Bridge to download web pages, or run custom scripts on the Linux processor. Since the Linux processor is a full-blown computer with an SD card reader and USB, this can be really powerful. For example, you can write a Python script that runs on the Linux processor and trigger that script from your Arduino sketch.
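What does that look like in a sketch? Here's a minimal example in the spirit of the Bridge example that ships with the Arduino IDE: it accepts REST-style requests relayed from the Linux side and drives a digital pin. Treat it as an illustrative sketch rather than code from the article; for simplicity it only sets up pin 13.

```cpp
// Minimal REST-style pin control over Bridge, modeled on the Bridge
// example bundled with the Arduino IDE (1.5.x-era YunServer API).
#include <Bridge.h>
#include <YunServer.h>
#include <YunClient.h>

YunServer server;  // listens for requests forwarded by the Linux processor

void setup() {
  pinMode(13, OUTPUT);          // only pin 13 is configured in this sketch
  Bridge.begin();               // start communication with the Atheros
  server.listenOnLocalhost();   // only accept requests relayed by the webserver
  server.begin();
}

void loop() {
  YunClient client = server.accept();
  if (client) {
    // For http://myArduinoYun.local/arduino/digital/13/1 the part after
    // /arduino/ arrives here as "digital/13/1".
    String command = client.readStringUntil('/');
    if (command == "digital") {
      int pin = client.parseInt();       // e.g. 13
      if (client.read() == '/') {
        int value = client.parseInt();   // e.g. 1
        digitalWrite(pin, value);
        client.print("Pin ");
        client.print(pin);
        client.print(" set to ");
        client.println(value);
      }
    }
    client.stop();
  }
  delay(50);  // poll a few times per second
}
```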
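Triggering code on the Linux side is similarly compact, using the Bridge library's Process class. This is a sketch of the idea; the script path below is a hypothetical example, so substitute the location of your own script.

```cpp
// Run a script on the Linux processor and read back its output.
// The path /mnt/sda1/check_weather.py is made up for illustration.
#include <Bridge.h>
#include <Process.h>

void setup() {
  Bridge.begin();
  Serial.begin(9600);
  while (!Serial) { }  // wait for the serial monitor (the Yún is Leonardo-style)

  Process p;
  p.begin("python");
  p.addParameter("/mnt/sda1/check_weather.py");
  p.run();  // blocks until the script finishes

  // Relay whatever the script printed to the serial monitor.
  while (p.available() > 0) {
    Serial.print((char)p.read());
  }
}

void loop() {
}
```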
The Yún is ideally suited for the "Internet of Things". Want to receive an e-mail when your cat comes home? Attach a switch to your pet door and have your Yún e-mail you when it sees the door open. Want to change the color of an LED based on the current weather? Just have the Linux processor download the current weather from Yahoo! Weather while the ATmega micro-controller handles driving the LEDs. Temboo provides library code and examples for connecting to a large variety of web services.

The Yún doesn't include audio hardware, but because the Linux processor supports USB peripherals, it's easy to attach a low-cost USB sound card. This tutorial has the details on adding a sound card and playing an mp3 file in response to a button press. I used this technique for my piece Forward Thinking Sound at the Art Hack Day in Paris, which used a Yún to play modem sounds while controlling an LED strip. With only 48 hours to complete a new work from scratch, being able to get an mp3 playing from the Yún in less than an hour was amazing!

Yún with USB sound card, speakers and LED strip. Forward Thinking Sound at Art Hack Day Paris.

The Arduino Yún is a different beast than the Raspberry Pi and BeagleBone Black. Where those boards are best thought of as small computers (with video output, built-in audio, and so on), the Yún is best thought of as the combination of an Arduino board and a WiFi router that can run some basic scripts. The Yún is unfortunately quite a bit more expensive than a standard Arduino board, so you may not want to dedicate one to each project. The experience of programming the Yún is generally quite good: the Arduino IDE and Bridge library make it easy to use the Yún as a "regular" Arduino and ease into the network/Linux features as you need them. Once you can program your Arduino over WiFi and connect to the Internet, it's a little hard to go back!

About the author: Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical realms by constructing real-world objects from simple 3D models. He is one of the organizers of Art Hack Day, an event for hackers whose medium is art and artists whose medium is tech.

Destiny: pulp fiction

Ed Gordon
26 Sep 2014
5 min read
Ideas in science fiction matter. Time and place in science fiction are never "our time" and "our place". The location and time frame serve to create a canvas for discussion of contemporary issues; everything from The Time Machine onwards talks about issues bigger than "how big guns get in the future". Science fiction talks about alienation from nature (We), about detachment from the self (Nineteen Eighty-Four), about class (Brave New World), reality (anything by Philip K. Dick), and societal structure and development (A Canticle for Leibowitz); anything that affects readers, science fiction finds a way to talk about it. Destiny, however, talks about nothing, and ideas don't matter to it. It's the empty vacuum of space and little more than cheap pulp fiction.

There's a line towards the end of the terrible, shallow, incomprehensible, and disappointing story mode of Destiny that succinctly sums up the game's issues. In a neat twist of dramatic irony, the third character of the story (maybe the fourth, if we count Bill Nighy) says to your hero, "You want to turn it into a battleground? How... unimaginative." It's a beautiful moment, but one that I'm not quite sure landed with the game's script writers. This one line expertly sums up all that is wrong with Destiny. It encompasses the lost potential of the game's environments, the terribly dull and uninspired lore (The Traveler, The Fallen, The Awoken; I'm a fan of "everyman" fiction, but it comes across as lazy), and the panic-inducing, strangely moreish gameplay that keeps you going back for one more terrible mission. Destiny has taken the canvas that science fiction gave it and its developers crafted, and declined to do anything more than turn it into a never-ending battleground.

Judging books by their covers

It's not all bad for Destiny. It was built with a custom, in-house game engine. It uses Havok for its physics. It's a highly engineered piece of work, one which brings the best out of the old-gen systems and hints at what's possible on the new-gen systems. It looks great. It's an example of how to craft "time and place" in a game; art assets, lighting, and effects all work together to make an intriguing canvas to paint your Destiny onto. Yes, the rusted cars are a clichéd narrative hangover from Iraq's gruesome Highway of Death. Yes, nothing screams trite post-apocalypse more than a rusted factory inhabited by weird non-humans. But I challenge anyone to walk through the missions and not be impressed by the feel of the place.

The gameplay mechanics are also, by and large, the best you'll find in an FPS. I felt more in control and more in command than I ever have (this was, in part, due to the fact that I'm a few levels higher than my enemies, which can actually take away a lot of the challenge of defeating the hordes of monsters thrown at you...). The game's mechanical structure and overall development quality are superb. You should check it out (rental, preferably). The in-house engine looks swell, and the music is probably one of the best soundtracks I've heard in a game; it's truly epic. Well done, Paul McCartney. And well done, developers. You made a good game. It's just that the writers let you down.

I've got a bad feeling about this…

Packt do amazing game development books. We've just released a new raft of Unity books to help people make more awesome games for different platforms. We've been publishing game development books for five years, and will continue for a long time yet.
We've always aimed to provide the skills you need to create games. Our books teach you sprites, animations, GUI development, 2D, 3D, how to create multiplayer games, and much more besides. They don't, however, teach you ideas. All our books require you to have an idea for a game. We can arm you with the skills, but unless you want a clone of an FPS lifted straight from the book, you're going to need to bring your own twist of originality. Our seminal Unity book, Unity Game Development Essentials, hit upon this in its tagline: "If you have an idea for a game but lack the skills to create it, this book is the perfect introduction."

The most recent content release, a Raid for level 26 super Guardians, is a great example of how few ideas Destiny brings to the table. It's a mission that takes 10 hours of playing to complete and requires five other level 26 friends. It took 1,606 deaths (2 minutes a life) for an expert clan to complete it. This is not innovation. It's just more. More of the same bullet-sponge enemies, more of the same gameplay, for more time. It's more of your life that you aren't going to get back.

The whole thing just reeks of bad ideas and bored story-telling. Against the backdrop of the most expensive scenes ever created, and to the strains of sublime music, Destiny forgot the most important ingredient in the game development (and science fiction) recipe: great ideas. There's a paucity of quality ideas throughout that even Bungie, with some of the most talented developers in the world, can't hide. Destiny isn't a bad game; it's just a game that forgot that the first step of game development is a good idea.