
Tech Guides


Top 5 Newish JavaScript Libraries (That Aren't AngularJS...)

Ed Gordon
30 Jul 2014
5 min read
AngularJS is, like, so 2014. Already the rumblings have started that there are better ways of doing things. I thought it prudent to look into the future to see what libraries are on the horizon for web developers now, and in the future.

5. Famo.us

“Animation can explain whatever the mind of man can conceive.” - Walt Disney

Famo.us is a clever library. It’s designed to help developers create application user interfaces that perform well; as well, in fact, as native applications. In a moment of spectacular out-of-the-box thinking, Famo.us brings with it its own rendering engine to replace the one that browsers supply. To get the increase in performance from HTML5 apps that they wanted, Famo.us looked at which tech does rendering best: game technologies such as Unity and Unreal Engine. CSS is moved into the framework and written in JavaScript instead, which makes transformations and animations quicker. It’s a new way of thinking for web developers, so you’d best dust off the Unity rendering tutorials. Famo.us makes things running in the browser as sleek as they’re likely to be over the next few years, and it’s massively exciting for web developers.

4. Ractive

“The meeting of two personalities is like the contact of two chemical substances: if there is any reaction, both are transformed.” - Carl Jung

Manipulating the Document Object Model (which ties together all the web pages we visit) has been the major foe of web developers for years. MooTools, YUI, jQuery, AngularJS, Famo.us, and everything in between have offered developers productivity solutions to let them manipulate the DOM to their clients’ needs more expediently. One of the latest libraries to help DOM manipulators at large is Ractive.js, developed by the team at The Guardian (well, mainly one guy: Rich Harris). Its focus remains on UI, so while it borrows heavily from Angular (it was initially called AngularBars), it’s a simpler tool at heart. Or at least, it approaches the problems of DOM manipulation in as simple a way as possible. Ractive is part of the reactive programming direction that JavaScript (and programming generally) seems to be heading in at the moment.

3. DC.js

“A map does not just chart, it unlocks and formulates meaning; it forms bridges between here and there, between disparate ideas that we did not know were previously connected.” - Reif Larsen, The Selected Works of T.S. Spivet

DC.js, borrowing heavily from both D3 and Crossfilter, enables you to visualize linked data through reactive (a theme developing in this list) charts. I could try to explain the benefits in text, but sometimes it’s worth just going and having a play around (after you’ve finished this post). It uses D3 for the visualization, so everything’s in SVG, and uses Crossfilter to handle the underlying linkage of data. For a world of growing data, it provides users with immediate and actionable insight, and is well worth a look. This is the future of data visualization on the web.

2. Lo-dash

“The true crime fighter always carries everything he needs in his utility belt, Robin.” - Batman

There’s something appealing about a utility belt; it has called to all walks of life, from builders to Batman, ever since man had more than one tool at his disposal. Lo-dash, and the Underscore.js that came before it, is no different. It’s a library of useful JavaScript functions that abstract away some of the pain of JS development, whilst boosting performance over Underscore.js. It’s based on Underscore, which at the time of writing is the most depended-upon library in Node, but it builds on the good parts and gets rid of the not so good. Lo-dash will take over from Underscore in the near future. Watch this space.

1. Polymer

“We are dwarfs astride the shoulders of giants. We master their wisdom and move beyond it. Due to their wisdom we grow wise and are able to say all that we say, but not because we are greater than they.” - Isaiah di Trani

As with a lot of things, rather than trying to reinvent solutions to existing problems, Google is trying to reinvent the things that lead to the problem. Web Components is a W3C standard that’s going to change the way we build web applications for the better, and Polymer is the framework that allows you to build these Web Components now. Web Components envision a world where, as a developer, you can select a component from the massive developer shelf of the Internet, call it, and use it without any issues. Polymer provides access to these components; a UI component such as a clock (JavaScript that’s beyond my ability to write, at least, and a time-sink for normal JS developers) can be called with:

<polymer-ui-clock></polymer-ui-clock>

This gives you a pretty clock that you can customize further if you want. Essentially, Web Components put you in a dialog with the larger development world; no longer needing to craft solutions for your single project, you can use and reuse components that others have developed. It allows us to stand on the shoulders of giants. The standard is still some way off, but it’s going to redefine what application development means for a lot of people, and enable a wider range of applications to be created quickly and efficiently.

“There's always a bigger fish.” - Qui-Gon Jinn

There will always be a new challenger, an older guard, and a bigger fish, but these libraries represent the continually changing face of web development. For now, at least!
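The utility-belt idea from the Lo-dash section is easy to make concrete. Here is a hand-rolled sketch, in plain JavaScript, of two helpers in the spirit of Lo-dash’s pluck and uniq; the real library versions are faster and handle far more edge cases, and the example data is invented:

```javascript
// Hand-rolled sketches of two classic utility-belt helpers.
// Lo-dash's real _.pluck and _.uniq are faster and more robust.

// pluck: extract one property from every object in a collection
function pluck(collection, key) {
  return collection.map(function (item) {
    return item[key];
  });
}

// uniq: remove duplicate values, keeping first occurrences
function uniq(values) {
  var seen = [];
  values.forEach(function (value) {
    if (seen.indexOf(value) === -1) {
      seen.push(value);
    }
  });
  return seen;
}

// Invented example data
var libraries = [
  { name: 'Famo.us', stars: 5 },
  { name: 'Ractive', stars: 4 },
  { name: 'Lo-dash', stars: 4 }
];

console.log(pluck(libraries, 'name'));        // ['Famo.us', 'Ractive', 'Lo-dash']
console.log(uniq(pluck(libraries, 'stars'))); // [5, 4]
```

Nothing magical, but multiply this by a couple of hundred well-tested, performance-tuned functions and you have a utility belt worth wearing.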


DevOps: An Evolution and a Revolution

Julian Ursell
24 Jul 2014
4 min read
Are we DevOps now? The system-wide software development methodology that breaks down the problematic divide between development and operations is still at the stage where enterprises implementing the idea are probably asking that question, working out whether they've reached the endgame of effective collaboration between the two spheres and a systematic automation of their IT service infrastructure.

Considered the natural evolution of Agile development and practices, DevOps is rapidly gaining traction and adoption in significant commercial enterprises, and we're very likely to see a saturation of DevOps implementations in serious businesses in the near future. The benefits of DevOps for scaling and automating the daily operations of businesses are wide-reaching and increasingly crucial, both for enabling rapid software development and for delivering products to clients who demand ever more frequent releases of up-to-date applications. The movement towards DevOps systems runs in close synchronization with the growing demand to experience and access everything in real time, as it produces the level of infrastructural agility needed to roll out release after release with minimal delays. DevOps has been adopted prominently by big hitters such as Spotify, who embraced the DevOps culture throughout the formative years of their organization and still hold to this philosophy now.

The idea that DevOps is an evolution is not a new one. However, there’s also the argument to be made that the actual evolution from a non-DevOps system to a DevOps one entails a revolution in thinking. From a software perspective, DevOps has inspired a minor technological revolution, spawning multiple technologies geared towards enabling DevOps workflows. Docker, Chef, Puppet, Ansible, and Vagrant are all key tools in this space and vastly increase the productivity of developers and engineers working with software at scale.

However, it is one thing to mobilize DevOps tools and implement them physically into a system (not easy in itself), but it is another thing entirely to turn the thinking of an organization around to a collaborative culture where developers and administrators live and breathe in the same DevOps atmosphere. As a way of thinking, it requires a substantial cultural overhaul and a breaking down of entrenched programming habits and the silo-ization of the two spheres. It's not easy to transform the day-to-day mindset of a developer so that they incorporate thinking in ops (monitoring, configuration, availability), or vice versa of a systems engineer so that they think in terms of design and development. One can imagine it is difficult to cultivate this sort of culture within a large enterprise with many large moving parts, as opposed to a startup, which may have the “day zero” flexibility to employ a DevOps approach from the roots up.

To reach the “state” of DevOps is a long journey, and one that involves a revolution in thinking. From a systematic as well as a cultural point of view, it takes a considerable amount of groundbreaking to shatter what is sometimes a monolithic wall between development and operations. But for organizations that realize they need the responsiveness to adapt to clients on demand, and that have the foresight to put in place system mechanics that let them scale their services in the future, the long-term benefits of a DevOps revolution are invaluable. Continuous and automated deployment, shorter testing times, consistent application monitoring and performance visibility, flexibility when scaling, and a greater margin for error all stem from a successful DevOps implementation. On top of that, a survey showed that engineers working in a DevOps environment spent less time firefighting and more productive time focusing on self-improvement, infrastructure improvement, and product quality.

Getting to a point where engineers can say “we’re DevOps now!” is, however, a bit of a misconception, because DevOps is more than a matter of sharing common tools, and there will be times when keeping the bridge between devs and ops stable and productive is challenging. There is always the potential for new engineers joining an organization to dilute the DevOps culture, and DevOps engineers don't grow overnight. It is an ongoing philosophy, and as much an evolution as it is a revolution worth having.


5 Go Libraries, Frameworks, and Tools You Need to Know

Julian Ursell
24 Jul 2014
4 min read
Golang is an exciting new language seeing rapid adoption in an increasing number of high-profile domains. Its flexibility, simplicity, and performance make it an attractive option for fields as diverse as web development, networking, cloud computing, and DevOps. Here are five great tools in the thriving ecosystem of Go libraries and frameworks.

Martini

Martini is a web framework that touts itself as “classy web development”, offering neat, simplified web application development. It serves static files out of the box, injects existing services in the Go ecosystem smoothly, and is tightly compatible with the HTTP package in the native Go library. Its modular structure and support for dependency injection allow developers to add and remove functionality with ease, and make for extremely lightweight development. Out of all the web frameworks to appear in the community, Martini has made the biggest splash, and has already amassed a huge following of enthusiastic developers.

Gorilla

Gorilla is a toolkit for web development with Golang and offers several packages implementing all kinds of web functionality, including URL routing, options for cookie and filesystem sessions, and even an implementation of the WebSocket protocol, integrating it tightly with important web development standards.

groupcache

groupcache is a caching library developed as an alternative (or replacement) to memcached, unique to the Go language, which offers lightning-fast data access. It allows developers managing data access requests to vastly improve retrieval time by designating a group of its own peers to distribute cached data. Whereas memcached is prone to letting a flood of clients trigger duplicate database loads, groupcache allows a single successful load to be multiplexed out to all the waiting clients in a huge queue of replicated processes. Libraries such as groupcache have great value in the Big Data space, as they contribute greatly to the capacity to deliver data in real time anywhere in the world, while minimizing the access pitfalls associated with managing huge volumes of stored data.

Doozer

Doozer is another excellent tool in the sphere of system and network administration, providing a highly available data store used for the coordination of distributed servers. It performs a similar function to coordination technologies such as ZooKeeper, and allows critical data and configurations to be shared seamlessly and in real time across multiple machines in distributed systems. Doozer allows the maintenance of consistent updates about the status of a system across clusters of physical machines, creating visibility about the role each machine plays and coordinating strategies for failover situations. Technologies like Doozer emphasize how effective the Go language is for developing tools that alleviate complex problems in distributed system programming and Big Data, where enterprise infrastructures are modeled around the ability to store, harness, and protect mission-critical information.

GoLearn

GoLearn is a new library that enables basic machine learning methods. It currently features several fundamental methods and algorithms, including neural networks, K-means clustering, naïve Bayesian classification, and linear, multivariate, and logistic regressions. The library is still in development, as are a number of other packages being written to give Go programmers the ability to develop machine learning applications in the language, such as mlgo, bayesian, probab, and neural-go.

Go’s continual expansion into new technological spaces such as machine learning demonstrates how powerful the language is for a variety of use cases, and that the community of Go programmers is starting to generate the kind of development drive seen in other popular general-purpose languages like Python. While libraries and packages are predominantly appearing for web development, we can see support growing for data-intensive tasks and the Big Data space. Adoption is already skyrocketing, and the next three years will be fascinating to observe as Golang is poised to conquer more and more key territories in the world of technology.
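The multiplexing trick groupcache uses is often called “single flight”: concurrent requests for the same key share one underlying load instead of each hammering the database. A toy sketch of the idea in JavaScript, for illustration only (groupcache itself is written in Go and does much more, including peer-to-peer cache filling):

```javascript
// Toy "single flight" loader: concurrent requests for the same key
// share one in-flight load instead of each hitting the database.
var inFlight = {};  // key -> pending Promise shared by all callers
var loadCount = 0;  // counts how many real loads actually happened

function slowLoad(key) {
  loadCount += 1;   // stands in for an expensive database query
  return Promise.resolve('value-for-' + key);
}

function get(key) {
  if (!inFlight[key]) {
    inFlight[key] = slowLoad(key).then(function (value) {
      delete inFlight[key];  // allow future reloads once settled
      return value;
    });
  }
  return inFlight[key];      // every concurrent caller shares this promise
}

// Three concurrent requests for the same key trigger only one real load.
Promise.all([get('user:1'), get('user:1'), get('user:1')])
  .then(function (values) {
    console.log(values[0]); // 'value-for-user:1'
    console.log('real loads so far:', loadCount);
  });
```

The essential move is caching the *promise* of the result, not just the result, so callers that arrive while the load is still in flight piggyback on it.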


Buying versus Renting: The Pros and Cons of Moving to the Cloud

Kristen Hardwick
01 Jul 2014
5 min read
Convenience

One major benefit of the IaaS model is the promise of elasticity to support unforeseen demand. This means that the cloud vendor will provide the ability to quickly and easily scale the provided resources up or down, based on actual usage requirements. An organization can therefore plan for the “average” case instead of the “worst case” of usage, simultaneously saving on costs and preventing outages. Additionally, since the systems provided through cloud vendors are usually virtual machines running on the vendor’s underlying hardware, the process of adding new machines, increasing disk space, or subscribing to new services is usually just a change through a web UI, instead of a complicated hardware or software acquisition process. This flexibility is appealing because it significantly reduces the waiting time required to support a new capability.

However, this automation is sometimes a hindrance to administrators and developers who need access to the low-level configuration settings of certain software. Additionally, since the services are offered through a virtualized system, continuity in the underlying environment can’t be guaranteed. Some applications, benchmarking tools for example, may not be suitable for that type of environment.

Cost

One appealing factor in the transition to the cloud is cost, but in certain situations, using the cloud may not actually be cheaper. Before making a decision, your organization should evaluate the following factors to make sure the transition will be beneficial. One major benefit is the impact on your organization’s budget. If the costs are transitioned to the cloud, they will usually count as operational expenditures, as opposed to capital expenditures. In some situations, this might make a difference when trying to get the budget for the project approved. Additionally, some savings may come in the form of reduced maintenance and licensing fees. These expenditures are absorbed into the monthly cost, rather than being an upfront requirement. When subscribing to the cloud, you can disable any unnecessary resources on demand, reducing costs. In the same situation with real hardware, the servers would need to remain on 24/7 in order to provide the same access benefits.

On the other hand, consider the size of the data. Vendors charge for moving data into or out of the cloud, in addition to charging for storage. In some cases, the data transfer time alone would prohibit the transition. Also, the elasticity benefits that draw some people to the cloud, scaling up automatically to meet unexpected demand, can have an unexpected impact on the monthly bill. These costs are sometimes difficult to predict, and since the cloud computing pricing model is based on usage, it is important to weigh the possibility of an unanticipated hefty bill against an initial hardware investment.

Reliability

Most cloud vendors guarantee service availability or access to customer support. This places that burden on the vendor, as opposed to the project’s IT department. Similarly, most cloud vendors provide backup and disaster recovery options, either as add-ons or built into the main offering. This can be a benefit for smaller projects that have the requirement but do not have the resources to support two full clusters internally. However, even with these guarantees, vendors still need to perform routine maintenance on their hardware. Some server-side issues will result in virtual machines being disabled or relocated, usually communicated with some advance notice. In certain cases this will cause interruptions and require manual intervention from the IT team.

Privacy

All data and services transitioned into the cloud will be accessible from anywhere via the web, for better or worse. The technique of isolating the hardware on its own private network or behind a firewall is no longer possible. On the positive side, this means that everyone on the team can work from any Internet-connected device. On the negative side, it means that every precaution needs to be taken to keep the data safe from prying eyes. For some organizations, the privacy concerns alone are enough to keep projects out of the cloud. Even assuming that the cloud can be made completely secure, stories in the news about data loss and password leakage will continue to project a perception of inherent danger. It is important to document all precautions being taken to protect the data, and to make sure that all affected parties in the organization are comfortable moving to the cloud.

Conclusion

The decision of whether or not to move into the cloud is an important one for any project or organization. The benefits of flexible hardware requirements, built-in support, and general automation must be weighed against the drawbacks of decreased control over the environment and privacy.

About the author

Kristen Hardwick has been gaining professional experience with software development in parallel computing environments in the private, public, and government sectors since 2007. She has interfaced with several different parallel paradigms, including Grid, Cluster, and Cloud. She started her software development career with Dynetics in Huntsville, AL, and then moved to Baltimore, MD, to work for Dynamics Research Corporation. She now works at Spry, where her focus is on designing and developing big data analytics for the Hadoop ecosystem.
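The cost trade-off described in the Cost section lends itself to a quick back-of-the-envelope calculation. A sketch in JavaScript; every figure here is invented for illustration, and a real comparison would also need staffing, power, and growth estimates:

```javascript
// Back-of-the-envelope cloud-vs-hardware break-even sketch.
// All figures below are invented for illustration only.
function monthsToBreakEven(hardwareUpfront, hardwareMonthly, cloudMonthly) {
  // If renting is cheaper every month, buying never pays off.
  if (cloudMonthly <= hardwareMonthly) {
    return Infinity;
  }
  // Months until the upfront purchase is recouped by lower running costs.
  return Math.ceil(hardwareUpfront / (cloudMonthly - hardwareMonthly));
}

// Example: $20,000 of servers costing $500/month to run,
// versus an equivalent cloud subscription at $1,300/month.
var months = monthsToBreakEven(20000, 500, 1300);
console.log(months); // 25 months before owning becomes cheaper
```

Even a crude model like this makes the core point: the answer depends entirely on your numbers, which is why "the cloud is cheaper" can be both true and false.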


3 Reasons Why "the Cloud" Is a Terrible Metaphor (and One Why It Isn't)

Sarah
01 Jul 2014
4 min read
I have a lot of feelings about “the cloud” as a metaphor for networked computing. All my indignation comes too late, of course. I’ve been having this rant for a solid four years, and that ship has long since sailed: the cloud is here to stay. As a figurative expression for how we compute these days, it’s proven to have way more sticking power than, say, the “information superhighway”. (Remember that one?)

Still, we should always be careful about the ways we use figurative language. Sure, you and I know we’re really talking about odd labyrinths of blinking lights in giant refrigerator buildings. But does your CEO? I could talk a lot about the dangers of abstracting away our understanding of where our data actually is and who has the keys. But I won’t, because I have even better arguments than that. Here are my three reasons why “the cloud” is a terrible metaphor:

1. Clouds are too easy to draw.

Anyone can draw a cloud. If you’re really stuck you just draw a sheep and then erase the black bits. That means you don’t have to have the first clue about things like SaaS/PaaS/IaaS or local persistent storage to include “the cloud” in your PowerPoint presentation. If you have to give a talk in half an hour about the future of your business, clouds are even easier to draw than Venn diagrams about morale and productivity. Had we called it “Calabi-Yau Manifold Computing”, the world would have saved hundreds of man-hours spent in nonsensical meetings. The only thing sparing us from a worse fate is the stalling confusion that comes from trying to combine slide one (“The Cloud”) and slide two (“Blue-Sky Thinking!”).

2. Hundreds of Victorians died from this metaphor.

Well, okay, not exactly. But in the nineteenth century, the Victorians had their own cloud concept: the miasma. The basic tenet was that epidemic illnesses were caused by bad air in places too full of poor people wearing fingerless gloves (for crime). It wasn’t until John Snow pointed to the infrastructure that people worked out where the disease was coming from. Snow mapped the pattern of pipes delivering water to infected areas and demonstrated that germs at one pump were causing the problem. I’m not saying our situation is exactly analogous. I’m just saying that if we’re going to do the cloud metaphor again, we’d better be careful of metaphorical cholera.

3. Clouds might actually be alive.

Some scientists reckon that the mechanism that lets clouds store and release precipitation is biological in nature. If this understanding becomes widespread, the whole metaphor is going to change underneath us. Kids in school who’ve managed to convince the teacher to let them watch a DVD instead of doing maths will get edu-tained about it. Then we’re all going to start imagining clouds as moving colonies of tiny little cartoon critters. Do you want to think about that every time you save pictures of your drunken shenanigans to your Dropbox?

And one reason why it isn’t a bad metaphor at all:

1. Actually, clouds are complex and fascinating.

Quick pop quiz: what’s the difference between cirrus fibratus and cumulonimbus? If you know the answer, you’re most likely either a meteorologist, or you’re overpaid to sit at your desk googling the answers to rhetorical questions. In the latter case, you’ll have noticed that the Wikipedia article on clouds is about seventeen thousand words long. That’s a lot of metaphor. Meteorological study helps us track clouds as they move from one geographic area to another, affecting climate, communications, and social behaviour. Through careful analysis of their movements and composition, we can make all kinds of predictions about how our world will look tomorrow. The important point came when we stopped imagining chariots and thunder gods, and started really looking at what lay behind the pictures we’d painted for ourselves.


Things to Consider When Migrating to the Cloud

Kristen Hardwick
01 Jul 2014
5 min read
After the decision is made to use a cloud solution like Amazon Web Services or Microsoft Azure, one main question needs to be answered: “What’s next?” There are many factors to consider when migrating to the cloud, and this post will discuss the major steps for completing the transition.

Gather background information

Before getting started, it’s important to have a clear picture of what must be accomplished in order to call the transition a success. Keeping the following questions at the forefront during the planning stages will help guide your process and ensure the success of the migration.

What are the reasons for moving to the cloud? There are many benefits of moving to the cloud, and it is important to know what the focus of the transition should be. If cost savings are the primary driver, vendor choice may be important. Prices between vendors vary, as do the support services that are offered, which might make a difference in future iterations. In other cases, the elasticity of hardware may be the main appeal. It will be important to ensure that the customization options are available at the desired level.

Which applications are being moved? When beginning the migration process, make sure that the scope of the effort is clear. Consider moving data and applications to the cloud selectively in order to ease the transition. Once the organization has completed a successful small-scale migration into the cloud, a second iteration of the process can take care of additional applications.

What is the anticipated cost? A cloud solution will have variable costs associated with it, but it is important to have some estimate of what is expected. This will help when selecting vendors, and it will provide guidance when configuring the system.

What is the long-term plan? Is the new environment intended to eventually replace the legacy system? To work alongside it? Begin to think about the plan beyond the initial migration. Ensure that the selected vendor provides service guarantees that may become requirements in the future, like disaster recovery options or automatic backup services.

Determine your actual cloud needs

One important way to maximize the benefits of the cloud is to ensure that your resources are sufficient for your needs. Cloud computing services are billed based on actual usage, including processing power, storage, and network bandwidth. Configuring too few nodes will limit the ability to support the required applications, and too many nodes will inflate costs. Determine the list of applications and features that need to be present in the selected cloud vendor. Some vendors include backup services or disaster recovery options as add-on services that impact the cost, so it is important to decide whether or not these services are necessary. A benefit with most vendors is that these services are extremely configurable, so subscriptions can be modified. However, it is important to choose a vendor whose packages make sense for your current and future needs as much as possible, since transitioning between vendors is not typically desirable.

Implement security policies

Since the data and applications in the cloud are accessed over the Internet, it is of the utmost importance to ensure that all available vendor security policies are implemented correctly. In addition to the main access policies, determine whether data security is a concern. Sensitive data such as PII or PCI may be subject to regulations that impact data encryption rules, especially when accessed through the cloud. Ensure that the selected vendor is reliable in order to safeguard this information properly. In some cases, applications being migrated will need to be refactored so that they will work in the cloud. Sometimes this means making adjustments to connection information or networking protocols. In other cases, it means adjusting access policies or opening ports. In all cases, a detailed plan needs to be made at the networking, software, and data levels in order to make the transition smooth.

Let’s get to work!

Once all of the decisions have been made and the security policies have been established and implemented, the data appropriate for the project can be uploaded to the cloud. After the data is transferred, it is important to verify that everything was successful by performing data validation and testing the data access policies. At this point, everything will be configured, and any application-specific refactoring or testing can begin. To help ensure the success of the project, consider hiring a consulting firm with cloud experience that can guide the process. In any case, the vendor, virtual machine specifications, configured applications and services, and privacy settings must be carefully considered to ensure that the cloud services provide the solution the project needs. Once the initial migration is complete, the plan can be revised to facilitate the migration of additional datasets or processes into the cloud environment.

About the author

Kristen Hardwick has been gaining professional experience with software development in parallel computing environments in the private, public, and government sectors since 2007. She has interfaced with several different parallel paradigms, including Grid, Cluster, and Cloud. She started her software development career with Dynetics in Huntsville, AL, and then moved to Baltimore, MD, to work for Dynamics Research Corporation. She now works at Spry, where her focus is on designing and developing big data analytics for the Hadoop ecosystem.

The Rise of Data Science

Akram Hussain
30 Jun 2014
5 min read
The rise of big data and business intelligence has been one of the hottest topics to hit the tech world. Everybody who’s anybody has heard of the term business intelligence, yet very few can actually articulate what this means. Nonetheless it’s something all organizations are demanding. But you must be wondering why and how do you develop business intelligence? Enter data scientists! The concept of data science was developed to work with large sets of structured and unstructured data. So what does this mean? Let me explain. Data science was introduced to explore and give meaning to random sets of data floating around (we are talking about huge quantities here, that is, terabytes and petabytes), which are then used to analyze and help identify areas of poor performance, areas of improvement, and areas to capitalize on. The concept was introduced for large data-driven organisations that required consultants and specialists to deal with complex sets of data. However, data science has been adopted very quickly by organizations of all shapes and sizes, so naturally an element of flexibility would be required to fit data scientists in the modern work flow. There seems to be a shortage for data scientists and an increase in the amount of data out there. The modern data scientist is one who would be able to apply analytical skills necessary to any organization with or without large sets of data available. They are required to carry out data mining tasks to discover relevant meaningful data. Yet, smaller organizations wouldn’t have enough capital to invest in paying for a person who is experienced enough to derive such results. Nonetheless, because of the need for information, they might instead turn to a general data analyst and help them move towards data science and provide them with tools/processes/frameworks that allow for the rapid prototyping of models instead. 
The natural flow of work would suggest data analysis comes after data mining, and in my opinion analysis is at the heart of data science. Learning languages like R and Python is fundamental to a good data scientist's toolkit. However, would a data scientist with a background in mathematics and statistics and little to no knowledge of R and Python still be as efficient?

Now, the way I see it, data science is composed of four key topic areas crucial to achieving business intelligence: data mining, data analysis, data visualization, and machine learning. Data analysis can be carried out in many forms; it's essentially looking at data and understanding it to make a factual conclusion from it (in simple terms). A data scientist may choose to use Microsoft Excel and VBA to analyze their data. It wouldn't be as accurate, clean, or as in-depth as using Python or R, but it sure is useful as a quick win with smaller sets of data. The point here is that starting with something like Excel doesn't mean it doesn't count as data science; it's just a different form of it, and more importantly it gives a good foundation for progressing to tools like MySQL, R, Julia, and Python as, with time, business needs and the expected level of analysis grow. In my opinion, a good data scientist is not one who knows one or two languages or tools, but one who is well versed in the majority of them and knows which language and skill set are best suited to the task in hand.

Data visualization is hugely important: numbers themselves tell a story, but when it comes to presenting the data to customers or investors, they're going to want to view all the different aspects of that data as quickly and easily as possible. Graphically representing complex data is one of the most desirable methods, but the way the data is represented varies depending on the tool used, for example R's ggplot2 or Python's Matplotlib.
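Long before you reach ggplot2 or Matplotlib, most of the quick-win analysis described above can be done with nothing but the Python standard library. Here is a minimal sketch of that kind of first-pass summary (the sales figures are invented purely for illustration):

```python
import statistics

# Invented monthly sales figures -- purely illustrative data.
sales = [1200, 1350, 1100, 1480, 1520, 1390, 1610, 1180, 1450, 1330, 1570, 1490]

def summarize(values):
    """The kind of quick summary an analyst might pull from a single column."""
    return {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": round(statistics.stdev(values), 2),
        "min": min(values),
        "max": max(values),
    }

print(summarize(sales))
```

The same summary is a one-liner in R or pandas, which is exactly why those tools become attractive once the data outgrows a spreadsheet.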
Whether you're working for a small organization or a huge data-driven company, data visualization is crucial.

The world of artificial intelligence introduced the concept of machine learning, which has exploded onto the scene and is now fundamental to many large organizations. The opportunity for organizations to move forward by understanding a consumer's behaviour and matching their expectations has never been so valuable. Data scientists are required to learn complex algorithms and core concepts such as classification, recommenders, neural networks, and supervised and unsupervised learning techniques. And this is just touching the edges of this exciting field, which goes into much more depth, especially with emerging concepts such as deep learning.

To conclude, we covered the basic fundamentals of data science and what it means to be a data scientist. For all you R and Python developers (not forgetting any mathematical wizards out there), data science has been described as the 'sexiest job of the 21st century', and it is handsomely rewarded too. The number of jobs for data scientists has without question exploded and will continue to do so; according to global management firm McKinsey & Company, there will be a shortage of 140,000 to 190,000 data scientists due to the continued rise of 'big data'.
Travis Ripley
30 Jun 2014
6 min read

Progression of a Maker

There's a natural path for the education of a maker that takes place within techshops and makerspaces. It begins in the world of tools you may already know, like handheld tools or power tools, and quickly creeps into an unknown world of machines suited to bringing any desire to fruition. At first, taking classes may seem like a huge investment, but the payback you receive from the knowledge is priceless. I can't even put a price on the payback I've earned from developing these maker skills, but I can tell you that the number of opportunities is overflowing. I know it doesn't sound like much, but the opportunities to grow and learn also increase your connections, and that's what helps you to create an enterprise. Your options for education all depend upon what is available to you locally. As the ideology of technological dissonance grows culturally, it is influencing advancements in open source and open hardware, and having a big impact on the trend of creating incubators, startups, techshops, and makerspaces on a global scale.

When I first began my education in the makerspace, I was worried that I'd never be able to learn it all. I started small by reading blogs and magazines, and eventually I decided to take a chance and sign up for a membership at our local makerspace: http://www.Makerplace.com. There I was given access to a variety of tools that would be too bulky and loud for my house and workspace, not to mention extremely out of my price range. When I first started at the Makerplace, I was overwhelmed by the amount of technology available to me, and daunted by how difficult it would be even to use these machines. But you can only learn so much from videos and books; the real trial begins when you put that knowledge to work with hands-on experience. I was ready to get some experience under my belt. The degree of difficulty for a student can vary, obviously, by experience and how well one grasps the concepts.
I started by taking a class that offers a brief introduction to a topic and some guidance from an expert. After that, you learn on your own and will break things such as materials, end mills, electronic components, and lots of consumables (I do not condone breaking fingers, body parts, or huge expensive tools). This stage is key, because once you understand what can and will go wrong, you'll undeniably want more training from an expert. And as the saying goes, "practice makes perfect", which is the key to mastery. As you begin your education, it will become apparent to you what classes need to come next. The best place to start is learning the software necessary to develop your tangible goods. For those of you who are interested, I will list the tools in the order I learned them, starting from ground zero.

I suggest the first tools to learn are the laser, waterjet, and plasma CNC cutters, as they can precisely cut shapes out of sheet-type material. The laser is the easiest to learn, and can be used not only to cut but to engrave wood, acrylics, metal, and other sheet materials. Most likely the makerspaces and hackerspaces you have access to will have one available. The waterjet and plasma CNC machines will depend upon the workshop, since they require more room, along with the outfitting of vapor and fume containment equipment. The next set of tools, which come with a bigger learning curve, are the multi-axis CNC mills, routers, conventional mill, and lathe. CNC (Computer Numerical Control) is the automation of machine tools. These processes of controlled material removal are today collectively known as subtractive manufacturing. This requires you to take unfinished workpieces made of materials such as metals, plastics, ceramics, and wood and create 2D/3D shapes, which can be made into tools or finished as tangible objects. The CNC routers are for the same process, but they use sheet materials such as plywood, MDF, and foam.
The first time I took a tour of the Makerplace, these machines looked so intimidating. They were big and loud, and I had no clue what they were used for. It wasn't until I gained further insight into manufacturing that I understood how valuable these tools are. The learning curve is gradual, since there are multiple moving parts and operations happening at once. I took the CNC fundamentals class, which was required before operating any of these machines. I then completed the conventional mill and lathe classes before moving on to the CNC machines. I suggest the steps in this order, since understanding the conventional process plays an integral role in how you design your parts to be machined on the CNC machines. I found out the hard way why end mills are called consumables, as I scrapped many parts and broke many end mills. This is a great skill to understand, as it directly complements the additive processes, such as 3D printing.

Once you have a grasp of the basics of automated machinery, the next step is to learn the welding and plasma cutting equipment and metal forming tools. This skill opens many possibilities and opportunities to makers, such as making and customizing frames, chassis, and jigs. Along the way you will also learn how to use the metal forming tools to create and craft three-dimensional shapes from thin-gauge sheet metal. And last but not least, depending on how far you want to take your learning, there are large air compressors and, in the metal forming category, tools that require constant pressure, such as bead blasters and paint sprayers. There is also high-temperature equipment, such as furnaces, ovens, and acrylic sheet benders, and my personal new favorite, the vacuum formers, which bend and form plastic into complex shapes.
With all of these new skills under my belt, a network of like-minded individuals, and a passion for knowledge in manufacturing and design, I was able to produce and create products at a pro level, which totally changed my career. Whatever your curious intentions may be, I encourage you to take on a new challenge, such as learning manufacturing skills, and you will be guaranteed a transformative look at the world around you, from consumer to maker. About the Author Travis Ripley is a designer/developer. He enjoys developing products with composites, woods, steel, and aluminum, and has been immersed in the Maker community for over two years. He also teaches game development at the University of California, Los Angeles. He can be found @travezripley.
Ed Gordon
30 Jun 2014
4 min read

Notes from a JavaScript Learner

When I started at Packt, I was an English grad with a passion for working with authors and editorial craft, and I really wanted to get to work structuring great learning materials for consumers. I'd edited the largest Chinese-English dictionary ever compiled without speaking a word of Chinese, so what was tech but a means to an end that would allow me to work on my life's ambition?

Fast forward two years, plus hours of independent research and reading Hacker News, and I'm more or less able to engage in a high-level discussion about any technology in the world, from enterprise-class CMIS to big data platforms. I can identify their friends and enemies, who uses what, why they're used, and what learning materials are available on the market. I can talk in a more nebulous way about their advantages, and how they "revolutionized" their specific technology type. But, other than hacking CSS in WordPress, I can't use these technologies. My specialization has always been in research, analysis, and editorial know-how. In April, after deploying my first WordPress site (exploration-online.com), I decided to change this.

Being pretty taken with Python, and having spent a lot of time researching why it's awesome (mostly watching Monty Python YouTube clips), I decided to try it out on Codecademy. I loved the straightforward syntax, and was getting pretty handy at the simple things. Then Booleans started (a simple premise), and I realized that Python was far too data intensive. Here's an example:

  • Set bool_two equal to the result of -(-(-(-2))) == -2 and 4 >= 16**0.5
  • Set bool_three equal to the result of 19 % 4 != 300 / 10 / 10 and False

This is meant to explain to a beginner how the Boolean operator "and" returns True when the statements on either side are true. That's a fairly simple thing to get, so I don't really see why they need to use expressions that I can barely read, let alone compute... I quickly decided Python wasn't for me. I jumped ship to JavaScript.
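For what it's worth, those two exercises can be checked in a couple of lines of Python, and both turn out to be False:

```python
# The two Codecademy exercises, evaluated step by step.
# -(-(-(-2))) flips the sign of 2 four times, so it is just 2, and 2 == -2 is False.
# 16 ** 0.5 is 4.0, so 4 >= 16 ** 0.5 is True; False and True -> False.
bool_two = -(-(-(-2))) == -2 and 4 >= 16 ** 0.5

# 19 % 4 is 3, and 300 / 10 / 10 is 3.0, so 3 != 3.0 is False;
# False and False -> False.
bool_three = 19 % 4 != 300 / 10 / 10 and False

print(bool_two, bool_three)  # prints: False False
```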
The first thing I realized was that all programming languages are pretty much the same. Variables are more or less the same. Functions do a thing. The syntax changes, but it isn't like changing from English to Spanish; it's more like changing from American English to British English. We're all saying the same things, there are just slightly different rules.

The second thing I realized was that JavaScript is going to be entirely more useful to me in the future than Python. As the lingua franca of the Internet and the browser, it's going to become more and more influential as adoption of browser apps over native apps increases. I've never been a particularly "mathsy" guy, so Python machine learning isn't something I'm desperate to master. It also means that I can, in the future, work with all the awesome tools that I've spent time researching: MongoDB, Express, Angular, Node, and so on.

I bought Head First JavaScript Programming by Eric T. Freeman and Elisabeth Robson (O'Reilly Media), and aside from the 30 different fonts that are making my head ache, I'm finding the pace and learning narrative far better than the various free solutions I've used, and I actually feel I'm starting to progress. I can read things now and hack at the examples on W3Schools. I still don't always know what things do, but I no longer feel like I'm standing reading a sign in a completely foreign language. What I've found is that books are great at reducing the copy/paste mindset that creeps into online learning tools. Copying and pasting, I think, is fine when you actually know what it is you're copying. To learn something, and be comfortable using it in the future, I want to be able to say that I can write it when needed. So far, I've learned how to log the entire "99 Bottles of Beer on the Wall" to the console. I've rewritten a 12-line code block in 6 lines (felt like a winner). I've made some boilerplate code that I've got no doubt I'll be using for the next dozen years. All in all, it feels like progress.
It’s all come from books. I’ll be updating this series regularly when I’ve dipped my toe into the hundreds of tools that JavaScript supports within the web developer’s workflow, but for now I’m going to crack on with the next chapter. For all things JavaScript, check out our dedicated page! Packed with more content, opinions and tutorials, it's the go-to place for fans of the leading language of the web. 
Lars Butler
30 Jun 2014
6 min read

What is ZeroVM?

ZeroVM is a lightweight virtualization technology based on Google Native Client (NaCl). While it shares some similarities with traditional hypervisors and container technologies, it is unique in a number of respects. Unlike KVM and LXC, which provide an entire virtualized operating system environment, it isolates single processes and provides no operating system or kernel. This allows instances to start up in a very short time: about five milliseconds. Combined with a high level of security and near-zero execution overhead, ZeroVM is well suited to ephemeral processes running untrusted code in multi-tenant environments. There are of course some limitations inherent in the design. ZeroVM cannot be used as a drop-in replacement for something like KVM or LXC. These limitations, however, were deliberate design decisions, necessary in order to create a virtualization platform specifically for building cloud applications.

How ZeroVM is different to other virtualization tools

Blake Yeager and Camuel Gilyadov gave a talk at the 2014 OpenStack Summit in Atlanta which summed up nicely the main differences between hypervisor-based virtual machines (KVM, Xen, and so on), containers (LXC, Docker, and so on), and ZeroVM. Here are the key differences they outlined:

                Traditional VM    Container          ZeroVM
  Hardware      Shared            Shared             Shared
  Kernel/OS     Dedicated         Shared             None
  Overhead      High              Low                Very low
  Startup time  Slow              Fast               Fast
  Security      Very secure       Somewhat secure    Very secure

Traditional VMs and containers provide a way to partition and schedule shared server resources for multiple tenants. ZeroVM accomplishes the same goal using a different approach and with finer granularity. Instead of running one or more application processes in a traditional virtual machine, applications written for ZeroVM must be decomposed into microprocesses, each of which gets its own instance.
The advantage in this case is that you avoid long-running VMs/processes which accumulate state (leading to memory leaks and cache problems). The disadvantage, however, is that it can be difficult to port existing applications. Each process running on ZeroVM is a single stateless unit of computation (much like a function in the "purely functional" sense; more on that to follow), and applications need to be structured specifically to fit this model. Some applications, such as long-running server applications, would arguably be impossible to re-implement entirely on ZeroVM, although some parts could be abstracted away to run inside ZeroVM instances. Applications that are predominantly parallel and involve many small units of computation are better suited to run on ZeroVM.

Determinism

ZeroVM provides a guarantee of functional determinism. What this means in practice is that with a given set of inputs (parameters, data, and so on), outputs are guaranteed to always be the same. This works because there are no sources of entropy. For example, the ZeroVM toolchain includes a port of glibc with a custom implementation of time functions, such that time advances in a deterministic way for CPU and I/O operations. No state is accumulated during execution and no instances can be reused. The ZeroVM Run-Time environment (ZRT) does provide an in-memory virtual file system which can be used to read/write files during execution, but all writes are discarded when the instance terminates unless an output "channel" is used to pipe data to the host OS or elsewhere.

Channels and I/O

"Channels" are the basic I/O abstraction for ZeroVM instances. All I/O between the host OS and ZeroVM must occur over channels, and channels must be declared explicitly in advance. On the host, a channel can map to a file, character device, pipe, or socket. Inside an instance, all channels are presented as files that can be written to and read from, including devices like stdin, stdout, and stderr.
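Since channels appear as ordinary files inside an instance, a ZeroVM-style filter stage boils down to plain file I/O. The sketch below is only illustrative, not ZeroVM-specific code: it runs under any Python and simply shows the shape of such a stateless filter, with in-memory streams standing in for declared channels (ZeroVM's ZRT itself supports Python 2.7):

```python
import io

def filter_stream(infile, outfile, keyword):
    """Stateless filter stage: pass through only the lines containing `keyword`.

    Inside a ZeroVM instance, `infile` and `outfile` would be declared
    channels exposed as files; here they are any file-like objects.
    """
    for line in infile:
        if keyword in line:
            outfile.write(line)

# Simulate an input and an output channel with in-memory files.
src = io.StringIO("error: disk full\nok: started\nerror: timeout\n")
dst = io.StringIO()
filter_stream(src, dst, "error")
print(dst.getvalue(), end="")
```

Chaining several such filters, each in its own instance, with one stage's output channel wired to the next stage's input, is exactly the multi-stage pipeline pattern described above.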
Channels can also be used to connect multiple instances together to create arbitrary multi-stage job pipelines. For example, a MapReduce-style search application with multiple filters could be implemented on ZeroVM by writing each filter as a separate application or script and piping data from one to the next.

Security

ZeroVM has two key security components: static binary validation and a limited system call API. Static validation occurs before "untrusted" user code is executed, to ensure that there are no accidental or malicious instructions that could break out of the sandbox and compromise the host system. Binary validation in ZeroVM is largely based on the NaCl validator. (For more information about NaCl and its validation, you can read the following whitepaper: http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/34913.pdf.) To further lock down the execution environment, ZeroVM only supports six system calls via a "trap" interface: pread, pwrite, jail, unjail, fork, and exit. By comparison, containers (LXC) expose the entire Linux system call API, which presents a larger attack surface and more potential for exploitation.

ZeroVM is lightweight

ZeroVM is very lightweight: it can start in about five milliseconds. After the initial validation, program code is executed directly on the hardware without interpretation overhead or hardware virtualization.

It's easy to embed in existing systems

The security and lightweight nature of ZeroVM make it ideal to embed in existing systems. For example, it can be used for arbitrary data-local computation in any kind of data store, akin to stored procedures. In this scenario, untrusted code provided by any user with access to the system can be executed safely. Because inputs and outputs must be declared explicitly upfront, the only remaining concerns are data access rules and quotas for storage and computation.
Contrasted with a traditional model, where storage and compute nodes are separate, data-local computing can be a more efficient model when the cost of transferring data over the network to and from compute nodes outweighs the computation time itself. ZeroVM has already been integrated with OpenStack Swift using ZeroCloud (middleware for Swift). This turns Swift into a "smart" data store which can be used to scale parallel computations (such as multi-stage MapReduce jobs) across large collections of objects.

Language support

C and C++ applications can run on ZeroVM, provided that they are cross-compiled to NaCl using the provided toolchain. At present there is also support for Python 2.7 and Lua.

Licensing

All projects under the ZeroVM umbrella are licensed under Apache 2.0, which makes ZeroVM suitable for both commercial and non-commercial applications (the same as OpenStack).
Julian Ursell
30 Jun 2014
4 min read

Virtual Reality and Social E-Commerce: a Rift between Worlds?

It's doubtful many remember Nintendo's failed games console, the Virtual Boy, one of the worst commercial nose dives for a games console in the past 20 years. Commercial failure though it was, the concept of virtual reality, back then and up to the present day, is still intriguing to many people considering what technology that can properly leverage VR is capable of. The most significant landmark in this quarter of technology in the past six months is undoubtedly Facebook's acquisition of Oculus VR, the manufacturer of the Oculus Rift VR headset.

Beyond using the technology purely for creating new and immersive gaming experiences (you can imagine it's pretty effective for horror games), there are plans at Facebook, and amongst other forward-thinking companies, to mobilize the tech to transform the e-commerce experience into something far more interactive than the relatively passive browsing experience it is right now. Developers are re-imagining shopping through the gateway of virtual reality, in which a storefront becomes an interactive user experience where shoppers can browse and manipulate the items they are looking to buy (this is how the company Chaotic Moon Studios imagines it), adding another dimension to the way we evaluate and make decisions on the items we are looking to purchase. On the surface there's a great benefit to drawing the user experience even closer to the physical act of going out into the real world to shop, and one can imagine a whole array of integrated experiences extending from this (say, for example, inspecting the interior of the latest Ferrari). We might even be able to shop with others, making decisions collectively and suggesting items of interest to friends across social networks, creating a unified and massively integrated user experience.
Setting aside the push from the commercial bulldozer that is Facebook, is this kind of innovation something that people will get on board with? We can probably answer with some confidence that even with a finalized experience, people are not going to instantly buy in to virtual reality e-commerce, especially with the requirement of purchasing an Oculus Rift (or any other VR headset that emerges, such as Sony's Morpheus) for this purpose. Factor in the considerable backlash against the Kickstarter-backed Oculus Rift after its buyout by Facebook, and there's an even steeper hill of users already averse to engaging with the idea. From a purely personal perspective, you might also ask whether wearing a headset is going to be anything like the annoying appendage of wearing 3D glasses at the cinema, on top of the substantial expense of acquiring the Rift itself. 3D cinema actually draws a close parallel: both 3D and VR are technology initiatives attempted and failed in years previous, both are predicated on higher user costs, and both are never too far away from being harnessed to that dismissive moniker of "gimmick".

From Facebook's point of view we can see why incorporating VR activity is a big draw. In terms of keeping social networking fresh, there's only so far that redesigning the interface and continually connecting applications (or the whole Internet) through Facebook will take them. Acquiring Oculus is one step towards trying to augment (reinvigorate?) the social media experience, orchestrating the user (consumer) journey for business and e-commerce in one massive virtual space. Thought about in another way, it represents a form of opt-in user subscription, but one predicated on a strong degree of sustained investment from users in the idea of VR, which is something that is extremely difficult to engineer.
It’s still too early to say whether the tech mash-up between VR, social networking, and e-commerce is one in which people will be ready to invest (and if they will ever be ready). You can’t fault the idea on the basis of sheer innovation, but at this point one would imagine that users aren’t going to plunge head first into a virtual reality world without hesitation. For the time being, perhaps, people would be more interested in more productive uses of immersive VR technology, say for example flying like a bird.
Akram Hussain
30 Jun 2014
4 min read

Aspiring Data Analyst, Meet Your New Best Friend: Excel

In general, people want to associate themselves with cool job titles, ones that indirectly say both that you're clever and that you get paid well; so what's better than telling someone you're a data analyst? Personally, as a graduate in Economics, I always thought my natural career progression would be a role as an analyst working for a banking organization, a private hedge fund, or an investment firm. I'm guessing at some point all people with a background in maths or some form of statistics have envisaged becoming a hotshot investment banker, right? However, the story was very different for me; I was somehow fortunate enough to fall into the tech world and develop a real interest in programming. What I found really interesting was that programming languages and data sets go hand in hand surprisingly well, which uncovered a relatively new field to me known as data science.

Here's how the story goes: I combined my academic skills with programming, which opened up a world of opportunity and allowed me to appreciate and explore data analysis on a whole new level. Nowadays, I'm using languages like Python and R to mix background knowledge of statistical data with my new-found passion. Yet that's not how it started. It started with Excel. If you want to eventually move into the field of data science, you have to become competent in data analysis, and I personally recommend Excel as a starting point. There are many reasons for this, one being that you don't have to be a technical wizard to get started. More importantly, Excel's functionality for data analysis is more powerful than you would expect, and quick and efficient at resolving queries and letting you visualize the results too. Excel has an inbuilt Data tab to get you started. The screenshot shows the basic analytical features within Excel; it's separate to any functions and sum calculations that could be used.
However, one really handy plugin called Data Analysis is missing from that list. If you go to File | Options | Add-ins, choose Analysis ToolPak and Analysis ToolPak - VBA from the list, and select Go, you will be prompted with the following image. Once you select the add-ins (as shown above), you will find an awesome new tag in your Data tab called Data Analysis. This allows you to run different methods of analysis on your data, anything from histograms, regressions, and correlations to t-tests. Personally, I found this saved me tons of time.

Excel also offers features such as pivot tables and functions like VLOOKUP, both extremely useful for data analysis, especially when you require multiple tables of information for large sets of data. A VLOOKUP function is very useful when trying to identify products in a database that share a set of IDs but are difficult to find. An even more useful feature for analysis is the pivot table. One of the best things about a pivot table is that it saves so much time and effort when you have a large set of data that you need to categorize and analyze quickly from a database. Additionally, there's a visual option named a pivot chart, which allows you to visualize all the data in your pivot table. There are many useful tutorials and training materials available online on pivot tables for free.

Overall, Excel provides a solid foundation for most analysts starting out. A general search on the job market for "Excel data" returns over 120,000 jobs, all specific to an analyst role. To conclude, I wouldn't underestimate Excel for learning the basics and getting valuable experience with large sets of data. From there, you can progress to learning a language like Python or R (and then head towards the exciting and supercool field of data science). Given R's steep learning curve, Python is often recommended as the best place to start, especially for people with little or no background in programming.
But don’t dismiss Excel as a powerful first step, as it can easily become your best friend when entering the world of data analysis.
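Once you do make the jump from Excel to Python, the core of what a pivot table computes, grouping by some columns and aggregating another, can be reproduced in a few lines of the standard library. A sketch with invented order data (region, product, amount):

```python
from collections import defaultdict

# Invented order data -- the kind of sheet you might pivot in Excel.
orders = [
    ("North", "Widgets", 120),
    ("North", "Gadgets", 80),
    ("South", "Widgets", 200),
    ("South", "Widgets", 50),
    ("South", "Gadgets", 70),
]

def pivot_sum(rows):
    """Group by (region, product) and sum the amounts -- what a pivot table does."""
    totals = defaultdict(int)
    for region, product, amount in rows:
        totals[(region, product)] += amount
    return dict(totals)

print(pivot_sum(orders))
# {('North', 'Widgets'): 120, ('North', 'Gadgets'): 80,
#  ('South', 'Widgets'): 250, ('South', 'Gadgets'): 70}
```

In practice you would reach for pandas' pivot_table for anything non-trivial, but the principle is the same.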
Ed Gordon
30 Jun 2014
3 min read

Frontend Frameworks: Bootstrapping for Beginners

I was on the WebKit.org site the other day, and it struck me that it's a fairly ugly site for the home page of such a well-known browser engine. Lime green to white background transition, drop-shadow headers. It doesn't even respond; what? I don't want to take anything away from its functionality, it works perfectly well, but it did bring to mind the argument about frontend frameworks and the beautification of the Internet.

When the Internet started to become a staple of our daily computing, it was an ugly place. Let's not delude ourselves into thinking every site looked awesome. The BBC, my home page since I was about 14, looked like crap until about 2008. As professional design started improving, it left "home-brew" sites looking old, hacky, and unloved. Developers and bedroom hacks, not au fait with the whims of JavaScript or jQuery, were left with an Internet that still looked prehistoric. A gulf formed between the designers who were getting paid to make content look better and those who wanted to, but didn't have the time. It was the haves and the have-nots.

Whilst the beautification of websites built by the "common man" is a consequence of the development of dozens of tools in the open source arena, I'm ascribing the flashpoint to Twitter Bootstrap. Yes, you can sniff a Bootstrap site a mile off; yes, it loads a bit slower except for the people who use Bootstrap (me); and yes, some of the markup syntax is woeful. It remains, however, a genuine enabler of web design that doesn't suck. The clamor of voices that have called out Bootstrap for the reasons mentioned above has, I think, really misunderstood who should be using this tool. I would be angry if I paid a developer to knock me up a hasty site in Bootstrap. Designers should only be using Bootstrap to knock up a proof of concept (rapid application development), before building a bespoke site and living fat off the commission.
If, however, someone asked me to make a site in my spare time, I'm only ever going to use Bootstrap (or, in fairness, Foundation), because it's quick, it's easy, and I'm just not that good with HTML, CSS, or JavaScript (though I'm learning!). Bootstrap, and tools like it, abstract away a lot of the pain that goes into web development (really, who cares if your button is the same as someone else's?) for people who just want to add their voice to the sphere and be heard. Having a million sites that look similar but nice is, to me, a better scenario than having a million sites that are different and look like the love child of a chalkboard and MS Paint.

What's clear is that it has home-brew developers contributing to the conversation about the presentation of content: layout, typography, iconography. Anyone who wants to moan can spend some time on the Wayback Machine.
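For anyone wondering what "quick and easy" means in practice, a responsive, decent-looking page is little more than one stylesheet link and a handful of grid classes. A minimal sketch, assuming Bootstrap 3 (current at the time of writing; the CDN URL and content are purely illustrative):

```html
<!DOCTYPE html>
<html>
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- Bootstrap's CSS does the heavy lifting; no custom styles needed -->
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.2.0/css/bootstrap.min.css">
</head>
<body>
  <div class="container">
    <div class="row">
      <!-- Two columns on desktops, stacked automatically on phones -->
      <div class="col-md-8">
        <h1>My Site</h1>
        <p>Content goes here.</p>
      </div>
      <div class="col-md-4">
        <a class="btn btn-primary btn-block" href="#">Sign up</a>
      </div>
    </div>
  </div>
</body>
</html>
```

That's it: the columns sit side by side on a desktop and stack on a phone, with no media queries written by hand. Hard to argue with that for a spare-time project.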

The Mysteries of Big Data and the Orient … DB

Julian Ursell
30 Jun 2014
4 min read
Mapping the world of big data must be a lot like demystifying the antiquated concept of the Orient: trying to decipher a mass of unknowns. With the ever-multiplying expanse of data, and the natural desire of humans to understand it, as soon as possible and in real time, technology is continually evolving to allow us to make sense of it, make connections within it, turn it into actionable insight, and act upon it in the real world.

It's a huge enterprise, and you have to imagine that with the masses of data collated over the years on legacy database systems, without the capacity for the technological insight and analysis we have now, there are relationships within the data that remain undefined: the known unknowns, the unknown knowns, and the known knowns (that Rumsfeld guy was making sense, you see?). It's fascinating to think what we might learn from the data we have already collected.

There is a burning need these days to break down the mysteries of big data, and developers out there are continually thinking of ways to interpret it, mapping data so that it is intuitive and understandable. The major way developers have reconceptualized data in order to make sense of it is as a network tied together by relationships. The obvious examples are Facebook and LinkedIn, which map out vast networks of people connected by shared properties such as education, location, interest, or profession.

One way of mapping highly connected data is to structure it as a graph, a design that has emerged in recent years as databases have evolved. The main progenitor of this data structure is Neo4j, which is far and away the leader in the field of graph databases, mobilized by a huge number of enterprises working with big data. Neo4j has cornered the market, and it's not hard to see why: it offers a powerful solution with heavy commercial support for enterprise deployments.
In truth there aren't many alternatives out there, but alternatives exist. OrientDB is a hybrid graph-document database that offers the unique flexibility of modeling data as either documents or graphs, while incorporating object-oriented programming as a way of encapsulating relationships. Again, it's a great example of developers imagining ways to accommodate the myriad different data types, and the relationships that connect them all together.

The real mystery of the Orient(DB), however, is the relatively low visible adoption of a database that offers both innovation and reputedly staggering levels of performance (the claim is that it can store up to 150,000 records a second). The question isn't just why it hasn't managed to dent a market essentially owned by Neo4j, but why, on its own merits, more developers haven't opted for the database.

The answer may be vaguely related to commercial drivers: outside of Europe, OrientDB seems to have struggled to create the kind of traction that would push greater levels of adoption. Or perhaps it is related to the considerable development and tuning the project needs for use in production; related to that, maybe OrientDB still has a way to go in terms of enterprise-grade support. For sure, it's hard to say what the deciding factor is. In many ways it's a simple reiteration of the level of difficulty facing startups and new technologies endeavoring to win adoption, and of the fact that the road to that goal is typically a long one.

Regardless, what both Neo4j and OrientDB are valuable for is adapting both familiar and unfamiliar programming concepts to reimagine the way we represent, model, and interpret connections in data, mapping the information of the world.
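To make the graph idea concrete: this is not OrientDB's or Neo4j's actual API, just a toy JavaScript sketch of why the model suits connected data. Relationships are stored as direct links you follow, rather than foreign keys you join on.

```javascript
// A toy illustration of the graph model: entities are nodes, and
// relationships are first-class edges you can traverse directly,
// rather than rows you reconstruct with joins.
var people = {
  alice: { name: "Alice", knows: [] },
  bob:   { name: "Bob",   knows: [] }
};

// An "edge": stored on the node itself, so traversal is pointer-chasing
function connect(from, to) {
  from.knows.push(to);
}

connect(people.alice, people.bob);

// Following the relationship is a direct hop, however big the dataset grows
console.log(people.alice.knows[0].name); // "Bob"
```

Real graph databases add indexing, persistence, and query languages on top, but the core appeal is this shape: the connections are the data.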


Icon Haz Hamburger

Ed Gordon
30 Jun 2014
7 min read
I was privileged enough recently to be at a preview of Chris Chabot's talk on the future of mobile technology. It was a little high-line (conceptual), but it was great at getting the audience thinking about the implications that "mobile" will have in the coming decades; how it will impact our lives, how it will change our perceptions, and how it will change physically.

The problem with this, however, is that mobile user experience just isn't ready to scale yet. The biggest challenge facing mobile isn't its ability to handle an infinite increase in traffic; it's how we navigate this new world of mobile experiences. Frameworks like Bootstrap et al have enabled designers to make content look great on any platform, but finding your way around on mobile, browsing, is still about as fun as punching yourself in the face. In a selection of dozens of applications, I'm in turns required to perform a ballet of different digital interface interactions: pressing, holding, sliding, swiping, pulling (but never pushing?!), and dragging my way to finding the article of choice.

The hamburger eats all

One of the biggest enablers of scalable user interface design is going to be icons, right? A picture paints a thousand words. An icon that can communicate "Touch me for more…" is more valuable in the spatio-prime real estate of the mobile web than a similarly wordy button. Of course, when the same pictures start meaning a different thousand words, everything starts getting messy.

The best example of icons losing meaning is the humble hamburger icon. Used by so many sites and applications to achieve such different end goals, it is becoming unusable. Here are a few examples from fairly prominent sites:

Google+: Opens a reveal menu, which I can also open by swiping left to right.
SmashingMag: Takes me to the bottom of the page, with no faculty to get back up without scrolling manually. The reason for this remains largely unclear to me.
eBay: Changes the view of listed items.
Feels like the Wilhelm Scream of UI design.
LinkedIn: Drop-down list of search options, no menu items.
IGN: Reveal menu which I can only close by pressing a specific part of the "off" page. Can't slide it open.

There's an emerging theme here, in that the hamburger is normally related to content menus (or search), and it normally works by some form of CSS trickery that either drops down or reveals the "under" menu. But this is generally speaking; there's no governance, and it introduces more friction to the cross-site browsing experience.

Compare the hamburger to the humble magnifying glass. How many people have actually used a magnifying glass? I haven't. Despite this setback, through consistent use of the icon with consistent results, we've ended up with a standard pattern that increases the usability and user experience of a site. Want to find something? Click the magnifying glass.

The hamburger isn't the only example of poorly implemented navigation; it's just indicative of the distance we still have to cover before mobile navigation is intuitive. The "Back", "Forward", and "Refresh" buttons have been a staple of browsers since Netscape Navigator, and they have aided the navigation of the Web as we know it. As mobile continues to grow, designers need similarly scalable icons with consistent meaning. This may be the hamburger in the future, but it's not at that point yet.

Getting physical, or, where we discuss touch

Touch isn't yet fully realized on mobile devices. What can I actually press? Why won't Google+ let me zoom in with the "pinch" function? Can I slide this carousel, or not? What about off-screen reveals? Navigating with touch at the moment really feels like being a beta tester for websites: trying things that you know work on other sites to see if they work here. This, as a consumer, isn't the base of a good user experience. Just yesterday, I realised I could switch tabs on Android Chrome by swiping the grey nav bar. I found that by accident.
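Part of the fix is boring consistency in the code itself. As a sketch (plain JavaScript with made-up names, not any particular site's implementation), the menu behind a hamburger can be one small, predictable piece of state that tap and swipe both share, instead of bespoke CSS trickery per site:

```javascript
// One menu, one state, one class of behaviour: whether the user taps the
// hamburger or swipes from the edge, the same functions run and agree.
function createMenu() {
  var open = false;
  return {
    isOpen: function () { return open; },
    toggle: function () { open = !open; return open; }, // tap the icon
    open:   function () { open = true; },               // swipe to reveal
    close:  function () { open = false; }               // swipe or tap away
  };
}

var menu = createMenu();
menu.toggle();               // tap the hamburger: menu opens
console.log(menu.isOpen());  // true
menu.close();                // swipe it shut: same state, no surprises
console.log(menu.isOpen());  // false
```

The point isn't the ten lines of code; it's that every input route converges on the same state, so the icon means the same thing everywhere it appears.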
The one interaction that has come out with some value is the "pull to refresh" action. It's intuitive, in its own way, and it's used as a standard way of refreshing content across Facebook, Twitter, and Google+, or any site that has streamed content. People can use this function without thinking about it, and without many visual prompts, now that it's remained the standard for a few years.

Things like off-screen reveals, carousel swiping, and even how we highlight text are still so in flux that it becomes difficult to know how to achieve a given action from one site to the next. There's no cross-application consistency that allows me to navigate my own browsing experience with confidence. In cases such as Android Chrome, I'm actually losing functionality that developers have spent hours (days?) creating.

Keep it mobile, stupid

Mobile commerce is a great example of forgetting the "mobile" bit of browsing. Let's take Amazon. If I want to find an Xbox 360 RPG, it takes me seven screens and four page loads to get there. I have to actually load up a list of every game, for every console, before I can limit it to the console I actually own. Just give me the option to limit my searches from the home page. That's one page load and a great experience (cheques in the post please, Amazon).

As a user, there are some pretty clear axioms for mobile development:

Browser > app. Don't make me download an app if it's going to require an Internet connection in the future. There's no value in that application.
Keep page calls to a minimum. Don't trust my connection. I could be anywhere. I am mobile.
Mobile is still browsing. I don't often have a specific need; if I do, Google will solve that need. I'm at your site to browse your content.

Understanding that mobile is its own entity is an important step; thinking about connection and page calls is as important as screen size.
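Returning to "pull to refresh" for a moment: part of why it stays intuitive everywhere is that the decision behind it is tiny and identical across sites. Refresh only when the user starts at the top of the stream and pulls far enough. A rough sketch of that decision logic (plain JavaScript; the 80-pixel threshold is an arbitrary illustrative value, not any app's real number):

```javascript
// Decide whether a completed pull gesture should trigger a refresh.
// scrollTop: the stream's scroll position when the touch began (0 = at the top)
// pullDistance: how far the finger dragged downward, in pixels
function shouldRefresh(scrollTop, pullDistance) {
  var THRESHOLD = 80; // illustrative; real apps tune this per design
  // Only a pull that starts at the very top of the stream counts.
  // Anywhere else, the identical gesture is just ordinary scrolling.
  return scrollTop === 0 && pullDistance >= THRESHOLD;
}

console.log(shouldRefresh(0, 120));   // true: at the top, a deliberate pull
console.log(shouldRefresh(0, 30));    // false: too short, probably accidental
console.log(shouldRefresh(400, 120)); // false: mid-stream, treat as scrolling
```

Because the rule is this simple, every site that adopts it behaves the same way, which is exactly the consistency the hamburger currently lacks.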
Tools such as Hood.ie are doing a great job of getting developers and designers to think about "offline first". It's not ready yet, but it is one possible solution to under-the-hood navigation problems.

Adding context

The lack of governing design principles in the emergent space of mobile user experience is limiting its ability to scale to the place we know it's headed. Every new site feels like a test, with nothing other than how to scroll up and down being hugely apparent. This isn't to say all sites need to be the same, but for usability and accessibility not to be impacted, they should operate along a few established protocols. We need more progressive enhancement and collaboration in order to establish a navigational framework that the mobile web can operate in.

Designers work in the common language of signification, and they need to be aware that they all work in the same space. When designing for that hip new product, remember that visitors aren't arriving at your site in isolation; they bring with them the great burden of history, and all the hamburgers they've consumed since.

T.S. Eliot said that "No poet, no artist of any art, has his complete meaning alone. His significance, his appreciation is the appreciation of his relation to the dead poets and artists". We don't work alone. We're all in this together.