Tech Guides

Beating jQuery: Making a Web Framework Worth its Weight in Code

Erik Kappelman
20 Apr 2016
5 min read
Let me give you a quick disclaimer: this is a bit of a manifesto. Last year I started a little technology company with some friends of mine. We were lucky enough to get a solid client for web development right away: an author in need of a blogging app to communicate with the fans of his upcoming book. In another post I have detailed how I used Angular.js, among other tools, to build this responsive, dynamic web app. Using Angular.js is a wonderful experience and I would recommend it to anyone. However, Angular.js really only looks good by comparison. By this I mean that if we allow any web framework to exist in a vacuum, and don't simply rank them against one another, they are all pretty bad. Before you gather your pitchforks and torches to defend your favorite flavor, let me explain myself.

What I am arguing in this post is that many of the frameworks we use are not worth their weight in code. In other words, we add a whole lot of code to our apps when we import the frameworks, and then in practice using the framework is only a little bit better than using jQuery, or even pure JavaScript. And yes, I know that using jQuery means including a whole bunch of code in your web app, but frameworks like Angular.js are often built on top of jQuery anyway. So the weight of jQuery seems to be a necessary evil.

Let's start with a simple HTTP request for information from the backend. This is what it looks like in Angular.js:

```javascript
$http.get('/dataSource').success(function(data) {
  $scope.pageData = data;
});
```

Here is a similar request using Ember.js:

```javascript
App.DataRoute = Ember.Route.extend({
  model: function(params) {
    return this.store.find('data', params.data_id);
  }
});
```

Here is a similar jQuery request:

```javascript
$.get("ajax/stuff.html", function(data) {
  $(".result").html(data);
  alert("Load was performed.");
});
```

It's important for readers to remember that I am a front-end web developer. By this I mean I am sure there are complicated, technical, and valid reasons why Ember.js and Angular.js are far superior to using jQuery. But as a front-end developer, I am interested in speed and simplicity. When I look at these HTTP requests and see that they are overwhelmingly similar, I begin to wonder whether these frameworks are actually getting any better.

One of the big draws of Angular.js and Ember.js is the use of handlebars to ease the creation of dynamic content. Angular.js using handlebars looks something like this:

```html
<h1> {{ dynamicStuff }} </h1>
```

This is great because I can go into my controller, make changes to the dynamicStuff variable, and it shows up on my page. However, the following accomplishes a similar task using jQuery:

```javascript
$(function () {
  var dynamicStuff = "This is dog";
  $('h1').html(dynamicStuff);
});
```

I admit that there are many ways in which Angular.js or Ember.js make developing easier. DOM manipulation definitely takes less code, and overall the development process is faster. However, there are many times when the limitations of the framework drive the development process. This means that developers sacrifice or change functionality simply to fit the framework. Of course, this is somewhat expected. What I am trying to say with this post is that if we are going to sacrifice load times and constrict our development methods in order to use the frameworks of our choice, can they at least be simpler to use?

So, just for the sake of advancement, let's think about what the perfect web framework would be able to do. First of all, there needs to be less setup. The brevity and simplicity of the HTTP request in Angular.js is great, but it requires injecting the correct dependencies in multiple files. This adds stress, opportunities to make mistakes, and development time. So, instead of requiring the developer to grab each specific tool for each specific implementation, what if the framework took care of that for you? By this I mean, what if I could make an HTTP request like this:

```javascript
http('targetURL', get, data)
```

and, when the source is compiled or interpreted, the dependencies needed for this HTTP request were dynamically brought into the mix? This way, we can make a simpler HTTP request and avoid the hassle of setting up the dependencies.
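For illustration only, here is a minimal sketch of what such a helper could look like if you rolled it yourself today. The http function here is hypothetical, not an API from Angular, Ember, or any other framework:

```javascript
// Hypothetical helper, sketched only to illustrate the terse call the author imagines.
function http(url, method, data) {
  return new Promise(function (resolve, reject) {
    var xhr = new XMLHttpRequest();
    xhr.open(method.toUpperCase(), url);
    if (data) {
      xhr.setRequestHeader('Content-Type', 'application/json');
    }
    xhr.onload = function () { resolve(JSON.parse(xhr.responseText)); };
    xhr.onerror = reject;
    xhr.send(data ? JSON.stringify(data) : null);
  });
}

// Usage: no modules to configure and no dependencies to inject.
http('/dataSource', 'get').then(function (data) {
  console.log(data);
});
```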
As far as DOM manipulation goes, the handlebars seem to be about as good as it gets. However, there need to be better ways to target individual instances of a repeated element, such as the <p> tags holding the captions in a photo gallery. The current solutions for problems like this are overly complex, especially when the issue involves one of the most common things on the internet: a photo gallery.

As you can see, I am more of a critic than a problem solver, but I believe the issues I bring up here are valid. As we all become more and more entrenched in the Internet of Things, it would be nice if the development process caught up with the standards of ease that end users demand.

What are Edge Analytics?

Peter Worthy
22 May 2017
5 min read
We already know that mobile is a big market with growing opportunities. We are also hearing about the significant revenue that the IoT will generate; Machina Research predicts that the revenue opportunity will increase to USD $4 trillion by 2025. In the mainstream, both of these technologies are heavily reliant on the cloud, and as they become more pervasive, issues such as response delay and privacy are starting to surface. That's where Edge Computing and Edge Analytics can help.

Cloud, Fog and Mist

As the demand for more complex applications on mobile increases, we need to offload some of the computational demands from the mobile device. An example is speech recognition and processing applications such as Siri, which need to access cloud-based servers in order to process users' requests. The cloud enabled a wide range of services to be delivered on mobile thanks to almost unlimited processing and storage capability. However, the trade-off was the delay arising from the fact that the cloud infrastructure was often a large distance from the device. The solution is to move some of the data processing and storage either to a location closer to the device (a "cloudlet" or "edge node") or to the device itself. "Fog Computing" is where some of the processing and storage of data occurs between the end devices and cloud computing facilities. "Mist Computing" is where the processing and storage of data occurs in the end devices themselves. These are collectively known as "Edge Computing" or "Edge Analytics" and, more specifically for mobile, "Mobile Edge Computing" (MEC).

The benefits of Edge Analytics

As a developer of either mobile or IoT technology, Edge Analytics provides significant benefits.

Responsiveness: In essence, the proximity of the cloudlets or edge nodes to the end devices reduces latency in response. Often higher bandwidth is possible and jitter is reduced. This is particularly important where a service is sensitive to response delays or has high processing demands, such as VR or AR.

Scalability: By processing the raw data either in the end device or in the cloudlet, the demands placed on the central cloud facility are reduced, because smaller volumes of data need to be sent to the cloud. This allows a greater number of connections to that facility.

Maintaining privacy: Maintaining privacy is a significant concern for IoT. Processing data in either end devices or at cloudlets gives the owner of that data the ability to control what is released before it is sent to the cloud. Further, the data can be made anonymous or aggregated before transmission.

Increased development flexibility: Developers of mobile or IoT technology are able to use more contextual information and a wider range of SDKs specific to the device.

Dealing with cloud outages: In March this year, Amazon AWS had a server outage, causing significant disruption for many services that relied upon its S3 storage service. Edge computing and analytics could effectively allow your service to remain operational through a fallback to a cloudlet.
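To make the scalability and privacy points concrete, here is a minimal Node.js sketch of the kind of pre-processing an edge node might do: buffer raw readings locally, then forward only an aggregated, anonymised summary to the cloud. The ingest URL and payload shape are invented for illustration, not a real service.

```javascript
// Illustrative sketch: an edge node aggregates readings before uploading.
// The endpoint URL and payload format are assumptions for demonstration only.
const https = require('https');

const readings = [];

function onSensorReading(reading) {
  // Raw, device-identifying data stays on the edge node.
  readings.push(reading.temperature);
}

function flushToCloud() {
  if (readings.length === 0) return;

  const summary = {
    count: readings.length,
    avgTemperature: readings.reduce((sum, t) => sum + t, 0) / readings.length
    // Note: no per-device IDs or raw samples leave the edge node.
  };
  readings.length = 0;

  const req = https.request('https://cloud.example.com/ingest', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }
  }, (res) => res.resume());
  req.on('error', (err) => console.error('upload failed:', err));
  req.end(JSON.stringify(summary));
}

// Send one aggregate per minute instead of a stream of raw readings.
setInterval(flushToCloud, 60 * 1000);
```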
Examples

IoT technology is being used to monitor and then manage traffic conditions in a specific location. Identifying traffic congestion, determining the source of that congestion, and determining alternative routes requires fast data processing. Using cloud computing results in response delays associated with the transmission of significant volumes of data both to and from the cloud. Edge Analytics means the data is processed closer to that location and sent to drivers in a much shorter time. Another example is supporting the distribution of localized content, such as the addition of advertisements to a video stream that is only being distributed within a small area, without having to divert the stream to another server for processing.

Open issues

As with all emerging technology, there are a number of open or unresolved issues for Edge Analytics. A major and non-technical issue is: what is the business model that will support the provision of cloudlets or edge nodes, and what will be the consequent cost of providing these services? Security also remains a concern: how will the perimeter security of cloudlets compare to that implemented in cloud facilities? As IoT continues to grow, so will the variety of needs for the processing and management of that data. How will cloudlets cope with this demand?

Increased responsiveness, flexibility, and greater control over data to reduce the risk of privacy breaches are strong (though not the only) reasons for adopting Edge Analytics in the development of your mobile or IoT service. It presents a real opportunity for differentiating service offerings in a market that is only going to get more competitive in the near future. Consider the different options that are available to you, and the benefits and pitfalls of each.

About the author

Peter Worthy is an Interaction Designer currently completing a PhD exploring human values and the design of IoT technology in a domestic environment. Professionally, Peter's interests range from design research methods to understanding HCI and UX in emerging technologies and physical computing. Pete also works at a university, tutoring across a range of subjects and supporting a project that seeks to develop context-aware assistive technology for people living with dementia.

Adblocking and the Future of the Web

Sam Wood
11 Apr 2016
6 min read
Kicked into overdrive by Apple's iOS 9 infamously shipping with adblocking options for Safari, the content creators of the Internet have woken up to the serious challenge of ad-blocking tech. The AdBlock Plus Chrome extension boasts over 50 million active users. I'm one of them, and I'm willing to bet that you might be one too. Adblock use is rising massively and globally and shows no sign of slowing down. Commentators have blamed the web-reading public, declared that web publishers have brought this on themselves, and even made worryingly convincing arguments that adblocking is a conspiracy by corporate supergiants to kill the web as we know it. They all agree on one point, though: the way we present and consume web content is going to have to evolve or die. So how might adblocking change the web?

We All Go Native

One of the most proposed and most popular solutions to the adblocking crisis is to embrace "native" advertising. Similar to sponsorship or product placement in other media, native advertising interweaves its sponsor into the body of the content piece. By doing so, an advert is made immune to the traditional scripts and methods that identify and block web ads. This might be a thank-you note to a sponsor at the end of a blog, an 'advertorial' upsell of a product or service, or corporate content marketing, where a company produces and promotes its own content in a bid to garner your attention for its paid products. (Just like this blog. I'm afraid it's content marketing. Would you like to buy a quality tech eBook? How about the Web Developer's Reference guide - your Bible for everything you need to know about web dev! Help keep this Millennial creative in a Netflix account and pop culture tee-shirts.)

The Inevitable Downsides

It turns out nobody wants to read sponsored content: only 24% of readers scroll down on a native ad, and a 2014 survey by Contently revealed two-thirds of respondents saying they felt deceived by sponsored advertising. We may see this changing as the practice becomes more mainstream and readers come to realize it does not impact quality or journalistic integrity. But it's a worrying set of statistics for anyone who hoped direct advertising might save them from the scourge of adblock.

The Great App Exodus

There's an increasingly popular prediction that adblocking may lead to a great exodus of content from browser-based websites to a scattered, app-based ecosystem. We can already see the start of this movement: every major content site bugs you to download its dedicated app, where ads can live free of fear. If you consume most of your mobile media through Snapchat Discover channels, through Facebook mobile sharing, or even through IM services like Telegram, you'll be reading your web content in that app's dedicated built-in reader. That reader is, of course, free of adblocking extensions.

The Inevitable Downsides

The issue here is one of corporate monopoly. Some journalists have criticized Facebook Instant (the tech that has Facebook host articles from popular news sites for faster load times) for giving Facebook too much power over the news business. Vox's Matthew Yglesias predicts a restructuring where "instead of digital media brands being companies that build websites, they will operate more like television studios — bringing together teams that collaborate on the creation of content, which is then distributed through diverse channels that are not themselves controlled by the studio." The control that these platforms could exert raises troubling concerns for the future of the Internet as a bastion of free and public speech.

User Experience with Added Guilt

Alongside adding advertising <script> tags, web developers are increasingly creating sites that detect whether you're using ad-blocking software and punish you accordingly. This can take many forms, from a simple plea to be put on your whitelist in order to keep the servers running, to the cruel and inhuman: some sites are going as far as actively blocking content for users with adblockers. Try accessing an article on the likes of Forbes or CityAM with an adblocker turned on. You'll find yourself greeted with an officious note and a scrambled page that refuses to show you the goods unless you switch off the blocker.
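How do sites detect a blocker in the first place? A common approach is the "bait element" trick: inject an element with ad-like class names and check whether something has hidden or removed it. The sketch below is illustrative only; the class names and the timeout are assumptions, not any specific site's implementation.

```javascript
// Illustrative sketch of "bait element" ad blocker detection.
// Class names and the 100 ms delay are assumptions for demonstration.
function detectAdblock(callback) {
  var bait = document.createElement('div');
  bait.className = 'ad adsbox ad-banner';
  bait.style.height = '1px';
  document.body.appendChild(bait);

  setTimeout(function () {
    // Most blockers either remove the element or collapse it to zero height.
    var blocked = !bait.parentNode || bait.offsetHeight === 0;
    if (bait.parentNode) {
      bait.parentNode.removeChild(bait);
    }
    callback(blocked);
  }, 100);
}

detectAdblock(function (blocked) {
  if (blocked) {
    // A site might show a whitelist plea here instead of the usual ads.
    console.log('Ad blocker detected');
  }
});
```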
The Inevitable Downsides

No website wants to be in a position where it has to beg or bully its visitors. While your committed readers might be happy to whitelist your URL, antagonizing new users is a surefire way to get them to bounce from the site. Sadly, sabotaging their own sites for ad-blocking visitors might be one of the most effective ways for 'traditional' web content providers to survive. After all, most users block ads in order to improve their browsing experience. If the UX of a site on the whitelist is vastly superior to the UX under adblock, it might end up being more pleasant to browse with the extension off.

An Uneasy Truce between Adblockers and Content

In many ways, adblocking was a war that web adverts started. From the pop-up to the autoplaying video, web ad software has been built to be aggressive. The response of adblockers is an indiscriminate, all-or-nothing approach. As Marco Arment, creator of the Peace adblocking app, notes: "Today's web readers [are so] fed up that they disable all ads, or even all Javascript. Web developers and standards bodies couldn't be more out of touch with this issue, racing ahead to give browsers and Javascript even more capabilities without adequately addressing the fundamental problems that will drive many people to disable huge chunks of their browser's functionality." Both sides need to learn to trust one another again. The AdBlock Plus Chrome extension now comes with an automatic whitelist of sites; 'guilt' website UX works to remind us that a few banner ads might be the vital price we pay to keep our favorite mid-sized content site free and accessible. If content providers work to restore sanity to the web on their end, then our need for adblocking software as users will diminish accordingly. It's a complex balance that will need a lot of goodwill from both 'sides' - but if we're going to save the web as we know it, then a truce might be necessary.

Building a better web? How about checking out our Essential Web Dev? Get five titles for only $50!

Top 5 DevOps Tools to Increase Agility

Darrell Pratt
14 Oct 2016
7 min read
DevOps has been broadly defined as a movement that aims to remove the barriers between the development and operations teams within an organization. Agile practices have helped to increase speed and agility within development teams, but the old methodology of throwing the code over the wall to an operations department to manage the deployment of the code to the production systems still persists. The primary goal of adopting DevOps practices is to improve both the communication between disparate operations and development groups and the process by which they work. Several tools are being used across the industry to put this idea into practice. We will cover what I feel is the top set of those tools from the various areas of the DevOps pipeline, in no particular order.

Docker

"It worked on my machine…" Every operations or development manager has heard this at some point in their career. A developer commits their code and promptly breaks an important environment because their local machine isn't configured to be identical to a larger production or integration environment. Containerization has exploded onto the scene, and Docker is at the nexus of the change to isolate code and systems into easily transferable modules. Docker is used in the DevOps suite of tools in a couple of ways. The quickest win is to first use Docker to provide developers with easily usable containers that can mimic the various systems within the stack. If a developer is working on a new RESTful service, they can check out the container that is set up to run Node.js or Spring Boot and write the code for the new service with the confidence that the container will be identical to the service environment on the servers. With the success of using Docker in the development workflow, the next logical step is to use Docker in the build stage of the CI/CD pipeline. Docker can help to isolate the build environment's requirements across different portions of the larger application. By containerizing this step, it is easy to use one generic pipeline to build components spanning from Ruby and Node.js to Java and Golang.

Git & JFrog Artifactory

Source control and artifact management act as a funnel for the DevOps pipeline. The structure of an organization can dictate how they run these tools, be it hosted or served locally. Git's decentralized source code management and high-performance merging features have helped it to become the most popular tool in version control systems. Atlassian Bitbucket and GitHub both provide a good set of tooling around Git and are easy to use and to integrate with other systems. Source code control is vital to the pipeline, but the control and distribution of artifacts into the build and deployment chain is important as well. Artifactory is a one-stop shop for any binary artifact hosted within a single repository, which now supports Maven, Docker, Bower, Ruby Gems, CocoaPods, RPM, Yum, and npm. As the codebase of an application grows and includes a broader set of technologies, the ability to control this complexity from a single point and integrate with a broad set of continuous integration tools cannot be stressed enough. Ensuring that the build scripts are using the correct dependencies, both external and internal, and serving a local set of Docker containers reduces the friction in the build chain and will make the lives of the technology team much easier.

Jenkins

There are several CI servers to choose from in the market.
The hosted tools such as Travis CI, Codeship, Wercker, and CircleCI are all very well suited to driving an integration pipeline, and each caters slightly better to an organization that is more cloud focused (source control and hosting), with deep integrations with GitHub and cloud providers like AWS, Heroku, Google, and Azure. The older and less flashy system is Jenkins. Jenkins has continued to nurture a large community that is constantly adding new integrations and capabilities to the product. The Jenkins Pipeline feature provides a text-based DSL for creating complex workflows that can move code from the repository to the glass with any number of testing stages and intermediate environment deployments. The pipeline DSL can be created from code, and this enables good scaffolding for new projects to be integrated into the larger stack's workflow.

Hashicorp Terraform

At this point we have a system that can build and manage applications through the software development lifecycle. The code is hosted in Git, orchestrated through testing and compilation with Jenkins, and running in reliable containers, and we are storing and proxying dependencies in Artifactory. The deployment of the application is where the operations and development groups come together in DevOps. Terraform is an excellent tool to manage the infrastructure required for running the applications as code itself. There are several vendors in this space: Chef, Puppet, and Ansible, to name just a few. Terraform sits at a higher level than many of these tools by acting as more of a provisioning system than a configuration management system. It has plugins to incorporate many of the configuration tools, so any investment that an organization has made in one of those systems can be maintained. Where Terraform excels is in its ability to easily provision arbitrarily complex multi-tiered systems, both local and cloud hosted. The syntax is simple and declarative, and because it is text, it can be versioned alongside the code and other assets of an application. This delivers on "Infrastructure as Code."

Slack

A chat application was probably not what you were expecting in a DevOps article, but Slack has been a transformative application for many technology organizations. Slack provides an excellent platform for fostering communication between teams (text, voice, and video) and integrating various systems. The DevOps movement stresses the removal of barriers between the teams and individuals who work together to build and deploy applications. Webhooks provide simple integration points for things such as build notifications, environment statuses, and deployment audits. There is a growing number of custom integrations for some of the tools we have covered in this article, and the bot space is rapidly expanding into AI-backed members of the team that answer questions about builds and code, deploy code, or troubleshoot issues in production. It's not a surprise that this space has gained its own name: ChatOps.
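As a small, concrete example of the webhook integrations mentioned above, here is a hedged Node.js sketch that posts a build notification to a Slack incoming webhook. The webhook URL is a placeholder; Slack generates a unique one when you add the integration to your own workspace.

```javascript
// Sketch: post a build notification to a Slack incoming webhook.
// The URL below is a placeholder, not a real webhook.
const https = require('https');

function notifySlack(message) {
  const body = JSON.stringify({ text: message });
  const req = https.request('https://hooks.slack.com/services/T000/B000/XXXX', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }
  }, (res) => res.resume());
  req.on('error', (err) => console.error('Slack notification failed:', err));
  req.end(body);
}

notifySlack('Build #42 passed CI and was deployed to staging.');
```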
Articles covering the top 10 ChatOps strategies will surely follow.

Summary

In this article, we covered several of the tools that integrate into the DevOps culture and how those tools are used and are transforming all areas of the technology organization. While not an exhaustive list, the areas that were covered will give you an idea of the scope of the space and how these various systems can be integrated together.

About Darrell Pratt

Darrell Pratt is a technologist who is responsible for a range of technologies at Cars.com, where he is the director of software development and delivery. He is passionate about technology and still finds time to write a bit of code and hack on hardware projects. Find him on Twitter here: @darrellpratt.

Android O: What's new and why it's been introduced

Raka Mahesa
07 May 2017
5 min read
Eclair, Froyo, Gingerbread, Honeycomb, Ice Cream Sandwich, Jelly Bean, KitKat, Lollipop, Marshmallow, and Nougat. If you thought that was just a list of various sweet treats, well, you're not wrong, but it's also a list of Android version names. And if you guessed that the next version of Android starts with O, well, you're exactly right, because Google has announced Android O, the latest version of Android. So, what's new in the O version of Android? Let's find out.

Notifications have always been one of Android's biggest strengths. Notifications on Android are informative, versatile, and customizable so they fit their users' needs. Google clearly understands this and has kept improving the notification system of Android. They have overhauled how notifications look, made notifications more interactive, and given users a way to manage the importance of each notification. So, of course, for this version of Android, Google added even more features to the notification system.

The biggest feature added to the notification system in Android O is the Notification Channel. Basically, a Notification Channel is an API that allows developers to define categories for notifications from their apps. App users will then be able to control the settings for each category of notifications. This way, users can fine-tune applications so they only show notifications that the users think are important. For example, let's say you have a chat application with two notification channels: the first channel is for notifying users when a new chat message arrives, and the second one is for when the user is added to someone else's friend list. Some users may only care about the new chat messages, so they can turn off certain types of notifications instead of turning off all notifications from the app.

Other features added to the Android O notification system are Notification Snoozing and Notification Timeout. Just like an alarm, Notification Snoozing allows the user to snooze a notification and let it reappear later when the user has time. Meanwhile, Notification Timeout allows developers to set a timeout duration for notifications. Imagine that you want to notify a user about a flash sale that only runs for 2 hours; by adding a timeout, the notification can remove itself when the event is over. Okay, enough about notifications. What else is new in Android O?

Autofill Framework

One of the newest things introduced with Android O is the Autofill Framework. You know how browsers can remember your full name, email address, home address, and other details, and automatically fill in a registration form with that data? Well, the same capability is coming to Android apps via the Autofill Framework. An app can also register itself as an Autofill Service. For example, if you made a social media app, you can let other apps use the user's account data from your app to help users fill in their forms.

Account data

Speaking of account data, with Android O, Google has removed the ability for developers to get a user's account data using the GET_ACCOUNTS permission, forcing developers to use the account chooser dialog instead. So with Android O, developers can no longer automatically fill in a text field with the user's email address and name, and have to let users pick accounts on their own.

And it's not just form filling that gets reworked. In an effort to improve battery life and phone performance, Android O has added a number of limitations to background processes.
For example, on Android O, apps running in the background (that is, apps that don't have any of their interface visible to users) will not be able to get users' location data as frequently as before. Also, apps in the background can no longer create and use background processes. Do keep in mind that some of those limitations will impact any application running on Android O, not just apps that were built using the O version of the SDK. So if you have an app that relies on background processes, you may want to check your app to ensure it works fine on Android O.

App icons

Let's talk about something more visual: app icons. You know how manufacturers add custom skins to their phones to differentiate their products from competitors? Well, some time ago they also changed the shape of all app icons to fit the overall UI of their phones, and this broke some carefully designed icons. Fortunately, with the Adaptive Icon feature introduced in Android O, developers will be able to design an icon that can adjust to a variety of shapes.

We've covered a lot, but there are still tons of other features added to Android O that we haven't discussed, including multi-display support, a new native Audio API, keyboard navigation, new APIs to manage WebView, new Java 8 APIs, and more. Do check out the official documentation for those. That being said, we're still missing the most important thing: what is going to be the full name for Android O? I can only think of Oreo at the moment. What about you?

About the author

Raka Mahesa is a game developer at Chocoarts (chocoarts.com), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

The biggest Sysadmin and Security salary and skills survey of 2015

Packt Publishing
03 Aug 2015
1 min read
See the highlights from our comprehensive Skill Up IT industry salary reports, with data from over 20,000 IT professionals. Read on to discover which skills you should learn and which industry to get into to earn the big bucks! Download the full size infographic here.    

Jupyter as a Data Laboratory: Part 1

Marijn van Vliet
18 May 2016
5 min read
This is part one of a two-part piece on Jupyter, a computing platform used by many scientists to perform their data analysis and modeling. This first part will help you understand what Jupyter is, and the second part will cover why it represents a leap forward in scientific computing.

Jupyter: a data laboratory

If you think that scientists, famous for being careful and precise, always produce well-documented, well-tested, and beautiful code, you are dead wrong. More often than not, a scientist's local code folder is an ill-organized heap of horrible spaghetti code that will give any seasoned software developer nightmares. But the scientist will sleep soundly. That is because, usually, the sort of programming that scientists do is a lot different from software development. They tend to write programming code for a whole different purpose, with a whole different mindset, and with a whole different approach to computing. If you have never done scientific computing before (by which I mean you have never used your computer to analyze measurement data or to "do science"), then leave your TDD, SCRUM, agile, and so on at the door and come join me for a little excursion into Jupyter.

The programming language is your user interface

Over the years, programmers have created applications to cover most computing needs of most users. In domains such as content creation, communication, and entertainment, chances are good that someone has already written an application that does what you want to do. If you're lucky, there's even a friendly GUI to help guide you through the process. But in science, the point is usually to try something that nobody has done before. Hence, any application used for data analysis needs to be flexible. The application has to enable the user to do, well, anything imaginable with a dataset, and the GUI paradigm breaks down. Instead of presenting the user with a list of available options, it becomes more efficient to just ask the user what needs to be accomplished. When driven to the extreme, you end up dropping the whole concept of an application and working directly with a programming language. So it is understandable that when you start Jupyter, you are staring at a mostly blank screen with a blinking cursor. Realize that behind that blinking cursor sits the considerable computational power of your computer: most likely a multicore processor, gigabytes of RAM, and terabytes of storage space, awaiting your command. In many domains, a programming language is used to create an application, which in turn presents you with an interface to do the operation you wanted to do in the first place. In scientific computing, however, the programming language is your interface.

The ingredients of a data laboratory

I think of Jupyter as a data laboratory. The heart of a data laboratory is a REPL (a read-eval-print loop), which allows you to enter lines of programming code that immediately get executed, with the result displayed on the screen. The REPL can be regarded as a workbench, and loading a chunk of data into working memory can be regarded as placing a sample on it, ready to be examined. Jupyter offers several advanced REPL environments, most notably IPython, which runs in your terminal and also ships with its own tricked-out terminal to display inline graphics and offer easier copy-paste.
However, the most powerful REPL that Jupyter offers runs in your browser, allowing you to use multiple programming languages at the same time and embed inline markdown, images, videos, and basically anything the browser can render. The REPL gives access to the underlying programming language. Since the language acts as our primary user interface, it needs to get out of our way as much as possible. This generally means it should be high-level, with terse syntax, and not be too picky about correctness. And of course, it must support an interpreted mode to allow a quick back-and-forth between a line of code and the result of the computation. Of the multitude of programming languages supported by Jupyter, it ships with Python by default, which fulfills the above requirements nicely.

In order to work with the data efficiently (for example, to get it onto your workbench in the first place), you'll want software libraries, which can be regarded as shelves that hold various tools like saws, magnifiers, and pipettes. Over the years, scientists have contributed a lot of useful libraries to the Python ecosystem, so you can have your pick of favorite tools. Since the APIs that these libraries expose are as much a part of the user interface as the programming language, a lot of thought goes into them.

While executing single lines or blocks of code to interactively examine your data is essential, the final ingredient of the data laboratory is the text editor. The editor should be intimately connected to the REPL and allow for a seamless transmission of text between the two. The typical workflow is to first try a step of the data analysis live in the REPL and, when it seems to work, write it down into a growing analysis script. More complicated algorithms are written in the editor first in an iterative fashion, testing the implementation by executing the code in the REPL. Jupyter's notebook environment is notable in this regard, as it blends the REPL and the editor together.

Go check it out

If you are interested in learning more about Jupyter, I recommend installing it and checking out this wonderful collection of interesting Jupyter notebooks.

About the author

Marijn van Vliet is a postdoctoral researcher at the Department of Neuroscience and Biomedical Engineering of Aalto University in Finland. He received his PhD in biomedical sciences in 2015.

Skill Up 2017: What we learned about tech pros and developers

Packt
17 Jul 2017
2 min read
The results are in. 4,731 developers and tech professionals have spoken, and we think you'll find what they have to say pretty interesting. From the key tools and trends that are disrupting and changing the industry, to learning patterns and triggers, this year's report takes a bird's eye view of what's driving change and what's impacting the lives of developers and tech pros around the globe in 2017. Here are the key findings, but download the report to make sure you get the full picture of your peers' professional lives.

- 60% of our respondents have either a 'reasonable amount of choice' or a 'significant amount of choice' over the tools they use at work, which means that understanding the stack and the best ways to manage it is a key part of any technology professional's knowledge.
- 28% of respondents believe technical expertise is used either 'poorly' or 'very poorly' in their organization. Almost half of respondents believe their manager has less technical knowledge than they do.
- People who work in tech are time poor: 64% of respondents say time is the biggest barrier to their professional development.
- The Docker revolution is crossing disciplines, industries, and boundaries; it's a tool being learned by professionals across industries.
- Python is the go-to language for a huge number of different job roles, from management to penetration testers.
- 40% of respondents dedicate time to learning every day; a further 44% dedicate time once a week.
- Young tech workers are keen to develop the skill set they need to build a career but can find it hard to find the right resources; they also say they lack motivation.
- Big data roles are among the highest paying in the software landscape, demonstrating that organizations are willing to pay big bucks for people with the knowledge and experience.
- Tools like Kubernetes and Ansible are increasing in popularity, highlighting that DevOps is becoming a methodology, or philosophy, that organizations are starting to adopt.

That's not everything, but it should give you a flavour of the topics that this year's report touches on. Download this year's Skill Up report here.

Carthage: Dependency management made git-like

Nicholas Maccharoli
01 Apr 2016
4 min read
Why do I need another dependency manager?

Carthage is a decentralized dependency manager for iOS and OS X frameworks. Unlike CocoaPods, Carthage has no central location for hosting repository information (like pod specs). It dictates nothing about what kind of project structure you should have, aside from optionally having a Carthage/ folder in your project's root folder, housing built frameworks in Build/ and, optionally, source files in Checkouts/ if you are building directly from source. This folder hierarchy is automatically generated after running the command "carthage bootstrap". Carthage leaves it open to the end user to decide how to manage third-party libraries: either by having both the Cartfile and the Carthage/* folders checked in under source control, or by keeping just the Cartfile that lists the frameworks you wish to use in your project under source control. Since there is no centralized source of information, project discovery is more difficult with Carthage, but other than that, normal operation is simpler and less error prone when compared to other package managers.

The Setup of Champions

The best way to install and manage Carthage, in my opinion, is through Homebrew. Just run the following command and you should be in business in no time:

```
brew install carthage
```

If for some reason you don't want to go the Homebrew route, you are still in luck! Just download the latest and greatest carthage.pkg from the Releases page.

Common Carthage Workflow

Create a Cartfile with dependencies listed and, optionally, branch or version info.

Cartfile grammar notes: the first keyword is either 'git' for a repository not hosted on GitHub, or 'github' for a repository hosted on GitHub. Next is the location of the repository; if the prefix is 'git', this will be the same as the address you type when running git clone. The third piece is either the branch you wish to pull the latest from, or the version number of a release with one of the following operators: ==, >= or ~>.

```
github "ReactiveCocoa/ReactiveCocoa" "master"  # Latest version of the master branch of ReactiveCocoa
github "rs/SDWebImage" ~> 3.7                  # Version 3.7 and versions compatible with 3.7
github "realm/realm-cocoa" == 0.96.2           # Only use version 0.96.2
```

Basic Commands

Assuming that all went well with the installation step, you should now be able to run carthage bootstrap and watch Carthage go through the Cartfile one by one and fetch the frameworks (or build them after fetching from source, if using --no-use-binaries). Given that this goes without a hitch, all that is left to do is add a new run script phase to your target. To do this, simply click on your target in Xcode and, under the 'Build Phases' tab, click the '+' button and select "New Run Script Phase". Type this in the script section:

```
/usr/local/bin/carthage copy-frameworks
```

And then, below the box where you just typed the last line, add the input files of all the frameworks you wish to include and their dependencies.

Last but not least

Once again, click on your target and navigate to the General tab, then go to the section Linked Frameworks and Libraries and add the frameworks from [Project Root]/Carthage/Build/[iOS or Mac]/* to your project. At this point, everything should build and run just fine. As the project requirements change and you wish to add, remove, or upgrade framework versions, just edit the Cartfile, run the command carthage update, and if needed add new or remove unused frameworks from your project settings. It's that simple!
A Note on Source Control with Carthage

Given that all of your project's third-party source and frameworks are located under the Carthage/ folder, in my experience it is much easier to simply place this entire folder under source control. The merits of doing so are simple: when cloning the project or switching branches, there is no need to run carthage bootstrap or carthage update. This saves a considerable amount of time, and the only expense is an increase in the size of the repository.

About the author

Nick Maccharoli is an iOS / backend developer and open source enthusiast working with a startup in Tokyo, enjoying the current development scene. You can see what he is up to at @din0sr or github.com/nirma.

Deployment done right – Teletraan

Bálint Csergő
03 Aug 2016
5 min read
Tell me, how do you deploy your code? If you still git pull on your servers, you are surely doing something wrong. How will you scale that process? How will you eliminate the chance of human failure? Let me help you; I really want to. Have you heard that Pinterest open sourced its awesome deployment system called Teletraan? If not, read this post. If yes, read it anyway, and maybe you can learn from the way we use it in production at endticket.

What is Teletraan?

It is a deployment system that consists of three main pieces:

- The deploy service is a Dropwizard-based Java web service that provides the core deploy support. It's actually an API, the very heart and brain of this deployment service.
- The deploy board is a Django-based web UI used to perform day-to-day deployment work. Just an amazing user interface for Teletraan.
- The deploy agent is the Python script that runs on every host and executes the deploy scripts.

Is installing it a pain? Not really; the setup is simple. But if you're using Chef as your config management tool of choice, take these, since they might prove helpful: chef-teletraan-agent, chef-teletraan.

Registering your builds

Let the following snippet speak for itself:

```python
import json
import requests

headers = {'Content-type': 'application/json'}
r = requests.post("%s/v1/builds" % teletraan_host,
                  data=json.dumps({'name': teletraan_name,
                                   'repo': name,
                                   'branch': branch,
                                   'commit': commit,
                                   'artifactUrl': artifact_base_url + '/' + artifact_name,
                                   'type': 'GitHub'}),
                  headers=headers)
```

I've got the system all set up. What now?

Basically, you have to make your system compatible with Teletraan. You must have an artifact repository available to store your builds, and add deploy scripts to your project. Create a directory called "teletraan" in your project root and add the following scripts to it: PRE_DOWNLOAD, POST_DOWNLOAD, PRE_RESTART, RESTARTING, POST_RESTART. Although referred to as Deploy Scripts, they can be written in any programming language as long as they are executable. Sometimes the same build artifact can be used to run different services based on different configurations. In this case, create different directories under the top-level teletraan directory with the deploy environment names, and put the corresponding deploy scripts under the proper environment directories separately. For example:

```
teletraan/serverx/RESTARTING
teletraan/serverx/POST_RESTART
teletraan/servery/RESTARTING
```

What do these scripts do?

The host-level deploy cycle looks the following way:

```
UNKNOWN -> PRE_DOWNLOAD -> [DOWNLOADING] -> POST_DOWNLOAD -> [STAGING] -> PRE_RESTART -> RESTARTING -> POST_RESTART -> SERVING_BUILD
```

Autodeploy?

You can define various environments. In our case, every successful master build ends up on the staging cluster automatically. It is powered by Teletraan's autodeploy feature, and it works nicely: whenever Teletraan detects a new build, it gets automatically pushed to the servers.

Manual control

We don't autodeploy the code to the production cluster. Teletraan offers a feature called "promoting builds". Whenever a build proves to be valid on the staging cluster (some automated end-to-end testing and, of course, manual testing is involved in the process), the developer has the ability to promote a build to production.

Oh no! Things went wrong. Is there a way to go back?

Yes, there is a way! Teletraan gives you the ability to roll back any build that happens to be failing, and this can happen instantly.
Teletraan keeps a configurable number of builds on the server for every deployed project; an actual deploy is just a symlink being changed to point at the new release.

Rolling deployments, oh the automation!

Deploy scripts should always run flawlessly. But let's say they do actually fail. What happens then? You can define it. There are three policies in Teletraan:

- Continue with the deployment. Move on to the next host as if nothing happened.
- Roll back everything to the previous version. Make sure everything is fine; it's more important than a hasty release.
- Remove the failing node from production. We have enough capacity left anyway, so let's just cut off the dead branches!

This gives you all the flexibility and security you need when deploying your code to any HA environment.

Artifacts?

Teletraan just aims to be a deployment system, and nothing more. And it serves its purpose amazingly well. You just have to notify it about builds, and you have to make sure that your tarballs are available to every node where deploy agents are running.

Lessons learned from integrating Teletraan into our workflow

It was actually a pretty good experience, even when I was fiddling with the Chef part. We use Drone as our CI server, and there was no plugin available for Drone, so that had to be written as well. Teletraan is a new kid on the block, so you might have to write some lines of code to make it a part of your existing pipeline. But I think that if you're willing to spend a day or two on integrating it, it will pay off for you.

About the author

Bálint Csergő is a software engineer from Budapest, currently working as an infrastructure engineer at Hortonworks. He loves Unix systems, PHP, Python, Ruby, the Oracle database, Arduino, Java, C#, music, and beer.

This Year in Machine Learning

Owen Roberts
22 Jan 2016
5 min read
The world of data has really boomed in the last few years. When I first joined Packt, Hadoop was The Next Big Thing on the horizon, and what people are now doing with all the data we have available to us was unthinkable. Even in the first few weeks of 2016 we're already seeing machine learning being used in ways we probably wouldn't have thought about even a few years ago: we're using machine learning for everything from discovering a supernova that was 570 billion times brighter than the sun to attempting to predict this year's Super Bowl winners based on past results. So what else can we expect in the next year for machine learning, and how will it affect us? Based on what we've seen over the last three years, here are a few predictions about what we can expect to happen in 2016 (with maybe a little wishful thinking mixed in too!).

Machine Learning becomes the new Cloud

Not too long ago every business started noticing the cloud, and with it came a shift in how companies were structured. Infrastructure was radically adapted to take full advantage of the benefits that the cloud offers, and it doesn't look to be slowing down, with Microsoft recently promising to spend over $1 billion in providing free cloud resources for non-profits. Starting this year, it's plausible that we'll see a new drive to also bake machine learning into the infrastructure. Why? Because every company will want to jump on the machine learning bandwagon! The benefits and boons to every company are pretty enticing: ML offers everything from grandiose artificial intelligence to the much more mundane, such as improvements to recommendation engines and targeted ads. So don't be surprised if this year everyone attempts to work out what ML can do for them and starts investing in it.

The growth of MLaaS

Last year we saw Machine Learning as a Service appear on the market in bigger numbers. Amazon, Google, IBM, and Microsoft all have their own algorithms available to customers. It's a pretty logical move, and one that's not at all surprising. Why? Well, for one thing, data scientists are still as rare as unicorns. Sure, universities are creating new courses and training has become more common, but the fact remains that we won't be seeing the benefits of these initiatives for a few years. Second, setting up everything for your own business is going to be expensive. Lots of smaller companies simply don't have the money to invest in their own personal machine learning systems right now, or the time needed to fine-tune them. This is where sellers are going to be putting their investments this year: the smaller companies that can't afford a full ML experience without outside help.

Smarter Security with better protection

The next logical step in security is tech that can sense when there are holes in its own defenses and adapt to them before trouble strikes. ML has been used in one form or another for several years in fraud prevention, but in the IT sector we've been relying on static rules to detect attack patterns. Imagine if systems could detect irregular behavior accurately or set up risk scores dynamically in order to ensure users had the best protection they could at any time. We're a long way from this being fool-proof, unfortunately, but as the year progresses we can expect to see the foundations of this start to appear. After all, we're already starting to talk about it.
Machine Learning and the Internet of Things combine

We're already nearly there, but with the rise in interest in the IoT we can expect these two powerhouses to finally combine. The perfect dream for IoT hobbyists has always been something out of the Jetsons or Wallace and Gromit: when you pass the sensor by the frame of your door in the morning, your kettle suddenly springs to life so you're able to have that morning coffee without waiting like the rest of us primals. But in truth, the Internet of Things has the potential to be so much more than just making the lives of hobbyists easier. By 2020 it is expected that over 25 billion 'Things' will be connected to the internet, and each one will be collating reams and reams of data. For a business with the capacity to process this data, the insight it could collect is a huge boon for everything from new products to marketing strategy. For IoT to really live up to the dreams we have for it, we need a system that can recognize and collate relevant data, which is where an ML system is sure to take center stage.

Big things are happening in the world of machine learning, and I wouldn't be surprised if something incredibly left field happens in the data world that takes us all by surprise. But what do you think is next for ML? If you're looking to either start getting into the art of machine learning or boost your skills to the next level, then be sure to give our Machine Learning tech page a look; it's filled with our latest and greatest ML books and videos out right now, along with the titles we're releasing soon, available to preorder in your format of choice.

An Introduction to Service Workers

Sebastian Müller
05 Jun 2015
7 min read
The shiny new Service Workers provide powerful features such as a scriptable cache for offline support, background syncing, and push notifications in your browser. Just like Shared Workers, a Service Worker runs in the background, but it can even run when your website is not actually open. These new features can make your web apps feel more like native mobile apps.

Current Browser Support

As of writing this article, Service Workers are enabled in Chrome 41 by default. But this does not mean that all features described in the W3C Service Workers draft are fully implemented yet. The implementation is in the very early stages and things may change. In this article, we will cover the basic caching features that are currently available in Chrome. If you want to use Service Workers in Firefox 36, it is currently a flagged feature that you must enable manually. In order to do this, type "about:config" into your URL field, search for "service worker", and set the "dom.serviceWorkers.enabled" setting to true. After a restart of Firefox, the new API is available for use.

Let's get started - Registering a Service Worker

index.html:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <title>ServiceWorker Demo</title>
</head>
<body>
  <script>
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('my-service-worker.js')
        .then(function(registration) {
          console.log('yay! serviceWorker installed!', registration);
        }, function(error) {
          console.log('something went wrong!:', error);
        });
    }
  </script>
</body>
</html>
```

To register a Service Worker, we call navigator.serviceWorker.register() with a path to our Service Worker file. For security reasons, it is important that your Service Worker file is located at the top level relative to your website; paths like 'scripts/my-service-worker.js' won't work. The register method returns a Promise, which is fulfilled when the installation process is successful. The promise can be rejected if you, for example, have a syntax error in your Service Worker file. Cool, so let's review what a basic Service Worker that lives in the 'my-service-worker.js' file might look like.

A basic Service Worker

my-service-worker.js:

```javascript
this.addEventListener('install', function(event) {
  console.log('install event!');
  // some logic here...
});
```

In our Service Worker file, we can register event listeners for several events that are triggered by the browser. In our case, we listen for the 'install' event, which is triggered when the browser sees the Service Worker for the first time. Later on, we will add some code to the 'install' event listener to make our web app offline ready. For now, we add a simple console.log message to be able to check that our event listener function was called. Now when you open the index.html file in your browser (important: you need to be running a webserver to serve these two files), you should see a success log message in the Chrome developer tools console. You might wonder why the 'install event!' log message from the Service Worker file is not showing up in the console. This is due to the fact that all Service Workers run in a separate thread. Next, we will cover how to debug Service Workers.

In Chrome, you can open the URL "chrome://serviceworker-internals" to get a list of Service Workers registered in your browser. When you visit this page right after you've visited index.html, you should see a worker with the installation status 'ACTIVATED' and running status 'RUNNING'. Then you will know that everything went fine.
After a while, the worker should change to the running status 'STOPPED'. This is because Chrome completely handles the lifetime of a Service Worker; you have no guarantee, for example, of how long your Service Worker keeps running after installation. After digging into the basics of installing Service Workers, we clearly have no advantages yet. So let's take a look at the offline caching features of Service Workers next.

Make your Web apps Offline Ready

Let's face it: the Cache Manifest standard for making your apps offline ready has some big disadvantages. It's not possible to script the caching mechanism in any way; you have to let the browser handle the caching logic. With Service Workers, the browser gives you the moving parts and lets you handle the caching the way you want. So let's dive into the basic caching mechanisms that Service Workers provide. Let's get back to our index.html file and add an image named 'my-image.png' to the body that we want to have available when we are offline:

<body>
  <img src="my-image.png" alt="my image" />
  …
</body>

Now that we have an image in our index.html, let's extend our existing Service Worker to cache our image 'my-image.png' and our index.html for offline usage:

// (1.)
importScripts('./cache-polyfill.js');

// this event listener gets triggered when the browser sees the ServiceWorker the first time
this.addEventListener('install', function(event) {
  console.log('install!');
  // (2.)
  event.waitUntil(
    caches.open('my-cache')
      .then(function(cache) {
        console.log('cache opened');
        // (3.)
        return cache.addAll([
          '/',
          '/my-image.png'
        ]);
      })
  );
});

this.addEventListener('fetch', function(event) {
  // (4.)
  event.respondWith(
    caches.match(event.request).then(function(response) {
      if (response) {
        // (5.)
        console.log('found response in cache for:', event.request.url);
        return response;
      }
      // (6.)
      return fetch(event.request);
    })
  );
});

1. We use a global function available in the Service Worker context called importScripts, which lets us load external scripts so we can use libraries and other code in our Service Worker's logic. As of writing this article, not all of the caching API is implemented in the current version of Chrome. This is why we load a cache polyfill that adds the missing API our application needs to work in the browser.
2. In our install event listener, we use the waitUntil method of the provided event object to tell the browser, via a promise, when the installation process in our Service Worker is finished. The provided promise is the return value of the caches.open() method, which opens the cache named 'my-cache'.
3. When the cache has been opened successfully, we add the index.html and our image to the cache. The browser pre-fetches all of the defined files and adds them to the cache. Only when all requests have been executed successfully is the installation step of the Service Worker finished, and only then can the Service Worker be activated by the browser.
4. The event listener for the event type 'fetch' is called when the browser wants to fetch a resource, e.g. an image. In this listener, you have full control over what you want to send as a response with the event.respondWith() method.
5. In our case, we open up the cache used in the install event to see if we have a cached response for the given request. When we find a matching response, we return the cached response. With this mechanism, you are able to serve all cached files, even if you are offline or the webserver is down.
6. When we have no cached response for the given request, the browser handles the fetching of the uncached file with the shiny new fetch API.

To see if the cache is working as intended, open the index.html file in your browser and then shut down your web server. When you refresh the page, you should get the index.html and the my-image.png file out of the Service Worker cache. With these few lines of Service Worker code, you have implemented a basic offline-ready web application that caches your index.html and your image file. As already mentioned, this is only the beginning of Service Workers, and many more features like push notifications will be added this year.

About the Author

Sebastian Müller is Senior Software Engineer at adesso AG in Dortmund, Germany. He spends his time building Single Page Applications and is interested in JavaScript architectures. He can be reached at @Sebamueller on Twitter and as SebastianM on GitHub.

article-image-picking-tensorflow-can-now-pay-dividends-sooner

Picking up TensorFlow can now pay dividends sooner

Sam Abrahams
23 May 2016
9 min read
It's been nearly four months since TensorFlow, Google's computation graph machine learning library, was open sourced, and the momentum from its launch is still going strong. In that time, both Microsoft and Baidu have released their own deep-learning libraries (CNTK and warp-ctc, respectively), and the machine learning arms race has escalated even further with Yahoo open sourcing CaffeOnSpark. Google hasn't been idle, however, and with the recent releases of TensorFlow Serving and the long-awaited distributed runtime, now is the time for businesses and individual data scientists to ask: is it time to commit to TensorFlow?

TensorFlow's most appealing features

There are a lot of machine learning libraries available today—what makes TensorFlow stand out in this crowded space?

1. Flexibility without headaches

TensorFlow heavily borrows concepts from the more tenured machine learning library Theano. Many models written for research papers were built in Theano, and its composable, node-by-node writing style translates well when implementing a model whose graph was drawn by hand first. TensorFlow's API is extremely similar. Both Theano and TensorFlow feature a Python API for defining the computation graph, which then hooks into high-performance C/C++ implementations of mathematical operations. Both are able to automatically differentiate their graphs with respect to their inputs, which facilitates learning on complicated neural network structures, and both integrate tightly with NumPy for defining tensors (n-dimensional arrays). However, one of the biggest advantages TensorFlow currently has over Theano (at least when comparing features both libraries share) is its compile time. As of the time of writing this, Theano's compile times can be quite lengthy, and although there are options to speed up compilation for experimentation, they come at the cost of a slower output model. TensorFlow's compilation is much faster, which leads to fewer headaches when trying out slightly different versions of models.

2. It's backed by Google (and the OSS community)

At first, it may sound more like brand recognition than a tangible advantage, but when I say it's "backed" by Google, what I mean is that Google is seriously pouring tons of resources into making TensorFlow an awesome tool. There is an entire team at Google dedicated to maintaining and improving the software steadily and visibly, while simultaneously running a clinic on how to properly interact with and engage the open source community. Google proved itself willing to adopt quality submissions from the community as well as flexible enough to adapt to public demands (such as moving the master contribution repository from Google's self-hosted Gerrit server to GitHub). These actions, combined with genuinely constructive feedback from Google's team on pull requests and issues, helped make the community feel like this was a project worth supporting. The result? A continuous stream of little improvements and ideas from the community while the core Google team works on releasing larger features. Not only does TensorFlow receive the benefits of a larger contributor base because of this, it is also more likely to withstand user decay as more people have invested time in making TensorFlow their own.

3. Easy visualizations and debugging with TensorBoard

TensorBoard was the shiny toy that shipped on release with the first open source version of TensorFlow, but it's much more than eye candy.
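To make the node-by-node style concrete, here is a minimal sketch written against the TensorFlow Python API of roughly this era. The variable names and the log directory are purely illustrative, and a couple of function names (flagged in the comments) have since been renamed in later releases:

import tensorflow as tf

# A toy graph: y = w * x + b, built node by node (much like Theano)
x = tf.placeholder(tf.float32, name="x")
w = tf.Variable(0.5, name="w")
b = tf.Variable(1.0, name="b")
y = w * x + b

# TensorFlow differentiates the graph for us: dy/dw and dy/db
grads = tf.gradients(y, [w, b])

with tf.Session() as sess:
    # later releases renamed this to tf.global_variables_initializer()
    sess.run(tf.initialize_all_variables())
    print(sess.run([y] + grads, feed_dict={x: 3.0}))

    # Writing the graph out makes it browsable in TensorBoard
    # (tf.train.SummaryWriter became tf.summary.FileWriter in later releases)
    writer = tf.train.SummaryWriter("/tmp/tf_demo_logs", sess.graph)
    writer.close()

Point TensorBoard at that log directory (tensorboard --logdir=/tmp/tf_demo_logs) and it will render the graph you just defined.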
Not only can you use TensorBoard as a guide to ensure that what you've coded matches your reference model, but you can also keep track of data flowing through your model. This is especially useful when debugging subsections of your graph, as you can go in and see where any hiccups may have occurred.

4. TensorFlow Serving cuts the development-deployment cycle by nearly half

The typical life cycle of machine learning models in the business world generally looks like this:

1. Research and develop a model that is more accurate/faster/more descriptive than the previous model
2. Write down the exact specifications of the finalized model
3. Recreate the model in C++/C/Java/some other fast, compiled language
4. Push the new model into deployment, replacing the old model
5. Repeat

On release, TensorFlow promised to "connect research and production." However, the community had to wait until just recently for that promise to come to fruition with TensorFlow Serving. This software runs as a server that can natively serve models built in TensorFlow, which makes the new life cycle look like this:

1. Research and develop a new model
2. Hook the new model into TensorFlow Serving
3. Repeat

While there is overhead in learning how to use TensorFlow Serving, the process of hooking up new models stays the same, whereas rewriting new models in a different language is time-consuming and difficult.

5. Distributed learning out of the box

The distributed runtime is one of the newest features to be pushed to the TensorFlow repository, but it has been, by far, the most eagerly anticipated aspect of TensorFlow. Without having to incorporate any other libraries or software packages, TensorFlow is able to run distributed learning tasks on heterogeneous hardware with various CPUs and GPUs. This feature is absolutely brand new (it came out in the middle of writing this post!), so do your research on how to use it and how well it runs.

Areas to look for improvement

TensorFlow can't claim to be the best at everything, and there are several sticking points that should be addressed sooner rather than later. Luckily, Google has been making steady improvements to TensorFlow since it was released, and I would be surprised if most of these were not remedied within the next few months.

Runtime speed

Although the TensorFlow team promises deployment-worthy models from compiled TensorFlow code, at this time its single-machine training speed lags behind most other options. The team has made improvements in speed since its release, but there is still more work to be done. In-place operations, a more efficient node placement algorithm, and better compression techniques could help here. Distributed benchmarks are not available at this time—expect to see them after the next official TensorFlow release.

Pre-trained models

Libraries such as Caffe, Torch, and Theano have a good selection of pre-trained, state-of-the-art models implemented in their libraries. While Google did release a version of its Inception-v3 model in TensorFlow, it needs more options to provide a starting place for more types of problems.

Expanded distributed support

Yes, TensorFlow did push code for its distributed runtime, but it still needs better documentation as well as more examples. I'm incredibly excited that it's available to try out right now, but it's going to take some time for most people to put it into production.

Interested in getting up and running with TensorFlow? You'll need a primer on Python.
Luckily, our Python Fundamentals course in Mapt gives you an accessible yet comprehensive journey through Python - and this week it's completely free. Click here, log in, then get stuck in...

The future

Most people want to use software that is going to last for more than a few months—what does the future look like for TensorFlow? Here are my predictions about the medium-term future of the library.

Enterprise-level distributions

Just as Hadoop has commercial distributions of its software, I expect to see more and more companies offering supported suites that tie into TensorFlow. Whether they offer more pre-trained models built on top of Keras (which already supports a TensorFlow backend), or make TensorFlow work seamlessly with a distributed file system like Hadoop, I foresee a lot of demand for enterprise features and support around TensorFlow.

TensorFlow's speed will catch up (and most users won't need it)

As mentioned earlier, TensorFlow still lags behind many other libraries out there. However, with the improvements already made, it's clear that Google is determined to make TensorFlow as efficient as possible. That said, I believe most applications of TensorFlow won't desperately need the speed increase. Of course, it's nice to have your models run faster, but most businesses out there don't have petabytes of useful data to work with, which means that model training usually doesn't take the "weeks" that we often see claimed as training time.

TensorFlow is going to get easier, not more difficult, over time

While there are definitely going to be many new features in upcoming releases of TensorFlow, I expect to see the learning curve of the software go down as more resources, such as tutorials, examples, and books, are made available. The documentation's terminology has already changed in places to be more understandable, and navigation within the documentation should improve over time. Finally, while most of the latest features in TensorFlow don't have the friendliest APIs right now, I'd be shocked if more user-friendly versions of TensorFlow Serving and the distributed runtime weren't in the works right now.

Should I use TensorFlow?

TensorFlow appears primed to fulfil the promise that was made back in November: a distributed, flexible data flow graph library that excels at neural network composition. I leave it to you decision makers to figure out whether TensorFlow is the right move for your own machine learning tasks, but here is my overall impression: no other machine learning framework targeted at production-level tasks is as flexible, powerful, or improving as rapidly as TensorFlow. While other frameworks may carry advantages over TensorFlow now, Google is putting in the effort to make consistent improvements, which bodes well for a community that is still in its infancy.

About the author

Sam Abrahams is a freelance data engineer and animator in Los Angeles, CA. He specializes in real-world applications of machine learning and is a contributor to TensorFlow. Sam runs a small tech blog, Memdump, and is an active member of the local hacker scene in West LA.
article-image-10-to-dos-for-industrial-internet-architects

10 To-dos for Industrial Internet Architects

Aaron Lazar
24 Jan 2018
4 min read
[Note: This is a guest post by Robert Stackowiak, a technology business strategist at the Microsoft Technology Center. Robert has co-authored the book Architecting the Industrial Internet with Shyam Nath, who is the director of technology integrations for Industrial IoT at GE Digital. You may also check out our interview with Shyam for expert insights into the world of IIoT, Big Data, Artificial Intelligence and more.]

Just about every day, one can pick up a technology journal or view an online technology article about what is new in the Industrial Internet of Things (IIoT). These articles usually provide insight into IIoT solutions to business problems, or into how a specific technology component is evolving to provide a needed function. Various industry consortia, such as the Industrial Internet Consortium (IIC), provide extremely useful documentation defining key aspects of the IIoT architecture that the architect must consider. These broad reference architecture patterns have also begun to consistently include specific technologies and common components. The authors of the title Architecting the Industrial Internet felt the time was right for a practical guide for architects. The book provides guidance on how to define and apply an IIoT architecture in a typical project today by describing architecture patterns. In this article, we explore ten to-dos for Industrial Internet architects designing these solutions.

Just as technology components are showing up in common architecture patterns, their justification and use cases are also being discovered through repeatable processes. The sponsorship and requirements for these projects are almost always driven by leaders in the line of business in a company. Techniques for uncovering these projects can be replicated as architects gain the needed discovery skills.

Industrial Internet Architects To-dos:

1. Understand IIoT: Architects will first seek to gain an understanding of what is different about the Industrial Internet, the evolution to specific IIoT solutions, and how legacy technology footprints might fit into that architecture.
2. Understand IIoT project scope and requirements: They next research guidance from industry consortia and gather functional viewpoints. This helps them better understand the requirements their architecture must deliver solutions to, and the scope of effort they will face.
3. Act as a bridge between business and technical requirements: They quickly come to realize that, since successful projects are driven by responding to business requirements, the architect must bridge the line-of-business and IT divide present in many companies. They are always on the lookout for requirements and means to justify these projects.
4. Narrow down viable IIoT solutions: Once the requirements are gathered and a potential project appears to be justifiable, requirements and functional viewpoints are aligned in preparation for defining a solution.
5. Evaluate IIoT architectures and solution delivery models: Time to market of a proposed Industrial Internet solution is often critical to business sponsors. Most architecture evaluations include consideration of applications or pseudo-applications that can be modified to deliver the needed solution in a timely manner.
6. Have a good grasp of IIoT analytics: The intelligence delivered by these solutions is usually linked to the timely analysis of data streams, so care is taken in defining Lambda architectures (or Lambda variations), including machine learning and data management components and where analysis and response must occur.
7. Evaluate deployment options: Technology deployment options are explored, including the capabilities of proposed devices, networks, and cloud or on-premises backend infrastructures.
8. Assess IIoT security considerations: Security is top of mind today, and proper design includes not only securing the backend infrastructure but also securing the networks and the edge devices themselves.
9. Conform to governance and compliance policies: The viability of the Industrial Internet solution can be determined by whether proper governance is put into place and whether compliance standards can be met.
10. Keep up with the IIoT landscape: While relying on current best practices, the architect must keep an eye on the future, evaluating emerging architecture patterns and solutions.

Author's Bio

Robert Stackowiak is a technology business strategist at the Microsoft Technology Center in Chicago, where he gathers business and technical requirements during client briefings and defines Internet of Things and analytics architecture solutions, including those that reside in the Microsoft Azure cloud. He joined Microsoft in 2016 after a 20-year stint at Oracle, where he was Executive Director of Big Data in North America. Robert has spoken at industry conferences around the world and co-authored many books on analytics and data management, including Big Data and the Internet of Things: Enterprise Architecture for A New Age, published by Apress, five editions of Oracle Essentials, published by O'Reilly Media, Oracle Big Data Handbook, published by Oracle Press, Achieving Extreme Performance with Oracle Exadata, published by Oracle Press, and Oracle Data Warehousing and Business Intelligence Solutions, published by Wiley. You can follow him on Twitter at @rstackow.

article-image-icon-haz-hamburger

Icon Haz Hamburger

Ed Gordon
30 Jun 2014
7 min read
I was privileged enough recently to be at a preview of Chris Chabot's talk on the future of mobile technology. It was a little high-level (conceptual), but it was great at getting the audience thinking about the implications that "mobile" will have in the coming decades; how it will impact our lives, how it will change our perceptions, and how it will change physically. The problem with this, however, is that mobile user experience just isn't ready to scale yet. The biggest challenge facing mobile isn't its ability to handle an infinite increase in traffic; it's how we navigate this new world of mobile experiences. Frameworks like Bootstrap et al have enabled designers to make content look great on any platform, but finding your way around, browsing, on mobile is still about as fun as punching yourself in the face. In a selection of dozens of applications, I'm in turns required to perform a ballet of different digital interface interactions: pressing, holding, sliding, swiping, pulling (but never pushing?!), and dragging my way to finding the article of choice.

The hamburger eats all

One of the biggest enablers of scalable user interface design is going to be icons, right? A picture paints a thousand words. An icon that can communicate "Touch me for more…" is more valuable in the spatially prime real estate of the mobile web than a similarly wordy button. Of course, when the same pictures start meaning a different thousand words, everything starts getting messy. The best example of icons losing meaning is the humble hamburger icon. Used by so many sites and applications to achieve such different end goals, it is becoming unusable. Here are a few examples from fairly prominent sites:

Google+: Opens a reveal menu, which I can also open by swiping left to right.
SmashingMag: Takes me to the bottom of the page, with no faculty to get back up without scrolling manually. The reason for this remains largely unclear to me.
eBay: Changes the view of listed items. Feels like the Wilhelm Scream of UI design.
LinkedIn: Drop-down list of search options, no menu items.
IGN: Reveal menu which I can only close by pressing a specific part of the "off" page. Can't slide it open.

There's an emerging theme here, in that it's normally related to content menus (or search), and it normally happens by some form of CSS trickery that either drops down or reveals the "under" menu. But this is generally speaking. There's no governance, and it introduces more friction to the cross-site browsing experience. Compare the hamburger to the humble magnifying glass: how many people have used a magnifying glass? I haven't. Despite this setback, through consistent use of the icon with consistent results, we've ended up with a standard pattern that increases the usability and user experience of a site. Want to find something? Click the magnifying glass. The hamburger isn't the only example of poorly implemented navigation; it's just indicative of the distance we still have to cover to get to a point where mobile navigation is intuitive. The "Back", "Forward", and "Refresh" buttons have been a staple of browsers since Netscape Navigator - they have aided the navigation of the Web as we know it. As mobile continues to grow, designers need similarly scalable icons, with consistent meaning. This may be the hamburger in the future, but it's not at that point yet.

Getting physical, or, where we discuss touch

Touch isn't yet fully realized on mobile devices. What can I actually press?
Why won’t Google+ let me zoom in with the “pinch” function? Can I slide this carousel, or not? What about off-screen reveals? Navigating with touch at the moment really feels like you’re a beta tester for websites; trying things that you know work on other sites to see if they work here. This, as a consumer, isn’t the base of a good user experience. Just yesterday, I realised I could switch tabs on Android Chrome by swiping the grey nav bar. I found that by accident. The one interaction that has come out with some value is the “Pull to refresh” action. It’s intuitive, in its own way, and it’s also used as a standard way of refreshing content across Facebook, Twitter, and Google+—any site that has streamed content. People can use this function without thinking about it and without many visual prompts now that it’s remained the standard for a few years. Things like off-screen reveals, carousel swiping, and even how we highlight text are still so in flux that it becomes difficult to know how to achieve a given action from one site to the next. There’s no cross application consistency that allows me to navigate my own browsing experience with confidence. In cases such as the Android Chrome, I’m actually losing functionality that developers have spent hours (days?) creating. Keep it mobile, stupid Mobile commerce is a great example of forgetting the “mobile” bit of browsing. Let’s take Amazon. If I want to find an Xbox 360 RPG, it takes me seven screens and four page loads to get there. I have to actually load up a list of every game, for every console, before I can limit it to the console I actually own. Just give me the option to limit my searches from the home page. That’s one page load and a great experience (cheques in the post please, Amazon). As a user, there are some pretty clear axioms for mobile development: Browser > app. Don’t make me download an app if it’s going to require an Internet connection in the future. There’s no value in that application. Keep page calls to a minimum. Don’t trust my connection. I could be anywhere. I am mobile. Mobile is still browsing. I don’t often have a specific need; if I do, Google will solve that need. I’m at your site to browse your content. Understanding that mobile is its own entity is an important step – thinking about connection and page calls is as important as screen size. Tools such as Hood.ie are doing a great job in getting developers and designers to think about “offline first”. It’s not ready yet, but it is one possible solution to under the hood navigation problems. Adding context A lack of governing design principles in the emergent space of mobile user experience is limiting its ability to scale to the place we know it’s headed. Every new site feels like a test, with nothing other than how to scroll up and down being hugely apparent. This isn’t to say all sites need to be the same, but for usability and accessibility to not be impacted, they should operate along a few established protocols. We need more progressive enhancement and collaboration in order to establish a navigational framework that the mobile web can operate in. Designers work in the common language of signification, and they need to be aware that they all work in the same space. When designing for that hip new product, remember that visitors aren’t arriving at your site in isolation–they bring with them the great burden of history, and all the hamburgers they’ve consumed since. T.S. Eliot said that “No poet, no artist of any art, has his complete meaning alone. 
His significance, his appreciation is the appreciation of his relation to the dead poets and artists”. We don’t work alone. We’re all in this together.