Tech Guides


Taking advantage of SpriteKit in Cocoa Touch

Milton Moura
04 Jan 2016
7 min read
Since Apple announced SpriteKit at WWDC 2013, along with iOS 7, it has been promoted as a framework for building 2D games with high-performance graphics and engaging gameplay. But, as I will show you in this post, by taking advantage of some of it's features in your UIKit-based application, you'll be able to add some nice visual effects to your user interface without pulling too much muscle. We will use the latest stable Swift version, along with Xcode 7.1 for our code examples. All the code in this post can be found in this github repository. SpriteKit's infrastructure SpriteKit provides an API for manipulating textured images (sprites), including animations, applying image filters, with optional physics simulation and sound playback. Although Cocoa Touch also provides other frameworks for these things, like Core Animation, UIDynamics and AV Foundation, SpriteKit is especially optimized for doing these operations in batch and performs them on a lower lever, transforming all graphics operations directly into OpenGL commands. The top-level user interface object for SpriteKit are SKView's, that can be added to any application view controller, and then are used to present scene objects, of type SKScene, composed of possibly multiple nodes with content, that will render seamlessly with other layers or views that might also be contained in the application's current view hierarchy. This allows us to add smooth and optimized graphical effects to our application UI, enriching the user experience and keeping our refresh rate at 60hz. Our sample project To show how to combine typical UIKit controls with SpriteKit, we'll build a sample login screen, composed of UITextFields, UIButtons and UILabels, for our wonderful new WINTER APP. But instead of a boring, static background, we'll add an animated particle effect to simulate falling snow and apply a Core Image vignette filter to mask them under a niffy spotlight-type effect. 1. 
Creating the view hierarchy We'll start with a brand new Swift Xcode project, selecting the iOS > Single View Application template and opening the Main Storyboard. In the existing View Controller Scene, we add a new UIView that anchors to it's parent view's sides, top and bottom and change it's class from the default UIView to SKView. Also make sure the background color for this view is dark, so that the particles that we'll add later have a nice contrast. Now, we'll add a few UITextFields, UILabels and UIButtons to replicate the following login screen. Also, we need an IBOutlet to our SKView. Let's call it sceneView. This is the SpriteKit view where we will add the SKScene with the particle and image filter effect. 2. Adding a Core Image filter We're done with UIKit for now. We currently have a fully (well, not really) functional login screen and it's now time to make it more dynamic. The first thing we need is a scene, so we'll add a new Swift class called ParticleScene. In order to use SpriteKit's objects, let's not forget to add an import statement for that and declare that our class is an SKScene.    import SpriteKit    class ParticleScene : SKScene    {        ...    } The way we initialize a scene in SpriteKit is by overriding the didMoveToView(_:) method, which is called when a scene is added to an SKView. So let's do that and setup the Core Image filter. If you are not familiar with Core Image, it is a powerful image processing framework that provides over 90 filters that can be applied in real time to images, videos and, coincidentally, to SpriteKit nodes, of type SKNode. An SKNode is the basic unit of content in SpriteKit and our SKScene is one big node for rendering. Actually, SKScene is an SKEffectNode, which is a special type of node that allows its content to be post processed using Core Image filters. 
In the following snippet, we add a CIVignetteEffect filter centered on our scene, with a radius equal to the width of our view frame:

```swift
override func didMoveToView(view: SKView) {
    scaleMode = .ResizeFill

    // initialize the Core Image filter
    if let filter = CIFilter(name: "CIVignetteEffect") {
        // set the default input parameter values
        filter.setDefaults()
        // make the vignette center be the center of the view
        filter.setValue(CIVector(CGPoint: view.center), forKey: "inputCenter")
        // set the radius to be equal to the view width
        filter.setValue(view.frame.size.width, forKey: "inputRadius")
        // apply the filter to the current scene
        self.filter = filter
        self.shouldEnableEffects = true
    }
    presentingView = view
}
```

If you run the application as is, you'll notice a nice spotlight effect behind our login form. But we're not done yet.

3. Adding a particle system

Since this is a WINTER APP, let's add some falling snowflakes in the background. Add a new SpriteKit Particle File to the project and select the Snow template.
Next, we add a method to set up our particle emitter node, an SKEmitterNode, which hides all the complexity of a particle system:

```swift
func startEmission() {
    // load the snow template from the app bundle
    emitter = SKEmitterNode(fileNamed: "Snow.sks")
    // emit particles from the top of the view
    emitter.particlePositionRange = CGVectorMake(presentingView.bounds.size.width, 0)
    emitter.position = CGPointMake(presentingView.center.x, presentingView.bounds.size.height)
    emitter.targetNode = self
    // add the emitter to the scene
    addChild(emitter)
}
```

To finish things off, let's create a new property to hold our particle scene in the ViewController and start the particles in the viewDidAppear(_:) method:

```swift
class ViewController: UIViewController {
    ...
    let emitterScene = ParticleScene()
    ...
    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
        emitterScene.startEmission()
    }
}
```

And we're done! We now have a nice UIKit login form with an animated background that is much more compelling than a simple background color, gradient or texture.

Where to go from here

You can explore more Core Image filters to add stunning effects to your UI, but be warned that some are not suited for real-time, full-frame rendering. SpriteKit is very powerful, and you can even use OpenGL shaders in nodes and particles. You are welcome to check out the source code for this article; you'll see that it has a little extra Core Motion trick that shifts the direction of the falling snow according to the position of your device.

About the author

Milton Moura (@mgcm) is a freelance iOS developer based in Portugal. He has worked professionally in several industries, from aviation to telecommunications and energy, and is now fully dedicated to creating amazing applications using Apple technologies.
With a passion for design and user interaction, he is also very interested in new approaches to software development. You can find out more at http://defaultbreak.com


What does 'Infrastructure as Code' actually mean?

Raka Mahesa
12 Apr 2017
5 min read
Fifteen years ago, adding an additional server to a project's infrastructure was a process that could take days, if not weeks. Nowadays, thanks to cloud technology, you can get a new server ready for your project in just a few seconds, with a couple of clicks.

New and better technology brings its own set of problems, though. Because it's now very easy to add servers to your project, your capability to manage your infrastructure usually doesn't grow as fast as the infrastructure itself. This leads to a lot of problems in the backend, such as inconsistent server configurations or configurations that can't be replicated. It's a common problem among massive web projects, so various approaches have been devised to tackle it. One such approach is known as 'Infrastructure as Code.'

Before we go on talking about Infrastructure as Code, let's first make sure that we understand the basics: infrastructure configuration and automation. Before an infrastructure (or a server) can be used to run a web application, it first has to be configured so that it has everything needed to run that application. This configuration ranges from the very basic, such as the operating system and database type, to user accounts and software runtimes. And when dealing with virtual machines, configuration can even include the amount of RAM, storage space, and processing power a server has.

All of these configurations are usually done by typing the required commands into a terminal connected to the infrastructure. Of course, you can do it all manually, typing the commands to install the needed software one by one, but what if you have to do that for tens, if not hundreds, of servers? That's where infrastructure automation comes in.
By saving all of the needed commands to a script file, we can easily repeat the process on any other server that needs to be configured, simply by running that script.

All right, now that we have the basics behind us, let's move on. What does Infrastructure as Code really mean?

Infrastructure as Code, also known as Programmable Infrastructure, is a process for managing computing and networking infrastructure using software development methodologies. These methodologies include version control, testing, continuous integration, and other practices. It's an approach that handles servers by treating infrastructure as if it were code, hence the name.

But wait: because infrastructure automation uses script files for configuring servers, isn't that already treating infrastructure as code? Does that mean Infrastructure as Code is just a cool term for infrastructure automation, or are they actually different things?

Well, infrastructure automation is indeed one part of the Infrastructure as Code process, but it's the other part, the software development practices, that differentiates the two. By employing software development methodologies, Infrastructure as Code can ensure that the automation works reliably and consistently on every part of your infrastructure.

For example, by using a version control system on the server configuration script, any change made to the file is tracked, so when a problem arises in the server, we can find out exactly which change caused it. Another software development practice that can be applied to infrastructure automation is automated testing. This practice makes it safer for developers to change the script, because any error introduced into the project can be detected quickly. All of these practices help ensure that the configuration scripts are correct and reliable, which in turn ensures a robust and consistent infrastructure.
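To make the idea concrete, here is a minimal, illustrative sketch of configuration expressed as code. The package names and helper functions are invented for illustration; real tools such as Ansible, Puppet, or Chef provide far richer versions of the same idea. The key property shown is idempotence: running the script against an already-configured server changes nothing, so the same file can be replayed safely across hundreds of servers and tracked in version control.

```python
# Desired server state, declared as data. Because this lives in a file,
# it can be version-controlled, code-reviewed, and tested like any code.
DESIRED_PACKAGES = {"nginx", "postgresql", "python3"}

def missing_packages(installed, desired):
    """Return the packages that still need to be installed."""
    return desired - installed

def converge(installed, desired):
    """Bring a (simulated) server to the desired state and return it."""
    return installed | missing_packages(installed, desired)

# A fresh server and an already-configured server converge to the
# same state, so the script is safe to run repeatedly (idempotent).
fresh = converge(set(), DESIRED_PACKAGES)
again = converge(fresh, DESIRED_PACKAGES)
print(fresh == again == DESIRED_PACKAGES)  # True
```

In a real configuration tool, converging would shell out to a package manager instead of updating a set, but the shape of the process is the same: declare the desired state once, then let the code compute and apply only the difference.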
There's also one more thing to consider: do not confuse Infrastructure as Code (IaC) with Infrastructure as a Service (IaaS). Infrastructure as a Service is a cloud computing service that provides infrastructure to developers and helps them manage it, allowing them to easily monitor and configure the resources in their infrastructure. Examples of these services are Amazon Web Services, Microsoft Azure, and Google Compute Engine.

So, if both Infrastructure as Code and Infrastructure as a Service help developers manage their infrastructure, how exactly do they differ? To put it in simple terms, IaaS is a tool (a hammer) that gives developers a way to quickly configure their infrastructure, while Infrastructure as Code is a method of using such tools (carpentry). Just as you can do carpentry without a hammer, you're not restricted to using IaaS if you want to apply Infrastructure as Code practices to your infrastructure.

That said, one of the big requirements for running Infrastructure as Code practices is a dynamic infrastructure system: a platform where you can programmatically create, destroy, and manage infrastructure resources on demand. While you can implement such a system on your own private infrastructure, most of the IaaS offerings on the market already have this capability, making them the perfect platform for the Infrastructure as Code process.

That's the gist of the Infrastructure as Code approach. There are plenty of tools out there that enable you to apply Infrastructure as Code, including Ansible, Puppet, and Chef. Go check them out if you want to try this methodology for yourself.

About the author

Raka Mahesa is a game developer at Chocoarts, http://chocoarts.com/, who is interested in digital technology in general.
Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets at @legacy99. 


Stitch Fix: Full Stack Data Science and other winning strategies

Aaron Lazar
05 Dec 2017
8 min read
Last week, a company in San Francisco was popping bottles of champagne for its achievements. And trust me, they're not at all small. Barely a couple of weeks have gone by since it was listed on the stock market, and its stock has already soared over 50%. Stitch Fix is an apparel company run by co-founder and CEO Katrina Lake. In a span of just six years, she has built the company to an annual revenue of a whopping $977-odd million. The company has been disrupting traditional retail and aims to bridge the gap in personalized shopping that the former can't accomplish.

Stitch Fix is more of a personalized stylist than a traditional apparel company. It works in three basic steps:

1. Filling a Style Profile: Clients are prompted to fill out a style profile, where they share their style, price and size preferences.
2. Setting a Delivery Date: The clients set a delivery date as per their availability. Stitch Fix mixes and matches various clothes from its warehouses and comes up with the top five pieces it feels would best suit the client, based on the initial style profile as well as years of experience in styling.
3. Keep or Send Back: The clothes reach the customer on the selected date; the customer can try them on, keep whatever they like and send back what they don't.

The aim of Stitch Fix is to bring a personal touch to clothes shopping. According to Lake, "There are millions and millions of products out there. You can look at eBay and Amazon. You can look at every product on the planet, but trying to figure out which one is best for you is really the challenge," and that's the tear Stitch Fix aims to sew up. In an interview with eMarketer, Julie Bornstein, COO of Stitch Fix, said, "Over a third of our customers now spend more than half of their apparel wallet share with Stitch Fix. They are replacing their former shopping habits with our service."

So what makes Stitch Fix stand out among its competitors? How do they do it?
You see, Stitch Fix is not just any apparel company. It has created the perfect formula by blending human expertise with just the right amount of data science to serve its customers. The kind of data science that Stitch Fix does goes by a relatively new and exciting term that's on the rise: Full Stack Data Science.

Hello Full Stack Data Science!

For those of you who've heard of this before, cheers! I hope you've had the opportunity to experience its benefits. For those who haven't, Full Stack Data Science basically means that a single data scientist does all of the work: they mine the data, clean it, write an algorithm to model it and visualize the results, while also stepping into the shoes of an engineer (implementing the model) and a project manager (tracking the entire process and ensuring it's on track).

Now, while this might sound like a lot for one person to do, it's quite possible and practical. It's practical because when these roles are performed by different individuals, they introduce a lot of latency into the project. Moreover, synchronizing the priorities of each individual is close to impossible, which creates differences within the team.

The data science team at Stitch Fix is broadly organized by the area each member works on. Because most of the team focuses on the full stack, there are over 80 data scientists on board. That's a lot of smart people in one company! Although unique, this kind of team structure has been working well for them, mainly because it gives each person the freedom to work independently.

Tech Treasure Trove

When you open up Stitch Fix's tech toolbox, you won't find Aladdin's lamp glowing before you. Their magic lies in having a simple tech stack that works wonders when implemented the right way. They work with Ruby on Rails and Bootstrap for their web applications, which are hosted on Heroku.
Their data platform relies on a robust Postgres implementation. Among programming languages, we found Python, Go, Java and JavaScript also being used, and for an ML framework, we're pretty sure they're playing with TensorFlow.

But just working with these tools isn't enough to get to the level they're at. There's something more under the hood, and believe it or not, it's not some gigantic artificially intelligent system running on a zillion cores. Rather, it's all about the smaller, simpler things. For example, if you have three different kinds of data and you need to find a relationship between them, instead of bringing in the big guns (read: deep learning frameworks), a simple tensor decomposition using word vectors can do the deed quite well.

Advantages galore: Food for the algorithms

One of the main advantages Stitch Fix has is almost five years' worth of client data, obtained in several ways: through the client profile, after-delivery feedback, Pinterest photos, and so on. All this data is fed to algorithms that learn more about the likes and dislikes of clients. Some interesting algorithms feeding on this sumptuous data include collaborative filtering recommenders to group clients based on their likes, mixed-effects models to learn about a client's interests over time, neural networks to derive vector descriptions of the Pinterest images and compare them with in-house designs, NLP to process customer feedback, and Markov chain models to predict demand, among several others.

A Human Touch: When science meets art

While the machines do all the calculations and come up with recommendations on which designs customers would appreciate, they still lack the human touch. Stitch Fix employs over 3,000 stylists. Each client is assigned a stylist who can see the client's entire preference profile at a glance through a custom-built interface.
The stylist finalizes the selections from the inventory list, adding in a personal note that describes how the client can accessorize the purchased items for a particular occasion and how they can pair them with other pieces of clothing in their closet. This truly advocates that "humans are much better with the machines, and the machines are much better with the humans." Cool, ain't it?

Data Platform

Apart from the Heroku platform, Stitch Fix seems to have internal SaaS platforms where the data scientists carry out analysis, write algorithms and put them into production. The platforms provide data distribution, parallelization, auto-scaling, failover, and so on. This lets the data scientists focus on the science while still enjoying the benefits of a scalable system.

The good, the bad and the ugly: Microservices, monoliths and scalability

Scalability is one of the most important aspects a new company needs to take into account before taking the plunge. A microservice architecture helps with this by allowing small, independent services (mini applications) to run on their own. Stitch Fix uses this architecture to improve scalability, although their database is a monolith; they are now breaking the monolithic database into microservices. This is a takeaway for all entrepreneurs just starting out with their apps.

Data-Driven Applications

Data-driven applications ensure that the right solutions are built for customers. If you're a customer-centric organization, there's something you can learn from Stitch Fix: data-driven apps seamlessly combine the operational and analytic capabilities of the organization, breaking down the traditional silos.

TDD + CD = DevOps Simplified

Test-driven development and continuous delivery go hand in hand, and it's always better to imbibe this culture right from the very start. In the end, it's really great to see such creative and technologically driven start-ups succeed and sail to the top.
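As a flavor of the first technique mentioned above, here is a toy sketch of user-based collaborative filtering in Python. The clients, items, and feedback values are invented for illustration and say nothing about Stitch Fix's actual models; the point is simply how similarity-weighted feedback from other clients can rank items a client hasn't seen yet.

```python
import math

# Toy user-item feedback matrix: 1 = kept the item, 0 = returned it.
# (Clients and items are hypothetical, purely for illustration.)
feedback = {
    "alice": {"blazer": 1, "jeans": 1, "scarf": 0},
    "bob":   {"blazer": 1, "jeans": 1, "boots": 1},
    "carol": {"scarf": 1, "boots": 0},
}

def cosine_similarity(a, b):
    """Cosine similarity between two clients' feedback vectors."""
    shared = set(a) & set(b)
    dot = sum(a[i] * b[i] for i in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(target, feedback):
    """Rank unseen items by similarity-weighted feedback of other clients."""
    scores = {}
    for other, items in feedback.items():
        if other == target:
            continue
        sim = cosine_similarity(feedback[target], items)
        for item, liked in items.items():
            if item not in feedback[target]:
                scores[item] = scores.get(item, 0.0) + sim * liked
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice", feedback))  # -> ['boots']
```

A production recommender would operate on millions of sparse vectors and lean on matrix factorization rather than pairwise loops, but the scoring logic is the same in spirit: clients who kept similar items in the past are used to predict what a client will keep next.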
If you're on the journey to building that dream startup of yours and you need resources for your team, here are a few books you'll want to pick up to get started:

- Hands-On Data Science and Python Machine Learning by Frank Kane
- Data Science Algorithms in a Week by Dávid Natingga
- Continuous Delivery and DevOps: A Quickstart Guide - Second Edition by Paul Swartout
- Practical DevOps by Joakim Verona


10 Predictions for Tech in 2025

Richard Gall
02 Nov 2015
4 min read
Back to the Future Day last month got us thinking: what will the world look like in 2025? And what will technology look like? We've pulled together our thoughts into one listicle packed with predictions. Please don't hold us to them...

1. Everything will be streamed. All TV will be streamed over the internet. Every new TV will be smart, which means applications will become part of the furniture in our homes. Not only will you be able to watch just about anything you can imagine, you'll also be able to play any game you want.

2. The end of hardware. With streaming dominant, hardware will become less and less significant. You'll simply need a couple of devices and you'll be able to do just about anything you want. With graphene flooding the market, these devices will also be more efficient than anything we're used to today; graphene batteries could make consumer tech last for weeks on a single charge.

3. Everything is hardware. Hardware as we know it might be dead, but the Internet of Things will take over every single aspect of everyday life, essentially transforming everyday objects into hardware. From fridges to pavements, even the most quotidian artefacts will be connected to a large network.

4. Everything will be in the cloud. Our stream-only future means we're going to be living in a world where the cloud reigns supreme. You can begin to see how everything fits together: from the Internet of Things to the decline in personal hardware, everything will become dependent on powerful and highly available distributed systems.

5. Microservices will be the dominant form of cloud architecture. There are a number of ways we could build distributed systems and harness cloud technology, but if 2015 is anything to go by, microservices are likely to become the dominant way we deploy and manage applications in the cloud. This movement towards modular and independent units, or individual 'services', will not simply be the agile option, but the obvious go-to choice for anyone managing applications in the cloud. Even in 2015, you would have to have a good reason to go back to the old, monolithic way of doing things...

6. Apple and Google rule the digital world. Sure, this might not be much of a prediction given the present state of affairs, but it's difficult to see how anyone can challenge the two global tech giants. Their dominance is likely to increase, not decline. This means every aspect of our interaction with software, as consumers or developers, will be dictated by their commercial interests in 2025. Of course, one of the more interesting subplots over the next ten years will be whether we see a resistance to this standardization. Perhaps we might even see a resurgence of a more radical and vocal open source culture.

7. Less specialization, democratization of development. Even if our experience of software is defined by huge organizations like Google and Apple, it's also likely that development will become much simpler. Web components have already done this (just take a look at React and Ember), which means JavaScript web development might well morph into something accessible to all. True, this might mean more mediocrity across the web, but it's not like we're all going to be building the same GeoCities sites we were making in 2002...

8. Client and server collapse into each other. This follows on from the last prediction. The reason we're going to see less specialization in development is that the very process of development will no longer be siloed. We'll all be building complete apps that simply connect to APIs somewhere in the cloud. Isomorphic codebases will become standard; whether this means we will still be using Node.js is another matter...

9. We'll be living in a 'post-Big Data' world. The Big Data revolution is over; it has permanently taken root in every aspect of our lives. By 2025 data will have become so 'big', largely due to the Internet of Things, that we'll have to start thinking of better ways to deal with it and, of course, understand it. If we don't, we're going to be submerged in oceans of dirty data.

10. iOS will become sentient. iOS 30, the 2025 iteration of iOS, will become self-aware and start making decisions for humanity. I welcome this wholeheartedly, never having to decide what to eat for dinner ever again.

Special thanks to Ed Gordon, Greg Roberts, Amey Varangaonkar and Dave Barnes for their ideas and suggestions. Let us know your predictions for the future of tech: tweet us @PacktPub or add your comments below. What's the state of tech today? And what can we expect over the next 12 months? Check out our Skills and Salary Reports to find out.


The decentralized web - Trick or Treat?

Bhagyashree R
31 Oct 2018
3 min read
The decentralized web refers to a web that is not dominated by powerful monopolies. It's actually a lot like the web we have now, but with one key difference: its underlying architecture is decentralized, so it becomes much more difficult for any one entity to take down a single web page, website, or service. It takes control away from powerful tech monopolies.

Why are people excited about the decentralized web?

In effect, the decentralized web is a lot like the earliest version of the web. It aims to roll back the changes that came with Web 2.0, when we began to communicate with each other and share information through centralized services provided by big companies such as Google, Facebook, Microsoft, and Amazon. The decentralized web aims to make us less dependent on these tech giants. Instead, users will have control over their data, enabling them to directly interact and exchange messages with others in their network.

Blockchain offers one solution for achieving a decentralized web. By creating a decentralized public digital ledger of transactions, you can take the power away from established monopolies and give it back to those who are simply part of the decentralized network.

We saw some advancement in this direction with the launch of Tim Berners-Lee's startup, Inrupt, whose goal is to get rid of the tech giants' monopolies on user data. Tim Berners-Lee hopes to achieve this with the help of his open source project, Solid. Solid gives every user a choice of where they want to store their data, which specific people and groups can access which elements of it, and which apps they use. Further examples are Cloudflare introducing the IPFS Gateway, which allows you to easily access content from the InterPlanetary File System (IPFS), and, more recently, the Origin DApp, a true peer-to-peer marketplace on the Ethereum blockchain built with origin-js.

A note of caution

Despite these advances, the decentralized web is still in its infancy. There are still no "killer apps" that promise the level of features we are used to now, and many of the apps that do exist are clunky and difficult to use. One of the promises the decentralized web makes is being faster, but there is a long way to go on that front. There are much bigger issues related to governance, too, such as how the decentralized web will come together when no one is in charge, and what guarantee there is that it will not become centralized again.

Is the decentralized web a treat... or a trick?

Going by its current status, the decentralized web seems to be a trick. No one likes change, and it takes a long time to get used to it. The decentralized web has to offer much more before it can replace the functionality we currently enjoy.

- Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
- Origin DApp: A decentralized marketplace on Ethereum mainnet aims to disrupt gig economy platforms like Airbnb and Uber
- Tim Berners-Lee plans to decentralize the web with 'Solid', an open-source project for "personal empowerment through data"


6 Key Areas to focus on while transitioning to a Data Scientist role

Amey Varangaonkar
21 Dec 2017
6 min read
[box type="note" align="" class="" width=""]The following article is an excerpt taken from the book Statistics for Data Science, authored by James D. Miller. The book dives into the different statistical approaches to discover hidden insights and patterns from different kinds of data.[/box] Being a data scientist is undoubtedly a very lucrative career prospect. In fact, it is one of the highest paying jobs in the world right now. That said, transitioning from a data developer role to a data scientist role needs careful planning and a clear understanding of what the role is all about. In this interesting article, the author highlights six key skills must give special attention to, during this transition. Let's start by taking a moment to state what I consider to be a few generally accepted facts about transitioning to a data scientist. We'll reaffirm these beliefs as we continue through this book: Academia: Data scientists are not all from one academic background. They are not all computer science or statistics/mathematics majors. They do not all possess an advanced degree (in fact, you can use statistics and data science with a bachelor's degree or even less). It's not magic-based: Data scientists can use machine learning and other accepted statistical methods to identify insights from data, not magic. They are not all tech or computer geeks: You don't need years of programming experience or expensive statistical software to be effective. You don't need to be experienced to get started. You can start today, right now. (Well, you already did when you bought this book!) Okay, having made the previous declarations, let's also be realistic. As always, there is an entry-point for everything in life, and, to give credit where it is due, the more credentials you can acquire to begin out with, the better off you will most likely be. 
Nonetheless (as we'll see later in this chapter), there is absolutely no valid reason why you cannot begin understanding, using, and being productive with data science and statistics immediately. [box type="info" align="" class="" width=""]As with any profession, certifications and degrees carry the weight that may open the doors, while experience, as always, might be considered the best teacher. There are, however, no fake data scientists, only those with currently more desire than practical experience.[/box]

If you are seriously interested in not only understanding statistics and data science but eventually working as a full-time data scientist, you should consider the following common themes (you're likely to find them in job postings for data scientists) as areas to focus on:

Education

Common fields of study here are Mathematics and Statistics, followed by Computer Science and Engineering (also Economics and Operations Research). Once more, there is no strict requirement to have an advanced or even related degree. In addition, typically, the idea of a degree or equivalent experience will also apply here.

Technology

You will hear SAS and R (actually, you will hear quite a lot about R) as well as Python, Hadoop, and SQL mentioned as key or preferable for a data scientist to be comfortable with, but tools and technologies change all the time, so, as mentioned several times throughout this chapter, data developers can begin to be productive as soon as they understand the objectives of data science and various statistical methodologies, without having to learn a new tool or language. [box type="info" align="" class="" width=""]Basic business skills such as Omniture, Google Analytics, SPSS, Excel, or any other Microsoft Office tool are assumed pretty much everywhere and don't really count as an advantage, but experience with programming languages (such as Java, PERL, or C++) or databases (such as MySQL, NoSQL, Oracle, and so on)
does help![/box]

Data

The ability to understand data and deal with the challenges specific to the various types of data, such as unstructured, machine-generated, and big data (including organizing and structuring large datasets). [box type="info" align="" class="" width=""]Unstructured data is a key area of interest in statistics and for a data scientist. It is usually described as data that has no predefined model or is not organized in a predefined manner. Unstructured information is characteristically text-heavy but may also contain dates, numbers, and various other facts.[/box]

Intellectual Curiosity

I love this one. This is perhaps best defined as a character trait that comes in handy (if not required) if you want to be a data scientist. It means that you have a continuing need to know more than the basics, or want to go beyond the common knowledge about a topic (you don't need a degree on the wall for this!).

Business Acumen

To be a data developer or a data scientist, you need a deep understanding of the industry you're working in, and you also need to know what business problems your organization needs to unravel. In terms of data science, being able to discern which problems are the most important to solve is critical, in addition to identifying new ways the business should be leveraging its data.

Communication Skills

All companies look for individuals who can clearly and fluently translate their findings to a non-technical team, such as the marketing or sales departments. As a data scientist, one must be able to enable the business to make decisions by arming them with quantified insights, in addition to understanding the needs of their non-technical colleagues in order to add value and be successful. This article paints a much clearer picture of why soft skills play an important role in becoming a better data scientist.

So why should you, a data developer, endeavor to think like (or more like) a data scientist?
Specifically, what might be the advantages of thinking like a data scientist? The following are just a few notions supporting the effort:

- Developing a better approach to understanding data
- Using statistical thinking during the process of program or database design
- Adding to your personal toolbox
- Increased marketability

If you found this article useful, make sure you check out the book Statistics for Data Science, which includes a comprehensive list of tips and tricks to becoming a successful data scientist by mastering the basic and not-so-basic concepts of statistics.
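That first notion above -- developing a better approach to understanding data -- can start very small. As a purely illustrative sketch (written in JavaScript only because that is the language used elsewhere in this collection; R or Python are the more usual choices named in the article, and the sample data is made up), here is the kind of five-minute summary statistic anyone can compute today, with no special tools:

```javascript
// Illustrative only: basic summary statistics, the "hello world" of
// statistical thinking. The sample data below is arbitrary.
function summarize(values) {
  const n = values.length;
  const mean = values.reduce((sum, v) => sum + v, 0) / n;
  // Sample variance (divide by n - 1, Bessel's correction).
  const variance =
    values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / (n - 1);
  return { n, mean, sd: Math.sqrt(variance) };
}

const stats = summarize([2, 4, 4, 4, 5, 5, 7, 9]);
console.log(stats.mean); // 5
```

Nothing here requires an advanced degree or expensive software, which is exactly the point the author is making.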
5G - Trick or Treat?

Melisha Dsouza
31 Oct 2018
3 min read
5G - or "fifth generation" - mobile internet is coming very soon - possibly early next year. It promises much faster data download speeds - 10 to 20 times faster than we have now. With an improvement in upload speeds, wider coverage and more stable connections, 5G is something to watch out for. Why are people excited about 5G? Mobile is today the main way people use the internet. That change has come at an amazing pace. With this increase in mobile users, demand for services, like music and video streaming, has skyrocketed.. This can cause particular problems when lots of people in the same area access online mobile services at the same time, leading to a congestion of existing spectrum bands, thus resulting in service breakdowns. 5G will use the radio spectrum much more efficiently, enabling more devices to access mobile internet services at the same time. But it’s not just about mobile users. It’s also about the internet of things and smart cities. For example, as cities look to become better connected, with everything from streetlights to video cameras in some way connected to the internet, this network will support this infrastructure in a way that would have previously been impossible. From swarms of drones carrying out search and rescue missions, yo fire assessments and traffic monitoring, 5G really could transform the way we understand and interact with our environment.  It’s not just about movies downloading faster, it’s also about autonomous vehicles communicating with each other seamlessly and reading live map and traffic data to take you to your destination in a more efficient and environmentally friendly way. 5G will also go hand-in-hand with AI, propagating its progress! 5G: trick or treat? All this being said, there will be an increase in cost to employ skilled professionals to manage 5G networks. Users will also need to buy new smartphones that support this network - even some of the most up to date phones will need to be replaced. 
When 4G was introduced in 2009/10, compatible smartphones came onto the market before the infrastructure had been rolled out fully. That's a possibility with 5G, but it does look like it might take a little more time. The technology is still under development and will take some time to be fully operational without any issues. We will leave it up to you to decide whether the technology is a Trick or a Treat!

How 5G Mobile Data will propel Artificial Intelligence (AI) progress
VIAVI releases Observer 17.5, a network performance management and diagnostics tool
Falling in love with chatbots

Ben James
06 Dec 2016
5 min read
I recently watched the excellent film Her by Spike Jonze. Without giving too much away, the lead character slowly becomes enamoured with his new, emotionally-capable OS. It got me thinking, "How far off are chatbots from being like this?" Lately, countless businesses are turning to chatbots as a way of interacting more meaningfully with their users. Everyone from Barclays to UPS is trying bots in the hope of increasing engagement and satisfaction. Why talk to a suit over the phone when you can message a bot whenever you want? They'll always reply to you, and they can read your personal data just like magic, solving your problems in a timely and friendly way. However, one inherent trend I've noticed is that everything is task-based. Platforms like Amazon's Alexa, api.ai, and wit.ai are all based around resolving a user query to an intent and then acting on it. That's great, and works rather well nowadays, but what if you want to tell a story? What if you want to make your user feel grief, happiness, or guilt?

Enter SuperScript

SuperScript is a chatbot framework designed around conversations, written from the bottom up in lovely asynchronous Node.js. Unlike other frameworks, it puts more emphasis on how a typical human conversation might go than on more traditional methods like intent mapping. Let's find out a little more about how it works. To get started, you can run the following in your terminal. Note that we're using the alpha version at the moment, which is nearing completion but is still liable to change:

```
npm install -g superscript@alpha
bot-init myBot    # Creates a bot in myBot directory
cd myBot
npm install
parse
npm run build
npm run start
```

Then, in another tab, run:

```
telnet localhost 2000
```

And you'll find that you can talk to a very simple script that'll say "hello!". Let's see how that script works. Open chat/main.ss and take a look.
Triggers (what you say to the bot) are prefixed with the + symbol, while replies (what it says back to you) are prefixed with the - symbol, like this:

```
+ Hi
- Greetings, human! How are you?
```

Under the hood, Hi gets transformed into ~emohello, which matches a number of different greetings like Hey, Greetings, or just the original Hi. So, whenever you greet the bot, it'll say Greetings, human!.

Conversations

Conversations are the first building block of the true power of SuperScript, and look something like this:

```
+ * good *
% Greetings, human! How are you?
- That's great to hear!
```

The % references a previous reply that the bot must have said in order for this bit of chat to be considered. In this case, SuperScript will only look at this gambit (a trigger plus a reply) if the last bot reply was Greetings, human! How are you?. Let's imagine it was. Note how we have a star "*" in the trigger this time. This matches anything. Here, if the user says something like Good thanks or I'm feeling good, then we'll match the trigger and send the response That's great to hear!. But if the user doesn't say anything with good in it, we're not going to match anything at present. So let's write a new trigger to catch anything else. We can actually go a step further and use a plugin function to analyse the user's input:

```
+ (*)
% Greetings, human! How are you?
- ^sentiment(<cap1>)
```

Here, we have a conversation just as we did earlier. But now we have a plugin function, sentiment, which will analyze the captured input <cap1> (what the user said to the bot) and respond with an appropriate message. Let's write a plugin function using the npm library sentiment. Create a new file, say, sentiment.js, and stick it into the plugins folder in your bot directory.
Inside this file, write something like this:

```
import sentimentLib from 'sentiment';

const sentiment = function sentiment(message, callback) {
  const score = sentimentLib(message).score;
  if (score > 2) {
    return callback(null, "Good for you.");
  } else if (score < -2) {
    return callback(null, "I'm glad you share my angst.");
  }
  return callback(null, "I'm pretty so-so too.");
};

export default { sentiment };
```

Now we can access our plugin from our bot, so give it a whirl. Pretty neat, huh?

Topics

The other part of SuperScript that we get a load of power and flexibility from is its support for topics. In normal conversations, humans tend to spend a lot of time on a specific subject before moving on to another. For example, this hipster programming bot will talk about programming until you move on to another topic:

```
> topic programming

+ My favourite language is Brainfuck
- Mine too! I love its readability!

+ My favourite language is (*)
- Hah! <cap1> SUCKS! Brainfuck is where it's at!

+ spaces or tabs
- Why not mix both?

+ * 1337 *
- You speak 13375P34K too?!

< topic
```

Once you've hit any of these triggers, you're in the programming topic and won't get out unless you say something that doesn't match any of the triggers within the topic. Topics and conversations are the foundations of building conversational interfaces, and you can build a whole lot of interesting experiences around them. We're really at the beginning of these types of chatbots, but as interest grows in interactive stories, they'll only get better and better.

About the author

Ben James is currently the Technical Director at To Play For, creating games, interactive stories, and narratives using artificial intelligence. Follow us on Twitter at @ToPlayFor.
WebGL in Games

Alvin Ourrad
05 Mar 2015
5 min read
In this post I am not going to show you any game engine, framework, or library. This is a more general write-up that aims to give you an overview of the technology that powers some of these frameworks: WebGL.

Introduction

Back in 2011, 3D in the browser was not really a thing outside of the realm of Flash, and websites didn't make much use of the canvas element like they do today. That year, the Khronos Group started an initiative called WebGL. This project was about creating an implementation of OpenGL ES 2.0 as a royalty-free, standard, cross-browser API. Even though the canvas element can only draw 2D primitives, it actually is possible to render 3D graphics at a decent speed with it. By making clever use of perspective and a lot of optimizations, Mr.doob with THREE.js managed to create a 3D canvas renderer, which quite frankly offers stunning results, as you can see here and there. But even though canvas can do the job, its speed and level of hardware acceleration are nothing compared to what WebGL benefits from, especially when you take into account the browsers on lower-end devices such as our mobile phones.

Fast-forward in time: when Apple officially announced support for WebGL in mobile Safari in iOS 8, the main goal was reached, since most recent browsers were then able to use this 3D technology natively.

Can I have 3D?

It's very likely that you can now. There are still some graphics cards that were not made to support WebGL, but global support is very good these days. If you are interested in learning how to make 3D graphics in the browser, I recommend you do some research about a library called THREE.js. This library has been around for a while and is usually what most people choose to get started with, as it is just a 3D library and nothing more.
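To make that "clever use of perspective" concrete, here is a minimal sketch of the projection at the heart of any software 3D renderer. This is not actual THREE.js code -- the function name and the simplified pinhole-camera model are my own illustration:

```javascript
// Sketch of a perspective projection: a camera at the origin looking down
// the negative z axis maps a 3D point to 2D canvas coordinates.
// Points further from the camera are scaled down, producing depth.
function project(point, focalLength, width, height) {
  // Distance from the camera plane; points at or behind it are not drawable.
  const depth = focalLength - point.z;
  if (depth <= 0) return null;
  const scale = focalLength / depth;
  return {
    x: width / 2 + point.x * scale,
    y: height / 2 - point.y * scale, // canvas y grows downward
  };
}

// A point straight ahead lands in the centre of an 800x600 canvas:
console.log(project({ x: 0, y: 0, z: 0 }, 300, 800, 600));        // { x: 400, y: 300 }
// The same offset twice as deep moves half as far from the centre:
console.log(project({ x: 100, y: 100, z: -300 }, 300, 800, 600)); // { x: 450, y: 250 }
```

Libraries like THREE.js handle all of this (plus matrices, clipping, and shading) for you; the point is only that a 2D canvas can fake 3D in software, while WebGL gets the GPU to do it natively.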
If you want to interact with the mouse or create a bowling game, you will have to use some additional plugins and/or libraries.

3D in the gaming landscape

As support and awareness of WebGL started rising, some entrepreneurs and companies saw it as a way to create a business, or wanted to take part in this 3D adventure. As a result, several products are available to you if you want to delve into 3D gaming.

Playcanvas

This company likes saying that they re-created "Unity in the browser," which is not far from the truth, really. Their in-browser editor is very complete and mimics the entity-component system that exists in Unity. However, I think the best thing they have created among their products is their real-time collaboration feature. It allows you to work on a project with a team and instantly updates the editor and the visuals for everyone currently viewing it. The whole engine was also open-sourced a few months ago, which has given us beautiful demos like this one: http://codepen.io/playcanvas/pen/ctxoD

Feel free to check out their website and give their editor a try: https://playcanvas.com

Goo technology

Goo technology is an environment that encompasses a 3D engine (the Goo engine), an editor, and a development environment. Goo Create is also a very nicely designed 3D editor in the browser. What I really like about Goo is their cartoony mascot, "Goon," that you can see in a lot of their demos and branding, which adds a lot of fun and humanity to them. Have fun watching this little dude in his adventures and learn more about the company here: http://www.goocreate.com

Babylon.js

I wasn't sure if this one was worth including. Babylon is a competitor to THREE.js, created by Microsoft, that doesn't want to be "just a rendering engine," but wants to add some useful components available out of the box, such as camera controls, a physics engine, and some audio capabilities.
Babylon is relatively new and definitely not as battle-tested as THREE.js, but they have created a set of tools that help you get started with it that I like, namely the playground and the shader editor.

2D?

Yes, there is a major point that I haven't mentioned yet: WebGL has been used in more 2D games than you might imagine. There is no reason why 2D games shouldn't enjoy this level of hardware acceleration. The first games that used WebGL for their 2D needs were the ports by Rovio and ZeptoLab of their respective multi-million-dollar hits, Angry Birds and Cut the Rope, to JavaScript. When pixi.js came out, a lot of people started using it for their games. The major HTML5 game framework, Phaser, is also using it.

Play!

This is the end of this post. I hope you enjoyed it and that you want to get started with these technologies. There is no time to waste -- it's all in your hands.

About the author

Alvin Ourrad is a web developer fond of the web and the power of open standards. A lover of open source, he likes experimenting with interactivity in the browser. He currently works as an HTML5 game developer.
What can the tech industry learn from the Maker community?

Raka Mahesa
11 Jun 2017
5 min read
Just a week prior to the writing of this post, Maker Faire Bay Area was open for three days in San Mateo, exhibiting hundreds of makers and attracting hundreds of thousands of attendees. Maker Faire is the grand gathering for the Maker movement. It's a place where the Maker community can showcase their latest projects and connect with fellow makers easily.

The Maker community has always had a close connection with the technology industry. They use the latest technologies in their projects, they form their community within Internet forums, and they share their projects and tutorials on video-sharing websites. It's a community born from how accessible technology is nowadays, so what can the tech industry learn from this positive community?

Let's begin by examining the community itself.

What is the Maker movement?

Defining the Maker movement in a simple way is not easy. It's not exactly a movement, because there's no singular entity that tries to rally people into it and decide what to do next. It's also not merely a community of tinkerers and makers that work together. The best way to sum up the entirety of the Maker movement is to say that it's a culture.

The Maker culture is a culture that revels in the creation of things. It's a culture where people are empowered to move from being consumers to being creators. It's a culture that involves people making the tools they need on their own, and sharing the knowledge of their creations with other people. And while the culture seems to be focused on technological projects like electronics, robotics, and 3D printing, the Maker community also involves non-technological projects like cooking, jewelry, gardening, and food.

While a lot of these DIY projects are simple and seem to be made for entertainment purposes, a few of them have the potential to actually change the world.
For example, e-NABLE is an international community that has been using 3D printers to provide free prosthetic hands and arms for those who need them. This amazing community started its life when a carpenter in South Africa, who lost his fingers in an accident, collaborated with an artist-engineer in the US to create a replacement hand. Little did they know that their work would start such a large movement.

What lesson can the tech industry draw from the Maker culture?

One of the biggest takeaways of the Maker movement is how much of it relies on collaboration and sharing. With no organization or company to back them, the community has to turn to itself to share knowledge and encourage other people to become makers. And only by collaborating with each other can an ambitious DIY project come to fruition. For example, robotics is a big, complex topic. It's very hard for one person to understand all the aspects needed to build a functioning robot from scratch. But by pooling knowledge from multiple people, each with their own specialization, such a project becomes possible.

Fortunately, collaboration is something that the tech industry has been doing for a while. The Android smartphone is a collaborative effort between a software company and hardware companies. Even smartphones themselves are usually made from components from different companies. And on the software developer side, the spirit of helping each other is alive and well, as can be seen in the popularity of websites like StackOverflow and GitHub.

Another lesson that can be learned from the Maker community is the importance of accessibility in encouraging other people to join. The technology industry has always been worried that there are not enough engineers for every technology company in the world. Making engineering tools and lessons more accessible to the public seems like a good way to encourage more people to become engineers.
After all, cheap 3D printers and computers, as well as easy-to-find tutorials, are the reasons why the Maker community could grow this fast.

One other thing the tech industry can learn from the Maker community is how many big, successful projects start as attempts to solve a smaller, personal problem. One example of such a project is Quadlock, a company that started its venture simply because the founders wanted a bottle opener integrated into their iPhone case. After realizing that other people wanted a similar iPhone case, they started to work on more iPhone cases, and now they run a company producing these unique cases.

The Maker movement is an amazing culture, and it's still growing, day by day. While all the points written above are great lessons that we can apply in our own lives, I'm sure there is still a lot more we can learn from this wonderful community.

About the Author

Raka Mahesa is a game developer at Chocoarts: http://chocoarts.com/, who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.
Shift to Swift in 2017

Shawn Major
27 Jan 2017
3 min read
It's a great time to be a Swift developer because this modern programming language has a lot of momentum and community support behind it, and a big future ahead of it. Swift became a real contender when it went open source in December 2015, giving developers the power to build their own tools and port it into the environments in which they work. The release of Swift 3 in September 2016 really shook things up by enabling broad-scale adoption across multiple platforms – including portability to Linux/x86, Raspberry Pi, and Android. Swift 3 is the "spring cleaning" release that, while not backwards compatible, has resulted in a massively cleaner language and ensured sound and consistent language fundamentals that will carry across to future releases. If you're a developer using Swift, the best thing you can do is get on board with Swift 3, as the next release promises to deliver stability from 3.0 onwards. Swift 4 is expected to be released in late 2017 with the goals of providing source stability for Swift 3 code and ABI stability for the Swift standard library.

Despite the shake-up that came with the new release, developers are still enthusiastic about Swift – it was one of the "most loved" programming languages in Stack Overflow's 2015 and 2016 Developer Surveys. Swift was also one of the top three trending techs in 2016, as it has been stealing market share from Objective-C. The keen interest developers have in Swift is reflected by the 35,000+ stars it has amassed on GitHub and the impressive amount of ongoing collaboration between its core team and the wider community. Rumour has it that Google is considering making Swift a "first-class" language, and that Facebook and Uber are looking to make Swift more central to their operations.
Lyft's migration of its iOS app to Swift in 2015 shows that the lightness, leanness, and maintainability of the code are worth it, and services like the web server and toolkit Perfect are proof that server-side Swift is ready. People are starting to do some cool and surprising things with Swift, including:

- Shaping the language itself. Apple has made a repository on GitHub called swift-evolution that houses proposals for enhancements and changes to the Swift language.
- Bringing Swift 3 to as many ARM-based systems as possible. For example, you can get Swift 3 for all the Raspberry Pi boards, or you can program a robot in Swift on a BeagleBone.
- IBM adopting Swift as the core language for their cloud platform. This opens the door to radically simpler app dev. Developers will be able to build the next generation of apps in native Swift from end to end, deploy applications with both server and client components, and build microservice APIs on the cloud.
- The Swift Sandbox, which lets developers of any level of experience actively build server-based code. Since launching, it has had over 2 million code runs from over 100 countries.

We think there are going to be a lot of exciting opportunities for developers to work with Swift in the near future. The iOS Developer Skill Plan on Mapt is perfect for diving into Swift, and we have plenty of Swift 3 books and videos if you have more specific projects in mind. The large community of developers using iOS/OSX and making libraries, combined with the growing popularity of Swift as a general-purpose language, makes jumping into Swift a worthwhile venture.

Interested in what other developers have been up to across the tech landscape? Find out in our free Skill Up: Developer Talk report on the state of software in 2017.
Security in 2017: What's new and what's not

Erik Kappelman
22 Feb 2017
5 min read
Security has been a problem for web developers since before the Internet existed. By this, I mean network security was a problem before the Internet – the network of networks – was created. Internet and network security has gotten a lot of play recently in the media, mostly due to some high-profile hacks that have taken place. From the personal security perspective, very little has changed. The prevalence of phishing attacks continues to increase as networks become more secure. This is because human beings remain a serious liability when securing a network. However, this type of security discussion is outside the scope of this blog.

Due to the vast breadth of this topic, I am going to focus on one specific area of web security: we will discuss securing websites and apps from the perspective of an open source developer, and I will focus on the tools that can be used to secure Node.js. This is not an exhaustive guide to secure web development. Consider this blog a quick overview of the current security tools available to Node.js developers.

A good starting point is a brief discussion of injection theory. This article provides a more in-depth discussion if you are interested. The fundamental strategy of injection attacks is figuring out a way to modify a command on the server by manipulating unsecured data. A classic example is the SQL injection, in which SQL is injected through a form into the server in order to compromise the server's database. Luckily, injection is a well-known infiltration strategy, and there are many tools that help defend against it.

One method of injection compromises HTTP headers. A quick way to secure your Node.js project from this attack is through the use of the helmet module.
The following code snippet shows how easy it is to start using helmet with the default settings:

```
var express = require('express')
var helmet = require('helmet')

var app = express()
app.use(helmet())
```

Just the standard helmet settings should go a long way toward a more secure web app. By default, helmet will prevent clickjacking, remove the X-Powered-By header, keep clients from sniffing the MIME type, add some small cross-site scripting (XSS) protections, and add other protections. For further defense against XSS, use of the sanitizer module is probably a good idea. The sanitizer module is relatively simple: it helps remove syntax from HTML documents that could allow for easy XSS.

Another form of injection attack is the SQL injection. This attack consists of injecting SQL into the backend as a means of entry or destruction. The sqlmap project offers a tool that can test an app for SQL injection vulnerabilities. There are many tools like sqlmap, and I would recommend weaving a variety of automated vulnerability testing into your development pattern. One easy way to avoid SQL injection is the use of parameterized queries. The PostgreSQL database module supports parameterized queries as a guard against SQL injection.

A fundamental part of any secure website or app is the use of secure transmission via HTTPS. Accomplishing encryption for your Node.js app can be fairly easy, depending on how much money you feel like spending. In my experience, if you are already using a deployment service such as Heroku, it may be worth the extra money to pay the deployment service for HTTPS protection. If you are categorically opposed to spending extra money on web development projects, Let's Encrypt is a free and open way to supply your web app with browser-trusted HTTPS protection. Furthermore, Let's Encrypt automates the process of using an SSL certificate. Let's Encrypt is a growing project and is definitely worth checking out, if you haven't already.
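Before moving on to transport security, it is worth making the parameterized-query idea mentioned above concrete. The sketch below is illustrative only -- `naiveQuery` and `parameterizedQuery` are made-up names, and the `$1` placeholder follows the node-postgres convention -- but it shows why keeping user input out of the SQL text defeats injection:

```javascript
// Vulnerable pattern: user input is concatenated into the SQL text,
// so a crafted value can rewrite the query itself.
function naiveQuery(email) {
  return "SELECT * FROM users WHERE email = '" + email + "'";
}

// Parameterized pattern: the SQL text is fixed and the values travel
// separately, so the driver never parses user input as SQL.
function parameterizedQuery(email) {
  return { text: 'SELECT * FROM users WHERE email = $1', values: [email] };
}

const malicious = "x' OR '1'='1";
console.log(naiveQuery(malicious));         // the injected OR clause is now live SQL
console.log(parameterizedQuery(malicious)); // the SQL text is untouched
```

With a real driver such as node-postgres, you would hand the `{ text, values }` pair to `client.query()` rather than building the string yourself.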
Once you have created or purchased a security certificate, Node's onboard https module can do the rest of the work for you. The following code shows how simply HTTPS can be added to a Node server once a certificate is procured:

```
// curl -k https://localhost:8000/
const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('/agent2-key.pem'),
  cert: fs.readFileSync('/agent2-cert.pem')
};

https.createServer(options, (req, res) => {
  res.writeHead(200);
  res.end('hello security\n');
}).listen(8000);
```

If you are feeling adventurous, the crypto Node module offers a suite of OpenSSL functions that you could use to create your own security protocols. These include hashes, HMAC authentication, ciphers, and others.

Internet security is often overlooked by hobbyists and up-and-coming developers. Instead of taking a back seat, securing a web app should be one of your highest priorities, especially as threats on the Web become greater with each passing day. As for the topic of this blog post, what's new and what's not: most of what I have discussed is not new. This is in part due to the proliferation of social engineering as a means to compromise networks instead of technological methods. Most of the newest methods for protecting networks revolve around educating and monitoring authorized network users, rather than more traditional security activities. What is absolutely new (and exciting) is the introduction of Let's Encrypt. Having access to free security certificates that are easily deployed will benefit individual developers and Internet users as a whole. HTTPS should become ubiquitous as Let's Encrypt and other similar projects continue to grow.

As I said at the beginning of this blog, security is a broad topic. This blog has merely scratched the surface of ways to secure a Node.js app. I do hope, however, that some of the information leads you in the right, safe direction.
About the Author Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company. 
Rick Blaisdell
19 Jul 2017
4 min read

The most important skills you need in DevOps

During the last couple of years, we've seen how DevOps has exploded to become one of the most competitive differentiators for every organization, regardless of size. When talking about DevOps, we refer to agility and collaboration, the keys that unlock a business's success. However, to make it work for your business, you first have to understand how DevOps works and what skills are required for adopting this agile business culture. Let's look at this in more detail.

DevOps culture

Leaving the benefits aside, here are the three basic principles of a successful DevOps approach:

- Well-defined processes
- Enhanced collaboration across business functions
- Efficient tools and automation

DevOps skills you need

Recently, I came across an infographic showing the top positions that technology companies are struggling to fill, and DevOps was number one on the list. Surprising? Not really. If we look at the skills required for a successful DevOps methodology, we will understand why finding a good DevOps engineer is akin to finding a needle in a haystack. Besides communication and collaboration, which are the most obvious skills a DevOps engineer must have, here is what draws the line between success and failure:

- Knowledge of infrastructure – whether we are talking about datacenter-based or cloud infrastructure, a DevOps engineer needs a deep understanding of different types of infrastructure and their components (virtualization, networking, load balancing, etc.).
- Experience with infrastructure automation tools – given that DevOps is mainly about automation, a DevOps engineer must be able to implement automation tools at any level.
- Coding – when talking about coding skills for DevOps engineers, I am not talking about just writing code, but rather delivering solutions. A DevOps organization needs well-experienced engineers who are capable of delivering solutions.
- Experience with configuration management tools – tools such as Puppet, Chef, or Ansible are mandatory for optimizing software deployment, and you need engineers with the know-how.
- Understanding of continuous integration – an essential part of a DevOps culture, continuous integration is the process that increases engagement across the entire team and allows source code updates to be run whenever required.
- Understanding of security incident response – security is the hot button for all organizations, and one of the most pressing challenges to overcome. Having engineers with a strong understanding of how to address various security incidents and develop a recovery plan is mandatory for creating a solid DevOps culture.

Besides the skills DevOps engineers should have, there are also skills that companies need to adopt:

- Agile development – an agile environment is the foundation on which the DevOps approach has been built. To get the most out of this innovative approach, your team needs strong collaboration capabilities to improve delivery and quality. You can create your dream team by teaching different agile approaches such as Scrum, Kaizen, and Kanban.
- Process reengineering – forget everything you knew; this is one good piece of advice. The DevOps approach was developed to polish and improve the traditional software development lifecycle, but also to highlight and encourage collaboration among teams, so an element of unlearning is required.

The DevOps approach has changed the way people collaborate with each other, improving not only processes but products and services as well. Here are the benefits:

- Faster delivery times – every business owner wants to see their product or service on the market as soon as possible, and the DevOps approach manages to do that. Moreover, since you decrease time-to-market, you increase your ROI; what more could you ask for?
- Continuous release and deployment – with strong continuous release and deployment practices, the DevOps approach is the perfect way to ensure the team continuously delivers quality software within shorter timeframes.
- Improved collaboration between teams – there has always been a gap between the development and operations teams, a gap that disappeared once DevOps was born. Today, in order to deliver high-quality software, the devs and ops are forced to collaborate, share, and revise strategies together, acting as a single unit.

Bottom line: DevOps is an essential approach that has changed not only results and processes, but also the way in which people interact with each other. Judging by the way it has progressed, it's safe to assume that it's here to stay.

About the Author

Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies that reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies, developing innovative technology strategies.
Hari Vignesh
08 Oct 2017
5 min read

What is an innovation strategy?

Despite massive investments of management time and money, innovation remains a frustrating pursuit in many companies. Innovation initiatives frequently fail, and successful innovators have a hard time sustaining their performance, as Polaroid, Nokia, Sun Microsystems, Yahoo, Hewlett-Packard, and countless others have found. Why is it so hard to build and maintain the capacity to innovate? The reasons go much deeper than the commonly cited cause: a failure to execute. The problem with innovation improvement efforts is rooted in the lack of an innovation strategy.

Innovation strategy — definition

An innovation strategy can be defined as a plan made by an organization to encourage advancements in technology or services, usually by investing in research and development activities. For example, an innovation strategy developed by a high-technology business might entail the use of new management or production procedures and the invention of technology not previously used by competitors.

Innovation strategy — a short description

An innovation strategy is a plan to grow market share or profits through product and service innovation. When looking at innovation strategy through a jobs-to-be-done lens, we see that an effective strategy must correctly inform which job executor, job, and segment to target to achieve the most growth, and which unmet needs must be targeted to help customers get the job done better. When it comes to creating the solution, an innovation strategy must also indicate whether a product improvement, or a disruptive or breakthrough innovation approach, is best. Unfortunately, most innovation strategies fail in these regards, which is why innovation success rates are anemic.

Myths that mislead

Innovation strategy is not about selecting activities to pursue that differ from those of competitors. This is the myth that misleads: selecting activities is not a strategy.
An innovation strategy is about creating winning products, which means products that are in an attractive market, target a profitable customer segment, address the right unmet needs, and help customers get a job done better than any competing solution. Only after a company produces a winning product or service should it consider what activities are needed to deliver that product or service.

Tactics for innovation strategy

Global competition and a weak economy have made growth more challenging than ever. Yet some organizations, such as Apple, Amazon, and Starbucks, seem to defy the laws of economic gravity. The most successful growth companies adopt at least four best practices.

Find the next S-curve

Nothing grows forever. The best products, markets, and business models go through a predictable cycle of growth and maturity, often depicted as an S-curve. Diminishing returns set in as the most attractive customers are reached, price competition emerges, the current product loses its luster, customer support challenges emerge, new operating skills are required, and so on. Unfortunately, growth company leaders are often blindsided by this predictable speed bump. Once the reality of the S-curve becomes apparent, it may be too late to design the next growth strategy. The time to innovate — the innovation window — is when the first growth curve hits an inflection point. How do you know when you're hitting the inflection point? You never know. So the best companies are forever paranoid and make innovation a continuous process.

Lean on customers

Successful growth companies have a deep understanding of their customers' problems. Many are embracing tools such as the customer empathy map to uncover new opportunities to create value. This customer insight is the foundation for their lean approach to product innovation: rapid prototyping, design partnerships with lead users, and pivoting to improve their product and business model.
Think like a designer

Managers are trained to make choices, but they don't always have good options. Innovation involves creating new options, and this is where designers excel. Apple's exceptional user experiences were largely the creation of Jonathan Ive, a professional designer and Steve Jobs' right-hand man.

Lead the way

Unless the CEO makes innovation a priority, it won't happen. Innovation requires a level of risk-taking and failure that's impossible without executive air cover. The best growth companies create a culture of innovation:

- Howard Schultz decided Starbucks had lost its way, so he flew in every store manager from around the world to help redesign its café experience.
- Google encourages employees to spend a day per week on new ideas.
- P&G tracks the percentage of revenues from new products and services.
- Gray Advertising gives a Heroic Failure Award to the riskiest ideas… that fail!

Final thoughts

Finally, without an innovation strategy, different parts of an organization can easily wind up pursuing conflicting priorities, even if there's a clear business strategy. Sales representatives hear daily about the pressing needs of the biggest customers. Marketing may see opportunities to leverage the brand through complementary products or to expand market share through new distribution channels. Business unit heads are focused on their target markets and their particular P&L pressures. R&D scientists and engineers tend to see opportunities in new technologies. Diverse perspectives are critical to successful innovation, but without a strategy to integrate and align those perspectives around common priorities, the power of diversity is blunted or, worse, becomes self-defeating.

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.
Hari Vignesh
03 Sep 2017
6 min read

How to take a business-centric approach to security

Today's enterprise is effectively borderless: customers and suppliers transact from anywhere in the world, and previously siloed systems are converging on the core network. The shift of services (and data) into the cloud, or many clouds, adds further complexity to the security model. Organizations that continue to invest in traditional information security approaches either fall prey to cyber threats or find themselves unprepared to deal with cyber crimes.

I think it is about time for organizations to move their cybersecurity efforts away from traditional defensive approaches toward a proactive approach aligned with the organization's business objectives. To illustrate and simplify, let's classify traditional information security approaches into three types.

IT infrastructure-centric approach

In this traditional model, organizations tend to augment their infrastructure with products from a particular vendor, which form the building blocks of their infrastructure. As IT infrastructure vendors extend their reach into security, they introduce their security portfolios to solve the problems their products generally introduce. Microsoft, IBM, and Oracle are some examples that have a complete range of products in the IT infrastructure space. In most such cases the decision maker is the CIO or infrastructure manager, with little involvement from the CISO or business representatives.

Security-centric approach

This is another traditional model, whereby security products and services are selected based upon discrete needs and budgets. Generally, only research reports are consulted and highly rated products are considered, with a "rip-and-replace" mentality rather than any type of long-term allegiance. Vendors like FireEye, Fortinet, Palo Alto Networks, Symantec, and Trend Micro fall into this category. Generally, the CISO or security team is involved, with little to no involvement from the CIO or business representatives.
Business-centric approach

This is an emerging approach, wherein decisions affecting the cybersecurity of an organization are made jointly by corporate boards, CIOs, and CISOs. This new approach helps organizations plan an effective security program driven by business requirements, with a holistic scope that includes all business representatives, the CIO, the CISO, third parties, suppliers, and partners; this improves cybersecurity effectiveness and operational efficiency, and helps align enterprise goals and objectives.

The traditional approaches to cybersecurity are no longer working, as the critical link between the business and cybersecurity is missing. These approaches are generally governed by enterprise boundaries, which no longer exist with the advent of cloud computing, mobile, and social networking. Another limitation is that traditional approaches are very audit-centric and compliance-driven, which means the controls are limited by audit domain and driven largely by regulatory requirements.

Business-centric approach to security

Add in new breeds of threat that infiltrate corporate networks, and it is clear that CIOs should be adopting a more business-centric security model. Security should be a business priority, not just an IT responsibility. So, what are the key components of a business-centric security approach?

Culture

Organizations must foster a security-conscious culture whereby every employee is aware of potential risks, such as malware propagated via email or corporate data saved to personal cloud services such as Dropbox. This is particularly relevant for organizations that have a BYOD policy (and even more so for those that don't and are therefore more likely to be at risk of shadow IT). According to a recent Deloitte survey, 70 per cent of organizations rate their employees' lack of security awareness as an 'average' or 'high' vulnerability.
Today's tech-savvy employees access the corporate network from all sorts of devices, so educating them about the potential risks is critical.

Policy and procedures

As we learned from the Target data breach, the best technologies are worthless without incident response processes in place. The key outcome of effective policy and procedures is the ability to adapt to evolving threats; that is, to incorporate changes to the threat landscape in a cost-effective manner.

Controls

Security controls deliver policy enforcement and provide hooks for delivering security information to visibility and response platforms. In today's environment, business occurs across, inside, and outside the office footprint, and infrastructure connectivity is increasing. As a result, controls for the environment need to extend to wherever the business operates. Key emergent security controls include:

- Uniform application security controls (on mobile, corporate, and infrastructure platforms)
- Integrated systems for patch management
- Scalable environment segmentation (such as for PCI compliance)
- Enterprise mobility application management for consumer devices
- Network architectures with edge-to-edge encryption

Monitoring and management

A 24×7 monitoring and response capability is critical. While larger enterprises tend to build their own Security Operations Centers, the high cost of round-the-clock staffing and the need to find and retain skilled security resources are prohibitive for the medium enterprise. Moreover, according to Verizon Enterprise Solutions, companies discover breaches through their own monitoring in only 31 per cent of cases. An outsourced solution is the best option, as it enables organizations to employ sophisticated technologies and processes to detect security incidents in a cost-effective manner.

A shift in focus

It's never been more critical for organizations to have a robust security strategy.
But despite the growing number of high-profile data breaches, too much information security spending is dedicated to the prevention of attacks, and not enough is going into improving (or establishing) policies and procedures, controls, and monitoring capabilities. A new approach to security is needed, where the focus is on securing information from the inside out, rather than protecting information from the outside in. There is still value in implementing endpoint security software as a preventative measure, but those steps now need to be part of a larger strategy that addresses the fact that so much information is outside the corporate network.

The bottom line is that planning cybersecurity with a business-centric approach can lead to concrete gains in productivity, revenue, and customer retention. If your organization is among the majority of firms that don't take this approach, now would be a great time to start.

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.