Tech Guides


Is your Enterprise Measuring the Right DevOps Metrics?

Guest Contributor
17 Sep 2018
6 min read
As of 2018, 17% of companies worldwide have fully adopted DevOps, while 14% are still in the consideration stage. Amazon, Netflix and Target are a few of the companies that have attained success with DevOps. Amazon's move to Amazon Web Services gave it the ability to scale server capacity up or down as needed, allowing its engineers to deploy their own code to the server whenever they wanted to. This resulted in continuous deployment, reducing both the duration and the number of outages experienced by companies using AWS. Netflix used DevOps to improve its cloud infrastructure and to ensure smooth streaming of videos online.

When you say "we have adopted DevOps in our enterprise", what do you really mean? It means you have adopted a software philosophy that integrates software development and operations, reducing the time to market for your end product. The questions that come next are: How do you measure the true success of DevOps in your organization? Have you been working on the right metrics all along?

Let's first talk about how DevOps is usually measured in organizations. It is all about uptime, transactions per second, bugs fixed, commits, and other operational as well as productivity metrics. This is what most organizations tend to look at when you talk about DevOps metrics.

But are these the right DevOps metrics?

For a while, companies have been working with the set of metrics discussed above to determine the success of DevOps. However, these are not the right metrics on their own. A metric is an indicator of the performance of your DevOps practice, and no single indicator will determine success. Your metrics might differ based on the data you collect. You will end up collecting large volumes of data, but not every piece of data available can be converted into a metric. Here's how you can determine the metrics for your DevOps.

Avoid using too many metrics

You should use 10 metrics at the most; we suggest using fewer than 10, in fact. The fewer the metrics used, the better your judgment will be. You should broaden your perspective when choosing the metrics. It is important to choose metrics that account for overall organizational health, and not just the operational and development data.

Metrics that connect with your organization

What is the ultimate aim of your organization? How would you determine that your organization is successful? The answers to these questions will help you determine the metrics. Most organizations determine their success based on customer experience and overall operational efficiency. You will need to choose metrics that help you determine these two values.

Tie the metrics to your goals

As a businessperson, you are more concerned with customer attrition, bad feedback and non-returning customers than with the lines of code that go into creating a successful software product. You will need to tie your DevOps success metrics to these goals. While you are concerned about the failure of your website or its downtime, the true concern is the customer's abandonment of your website.

Causes that affect DevOps

While business metrics will help you measure success to a certain extent, there are certain things that affect the operations and development teams. You will need to examine these causes, and get to the root of how they affect the DevOps teams and what needs to be done to create a balance between the development and operational teams.
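To make this concrete before moving on, here is a small hypothetical sketch of how raw deployment and incident records might be distilled into a few of the metrics discussed in the next section (deployment frequency, lead time, MTTR). The event records are illustrative placeholders, not real data.

# Hypothetical sketch: boiling raw event data down to a handful of DevOps metrics.
# All records below are placeholders.
from datetime import datetime, timedelta

deployments = [
    {"committed": datetime(2018, 9, 1, 9, 0), "deployed": datetime(2018, 9, 1, 15, 0)},
    {"committed": datetime(2018, 9, 3, 10, 0), "deployed": datetime(2018, 9, 4, 11, 0)},
]
incidents = [
    {"opened": datetime(2018, 9, 2, 8, 0), "resolved": datetime(2018, 9, 2, 12, 0)},
]

period_days = 7
deploy_frequency = len(deployments) / period_days            # deployments per day
lead_times = [d["deployed"] - d["committed"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)
mttr = sum((i["resolved"] - i["opened"] for i in incidents), timedelta()) / len(incidents)

print("Deployment frequency: %.2f per day" % deploy_frequency)
print("Average deployment lead time:", avg_lead_time)
print("MTTR:", mttr)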
Next, we will talk about the actual DevOps metrics that you should take into consideration when deriving value for your organization and measuring success.

Velocity

With most enterprise elements being automated, velocity is one of the most important metrics that will determine the success of your DevOps. The idea is to get updates out to users in the quickest way possible, without compromising on security or reliability. You stay competitive, offer new features and boost customer retention. The two variables that help measure this tangible metric are deployment frequency and deployment lead time. The former measures the frequency of releases, and the latter measures the speed at which the team commits code and pushes out the update.

Service quality

Service quality directly impacts the goals set forth by the organization, and is intangible. The idea is to maintain service quality throughout the releases and changes made to the application. The variables that determine this metric are change failure rate, number of support tickets, and MTTR (mean time to recovery). When you release an update that leads to an error or fault in the application, that counts toward the change failure rate. If there are bugs or performance issues in your releases and these are being reported, the number of support tickets or errors comes into play. MTTR measures the number of issues resolved and the time taken to solve them. The idea is to be more responsive to the problems faced by customers.

User experience

This is the final metric that impacts the success of your DevOps. You need to check whether all the features and updates you have insisted upon are in sync with user needs. The variables concerned with measuring this aspect are feature usage and business impact. You will need to check how many people from the target audience are using the new feature update you have released, and determine their personas. You can check the number of sessions, completed transactions, and session duration to quantify the number of people, and check their profiles to get their personas.

Planning your DevOps strategy

It is not easy to roll out DevOps in your organization and expect agility immediately. You need a solid strategy, aligned to your business goals, and effective DevOps metrics to determine the success of your rollout. Planning is of the essence for a thorough rollout of DevOps. It is important to consider all your data when you adopt DevOps. Make sure you store and analyze it, and use the data that suits the DevOps metrics you have chosen for success. It is important that the DevOps metrics are aligned with your business goals and the objectives you have defined.

About the Author

Vishal Virani is the Founder and CEO of Coruscate Solutions, a mobile app development company. He enjoys writing about technology, mobile apps, custom web development and the latest industry trends.

Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)

Robi Sen
15 Jul 2015
8 min read
With the growth of cloud services such as Google Cloud Platform, Microsoft Azure, Amazon Web Services, and many others, developers and organizations have unprecedented access to low-cost, high-performance infrastructure that can scale as needed. Everyone from individuals to major companies has embraced the cloud as the platform of choice to host IT services and applications, especially small companies and start-ups. Yet for many reasons, those who have embraced the cloud have often been slow to recognize the unique security considerations that face cloud users. Unlike when you host your own servers, the cloud operates on a shared-risk model, where the cloud provider focuses on providing physical security, failover, and high-level network perimeter protection. The cloud user is understood to be securing their operating systems, data, applications, and the like. This means that while your cloud provider provides incredible services for your business, you are responsible for much of the security, including implementing access controls, intrusion prevention, intrusion detection, encryption, and the like. Often, because cloud services are made so accessible and easy to set up, users don't bother to secure them, or don't even know they need to. If you're new to the cloud and new to security, this post is for you. While we will focus on using Amazon Web Services, the basic concepts apply to most cloud services regardless of vendor.

Access control

Since you're using virtual resources that are already set up in the AWS cloud, one of the most important things you need to do right away is secure access to your account and images. First, you want to lock down your AWS account. This is the login and password that you are assigned when you set up your AWS account, and anyone who has access to it can purchase new services, change your services, and generally cause complete havoc for you. Indeed, AWS accounts sell for good money on hacker and Darknet sites, usually to buyers who want to set up Bitcoin miners with your hacked or stolen account. Don't give yours out or make it easily accessible. For example, many developers embed logins, passwords, and AWS keys into their code, which is a very bad practice, and then have their accounts compromised by criminals. The first thing you need to do after getting your Amazon login and password is to store it using a tool such as mSecure or LastPass that allows you to save it in an encrypted file or database. It should then never go into a file, document, or public place. It is also strongly advised to use Multi-Factor Authentication (MFA). Amazon supports MFA via physical devices or straightforward smartphone applications. You can read more about Amazon's MFA here and here. Once your AWS account information is secure, you should use AWS's Identity and Access Management (IAM) system to give each user under your master AWS account access with specific privileges, according to best practices. Even if you are the only person who uses your AWS account, you should consider using IAM to create a couple of users that have access based on their role, such as a content developer who only has the ability to move files in and out of specific directories, or a developer who can start and stop instances, and the like. Then always use the role with the fewest privileges needed to get your work done.
While this might seem cumbersome, you will quickly get used to it, you will be much safer, and if your project grows, you will already have the groundwork to ramp up safely.

Creating an IAM group and user

In this section, we will create an administrator group and add ourselves as the user. If you do not currently have an AWS account, you can get a free account from AWS here. Be advised that you will need a valid credit card and a phone number to validate your account, but Amazon will give you the account to play with free for a year (see terms here). For this example, what you need to do is:

1. Create an administrator group that we will give group permissions to for our AWS account's resources.
2. Make a user for ourselves and then add the user to the administrator group.
3. Finally, create a password for the user so we can access the AWS Management Console.

To do this, first sign into the IAM console. Now click on the Groups link and then select Create New Group. Name the new group Administrator and select Next Step. Next, we need to assign a group policy. You can build your own (see more information here), but this should generally be avoided until you really understand AWS security policies and AWS in general. Amazon has a number of pre-developed policy templates that work great until your applications and architecture get more complex. So for now, simply select the Administrator Access policy. You should now see a screen that shows your new policy. You can then click Next and then Create Group. You should now see the new Administrator group policy under Group Name. In reality, you would probably want to create all your different group accounts and then associate your users, but for now we are just going to create the Administrator group, then create a single user and add it to that group.

Creating a new IAM user account

Now that you have created an Administrator group, let's add a user to it. To do this, go to the navigation menu, select Users, and then click Create New Users. You should then see a new screen. You have the option to create access keys for this user. Depending on the user, you may or may not need to do this, but for now go ahead and select that option box and then select Create. IAM will now create the user and give you the option to view the new key or download and save it. Go ahead and download the credentials. It's good practice to then save those credentials into your password manager, such as mSecure or LastPass, and not share them with anyone except the specific user. Once you have downloaded the user's credentials, go ahead and select Close, which will return you to the Users screen. Now click on the user you created. You should now see something like the following (the username has been removed from the figure). Now select Add User to Groups. You should now see the group listing, which only shows one group if you're following along. Select the Administrator group and then select Add to Groups. You should be taken back to the Users content page and should now see that your user is assigned to the Administrator group. Now, staying on the same screen, scroll down until you see the Security Credentials part of the page. Click Manage Password. You will be asked to either select an auto-generated password or click Assign a custom password. Go ahead and create your own password and select Apply.
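If you prefer to script these steps rather than click through the console, here is a hedged sketch of the equivalent setup using boto3, the AWS SDK for Python. The group name, user name, and password are placeholders; the AdministratorAccess policy ARN is the predefined template selected above.

# Hypothetical sketch: the same group/user setup via boto3. Names and the
# password below are placeholders; assumes your credentials are configured.
import boto3

iam = boto3.client("iam")

# 1. Create the group and attach the predefined AdministratorAccess policy
iam.create_group(GroupName="Administrator")
iam.attach_group_policy(
    GroupName="Administrator",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

# 2. Create a user, generate access keys, and add the user to the group
iam.create_user(UserName="example-admin")
keys = iam.create_access_key(UserName="example-admin")  # store in your password manager
iam.add_user_to_group(UserName="example-admin", GroupName="Administrator")

# 3. Give the user a console password
iam.create_login_profile(UserName="example-admin", Password="REPLACE-WITH-A-STRONG-PASSWORD")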
Back in the console, you should be taken to your user content screen, and under the Security Credentials section you should now see that the password field has been updated from No to Yes. You should also strongly consider using an MFA tool, in my case the AWS virtual MFA Android application, to make the account even more secure.

Summary

In this article, we talked about how the first step in securing your cloud services is controlling access to them. We looked at how AWS enables this via IAM, allowing you to create groups and group security policies tied to a group, and then how to add users to a group, enabling you to secure your cloud resources based on best practices. Now that you have done that, you can go ahead and add more groups and/or users to your AWS account as you need to. However, before you do that, make sure you thoroughly read the AWS IAM documentation; links are supplied at the end of the post.

Resources for AWS IAM

IAM User Guide
Information on IAM Permissions and Policies
IAM Best Practices

About the Author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as Under Armour, Sony, Cisco, IBM, and many others to help build out new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.

A Maker's Journey into 3D printing

Travis Ripley
30 Jun 2014
14 min read
If you’ve visited any social media outlets, you’ve probably come across a never-ending list of new words and terms: the Internet of Things, technological dissonance, STEM, open source, tinkerer, maker culture, constructivism, DIY, fabrication, rapid prototyping, techshop, makerspace, 3D printers, Raspberry Pi, wearables, and more. These terms are typically used to describe a Maker, or they have something to do with Maker culture. Follow along to learn about my particular journey into the Maker culture, specifically in the 3D printing space.

The rise of the maker culture

Maker culture is on the rise. This is a culture that thrives at the intersection of technology and innovation at the informal, social, and peer-led level. The interactions of skilled people driven to share their knowledge with others, develop new pathways, and create solutions for current problems have built a new community. I am proud to say that I am a Maker-Tinkerer (or that I have some form of motivated ADHD that drives me to engage in engineering-oriented pursuits). My journey started at ground zero while studying 3D design and development.

A maker's journey

I knew there was more that I could do with my knowledge of rendering the three-dimensional surface of an object. Early on, however, I only thought about extending my knowledge for entertainment purposes, such as video games. I didn't understand the power of having this knowledge and the way it could help create real-world solutions. Then I came across an issue of Make Magazine and it changed my mental state overnight: I had to create tangible things. Now that I had the information to send me in the right direction, I needed an outlet. An industry friend mentioned a local hackerspace, known as Deezmaker, which was holding informational workshops about 3D printing. So I signed up for an introductory class. I had no clue what I was getting myself into as I crossed that first threshold, but by that evening I was versed in topics that I had thought were beyond my mental capabilities. I was hooked. The workshop consisted of part lecture and part hands-on material. I learned that you can't just start using a 3D printer; you need some basic understanding of the manufacturing process, like understanding that layers of material need to be successfully laid down in order to move on to the next stage in the process. Being the curious, impatient, and overly enthusiastic man-child that I am, this was the most difficult part for me, as I couldn't wait to engage in this new world.

3D printing

Almost two years later, I am fully immersed in the world of 3D printing. I currently have a 3D printer at home (which is almost obsolete by today's standards) and I have access to multiple printers at a local techshop/makerspace known as Makerplace here in San Diego, CA. I use this technology regularly, since I have changed direction in my career as a 3D artist towards manufacturing engineering and rapid prototyping. I am currently attending a Machine Technology/Engineering program at San Diego City College (for more info on the best machining program in the country, visit http://www.JCbollinger.com). The benefit for me of using 3D printers is rapidly producing iterations of prototypes for my clientele, since most people feel more reassured in the process if they have tangible, solid objects, and are more likely to trust you as a designer.
I feel that having this access also helps me complete more jobs successfully, given that turnaround times for updates can be as little as a few hours, rather than days or weeks (depending on the size/scale). Currently I have a few recurring clients that want updates often, and by showing them my progress, the iterations are fewer and I can move on to the next project with no hesitation, since we can see design updates rapidly and minimize the flaws and failures. I produce prototypes for all industries: toys, robotics, vehicles, and so on. Think of it as producing solutions, and how you can make something better or simpler. Taking on challenges and solving them has its benefits: with each new design job you have all these tangible objects to look at and examine. As a hobbyist, the technology has made it easy to reproduce new or even obsolete items. For example, I love Transformers, but you know how plastic does two things very well: it breaks and gets lost. I came across a forum where people were distributing the programs for the arm extrusions that break (no one likes gluing), so I printed the parts that had been missing for decades, rebuilt the armature that had for so long been displaced, and then, like magic, I felt like I was six years old again with a perfectly working Transformer.

Here are a few things that I've learned along the way. 3D printing is also known as additive manufacturing: the process of producing three-dimensional objects in which successive layers of material are extruded by computer-controlled equipment that is fed information from 3D models. These models are derived from a data source that processes the information into machine language. The plastic extrusion technology that is now slowly becoming more popular is known as Fused Deposition Modeling (FDM). This process was developed in the early 1990s for job production, mass production, rapid prototyping, product development, and distributed manufacturing. The principle of FDM is that material is laid down in layers. There are many other processes, such as Selective Heat Sintering (SHS), Selective Laser Sintering (SLS), Stereolithography (SLA), and Plaster-based 3D Printing (PP), to name a few. We will keep it simple here and go over the FDM process for now, as most printers at the hobbyist level use it. The FDM process significantly affected roles within the production and manufacturing industries, with individuals wearing multiple hats as engineer, designer, and operator, as growth made the technology more affordable to an array of industrial fields. In contrast, CNC machining, which is a subtractive manufacturing process, has naturally been incorporated to work alongside this development. The influence of this technology in the industrial and manufacturing industries created exposure to new methods of production at exponential rates, for example automation. For the home-use and hobbyist market, the 3D printers produced by the open source/open hardware initiative stem directly or indirectly from the RepRap.org project, a free to low-cost desktop 3D printer that is self-replicating. That being said, you can thank them for starting this revolution. By getting involved in this community you are benefiting everyone by spreading the spark that will continue to create new developments in manufacturing and consumer technology.
The FDM process can be done with a multitude of materials; the two most popular options at this time are PLA (polylactic acid) and ABS (acrylonitrile butadiene styrene). Both PLA and ABS have pros and cons, depending upon your model structure, the future use of the print, and client requests; understanding the fundamental differences between the two can help you choose one over the other, or, if you own a printer with two extruders, decide how they can be combined. In some cases PVA (polyvinyl alcohol) is also used as a support material (in the case of two extruders); unlike PLA or ABS, which if used as support material will require cleanup when finishing a print, PVA is water soluble, so you can soak your print in warm water and the support structures will dissolve away.

PLA is a strong biodegradable plastic that is derived from renewable resources: cornstarch and sugarcane. It is more resistant to UV rays than ABS (so you will not see fading in your prints). Also, it sticks better than any other material to the surface of your hotplate (minimal warping), which is a huge advantage. It prints at around 180°C; it can ooze, and if your nozzle is loaded it will drip, which also means that leaving a print in your car on a hot day may cause damage.

ABS is stronger than PLA, but is non-biodegradable; it is a synthetic polymer produced from propylene and ammonia. It is less rigid than PLA but more flexible, and it is a colorfast material (which means it will hold its color for years). It prints at around 220°C and is amorphous, and therefore has no true melting point, so a heated bed is needed, as warping can and will occur (usually because the bed is not hot enough, at least 80°C, or the Z axis is not calibrated correctly).

Printer options

For the hobbyist maker, there are a few 3D printer options to consider. Depending upon your skill level, needs, budget and commitments, there is a printer out there for you. The least expensive, smallest, and most straightforward printer available on the market is the Printrbot Simple Maker's 3D Printer. Retailing at $349.99, this printer comes in a kit that includes the bare necessities you need to get started. It is capable of printing a 4" cube. You can also purchase it already assembled for a little extra. The kit and PLA filament are available at www.makershed.com. The 3D printer I started on, personally own, and recommend is the Afinia H480 3D printer. Retailing at $1299.99, this printer provides the easiest setup right out of the box. It comes fully assembled, with a heated platform to aid adhesion and reduce the chance of warping, and can print up to a 5" cube. It also comes loaded with its own native 3D software, where you can manipulate your .STL files. It has an automated utility to calibrate the printer's build platform with the printhead, and it automatically generates any needed support material and the "raft", which is the base support for your prints. There is much more to it, but as I said, I recommend this for beginners, and it is also available through www.makershed.com. For the person who wants to print at the hobbyist and semi-professional level, consider the next generation in 3D printing, the MakerBot Replicator. It is quick and efficient.
Retailing at $2899.00, this machine has an extremely high layer resolution and an LCD display, and if you run out of filament (ABS/PLA), there is no need to start over; the machine will alert you via computer or smartphone that a replacement is needed. There are many types of 3D printers available, with options including open source, open hardware, filament types, delta-style mechanics, single/double extruders, and the list goes on. My main suggestion is to try before you buy, either at a local hackerspace or a local Maker Faire. It's a worthwhile investment that pays for itself.

Choosing your tools

Before you begin, it's also important to choose your design tools. There are many great open source tools to choose from; here are some of my favorites. When it comes to design tools, there is a multitude of cost-effective and free tools out there to get you started. First off, the 3D printing process has a required "tool chain" that must be followed in order to complete the process, roughly broken down into three parts:

CAD (Computer-Aided Design): Tools used to design 3D parts for printing. There are very few interchangeable CAD file formats, sometimes referred to as parametric files. The most widely used interchangeable mesh file format is .STL (stereolithography). This format is the most important, as it is used by CAM tools.
CAM (Computer-Aided Manufacturing): Tools handling the intermediate step of translating CAD files into a machine-friendly format.
Firmware for electronics: This is what runs the onboard electronics of the printer, and is the closest to actual programming; the process is known as cross compiling.

Here are my best picks for each category, known as FLOSS (free/libre/open source software). FLOSS CAD tools, for example OpenSCAD, FreeCAD, and HeeksCAD, for the most part create parametric files that usually represent parts or assemblies in terms of CSG (Constructive Solid Geometry), which basically represents a tree of Boolean operations performed on primitive shapes such as cubes, spheres, cylinders, and pyramids. These are modified numerically and with great precision; the geometry is a mathematical representation, no matter how much you zoom in or out. Another category of CAD tool represents parts as a 3D polygon mesh and is for the most part used for special effects in movies or video games (CG). These are also a little more user friendly; examples are Autodesk Maya and Autodesk 3ds Max (these choices are subscription/retail-based). But there are also open source and free versions of this kind of tool, such as Autodesk 123D, Google SketchUp, and Blender; I suggest the latter options, since they are free, user friendly, and much easier to learn, as their options are narrowed down strictly to producing 3D meshes. If you need more precision you should look at OpenSCAD (my favorite), as it was created directly for making physical objects rather than game design or animation. OpenSCAD is easy to learn, with a simple interface; it is powerful and cross-platform, and there are many examples you can use, along with strong community support. Next, you'll need to convert your 3D masterpiece (.stl) into a machine-friendly format known as G-code. This process is also known as "slicing". You're going to need some CAM software to produce the "tool paths", which is the next stop in the tool chain. Most of the slicing software available is open source.
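Before looking at specific slicers, it may help to see what their output actually is. Below is a hypothetical Python sketch that writes a few hand-rolled G-code commands; the coordinates, temperature, and extrusion values are illustrative placeholders, not a printable part, but the commands (M104, G28, G1) are the kind a slicer emits and the firmware interprets.

# Hypothetical sketch: a few hand-written G-code commands, to show the kind of
# plain-text tool paths a slicer produces. All values are placeholders.
gcode = [
    "M104 S200",         # set extruder temperature (PLA range)
    "G28",               # home all axes
    "G1 Z0.2 F300",      # move nozzle to first layer height
    "G1 X50 Y50 F1500",  # travel move to the start of the layer
    "G1 X100 Y50 E5",    # move along X while extruding filament
    "G1 X100 Y100 E10",  # move along Y while extruding
]
with open("example.gcode", "w") as f:
    f.write("\n".join(gcode) + "\n")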
Some examples of open source slicers are Slic3r (the most popular, with an ease of use recommended for beginners), Skeinforge (dated, but still one of the best), Cura, and MatterSlice. There is also great closed source slicing software out there; one in particular is KISSlicer, whose pro version supports multi-extruder printing. The next stop after slicing is software known as:

A G-code interpreter, which breaks down each line of the code into electronic signals.
A G-code sender, which sends the signals to the motors on the printer to tell them how to move.

This software is usually directly linked to an EMC (Electronic Machine Controller), which controls the printer directly. It can also be linked to an integrated hardware interface that has a G-code interpreter built in, which loads the G-code directly from a memory card (SD card/USB). The last stop is the firmware, which controls the electronics on board the printer. For the most part, the CPUs that control these machines are simple microcontrollers, usually Arduino-based, and they are compiled using the Arduino IDE. This process may sound time consuming, but once you go through the tool chain a few times, it becomes second nature, just like driving a car with a manual transmission.

Where to go from here?

When I finished my first hackerspace workshop, I had been assimilated into a culture that I was not only benefiting from personally, but one that I could share my knowledge with and contribute to. I have received far more in my journey as a maker than from any previous endeavor. To anyone who is curious and mechanically inclined (or not), who believes they have a solution to offer, I challenge you. I challenge you to make the leap into this culture: join a hackerspace, attend a Maker Faire, and enrich your life and the lives of others.

About the Author

Travis Ripley is a designer/developer. He enjoys developing products with composites, woods, steel, and aluminum, and has been immersed in the Maker community for over two years. He also teaches game development at the University of California, Los Angeles. He can be found @travezripley.

Automation and Robots - Trick or Treat?

Savia Lobo
31 Oct 2018
3 min read
Advancements in AI are reinventing the way organizations work. Last year, we wrote about RPA, which made front-end manual jobs redundant. This year, we have actual robots in the field. Last month, iRobot, the intelligent robot making company, revealed its latest robot, the Roomba i7+, which maps and stores a plan of your house and also empties its dust bin automatically. Last week, Google announced its plans to launch a 'Cloud Robotics platform' for developers in 2019, which will encourage efficient robotic automation in highly dynamic environments. Earlier this month, Amazon announced that it is opening a chain of 3,000 cashier-less stores across the US by 2021. And most recently, Walmart announced that it is going to launch a cashierless store next year.

The terms 'automation' and 'robotics' sometimes overlap: robots can be used to automate physical tasks, while many types of automation have nothing to do with physical robots. The emergence of AI robots will reduce the need for a huge human workforce, boost the productivity of organizations and reduce their time to market. For example, customer service and other front-end jobs can function 24x7x365 with uninterrupted service. Within industrial automation, robots can automate time-consuming physical processes. Collaborative robots will carry out a task the same way a human would, albeit more efficiently!

The positives aside, there is a danger of AI getting out of control, as machines can go rogue without humans in the loop. That is why members of the European Parliament (MEPs) recently passed a resolution on banning autonomous weapon systems. They emphasized that weapons like these, without proper human control over selecting and attacking targets, are a disaster waiting to happen. At the more mundane end of the social spectrum, the dangers of automation are still very real. Robots are expected to replace a significant amount of human labor. For instance, as per a World Economic Forum survey, within 5 years machines will do half of today's job tasks, and 1 in 2 employees will need reskilling or upskilling. In another study, Andy Haldane, the Bank of England's chief economist, estimates that 15 million jobs in Britain are at stake, with AI-powered robots set to replace humans in the workforce.

As of now, AI is a treat for organizations due to the advantages it provides over human labor alone. Although it will replace jobs, people can upskill their knowledge to continue thriving in the automation-augmented future.

Read next:
Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics
How Rolls Royce is applying AI and robotics for smart engine maintenance
Home Assistant: an open source Python home automation hub to rule all things smart

Transfer Learning

Graham Annett
07 Oct 2016
7 min read
The premise of transfer learning is the idea that a model trained on a particular dataset can be used and applied to a different dataset. While the notion has been around for quite some time, very recently it has become useful, along with domain adaptation, as a way to use pre-trained neural networks for highly specific tasks (such as in Kaggle competitions) and various fields.

Prerequisites

For this post, I will be using Keras 1.0.3 configured with TensorFlow 0.8.0.

Simple overview and example

Before using VGG-16 with pre-trained weights, let's first use a simple example on our own small net to see how transfer learning works. For this example we will train a net on CIFAR-10 (converted to grayscale and resized to MNIST's dimensions) and then fine-tune the last layers on MNIST; the same recipe applies to fine-tuning on more specific datasets, such as images of smiling or not smiling faces.

import numpy as np
from keras.datasets import mnist, cifar10
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from scipy.misc import imresize

def rgb_g(img):
    # luminance-weighted grayscale; expects channels-first (N, 3, H, W) arrays
    grayscaled = 0.2989 * img[:, 0, :, :] + 0.5870 * img[:, 1, :, :] + 0.1140 * img[:, 2, :, :]
    return grayscaled

(X, Y), (_, _) = cifar10.load_data()
nb_classes = len(np.unique(Y))
Y = np_utils.to_categorical(Y, nb_classes)
X = X.astype('float32') / 255.

# converts 3 channels to 1 and resizes the 32x32 images down to 28x28
X = rgb_g(X)
X_tmp = []
for i in range(X.shape[0]):
    X_tmp.append(imresize(X[i], (28, 28)))
X = np.array(X_tmp)
X = X.reshape(-1, 1, 28, 28)

model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same', input_shape=(1, 28, 28)))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta')

One thing to notice is that the input for our neural net is 1x28x28. This is important because the data we feed in must match this dimension, and the MNIST and CIFAR datasets do not have the same image size or number of color channels (MNIST is 1x28x28 while CIFAR-10 is 3x32x32, where the first number represents the number of channels in the image). There are a few ways to accommodate this, but generally you are working with whatever the prior weights and model were trained on, and must resize and adjust your input accordingly (for instance, grayscale images can be repeated from 1 channel into 3 channels for use with RGB-trained models). With this model we will now load data from MNIST and fit again, but only fine-tune the last few layers. First, let's look at the model and some of its features:

> model.layers
[<keras.layers.convolutional.Convolution2D at 0x1368fe358>,
 <keras.layers.core.Activation at 0x1368fe3c8>,
 <keras.layers.convolutional.Convolution2D at 0x136905ba8>,
 <keras.layers.core.Activation at 0x136905898>,
 <keras.layers.convolutional.MaxPooling2D at 0x136930828>,
 <keras.layers.core.Dropout at 0x136930860>,
 <keras.layers.core.Flatten at 0x136947550>,
 <keras.layers.core.Dense at 0x136973240>,
 <keras.layers.core.Activation at 0x136973780>,
 <keras.layers.core.Dropout at 0x13697ef98>,
 <keras.layers.core.Dense at 0x136988a20>,
 <keras.layers.core.Activation at 0x136e29ef0>]

> model.layers[0].trainable
True

With Keras, we have the ability to specify whether we want a layer to be trainable or not.
A trainable layer is one whose weights update while fitting the model. For this experiment we will be doing what is called fine-tuning on only the last layers, without changing the number of classes. We want to keep what the earlier layers learned, so we set all the layers but the last 2 to be non-trainable, such that their learned weights stay the same:

for l in range(len(model.layers) - 2):
    model.layers[l].trainable = False

model.compile(loss='categorical_crossentropy', optimizer='adadelta')

Note: we must also recompile every time we adjust the model's layers. This is oftentimes a tedious process with Theano, so it can be useful to use TensorFlow when initially experimenting. Now we can train a few epochs on the MNIST dataset and see how well the previously learned weights work:

(X_mnist, y_mnist), (_, _) = mnist.load_data()
y_mnist = np_utils.to_categorical(y_mnist)
X_mnist = X_mnist.reshape(-1, 1, 28, 28)
model.fit(X_mnist, y_mnist, batch_size=32, nb_epoch=5, validation_split=.2)

We can also train on the dataset but use different final layers in the model. If, for instance, you were interested in fine-tuning the model on some dataset with a single binary classification, you could do something like:

model.pop()                        # remove the softmax activation
model.pop()                        # remove the 10-way Dense layer
model.add(Dense(1))
model.add(Activation('sigmoid'))   # a single-unit output needs sigmoid, not softmax
model.compile(loss='binary_crossentropy', optimizer='adadelta')  # recompile after changing layers
model.fit(x_train, y_train)        # x_train/y_train: your binary-labeled dataset

While this example is quite small and the weights are easily learned, network weights that take a few days or even weeks to learn aren't uncommon. Also, having a large pre-trained network can be useful both to gauge your own network's results and to incorporate into other aspects of your deep learning model.

Using VGG-16 for transfer learning

There are a few well-known pre-trained models and weights that, while you could plausibly train them on your own computer, would take much too long and usually require specialized hardware, and they may not work on all computers and GPUs. VGG-16 is perhaps one of the better known of these, but there are many others, and Caffe has a nice listing of them. Using VGG-16 is quite simple and gives you a previously trained model that is quite adaptable without having to spend a large amount of time training. With this type of model, we are able to load the model and use its weights; then we can remove the final layers to change to, for instance, a binary classification problem. You need to download the pre-trained weights, available here, and there is also a gist explaining the general use of it.
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import Adam

model = Sequential()
model.add(ZeroPadding2D((1, 1), input_shape=(3, 224, 224)))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(128, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(128, 3, 3, activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1000, activation='softmax'))

model.load_weights('vgg16_weights.h5')

# freeze everything except the layers we are about to replace
for l in model.layers[:-2]:
    l.trainable = False

# swap the 1000-way ImageNet classifier for a single binary output
model.pop()
model.pop()
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))   # sigmoid (not softmax) for one output unit

model.compile(optimizer=Adam(),
              loss='binary_crossentropy',   # matches the single-unit output
              metrics=['accuracy'])

You should now be able to try out your own models and experience the benefits of transfer learning.

About the author

Graham Annett is an NLP engineer at Kip. He has been interested in deep learning for a bit over a year and has worked with and contributed to Keras. He can be found on GitHub.

Is novelty ruining web development?

Antonio Cucciniello
17 Jan 2018
5 min read
If you have been paying attention to the world of web development lately, it can seem a little chaotic. Brand new frameworks and libraries come out every day. These frameworks are sometimes related to previous ones that recently came out, or they attempt something entirely new. As new technologies emerge, they change things that may have been standard for a long time. With these changes happening fairly often, they can be beneficial or frustrating, depending on the situation. Let's take a look at why the creation of new technologies in web development can be a benefit for some developers and a negative for others.

Why change and novelty in web development is awesome

Let's first take a look at how the rapid changes in web development can be a wonderful thing for some developers.

New tools and techniques to learn

With new tech constantly emerging, you will always have something new to learn as a developer. This keeps the field interesting (at least for me and other developers I know who actually like the field). It allows you to continuously add to your skillset as well. You will constantly be challenged by newer frameworks when learning them, which will help you learn future technologies faster. Being a constant learner is a crucial skill in a field that is always improving.

Competition

When there is a high number of frameworks that do similar things, the best ones will be those used by the majority of people. For instance, there are tons of front-end frameworks, but React and Angular are the ones that survive, simply because of their popularity and ease of use. This is similar to how capitalism works: only the best will survive. This creates a culture of innovation in the web development community and causes even more tech to be developed, but at a higher quality.

Even better products

A large amount of technology being released in a short period of time allows developers to build creative and stunning web pages using various combinations of technologies working together. If websites are stunning and easy to use, businesses are more likely to get customers to use their products. If customers are more likely to use products, that probably means they are spending money and therefore growing the economy. Who doesn't love awesome products anyway?

Workflows become more efficient and agile

When better web development tools are created, it becomes easier for other web developers to create their own web apps. Usually, newer technologies present a brand new way of accomplishing something that was previously more difficult. This increased ability lets you build on the shoulders of giants, allowing new developers to create something that previously was too difficult or time consuming.

Why change and novelty in web development is a pain

Now let's take a look at how the ever-changing state of web development can be a bad thing for web developers.

New tools require more learning time

With each new technology, the user must learn exactly how it works and how it can benefit their company or project. Some time in the beginning must be spent actually figuring out how to get the new technology to work. Depending on the documentation, this can sometimes be easier than other times, but that extra time can definitely hurt if you are attempting to hit a hard deadline.

Identifying risk vs. reward can be a challenge

When attempting something new, there is always a risk involved. It can turn out that a framework takes up a large portion of your time to implement, and only gives you a minor performance increase or a minor reduction in development time. You must make this tradeoff yourself. Sometimes it is worth it; other times it definitely is not.

Support lifespans are getting shorter for many tools

Something that is not popular or widely used will tend to lose support. You may have been an early adopter, when you thought a technology would be great. Just because the technology is supported today does not mean it will be supported in the future, when you plan on using it. Support can make or break the usage of an application, and it can sometimes be safer to go with a more stable framework.

In my opinion, an ever-changing web development landscape is a good thing, and you just need to keep up. But I've attempted to give you both sides of the coin in order for you to make a decision on your own.

Antonio Cucciniello is a Software Engineer with a background in C, C++ and JavaScript (Node.js) from New Jersey. His most recent project, called Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub.

The biggest cloud adoption challenges

Rick Blaisdell
10 Sep 2017
3 min read
The cloud technology industry is growing rapidly, as companies understand the profitability and efficiency benefits that cloud computing can provide. Public, private, or a combination of various cloud models are used by 70 percent of U.S. companies, which have at least one application in the cloud, according to the IDG Enterprise Cloud Computing Survey. In addition, almost 93 percent of organizations across the world use cloud services, according to the Building Trust in a Cloudy Sky survey. Even though cloud adoption is increasing, it's important that companies develop a strategy before moving their data and using cloud technology to increase efficiency. This strategy is especially important because transitioning to the cloud is often a challenging process. If you're thinking of making this transition, here is a list of cloud adoption challenges that you should be aware of.

Technology

It's important to take into consideration the complex issues that can arise with new technology. For example, some applications are not built for the cloud, or carry compliance requirements that cannot be met in a pure cloud environment. In this instance, a solution could be a hybrid environment with appropriately configured security requirements.

People

Moving to the cloud can be met with resistance, especially from people who have spent most of their time managing physical infrastructure. The largest organizations will have a long transition to full cloud adoption; small companies that are tech savvy will have an easier time making this change. Most modern IT departments will choose an agile approach to cloud adoption, although some employers might not be that experienced in these types of operational changes. The implementation takes time, but you can transform existing operating models to make the cloud more approachable for the company.

Psychological barriers

Psychologically, there will be many questions. Will the cloud be more secure? Can I maintain my SLAs? Will I find the right technical support services? In 2017, cloud providers can meet all of those expectations and, at the same time, reduce overall expenses.

Costs

Many organizations that decide to move to the cloud do not estimate costs properly. Even though the pricing seems simple, the more moving parts there are, the greater the likelihood of incorrect cost estimates. When starting the migration to the cloud, look for tools that will help you estimate cloud costs and ROI, while taking into consideration all possible variables.

Security

One of the CIO's concerns when it comes to moving to the cloud is security and privacy. The management team needs to know whether the cloud provider they plan to work with has a bulletproof environment. This is a big challenge because a data breach could not only put the company's reputation at risk, but could also result in a huge financial loss for the company.

The first step in adopting cloud services is to be able to identify all of the challenges that will come with the process. It is essential to work with the cloud provider to facilitate a successful cloud implementation. Are there any challenges that you consider crucial to a cloud transition? Let us know what you think in the comments section.

About the Author

Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies that reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies, developing innovative technology strategies.

Deep Learning is all set to revolutionize the music industry

Sugandha Lahoti
11 Dec 2017
6 min read
Isn't it spooky how Facebook can identify the faces of your friends before you manually tag them? Have you been startled by Cortana, Siri or Google Assistant when they instantly recognize and act on your voice as you speak to these virtual assistants? Deep learning is the driving force behind these uncanny yet innovative applications. The next field that is all set to dive into deep learning is the music industry. Neural networks not only ease the production and generation of songs, but also assist in music recommendation, transcription and classification. Here are some ways that deep learning will elevate music and the listening experience itself.

Generating melodies with neural nets

At the most basic level, a deep learning algorithm follows 3 simple steps for music generation:

1. First, the neural net is trained with a sample dataset of labelled songs. The labelling is done based on the emotions you want your song to convey (happy, sad, funny, etc.). For training, the program converts the speech of the dataset into text format and then creates a vector for each word. The training data can also be in MIDI format, which is a standard protocol for encoding musical notes.
2. After training completes, the program is fed a set of emotions as input.
3. It identifies the associated input vectors and compares them to the training vectors. The output is a melody or chords that represent the desired emotions.

Long short-term memory (LSTM) architectures are also used for music generation. They take structured input of a music notation. These inputs are encoded as vectors and fed into an LSTM at each timestep. The LSTM then predicts the encoding of the next timestep. Fully connected convolutional layers are utilized to increase the music quality and to represent rich features in the frequency domain. Magenta, the popular art and music project of Google, has launched Performance RNN, an LSTM-based recurrent neural network. It is designed to produce multiple sounds with expressive timing and dynamics. In other words, Performance RNN determines which notes to play, when to play them, and how hard to strike each note. IBM's Watson Beat uses a neural network to produce complete tracks by understanding music theory, structure, and emotional intent. According to Richard Daskas, a music composer working on the Watson Beat project, "Watson only needs about 20 seconds of musical inspiration to create a song."

Transcribing music with deep learning

Deep learning methods can also be used for arranging a piece of music for a different instrument. LSTM networks are a popular choice for music transcription and modelling. These networks are trained using a large dataset of pre-labelled music transcriptions (expressed in ABC notation), which are then used to generate new music transcriptions. In fact, transformed audio data can be used to predict the group of notes currently being played, by treating the transcription model as an image classification problem. For this, an image of the audio, called a spectrogram, is used. A spectrogram displays how the spectrum or frequency content of the audio changes over time. A Short Time Fourier Transform (STFT) or a constant-Q transform is used to create this spectrogram. The spectrogram is then fed to a convolutional neural network (CNN). The CNN estimates current notes from the audio data and determines which specific notes are present by analysing 88 output nodes, one for each of the piano keys.
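As a concrete illustration of the spectrogram step just described, here is a hedged sketch using librosa (an assumed dependency; any STFT implementation would do). The audio file name is a placeholder.

# Hypothetical sketch: turning audio into the spectrogram "image" a
# transcription CNN consumes. "example.wav" is a placeholder file.
import numpy as np
import librosa

y, sr = librosa.load("example.wav", sr=22050)       # load audio as a 1-D signal
stft = librosa.stft(y, n_fft=2048, hop_length=512)  # complex Short Time Fourier Transform
spectrogram = np.abs(stft)                          # magnitude: frequency x time
log_spec = librosa.amplitude_to_db(spectrogram)     # log scale, a common CNN input

# shape (1 + n_fft/2, frames), treated as a single-channel image for the CNN
cnn_input = log_spec[np.newaxis, np.newaxis, :, :]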
The transcription network is generally trained using a large number of examples from MIDI files spanning several different genres of music. Magenta has developed the NSynth dataset, a high-quality multi-note dataset for music transcription. It is inspired by image recognition datasets and has a huge collection of annotated musical notes.

Making better music recommendations

Neural nets are also used to make intelligent music recommendations, and are a step ahead of traditional collaborative filtering networks. Using neural networks, the system can analyse the songs saved by users and then utilize those songs to make new recommendations. Neural nets can also be used to analyse songs based on musical qualities such as pitch, chord progression, bass, and so on. Using the similarities between songs that share the same traits, neural networks can detect and predict new songs, thus providing recommendations based on similar lyrical and musical styles.

Convolutional neural networks (CNNs) are utilized for making music recommendations. A time-frequency representation of the audio signal is fed into the network as the input. Three-second audio clips are randomly chosen from the audio samples to train the neural network. The CNNs are then used to predict latent factors from the music audio by taking the average of the predictions for consecutive clips. The feature extraction and pooling layers permit operation on several timescales. Spotify is working on a music recommendation system with a CNN. This recommendation system, when trained on short clips of songs, can create playlists based on the audio content only.

Classifying music according to genre

Classifying music according to genre is another achievement of neural nets. At the heart of this application lies the LSTM network. At the very first stage, convolutional layers are used for feature extraction from the spectrograms of the audio file. The sequence of features so obtained is given as input to the LSTM layer. The LSTM evaluates dependencies of the song across both short time periods and the long-term structure. After the LSTM, the input is fed into a fully connected, time-distributed layer, which essentially gives us a sequence of vectors. These vectors are then used to output the network's evaluation of the genre of the song at a particular point in time. Deepsound uses the GTZAN dataset and an LSTM network to create a model for music genre recognition. Comparing the mean output distribution with the correct genre, the model achieves almost 67% accuracy.

For musical pattern extraction, an MFCC feature dataset is used for audio analysis. First, the audio signal is extracted in MFCC format. Next, the input song is converted into an MFCC map. This map is then split to feed it as the input to the CNN. Supervised learning is used to automatically obtain musical pattern extractors, given that song labels are provided. The extractors so acquired are used for restoring high-order pattern-related features. After high-order classification, the results are combined and undergo a voting process to produce the song-level label. Scientists from Queen Mary University of London trained a neural net with over 6,000 ballad, hip-hop, and dance songs to develop a network that achieves almost 75% accuracy in song classification.

The road ahead

Neural networks have advanced the state of music to a whole new level, where one no longer requires physical instruments or vocals to compose music.
The road ahead

Neural networks have advanced the state of music to a whole new level, one where physical instruments or vocals are no longer required to compose music. The future will see more complex models and data representations that understand the underlying melodic structure, helping models create compelling artistic content on their own. The combination of music and technology will also foster a collaborative community of artists, coders, and deep learning researchers, leading to a tech-driven, yet artistic, future.

Through the customer's eyes: 4 ways Artificial Intelligence is transforming ecommerce

Savia Lobo
23 Nov 2017
5 min read
We have come a long way from what ecommerce looked like two decades ago. From a non-existent entity, it has grown into a world-devouring business model that poses a real threat to the traditional retail industry. It has moved from a basic static web page with limited product listings to a full-grown virtual marketplace where anyone can buy or sell anything, from anywhere, at any time, at the click of a button. At the heart of this transformation are two things: customer experience and technology. This is what Jeff Bezos, founder and CEO of Amazon, one of the world's largest ecommerce sites, believes: "We see our customers as invited guests to a party, and we are the hosts. It's our job every day to make every important aspect of the customer experience a little bit better." Now, with the advent of AI, the retail space, especially e-commerce, is undergoing another major transformation that will redefine customer experiences and thereby once again change the dynamics of the industry. So, how is AI-powered ecommerce actually changing the way shoppers shop?

AI-powered ecommerce makes search easy, accessible and intuitive

Looking for something? Type it! Say it! Searching for a product you can't name? No worries, just show a picture.

"A lot of the future of search is going to be about pictures instead of keywords." - Ben Silbermann, CEO of Pinterest

We take that statement with a pinch of salt, but we are reasonably confident that a lot of product search is going to be non-text based. Though text searches are common, voice and image searches in e-commerce are now gaining traction. AI makes it possible for the customer to move beyond simple text-based product search and search more easily and intuitively through voice and visual product searches, which also makes search more accessible. It uses Natural Language Processing to understand the customer's natural language, be it in text or speech, and provide more relevant search results. Visual product searches are made possible through a combination of computer vision, image recognition, and reverse image search algorithms. Amazon Echo, a home-automation speaker, features the voice assistant Alexa, which lets customers buy products online through simple conversations. Slyce offers a visual search feature wherein the customer can scan a barcode, a catalog, or even a real image, much like Amazon's in-app visual search. Clarifai helps developers build applications that recognize images and videos and search related content.

AI-powered ecommerce makes personalized product recommendations

When you search for a product, the AI underneath recommends further options based on your search history, or on what other users with similar tastes found interesting. Recommendation engines employ one, or a combination, of three types of algorithms: content-based filtering, collaborative filtering, and complementary-product recommendation. The relevance and accuracy of the results depend on various factors, such as the type of recommendation engine used, the quantity and quality of data used to train the system, and the data storage and retrieval strategies used, among others. For instance, Amazon uses DSSTNE (Deep Scalable Sparse Tensor Network Engine, pronounced "Destiny") to make customized product recommendations to its customers. The customer data collected and stored is used by DSSTNE to train on and generate predictions for customers. The data processing itself takes place on CPU clusters, whereas the training and predictions take place on GPUs, to ensure speed and scalability.
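As a rough illustration of the collaborative filtering idea behind such engines, here is a minimal item-based sketch in Python using cosine similarity on a toy ratings matrix; engines like DSSTNE operate on vastly larger, sparse data with learned models rather than this hand-rolled similarity:

    import numpy as np

    # Toy user-item ratings matrix (rows: users, columns: products, 0 = unrated)
    ratings = np.array([
        [5, 3, 0, 1],
        [4, 0, 0, 1],
        [1, 1, 0, 5],
        [0, 1, 5, 4],
    ], dtype=float)

    def cosine_sim(a, b):
        # Cosine similarity between two item-rating columns
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return a.dot(b) / denom if denom else 0.0

    # Score unrated items for user 0 by similarity to the items they rated
    user = ratings[0]
    scores = {}
    for item in range(ratings.shape[1]):
        if user[item] == 0:  # only consider products this user hasn't rated
            sims = [cosine_sim(ratings[:, item], ratings[:, rated]) * user[rated]
                    for rated in range(ratings.shape[1]) if user[rated] > 0]
            scores[item] = sum(sims) / len(sims)

    print(max(scores, key=scores.get))  # index of the recommended product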
Virtual assistants as your personal shopping assistants

Now, what if we said you can have all the benefits discussed above without doing much of the work yourself? In other words, what if you had a personal shopping assistant who knows your preferences, handles all the boring aspects of shopping (searching, comparing prices, going through customer reviews, tracking orders, and so on), and brought you just the right products with the best deals? Mona, one such personal shopper, can do all of the above and more, using a combination of artificial intelligence and big data. Virtual assistants can be either fully AI-driven or an AI-human collaboration. Chatbots also assist shoppers, though within a more limited scope: they can resolve customer queries with zero downtime and handle simple tasks such as notifying the customer of price changes and placing and tracking orders. Domino's has a Facebook Messenger bot that enables customers to order food. Metail, an AI-powered ecommerce website, takes in your body measurements so you can actually see how clothing would look on you. Botpress helps developers build their own chatbots in less time.

Maximizing CLV (customer lifetime value) with AI-powered CRM

AI-powered ecommerce in CRM aims to help businesses predict CLV and sell the right product to the right customer at the right time, every time, leveraging the machine learning and predictive capabilities of AI. It also helps businesses provide the right level of customer service and engagement. In other words, by combining predictive capabilities with automated 1-1 personalization, an AI-backed CRM can maximize CLV for every customer! Salesforce Einstein and IBM Watson are some of the frontrunners in this space. IBM Watson, with its cognitive touch, helps ecommerce sites analyze their mountain of customer data and glean useful insights to predict things like what customers are looking for, which brands are popular, and so on. It can also help with dynamic pricing of products, predicting when to discount and when to increase the price by analyzing demand and competitors' pricing tactics.

It is clear that AI not only has the potential to transform e-commerce as we know it, but has already become central to the way leading ecommerce platforms such as Amazon function. Intelligent e-commerce is here and now. The near future of ecommerce is omnicommerce, driven by the marriage between AI and robotics, ushering in the ultimate customer experience: one that is beyond our current imagination.
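To ground the CLV idea, here is a deliberately simplified Python sketch of the classic formula (average order value x purchase frequency x expected lifespan x margin); real AI-backed CRMs estimate these inputs per customer with predictive models rather than fixed averages:

    def customer_lifetime_value(avg_order_value, purchases_per_year,
                                expected_years, profit_margin):
        """Naive CLV estimate: revenue per year * years * margin."""
        return avg_order_value * purchases_per_year * expected_years * profit_margin

    # Hypothetical customer: $60 orders, 4 orders/year, 5-year horizon, 20% margin
    print(customer_lifetime_value(60.0, 4, 5, 0.20))  # -> 240.0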

Google Daydream

RakaMahesa
15 Mar 2017
5 min read
Google Cardboard, with more than 5 million users, is a success for Google. So, it's not a big surprise that Google announced their next step into the world of virtual reality with the evolution of Google Cardboard: Google Daydream, a more robust and enhanced mobile VR platform.

So, what is Google Daydream? Is it just a better version of Google Cardboard? How does it differ from Google Cardboard? Well, they are both platforms for mobile VR apps that can be viewed in a mobile headset. Unlike Cardboard, though, Google Daydream has a set of specifications that mobile devices and headsets must follow. This means developers know exactly what kind of input the users of their apps will have, something that wasn't possible on the Cardboard platform.

The biggest and most notable feature of Google Daydream compared with Cardboard, however, is the addition of a motion-based controller. Users can now use this remote-like controller to point and interact with the virtual world much more intuitively, and developers can build a better and more immersive VR experience with it.

The Daydream Controller offers four physical inputs to the user:
- Touchpad (the big circular pad)
- App button (the button with the line symbol)
- Home button (the button with the circle symbol)
- Volume buttons (the buttons on the side)

And since it's a motion-based controller, it comes with various sensors to detect the user's hand movement. Do note that the movement the controller can detect is mostly limited to rotational movement, unlike the fully position-tracked controllers on PC VR platforms. Two more things to keep in mind: first, the home and volume buttons are not accessible to developers and are reserved for the platform's functionality; second, the touchpad is only capable of detecting a single touch. And since the documentation doesn't mention multitouch being added in the future, it's safe to assume the controller is designed for single touch and will stay that way for the foreseeable future.

All right, now that we know what the controller can do, let's dive deeper into the Google Daydream SDK and figure out how to use the Daydream Controller in our apps. Before we go further, though, let's make sure we have all the requirements for developing Daydream apps:
- Unity 5.6 (with native Daydream support)
- Google VR SDK for Unity v1.2
- Daydream Controller, or an Android phone with a gyroscope

Yes, you don't have to own the controller to develop a controller-compatible app, so don't fret. Instead, we're going to emulate the Daydream Controller using an Android phone. To do that, all we need to do is install the controller emulator APK on our phone and run the emulator app. Then, to enable the emulator to be detected in the Unity Editor, we simply connect the phone to the computer with a USB cable. Do note that we can't connect the actual Daydream Controller to our computer and will only be able to use the controller when it's paired to a mobile phone, so you may want to use the emulator for testing purposes even if you have the controller.

To start reading user input from the controller, we first must add the GvrControllerMain prefab to our scene. Afterwards, we can simply use the GvrController API to detect any user interaction with the device. The GvrController API behaves similarly to Unity's Input API, so you're in luck if you're already familiar with Unity's input handling.
Like the Unity Input API, there are three properties to use if we want to find out the state of the buttons on the controller. Use the GvrController.ClickButtonDown property to check whether the touchpad was just clicked, the GvrController.ClickButtonUp property to check whether the touchpad was just released, and the GvrController.ClickButton property to see whether the user is holding down the touchpad click. Simply replace the "ClickButton" part with "AppButton" to detect the state of the app button on the controller.

The API for the controller's touchpad is similar to the Unity mouse input API as well. First, we find out whether the touchpad is being touched by checking the GvrController.IsTouching property. Then, we can read the touch position with the GvrController.TouchPos property. There is no function for detecting swipes and other movements, but you should be able to create your own detector by tracking changes in the touch position.

For traditional controllers, these properties would be enough to capture all the user input. However, the Daydream Controller is a controller for VR, so there's one more aspect to read: movement. Using the GvrController.Orientation property, we can get a rotational value for the controller's orientation in the real world. We can then apply that value to a GameObject in our scene and have it mirror the movement of the physical controller.

And that's it for our introduction to the Daydream Controller. The world of virtual reality is still vast and unexplored, and every day new ways to interact with the VR world are being tried out. So, keep experimenting!

About the author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

Understanding the role AIOps plays in the present-day IT environment

Guest Contributor
17 Dec 2019
7 min read
In most conversations surrounding cybersecurity these days, the term "digital transformation" gets thrown into the mix frequently, especially when the discussion revolves around AIOps. If you've got the slightest interest in recent developments in the cybersecurity world, you might have an idea of what AIOps is. In case you don't: AIOps refers to a multi-layered, modern technology platform that allows enterprises to maximize IT operations by integrating AI and machine learning to detect and solve cybersecurity issues as they occur. As the name suggests, AIOps makes use of essential AI technology, such as machine learning, for the overall improvement of an organization's IT operations.

Today, however, the role that AIOps plays has shifted dramatically, which leaves plenty of room for confusion among cybersecurity officers, since most enterprises prefer to take the more conventional route as far as AI application is concerned. To get the most out of AIOps, enterprises need to understand the significance of the changes in the present-day IT environment, and how those changes influence AI's applications. To aid readers in understanding the volatile relationship between AI's applications and the IT environment they apply to, this article dives into the differences between conventional monitoring methods and present-day enterprise needs, and shines a light on the importance of adopting AIOps in enterprises.

How has the IT environment changed in modern times?

Before we get into every nook and cranny of why the transition from a traditional approach to a more modern one matters, one thing should be very clear: just because a specific approach works for one organization in no way guarantees that it will work for you. Perhaps the greatest advice any business owner could receive is to plan according to the specific requirements of their security and IT infrastructure. The greatest shortcoming of many CISOs and CSOs is that they fail to understand the particular needs of their IT environment and rely on conventional applications of AI to maximize the overall IT experience.

In traditional AIOps applications, the number of 'moving' parts or components involved was significantly smaller, so the involvement of AI was far less complex, and therefore much easier to monitor and control. In a more modern setting, however, with the wave of digitalization and the ever-growing reliance of enterprises on cloud computing systems, the number of components involved has increased, which makes understanding the web of dependencies much more difficult. Research conducted by Dynatrace bears witness to the ever-evolving and complex nature of today's IT environment: something as simple as a single web or mobile application transaction can involve a staggering 37 different components or technologies on average. Taking this into account, a traditional approach to AI becomes redundant and ineffective, since it relies on an extremely limited understanding and fails to make sense of all the information provided by an arsenal of tools and dashboards. Not only is the conventional approach to AIOps impractical in the modern IT context, it is also extremely outdated.
Having said that, perhaps the only approach that fits the modern-day IT environment is a software intelligence-centric approach, which allows for fast-paced and robust solutions to present-day IT complexities.

How important is AIOps for enterprises today?

As mentioned above, present-day IT infrastructure requires a drastic change in the relationship that enterprises have had with AIOps so far. For starters, enterprises and organizations need to realize the importance of the role AIOps plays. Unfortunately, there's an overarching tendency in enterprises to naively label investment in AIOps as yet another "IT expense." On the contrary, AIOps is essential for companies and organizations today, since every company is undergoing digitalization and relying on modern technology more and more. Some cybersecurity specialists might even argue that every company is slowly turning into a software company, primarily because of the rise of cloud computing systems.

AIOps also improves the 'business' aspect of an enterprise, since the modern consumer looks for enterprises that offer innovative features, along with the ability to enhance user experience through an impeccable and seamless digital experience. Furthermore, in today's competitive economic conditions, carrying out business operations in a timely manner is critical to an enterprise's longevity, which is where the integration of AI can help an organization function smoothly. It should also be pointed out that employing AIOps opens up new avenues for businesses to step into, since it removes the element of fear present in many business owners. Implementing AIOps also enables an organization to make quick-paced releases, since it takes IT problems out of the equation; these problems usually consist of bugs, regulation and compliance, and monitoring the overall IT experience being provided to consumers.

How can enterprises ensure the longevity of their reliance on AIOps?

When it comes to integrating any new technology into an organization's routine functions, there are always questions to be asked about the impact of continued reliance on it. To demonstrate, let's return to a technology we've referred to throughout the article: cloud computing. Cloud computing, whose roots go back to the 1960s, revolutionized data storage into what it is today. However, after some unfortunate cyberattacks launched on cloud storage networks, cybersecurity specialists found dire problems with complete dependency on cloud storage. Similarly, many cybersecurity specialists and researchers wonder about the negative impact that a dependency on AIOps could have in the future.

When it comes to reassuring enterprises about the longevity of amalgamating AIOps into their operations, we'd offer the following reasons. Unlike cloud computing, developments in AIOps are heavily rooted in real-time data fed to the algorithm by an IT team; when you strip away all the fancy IT jargon, the only identity you need to trust is that of your IT personnel. And since AIOps relies on smart auto-remediation capabilities, business owners can see an immediate response driven by the employed algorithms. One such way that AIOps deploys auto-remediation is by sending out alerts of any possible issue, a practice that enables businesses to focus on the "business" side of the spectrum, since they've got a trustworthy agent to rely on.
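As a toy illustration of that alerting idea (not a sketch of any particular AIOps product), the following Python snippet flags a metric that strays too far from its recent baseline; real platforms learn baselines with far more sophisticated models:

    import statistics

    def alert_on_anomaly(samples, new_value, threshold=3.0):
        """Flag new_value if it deviates > threshold std devs from the baseline."""
        mean = statistics.mean(samples)
        stdev = statistics.stdev(samples)
        if stdev and abs(new_value - mean) / stdev > threshold:
            return f"ALERT: {new_value} ms deviates from baseline {mean:.1f} ms"
        return None

    # Hypothetical response-time baseline (ms) and a new measurement
    baseline = [102, 98, 105, 101, 99, 103, 100, 97]
    print(alert_on_anomaly(baseline, 240))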
Conclusion

At the end of this article, we can only restate what's been said before in a thousand different ways: it's high time that enterprises welcome change in the form of AIOps instead of resisting it. In the modern age of digitalization, the key differences seen in the modern-day IT landscape should be reason enough for enterprises to be on the lookout for new alternatives for securing their data, and by extension, their companies.

Author Bio

Rebecca James is an enthusiastic cybersecurity journalist, a creative team leader, and the editor of PrivacyCrypts.

What is AIOps and why is it going to be important?
8 ways Artificial Intelligence can improve DevOps
Post-production activities for ensuring and enhancing IT reliability [Tutorial]

5 ways Machine Learning is transforming digital marketing

Amey Varangaonkar
04 Jun 2018
7 min read
The enterprise interest in Artificial Intelligence is surging. In an era of cut-throat competition, where it's either do or die, businesses have realized the transformative value of AI in gaining an upper hand over their rivals. Given its direct contribution to business revenue, it comes as no surprise that marketing has become one of the major application areas of machine learning. Per Capgemini, 84% of marketing organizations are implementing Artificial Intelligence in some capacity in 2018, and 3 out of 4 organizations implementing AI techniques have managed to increase sales of their products and services by 10% or more. In this article, we look at 5 innovative ways in which machine learning is being used to enhance digital marketing.

Efficient lead generation and customer acquisition

One of the major keys to driving business revenue is getting more customers on board who will buy your products or services repeatedly. Machine learning comes in handy for identifying potential leads and converting those leads into customers. With the help of pattern recognition techniques, it is possible to understand a particular lead's behavioral and purchase trends. Through predictive analytics, it is then possible to predict whether a particular lead will buy the product or not. That lead is then put into the marketing sales funnel for targeted marketing campaigns, which may ultimately result in a purchase.

A cautionary note here: with GDPR (General Data Protection Regulation) in place across the EU (European Union), there are restrictions on the manner in which AI algorithms can be used to make automated decisions based on consumer data. This makes it imperative for businesses to strictly follow the regulation and operate under its purview, or they could face heavy penalties. As long as businesses respect privacy and follow basic human decency, such as asking for permission to use a person's data or informing them about how their data will be used, marketers can reap the benefits of data-driven marketing like never before. It all boils down to applying common sense while handling personal data, as one GDPR expert put it. But we all know how uncommon that sense is!

Customer churn prediction is now possible

'Customer churn rate' is a popular marketing term referring to the number of customers who opt out of a particular service offered by the company over a given time period. The churn time is calculated based on the customer's last interaction with the service or the website. It is crucial to track the churn rate, as it is a clear indicator of the progress, or the lack of it, that a business is making. Predicting the customer churn rate is difficult, especially for e-commerce businesses selling a product, but it is not impossible, thanks to machine learning. By understanding historical data and a user's past website usage patterns, these techniques can help a business identify the customers who are most likely to churn soon, and when that is expected to happen. Appropriate measures can then be taken to retain such customers, by giving special offers and discounts, sending timely follow-up emails, and so on, without any human intervention. American entertainment giant Netflix makes perfect use of churn prediction to keep its churn rate at just 9%, lower than any other subscription streaming service out there today. Not just that, they also manage to market their services to drive more customer subscriptions.
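To make this concrete, here is a minimal sketch of what a churn model could look like in Python with scikit-learn; the features and tiny dataset are purely hypothetical, and a production model would draw on far richer behavioral history:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical features: [days since last visit, purchases, support tickets]
    X = np.array([[2, 12, 0], [45, 1, 3], [7, 8, 1], [60, 0, 4],
                  [3, 15, 0], [30, 2, 2], [5, 9, 0], [50, 1, 5]])
    y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # 1 = customer churned

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)

    model = GradientBoostingClassifier().fit(X_train, y_train)

    # Probability that a new customer (20 days inactive, 3 purchases) churns
    print(model.predict_proba([[20, 3, 1]])[0][1])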
Dynamic pricing made easy

In today's competitive world, products need to be priced optimally. It has become imperative that companies define extremely competitive and relevant pricing for their products, or the customers might not buy them. On top of this, there are fluctuations in the demand and supply of the product, which can affect the product's pricing strategy. With machine learning algorithms, it is now possible to forecast price elasticity by considering various factors, such as the channel on which the product is sold, the sales period, the product's positioning strategy, and customer demand. For example, ecommerce giants Amazon and eBay tweak their product prices on a daily basis. Their pricing algorithms take into account factors such as the product's popularity among customers, the maximum discount that can be offered, and how often the customer has purchased from the website. This strategy of dynamic pricing is now being adopted by almost all the big retail companies, even in their physical stores. Specialized software is available that leverages machine learning techniques to set dynamic prices for products. Competera is one such pricing platform, which transforms retail through ongoing, timely, and error-free pricing for category revenue growth and improvements in customer loyalty tiers. To learn more about how dynamic pricing actually works, check out this Competitoor article.

Customer segmentation and radical personalization

Every individual is different and has unique preferences, likes, and dislikes. With machine learning, marketers can segment users into different buyer groups based on a variety of factors, such as their product preferences, social media activity, Google search history, and much more (see the short clustering sketch after this section). For instance, there are machine learning techniques that can segment users based on who loves to blog about food, who loves to travel, or even which show they are most likely to watch on Netflix! The website can then recommend or market products to these customers accordingly. Affinio is one such platform used for segmenting customers based on their interests.

Content and campaign personalization is another widely recognized use case of machine learning in marketing. Machine learning algorithms are used to build recommendation systems that take into consideration the user's online behavior and website usage to analyse and recommend products they are likely to buy. A prime example of this is Google's remarketing strategy, which tries to reconnect with customers who leave the website without buying anything by showing them relevant ads across different devices. The best part about recommendation systems is that they can recommend two completely different products to two customers with different usage patterns. Incorporating them within the website has turned out to be a valuable strategy for increasing customer loyalty and overall lifetime value.
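Here is the promised clustering sketch: a minimal Python example that groups customers into segments with scikit-learn's KMeans; the features and values are hypothetical stand-ins for real behavioral data:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical per-customer features: [monthly visits, avg basket ($), blog posts read]
    customers = np.array([[25, 80, 1], [3, 15, 0], [22, 95, 2], [4, 10, 1],
                          [30, 120, 0], [2, 20, 0], [18, 70, 12], [20, 60, 10]])

    # Standardize so no single feature dominates the distance metric
    scaled = StandardScaler().fit_transform(customers)

    # Group customers into three buyer segments
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)
    print(kmeans.labels_)  # segment id per customer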
Improving customer experience

Gone are the days when a customer visiting a website had to use the 'Contact Me' form for any query and wait for an executive to get back with an answer. These days, chatbots are integrated into almost every ecommerce website to answer ad hoc customer queries and even suggest products that fit the customer's criteria. There are live-chat features included in these chatbots as well, which allow customers to interact with the chatbot and understand a product's features before buying it. For example, IBM Watson has a really cool feature called the Tone Analyzer. It parses the feedback given by the customer and identifies the tone of the feedback, whether it's angry, resentful, disappointed, or happy. It is then possible to take appropriate measures to ensure that a disgruntled customer is satisfied, or to appreciate the customer's positive feedback, whatever the case may be.

Marketing will only get better with machine learning

Highly accurate machine learning algorithms, better processing capabilities, and cloud-based solutions are now making it possible for companies to get the most out of AI for their marketing needs. Many companies have already adopted machine learning to boost their marketing strategy, with major players such as Google and Facebook leading the way. It is safe to say many more companies, especially small and medium-sized businesses, are expected to follow suit in the near future.

Read more
How machine learning as a service is transforming cloud
Microsoft Open Sources ML.NET, a cross-platform machine learning framework
Active Learning: An approach to training machine learning models efficiently

Introduction to Sklearn

Janu Verma
16 Apr 2015
7 min read
This is an introductory post on scikit-learn, in which we will learn basic terminology and the functionality of this amazing Python package. We will also explore the basic principles of machine learning and see how machine learning can be done with sklearn.

What is scikit-learn (sklearn)?

scikit-learn is a Python framework for machine learning. It has efficient implementations of various machine learning and data mining algorithms. It is easy to use and accessible to everybody: open source, with a commercially usable BSD license. Data scientists love Python, and most scientists in the industry use this as their data science stack:

    numpy + pandas + sklearn

Dependencies
- Python (>= 2.6)
- numpy (>= 1.6.1)
- scipy (>= 0.9)
- matplotlib (for some tasks)

Installation
- Mac: pip install -U numpy scipy scikit-learn
- Linux: sudo apt-get install build-essential python-dev python-setuptools python-numpy python-scipy libatlas-dev libatlas3gf-base

After you have installed sklearn and all its dependencies, you are ready to dive further.

Input data

Most machine learning algorithms implemented in sklearn expect the input data in the form of a numpy array of shape [nSamples, nFeatures]. nSamples is the number of samples in the data; each sample is an observation or an instance of the data. A sample can be a text document, a picture, a row in a database or a CSV file, anything you can describe with a fixed set of quantitative traits. nFeatures is the number of features, or distinct traits, that describe each sample quantitatively. Features can be real-valued, boolean or discrete. The data can be very high dimensional, with hundreds of thousands of features, and it can be sparse, with most of the feature values being zero.

Example

As an example, we will look at the Iris dataset, which comes with sklearn and every other ML package that I know of!

    from sklearn.datasets import load_iris
    iris = load_iris()
    input = iris.data
    output = iris.target

How many samples and features does this dataset have? Since the input data is a numpy array, we can access its shape as follows:

    nSamples = input.shape[0]
    nFeatures = input.shape[1]
    >> nSamples = 150
    >> nFeatures = 4

This dataset has 150 samples, where each sample has 4 features. Let's look at the names of the target output:

    iris.target_names
    >> array(['setosa','versicolor', 'virginica'], dtype='|S10')

To get a better idea of the data, let's look at a sample:

    input[0]
    >> array([5.1, 3.5, 1.4, 0.2])
    output[0]
    >> 0

The data is given as a numpy array of shape (150, 4), which consists of measurements of physical traits for three species of iris. The features are:
- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm

The target values {0, 1, 2} denote the three species:
- Setosa
- Versicolour
- Virginica

Here is the basic idea of machine learning. The basic setting for a supervised machine learning model is as follows: we have a labeled training set, that is, samples with known values of a target, and we are given an unlabeled testing set, that is, samples for which the target values are unknown. The goal is to build a model that trains on the labeled data to predict the output for the unlabeled data. Supervised learning is further broken down into two categories: classification and regression. In classification, the target value is discrete; in regression, the target value is continuous.
There are various machine learning methods that can be used to build a supervised learning model, for example decision trees, k-nearest neighbors, SVMs, linear and logistic regression, random forests, and more. I'll not talk about these methods and their differences in this post. Instead, I will give an illustration of using sklearn for predictive modeling with a classification model and a regression model.

Iris example continued (classification)

We saw that the data is a numpy array of shape (150, 4) consisting of measurements of physical traits for three iris species.

Goal

The task is to build a machine learning model to predict the species of a sample given the values of the features. We will split the iris set into a training and a test set; the model will be built on the training set and evaluated on the test set. Before we do that, let's look at the general outline of a machine learning model in sklearn.

Outline of sklearn models

The basic outline of a sklearn model is given by the following pseudocode:

    input = labeled data
    X_train = input.features
    Y_train = input.target
    algorithm = sklearn.ClassImplementingTheAlgorithm(parameters of the algorithm)
    fitting = algorithm.fit(X_train, Y_train)
    X_test = unlabeled set
    prediction = algorithm.predict(X_test)

Here, as before, the labeled training data is in the form of a numpy array, with X_train as the array of feature values and Y_train as the corresponding target values. In sklearn, different machine learning algorithms are implemented as classes, and we choose the class corresponding to the algorithm we want to use. Each class has a method called fit, which fits the input training data to estimate the parameters of the algorithm. With these estimated parameters, the predict method then computes the estimated value of the target for the test examples.

sklearn model on iris data

Following the general outline of the sklearn model, we will now build a model on the iris data to predict the species:

    from sklearn.datasets import load_iris
    iris = load_iris()
    X = iris.data
    Y = iris.target

    from sklearn import cross_validation
    X_train, X_test, Y_train, Y_test = cross_validation.train_test_split(X, Y, test_size=0.4)

    from sklearn.neighbors import KNeighborsClassifier
    algorithm = KNeighborsClassifier(n_neighbors=5)
    fitting = algorithm.fit(X_train, Y_train)
    prediction = algorithm.predict(X_test)

The iris dataset is split into a training and a test set using the cross_validation module from sklearn: 60% of the iris data formed the training set, and the remaining 40% formed the test set. The cross_validation module picks training and test examples randomly. We used the k-nearest neighbors algorithm to build this model; there is no reason for choosing this method other than simplicity. The prediction of the sklearn model is a label from {0, 1, 2} for each test case. Let's check how well this model performed:

    from sklearn.metrics import accuracy_score
    accuracy_score(Y_test, prediction)
    >> 0.97
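Before moving on to regression, note that trying a different algorithm is mostly a matter of swapping the class. As a quick sketch (reusing the variables from the code above, with an arbitrary max_depth), a decision tree version would look like this:

    from sklearn.tree import DecisionTreeClassifier

    # Same pipeline; only the algorithm class and its parameters change
    algorithm = DecisionTreeClassifier(max_depth=3)
    fitting = algorithm.fit(X_train, Y_train)
    prediction = algorithm.predict(X_test)
    accuracy_score(Y_test, prediction)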
Regression

We will discuss the simplest example: fitting a line through the data.

    # Create some simple data
    import numpy as np
    np.random.seed(0)
    X = np.random.random(size=(20, 1))
    y = 3 * X.squeeze() + 2 + np.random.normal(size=20)

    # Fit a linear regression to it
    from sklearn.linear_model import LinearRegression
    model = LinearRegression(fit_intercept=True)
    model.fit(X, y)
    print("Model coefficient: %.5f, and intercept: %.5f" % (model.coef_, model.intercept_))
    >> Model coefficient: 3.93491, and intercept: 1.46229

    # model prediction
    X_test = np.linspace(0, 1, 100)[:, np.newaxis]
    y_test = model.predict(X_test)

Thus we get the values of the target (which are continuous). We gave a simple model based on sklearn's implementations of the k-nearest neighbors algorithm and linear regression; you can try other models. The Python code will be the same for most of the methods in sklearn, except for a change in the name of the algorithm. Discover more machine learning content and tutorials on our dedicated Machine Learning page.

About the author

Janu Verma is a Quantitative Researcher at the Buckler Lab, Cornell University, where he works on problems in bioinformatics and genomics. His background is in mathematics and machine learning, and he leverages tools from these areas to answer questions in biology. He holds a Masters in Theoretical Physics from the University of Cambridge in the UK, and he dropped out of a mathematics PhD program (after 3 years) at Kansas State University. He has held research positions at the Indian Statistical Institute in Delhi, the Tata Institute of Fundamental Research in Mumbai, and the JN Center for Advanced Scientific Research in Bangalore. He is a voracious reader and an avid traveler. He hangs out at local coffee shops, which serve as his office away from office. He writes about data science, machine learning and mathematics at Random Inferences.

5 Mistakes Web Developers Make When Working with MongoDB

Charanjit Singh
21 Oct 2016
5 min read
MongoDB is a popular document-based NoSQL database. In this post, I am listing some mistakes that I've found developers make while working on MongoDB projects.

Database accessible from the Internet

Allowing your MongoDB database to be accessible from the Internet is the most common mistake I've found developers make in the wild. MongoDB's default configuration used to expose the database to the Internet; that is, you could connect to the database using the URL of the server it was running on. This makes perfect sense for starters who might be deploying a database on a different machine, given that it is the path of least resistance, but in the real world it's a bad default that is often overlooked. A database (whether Mongo or any other) should be accessible only to your app. It should be hidden in a private local network that provides access to your app's server only. Although this vulnerability has been fixed in newer versions of MongoDB, make sure you change the config if you're upgrading from a previous version, and make sure the new junior developer you hired didn't expose to the Internet the database your application server connects to. If it is a requirement to have a database accessible from the open Internet, pay special attention to securing it. Whitelisting the IP addresses that are allowed to access the database is almost always a good idea.

Not having multiple database users with access roles

Another possible security risk is having a single MongoDB database user doing all of the work. This usually happens when developers with little knowledge, experience, or interest in databases handle the database management or setup, and when database management is treated as lesser work in smaller software shops (the kind I get hired for mostly). Well, it is not. A database is as important as the app itself; your app is most likely mainly providing an interface to the database. Having a single user to manage the database, and using that same user for database access in the application, is almost never a good idea. Many times this exposes vulnerabilities that could have been avoided if the database user had limited access in the first place. NoSQL doesn't mean "secure" by default. Security should be considered when setting the database up, and not left as something to be done "properly" after shipping.

Schema-less doesn't mean thoughtless

When someone asked Ronny why he chose MongoDB for his shiny new app, his response was that "it's schema-less, so it's more flexible". Schema-less can prove to be quite a useful feature, but with great power comes great responsibility. Often, I have found teams struggling with apps because they didn't think through the structure for storing their data when they started. MongoDB doesn't require you to have a schema, but that doesn't mean you shouldn't think properly about your data structure. Rushing in without putting much thought into how you're going to structure your documents is a sure recipe for disaster. Your app might be small and simple and so easy right now, but simple apps become complicated very quickly. You owe it to your future self to have a proper, well-thought-out database schema. Most programming languages that provide an interface to MongoDB have libraries to impose some kind of schema on it. Pick your favorite and use it religiously.
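As an illustration of enforcing a schema from application code, here is a minimal sketch using Python's MongoEngine library; the User model and its fields are hypothetical, and any comparable ODM in your language of choice would do:

    from mongoengine import Document, StringField, IntField, EmailField, connect

    # Connect to a local database (never one exposed to the open Internet)
    connect("myapp", host="localhost", port=27017)

    class User(Document):
        # Schema enforced at the application layer, despite MongoDB being schema-less
        name = StringField(required=True, max_length=100)
        email = EmailField(required=True, unique=True)
        age = IntField(min_value=0)

    User(name="Ada", email="ada@example.com", age=36).save()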
Premature sharding

Sharding is an optimization, so doing it too soon is usually a bad idea. Many times a single replica set is enough to run a fast, smooth MongoDB deployment that meets all of your needs. Most of the time, a bad schema and bad indexing are the real performance bottlenecks that users try to solve with sharding; in such cases, sharding might do more harm, because you end up with poorly tuned shards that don't perform well either. Sharding should be considered when a specific resource, like RAM or concurrency, becomes a performance bottleneck on a particular machine. As a general rule, if your database fits on a single server, sharding provides little benefit anyway. Most MongoDB setups work successfully without ever needing sharding.

Replicas as backup

Replicas are not backups. You need to have a proper backup system in place for your database, and you should not treat replicas as a backup mechanism. Consider what would happen if you deployed bad code that ruined the database: the replicas would simply follow the master and suffer the same damage. There are a variety of ways to back up and restore your MongoDB, be it filesystem snapshots, mongodump, or a third-party service like MMS. Having proper, timely fire drills is also very important: you should be confident that the backups you're making can actually be used in a real-life scenario. Practice restoring your backups before you actually need them, and verify that everything works as expected. A catastrophic failure in your production system should not be the first time you try to restore from backups (often only to find out you've been backing up corrupt data).

About the author

Charanjit Singh is a freelance JavaScript (React/Express) developer. Being an avid fan of functional programming, he's on his way to taking on Haskell/PureScript as his main professional languages.

Do you have the technical skills to command a $90,000 salary?

Packt Publishing
24 Jul 2015
2 min read
In the 2015 Skill Up survey, Packt talked to more than 20,000 people who work in IT globally to identify which skills are valued in technical roles and which trends are changing and emerging. Responses from the app development community provided us with great insight into how skills are rated across multiple industries, job roles, and experience levels. The world of app development is highly varied, and can be super competitive too, so we wanted to find out which industries are best for those just entering the market. We also discovered which technologies are proving most popular and where you can earn the best salaries.

We also had some very specific questions we wanted answered. How relevant are desktop developer skills? Which is the most popular platform for mobile development? Is functional programming the way of the future? What is the essential software choice for professional game development? Some of the results were surprising! Here's a taster of our findings.

If you are looking for your first role in app development, the Government sector pays well for those with less experience. But competition is fierce: under 5% of those working in this sector have less than 3 years' experience. Unsurprisingly, game developers reported the lowest average salaries across all industries; it's clear that game developers work for love, not money! But which type of developer earns the most, and which industry sector is the most lucrative? Experienced developers can find out who pays the most for their expertise and experience, and discover what developers are building and which technologies they are using.

And finally, what about the future? What is going to be the next big thing? Wearables, or Big Data perhaps? Is there a place for desktop in the mobile age? Read the rest of the report to see which skills you need to build on and which technologies are poised to take the app development world by storm, so you can get ahead of the competition! Click here to download the full report.