
Tech Guides

851 Articles
The future of Python: 3 experts' views

Richard Gall
27 Mar 2018
7 min read
Python is the fastest growing programming language on the planet. This year's Stack Overflow survey provides clear evidence that it is growing at an impressive rate. And it's not really that surprising: versatile, dynamic, and actually pretty easy to learn, it's a language accessible enough and powerful enough to solve problems in a range of fields, from statistics to building APIs. But what does the future hold for Python? How will it evolve to meet the needs of its growing community of engineers and analysts? Read the insights from 3 Python experts on what the future might hold for the language, taken from Python Interviews, a book that features 20 conversations with leading figures from the Python community.

In the future, Python will spawn other more specialized languages

Steve Holden (@HoldenWeb), CTO of Global Stress Index and former chairman and director of the Python Software Foundation:

I'm not really sure where the language is going. You hear loose talk of Python 4. To my mind, though, Python is now at the stage where it's complex enough. Python hasn't bloated in the same way that I think the Java environment has. At that maturity level, I think it's rather more likely that Python's ideas will spawn other, perhaps more specialized, languages aimed at particular areas of application. I see this as fundamentally healthy and I have no wish to make all programmers use Python for everything; language choices should be made on pragmatic grounds. I've never been much of a one for pushing for change. Enough smart people are thinking about that already. So mostly I lurk on Python-Dev and occasionally interject a view from the consumer side, when I think that things are becoming a little too esoteric.

The needs of the Python community are going to influence where the language goes in future

Carol Willing (@WillingCarol), former director of the Python Software Foundation, core developer of CPython, and Research Software Engineer at Project Jupyter:
I think we're going to continue to see growth in the scientific programming part of Python. So things that support the performance of Python as a language and async stability are going to continue to evolve. Beyond that, I think that Python is a pretty powerful and solid language. Even if you stopped development today, Python is a darn good language. I think that the needs of the Python community are going to feed back into Python and influence where the language goes. It's great that we have more representation from different groups within the core development team. Smarter minds than mine could provide a better answer to your question. I'm sure that Guido has some things in mind for where he wants to see Python go. Mobile development has been an Achilles' heel for Python for a long time. I'm hoping that some of the BeeWare stuff is going to help with the cross-compilation. A better story in mobile is definitely needed. But you know, if there's a need then Python will get there. I think that the language is going to continue to move towards the stuff that's in Python 3. Some big code bases, like Instagram, have now transitioned from Python 2 to 3. While there is much Python 2.7 code still in production, great strides have been made by Instagram, as they shared in their PyCon 2017 keynote. There's more tooling around Python 3 and more testing tools, so it's less risky for companies to move some of their legacy code to Python 3, where it makes business sense to. It will vary by company, but at some point, business needs, such as security and maintainability, will start driving greater migration to Python 3. If you're starting a new project, then Python 3 is the best choice. New projects, especially when looking at microservices and AI, will further drive people to Python 3. 
Organizations that are building very large Python codebases are adopting type annotations to help new developers

Barry Warsaw (@pumpichank), member of the Python Foundation team at LinkedIn, former project leader of GNU Mailman:

In some ways it's hard to predict where Python is going. I've been involved in Python for 23 years, and there was no way I could have predicted in 1994 what the computing world was going to look like today. I look at phones, IoT (Internet of things) devices, and just the whole landscape of what computing looks like today, with the cloud and containers. It's just amazing to look around and see all of that stuff. So there's no real way to predict what Python is going to look like even five years from now, and certainly not ten or fifteen years from now. I do think Python's future is still very bright, but I think Python, and especially CPython, which is the implementation of Python in C, has challenges. Any language that's been around for that long is going to have some challenges. Python was invented to solve problems in the 90s and the computing world is different now and is going to become different still. I think the challenges for Python include things like performance and multi-core or multi-threading applications. There are definitely people who are working on that stuff and other implementations of Python may spring up like PyPy, Jython, or IronPython. Aside from the challenges that the various implementations have, one thing that Python has as a language, and I think this is its real strength, is that it scales along with the human scale. For example, you can have one person write up some scripts on their laptop to solve a particular problem that they have. Python's great for that. Python also scales to, let's say, a small open source project with maybe 10 or 15 people contributing. Python scales to hundreds of people working on a fairly large project, or thousands of people working on massive software projects.
Another amazing strength of Python as a language is that new developers can come in and learn it easily and be productive very quickly. They can pull down a completely new Python source code for a project that they've never seen before and dive in and learn it very easily and quickly. There are some challenges as Python scales on the human scale, but I feel like those are being solved by things like the type annotations, for example. On very large Python projects, where you have a mix of junior and senior developers, it can be a lot of effort for junior developers to understand how to use an existing library or application, because they're coming from a more statically-typed language. So a lot of organizations that are building very large Python codebases are adopting type annotations, maybe not so much to help with the performance of the applications, but to help with the onboarding of new developers. I think that's going a long way in helping Python to continue to scale on a human scale. To me, the language's scaling capacity and the welcoming nature of the Python community are the two things that make Python still compelling even after 23 years, and will continue to make Python compelling in the future. I think if we address some of those technical limitations, which are completely doable, then we're really setting Python up for another 20 years of success and growth.
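Warsaw's point about annotations helping newcomers is easy to see in a small sketch. The names below are purely illustrative, not from any real codebase: the annotated signatures tell a developer arriving from a statically-typed language what each function accepts and returns before they read a single line of its body.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Article:
    """A hypothetical domain object; the annotations double as documentation."""
    title: str
    author: str
    read_minutes: int


def find_by_author(articles: list[Article], author: str) -> Optional[Article]:
    """Return the first article by the given author, or None if there is none."""
    for article in articles:
        if article.author == author:
            return article
    return None
```

A newcomer can also run a checker such as mypy over code like this to catch type mismatches before ever executing it, which is exactly the onboarding benefit Warsaw describes.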

Why Oracle is losing the Database Race

Aaron Lazar
06 Apr 2018
3 min read
When you think of databases, the first thing that comes to mind is Oracle or IBM. Oracle has ruled the database world for decades, and it has been able to acquire tonnes of applications that use its databases. However, that's changing now, and if you didn't know already, you might be surprised to learn that Oracle is losing the database race.

Oracle = Goliath

Oracle was and still is ranked number one among databases, owing to its legacy in the database ballpark (source: DB-Engines rankings). The main reason Oracle has managed to hold its position is lock-in, a CIO's worst nightmare. Migrating data that's accumulated over the years is not a walk in the park and usually has top management flinching every time it's mentioned. Another reason is that Oracle is known to be aggressive when it comes to maintaining and enforcing licensing terms. You won't be surprised to find Oracle 'agents' at the doorstep of your organisation, slapping you with a big fine for non-compliance!

Oracle != Goliath for everyone

You might wonder whether even the biggies are in the same position, locked in with Oracle. Well, the Amazons and Salesforces of the world have quietly moved away from lock-in hell and have their applications now running on open-source projects. In fact, Salesforce plans to be completely free of Oracle databases by 2023 and has even codenamed this project "Sayonara". I wonder what inspired the name!

Enter the "Davids" of Databases

While Oracle's databases have been declining, alternatives like SQL Server and PostgreSQL have been steadily growing. SQL Server has been growing in leaps and bounds, at a rate of over 30%. Amazon's and Microsoft's cloud-based databases have seen close to 10x growth. While one might think that all cloud solutions would have dominated the database world, databases like Google Cloud SQL and IBM Cognos have seen very slow to no growth, as the question of lock-in arises again, only this time with a cloud vendor.
MongoDB has been another shining star in the database race. Several large organisations, like HSBC, Adobe, eBay, Forbes and MTV, have adopted MongoDB as their database solution. Newer organisations have been adopting these databases instead of looking to Oracle. However, it's not really eating into Oracle's existing market, at least not yet.

Is 18c Oracle's silver bullet?

Oracle bragged a lot about 18c last year, positioning it as a database that needs little to no human interference thanks to its ground-breaking machine learning, one that operates at less than 30 minutes of downtime a year, and many more features. Does this make Microsoft and Amazon break into a sweat? Hell no! Although Oracle has strategically positioned 18c as a database that lowers operational cost by cutting down on the human element, it is still quite expensive compared to its competitors: they haven't dropped their price one bit. Moreover, it can't really automate "everything" and there's always a need for a human administrator, which is not really convincing. Quite naturally, customers will be drawn towards the competition.

In the end, the way I look at it, Oracle already had a head start and is now inches from the elusive finish line, probably sniggering away at all the customers it has on a leash. All while cloud databases are slowly catching up and will soon be leaving Oracle in a heap of dirt. Reminds me of that fable mum used to read to me... what's it called... The Hare and the Tortoise.

What are generative adversarial networks (GANs) and how do they work? [Video]

Richard Gall
11 Sep 2018
3 min read
Generative adversarial networks, or GANs, are a powerful type of neural network used for unsupervised machine learning. Made up of two models that run in competition with one another, GANs are able to capture and copy variations within a dataset. They're great for image manipulation and generation, but they can also be deployed for tasks like understanding risk and recovery in healthcare and pharmacology.

GANs are actually pretty new: they were first introduced by Ian Goodfellow in 2014. Goodfellow developed them to tackle some of the issues with similar neural networks, including the Boltzmann machine and autoencoders. Both the Boltzmann machine and autoencoders rely on Markov chains, which carry a pretty high computational cost; GANs avoid that cost, and the resulting efficiency gives engineers significant gains, which you need if you're working at the cutting edge of artificial intelligence.

How do generative adversarial networks work?

Let's start with a simple analogy. You have a painting, say the Mona Lisa, and a master forger who wants to create a duplicate. The forger does this by learning how the original painter, Leonardo da Vinci, produced the painting. Meanwhile, you have an investigator trying to catch the forger and 'second guess' the rules the forger is learning. To map this onto the architecture of a GAN, the forger is the generator network, which learns the distribution of classes, while the investigator is the discriminator network, which learns the boundaries between those classes, the formal 'shape' of the dataset.

Applications of GANs

Generative adversarial networks are used for a number of different applications. One of the best examples is a Google Brain project back in 2016, in which researchers used GANs to develop a method of encryption. This project used three neural networks: Alice, Bob, and Eve. Alice's job was to send an encrypted message to Bob. Bob's job was to decode that message, while Eve's job was to intercept it.
To begin with, Alice's messages were easily intercepted by Eve. However, thanks to Eve's adversarial work, Alice began to develop its own encryption strategy: it took 15,000 runs for Alice to successfully encrypt a message that Bob could decipher but Eve couldn't. Elsewhere, GANs are also being used in fields such as drug research. The neural networks can be trained on existing drugs and suggest new synthetic chemical structures that improve on them.

Generative adversarial networks: the cutting edge of artificial intelligence

As we've seen, GANs offer some really exciting opportunities in artificial intelligence. There are two key advantages to remember: GANs solve the problem of generating data when you don't have enough to begin with, and they require no human supervision. This is crucial when you think about the cutting edge of artificial intelligence, both in terms of the efficiency of running the models and the real-world data we want to use, which could be poor quality or have privacy and confidentiality issues, as much healthcare data does.
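The forger-and-investigator dynamic can be sketched end to end in a few dozen lines. The toy below is a minimal illustration of my own, not code from any of the projects mentioned: a tiny generator, fake = a + b*z, tries to mimic samples drawn from a normal distribution centred on 3.0, while a logistic discriminator, D(x) = sigmoid(w*x + c), learns to tell real samples from fakes. Both players' gradients are worked out by hand, and the discriminator is given a faster learning rate so it can track the generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The "forger": fake = a + b * z, starting far from the target N(3, 1).
a, b = 0.0, 1.0
# The "investigator": D(x) = sigmoid(w * x + c), a logistic classifier.
w, c = 0.0, 0.0
lr_d, lr_g, batch = 0.1, 0.02, 128

for step in range(3000):
    real = rng.normal(3.0, 1.0, batch)   # genuine samples
    z = rng.normal(0.0, 1.0, batch)      # the forger's raw material
    fake = a + b * z

    # Discriminator ascent on: mean log D(real) + mean log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on: mean log D(fake) (the non-saturating loss).
    d_fake = sigmoid(w * fake + c)
    signal = (1 - d_fake) * w
    a += lr_g * np.mean(signal)
    b += lr_g * np.mean(signal * z)

print(f"learned generator offset a = {a:.2f} (target mean 3.0)")
```

As training proceeds, the generator's offset drifts towards the real mean of 3.0: the forger improves only because the investigator keeps pushing back, which is the adversarial loop in miniature.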

How should web developers learn machine learning?

Chris Tava
12 Jun 2017
6 min read
Do you have the motivation to learn machine learning? Given its relevance in today's landscape, you should. But if you're a web developer, how do you go about learning it? In this article, I show you how. So, let's break this down.

What is machine learning?

You may be wondering why machine learning matters to you, or how you would even go about learning it. Machine learning is a smart way to create software that finds patterns in data without having to explicitly program for each condition. Sounds too good to be true? Well, it is. Quite frankly, many of the state-of-the-art solutions to the toughest machine learning problems don't even come close to reaching 100 percent accuracy and precision. This might not sound right to you if you've been trained, or have learned, to be precise and deterministic with the solutions you provide to the web applications you've worked on. In fact, machine learning is such a challenging problem domain that data scientists describe problems as tractable or not. Computer algorithms can solve tractable problems in a reasonable amount of time with a reasonable amount of resources, whereas intractable problems simply can't be solved. Decades more of R&D is needed, at a deep theoretical level, to bring forward approaches and frameworks that will then take years to be applied and become useful to society. Did I scare you off? Nope? Okay, great. Then you accept this challenge to learn machine learning.

But before we dive into how to learn machine learning, let's answer the question: why does learning machine learning matter to you? Well, you're a technologist, and as a result it's your duty, your obligation, to be on the cutting edge. The technology world is moving at a fast clip and it's accelerating. Take, for example, the shortened duration between public accomplishments of machine learning systems versus top gaming experts. It took a while to get to Watson's 2011 Jeopardy! victory, and far less time between AlphaGo and Libratus. So what's the significance to you and your professional software engineering career? Elementary, my dear Watson: just like the so-called digital divide between non-technical and technical lay people, there is already the start of a technological divide between top systems engineers and the rest of the playing field in terms of making an impact and disrupting the way the world works. Don't believe me? When's the last time you programmed a self-driving car or a neural network that can guess your drawings?

Making an impact and how to learn machine learning

The toughest part about getting started with machine learning is figuring out what type of problem you have at hand, because you run the risk of jumping to potential solutions too quickly, before understanding the problem. Sure, you can say this of any software design task, but this point can't be stressed enough when thinking about how to get machines to recognize patterns in data. There are specific applications of machine learning algorithms that solve a very specific problem in a very specific way, and it's difficult to know how to solve a meta-problem if you haven't studied the field from a conceptual standpoint. For me, a breakthrough in learning machine learning came from taking Andrew Ng's machine learning course on Coursera. So taking online courses can be a good way to start learning. If you don't have the time, you can learn about machine learning through numbers and images. Let's take a look.

Numbers

Conceptually speaking, predicting a pattern in a single variable based on a direct (otherwise known as linear) relationship with another piece of data is probably the easiest machine learning problem and solution to understand and implement. The following script predicts the amount of data that will be created based on fitting a sample data set to a linear regression model: https://github.com/ctava/linearregression-go.
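The linked script is written in Go, but the same linear fit is just a few lines of Python. The yearly figures below are hypothetical stand-ins (only the 2017 value, 4401, comes from this article), so the extrapolated 2018 number will differ from the one quoted below.

```python
# Ordinary least squares fit of y = m*x + c, then extrapolate one year ahead.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    c = mean_y - m * mean_x
    return m, c

years = [2013, 2014, 2015, 2016, 2017]          # hypothetical
data_created = [1200, 1950, 2700, 3600, 4401]   # hypothetical, ending at Bob's 2017 figure

m, c = fit_line(years, data_created)
prediction_2018 = m * 2018 + c
print(f"predicted data created in 2018: {round(prediction_2018)}")
```

Because the slope is estimated from the whole history rather than the last jump alone, the extrapolation smooths out year-to-year noise, which is the point of fitting a line instead of just extending the final increment.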
Because the sample data fits a linear model reasonably well, the machine learning program predicted that the data created in the fictitious Bob's system will grow from 2017 to 2018.

Bob's Data 2017: 4401
Bob's Data 2018 Prediction: 5707

This is great news for Bob, and for you. You see, machine learning isn't so tough after all. I'd like to encourage you to save data for a single variable (also known as a feature) to a CSV file and see if you can find that the data has a linear relationship with time. The following website is handy for calculating the number of days between two dates: https://www.timeanddate.com/date/duration.html. Be sure to choose your starting day and year appropriately at the top of the file to fit your data.

Images

Machine learning on images is exciting! It's fun to see what the computer comes up with in terms of pattern recognition, or image recognition. Here's an example using computer vision to detect that Grumpy Cat is actually a Persian cat: https://github.com/ctava/tensorflow-go-imagerecognition. If setting up TensorFlow from source isn't your thing, not to worry. Here's a Docker image to start off with: https://github.com/ctava/tensorflow-go. Once you've followed the instructions in the readme.md file, simply:

Get github.com/ctava/tensorflow-go-imagerecognition
Run main.go -dir=./ -image=./grumpycat.jpg
Result: BEST MATCH: (66% likely) Persian cat

Sure, there is a whole discussion on this topic alone in terms of what TensorFlow is, what a tensor is, and what image recognition is. But I just wanted to spark your interest so that maybe you'll start to look at the amazing advances in the computer vision field. Hopefully this has motivated you to learn more about machine learning, based on reading about the recent advances in the field and seeing two simple examples of predicting numbers and classifying images. I'd like to encourage you to keep up with data science in general.
About the Author  Chris Tava is a Software Engineering / Product Leader with 20 years of experience delivering applications for B2C and B2B businesses. His specialties include: program strategy, product and project management, agile software engineering, resource management, recruiting, people development, business analysis, machine learning, ObjC / Swift, Golang, Python, Android, Java, and JavaScript.

What is the future of on-demand e-commerce apps?

Guest Contributor
18 Jun 2019
6 min read
On-demand apps almost came as a movement in the digital world and transformed the way we avail of services and ready-to-use business deliverables. E-commerce stores like Amazon and eBay were the first on-demand apps, and over time the business model penetrated other niches. Now, from booking a taxi ride online to ordering food delivery to booking accommodation in a distant city, on-demand apps are making space for every different customer interaction. As these on-demand apps gradually build the foundation for a fully-fledged on-demand economy, the future of e-commerce will depend on how new and cutting-edge features are introduced and how the user experience can be boosted with new UI and UX elements. But before taking a look into the future of on-demand e-commerce, it is essential to understand the evolution of on-demand apps in recent years. Let us have a brief look at the various facets of this ongoing evolution.

Mobile push for change: Mobile search has already surpassed desktop search in both volume and frequency. Moreover, mobile has become a lifestyle factor allowing instant access to services and content. It is a mobile device's round-the-clock connectivity and ease of keeping in constant touch that has made it key to the thriving on-demand economy.

Overwhelming social media penetration: The penetration of social media across all spheres of life has helped people stay connected while communicating about almost anything and everything, giving businesses a never-before opportunity to cater to customer demands.

Addressing value as well as convenience: With the proliferation of on-demand apps, we can see two broad categories of consumers: the value-oriented and the convenience-oriented. Besides giving priority to more value at a lower cost, on-demand apps now facilitate more convenient and timely delivery of products.
Frictionless business process: Allowing easy and smooth purchases with the least friction in the business process has become a key consumer demand. A frictionless, smooth customer experience and delivery are the two most important criteria that on-demand apps fulfill.

How to cater to customers with on-demand e-commerce apps

If, as a business, you want to cater to your customers with on-demand apps, there are several ways to do it. When providing customers more value is your priority, you can only ensure this with an easier, connected and smooth e-shopping experience. Here are 4 specific ways you can cater to your customers with on-demand e-commerce apps. By trying and testing various services, you can easily get a first-hand feel of how these services work. Next, evaluate what the services do best and what they don't. Now, think about how you can deliver a better service for your customers. To transform your existing business into an on-demand business, you can also partner with a service provider who can ensure same-day delivery of your products to customers. You can partner with services like Google Express, Instacart, Amazon, PostMates, Uber Rush, etc. You can also utilize the BOPUS (buy online, pick up in store) model to cater to the many customers who find it helpful. Always make sure to minimize the time and trouble it takes customers to pick up products from your store. Providing on-site installation of the product can also boost the customer experience. You can partner with a service provider to install the product and guide customers on its usage.

How on-demand apps are transforming the face of business

The on-demand economy is experiencing a never-before boom, and there are many examples of how it has transformed businesses. The emergence of Uber and Airbnb is an excellent example of how on-demand apps deliver popular services for daily needs.
Just as Uber transformed the way we think of transport, Airbnb transformed the way we conceive of booking accommodation and hotels when travelling. Similarly, apps like Swiggy, Just Eat and Uber Eats are continuing to change the way we order food from restaurants and food chains. The same business model is slowly penetrating other niches and products. From daily consumable goods to groceries, now almost everything is being delivered through on-demand apps to our doorstep. Thanks to customer-centric UI and UX elements in mobile apps, and an increasing number of businesses paving the way for unique and innovative shop fronts, personalization has become one of the biggest driving factors for on-demand mobile apps. Consumers have also got a taste of the personalized shopping experience, and they increasingly demand products, services and shopping experiences that suit their specific needs and preferences. This is one area where on-demand apps within the same niche compete to deliver a better customer experience and win more business.

The future of on-demand e-commerce apps

The future of on-demand e-commerce apps will mainly revolve around new concepts and breakthrough ideas for providing customers more ease and convenience. From gesture-based checkout and payment processing to product search through images to video chat, a lot of breakthrough features will shape the future of on-demand e-commerce apps.

Conversational marketing

Unlike the conventional marketing channels that follow a one-way directive, in the new era of on-demand e-commerce apps conversational marketing will play a bigger role. From intelligent chatbots to real-time video chat, we have a lot of avenues for conversational marketing methods.

Image-based product search

By integrating image search technology with e-commerce interfaces, customers can be given an easy and effortless way of searching for products online.
They can take photos of nearby objects and search for those items across e-commerce stores.

Real-time shopping apps

What about getting access to products just when and where you need them? Such ease of shopping in real time may not be a distant thing of the future, thanks to real-time shopping apps. Just when you need a particular product, you can shop for it then and there, and based upon availability the order can be accepted and delivered from the nearest store in time.

Gesture-based login

Biometrics is already part and parcel of the smart user experience, and gestures are used in the latest mobile handsets for login and authentication. So the day is not far off when gestures will be used for customer login and authentication in the e-commerce store. This will make the entire shopping experience easier, effortless and less time-consuming.

Conclusion

The future of on-demand e-commerce apps is bright. In the years to come, on-demand apps are going to be more mainstream and commonplace, transforming the business process and the way customers are served by retailers across niches.

Author Bio

Atman Rathod is the co-founder at CMARIX TechnoLabs Pvt. Ltd., with 13+ years of experience. He loves to write about technology, startups, entrepreneurship and business. His creative abilities, academic track record and leadership skills have made him a key industry influencer as well. You can find him on LinkedIn, Twitter, and Medium.

5 Ways Artificial Intelligence is Transforming the Gaming Industry

Amey Varangaonkar
01 Dec 2017
7 min read
Imagine yourself playing a strategy game, like Age of Empires perhaps. You are in a world that looks real, you are pitted against the computer, and your mission is to protect your empire and defeat the computer at the same time. What if you could create an army of soldiers who could explore the map and attack enemies on their own, based on just a simple command you give them? What if your soldiers could have real, unscripted conversations with you, as their commander-in-chief, to seek instructions? And what if the game's scenes changed spontaneously based on your decisions and interactions with the game elements, like a movie? Sounds too good to be true? It's not far-fetched at all, thanks to the rise of artificial intelligence!

The gaming industry today is a market worth over a hundred billion dollars. The Global Games Market Report says that about 2.2 billion gamers across the world are expected to generate an incredible $108.9 billion in game revenue by the end of 2017. As such, gaming industry giants are seeking newer and more innovative ways to attract more customers and expand their brands. While terms like virtual reality, augmented reality and mixed reality come to mind immediately as the future of games, the rise of artificial intelligence is an equally important stepping stone in making games smarter and more interactive, and as close to reality as possible. In this article, we look at 5 ways AI is revolutionizing the gaming industry, in a big way!

Making games smarter

While scripting is still commonly used to control NPCs (non-playable characters) in many games today, many heuristic algorithms and game AIs are also being incorporated to control these NPCs. Not just that: the characters also learn from the actions taken by the player and modify their behaviour accordingly. This concept can be seen implemented in Nintendogs, a real-time pet simulation video game by Nintendo.
The ultimate aim of game creators in the future will be to design robust systems within games that understand speech, noise and other sounds within the game and tweak the game scenario accordingly. This will also require modern AI techniques such as pattern recognition and reinforcement learning, where the characters within the games will self-learn from their own actions and evolve accordingly. The game industry has identified this, and some have started implementing these ideas; games like F.E.A.R. and The Sims are a testament to this. Although the adoption of popular AI techniques in gaming is still quite limited, their possible applications in the near future have the entire gaming industry buzzing.

Making games more realistic

This is one area where the game industry has grown leaps and bounds over the last 10 years. There have been incredible advancements in 3D visualization techniques, physics-based simulations and, more recently, the inclusion of virtual reality and augmented reality in games. These tools have empowered game developers to create interactive, visually appealing games that one could never have imagined a decade ago. Meanwhile, gamers have evolved too. They don't just want good graphics anymore; they want games to resemble reality. This is a massive challenge for game developers, and AI is playing a huge role in addressing it. Imagine a game that can interpret and respond to your in-game actions, anticipate your next move and act accordingly. Not the usual scripts, where an action X gives a response Y, but an AI program that chooses the best possible response to your action in real time, making the game more realistic and enjoyable for you.

Improving the overall gaming experience

Let's take a real-world example here. If you've played EA Sports' FIFA 17, you may be well versed with its Ultimate Team mode.
For the uninitiated, it's more of a fantasy draft, where you can pick one of the five player choices given to you for each position in your team, and the AI automatically determines the team chemistry based on your choices. The team chemistry here is important, because the higher the team chemistry, the better the chances of your team playing well. The in-game AI also makes the playing experience better by making it more interactive. Suppose you're losing a match against an opponent: the AI reacts by boosting your team's morale through increased fan chants, which in turn affects player performances positively.

Gamers these days pay a lot of attention to detail. This includes not only the visual appearance and the high-end graphics, but also how immersive and interactive the game is, in all possible ways. Through real-time customization of scenarios, AI has the capability to play a crucial role in taking the gaming experience to the next level.

Transforming developer skills

The game developer community has always innovated in adopting cutting-edge technology to hone its technical skills and creativity. Reinforcement learning, a subset of machine learning and the technique behind AlphaGo, the AI program that beat the world's best human Go player, is a case in point. Even for traditional game developers, the rising adoption of AI in games will mean a change in the way games are developed. In an interview with Gamasutra, AiGameDev.com's Alex Champandard says something interesting:

"Game design that hinges on more advanced AI techniques is slowly but surely becoming more commonplace. Developers are more willing to let go and embrace more complex systems."

It's safe to say that the notion of game AI is changing drastically. Concepts such as smarter function-based movements, pathfinding, the inclusion of genetic algorithms, and rule-based AI such as fuzzy logic are being increasingly incorporated in games, although not yet at a very large scale.
There are currently some implementation challenges as to how academic AI techniques can be brought further into games, but with time these AI algorithms and techniques are expected to blend more seamlessly with traditional game development skills. As such, in addition to knowledge of traditional game development tools and techniques, game developers will now also have to skill up on these AI techniques to make smarter, more realistic, and more interactive games.

Making smarter mobile games

The rise of the mobile game industry today is evident from the fact that close to 50% of game revenue in 2017 will come from mobile games, be it on smartphones or tablets. The increasingly high processing power of these devices has allowed developers to create more interactive and immersive mobile games. However, it is important to note that the processing power of mobile devices is yet to catch up with their desktop counterparts, not to mention gaming consoles, which are beyond comparison at this stage. To tackle this issue, mobile game developers are experimenting with different machine learning and AI algorithms to impart 'smartness' to mobile games while still adhering to the processing power limits. Compare today's mobile games to the ones from 5 years back, and you'll notice a tremendous shift in the visual appearance of the games and in how interactive they have become. New machine learning and deep learning frameworks and libraries are being developed to cater specifically to the mobile platform. Google's TensorFlow Lite and Facebook's Caffe2 are instances of such development. Soon, these tools will come to developers' rescue to build smarter and more interactive mobile games.

In Conclusion

Gone are the days when games were just about entertainment and passing time. The gaming industry is now one of the most profitable industries of today. As it continues to grow, the demands of the gaming community and the games themselves keep evolving.
The need for realism in games is higher than ever, and AI has an important role to play in making games more interactive, immersive, and intelligent. With the rate at which new AI techniques and algorithms are developing, it's an exciting time for game developers to showcase their full potential. Are you ready to start building AI for your own games? Here are some books to help you get started:

Practical Game AI Programming
Learning Game AI Programming with Lua
Predictive Analytics with AWS: A quick look at Amazon ML

Natasha Mathur
09 Aug 2018
9 min read
As artificial intelligence and big data have become a ubiquitous part of our everyday lives, cloud-based machine learning services are part of a rising billion-dollar industry. Among the several services currently available in the market, Amazon Machine Learning stands out for its simplicity. In this article, we will look at Amazon Machine Learning, MLaaS, and other related concepts. This article is an excerpt taken from the book 'Effective Amazon Machine Learning' written by Alexis Perrier.

Machine Learning as a Service

Amazon Machine Learning is an online service by Amazon Web Services (AWS) that does supervised learning for predictive analytics. Launched in April 2015 at the AWS Summit, Amazon ML joins a growing list of cloud-based machine learning services, such as Microsoft Azure, Google Prediction, IBM Watson, PredictionIO, BigML, and many others. These online machine learning services form an offer commonly referred to as Machine Learning as a Service (MLaaS), following a similar denomination pattern to other cloud-based services such as SaaS, PaaS, and IaaS, respectively Software, Platform, and Infrastructure as a Service.

Studies show that MLaaS is a potentially big business trend. ABI Research, a business intelligence consultancy, estimates that machine learning-based data analytics tools and services revenues will hit nearly $20 billion in 2021 as MLaaS services take off, as outlined in its business report. Eugenio Pasqua, a Research Analyst at ABI Research, said the following:

"The emergence of the Machine-Learning-as-a-Service (MLaaS) model is good news for the market, as it cuts down the complexity and time required to implement machine learning and thus opens the doors to an increase in its adoption level, especially in the small-to-medium business sector."

The increased accessibility is a direct result of using an API-based infrastructure to build machine learning models instead of developing applications from scratch.
Offering efficient predictive analytics models without the need to code, host, and maintain complex code bases lowers the bar and makes ML available to smaller businesses and institutions. Amazon ML takes this democratization approach further than the other actors in the field by significantly simplifying the predictive analytics process and its implementation. This simplification revolves around four design decisions that are embedded in the platform:

A limited set of tasks: binary classification, multiclass classification, and regression
A single linear algorithm
A limited choice of metrics to assess the quality of the prediction
A simple set of tuning parameters for the underlying predictive algorithm

That somewhat constrained environment is simple enough while addressing most predictive analytics problems relevant to business. It can be leveraged across an array of different industries and use cases. Let's see how!

Leveraging full AWS integration

The AWS data ecosystem of pipelines, storage, environments, and Artificial Intelligence (AI) is also a strong argument in favor of choosing Amazon ML as a business platform for predictive analytics needs. Although Amazon ML is simple, the service evolves to greater complexity and more powerful features once it is integrated into a larger structure of AWS data-related services. AWS is already a major factor in cloud computing. Here's what an excerpt from The Economist, August 2016, has to say about AWS (http://www.economist.com/news/business/21705849-how-open-source-software-and-cloud-computing-have-set-up-it-industry):

"AWS shows no sign of slowing its progress towards full dominance of cloud computing's wide skies. It has ten times as much computing capacity as the next 14 cloud providers combined, according to Gartner, a consulting firm. AWS's sales in the past quarter were about three times the size of its closest competitor, Microsoft's Azure."
This gives an edge to Amazon ML, as many companies that are using cloud services are likely to be using AWS already. Adding simple and efficient machine learning tools to the product offering mix anticipates the rise of predictive analytics features as a standard component of web services. Seamless integration with other AWS services is a strong argument in favor of using Amazon ML despite its apparent simplicity. The following architecture is a case study taken from an AWS January 2016 white paper titled Big Data Analytics Options on AWS (http://d0.awsstatic.com/whitepapers/Big_Data_Analytics_Options_on_AWS.pdf), showing a potential AWS architecture for sentiment analysis on social media. It shows how Amazon ML can be part of a more complex architecture of AWS services.

Comparing performances in Amazon ML services

Keeping systems and applications simple is always difficult, but often worth it for the business. Examples abound of overloaded UIs bringing down the user experience, while products with simple, elegant interfaces and minimal features enjoy widespread popularity. The Keep It Simple mantra is even more difficult to adhere to in a context such as predictive analytics, where performance is key. This is the challenge Amazon took on with its Amazon ML service.

A typical predictive analytics project is a sequence of complex operations: getting the data, cleaning the data, selecting, optimizing, and validating a model, and finally making predictions. In the scripting approach, data scientists develop codebases using machine learning libraries such as the Python scikit-learn library or R packages to handle all these steps, from data gathering to predictions in production. Just as a developer breaks down the necessary steps into modules for maintainability and testability, Amazon ML breaks down a predictive analytics project into different entities: datasource, model, evaluation, and predictions.
It's the simplicity of each of these steps that makes AWS so powerful for implementing successful predictive analytics projects.

Engineering data versus model variety

Having a large choice of algorithms for your predictions is always a good thing, but at the end of the day, domain knowledge and the ability to extract meaningful features from clean data are often what win the game. Kaggle is a well-known platform for predictive analytics competitions, where the best data scientists across the world compete to make predictions on complex datasets. In these predictive competitions, gaining a few decimals on your prediction score is what makes the difference between earning the prize and being just an extra line on the public leaderboard among thousands of other competitors. One thing Kagglers quickly learn is that choosing and tuning the model is only half the battle. Feature extraction, or how to extract relevant predictors from the dataset, is often the key to winning the competition.

In real life, when working on business-related problems, the quality of the data processing phase and the ability to extract meaningful signal out of raw data are the most important and time-consuming parts of building an effective predictive model. It is well known that "data preparation accounts for about 80% of the work of data scientists" (http://www.forbes.com/sites/gilpress/2016/03/23/data-preparation-most-time-consuming-least-enjoyable-data-science-task-survey-says/). Model selection and algorithm optimization remain an important part of the work, but are often not the deciding factor where the implementation is concerned. A solid and robust implementation that is easy to maintain and connects to your ecosystem seamlessly is often preferred to an overly complex model developed and coded in-house, especially when the scripted model only produces small gains compared to a service-based implementation.
Amazon's expertise and the gradient descent algorithm

Amazon has been using machine learning for the retail side of its business and has built serious expertise in predictive analytics. This expertise translates into the choice of algorithm powering the Amazon ML service. The Stochastic Gradient Descent (SGD) algorithm is the algorithm powering Amazon ML linear models and is ultimately responsible for the accuracy of the predictions generated by the service. The SGD algorithm is one of the most robust, resilient, and optimized algorithms. It has been used in many diverse environments, from signal processing to deep learning, and for a wide variety of problems since the 1960s, with great success. The SGD has also given rise to many highly efficient variants adapted to a wide variety of data contexts. We will come back to this important algorithm in a later chapter; suffice it to say at this point that the SGD algorithm is the Swiss army knife of all possible predictive analytics algorithms.

Several benchmarks and tests of the Amazon ML service can be found across the web (Amazon, Google, and Azure: https://blog.onliquid.com/machine-learning-services-2/ and Amazon versus scikit-learn: http://lenguyenthedat.com/minimal-data-science-2-avazu/). Overall results show that the Amazon ML performance is on a par not only with other MLaaS platforms, but also with scripted solutions based on popular machine learning libraries such as scikit-learn. For a given problem in a specific context, with an available dataset and a particular choice of scoring metric, it is probably possible to code a predictive model using an adequate library and obtain better performance than that obtained with Amazon ML. But what Amazon ML offers is stability, an absence of coding, and a very solid benchmark record, as well as seamless integration with the Amazon Web Services ecosystem that already powers a large portion of the Internet.
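The core idea of SGD is simple enough to sketch in a few lines of plain Python. This toy example (the data and learning rate are invented for illustration, and Amazon ML's actual implementation is far more sophisticated) fits a one-weight linear model y = w * x by updating the weight one sample at a time:

```python
import random

# Toy data following y = 2x -- invented for illustration.
data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5]]

w = 0.0              # single weight of the linear model y_hat = w * x
learning_rate = 0.05

random.seed(0)
for epoch in range(200):
    random.shuffle(data)                 # "stochastic": visit samples in random order
    for x, y in data:
        error = w * x - y                # prediction error for this one sample
        w -= learning_rate * error * x   # gradient step on the squared error

print(round(w, 3))  # converges close to 2.0
```

Processing one sample at a time is what lets SGD scale to datasets far too large to fit in memory, which is one reason it suits a managed cloud service.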
Amazon ML service pricing strategy

As with other MLaaS providers and AWS services, Amazon ML only charges for what you consume. The cost is broken down into the following:

An hourly rate for the computing time used to build predictive models
A prediction fee per thousand prediction samples
In the context of real-time (streaming) predictions, a fee based on the memory allocated upfront for the model

The computational time increases as a function of the following:

The complexity of the model
The size of the input data
The number of attributes
The number and types of transformations applied

At the time of writing, these charges are as follows:

$0.42 per hour for data analysis and model building fees
$0.10 per 1,000 predictions for batch predictions
$0.0001 per prediction for real-time predictions
$0.001 per hour for each 10 MB of memory provisioned for your model

These prices do not include fees related to data storage (S3, Redshift, or RDS), which are charged separately. During the creation of your model, Amazon ML gives you a cost estimation based on the data source that has been selected. The Amazon ML service is not part of the AWS free tier, a 12-month offer applicable to certain AWS services for free under certain conditions.

To summarize, we presented a simple introduction to the Amazon ML service. Amazon ML is built on solid ground, with a simple yet very efficient algorithm driving its predictions. If you found this post useful, be sure to check out the book 'Effective Amazon Machine Learning' to learn about predictive analytics and other concepts in AWS machine learning.

Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers
AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer
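The pricing rules quoted in the article (the rates at the time of the book's writing, which may have changed since) can be turned into a quick back-of-the-envelope estimator:

```python
# Amazon ML rates as quoted in the article (at the time of the book's writing).
BUILD_RATE_PER_HOUR = 0.42
BATCH_RATE_PER_1000 = 0.10
REALTIME_RATE_PER_PREDICTION = 0.0001
MEMORY_RATE_PER_10MB_HOUR = 0.001

def estimate_cost(build_hours, batch_predictions=0,
                  realtime_predictions=0, memory_mb=0, memory_hours=0):
    """Rough estimate; excludes S3/Redshift/RDS storage fees, which are billed separately."""
    cost = build_hours * BUILD_RATE_PER_HOUR
    cost += (batch_predictions / 1000) * BATCH_RATE_PER_1000
    cost += realtime_predictions * REALTIME_RATE_PER_PREDICTION
    cost += (memory_mb / 10) * MEMORY_RATE_PER_10MB_HOUR * memory_hours
    return round(cost, 2)

# Example: 3 hours of model building plus 100,000 batch predictions.
print(estimate_cost(build_hours=3, batch_predictions=100_000))  # 11.26
```

Amazon ML shows a similar estimate in the console when you create a model, but a calculation like this is handy for budgeting before you start.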
How to choose components to build a basic robot

Prasad Ramesh
31 Dec 2018
10 min read
This post will show you how to choose a robot chassis kit with wheels and motors, a motor controller, and some power for the robot, talking through the trade-offs and things to avoid. This article is an excerpt from a book written by Danny Staple titled Learn Robotics Programming. In this book, you'll gain experience of building a next-generation collaborative robot.

Choosing a robot chassis kit

The chassis, like the controller, is a fundamental decision when making a robot. Although these can be self-made using 3D printing or toy hacking, the simplest place to start is with a robot chassis kit. These kits contain sets of parts to start off your robot build. A chassis can be changed, but it would mean rebuilding the robot. The internet has plenty of robot chassis kits around. Too many, so how do you choose one?

Size

Getting the size of a robot right matters too. Take a look at the following photos:

Chassis 1, at 11 cm, just about fits a controller in it, but is too tiny. This will make it hard to build your robot. Squeezing the controller, power, and all the sensors into this small space would need skill and experience beyond the scope of a first robot build.

Chassis 2 is Armbot. This large robot is 33 cm by 30 cm, with an arm reach of another 300 mm. It needs eight AA batteries, big motors, and a big controller. These add to the expense and may cause issues around power handling for a new builder. It has lots of space, but issues around weight and rigidity. Armbot is one of my most expensive robots, excluding the cost of the arm!

Chassis 3 in the preceding image will fit the Pi, batteries, and sensors, without being large and bulky. It is around the right dimensions, being between 15-20 cm long and 10-15 cm wide. Chassis with split levels might be great for this, but only one or two levels, as three or four will make a robot top-heavy and may cause it to topple. This has enough space and is relatively easy to build.
Wheel count

Some robot chassis kits have elaborate movement methods: legs, tank tracks, and tri-star wheels, to name a few. While these are fun and I encourage experimenting with them, this is not the place to start. So, I recommend a thoroughly sensible, if basic, wheels-on-motors version. There are kits with four-wheel drive and six-wheel drive. These can be quite powerful and will require larger motor controllers. They may also chew through batteries, and you are increasing the likelihood of overloading something. This also makes for trickier wiring, as seen in the following:

Two-wheel drive is the simplest to wire in. It usually requires a third wheel for balance. This can be a castor wheel, a roller ball, or just a Teflon sled for tiny robots. Two wheels are also the easiest to steer, avoiding some friction issues seen with robots using four or more wheels. Two wheels won't have the pulling power of four or six-wheel drive, but they are simple and will work. They are also less expensive.

Wheels and motors

A kit for a beginner should come with the wheels and the motors. The wheels should have simple non-pneumatic rubber tires. The most obvious style for inexpensive robots is shown in the following photo. There are many kits with these in them.

The kit should also come with two motors, one for each wheel, and include the screws or parts to mount them onto the chassis. I recommend DC gear motors, as the gearing will keep the speed usable while increasing the mechanical pushing power the robot has. Importantly, the motors should have the wires connected, like the first motor in the following photo. It is tricky to solder or attach these wires to the small tags on motors, and poorly attached ones do have a frustrating habit of coming off.
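The two-wheel differential steering described above boils down to mixing a forward speed with a turn rate into separate left and right wheel speeds. Here is a minimal sketch of that mixer; it deliberately stops short of any particular motor-controller API, and the -1.0..1.0 speed convention is an assumption for illustration:

```python
def differential_drive(speed, turn):
    """Mix a forward speed and a turn rate into left/right wheel speeds.

    speed and turn are in -1.0..1.0; outputs are clamped to the same range,
    ready to feed to whatever motor controller the robot uses.
    """
    left = max(-1.0, min(1.0, speed + turn))
    right = max(-1.0, min(1.0, speed - turn))
    return left, right

print(differential_drive(0.8, 0.0))   # straight ahead: equal wheel speeds
print(differential_drive(0.5, 0.5))   # curve: one wheel faster than the other
print(differential_drive(0.0, 1.0))   # spin on the spot: wheels in opposition
```

Driving the two wheels at different speeds is all the steering a two-wheel chassis needs, which is why this layout avoids the friction issues of four- and six-wheel robots.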
The kits you will want to start with have these wires attached, as can be seen in the following:

Another point to note is that where the motors are mounted, the kits should have some encoder wheels, and a slot to read them through. The encoder wheels are also known as odometry, tacho, or tachometer wheels.

Simplicity

You don't want to use a complex or hard-to-assemble kit for your first robot build. I've repeated this throughout: two-wheel drive, two motors with the wires soldered on, and steering clear of large robots or unusual and interesting locomotion systems, not because they are flawed, but because it's better to start simple. There is a limit to this: a kit that is a fully built and enclosed robot leaves little room for learning or experimentation and would actually require toy-hacking skills to customize.

Cost

Related to simplicity is cost. Robot chassis kits can be bought from around $15, up to thousands of dollars. Larger and more complex robots tend to be far more costly. Here, I am aiming to keep to the less costly options, or at least show where they are possible.

Conclusion

So, now you can choose a chassis kit with two wheels and a castor, two motors with wires soldered onto them, and slots and encoder wheels. These are not expensive, and are widely available on popular internet shopping sites as "Smart Car Chassis," with terms like "2WD". The kit I'm working with looks like the preceding photo when assembled without the Raspberry Pi.

Choosing a motor controller

The next important part you'll need is a motor controller. Much like the motors, there are a number of trade-offs and considerations before buying one.

Integration level

Motor controllers can be as simple as motor power control driven from GPIO pins directly, such as the L298. This is the cheapest solution: a generic L298N motor controller can be connected to some of the IO pins on the Raspberry Pi. These are reasonably robust and have been easily available for a long time. They are flexible, but using parts like this will take up more space and need to be wired point to point, adding complexity to the build.
They are flexible, but using parts like this will take up more space and need to be wired point to point, adding complexity to the build: Others are as complex as whole IO controller boards, many of which hide their own controller similar to an Arduino, along with motor control chips. Although the cheapest and most flexible ways are the most basic controllers, those with higher integration will reduce size, keep the pin usage count low (handy when you are connecting a lot to the robot), and may simplify your robot build. They often come integrated with a power supply too. Motor controllers can be bought as fully integrated Raspberry Pi hats, boards designed to fit exactly on top of a Raspberry Pi. These tend to have a high level of integration, as discussed before, but may come at the cost of flexibility, especially if you plan to use other accessories. Pin usage When buying a motor controller in Raspberry Pi hat form, pin usage is important. If we intend to use microphones (PCM/I2S), servo motors, and I2c and SPI devices with this robot, having boards that make use of these pins is less than ideal. Simply being plugged into pins doesn't mean they are all used, so only a subset of the pins is usually actually connected on a hat. To get an idea of how pins in different boards interact on the Raspberry Pi, take a look at https://pinout.xyz , which lets you select Raspberry Pi boards and see the pin configuration for them. Controllers that use the I2C or serial bus are great because they make efficient use of pins and that bus can be shared. At the time of writing, PiConZero, the Stepper Motor Hat, and ZeroBorg all use I2C pins. The Full Function Stepper Motor Hat is able to control DC motors and servo motors, is cheap, and is widely available. It also has the pins available straight through on the top and an I2C connector on the side. It's designed to work with other hats and allow more expansion. 
Size

The choice here depends on the chassis, specifically the size of the motors you have. In simple terms, the larger your chassis, the larger a controller you will need. The power handling capacity of a motor controller is specified in amps. For a robot like the kit shown earlier, around 1 to 1.5 amps per channel is good. The consequence of too low a rating can be disaster, resulting in a robot that barely moves while the components cook themselves or violently go bang. Too large a controller has consequences for space, weight, and cost. The level of integration can also contribute to size: a tiny board that stacks on a Pi will take up less space than separate boards. Related to size is whether the board keeps the camera port on the Raspberry Pi accessible.

Soldering

As you choose boards for a robot, you will note that some come as kits themselves, requiring parts to be soldered on. If you are already experienced with this, it may be an option. For experienced builders, this becomes a small cost in time, depending on the complexity of the soldering. A small header is going to be a very quick and easy job, while a board that comes as a bag of components with a bare board will take a chunk of an evening. Here, I will recommend components that require the least soldering.

Connectors

Closely related to soldering are the connectors for the motors and batteries. I tend to prefer the screw-type connectors. Other types may require matching motors or crimping skills.

Conclusion

Our robot is space constrained; for this reason, we will be looking at the Raspberry Pi hat form factor. We are also looking to keep the number of pins it binds to really low. An I2C-based hat will let us do this. The Full Function Stepper Motor Hat (also known as the Full Function Robot Expansion Board) gets us access to all the Pi pins while being a powerful motor controller. It's available in most countries, has space for the ribbon for the camera, and controls servo motors.
Alternatives include the 4tronix PiConZero hat, or assembling a stack of PiBorg hats. These may be harder to source outside of the UK, and the reader will need to adapt the code, and consider a tiny shim to retain access to the GPIO pins, if using a different board.

In this article, we learned about selecting the parts needed to build a basic robot. We looked at the size, wheels, cost, and connectors for the robot chassis and a controller. To learn more about robotics and to build your own robot, check out the book Learn Robotics Programming.

Real-time motion planning for robots made faster and efficient with RapidPlan processor
Boston Dynamics adds military-grade motor (parkour) skills to its popular humanoid Atlas Robot
Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
Streamline your application development process in 5 simple steps

Guest Contributor
23 Apr 2019
7 min read
Chief Information Officers (CIOs) are under constant pressure to deliver substantial results that meet business goals. Planning a project and seeing it through to the end is a critical requirement of an effective development process. In the fast-paced world of software development, getting results is essential for businesses to flourish.

There is a certain pleasure you get from ticking tasks off your to-do lists. However, this becomes a burden when you are drowning in tasks. Signs of inefficient processes are prevalent in every business. Unhappy customers, stressed-out colleagues, disappointing code reviews, missed deadlines, and increases in costs are just some of the examples that are the direct result of dysfunctional processes. By streamlining your workflow, you will be better placed to adopt modern technologies like Machine Learning and Artificial Intelligence, which can help you automate the workflow and make your daily processes even smoother. Listed below are 5 steps that can help you streamline your development process.

Step 1: Creating a Workflow

This is a preliminary step for companies who have not considered creating a better workflow. A task is not just something you can write down, complete, and tick off. Complex, software-related tasks are not like "do-the-dishes" tasks. Usually, there are many stages in software development tasks, such as planning, organizing, reviewing, and releasing. Regardless of the niche of your tasks, the workflow should be clear. You can always use software tools such as Zapier, Nintex, and ProcessMaker to customize your workflow and assign levels of importance to particular tasks. This might appear to be micro-management at first, but once it becomes part of the daily routine, it starts to get easier.
Creating a workflow is probably the most important factor to consider when you are preparing to streamline your software development processes. There are several steps involved in creating a workflow:

Mapping the Process

Process mapping mainly focuses on the visualization of the current development process, which allows a top-down view of how things are working. You can do process mapping via tools such as Draw.io, LucidChart, and Microsoft Visio.

Analyze the Process

Once you have a flowchart or a swim-lane diagram set up, use it to investigate the problems within the process. The problems can range from costs and time to employee motivation and other bottlenecks.

Redesign the Process

When you have identified the problems, you should try to solve them step by step. Working with people who are directly involved in the process (e.g. software developers) and gaining on-the-ground insight can prove very useful when redesigning the processes.

Acquire Resources

You now need to secure the resources that are required to implement the new processes. With regards to our topic, this can range from buying licensed software to faster computers.

Implementing Change

It is highly likely that your business processes will change along with existing systems, teams, and processes. Allocate your time to solving these problems, while keeping regular operations running.

Process Review

This phase might seem the easiest, but it is not. Once the changes are in place, you need to review them regularly so that problems do not rise up again.

Once the workflow is set in place, all you have to do is identify the bugs in your workflow plan. The bugs can range anywhere from slow tasks and re-opened finished tasks to dead tasks. What we have observed about workflows is that you do not get them right the first time. You need to take your time to edit and review the workflow while still being in the loop of the workflow.
The more transparent and active your process is, the easier it gets to spot problems and figure out solutions.

Step 2: Backlog Maintenance

Many times you assume all the tasks in your backlog to be important. They might be, but this makes the backlog a little too jam-packed. Your backlog will not serve a purpose unless you take an active part in keeping it organized. A backlog, while being a good place to store tasks, is also home to tasks that will never see the light of day. A good practice, therefore, would be to either clean up your backlog of dead tasks or combine them with tasks that have more importance in your overall workflow. If some of the tasks are relatively low-priority, we would recommend creating a separate backlog altogether. Backlogs are meant to be a database of tasks, but do not let that fact get over your head. You should not worry about deleting something important from your backlog; if the task is important, it will come back. You can use sites like Trello or Slack to create and maintain a backlog.

Step 3: Standardized Procedure for Tasks

You should have an accurate definition of "done". With respect to software development, there are several things you need to consider before actually calling a task accomplished. These include:

Ensure all the features have been applied
The unit tests are finished
Software information is up-to-date
Quality assurance tests have been carried out
The code is in the master branch
The code is deployed in production

This is simply a template of what you can consider "done" with respect to a software development project. Like any template, it gets even better when you make your own additions and subtractions to it. Having a standardized definition of "done" helps remove confusion from the project, so that every employee has an understanding of every stage until they are finished, and it also gives you time to think about what you are trying to achieve.
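A "definition of done" like the template above is easy to make executable, for example as a simple checklist gate. The criteria strings below follow the template; the task data shape is invented for illustration:

```python
# Criteria adapted from the "done" template above.
DONE_CRITERIA = [
    "features applied",
    "unit tests finished",
    "software information up-to-date",
    "QA tests carried out",
    "code in master branch",
    "code deployed in production",
]

def is_done(task):
    """A task is done only when every criterion has been checked off."""
    missing = [c for c in DONE_CRITERIA if not task.get(c, False)]
    return len(missing) == 0, missing

# A task with everything checked off except the deployment step.
task = {c: True for c in DONE_CRITERIA}
task["code deployed in production"] = False

done, missing = is_done(task)
print(done, missing)  # False ['code deployed in production']
```

Encoding the checklist this way keeps "done" unambiguous: a task either passes every criterion or the gate reports exactly what is still outstanding.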
Lastly, it is always wise to spend a little extra time completing a task phase so that you do not have to revisit it several times.

Step 4: Work in Progress (WIP) Control

The ultimate workflow killer is multitasking. Overloading your employees with constant tasks results in an overall decline in output. Therefore, it is important not to burden your employees with multiple simultaneous tasks, which only increases their work in progress. To fight the problem of multitasking, reduce your cycle times by having fewer tasks open at one time. Consider setting a WIP limit inside your workflow by introducing limits for daily and weekly tasks. This helps keep employee workloads under control and reduces their burden.

Step 5: Progress Visualization

When you have everything set up in your workflow, it is time to present that data to current and potential stakeholders. You need to make clear which features are completed, which ones you are currently working on, and whether you will be releasing the product on time. A good way to present data to senior management is through visualizations. You can use tools like Jira or Trello to make your data shine even more. For data representation, you can use various free online tools, or software like Microsoft PowerPoint or Excel. Whatever tools you use, your end goal should be to make the information as simple as possible for the stakeholders: avoid clutter and too much technical detail.

However, these are not the only methods you can use. Look around your company and see where your current processes are lacking. Take note of all of them, and research how you can change them for the better.

Author Bio

Shawn Mike has been providing ghostwriting and copywriting services for challenging clients for over five years.
His educational background in the technical field and business studies has given him the edge to write on many topics. He occasionally writes blogs for Dynamologic Solutions. Microsoft Store updates its app developer agreement, to give developers up to 95% of app revenue React Native Vs Ionic: Which one is the better mobile app development framework? 9 reasons to choose Agile Methodology for Mobile App Development
Amrata Joshi
21 Nov 2018
6 min read

What is security chaos engineering and why is it important?

Chaos engineering is, at its root, all about stress testing software systems in order to minimize downtime and maximize resiliency. Security chaos engineering takes these principles forward into the domain of security. The central argument of security chaos engineering is that current security practices aren't fit for purpose. "Despite spending more on security, data breaches are continuously getting bigger and more frequent across all industries," write Aaron Rinehart and Charles Nwatu in a post published on opensource.com in January 2018. "We hypothesize that a large portion of data breaches are caused not by sophisticated nation-state actors or hacktivists, but rather simple things rooted in human error and system glitches." The rhetorical question they're asking is clear: should we wait for an incident to happen in order to work on it? Or should we be looking at ways to prevent incidents from happening at all?

Why do we need security chaos engineering today?

There are two problems that make security chaos engineering so important today. One is the way in which security breaches and failures are understood culturally across the industry. Security breaches tend to be seen as either isolated attacks or 'holes' within software: anomalies that should have been thought of but weren't. In turn, this leads to a spiral of failures. Rather than thinking about cybersecurity in a holistic and systematic manner, the focus is all too often on simply identifying weaknesses when they happen and putting changes in place to stop them from happening again. You can see this approach even in the way organizations communicate after high-profile attacks have taken place: 'we're taking steps to ensure nothing like this ever happens again.' While that sentiment is important for both customers and shareholders to hear, it also betrays exactly the problems Rinehart and Nwatu appear to be talking about. The second problem is more about the nature of software today.
As the world moves to distributed systems, built on a range of services and with an extensive set of software dependencies, vulnerabilities naturally begin to increase too. "Where systems are becoming more and more distributed, ephemeral, and immutable in how they operate... it is becoming difficult to comprehend the operational state and health of our systems' security," Rinehart and Nwatu explain. When you take the cultural issues and the evolution of software together, it becomes clear that the only way cybersecurity is going to properly tackle today's challenges is by doing an extensive rethink of how and why things happen.

What security chaos engineering looks like in practice

If you want to think about what the transition to security chaos engineering actually means in practice, a good way to think about it is as a shift in mindset: a mindset that doesn't focus on isolated issues but instead on the overall health of the system. Essentially, you start with a different question: don't ask 'where are the potential vulnerabilities in our software?', ask 'where are the potential points of failure in the system?' Rinehart and Nwatu explain: "Failures can consist not only of IT, business, and general human factors but also the way we design, build, implement, configure, operate, observe, and manage security controls. People are the ones designing, building, monitoring, and managing the security controls we put in place to defend against malicious attackers." By focusing on questions of system design and decision making, you can begin to capture security threats that you might otherwise miss. So, while malicious attacks might account for 47% of all security breaches, human error and system glitches combined account for 53%. This means that while we're all worrying about the hooded hacker that dominates stock imagery, someone made a simple mistake that just about any software-savvy criminal could take advantage of.
How is security chaos engineering different from penetration testing?

Security chaos engineering looks a lot like penetration testing, right? After all, the whole point of pentesting is, like chaos engineering, determining weaknesses before they can have an impact. But there are some important differences that shouldn't be ignored. Again, the key difference is the mindset behind each. Penetration testing is, for the most part, an event. It's something you do when you've updated or changed something significant, and it has a very specific purpose. That's not a bad thing, but with such a well-defined testing context you might miss security issues that you hadn't even considered. And if you consider the complexity of a given software system, in which its state changes according to the services and requests it is handling, it's incredibly difficult, not to mention expensive, to pentest an application in every single possible state. Security chaos engineering tackles that by actively experimenting on the software system to better understand it. The context in which it takes place is wide-reaching and ongoing, not isolated and particular.

ChaoSlingr, the security chaos engineering tool

ChaoSlingr is perhaps the most prominent tool out there to help you actually do security chaos engineering. Built for AWS, it allows you to perform a number of different 'security chaos experiments' in the cloud. Essentially, ChaoSlingr pushes failures into the system in a way that allows you to not only identify security issues but also better understand your infrastructure. This SlideShare deck, put together by Aaron Rinehart himself, is a good introduction to how it works in a little more detail. Security teams have typically focused on preventive security measures. ChaoSlingr empowers teams to dig deeper into their systems and improve them in ways that mitigate security risks. It allows you to be proactive rather than reactive.
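The experiment loop at the heart of a tool like ChaoSlingr can be sketched in a few lines: deliberately inject a misconfiguration, check whether your monitoring notices, then roll back. The sketch below is a toy in plain Python; the "firewall" is a dict and the "detector" a function, hypothetical stand-ins for real cloud APIs and alerting pipelines:

```python
import copy

# Toy stand-ins for real infrastructure: a "firewall" config and a
# detector that represents your monitoring/alerting pipeline.
SECURE_BASELINE = {"port_22_open_to_world": False, "s3_public": False}

def detector(config):
    """Pretend monitoring: flags any setting that drifted from baseline."""
    return [k for k, v in config.items() if v != SECURE_BASELINE[k]]

def run_experiment(inject_key):
    """Inject one misconfiguration, record whether it was detected, roll back."""
    live = copy.deepcopy(SECURE_BASELINE)
    live[inject_key] = True                  # the deliberate "failure"
    detected = inject_key in detector(live)  # did monitoring catch it?
    live = copy.deepcopy(SECURE_BASELINE)    # always roll back afterwards
    return {"injected": inject_key, "detected": detected,
            "rolled_back": live == SECURE_BASELINE}

result = run_experiment("port_22_open_to_world")
print(result)  # {'injected': 'port_22_open_to_world', 'detected': True, 'rolled_back': True}
```

The interesting outcome is not when `detected` is True but when it is False: that is a blind spot in your security controls, found by you rather than by an attacker.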
The future is security chaos engineering Chaos engineering has not quite taken off - yet. But it’s clear that the principles behind it are having an impact across software engineering. In particular, at a time when ever-evolving software feels so vulnerable - fragile even - applying it to cybersecurity feels incredibly pertinent and important. It’s true that the shift in mindset is going to be tough. But if we can begin to distrust our assumptions, experiment on our systems, and try to better understand how and why they work the way they do, we are certainly moving towards a healthier and more secure software world. Chaos Conf 2018 Recap: Chaos engineering hits maturity as community moves towards controlled experimentation Chaos engineering platform Gremlin announces $18 million series B funding and new feature for “full-stack resiliency” Gremlin makes chaos engineering with Docker easier with new container discovery feature
Prasad Ramesh
20 Oct 2018
4 min read

Julia for machine learning. Will the new language pick up pace?

Machine learning can be done using many languages, with Python and R being the most popular. But one language has been overlooked for some time: Julia. Why isn't machine learning with Julia a thing? Julia isn't an obvious choice for machine learning simply because it's a new language that has only recently hit version 1.0. While Python is well established, with a large community and many libraries, Julia simply doesn't have the community to shout about it. And that's a shame. Right now Julia is used in various fields, from optimizing milk production in dairy farms to parallel supercomputing for astronomy. A common theme here is that these applications all require numerical, scientific, and sometimes parallel computation. Julia is well suited to tasks where intensive computation is essential. Viral Shah, CEO of Julia Computing, told Forbes: "Amazon, Apple, Disney, Facebook, Ford, Google, Grindr, IBM, Microsoft, NASA, Oracle and Uber are other Julia users, partners and organizations hiring Julia programmers." Clearly, Julia is powering the analytical nous of some of the most high-profile organizations on the planet. Perhaps it just needs more cheerleading to go truly mainstream.

Why Julia is a great language for machine learning

Julia was originally designed for high-performance numerical analysis. This means that everything that has gone into its design is built for the very things you need to do to build effective machine learning systems.

Speed and functionality

Julia combines functionality from popular languages like Python, R, Matlab, SAS, and Stata with the speed of C++ and Java. Many of the standard LaTeX symbols can be used in Julia, with the syntax usually being the same as in LaTeX. This mathematical syntax makes it easy to implement mathematical formulae in code, which is exactly what machine learning work demands.
It also has built-in support for parallelism, which allows utilization of multiple cores at once, making computations fast. Julia's loops and functions are fast, fast enough that you would probably notice significant performance differences compared to other languages. Performance can be almost comparable to C, with very little code actually used. With packages like ArrayFire, generic code can be run on GPUs. Julia's multiple dispatch feature is very useful for defining numeric and array-like datatypes, and matrices and data tables work with good compatibility and performance. Julia has automatic garbage collection and a collection of libraries for mathematical calculations, linear algebra, random number generation, and regular expression matching.

Libraries and scalability

Machine learning in Julia can be done with powerful packages like MLBase.jl, Flux.jl, and Knet.jl, which support machine learning and artificial intelligence systems. It also has a scikit-learn implementation called ScikitLearn.jl. Although ScikitLearn.jl is not an official port, it is a useful additional tool for building machine learning systems with Julia. As if all those weren't enough, Julia also has TensorFlow.jl and MXNet.jl. So, if you already have experience with these tools in other implementations, the transition is a little easier than learning everything from scratch. Julia is also incredibly scalable. It can be deployed on large clusters quickly, which is vital if you're working with big data across a distributed system.

Should you consider Julia for machine learning?

Because it's fast and possesses a great range of features, Julia could potentially overtake both Python and R to become the language of choice for machine learning in the future. Okay, maybe we shouldn't get ahead of ourselves. But with Julia reaching the 1.0 milestone, and the language rising on the TIOBE index, you certainly shouldn't rule out Julia when it comes to machine learning.
Julia is also available to use in the popular tool Jupyter Notebook, paving a path for wider adoption. A note of caution, however, is important. Rather than simply dropping everything for Julia, it will be worth monitoring the growth of the language. Over the next 12 to 24 months we’ll likely see new projects and libraries, and the Julia machine learning community expanding. If you start hearing more noise about the language, it becomes a much safer option to invest your time and energy in learning it. If you are just starting off with machine learning, then you should stick to other popular languages. An experienced engineer, however, who already has a good grip on other languages shouldn’t be scared of experimenting with Julia - it gives you another option, and might just help you to uncover new ways of working and solving problems. Julia 1.0 has just been released What makes functional programming a viable choice for artificial intelligence projects? Best Machine Learning Datasets for beginners
Guest Contributor
03 Oct 2019
10 min read

6 Tips to Prevent Social Engineering

Social engineering is a tactic where the attacker influences the victim to obtain valuable information. Office employees are targeted to reveal confidential data about a corporation, while non-specialists can come under the radar to disclose their credit card information. A victim might also be threatened that the attacker will hack his or her system if the requested material isn't handed over. In this method, the perpetrator can take any form of disguise, but most of the time he or she poses as tech support or as a bank representative. This isn't always the case, although the objective is the same: they sniff out the information you conceal from everybody by gaining your trust. Social engineering succeeds when the wrongdoer learns the victim's weaknesses and then manipulates his or her trust. Often, the victim shares private information without paying much heed to the person who contacts him. Later, the victim is blackmailed: provide the sensitive data demanded, or face the threatened consequences.

Examples of Social Engineering attacks

As noted above, the attacker can take any form of disguise, but the most common approaches are described here. The wrongdoers update their methods daily to penetrate your system, so you should be extremely wary of your online security. Always stay alert whenever providing someone with your private credentials. The listed examples are variations on one another; there are many more, but these are the most common. The purpose of all of them is to manipulate you. As the name states, social engineering is simply the practice of tricking an individual into giving up everything to the person who gains their trust.

Phishing Attack

Phishing is a malicious attempt to access a person's personal and sensitive information, such as financial credentials. The attacker behind a phishing attack pretends to be an authentic identity or source to fool an individual.
This social engineering technique mainly involves email spoofing or instant messaging the victim. It may steer people into entering their sensitive details on a fraudulent website designed to look exactly like a legitimate one.

Unwanted tech support

Tech support scams are becoming widespread and can have an industry-wide effect. This tactic involves fraudulent attempts to scare people into thinking there is something wrong with their device. Attackers behind this scam try to make money by tricking an individual into paying for an issue that never existed. Offenders usually email or call you to solve issues regarding your system. Mostly, they tell you that an update is needed. If you are not wary of this hoax, you can land yourself in danger: the attacker might ask you to run a command on your system which will leave it unresponsive. This belongs to the branch of social engineering known as scareware. Scareware uses fear and curiosity against humans to either steal information or sell you useless pieces of software. Sometimes it can be harsher and hold your data hostage unless you pay a hefty amount.

Clickbait Technique

The term clickbait refers to the technique of trapping individuals via a fraudulent link with a tempting headline. Cybercriminals take advantage of the fact that most legitimate sites and content also use a similar technique to attract readers or viewers. In this method, the attacker sends you enticing ads related to games, movies, and so on. Clickbait is most often seen on peer-to-peer networking systems. If you click on a clickbait link, an executable command or a virus can be installed on your system, leading to it being hacked.

Fake email from a trusted person

Another tactic the offender uses is sending you an email from a friend's or relative's email address, claiming that he or she is in danger.
That email account will have been hacked, and because you trust the sender, it's likely you will fall for the attack. The email will state the information you should provide so that you can release your contact from the threat.

Pretexting Attack

Pretexting is also a common form of social engineering used for gaining both sensitive and non-sensitive information. The attackers present themselves as an authentic entity so that they can access user information. Unlike phishing, pretexting creates a false sense of trust with the victim through invented stories, whereas phishing scams rely on fear and urgency. In some cases the attack can become intense, such as when the attacker manipulates the victim into carrying out a task that enables them to exploit the structural weaknesses of a firm or organization. An example of this is an attacker masquerading as an employee of your bank in order to 'cross-check' your credentials. This is by far the most frequent tactic used by offenders.

Sending content to download

The attacker sends you files containing music, movies, games, or documents that appear to be just fine. A newcomer to the internet will think how lucky he is to get the stuff he wanted without even asking. Little does he know that the files he just downloaded are embedded with viruses.

Tips to Prevent Social Engineering

Having covered the most common examples of social engineering, let us look at how you can protect yourself from being manipulated.

1) Don't give up your private information

Would you ever surrender your secret information to a person you don't know? No, obviously. Therefore, do not spill your sensitive information on the web unnecessarily. If you do not recognize the sender of an email, discard it. If you are buying stuff online, only provide your credit card information over a secure HTTPS connection. When an unknown person calls or emails you, think before you submit your data.
Attackers want you to speak first and realize later. Remain skeptical, and notice during a conversation when the other party starts digging into your sensitive information. Always think of the consequences before you submit your credentials to anyone.

2) Enable your spam filter

Most email service providers come with spam filters. Any email that is deemed suspicious is automatically moved to the spam folder. Credible email services detect suspicious links and files that might be harmful and warn the user that they are downloaded at their own risk. Some files with specific extensions are barred from downloading altogether. By enabling the spam feature, you free yourself from manually categorizing emails and are relieved of the tedious task of spotting mistrustful messages. The perpetrators of social engineering will have no door through which to reach you, and your sensitive data will be shielded from attackers.

3) Stay cautious with your passwords

A pro tip: never reuse the same password across the platforms you log on to. Leave no traces behind and delete all sessions after you are done surfing and browsing. Use social media wisely and stay cautious about the people you tag and the information you provide, since an attacker might be lurking there. This matters because if your social media account gets hacked and you use the same password for different websites, your data can be breached through and through: you can be blackmailed into paying a ransom to prevent your details from being leaked over the internet. Perpetrators can get your passwords pretty quickly, but what happens if you get infected with ransomware? All of your files will be encrypted, and you will be forced to pay the ransom with no guarantee of getting your data back, which is why the best countermeasure against this attack is to prevent it from happening in the first place.

4) Keep software up to date

Always apply your system's software patches.
Maintain your drivers and keep a close eye on your network firewall. Stay alert when an unknown person connects to your Wi-Fi network, and keep your antivirus up to date. Download content from legitimate sources only and be mindful of the dangers. Hacks often take place when the software the victim is using is out of date; when vulnerabilities are exposed, offenders exploit the system and gain access to it. Regularly updating your software can safeguard you from a ton of dangers, leaving no backdoors for hackers to abuse.

5) Pay attention to what you do online

Think of the time you got self-replicating files on your PC after you clicked on a particular ad. Don't want that to happen again? Train yourself not to click on clickbait and scam advertisements. Know that most lotteries you find online are fake, and never provide your financial details to them. Carefully inspect the URL of any website you land on. Most scammers make a copy of a website's front page and change the link slightly, and this is done so efficiently that the average eye cannot detect the change in the URL, so the user opens the website and enters his credentials. Therefore, stay alert.

6) Remain skeptical

The solution to most problems is to remain skeptical online. Do not click on spam links, and do not open suspicious emails. Furthermore, pay no heed to messages stating that you have won a lottery or been granted a check for a thousand grand. Remain skeptical to the utmost degree. With this mindset, a hacker has nothing to draw you in with, since you aren't paying attention to him. Time and again, this tactic has helped people stay safe online without ever being targeted by hackers. Consequently, as you aren't getting drawn in by suspicious content, you will be saved from social engineering.

Final Words

All the tips described above boil down to one point: skepticism is vital for your digital privacy.
When you are skeptical about your online presence, you are protected from online manipulation. Your credit card information and other sensitive details will be shielded as well, since you never disclosed them to anyone in the first place. All of this is achieved by doubting what occurs online: you inspected the links you visited and discarded suspicious emails, and thus you are secure. With these actions taken, you have prevented social engineering from happening.

Author Bio

Peter Buttler is a Cybersecurity Journalist and Tech Reporter, currently employed as a Senior Editor at PrivacyEnd. He contributes to a number of online publications, including Infosecurity-magazine, SC Magazine UK, Tripwire, Globalsign, and CSO Australia, among others. Peter covers different topics related to online security, big data, IoT, and artificial intelligence. With more than seven years of IT experience, he also holds a Master's degree in cybersecurity and technology. @peter_buttlr

Researchers release a study into Bug Bounty Programs and Responsible Disclosure for ethical hacking in IoT
How has ethical hacking benefited the software industry
10 times ethical hackers spotted a software vulnerability and averted a crisis
Sunith Shetty
25 Jul 2018
4 min read

Why should enterprises use Splunk?

Splunk is a multinational software company that offers its core platform, Splunk Enterprise, as well as many related offerings built on top of it. The platform helps a wide variety of organizational personas, such as analysts, operators, developers, testers, managers, and executives, get analytical insights from machine-created data. It collects and stores this data and provides powerful analytical capabilities, enabling organizations to act on the often powerful insights derived from it. The Splunk Enterprise platform was built with IT operations in mind. When companies had IT infrastructure problems, troubleshooting and solving them was immensely difficult, complicated, and manual; Splunk was built to collect log files from IT systems and make them searchable and accessible. It is commonly used for information security and development operations, as well as more advanced use cases involving custom machines, the Internet of Things, and mobile devices. Most organizations will start using Splunk in one of three areas: IT operations management, information security, or development operations (DevOps). In today's post, we will cover the thoughts, concepts, and ideas needed to apply Splunk at an organizational level. This article is an excerpt from a book written by J-P Contreras, Erickson Delgado and Betsy Page Sigman titled Splunk 7 Essentials, Third Edition.

IT operations

IT operations have moved from being predominantly a cost center to also being a revenue center. Today, many of the world's oldest companies also make money based on IT services and/or systems. As a result, the delivery of these IT services must be monitored and, ideally, proactively remedied before failures occur. Ensuring that hardware such as servers, storage, and network devices is functioning properly via its log data is important. Organizations can also log and monitor mobile and browser-based software applications for any issues.
Ultimately, organizations will want to correlate these datasets to get a complete picture of IT health. In this regard, Splunk takes the expertise accumulated over the years and offers a paid-for application known as IT Service Intelligence (ITSI) to give companies a framework for tackling large IT environments. Complicating matters for many traditional organizations is the use of cloud computing technologies, which now drive logs captured from both internally and externally hosted systems.

Cybersecurity

With the relentless focus on cybersecurity in today's world, there is a good chance your organization will need a tool such as Splunk to address a wide variety of information security needs as well. It acts as a log data consolidation and reporting engine, capturing essential security-related log data from devices and software such as vulnerability scanners, phishing prevention, firewalls, and user management and behavior, just to name a few. Companies need to ensure they are protected from external as well as internal threats, and for this Splunk offers the paid-for applications Enterprise Security and User Behavior Analytics (UBA). Similar to ITSI, these applications deliver frameworks to help companies meet their specific requirements in these areas. In addition to cybersecurity to protect the business, companies often have to comply with, and audit against, specific security standards, which can be industry-related, such as PCI compliance for financial transactions; customer-related, such as National Institute of Standards and Technology (NIST) requirements when working with the US government; or data-privacy-related, such as the Health Insurance Portability and Accountability Act (HIPAA) or the European Union's General Data Protection Regulation (GDPR).
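Getting this kind of log data into Splunk often happens over its HTTP Event Collector (HEC), which accepts JSON events via a simple POST. Below is a hedged Python sketch of building an HEC payload; the host, token, sourcetype, and index values are placeholders, and the send step is defined but deliberately not executed:

```python
import json
import urllib.request

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder host
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                            # placeholder token

def build_hec_event(event, sourcetype="app:log", index="main"):
    """Build the JSON body Splunk's HTTP Event Collector expects."""
    return json.dumps({"event": event, "sourcetype": sourcetype, "index": index})

def send_to_splunk(payload):
    """POST an event to HEC. Not called here, since the endpoint is a placeholder."""
    req = urllib.request.Request(
        SPLUNK_HEC_URL,
        data=payload.encode("utf-8"),
        headers={"Authorization": f"Splunk {HEC_TOKEN}",
                 "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

payload = build_hec_event({"user": "alice", "action": "login", "status": "failure"},
                          sourcetype="auth:events", index="security")
print(payload)
```

Events shaped like this (a failed login, for instance) are exactly the raw material the security use cases above correlate and report on.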
Software development and support operations

Commonly referred to as DevOps, Splunk's ability to ingest and correlate data from many sources solves many challenges faced in software development, testing, and release cycles. Using Splunk will help teams deliver higher-quality software more efficiently. Then, with controls built into the software, it provides visibility into released software, its use, and user behavior changes, intended or not. This set of use cases is particularly applicable to organizations that develop their own software.

Internet of Things

Many organizations today are looking to build upon the converging trends in computing, mobility, wireless communications, and data to capture data from more and more devices. Examples include data captured from sensors placed on machinery such as wind turbines, trains, and heating and cooling systems. These sensors provide access to the data they capture in standard formats such as JavaScript Object Notation (JSON) through application programming interfaces (APIs).

To summarize, we saw how Splunk can be used at an organizational level for IT operations, cybersecurity, software development and support, and the Internet of Things. To know more about how Splunk can be used to make informed decisions in these areas, do check out the book Splunk 7 Essentials, Third Edition.

Create a data model in Splunk to enable interactive reports and dashboards
Splunk leverages AI in its monitoring tools
Splunk Industrial Asset Intelligence (Splunk IAI) targets Industrial IoT marketplace
Guest Contributor
13 Jul 2019
8 min read

Why do IT teams need to transition from DevOps to DevSecOps?

Does your team perform security testing during development? If not, why not? Cybercrime is on the rise, and formjacking, ransomware, and IoT attacks have increased alarmingly in the last year. This makes security a priority at every stage of development. In this kind of ominous environment, development teams around the globe should take a more proactive approach to threat detection. This can be done in a number of ways. There are some basic techniques that development teams can use to protect their development environments. But ultimately, what is needed is the integration of threat identification and management into the development process itself. Integrated processes like this are referred to as DevSecOps, and in this guide, we'll take you through some of the advantages of transitioning to DevSecOps.

Protect Your Development Environment

First, though, let's look at some basic measures that can help to protect your development environment. For both individuals and enterprises, online privacy is perhaps the most valuable currency of all. Proxy servers, Tor, and virtual private networks (VPNs) have slowly crept into the lexicon of internet users as cost-effective privacy tools to consider if you want to avoid drawing the attention of hackers. But what about enterprises? Should they use the same tools? They would prefer to avoid hackers as well. The answer is more complicated. Encryption and authentication should be addressed early in the development process, especially given the common practice of using open source libraries for app coding. The advanced security protocols that power many popular consumer VPN services make them a good first step toward protecting code and any proprietary technology. Additional controls, like using two-factor authentication and limiting who has access, will further protect the development environment and procedures.
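One concrete control in this spirit is scanning code for leaked credentials before it ever reaches the repository, for example in a pre-commit hook. The sketch below is a minimal, hypothetical Python version; the two regex patterns cover only a couple of well-known key formats, whereas a real team would use a dedicated scanner with far broader coverage:

```python
import re

# Two illustrative patterns only; production scanners ship hundreds.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded_password": re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
}

def scan_for_secrets(text):
    """Return a list of (pattern_name, matched_text) findings in the text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        findings.extend((name, match) for match in pattern.findall(text))
    return findings

# A staged diff with two planted leaks; a pre-commit hook would reject it.
diff = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\npassword = "hunter2"\n'
findings = scan_for_secrets(diff)
print(findings)  # both patterns fire, so the commit would be blocked
```

Wired into version control, a non-empty `findings` list simply aborts the commit, catching the leak before it becomes an incident.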
Beyond these basic measures, though, it is also worth looking in detail at your entire development process and integrating security management at every stage. This is the shift from DevOps to DevSecOps.

DevOps vs. DevSecOps: What's the Difference?

DevOps and DevSecOps are not separate entities, but different facets of the development process. Traditionally, DevOps teams work to integrate software development and implementation in order to facilitate the rapid delivery of new business applications. Since this process omits security testing and solutions, many security flaws and vulnerabilities aren't addressed early enough in the development process. DevSecOps addresses this omission by automating security-related tasks and integrating controls and functions like composition analysis and configuration management into the development process. Previously, DevSecOps focused only on automating security code testing, but it is gradually transitioning to incorporate an operations-centric approach. This helps reconcile two environments that are opposite by nature: DevOps is forward-looking because it is oriented toward rapid deployment, while development security looks backward to analyze and predict future issues. By prioritizing security analysis and automation, teams can still improve delivery speed without the need to retroactively find and deal with threats.

Best Practices: How DevSecOps Should Work

The goal of current DevSecOps best practices is to implement a shift towards real-time threat detection rather than relying on historical analysis. This enables more efficient application development that recognizes and deals with issues as they happen, rather than waiting until there's a problem. Achieving this requires a more deliberate strategy when adopting DevSecOps practices.
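As a concrete, deliberately simple illustration of the kind of automated security code analysis described above, a script like the following could run in CI to flag hardcoded credentials before code is merged. The patterns are illustrative, not exhaustive — real tooling uses far richer rule sets:

```python
import re

# Patterns that commonly indicate hardcoded secrets (illustrative, not exhaustive).
SECRET_PATTERNS = [
    # key = "value" style assignments with sensitive-looking names
    re.compile(r"(?i)(password|passwd|secret|api[_-]?key)\s*[=:]\s*[\"'][^\"']+[\"']"),
    # the shape of an AWS access key ID
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def scan_source(text):
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(pattern.search(line) for pattern in SECRET_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings
```

Wired into a pre-merge check, a non-empty result from `scan_source` would fail the build, forcing the credential into a proper secrets store before the code ships.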
When all areas of concern are addressed, the result is:

- Automatic code procurement: Automated code procurement eliminates the problem of human error and of incorporating weak or flawed code. This benefits developers by allowing vulnerabilities and flaws to be discovered and corrected earlier in the process.
- Uninterrupted security deployment: Automation tools that work in real time create a closed loop of testing, reporting, and real-time threat resolution.
- Leveraged security resources: Automated DevSecOps tools typically address areas related to threat assessment, event monitoring, and code security. This frees your IT or security team to focus on other areas, like threat remediation and elimination.

There are five areas that need to be addressed in order for DevSecOps to be effective:

1. Code analysis: By delivering code in smaller modules, teams are able to identify and address vulnerabilities faster.
2. Management changes: Adapting the protocol whenever management or admins change lets teams act on changes faster and enables security teams to analyze their impact in real time. This eliminates the problem of getting calls about system access after the application is deployed.
3. Compliance: Addressing compliance with the Payment Card Industry Data Security Standard (PCI DSS) and the new General Data Protection Regulation (GDPR) earlier helps prevent failed audits and heavy fines. It also ensures that you have all of your reporting ready to go in the event of a compliance audit.
4. Automating threat and vulnerability detection: Threats evolve and proliferate fast, so security should be agile enough to deal with emerging threats each time code is updated or altered. Automating threat detection earlier in the development process improves response times considerably.
5. Training programs: Comprehensive security response begins with proper IT security training. Developers should craft a training protocol that ensures all personnel who are responsible for security are up to date and on the same page. Organizations should bring security and IT staff into the process sooner. That means advising current team members of current procedures and ensuring that all new staff are thoroughly trained.

Finding the Right Tools for DevSecOps Success

Does a doctor operate with a chainsaw? Hopefully not. Likewise, all of the above points are nearly impossible to achieve without the right tools to get the job done with precision. What should your DevSecOps team keep in their toolbox?

Automation tools provide scripted remediation recommendations for detected security threats. One such tool is Automate DAST, which scans new or modified code against security vulnerabilities listed on the Open Web Application Security Project's (OWASP) list of the most common flaws, such as SQL injection errors. These are flaws you might have missed during static analysis of your application code.

Attack modeling tools create models of possible attack matrices and map their implications. There are plenty of attack modeling tools available, but a good one for identifying cloud vulnerabilities is Infection Monkey, which simulates attacks against the parts of your infrastructure that run on major public cloud hosts like Google Cloud, AWS, and Azure, as well as most cloud storage providers like Dropbox and pCloud.

Visualization tools are used for evolving, identifying, and sharing findings with the operations team. An example of this type of tool is PortVis, developed by a team led by professor Kwan-Liu Ma at the University of California, Davis.
PortVis is designed to display activity by host or port in three different modes: a grid visualization, in which all network activity is displayed on a single grid; a volume visualization, which extends the grid to a three-dimensional volume; and a port visualization, which allows devs to visualize the activity on specific ports over time. Using this tool, different types of attack can be easily distinguished from each other.

Alerting tools prioritize threats and send alerts so that the most hazardous vulnerabilities can be addressed immediately. WhiteSource Bolt, for instance, is a useful tool of this type, designed to improve the security of open source components. It does this by checking those components against known security threats and providing security alerts to devs. These alerts also auto-generate issues within GitHub, where devs can see details such as references for the CVE, its CVSS rating, and a suggested fix; there is even an option to assign the vulnerability to another team member using the milestones feature.

The Bottom Line

Combining DevOps and DevSec is not a meshing of two separate disciplines, but rather the natural transition of development to a more comprehensive approach that takes security into account earlier in the process, and does it in a more meaningful way. This saves a lot of time and hassle by addressing enterprise security requirements before deployment rather than probing for flaws later. The sooner your team hops on board with DevSecOps, the better.

Author Bio

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum foundation as well as an active Github contributor.

Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]
Does it make sense to talk about DevOps engineers or DevOps tools?
How Visual Studio Code can help bridge the gap between full-stack development and DevOps

article-image-a-serverless-online-store-on-aws-could-save-you-money-build-one
Savia Lobo
14 Jun 2018
9 min read
Save for later

A serverless online store on AWS could save you money. Build one.

Savia Lobo
14 Jun 2018
9 min read
In this article you will learn to build an entire serverless project: an online store on AWS, beginning with a React SPA frontend hosted on AWS, followed by a serverless backend built with API Gateway and Lambda functions. This article is an excerpt taken from the book 'Building Serverless Web Applications', written by Diego Zanon. In this book, you will be introduced to the AWS services, learn how to estimate costs, and learn how to set up and use the Serverless Framework.

The serverless architecture of AWS' online store

We will build a real-world use case of a serverless solution. This sample application is an online store with the following requirements:

- A list of available products
- Product details with user ratings
- Adding products to a shopping cart
- Account creation and login pages

For a better understanding of the architecture, take a look at the following diagram, which gives a general view of how the different services are organized and how they interact.

Estimating costs

In this section, we will estimate the costs of our sample application based on some usage assumptions and Amazon's pricing model. All pricing values used here are from mid-2017 and consider the cheapest region, US East (Northern Virginia). This section covers an example to illustrate how costs are calculated. Since the billing model and prices can change over time, always refer to the official sources to get updated prices before making your own estimations. You can use Amazon's calculator, which is accessible at this link: http://calculator.s3.amazonaws.com/index.html. If you still have any doubts after reading the instructions, you can always contact Amazon's support for free to get commercial guidance.
Assumptions

For our pricing example, we can assume that our online store will receive the following traffic per month:

- 100,000 page views
- 1,000 registered user accounts
- 200 GB of data transferred, considering an average page size of 2 MB
- 5,000,000 code executions (Lambda functions), with an average of 200 milliseconds per request

Route 53 pricing

We need a hosted zone for our domain name, which costs US$ 0.50 per month. Also, we need to pay US$ 0.40 per million DNS queries to our domain. As this is a prorated cost, 100,000 page views will cost only US$ 0.04.

Total: US$ 0.54

S3 pricing

Amazon S3 charges US$ 0.023 per GB/month stored, US$ 0.004 per 10,000 requests to your files, and US$ 0.09 per GB transferred. However, as we are using CloudFront, transfer costs will be charged at CloudFront prices and will not appear in the S3 bill. If our website occupies less than 1 GB of static files and has an average page size of 2 MB composed of 20 files, we can serve 100,000 page views for less than US$ 20. With CloudFront in front, S3 costs go down to US$ 0.82, while CloudFront usage is paid for in its own section below. Real costs would be even lower because CloudFront caches files and would not need to make 2,000,000 file requests to S3, but let's skip this detail to reduce the complexity of this estimation. On a side note, the cost would be much higher if you had to provision machines to handle this number of page views for a static website with the same availability and scalability.

Total: US$ 0.82

CloudFront pricing

CloudFront is slightly more complicated to price, since you need to guess how much traffic comes from each region, as regions are priced differently.
The following table shows an example estimation:

Region          Estimated traffic   Cost per GB transferred   Cost per 10,000 HTTPS requests
North America   70%                 US$ 0.085                 US$ 0.010
Europe          15%                 US$ 0.085                 US$ 0.012
Asia            10%                 US$ 0.140                 US$ 0.012
South America   5%                  US$ 0.250                 US$ 0.022

As we have estimated 200 GB of files transferred with 2,000,000 requests, the total will be US$ 21.97.

Total: US$ 21.97

Certificate Manager pricing

Certificate Manager provides SSL/TLS certificates for free. You only need to pay for the AWS resources you create to run your application.

IAM pricing

There is no charge specifically for IAM usage. You will be charged only for the AWS resources your users consume.

Cognito pricing

Each user has an associated profile that costs US$ 0.0055 per month. However, there is a permanent free tier that allows 50,000 monthly active users without charges, which is more than enough for our use case. Besides that, we are charged for Cognito syncs of our user profiles, at US$ 0.15 for each 10,000 sync operations and US$ 0.15 per GB/month stored. If we estimate 1,000 active registered users with less than 1 MB per profile and fewer than 10 visits per month on average, we can estimate a charge of US$ 0.30.

Total: US$ 0.30

IoT pricing

IoT charges start at US$ 5 per million messages exchanged. As each page view will make at least 2 requests, one to connect and another to subscribe to a topic, we can estimate a minimum of 200,000 messages per month. We need to add 1,000 messages if we suppose that 1% of the users will rate the products, and we can ignore other requests like disconnect and unsubscribe because they are excluded from billing. In this setting, the total cost would be US$ 1.01.

Total: US$ 1.01

SNS pricing

We will use SNS only for internal notifications, when CloudWatch triggers a warning about issues in our infrastructure. SNS charges US$ 2.00 per 100,000 e-mail messages, but it offers a permanent free tier of 1,000 e-mails. So, it will be free for us.
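The CloudFront figure can be reproduced by weighting each region's rates by its traffic share. This sketch simply re-runs the article's arithmetic with the assumed 200 GB and 2,000,000 requests:

```python
# Assumed monthly usage from the article's estimate.
GB_TRANSFERRED = 200
REQUESTS = 2_000_000

# (traffic share, US$ per GB transferred, US$ per 10,000 HTTPS requests) per region.
REGIONS = {
    "North America": (0.70, 0.085, 0.010),
    "Europe":        (0.15, 0.085, 0.012),
    "Asia":          (0.10, 0.140, 0.012),
    "South America": (0.05, 0.250, 0.022),
}

def cloudfront_cost(gb=GB_TRANSFERRED, requests=REQUESTS):
    """Weight each region's transfer and request rates by its traffic share."""
    transfer = sum(share * rate_gb * gb for share, rate_gb, _ in REGIONS.values())
    https = sum(share * rate_req * requests / 10_000
                for share, _, rate_req in REGIONS.values())
    return round(transfer + https, 2)
```

Transfer works out to US$ 19.75 and HTTPS requests to US$ 2.22, matching the US$ 21.97 total above.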
CloudWatch pricing

CloudWatch charges US$ 0.30 per metric/month and US$ 0.10 per alarm, and offers a permanent free tier of 50 metrics and 10 alarms per month. If we create 20 metrics and expect 20 alarms in a month, we can estimate a cost of US$ 1.00.

Total: US$ 1.00

API Gateway pricing

API Gateway charges US$ 3.50 per million API calls received and US$ 0.09 per GB transferred out to the Internet. If we assume 5 million requests per month, with each response averaging 1 KB, the total cost of this service will be US$ 17.93.

Total: US$ 17.93

Lambda pricing

When you create a Lambda function, you need to configure the amount of RAM that will be available to it, ranging from 128 MB to 1.5 GB. Allocating more memory means additional costs. This breaks the philosophy of avoiding provisioning, but at least it's the only thing you need to worry about. Good practice here is to estimate how much memory each function needs and run some tests before deploying to production; a bad provision may result in errors or higher costs. Lambda has the following billing model:

- US$ 0.20 per 1 million requests
- US$ 0.00001667 per GB-second

Running time is counted in fractions of seconds, rounding up to the nearest multiple of 100 milliseconds. Furthermore, there is a permanent free tier that gives you 1 million requests and 400,000 GB-seconds per month without charges. In our use case scenario, we have assumed 5 million requests per month with an average of 200 milliseconds per execution. We can also assume that the allocated RAM is 512 MB per function:

- Request charges: Since 1 million requests are free, you pay for 4 million, which costs US$ 0.80.
- Compute charges: Here, 5 million executions of 200 milliseconds each gives us 1 million seconds. As we are running with 512 MB allocated, that results in 500,000 GB-seconds, of which 400,000 GB-seconds are free, leaving a charge of 100,000 GB-seconds that costs US$ 1.67.
Total: US$ 2.47

SimpleDB pricing

SimpleDB billing is as follows, where the free tier is valid for new and existing users:

- US$ 0.14 per machine-hour (25 hours free)
- US$ 0.09 per GB transferred out to the internet (1 GB free)
- US$ 0.25 per GB stored (1 GB free)

This gives the following charges:

- Compute charges: Considering 5 million requests with an average of 200 milliseconds of execution time, where 50% of this time is spent waiting for the database engine, we estimate 139 machine-hours per month. Discounting the 25 free hours, we have an execution cost of US$ 15.96.
- Transfer costs: Since we'll transfer data between SimpleDB and AWS Lambda, there is no transfer cost.
- Storage charges: If we assume a 5 GB database, this results in US$ 1.00, since 1 GB is free.

Total: US$ 16.96, but this will not be added to the final estimation, since we will run our application using DynamoDB.

DynamoDB pricing

DynamoDB requires you to provision the throughput capacity that you expect your tables to offer. Instead of provisioning hardware, memory, CPU, and other factors, you say how many read and write operations you expect, and AWS will handle the necessary machine resources to meet your throughput needs with consistent, low-latency performance. One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for objects up to 4 KB in size. One write capacity unit lets you write one object of up to 1 KB per second. Given these definitions, AWS offers a permanent free tier of 25 read units and 25 write units of throughput capacity, in addition to 25 GB of free storage.
It charges as follows:

- US$ 0.47 per month for every Write Capacity Unit (WCU)
- US$ 0.09 per month for every Read Capacity Unit (RCU)
- US$ 0.25 per GB/month stored
- US$ 0.09 per GB transferred out to the Internet

Since our estimated database will have only 5 GB, we are within the free tier, and we will not pay for transferred data because there is no transfer cost to AWS Lambda. Regarding read/write capacity, we have estimated 5 million requests per month. Evenly distributed, that is roughly two requests per second; we will consider it as one read and one write operation per second. We now need to estimate how many objects are affected by each operation. If a write operation manipulates 10 items on average and a read operation scans 100 objects, we would need to reserve 10 WCU and 100 RCU. As we have 25 WCU and 25 RCU for free, we only need to pay for 75 RCU per month, which costs US$ 6.75.

Total: US$ 6.75

Total pricing

Let's summarize the cost of each service in the following table:

Service        Monthly cost
Route 53       US$ 0.54
S3             US$ 0.82
CloudFront     US$ 21.97
Cognito        US$ 0.30
IoT            US$ 1.01
CloudWatch     US$ 1.00
API Gateway    US$ 17.93
Lambda         US$ 2.47
DynamoDB       US$ 6.75
Total          US$ 52.79

This results in a total cost of roughly US$ 50 per month in infrastructure to serve 100,000 page views. If you have a conversion rate of 1%, you can get 1,000 sales per month, which means that you pay about US$ 0.05 in infrastructure for each product that you sell.

In this article you learned the serverless architecture of an AWS online store and how to estimate its costs. If you've enjoyed reading the excerpt, do check out Building Serverless Web Applications to learn how to monitor the performance, efficiency, and errors of your apps, and how to test and deploy your applications.
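The whole estimate can be sanity-checked in a few lines. The rates below are the mid-2017 figures quoted above, so treat this as a reproduction of the article's arithmetic rather than a live pricing tool:

```python
def lambda_cost(requests=5_000_000, avg_seconds=0.2, memory_gb=0.5):
    """Lambda: US$ 0.20 per 1M requests plus US$ 0.00001667 per GB-second,
    with 1M requests and 400,000 GB-seconds free each month."""
    request_cost = max(requests - 1_000_000, 0) / 1_000_000 * 0.20
    gb_seconds = requests * avg_seconds * memory_gb
    compute_cost = max(gb_seconds - 400_000, 0) * 0.00001667
    return round(request_cost + compute_cost, 2)

def dynamodb_cost(wcu=10, rcu=100, storage_gb=5):
    """DynamoDB: US$ 0.47 per WCU, US$ 0.09 per RCU, US$ 0.25 per GB/month,
    with 25 WCU, 25 RCU, and 25 GB in the free tier."""
    return round(max(wcu - 25, 0) * 0.47
                 + max(rcu - 25, 0) * 0.09
                 + max(storage_gb - 25, 0) * 0.25, 2)

# Per-service monthly estimates from the article.
SERVICES = {
    "Route 53": 0.54, "S3": 0.82, "CloudFront": 21.97, "Cognito": 0.30,
    "IoT": 1.01, "CloudWatch": 1.00, "API Gateway": 17.93,
    "Lambda": lambda_cost(), "DynamoDB": dynamodb_cost(),
}

total = round(sum(SERVICES.values()), 2)
```

With the assumed traffic, `lambda_cost()` yields US$ 2.47, `dynamodb_cost()` yields US$ 6.75, and `total` reproduces the US$ 52.79 in the summary table. Changing one assumption (say, 1 GB of Lambda memory instead of 512 MB) immediately shows how the monthly bill shifts.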
Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform Serverless computing wars: AWS Lambdas vs Azure Functions Using Amazon Simple Notification Service (SNS) to create an SNS topic