Tech Guides

851 Articles

5 reasons to learn Generative Adversarial Networks (GANs) in 2018

Savia Lobo
12 Dec 2017
5 min read
Generative Adversarial Networks (GANs) are one of the most prominent branches of machine learning research today. Deep neural networks require a lot of data to train on and perform poorly when data is scarce. GANs can overcome this problem by generating new, realistic data, without resorting to tricks like data augmentation. Because the application of GANs in industry is still in its infancy, it is considered a highly desirable niche skill, and hands-on experience raises the bar even higher in the job market: it can fetch you higher pay than your colleagues and can be the feature that sets your resume apart.

Source: Gartner's Hype Cycle 2017

GANs, along with CNNs and RNNs, are part of the deep neural network experience that is in demand in the industry. Here are five reasons why you should learn GANs today and how Kuntal Ganguly's book, Learning Generative Adversarial Networks, helps you do just that. Kuntal is a big data analytics engineer at Amazon Web Services. He has around 7 years of experience building large-scale, data-driven systems using big data frameworks and machine learning, and has designed, developed, and deployed several large-scale distributed applications. He is a seasoned author whose books range across the data science spectrum, from machine learning and deep learning to Generative Adversarial Networks. The book shows how to implement GANs in your machine learning models in a quick and easy format, with plenty of real-world examples and hands-on tutorials.

1. Unsupervised learning is now a cakewalk with GANs

A major challenge of unsupervised learning is the massive amount of unlabelled data that must be worked through as part of data preparation. In traditional neural networks, labeling data is both costly and time-consuming. Generative Adversarial Networks make a creative aspect of deep learning possible: the networks can generate realistic images from real-world datasets (such as MNIST and CIFAR). GANs provide an easier way to train deep learning algorithms by slashing the amount of data required to train the models, with no labeling of data required. The book uses a semi-supervised approach to solve the problem of unsupervised learning for classifying images, and this approach can easily be carried over into your own problem domain.

2. GANs help you change a horse into a zebra using image style transfer

https://www.youtube.com/watch?v=9reHvktowLY

Turning an apple into an orange is magic, and GANs can do this magic without casting a spell. In image-to-image style transfer, the styling of one image is applied to another. GANs can perform image-to-image translations across various domains (such as changing an apple into an orange or a horse into a zebra) using Cycle-Consistent Generative Adversarial Networks (CycleGANs). The book gives detailed examples of how to turn the image of an apple into an orange using TensorFlow, and how to turn an image of a horse into a zebra using a GAN model.

3. GANs take your text as input and output an image

Generative Adversarial Networks can also be used for text-to-image synthesis, for example generating a photo-realistic image from a caption. To do this, a dataset of images with their associated captions is given as training data. The dataset is first encoded using a hybrid neural network called a character-level convolutional recurrent neural network, which creates a joint representation of image and text in a multimodal space for both the generator and the discriminator. In the book, Kuntal showcases the technique of stacking multiple generative networks to generate realistic images from textual information using StackGANs. The book then goes on to explain the coupling of two generative networks to automatically discover relationships between domains (for example, between shoes and handbags, or actors and actresses) using DiscoGANs.

4. GANs + transfer learning = no more building models from scratch

Source: Learning Generative Adversarial Networks

Data is the basis for training any machine learning model, and its scarcity can lead to a poorly trained model with a high chance of failure. Many real-life scenarios lack sufficient data, hardware, or resources to train bigger networks to the desired accuracy. So, is training from scratch a must? Transfer learning is a well-known deep learning technique that adapts an existing trained model to a task similar to the one at hand. The book showcases transfer learning with hands-on examples, then explains how to combine transfer learning and GANs to generate high-resolution realistic images from facial datasets. You will also learn how to create artistic hallucinations on images beyond GANs.

5. GANs help you take machine learning models to production

Most machine learning tutorials, video courses, and books explain training and evaluating models. But how do you take a trained model to production, put it to use, and make it available to customers? In the book, the author works through an example of developing a facial correction system using the LFW dataset to automatically correct corrupted images with your trained GAN model. The book also covers several techniques for deploying machine learning or deep learning models in production, both in data centers and in the cloud, with microservice-based containerized environments. You will also learn how to run deep models in a serverless environment and with managed cloud services.

This article just scratches the surface of what is possible with GANs and why learning them will change how you think about deep neural networks. To know more, grab your copy of Kuntal Ganguly's book on GANs: Learning Generative Adversarial Networks.
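The adversarial training loop described above is easier to grasp in code. Below is a minimal, generic sketch in PyTorch, not taken from Kuntal Ganguly's book: a generator learns to imitate a toy one-dimensional Gaussian while a discriminator learns to tell real samples from generated ones. All layer sizes, learning rates, and the target distribution are illustrative choices.

```python
import torch
import torch.nn as nn

# Toy setup: "real" data is drawn from N(4, 1.25); the generator must learn to mimic it.
latent_dim, data_dim, batch = 16, 1, 64

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # Train the discriminator on real vs. generated samples
    real = 4 + 1.25 * torch.randn(batch, data_dim)
    fake = G(torch.randn(batch, latent_dim)).detach()  # detach: don't update G here
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the discriminator
    fake = G(torch.randn(batch, latent_dim))
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# The mean of generated samples should drift toward 4 as training progresses.
print(G(torch.randn(1000, latent_dim)).mean().item())
```

The key point is that the generator never sees the real data directly; it improves only through the discriminator's feedback, which is the adversarial idea behind everything from CycleGAN to StackGAN.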


What is Automated Machine Learning (AutoML)?

Wilson D'souza
17 Oct 2017
6 min read
Are you a proud machine learning engineer who hates that the job tests your limits as a human being? Do you dread the long hours of data experimentation and data modeling that leave you high and dry? Automated Machine Learning, or AutoML, can put that smile back on your face. An AI that helps design AI, AutoML is the latest tooling being applied in the real world today, and market leaders such as Google have invested significantly in researching the field further. AutoML has seen a steep rise in research and new tools over the last couple of years, but its recent mention during Google I/O 2017 has piqued the interest of the entire developer community. What is AutoML all about, and what makes it so interesting?

Evolution of automated machine learning

Before we try to understand AutoML, let's look at what triggered the need for it. Until now, building machine learning models that work in the real world has been a domain ruled by researchers, scientists, and machine learning experts. The process of manually designing a machine learning model involves several complex and time-consuming steps:

- Pre-processing data
- Selecting an appropriate ML architecture
- Optimizing hyperparameters
- Constructing models
- Evaluating the suitability of models

Add to this the several layers of neural networks required for an efficient ML architecture: an n-layer neural network could result in n^n potential networks. This level of complexity can be overwhelming for the millions of developers who are keen on embracing machine learning. AutoML tries to solve this problem and makes machine learning accessible to a large group of developers by automating routine but complex tasks such as the design of neural networks. Since this cuts development time significantly and takes care of several complex tasks involved in building machine learning models, AutoML is expected to play a crucial role in bringing machine learning to the mainstream.

Approaches to automating model generation

With a growing body of research, AutoML aims to automate the following tasks in the field of machine learning:

- Model selection
- Parameter tuning
- Meta-learning
- Ensemble construction

It does this by using a wide range of algorithms and approaches:

- Bayesian optimization: One of the fundamental approaches to automating model generation is to use Bayesian methods for hyperparameter tuning. By modeling the uncertainty of parameter performance, different variations of the model can be explored to find an optimal solution.
- Meta-learning and ensemble construction: To further increase AutoML efficiency, meta-learning techniques are used to find and pick optimal hyperparameter settings. These can be coupled with auto-ensemble construction techniques to create effective ensemble models from a collection of models that undergo optimization. Using these techniques, a high level of accuracy can be achieved throughout the automated generation of models.
- Genetic programming: Certain tools like TPOT also use a variation of genetic programming (tree-based pipeline optimization) to automatically design and optimize ML models that offer highly accurate results for a given set of data. This approach assembles operators from various stages of the data pipeline into a tree-based pipeline; these pipelines are then optimized, and new pipelines are auto-generated using genetic programming.

If that weren't enough, Google disclosed in recent posts that it is using a reinforcement learning approach to push the development of efficient AutoML techniques even further.

What are some tools in this area?

Although it's still early days, we can already see some frameworks emerging to automate the generation of machine learning models:

- Auto-sklearn: Auto-sklearn, the tool that won the ChaLearn AutoML Challenge, provides a wrapper around the popular Python library scikit-learn to automate machine learning, and is a great addition to the ever-growing ecosystem of Python data science tools. Built on top of Bayesian optimization, it takes away the hassle of algorithm selection, parameter tuning, and ensemble construction while building machine learning pipelines. With auto-sklearn, developers can iterate on and refine their machine learning models rapidly, saving a significant amount of development time. The tool is still in its early stages of development, so expect a few hiccups while using it.
- DataRobot: DataRobot offers a machine learning automation platform for data scientists of all levels, aimed at significantly reducing the time to build and deploy predictive models. Since it's a cloud platform, it offers great power and speed throughout the automated model generation process. In addition to automating the development of predictive models, it offers other useful features such as a web-based interface, compatibility with leading tools such as Hadoop and Spark, scalability, and rapid deployment. It is one of the few machine learning automation platforms that are ready for industry use.
- TPOT: TPOT is yet another Python tool for automated machine learning. It uses a genetic programming approach to iterate on and optimize machine learning models. Like auto-sklearn, TPOT is built on top of scikit-learn. Interest in it is growing on GitHub, where it has 2,400 stars, a 100% rise in the past year alone. Its goals are quite similar to those of auto-sklearn: feature construction, feature selection, model selection, and parameter optimization. With these goals in mind, TPOT aims to build efficient machine learning systems in less time and with better accuracy.

Will automated machine learning replace developers?

AutoML as a concept is still in its infancy, but as market leaders like Google, Facebook, and others research the field further, AutoML will keep evolving at a brisk pace. Assuming that AutoML will replace humans in the field of data science, however, is a far-fetched thought, nowhere near reality. Here is why. AutoML as a technique is meant to make the neural network design process efficient, not to replace the humans and researchers who build neural networks. Its primary goal is to help experienced data scientists be more efficient at their work, that is, to boost productivity by a huge margin, and to reduce the steep learning curve for the many developers who are keen on designing ML models, that is, to make ML more accessible. With the advancements in this field, these are exciting times for developers to embrace machine learning and start building intelligent applications. We see automated machine learning as a game changer with the power to truly democratize the building of AI apps. With automated machine learning, you don't have to be a data scientist to develop an elegant AI app!
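As a concrete illustration of the genetic-programming approach mentioned above, here is a minimal TPOT sketch in Python. The dataset, the small search budget, and the output filename are arbitrary choices for demonstration, and TPOT plus scikit-learn need to be installed for it to run.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

# Small built-in dataset, used only for demonstration
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# TPOT evolves scikit-learn pipelines with genetic programming;
# the tiny budget here just keeps the run short.
tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2, random_state=42)
tpot.fit(X_train, y_train)

print("Test accuracy:", tpot.score(X_test, y_test))
tpot.export("best_pipeline.py")  # writes the winning pipeline out as plain scikit-learn code
```

The exported file contains an ordinary scikit-learn pipeline, so the automation stops exactly where hand-tuning can take over.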


From Web Developer to App Developer

Oliver Blumanski
12 Dec 2016
4 min read
As a web developer, you have to adapt to new technologies every year. In the last four years, the JavaScript world has exploded, and its toolset is changing very fast. In this blog post, I will describe my experience of changing from a web developer to an app developer.

My start in the mobile app world

My first attempt at creating a mobile app was a simple JavaScript one-page app, which was just a website designed for mobile devices. It wasn't very impressive, but at the time there was no React Native or Ionic Framework. It was nice, but it wasn't great.

Ionic Framework

Later, I developed apps using the Ionic/Angular framework, which uses Cordova as a wrapper. Ionic apps run in a web view on the device. Working with Ionic was pretty easy, and its performance improved over time, so I found it to be a good toolset. If you need an app that runs on a broad spectrum of devices, Ionic is a good choice.

React Native

A while ago, I made the change to React Native. React Native supported only iOS at the start, but once it supported Android as well, I thought the time was right to switch. The React Native world is a bit different from the Ionic world. React Native is still new, and many modules are a work in progress; React Native itself is released every two weeks with a new version. Working with React Native is bleeding-edge development.

React Native and Firebase are what I use right now. When I was working with Ionic, I used a SQLite database for caching on the device and Ajax to get data from a remote API. For notifications, I used Google GCM and Pushwoosh, and for uploads, AWS S3. With React Native, I chose the new Firebase v3, which came out earlier this year. Firebase offers a real-time database, authentication, cloud messaging, storage, analytics, offline data capability, and much more. Firebase can replace all of the third-party tools I used before. For further information, check out the Firebase documentation.

Google Firebase supports three platforms: iOS, Android, and the web. Unfortunately, the web platform does not support offline capabilities, notifications, and some other features. If you want to use all the features Firebase has to offer, there is a React Native module that wraps the iOS and Android native SDKs. Its JavaScript API is identical to the Firebase web platform's JavaScript API, so you can use the Firebase web docs with it.

Developing with React Native, you come into contact with a lot of different technologies and programming languages. You have to deal with Xcode, and on Android you have to add or change Java code and deal with Gradle, constant google-services upgrades, and many other things. It is fun to work with React Native, but it can also be frustrating when you hit unfinished modules or outdated documentation on the web. It pushes you into new areas, so you learn Java, Objective-C, or both. So, why not?

Firebase v3 features

Let's look at some of the Firebase v3 features.

Firebase Authentication

One of the great features that Firebase offers is authentication. It has Facebook login, Twitter login, Google login, GitHub login, anonymous login, and email/password sign-up ready to go. OK, to get the Facebook login running, you will still need a third-party module; I have recently used one module for Facebook login and another for Google login.

Firebase Cloud Messaging

You can receive notifications on the device, but the behaviour differs depending on the state of the app, for instance whether the app is open or closed.

Firebase Cloud Messaging server

You may want to send messages to all or particular users or devices, and you can do this via an FCM server. I use a Node.js script as the FCM server, with a module that handles the sending.

Firebase Real-Time Database

You can subscribe to database queries, so as soon as data changes, your app gets the new data without a reload; alternatively, you can fetch the data just once. The real-time database uses WebSockets to deliver data.

Conclusion

As a developer, you have to evolve with technology and keep up with upcoming development tools. I think that mobile development is more exciting than web development these days, and this is the reason why I would like to focus more on app development.

About the author

Oliver Blumanski is a developer based out of Townsville, Australia. He has been a software developer since 2000, and can be found on GitHub @blumanski.
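The author drives FCM from a Node.js script, but the same server-side send can be sketched with the Firebase Admin SDK for Python, shown here only to keep the code samples in this roundup in one language. The service-account path and device token are placeholders, and this is a rough illustration rather than the author's actual setup.

```python
# pip install firebase-admin
import firebase_admin
from firebase_admin import credentials, messaging

# Placeholder path to a service-account key downloaded from the Firebase console
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred)

message = messaging.Message(
    notification=messaging.Notification(
        title="Hello from the server",
        body="This notification was pushed via FCM",
    ),
    token="DEVICE_REGISTRATION_TOKEN",  # placeholder token collected from the app
)

response = messaging.send(message)  # returns a message ID string on success
print("Sent:", response)
```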


Paper in Two minutes: i-RevNet, a deep invertible convolutional network

Sugandha Lahoti
02 Apr 2018
4 min read
The ICLR 2018 accepted paper, i-RevNet: Deep Invertible Networks, introduces i-RevNet, an invertible convolutional network that does not discard any information about the input while classifying images. The paper is authored by Jörn-Henrik Jacobsen, Arnold W.M. Smeulders, and Edouard Oyallon. The 6th annual ICLR conference is scheduled for April 30 to May 03, 2018.

i-RevNet, a deep invertible convolutional network

What problem is the paper attempting to solve?

A CNN is generally composed of a cascade of linear and nonlinear operators. These operators are very effective at classifying images of all sorts but reveal little about the contribution of the internal representation to the classification. The learning process of a CNN works by progressively reducing large amounts of uninformative variability in the images to reveal the essence of the visual class. However, the extent to which information is discarded is lost somewhere in the intermediate nonlinear processing steps. There is also a widespread belief that discarding information is essential for learning representations that generalize well to unseen data. The authors show that discarding information is not necessary and back this up with empirical evidence. The paper also provides an understanding of the variability-reduction process by proposing an invertible convolutional network. The i-RevNet does not discard any information about the input while classifying images. It has a built-in pseudo-inverse, allowing for easy inversion, and it uses linear, invertible operators to perform downsampling instead of non-invertible variants like spatial pooling.

Paper summary

i-RevNet is an invertible deep network that builds upon the recently introduced RevNet, with the non-invertible components of the original RevNet replaced by invertible ones. i-RevNets retain all information about the input signal in every intermediate representation up until the last layer. They achieve the same performance on ImageNet as comparable non-invertible RevNet and ResNet architectures.

A figure in the paper describes the blocks of an i-RevNet. The strategy implemented by an i-RevNet consists of an alternation between additions and nonlinear operators, while progressively downsampling the signal. The pair of outputs from the final layer is concatenated through a merging operator. Using this architecture, the authors avoid the non-invertible modules of a RevNet (such as max-pooling or strides), which are necessary to train it in a reasonable time and are designed to build invariance with respect to translation variability. Their method replaces the non-invertible modules with linear, invertible modules S_j that can reduce the spatial resolution while maintaining the layer's size by increasing the number of channels.

Key takeaways

- This work provides solid empirical evidence that, on large-scale supervised problems, it is possible to learn invertible representations that discard no information about their input.
- i-RevNet, the proposed invertible network, is a class of CNN that is fully invertible and permits the input to be recovered exactly from its last convolutional layer.
- i-RevNets achieve the same classification accuracy on complex datasets, as illustrated on ILSVRC-2012, as RevNet and ResNet architectures with a similar number of layers.
- The inverse network is obtained for free when training an i-RevNet, requiring only minimal adaptation to recover inputs from the hidden representations.

Reviewer feedback summary

Overall score: 25/30
Average score: 8.3

Reviewers agreed the paper is a strong contribution, despite some comments about the significance of the result, i.e., why invertibility is a "surprising" property for learnability, in the sense that F(x) = {x, phi(x)}, where phi is a standard CNN, satisfies both properties: it is invertible, and linear measurements of F produce good classification. That said, the reviewers agreed that the paper is well written and easy to follow, and considered it a great contribution to the ICLR conference.
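The invertibility argument is easiest to see in code. Below is a simplified additive coupling block in PyTorch, in the spirit of RevNet/i-RevNet rather than the paper's exact architecture: half of the channels pass through unchanged while the other half receives a residual update, so the block can be undone exactly by subtracting the same residual. Layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class AdditiveCouplingBlock(nn.Module):
    """A toy invertible block: split channels into (x1, x2),
    pass x2 through untouched, and add a residual of x2 to x1."""

    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.f = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1), nn.ReLU(),
            nn.Conv2d(half, half, 3, padding=1),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        y1 = x2                      # carried through unchanged
        y2 = x1 + self.f(x2)         # residual update
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        x2 = y1
        x1 = y2 - self.f(y1)         # subtract the same residual to undo the update
        return torch.cat([x1, x2], dim=1)

block = AdditiveCouplingBlock(16)
x = torch.randn(2, 16, 8, 8)
print(torch.allclose(block.inverse(block(x)), x))  # True: no information is discarded
```

The actual i-RevNet additionally replaces pooling with invertible downsampling that reshuffles spatial positions into channels, but the coupling idea above is what lets every intermediate representation be mapped back to the input.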


MVP for Android

HariVigneshJayapalan
04 Apr 2017
6 min read
The Android framework does not encourage any specific way to design an application. In a way, that makes the framework more powerful and more vulnerable at the same time. You may be asking yourself things like, "Why should I know about this? I'm provided with Activity and I can write my entire implementation using a few Activities and Fragments, right?" Based on my experience, I have realized that solving a problem or implementing a feature at a given point in time is not enough. Over time, our apps go through many change cycles and feature management, and maintaining them will create havoc in our application if it is not designed properly with separation of concerns. That's why developers have come up with architectural design patterns for better code crafting.

How has it evolved?

Most developers started creating Android apps with Activity at the center, capable of deciding what to do and how to fetch data. Activity code grew over time and became a collection of non-reusable components. Then developers started packaging those components so that the Activity could use them through their exposed APIs. Then they took pride in breaking the code into bits and pieces as much as possible. After that, they found themselves in an ocean of components with hard-to-trace dependencies and usage. Later, we were introduced to the concept of testability and found that regression is much safer if the code is written with tests. Developers realized that the jumbled code they had developed in the above process was very tightly coupled with the Android APIs, preventing JVM tests and hindering an easy design of test cases. This is the classic MVC, with Activity or Fragment acting as a Controller.

SOLID principles

SOLID principles are object-oriented design principles, thanks to dear Robert C. Martin. According to the SOLID article on Wikipedia, the acronym stands for:

S (SRP): Single responsibility principle. A class must have only one responsibility and do only the task for which it has been designed. Otherwise, if our class assumes more than one responsibility, we will have high coupling, making our code fragile with any change.

O (OCP): Open/closed principle. A software entity must be easily extensible with new features without having to modify its existing code in use. Open for extension: new behavior can be added to satisfy new requirements. Closed for modification: adding new behavior does not require modifying the existing code. If we apply this principle, we get extensible systems that are less prone to errors whenever the requirements change. We can use abstraction and polymorphism to help us apply this principle.

L (LSP): Liskov substitution principle. This principle, defined by Barbara Liskov, says that objects must be replaceable by instances of their subtypes without altering the correct functioning of our system. By applying this principle, we can validate that our abstractions are correct.

I (ISP): Interface segregation principle. A class should never implement an interface that it does not use. Failure to comply with this principle means that our implementations will depend on methods we do not need but are obliged to define. Therefore, implementing a specific interface is better than implementing a general-purpose one. An interface is defined by the client that will use it, so it should not have methods that the client will not implement.

D (DIP): Dependency inversion principle. A particular class should not depend directly on another class, but on an abstraction (interface) of that class. When we apply this principle, we reduce dependency on specific implementations and thus make our code more reusable.

MVP somehow tries to follow (though not 100% completely) all five of these principles. You can look up clean architecture for a pure SOLID implementation.

What is an MVP design pattern?

The MVP design pattern is a set of guidelines that, if followed, decouples the code for reusability and testability. It divides the application components based on their roles, called separation of concerns. MVP divides the application into three basic components:

- Model: The Model represents a set of classes that describe the business logic and data. It also defines business rules for the data, that is, how the data can be changed and manipulated. In other words, it is responsible for handling the data part of the application.
- View: The View represents the UI components. It is only responsible for displaying the data it receives from the Presenter, and it transforms the model(s) into UI. In other words, it is responsible for laying out the views with specific data on the screen.
- Presenter: The Presenter is responsible for handling all UI events on behalf of the View. It receives input from users via the View, processes the users' data with the help of the Model, and passes the results back to the View. Unlike the View and Controller, the View and Presenter are completely decoupled from each other and communicate through an interface. Also, unlike a Controller, the Presenter does not manage incoming request traffic. In other words, it is a bridge that connects a Model and a View, and it also acts as an instructor to the View.

MVP lays down a few ground rules for the above components, as listed below:

- A View's sole responsibility is to draw the UI as instructed by the Presenter. It is the dumb part of the application.
- The View delegates all user interactions to its Presenter.
- The View never communicates with the Model directly.
- The Presenter is responsible for delegating the View's requirements to the Model and instructing the View with actions for specific events.
- The Model is responsible for fetching data from the server, database, and file system.

MVP projects for getting started

Every developer will have his or her own way of implementing MVP. I'm listing a few projects below. Migrating to MVP will not be quick and will take some time; please take your time and get your hands dirty with MVP:

- https://github.com/mmirhoseini/marvel
- https://github.com/saulmm/Material-Movies
- https://fernandocejas.com/2014/09/03/architecting-android-the-clean-way/

About the author

HariVigneshJayapalan is a Google-certified Android app developer, IDF-certified UI & UX professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur.
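To make the ground rules above concrete, here is a small, language-agnostic sketch of the MVP contract. It is written in Python purely for brevity; in an Android project the View and Presenter contracts would typically be Java or Kotlin interfaces. The login scenario, class names, and authentication rule are invented for illustration.

```python
from abc import ABC, abstractmethod

class LoginView(ABC):
    """The dumb View: it only renders what the Presenter tells it to."""
    @abstractmethod
    def show_welcome(self, username: str): ...
    @abstractmethod
    def show_error(self, message: str): ...

class UserModel:
    """The Model: owns data access and business rules."""
    def authenticate(self, username: str, password: str) -> bool:
        return bool(username) and password == "secret"  # placeholder business rule

class LoginPresenter:
    """The Presenter: handles UI events on behalf of the View and talks to the Model."""
    def __init__(self, view: LoginView, model: UserModel):
        self.view, self.model = view, model

    def on_login_clicked(self, username: str, password: str):
        if self.model.authenticate(username, password):
            self.view.show_welcome(username)
        else:
            self.view.show_error("Invalid credentials")

class ConsoleView(LoginView):
    """A throwaway View implementation; on Android this would be an Activity or Fragment."""
    def show_welcome(self, username): print(f"Welcome, {username}!")
    def show_error(self, message): print(f"Error: {message}")

LoginPresenter(ConsoleView(), UserModel()).on_login_clicked("hari", "secret")
```

Because the Presenter depends only on the LoginView abstraction, it can be exercised in plain unit tests with a fake View, which is exactly the testability benefit described above.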


Top 5 Newish JavaScript Libraries (That Aren't AngularJS...)

Ed Gordon
30 Jul 2014
5 min read
AngularJS is, like, so 2014. Already the rumblings have started that there are better ways of doing things. I thought it prudent to look into the future to see what libraries are on the horizon for web developers now, and in the future.

5. Famo.us

"Animation can explain whatever the mind of man can conceive." - Walt Disney

Famo.us is a clever library. It's designed to help developers create application user interfaces that perform well; as well, in fact, as native applications. In a moment of spectacular out-of-the-box thinking, Famo.us brings with it its own rendering engine to replace the engine that browsers supply. To get the increase in performance from HTML5 apps that they wanted, Famo.us looked at which tech does rendering best, namely game technologies such as Unity and Unreal Engine. CSS is moved into the framework and written in JavaScript instead, which makes transformations and animations quicker. It's a new way of thinking for web developers, so you best dust off the Unity rendering tutorials… Famo.us makes things running in the browser about as sleek as they're likely to be over the next few years, and it's massively exciting for web developers.

4. Ractive

"The meeting of two personalities is like the contact of two chemical substances: if there is any reaction, both are transformed." - Carl Jung

Manipulating the Document Object Model (which ties together all the webpages we visit) has been the major foe of web developers for years. MooTools, YUI, jQuery, AngularJS, Famo.us, and everything in between have offered developers productivity solutions to enable them to manipulate the DOM to their clients' needs in a more expedient manner. One of the latest libraries to help DOM manipulators at large is Ractive.js, developed by the team at The Guardian (well, mainly one guy: Rich Harris). Its focus remains on UI, so while it borrows heavily from Angular (it was initially called AngularBars), it's a simpler tool at heart. Or at least, it approaches the problems of DOM manipulation in as simple a way as possible. Ractive is part of the reactive programming direction that JavaScript (and programming generally) seems to be heading in at the moment.

3. DC.js

"A map does not just chart, it unlocks and formulates meaning; it forms bridges between here and there, between disparate ideas that we did not know were previously connected." - Reif Larsen, The Selected Works of T.S. Spivet

DC.js, borrowing heavily from both D3 and Crossfilter, enables you to visualize linked data through reactive (a theme developing in this list) charts. I could try to explain the benefits in text, but sometimes it's worth just going and having a play around (after you've finished this post). It uses D3 for the visualization, so everything's in SVG, and it uses Crossfilter to handle the underlying linkage of data. For a world of growing data, it provides users with immediate and actionable insight, and it is well worth a look. This is the future of data visualization on the web.

2. Lo-dash

"The true crime fighter always carries everything he needs in his utility belt, Robin." - Batman

There's something appealing about a utility belt; something that has called to all walks of life, from builders to Batman, since man had more than one tool at his disposal. Lo-dash, and Underscore.js that came before it, is no different. It's a library of useful JavaScript functions that abstract away some of the pain of JS development, whilst boosting performance over Underscore.js. It's actually based around Underscore, which at the time of writing is the most depended-upon library used in Node, but it builds on the good parts and gets rid of the not so good. Lo-dash will take over from Underscore in the near future. Watch this space.

1. Polymer

"We are dwarfs astride the shoulders of giants. We master their wisdom and move beyond it. Due to their wisdom we grow wise and are able to say all that we say, but not because we are greater than they." - Isaiah di Trani

As with a lot of things, rather than trying to reinvent solutions to existing problems, Google is trying to reinvent the things that lead to the problem. Web Components is a W3C standard that's going to change the way we build web applications for the better, and Polymer is the framework that allows you to build these Web Components now. Web Components envision a world where, as a developer, you can select a component from the massive developer shelf of the Internet, call it, and use it without any issues. Polymer provides access to these components; UI components such as a clock (JavaScript that's beyond my ability to write, at least, and a time-sink for normal JS developers) can be called with:

<polymer-ui-clock></polymer-ui-clock>

This gives you a pretty clock that you can go and customize further if you want. Essentially, Polymer puts you in a dialog with the larger development world; you no longer need to craft solutions for your single project, because you can use and reuse components that others have developed. It allows us to stand on the shoulders of giants. It's still some way off standardization, but it's going to redefine what application development means for a lot of people, and enable a wider range of applications to be created quickly and efficiently.

"There's always a bigger fish." - Qui-Gon Jinn

There will always be a new challenger, an older guard, and a bigger fish, but these libraries represent the continually changing face of web development. For now, at least!

How to protect yourself from a botnet attack

Hari Vignesh
22 Oct 2017
6 min read
The word 'botnet' is formed from the words 'robot' and 'network'. Cybercriminals use special Trojan viruses to breach the security of several users' computers, take control of each computer, and organize all of the infected machines into a network of 'bots' that the criminal can manage remotely. A botnet is basically a collection of Internet-connected devices, which may include PCs, servers, mobile devices, and Internet of Things devices, that are infected and controlled by a common type of malware. Users are often unaware that a botnet has infected their system.

How can it affect you?

Often, the cybercriminal will seek to infect and control thousands, tens of thousands, or even millions of computers, so that they can act as the master of a large 'zombie network' or 'bot network' capable of delivering a Distributed Denial of Service (DDoS) attack, a large-scale spam campaign, or other types of cyberattack. In some cases, cybercriminals will establish a large network of zombie machines and then sell access to the zombie network to other criminals, either on a rental basis or as an outright sale. Spammers may rent or buy a network in order to operate a large-scale spam campaign.

How do botnets work?

Botnet malware typically looks for vulnerable devices across the Internet, rather than targeting specific individuals, companies, or industries. The objective of creating a botnet is to infect as many connected devices as possible, and to use the computing power and resources of those devices for automated tasks that generally remain hidden from the users of the devices. For example, an ad fraud botnet that infects a user's PC will take over the system's web browsers to divert fraudulent traffic to certain online advertisements. However, to stay concealed, the botnet won't take complete control of the web browsers, which would alert the user. Instead, it may use a small portion of the browser's processes, often running in the background, to send a barely noticeable amount of traffic from the infected device to the targeted ads. On its own, that fraction of bandwidth taken from an individual device won't offer much to the cybercriminals running the ad fraud campaign. However, a botnet that combines millions of devices can generate a massive amount of fake traffic for ad fraud, while avoiding detection by the individuals using those devices.

Notable botnet attacks

The Zeus malware, first detected in 2007, is one of the best-known and most widely used malware types in the history of information security. Zeus uses a Trojan horse program to infect vulnerable devices and systems, and variants of this malware have been used for various purposes over the years, including spreading CryptoLocker ransomware.

The Srizbi botnet, first discovered in 2007, was for a time the largest botnet in the world. Srizbi, also known as the Ron Paul spam botnet, was responsible for a massive amount of email spam, as much as 60 billion messages a day, accounting for roughly half of all email spam on the Internet at the time. In 2007, the Srizbi botnet was used to send out political spam emails promoting then-U.S. presidential candidate Ron Paul.

An extensive cybercrime operation and ad fraud botnet known as Methbot was revealed in 2016 by the cybersecurity services company White Ops. According to security researchers, Methbot was generating between $3 million and $5 million in fraudulent ad revenue daily last year by producing fraudulent clicks for online ads, as well as fake views of video advertisements.

Several powerful, record-setting distributed denial-of-service (DDoS) attacks were observed in late 2016 and were later traced to a new strain of malware known as Mirai. The DDoS traffic was produced by a variety of connected devices, such as wireless routers and CCTV cameras.

Preventing botnet attacks

In the past, botnet attacks were disrupted by focusing on the command-and-control source. Law enforcement agencies and security vendors would trace the bots' communications to wherever the C&C servers were hosted, and then force the hosting or service provider to shut them down.

There are several measures that users can take to prevent botnet infection. Because bot infections usually spread via malware, many of these measures actually focus on preventing malware infections. Recommended practices for botnet prevention include:

- Network baselining: Network performance and activity should be monitored so that irregular network behavior is apparent.
- Software patches: All software should be kept up to date with security patches.
- Vigilance: Users should be trained to refrain from activity that puts them at risk of bot infections or other malware. This includes opening emails or messages, downloading attachments, or clicking links from untrusted or unfamiliar sources.
- Anti-botnet tools: Anti-botnet tools provide botnet detection to augment preventative efforts by finding and blocking bot viruses before infection occurs. Most programs also offer features such as scanning for bot infections and botnet removal. Firewalls and antivirus software typically include basic tools for botnet detection, prevention, and removal, while tools like Network Intrusion Detection Systems (NIDS), rootkit detection packages, network sniffers, and specialized anti-bot programs can provide more sophisticated detection, prevention, and removal.

However, as botnet malware has become more sophisticated and its communications have become decentralized, takedown efforts have shifted away from targeting C&C infrastructures toward other approaches. These include identifying and removing botnet malware infections at the source devices, identifying and replicating the peer-to-peer communication methods and, in cases of ad fraud, disrupting the monetization schemes rather than the technical infrastructures.

Preventing botnet attacks has been complicated by the emergence of malware like Mirai, which targets routers and IoT devices that have weak or factory-default passwords and can be easily compromised. In addition, users may be unable to change the passwords on many IoT devices, which leaves them exposed to attack. If the manufacturer cannot remotely update the devices' firmware to patch them or change their hardcoded passwords, it may have to conduct a factory recall of the affected devices.

About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.


The Year of the Python

Sam Wood
04 Jan 2016
4 min read
When we asked developers in our $5 Skill Up report what the most valuable skill of 2015 was, do you know what they said? Considering the title of this blog and the big snake image, you can probably guess. Python. Python was the most valuable skill they learned in 2015. But 2015 is over, so what did developers say they're hoping to learn from scratch, or improve their skills in, in 2016? Correct guess again! It's Python. Despite turning 26 this Christmas (it's the same age as Taylor Swift, you know), the language is thriving, and being set to be the most widely adopted new language for two years running is impressive. So why are people flocking to it? Why are we living in the years of the Python? There are three main reasons.

1. It's being learned by non-developers

In the Skill Up survey, the people most likely to mention Python as a valuable skill they had learned also tended not to describe themselves as traditional software developers. The job role most likely to be learning Python was 'Academics', followed by analysts, engineers, and people in non-IT-related roles. These aren't people who live to code, but they are people who are finding the ability to program an increasingly useful professional skill. Rather than working with software every day, they are using Python to perform specific and sophisticated tasks. Much like working knowledge of Microsoft Office became the essential office skill of the Nineties and Noughties, it looks like Python is becoming the language of choice for those who know they need to be able to code but don't necessarily define themselves as working solely in dev or IT.

2. It's easy to pick up

I don't code. When I talked to my friends who do code, mumbling about maybe learning and looking for suggestions, they told me to learn Python. One of their principal reasons was that it's so bloody easy! This also ties in heavily to why we see Python being adopted by non-developers. Often learned as a first programming language, the speed and ease with which you can pick up Python is a boon, even with minimal prior exposure to programming concepts. With much less emphasis on syntax, there's less chance of tripping up on missing parentheses or semicolons than with more complex languages. Originally designed (and still widely used) as a scripting language, Python has become extremely effective for writing standalone programs. The shorter learning curve means that new users will find themselves creating functioning and meaningful programs in a much shorter period of time than with, say, C or Java.

3. It's a good all-rounder

Python can do a ton. From app development, to building games, to its dominance of data analysis, to its continued colonization of JavaScript's sovereign territory of web development through frameworks like Django and Flask, it's a great language for anyone who wants to learn something non-specialized. This isn't to say it's a Jack of All Trades, Master of None, however. Python is one of the key languages of scientific computing, aided by fast (and C-based) libraries like NumPy. Indeed, the strength of Python's versatility is the power of its many libraries to let it specialize so effectively.

Welcoming Our New Python Overlords

Python is the double threat: used across the programming world by experienced and dedicated developers, and extensively and heartily recommended as the first language for people to pick up when they start working with software and coding. By combining ease of entry with effectiveness, it has come to stand as the most valuable tech skill to learn for the middle of the decade. How many years of the Python do you think lie ahead?
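To give a feel for both points, the readable syntax and the NumPy-backed number crunching, here is a tiny self-contained snippet; the data is made up.

```python
import numpy as np

# Made-up daily temperatures; NumPy does the heavy lifting in fast C code
temperatures = np.array([21.3, 22.1, 19.8, 23.4, 20.9])

print("mean:", round(float(temperatures.mean()), 1))
print("hottest day:", int(temperatures.argmax()) + 1)  # 1-based day index
```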


Why AWS is the preferred cloud platform for developers working with big data

Savia Lobo
07 Jun 2018
4 min read
The cloud computing revolution has well and truly begun, but the market is fiercely competitive. A handful of cloud vendors lead the pack, and it's not easy to say which one is best. AWS, Google Cloud Platform, Microsoft Azure, and Oracle are leading the way when it comes to modern cloud-based infrastructure, and it's hard to separate them.

Big data skills are in high demand because they help businesses draw out useful insights. Organizations carry out advanced analytics in order to gain a deep, exploratory perspective on their data. After a deep analysis is performed, BI tools such as Tableau, Microsoft Power BI, Qlik Sense, and so on are used to build the dashboard visualizations, reports, and performance management metrics that make the analytics actionable. This shows how important analytics and BI tools are for getting the best out of big data. In this year's Skill Up survey, a clear frontrunner emerged for developers: AWS.

Source: Packt Skill Up Survey

Let's talk AWS

Amazon is said to outplay every other cloud platform player in the market. AWS provides its customers with a highly robust infrastructure and commendable security options. In its inception year, 2006, AWS already had more than 150,000 developers signed up to use its services, as Amazon announced in a press release that year. In a recent survey conducted by Synergy Research, AWS is among the top cloud platform providers with a 35% market share. Its top customers include NASA, Netflix, Adobe Systems, Airbnb, and many more. Cloud technology is no longer a new and emerging trend; it has truly become mainstream. What sets AWS on a different plateau is that it has caught developers' attention with its impressive suite of developer tools. It's a cloud platform designed with continuous delivery and DevOps in mind.

AWS: every developer's den

Once you're an AWS member, you can experience the hundreds of different services that it offers, from core computation and content delivery networks to IoT and game development platforms. If you're worried about how to pay for everything you have used, don't be: AWS offers its complete package of solutions across six modes of payment. It also offers hundreds of templates in every programming language to fit your choice of project. The pay-as-you-go model enables customers to use only the features they require, which avoids unnecessary purchases of resources that would add no value to the business.

Security on AWS is something users appreciate. AWS' configuration options, management policies, and reliable security are the reasons one can easily trust its cloud services. AWS has layers of security encryption that enable high-end protection of user data. It also manages user privileges using IAM (Identity and Access Management) roles, which helps restrict the resources a user can access and greatly reduces malpractice.

AWS provides developers with autoscaling, one of the most important features a developer needs. With Auto Scaling, developers can put routine management tasks on autopilot; AWS takes care of them, and developers can instead focus more on processes, development, and programming.

The AWS free tier runs an Amazon EC2 instance and includes S3 storage, EC2 compute hours, Elastic Load Balancer time, and so on. This enables developers to try the AWS APIs within their software to enhance it further. AWS also cuts down the deployment time required to provision a web server: by using Amazon Machine Images, one can have a machine deployed and ready to accept connections in a short time.

Amazon's logo says it all. It provides A-to-Z services under one roof for developers, businesses, and general users. Each service is tailored to a different purpose and runs on dedicated, specialized hardware. Developers can easily choose Amazon for their development needs, pay as they go, and make the most of it without buying anything up front. Though there are other service providers such as Microsoft Azure, Google Cloud Platform, and so on, Amazon offers functionality that others are yet to match.

Verizon chooses Amazon Web Services (AWS) as its preferred cloud provider
How to secure ElasticCache in AWS
How to create your own AWS CloudTrail
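As an illustration of the point about Amazon Machine Images, here is a short boto3 sketch that launches a single EC2 instance from an AMI. The AMI ID is a placeholder, and the snippet assumes AWS credentials and a default region are already configured in your environment.

```python
import boto3

# Credentials and region are picked up from the environment or ~/.aws configuration
ec2 = boto3.resource("ec2")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)

print("Launched:", instances[0].id)
```

A few lines like these, wired into a deployment script, are what the pay-as-you-go and rapid-provisioning claims above look like in practice.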


Google Fuchsia: What's all the fuss about?

Amarabha Banerjee
02 Jul 2018
4 min read
It was back in 2016 that we first heard about the Google Fuchsia platform, which was supposed to be an alternative to the Android operating system. Google revealed a work-in-progress version in 2016, and since then a lot of dust had gathered on the news until the latest developments resurfaced in January 2018. The question on everyone's mind is: do you really need to be concerned about Fuchsia OS, and does it have what it takes to challenge the market positioning of Android? Before we come to these questions, let's look at what Fuchsia has to offer.

The Fuchsia UI: inspired by Material Design

Fuchsia brings a complete Material Design approach to UI design. The first look shared by Google seemed a lot different from the Android UI.

Source: The Droid Guy (basic Android UI)
Source: TechRadar (Google Fuchsia on a smartphone device)

There is more depth; the text, images, and wallpapers all look sleeker and feel like a peek through a window rather than underlays to text and icons. Fuchsia currently offers two layouts: a mobile-centric design codenamed Armadillo, and a more traditional desktop experience codenamed Capybara. While the mobile-centric version is the main focus, the desktop version is far from ready. Google is pushing Material Design heavily with Fuchsia; how far it will succeed depends on its roadmap and future investment plans.

The concept of one OS across all devices

It has been a long-standing dream of Google to make all its different devices work under one OS platform, and Google seems to be betting on Fuchsia to be that OS on desktops, tablets, and mobiles. The Google Ledger facility gives you a cloud account to seamlessly access and manage different Google services, and seamless transition of data from one device to another is sure to help users play around with it effortlessly.

Using the custom kernel feature

What makes Android version updates a pain to implement is that different devices run different kernel versions of Linux, the spine of Android. As such, update rollouts are never in unison. This can create security flaws and can be a real worry for Android users. This is where Fuchsia trumps Android. Fuchsia has its own kernel, Zircon, which is designed to be consistently upgradeable. This isolates apps from the kernel, adding an extra security layer, and it also means an OS update doesn't render those apps useless.

Language interoperability

The most important aspect from the developer's perspective is multi-language support. Fuchsia is written in Dart using the latest Google cross-platform framework, Flutter. It also provides support for development in Go and Rust, and it is extending support to Swift developers. This, along with the FIDL protocol, will help developers easily build different parts in different languages, such as using a Go-based backend with a Dart-based front end. This gives developers immense power and flexibility.

Although these features seem useful and interesting, Fuchsia will need a steady development pipeline and regular updates to reach a stable version that devices can use as their default OS. Keeping current development trends in mind, we can safely conclude that, until the next stable release, you can continue to browse on your Android phone and not worry about it being replaced by Fuchsia or any other competitor.

Google updates biometric authentication for Android P, introduces BiometricPrompt API
Google's Android Things, developer preview 8: First look
Google Flutter moves out of beta with release preview 1

An Introduction to PhoneGap

Robi Sen
27 Feb 2015
9 min read
This is the first of a series of posts that will focus on using PhoneGap, the free and open source framework for creating mobile applications using web technologies such as HTML, CSS, and JavaScript that will come in handy for game development. In this first article, we will introduce PhoneGap and build a very simple Android application using PhoneGap, the Android SDK, and Eclipse. In a follow-on article, we will look at how you can use PhoneGap and PhoneGap Build to create iOS apps, Android apps, BlackBerry apps, and others from the same web source code. In future articles, we will dive deeper into exploring the various tools and features of PhoneGap that will help you build great mobile applications that perform and function just like native applications. Before we get into setting up and working with PhoneGap, let’s talk a little bit about what PhoneGap is. PhoneGap was originally developed by a company called Nitobi but was later purchased by Adobe Inc. in 2011. When Adobe acquired PhoneGap, it donated the code of the project to the Apache Software Foundation, which renamed the project to Apache Cordova. While both tools are similar and open source, and PhoneGap is built upon Cordova, PhoneGap has additional capabilities to integrate tightly with Adobe’s Enterprise products, and users can opt for full support and training. Furthermore, Adobe offers PhoneGap Build, which is a web-based service that greatly simplifies building Cordova/PhoneGap projects. We will look at PhoneGap Build in a future post.   Apache Cordova is the core code base that Adobe PhoneGap draws from. While both are open source and free, PhoneGap has a paid-for Enterprise version with greater Adobe product integration, management tools, and support. Finally, Adobe offers a free service called PhoneGap Build that eases the process of building applications, especially for those needing to build for many devices. Getting Started For this post, to save space, we are going to jump right into getting started with PhoneGap and Android and spend a minimal amount of time on other configurations. To follow along, you need to install node.js, PhoneGap, Apache Ant, Eclipse, the Android Developer Tools for Eclipse, and the Android SDK. We’ll be using Windows 8.1 for development in this post, but the instructions are similar regardless of the operating system. Installation guides, for any major OS, can be found at each of the links provided for the tools you need to install. Eclipse and the Android SDK The easiest way to install the Android SDK and the Android ADT for Eclipse is to download the Eclipse ADT bundle here. Just downloading the bundle and unpacking it to a directory of your choice will include everything you need to get moving. If you already have Eclipse installed on your development machine, then you should go to this link here, which will let you download the SDK and the Android Development Tools along with instructions on how to integrate the ADT into Eclipse. Even if you have Eclipse, I would recommend just downloading the Eclipse ADT bundle and installing it into your own unique environment. The ADT plugin can sometimes have conflicts with other Eclipse plugins. Making sure Android tooling is set up One thing you will need to do, no matter whether you use the Eclipse ADT bundle or not, is to make sure that the Android tools are added to your class path. This is because PhoneGap uses the Android Development Tools and Android SDK to build and compile the Android application. 
The easiest way to make sure everything is added to your path is to edit your environment variables. To do that, search for "Edit Environment" and select Edit the system environment variables. This will open the System Properties window. From there, select Advanced and then Environment Variables as shown in the next figure. Under System Variables, select Path and click Edit. Now you need to add the SDK's platform-tools and tools directories to your path as shown in the next figure. If you used the Eclipse ADT bundle, your SDK directory should be of the form C:\adt-bundle-windows-x86_64-20131030\sdk. If you cannot find your Android SDK, search for your ADT. In our case, the two directory paths we add to the Path variable are C:\adt-bundle-windows-x86_64-20131030\sdk\platform-tools and C:\adt-bundle-windows-x86_64-20131030\sdk\tools. Once you're done, select OK, but don't exit the Environment Variables screen just yet, since we will need it again when installing Ant.

Installing Ant

PhoneGap makes use of Apache Ant to help build projects. Download Ant from here and make sure to add its bin directory to your path. It is also good to set the ANT_HOME environment variable. To do that, create a new variable in the Environment Variables screen under System Variables called ANT_HOME and point it to the directory where you installed Ant. For more detailed instructions, you can read the official install guide for Apache Ant here.

Installing Node.js

Node.js is a development platform built on Chrome's JavaScript runtime engine that can be used for building large-scale, real-time, server-based applications. Node.js provides a lot of the command-line tools for PhoneGap, so to install PhoneGap we first need Node.js. Unix, OS X, and Windows users can find installers as well as source code here on the Node.js download site. For this post, we will be using the Windows 64-bit installer, which you should be able to double-click and install. Once you're done installing, you should be able to open a command prompt, type npm --version, and see something like this:

Installing PhoneGap

Once you have Node.js installed, open a command line and type npm install -g phonegap. Node will now download and install PhoneGap and its dependencies as shown here:

Creating an initial project in PhoneGap

Now that you have PhoneGap installed, let's use the command-line tools to create an initial PhoneGap project. First, create a folder where you want to store your project. Then, to create a basic project, all you need to do is type phonegap create mytestapp as shown in the following figure. PhoneGap will now build a basic project with a deployable app. Now go to your project's root directory. You should see a directory called mytestapp, and if you open that directory, you should see something like the following:

Now look under platforms/android and you should see something like what is shown in the next figure, which is the directory structure that PhoneGap generated for your Android project. Make sure to note the assets directory, which contains the HTML and JavaScript of the application, as well as the Cordova directories, which contain the code that ties Android's APIs to PhoneGap/Cordova's API calls.
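Before importing the project into Eclipse, it is worth glancing at the web code under the assets directory, because that is where all of your application logic lives. The exact files that phonegap create generates vary between PhoneGap versions, and the file path and element ID below are just placeholders, so treat the following as a minimal sketch of the one Cordova-specific idea you need to know: device APIs should not be touched until the deviceready event has fired.

```javascript
// www/js/index.js - a minimal sketch; the file generated by `phonegap create`
// will look different depending on your PhoneGap version.
document.addEventListener('deviceready', onDeviceReady, false);

function onDeviceReady() {
    // 'deviceready' fires once cordova.js has loaded and the native bridge is
    // available, so it is only safe to call device APIs from this point on.
    var el = document.getElementById('deviceready');
    if (el) {
        el.textContent = 'Device is ready';
    }
    console.log('PhoneGap/Cordova is ready');
}
```

Everything else in the generated app is ordinary HTML, CSS, and JavaScript, which is exactly what makes PhoneGap approachable for web developers.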
Now let's import the project into Eclipse. Open Eclipse, select Create a New Project, and then select Android Project from Existing Code. Browse to your project directory, select the platforms/android folder, and select Finish, like this:

You should now see the project (the generated app is named HelloWorld by default), but you may see a lot of little red X's and warnings about the project not building correctly. To fix this, all you need to do is clean and build the project again, like so:

1. Right-click on the project directory and select Properties.
2. In the resulting Properties dialog, select Android from the navigation pane.
3. For the project build target, select the highest Android API level you have installed.
4. Click on OK.
5. Select Clean from the Project menu. This should correct all the errors in the project. If it does not, you may need to select Build if the project does not build automatically.

Now you can finally launch your project. To do this, right-click on the HelloWorld project and select Run as and then Android application. You may be warned that you do not have an Android Virtual Device, in which case Eclipse will launch the AVD manager for you. Follow the wizard and set up an AVD image for your API. You can do this by selecting Create in the AVD manager and copying the values you see here:

Once you have built the image, you should be able to launch the emulator. You may have to again right-click on the HelloWorld directory and select Run as, then Android application. Select your AVD image, and Eclipse will launch the Android emulator and push the HelloWorld application to the virtual image. Note that this can take up to 5 minutes! In a later post, we will look at deploying to an actual Android phone, but for now, the emulator will be sufficient. Once the Android emulator has started, you should see the Android phone home screen. You will have to click-and-drag on the home screen to open it, and you should see the phone launch pad with your PhoneGap HelloWorld app. If you click on it, you should see something like the following:

Summary

That probably seemed like a lot of work, but now that you are set up to work with PhoneGap and Eclipse, you will find that the workflow is much faster when we start to build a simple application. In this post, you learned how to set up PhoneGap, how to build a simple application structure, how to install and set up the Android tooling, and how to integrate PhoneGap with the Eclipse ADT. In the next post, we will get into making a real application, look at how to update and deploy code, and see how to push your applications to a real phone.

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as UnderArmour, Sony, CISCO, IBM, and many others to help build new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.

Hybrid Mobile apps: What you need to know

Sugandha Lahoti
26 Apr 2018
4 min read
Hybrid mobile apps have been around for quite some time now, but advances in mobile development software and changes in user behavior have allowed them to grow. Today, users expect hybrid apps, even if they wouldn't know what a 'hybrid app' actually is.

What is a Hybrid mobile app?

A hybrid app is essentially a web application that acts like a native app, or a native app that acts like a web application. That means it can do everything HTML5 does while also incorporating native app features, like access to a phone's camera. Hybrid mobile apps consist of two parts. The first is the application code itself, built using web languages such as HTML, CSS, and JavaScript. The second is a native shell that loads that code using a WebView.

Advantages of hybrid mobile apps

Hybrid apps are much easier to build than native apps. This is because they are built using HTML, CSS, and JavaScript - technologies that typically run in the browser. They also have a faster development cycle than native apps because you only have a single JavaScript codebase. It is, however, important to note that hybrid mobile apps require third-party tools such as Apache Cordova to ease communication between the web view and the native platform. Noteworthy hybrid apps include MarketWatch, Untappd, and Sworkit. Hybrid mobile apps can run on both Android and iOS devices (the two most prominent mobile operating systems). This is great for developers, as it means less work for them - code can be reused for progressive web applications and desktop applications with minor tweaking.

Disadvantages of hybrid mobile apps

Although they're extremely versatile, hybrid apps have certain disadvantages. They're often a little more expensive than standard web apps because you have to work with the native wrapper. It's also sometimes a disadvantage to be dependent on a third-party platform. Compared to native apps, hybrid apps aren't quite as interactive and are often a bit slower, since the app depends on resources from the web. Hybrid mobile apps also generally have a standard template, and any customization you want to do in your application will take you away from the hybrid model. If that is the case, you may as well go native.

Hybrid mobile app frameworks

There is a good range of hybrid mobile application frameworks out there for mobile developers at the moment. Let's take a look at some of the best.

React Native

Facebook's React Native is a mobile framework for writing a single codebase and deploying it to multiple platforms. It compiles to native mobile app components to build native mobile applications (iOS, Android, and Windows) in JavaScript. React Native's library includes Flexbox CSS styling, inline styling, and debugging, and it supports deploying to either the App Store or Google Play.
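To give a feel for what "native apps written in JavaScript" means in practice, here is a minimal, illustrative React Native component. It is only a sketch: a real project is generated with the React Native CLI and contains far more scaffolding, and the component and file names here are just placeholders.

```javascript
// App.js - a minimal sketch of a React Native component, not a full project.
import React from 'react';
import { SafeAreaView, Text, StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  container: { flex: 1, alignItems: 'center', justifyContent: 'center' },
  greeting: { fontSize: 20 }
});

export default function App() {
  // The same JavaScript renders as native UI widgets on both iOS and Android.
  return (
    <SafeAreaView style={styles.container}>
      <Text style={styles.greeting}>Hello from a single JavaScript codebase</Text>
    </SafeAreaView>
  );
}
```

The styling object above is the Flexbox-style layout mentioned earlier; there is no HTML or CSS file involved, which is the main way React Native differs from WebView-based frameworks such as Ionic and PhoneGap, covered next.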
Ionic

Ionic Framework is an open-source SDK for hybrid mobile app development, licensed under MIT. It is built on top of Angular.js and Apache Cordova. Ionic provides tools and services for developing hybrid mobile apps using web technologies like CSS, HTML5, and Sass. Apps built using Ionic can be distributed through native app stores and installed on devices by using Cordova.

Xamarin

Microsoft's Xamarin hybrid development platform allows developers to target many platforms in C#. Developers can use Xamarin tools to write native Android, iOS, and Windows apps with a shared C# codebase, and share code across multiple platforms.

PhoneGap

Adobe's PhoneGap framework is an open source distribution of the Apache Cordova framework. With PhoneGap, hybrid applications are built with HTML5 and CSS3 (for rendering) and JavaScript (for logic), to be used across multiple platforms.

Hybrid mobile apps are great for users

Hybrid mobile apps are particularly effective when you want to build and deploy an app more efficiently. They are also useful for building prototype applications. However, the key thing to remember about hybrid mobile apps is that many users today expect the type of experience they deliver. The old distinction between browser and native experiences has almost disappeared. A well-written hybrid app does not behave or look any different from its native equivalent, and that, really, is what users want.

Also, check out:
React Native Cookbook
React and React Native
Learning Ionic - Second Edition
Ionic 2 Cookbook - Second Edition
Mastering Xamarin UI Development

Businesses are confident in their cybersecurity efforts, but weaknesses prevail

Guest Contributor
10 Dec 2019
8 min read
Today, maintaining data integrity and network security is a primary challenge for businesses everywhere. The scale of the threats they face is enormous. Those that succeed go unheralded. Those that fail end up in the headlines. Despite the risks, a shocking number of security decision-makers seem confident that their companies have no vulnerabilities to exploit. According to a recent research report by Forrester, more than 85% of those decision-makers believe that they've left no gaps in their organization's security posture. A cursory look at the available data, however, should be enough to indicate that some of them are going to be proven wrong – and that they're at a much greater risk than they realize or are willing to admit.

The threat landscape is stark. There have already been at least 3,800 data breaches in 2019 alone, which is a huge increase over prior years. The environment is so dangerous that Microsoft and Mastercard are spearheading an effort alongside other tech firms to create a joint-cyberdefense organization to help targeted firms fend off determined attackers. None of that squares with the high confidence that businesses now seem to have in their security.

It is clear that there is quite a bit of distance between how digital security experts judge the preparedness of businesses to defend themselves and how the business decision makers view their own efforts. The best way to remedy that is for businesses to double-check their security posture to make sure they are in the best possible position to fend off cyberattacks. To help, here's a rundown of the most common security vulnerabilities that tend to exist in business organizations, to use as a checklist for shoring up defenses.

1. Physical vulnerabilities

Although it's often overlooked, the physical security of a company's data and digital assets is essential. That's why penetration testing firms will often include on-site security breach attempts as part of their assessments (sometimes with unfortunate results). It's also why businesses should create and enforce strict on-site security policies and control who possesses what equipment and where they may take it. In addition, any devices that contain protected data should make use of strong storage encryption and have enforced password requirements – ideally using physical keys to further mitigate risk.

2. Poor access controls and monitoring

One of the biggest threats to security that businesses now face isn't external – it's from their own employees. Research by Verizon paints a disturbing picture of the kinds of insider threats that are at the root of many cybersecurity incidents. Many of them trace back to unauthorized or improper systems access, or poor access controls that allow employees to see more data than they need to do their jobs. Worse still, there's no way to completely eliminate the problem. An employee with the right know-how can be a threat even when their access is properly restricted. That's why every organization must also practice routine monitoring of data access and credential audits to look for patterns that could indicate a problem.

3. Lack of cybersecurity personnel

The speed with which threats in the digital space are evolving has caused businesses everywhere to rush to hire cybersecurity experts to help them defend themselves. The problem is that there are simply not enough of them to go around.
According to the industry group (ISC)2, there are currently 2.93 million open cybersecurity positions around the world, and the number keeps on growing. To overcome the shortage, businesses would do well to augment their security personnel recruiting by training existing IT staff in cybersecurity. They can subsidize things like online CompTIA courses for IT staff so they can upskill to meet emerging threats. When it comes to cybersecurity, a business can't have too many experts – so they'd best get started making some new ones.

4. Poor employee security training

Intentional acts by disgruntled or otherwise malicious employees aren't the only kind of insider threat that businesses face. In reality, many of the breaches traced to insiders happen by accident. Employees might fall prey to social engineering attacks and spear phishing emails or calls, and turn over information to unauthorized parties without ever knowing they've done anything wrong. If you think about it, a company's workforce is its largest attack surface, so it's critical to take steps to help them be as security-minded as possible. Despite this reality, a recent survey found that only 31% of employees receive annual security training. This statistic should dent the confidence of the aforementioned security decision-makers, and cause them to reevaluate their employee security training efforts post-haste.

5. Lack of cloud security standards

It should come as no surprise that the sharp rise in data breaches has coincided with the headlong rush of businesses into the cloud. One need only look at the enormous number of data thefts that have happened in broad daylight via misconfigured Amazon AWS storage buckets to understand how big an issue this is. The notoriety notwithstanding, these kinds of security lapses continue to happen with alarming frequency. At their root is a general lack of security procedures surrounding employee use of cloud data storage. As a general rule, businesses should have a process in place to have qualified IT staff configure offsite data storage and restrict settings access only to those who need it. In addition, all cloud storage should be tested often to make sure no vulnerabilities exist and that no unauthorized access is possible (a minimal example of such a check follows the last item in this list).

6. Failure to plan for future threats

In the military, there's a common admonition against "fighting yesterday's war". In practice, this means relying on strategies that have worked in the past but that might not be appropriate in the current environment. The same logic applies to cybersecurity, not that many businesses seem to know it. For example, an all-machine hacking contest sponsored by DARPA in 2016 proved that AI and ML-based attacks are not only possible – but inevitable. In turn, AI and ML will need to be put to use by businesses seeking to defend themselves from such threats. Still, a recent survey found that just 26% of business security policymakers had plans to invest in AI and ML cybersecurity technologies in the next two years. By the time many come around to the need for doing so, it's likely that their organizations will already be under attack by better-equipped opponents. To make sure they remain safe from such future-oriented threats, businesses should re-evaluate their plans to invest in AI and ML network and data security technology in the near term, so they'll have the right infrastructure in place once those kinds of attacks become common.
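To make the cloud storage point in item 5 concrete, the sketch below uses the AWS SDK for JavaScript (v2) to flag S3 buckets whose access control lists grant access to everyone on the internet. It is only an illustrative starting point under stated assumptions: the bucket name is a placeholder, credentials are assumed to be configured already, and the check covers ACL grants only, not bucket policies or public access block settings.

```javascript
// check-bucket-acl.js - an illustrative sketch, not a complete audit tool.
// Assumes the AWS SDK for JavaScript v2 (`npm install aws-sdk`) and that AWS
// credentials are already configured in the environment.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Canned group URIs that make an ACL grant effectively public.
const PUBLIC_GRANTEES = [
  'http://acs.amazonaws.com/groups/global/AllUsers',
  'http://acs.amazonaws.com/groups/global/AuthenticatedUsers'
];

async function checkBucket(bucket) {
  // Fetch the bucket's access control list and look for public grants.
  const acl = await s3.getBucketAcl({ Bucket: bucket }).promise();
  const publicGrants = acl.Grants.filter(
    grant => grant.Grantee && PUBLIC_GRANTEES.includes(grant.Grantee.URI)
  );

  if (publicGrants.length > 0) {
    console.warn(`Bucket ${bucket}: ${publicGrants.length} public ACL grant(s) found`);
  } else {
    console.log(`Bucket ${bucket}: no public ACL grants found`);
  }
}

// 'example-bucket-name' is a placeholder; substitute one of your own buckets.
checkBucket('example-bucket-name').catch(err => console.error(err.message));
```

Running a simple script like this on a schedule, alongside the provider's own tooling, is one low-cost way to turn the "test cloud storage often" advice into routine practice.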
The perils of overconfidence

At this point, it should be very clear that there are quite a few vulnerabilities that the average business must attend to if they hope to remain secure from both current and emerging cyber threats. The various surveys and data referenced here should also be more than enough proof that the confidence many decision-makers have in their current strategies is foolhardy at best – and pure hubris at worst.

More importantly, all signs point to the situation getting far worse before it gets better. Every major study on cybersecurity indicates that the pace, scale, and scope of attacks is growing by the day. In the coming years, the rapid expansion of new technologies like the IoT and the hyper-connectivity driven by 5G cellular data networks is going to amplify the current risks to an almost unimaginable level. That means businesses whose security is lacking now don't have much time left to get up to speed.

The bottom line here is that when it comes to cybersecurity, nothing is more expensive than regret. It's a dangerous thing for business leaders to be too overconfident in their preparations or to underestimate the size of the security challenges they face. It's a situation where there's no such thing as being too prepared, and they should never be satisfied with the status quo in their efforts to stay protected. Would-be attackers and data thieves will never rest on their laurels – and neither should businesses.

Author Bio

Andrej Kovačević is a cybersecurity editor at TechLoot, and a contributing writer for a variety of other technology-focused online publications. He has covered the intersection of marketing and technology for several years and is pursuing an ongoing mission to share his expertise with business leaders and marketing professionals everywhere. You can also find him on Twitter.

Glen Singh on why Kali Linux is an arsenal for any cybersecurity professional [Interview]
CNCF announces Helm 3, a Kubernetes package manager and tool to manage charts and libraries
Puppet's 2019 State of DevOps Report highlight security integration into DevOps practices result into higher business outcome

Four self-service business intelligence user types in Qlik Sense

Amey Varangaonkar
29 May 2018
7 min read
With the introduction of self-service to BI, there is segmentation at various levels and breadths in how self-service is conducted and to what extent. There are, quite frankly, different user types that differ from each other in level of interest, technical expertise, and the way in which they consume data. While each user will be almost unique in the way they use self-service, the user base can be divided into four different groups. In this article, we take a look at the four types of users in the self-service business intelligence model.

The following excerpt is taken from the book Mastering Qlik Sense, authored by Martin Mahler and Juan Ignacio Vitantonio. This book presents expert techniques to design and deploy enterprise-grade Business Intelligence solutions for your business by leveraging the power of Qlik Sense.

Power Users or Data Champions

Power users are the most tech-savvy business users, who show a great interest in self-service BI. They produce and build dashboards themselves and know how to load data and process it to create a logical data model. They tend to be self-learning and carry a hybrid set of skills, usually a mixture of business knowledge and some advanced technical skills. This user group is often frustrated with existing reporting or BI solutions and finds IT inadequate in delivering them. As a result, especially in the past, they take data dumps away from IT solutions and create their own dashboards in Excel, using advanced skills such as VBA (Visual Basic for Applications). They generally like to participate in the development process but have been unable to do so due to governance rules and a strict old-school separation of IT from the business. Self-service BI addresses this group in particular, and identifying those users is key to reaching adoption within an organization.

Within an established self-service environment, power users generally participate in committees revolving around the technical environments and represent the business interest. They also develop the bulk of the first versions of the apps, which, as part of a naturally evolving process, are then handed over to more experienced IT staff to be polished and optimized. Power users advocate the self-service BI technology and often demo not only the insights and information they manage to extract from their data, but also the efficiency and timeliness of doing so. At the same time, they serve as the first point of contact for other users and consumers when it comes to questions about their apps and dashboards. Sometimes they also participate in a technical advisory capacity on whether other projects are feasible to implement using the same technology. Within a self-service BI environment, it is safe to say that those power users are the pillars of a successful adoption.

Business Users or Data Visualizers

Business users are frequent consumers of data analytics, with the main goal of extracting value from the data they are presented with. They represent the part of the user base which is interested in conducting data analysis and data discovery to better understand their business in order to make better-informed decisions. Presentation and ease of use of the application are key to this type of user group, and they are less interested in building new analytics themselves. That being said, some form of creating new charts and loading data is sometimes still of interest to them, albeit on a very basic level. Timeliness, the relevance of data, and the user experience are most relevant to them.
They are the ones who are slicing and dicing the data and drilling down into dimensions, and who are keen to click around in the app to obtain valuable information. Usually, a group of business users belongs to the same department and has a power user overseeing them, answering their questions but also receiving feedback on how the dashboard can be improved even further. Their interaction with IT is mostly limited to requesting access and resolving unexpected technical errors.

Consumers or Data Readers

Consumers usually form the largest user group of a self-service BI analytics solution. They are the end recipients of the insights and data analytics that have been produced and, normally, are only interested in distilled information which is presented to them in a digested form. They are usually the kind of users who are happy with a report, either digital or in printed form, which summarizes highlights and lowlights in a few pages, requiring no interaction at all. They are also most sensitive to the timeliness and availability of their reports. While usually the largest audience, this user group leverages the self-service capabilities of a BI tool the least. This poses a licensing challenge, as those users don't take full advantage of the functionality on offer but cost the full amount in order to access the reports. It is therefore not uncommon to assign this type of user group a bucket of login access passes, or to not give them access to the self-service BI platform at all and instead give them the information they need in (digitally) printed format or within presentations prepared by business users.

IT or Data Overseers

IT represents the technical user group within this context, who sit in the background and develop and manage the framework within which the self-service BI solution operates. They are the backbone of the deployment and ensure the environment is set up correctly to cater for the various use cases required by the user groups described above. At the same time, they ensure a security policy is in place and maintained, and they introduce a governance framework for deployment, data quality, and best practices. They are in effect responsible for overseeing the power users and helping them with technical questions, while at the same time ensuring that terms and definitions as well as the look and feel are consistent and maintained across all apps.

With self-service BI, IT plays a lesser role in actually developing the dashboards and assumes a more mentoring position, where training, consultation, and advisory in best practices are conducted. While working closely with power users, IT also provides technical support to users and liaises with IT infrastructure teams to ensure the server infrastructure is fit for purpose and up and running to serve the users. This also includes upgrading the platform where required and enriching it with additional functionality if and when available.

Bringing them together

The previous four groups can be distinguished within a typical enterprise environment; however, this is not to say that hybrid or fewer user groups are not viable models for self-service BI. It is an evolutionary process in how an organization adopts self-service data analytics, with a lot of dependencies on available skills, competing established solutions, culture, and appetite for new technologies. It usually begins with IT being the first users of a newly deployed self-service environment, not only setting up the infrastructure but also developing the first apps for a couple of consumers.
Power users then follow; generally, they are the business sponsors themselves, who are often big fans of data analytics, modifying the app to their liking and promoting it to their users. The user base emerges with the success of the solution, as analytics become integrated into business as usual. The consumers are typically the last user group to be established, and more often than not they don't have actual access to the platform itself, but rather receive printouts, email summaries with screenshots, or PowerPoint presentations. Due to licensing cost and the size of the consumer audience, it is not always easy to give them access to the self-service platform; hence, most of the time, an automated and streamlined PDF printing process is the most elegant solution to cater to this type of user group. At the same time, the size of the deployment also determines the number of user groups. In small enterprise environments, it will be mostly power users and IT who use self-service, which greatly simplifies the approach as well as the setup considerations.

If you found the above excerpt useful, make sure you check out the book Mastering Qlik Sense to learn helpful tips and tricks to perform effective Business Intelligence using Qlik Sense.

Read more:
How Qlik Sense is driving self-service Business Intelligence
What we learned from Qlik Qonnections 2018
How self-service analytics is changing modern-day businesses

Demystifying Clouds: Private, Public, and Hybrid clouds

Amey Varangaonkar
03 Aug 2018
5 min read
Cloud computing is as much about learning the architecture as it is about the different deployment options that we have. We need to know the different ways our cloud infrastructure can be kept open to the world, and whether we want to restrict it. In this article, we look at the three major cloud deployment models available to us today:

Private cloud
Public cloud
Hybrid cloud

In this excerpt, we will look at each of these separately. The following excerpt has been taken from the book 'Cloud Analytics with Google Cloud Platform' written by Sanket Thodge.

Private cloud

Private cloud services are built specifically for companies that want to keep everything in-house. They give users full customization in choosing hardware, software, and storage options. A private cloud typically works as a central data center for internal end users, and this model reduces dependencies on external vendors. Enterprise users accessing this cloud may or may not be billed for utilizing the services. A private cloud changes how an enterprise decides the architecture of the cloud and how they are going to apply it in their infrastructure. Administration of a private cloud environment can be carried out by internal or outsourced staff. Common private cloud technologies and vendors include the following:

VMware: https://cloud.vmware.com
OpenStack: https://www.openstack.org
Citrix: https://www.citrix.co.in/products/citrix-cloud
CloudStack: https://cloudstack.apache.org
Go Grid: https://www.datapipe.com/gogrid

With a private cloud, the same organization acts as both the cloud consumer and the cloud provider, as the infrastructure is built by them and the consumers are also from the same enterprise. In order to differentiate these roles, a separate organizational department typically assumes the responsibility for provisioning the cloud and therefore takes the cloud provider role, whereas the departments requiring access to this established private cloud take the role of the cloud consumer:

Public cloud

In a public cloud deployment model, a third-party cloud service provider delivers the cloud service over the internet. Public cloud services are sold on demand, typically billed by the minute or hour, but you can also opt for a long-term commitment of up to five years in some cases, such as renting a virtual machine. In the case of renting a virtual machine, customers pay for the duration, storage, or bandwidth that they consume (this might vary from vendor to vendor). Major public cloud service providers include:

Google Cloud Platform: https://cloud.google.com
Amazon Web Services: https://aws.amazon.com
IBM: https://www.ibm.com/cloud
Microsoft Azure: https://azure.microsoft.com
Rackspace: https://www.rackspace.com/cloud

The architecture of a public cloud will typically go as follows:
Hybrid cloud

The next and last cloud deployment type is the hybrid cloud. A hybrid cloud is an amalgamation of public cloud services (the likes of GCP, AWS, and Azure) and an on-premises private cloud (built by the respective enterprise). Both on-premises and public components have their roles here: on-premises infrastructure is more for mission-critical applications, whereas the public cloud manages spikes in demand. Automation is enabled between both environments. The following figure shows the architecture of a hybrid cloud:

The major benefit of a hybrid cloud is to create a uniquely unified, superbly automated, and insanely scalable environment that takes advantage of everything a public cloud infrastructure has to offer, while still maintaining control over mission-critical data. Some common hybrid cloud examples include:

Hitachi hybrid cloud: https://www.hitachivantara.com/en-us/solutions/hybrid-cloud.html
Rackspace: https://www.rackspace.com/en-in/cloud/hybrid
IBM: https://www.ibm.com/it-infrastructure/z/capabilities/hybrid-cloud
AWS: https://aws.amazon.com/enterprise/hybrid

Differences between the private cloud, hybrid cloud, and public cloud models

The following summarizes the differences between the three cloud deployment models:

Private cloud
Definition: A cloud computing model in which an enterprise uses its own proprietary software and hardware, limited specifically to its own data centre. Servers, cooling systems, and storage all belong to the company.
Characteristics: Single-tenant architecture, on-premises hardware, direct control of the hardware.
Vendors: HPE, VMware, Microsoft, OpenStack.

Hybrid cloud
Definition: A model that includes a mixture of private and public cloud. A few components run on-premises in a private cloud, and they are connected to other services on the public cloud with perfect orchestration.
Characteristics: Cloud bursting capacities, advantages of both public and private cloud, freedom to choose services from multiple vendors.
Vendors: A combination of public and private cloud vendors.

Public cloud
Definition: A third-party company lets us use their infrastructure for a given period of time on a pay-as-you-use model. The general public can access their infrastructure, and no in-house servers are required to be maintained.
Characteristics: Pay-per-use model, multi-tenant model.
Vendors: Google Cloud Platform, Amazon Web Services, Microsoft Azure.

We saw that the three models are quite distinct from each other, each bringing a specialized functionality to a business, depending on its needs.

If you found the above excerpt useful, make sure to check out the book 'Cloud Analytics with Google Cloud Platform' for more information on GCP and how you can perform effective analytics on your data using it.

Read more:
Why Alibaba cloud could be the dark horse in the public cloud race
Is cloud mining profitable?
Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)