Tech Guides

The 10 most common types of DoS attacks you need to know

Savia Lobo
05 Jun 2018
11 min read
Some businesses depend heavily on services hosted online and need their servers up and running smoothly throughout business hours. Stock markets and casinos are examples of such institutions: they deal with huge sums of money and expect their servers to work properly during their core business hours. Hackers may extort money by threatening to take down or block these servers during those hours. A denial of service (DoS) attack is the most common method used to carry out this kind of extortion. In this post, we will get to know DoS attacks and their various types.

This article is an excerpt from the book 'Preventing Ransomware', written by Abhijit Mohanta, Mounir Hahad, and Kumaraguru Velmurugan. In this book, you will learn how to respond quickly to ransomware attacks to protect yourself.

What are DoS attacks?

DoS is one of the oldest forms of cyber extortion attack. As the term indicates, a denial of service (DoS) attack denies a service to its legitimate users. If a railway website is brought down, it fails to serve the people who want to book tickets. Let's take a peek into some of the details. A DoS attack can happen in two ways:

- Specially crafted data: If specially crafted data is sent to the victim and the victim is not set up to handle it, there is a good chance the victim will crash. This does not involve sending too much data; it involves sending specially crafted packets that the victim fails to handle. It can involve manipulating fields in network protocol packets, exploiting servers, and so on. Ping of death and teardrop attacks are examples of such attacks.

- Flooding: Sending too much data to the victim can also slow it down, so that it spends its resources consuming the attacker's data and fails to serve legitimate requests. UDP flooding and SYN flooding are examples of such attacks. Attacks can also use a combination of both techniques.

A DoS attack uses a single computer to carry out the attack. There is another form, the distributed denial of service (DDoS) attack, which uses a series of computers to carry out the attack. Sometimes the target server is flooded with so much data that it can't handle it; another approach is to exploit the workings of internal protocols. A DDoS attack used for extortion is often termed a ransom DDoS. We will now talk about the various types of DoS attacks that can occur.

Teardrop attacks or IP fragmentation attacks

In this type of attack, the hacker sends a specially crafted packet to the victim. To understand it, you need some knowledge of the TCP/IP protocol suite. In order to transmit data across networks, IP packets are broken down into smaller packets; this is called fragmentation. When the packets finally reach their destination, they are reassembled to recover the original data. In the process of fragmentation, some fields are added to the fragmented packets so that they can be tracked at the destination during reassembly. In a teardrop attack, the attacker crafts packets whose fragments overlap with each other. Consequently, the operating system at the destination gets confused about how to reassemble the packets and crashes.
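To make the overlap problem concrete, here is a minimal sketch (plain Python, not tied to any real IP stack, with invented offset and length values) of the check a reassembler would need to perform to reject teardrop-style fragments:

```python
def has_overlap(fragments):
    """fragments: list of (offset, length) pairs claimed by one datagram."""
    covered = []
    for offset, length in sorted(fragments):
        for start, end in covered:
            # A fragment that starts before an earlier one ends overlaps it
            if offset < end and offset + length > start:
                return True
        covered.append((offset, offset + length))
    return False

# Well-formed fragments: bytes 0-1479 and 1480-2959 -> no overlap
print(has_overlap([(0, 1480), (1480, 1480)]))  # False
# Teardrop-style fragments: the second claims to start inside the first
print(has_overlap([(0, 1480), (800, 1480)]))   # True
```

An operating system that performs this kind of validation can simply drop the malformed datagram instead of attempting to reassemble it.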
User Datagram Protocol flooding

User Datagram Protocol (UDP) is an unreliable protocol: the sender of the data does not care whether the receiver has received it. In UDP flooding, many UDP packets are sent to the victim on random ports. When the victim gets a packet on a port, it looks for an application that is listening on that port. When it does not find one, it replies with an Internet Control Message Protocol (ICMP) packet; ICMP packets are used to send error messages. When a lot of UDP packets are received, the victim consumes a lot of resources replying with ICMP packets, which can prevent it from responding to legitimate requests.

SYN flood

TCP is a reliable protocol: it makes sure that the data sent by the sender is completely received by the receiver. To start communication between a sender and a receiver, TCP follows a three-way handshake, where SYN denotes the synchronization packet and ACK stands for acknowledgment: the sender starts by sending a SYN packet, the receiver replies with SYN-ACK, and the sender sends back an ACK packet followed by the data.

In SYN flooding, the sender is the attacker and the receiver is the victim. The attacker sends a SYN packet and the server responds with SYN-ACK, but the attacker never replies with the final ACK. The server waits for that ACK for some time before giving up. The attacker sends a large number of such SYN packets, and the server keeps waiting for the final ACKs until timeout. Hence, the server exhausts its resources waiting for ACKs. This kind of attack is called SYN flooding.

Ping of death

While transmitting data over the internet, the data is broken into smaller chunks of packets, and the receiving end reassembles these packets to derive the original data. In a ping of death attack, the attacker sends a packet larger than 65,535 bytes, the maximum packet size allowed by the IP protocol. The fragments are split and sent across the internet, but when they are reassembled at the receiving end, the operating system has no idea how to handle the oversized packet and crashes.

Exploits

Exploits against servers can also cause a DoS condition. A lot of web applications are hosted on web servers such as Apache and Tomcat. If there is a vulnerability in these web servers, the attacker can launch an exploit against it. The exploit need not take control of the server; it may simply crash the web server software, causing a DoS. If the server uses default configurations, there are easy ways for hackers to find out which web server software it runs and its version. The attacker then looks up possible vulnerabilities and exploits for that web server and, if it is not patched, can bring it down by sending an exploit.

Botnets

Botnets can be used to carry out DDoS attacks. A botnet herd is a collection of compromised computers. The compromised computers, called bots, act on commands from a C&C (command-and-control) server. On the C&C server's commands, these bots can send a huge amount of data to the victim server, and as a result the victim server is overloaded.
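On the defensive side, one early warning sign of a SYN flood is an unusually large number of half-open connections. Below is a rough sketch of such a check using the third-party psutil library; the threshold is an arbitrary illustrative value, not a recommendation, and on most systems the script needs elevated privileges to see all connections:

```python
import psutil

SYN_RECV_THRESHOLD = 200  # illustrative value only

# Count TCP connections stuck in the half-open SYN_RECV state
half_open = [c for c in psutil.net_connections(kind="tcp")
             if c.status == psutil.CONN_SYN_RECV]

if len(half_open) > SYN_RECV_THRESHOLD:
    print(f"Possible SYN flood: {len(half_open)} half-open connections")
else:
    print(f"{len(half_open)} half-open connections - within normal range")
```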
Reflective DDoS attacks and amplification attacks

In this kind of attack, the attacker uses a legitimate computer to launch an attack against the victim while hiding its own IP address. The usual way is for the attacker to send a small packet to a legitimate machine after forging the packet's sender address so that it appears to come from the victim. The legitimate machine then sends its response to the victim, and if the response data is large, the impact is amplified. We can call the legitimate computers reflectors, and this kind of attack, where the attacker sends a small amount of data and the victim receives a much larger amount, is called an amplification attack. Since the attacker does not directly use computers under his control, but legitimate computers instead, it's called a reflective DDoS attack.

The reflectors are not compromised machines, unlike botnet nodes. Reflectors are simply machines that respond to a particular kind of request; it can be a DNS request, a Network Time Protocol (NTP) request, and so on. DNS amplification attacks, WordPress pingback attacks, and NTP attacks are all amplification attacks. In a DNS amplification attack, the attacker sends a forged packet to the DNS server containing the IP address of the victim; the DNS server then replies to the victim instead, with larger data. Other kinds of amplification attack involve SMTP, SSDP, and so on. We will look at an example of such an attack in the next section.

There are several groups of cyber criminals responsible for carrying out ransom DDoS attacks, such as DD4BC, Armada Collective, Fancy Bear, XMR-Squad, and Lizard Squad. These groups target enterprises. They first send out an extortion email, followed by an attack if the victim does not pay the ransom.

DD4BC

The DD4BC group was first seen operating in 2014. It charged Bitcoins as its extortion fee and mainly targeted media, entertainment, and financial services. The group would send a threatening email stating that a low-intensity DoS attack would be carried out first, claim that they would protect the organization against larger attacks, and threaten to publish information about the attack on social media to damage the company's reputation.

DD4BC is known to exploit the WordPress pingback vulnerability. We don't need to go into too much detail about this bug. Pingback is a feature provided by WordPress through which the original author of a WordPress site or blog gets notified when their site has been linked or referenced. We can call the site that refers to the original site the referrer, and the original site the original. If the referrer links to the original, it sends the original a pingback request that contains its own URL; this is a notification to the original site that the referrer is linking to it. The original site then downloads the referrer's page as a response to the pingback request, as per the protocol designed by WordPress, and this action is termed a reflection. The WordPress sites used in the attack are called reflectors. An attacker can misuse this by creating a forged pingback request containing the URL of a victim site and sending it to many WordPress sites. As a result, the WordPress sites respond to the victim. Put simply, the attacker notifies the WordPress sites that the victim has referenced them on its site, so all the WordPress sites try to connect to the victim, which overloads it. If the victim's web page is large and the WordPress sites try to download it, the victim's bandwidth is choked; this is the amplification.
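A back-of-the-envelope calculation shows why amplification is so attractive to attackers. The request and response sizes below are invented round numbers for illustration, not measurements of any specific protocol:

```python
request_bytes = 60      # small forged query sent by the attacker
response_bytes = 3000   # larger response each reflector sends to the victim
reflectors = 1000       # number of reflectors abused in one round

amplification_factor = response_bytes / request_bytes
traffic_to_victim = reflectors * response_bytes

print(f"Amplification factor: {amplification_factor:.0f}x")
print(f"Traffic hitting the victim per round: {traffic_to_victim / 1e6:.1f} MB")
```

With only about 60 KB of forged requests, the attacker directs roughly 3 MB at the victim, and real amplification vectors can reach far higher ratios.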
Armada Collective

The Armada Collective group was first seen in 2015. They attacked various financial services and web hosting sites in Russia, Switzerland, Greece, and Thailand, and re-emerged in Central Europe in October 2017. They would carry out a demo DDoS attack to threaten the victim before demanding a ransom.

This group is known to carry out reflective DDoS attacks through NTP. NTP is a protocol used to synchronize computer clocks across a network. The NTP protocol supports a monlist command for administrative purposes: when an administrator sends the monlist command to an NTP server, the server responds with a list of the last 600 hosts that connected to it. The attacker can exploit this by creating a forged NTP packet containing a monlist command with the IP address of the victim and sending multiple copies of it to the NTP server. The NTP server thinks the monlist request has come from the victim's address and sends it a response containing the list of 600 computers connected to that server. The victim thus receives far more data from the NTP responses than the attacker ever sent, and it can crash.

Fancy Bear

Fancy Bear is one of the hacker groups we have known about since 2010. Fancy Bear threatened to use the Mirai botnet in its attacks. Mirai was known to target Linux operating systems used in IoT devices and was mostly known for infecting CCTV cameras.

We have talked about a few groups that were infamous for carrying out DoS extortion and some of the techniques they used, and we have explored the different types of DoS attacks and how they can occur. If you've enjoyed this excerpt, check out 'Preventing Ransomware' to learn in detail about the latest ransomware attacks involving WannaCry, Petya, and BadRabbit.

Anatomy of a Crypto Ransomware
Barracuda announces Cloud-Delivered Web Application Firewall service
Top 5 penetration testing tools for ethical hackers

5 engines to build games without coding

Sam Wood
17 Feb 2016
4 min read
Let's start with a disclaimer: if you want to make a video game the best it can be, you're going to need to learn how to code. But you can build games without coding. So, if the prospect of grinding C++ in order to make the next Minecraft doesn't quite appeal to you, here are 5 accessible tools to help you get into game development without writing a single line of code.

GameMaker - drag and drop game development

GameMaker is one of the premier engines that offers users the chance to make complete mobile games just by using a drag-and-drop interface. Specifically designed so that novice computer programmers can make computer games without much programming knowledge, it's an excellent choice for anyone looking to make a cross-platform game app without reams of code. In addition, GameMaker boasts its own language for when you want to add extra custom features and refine your game experience.

Unreal Engine - AAA game development without writing code

Unreal Engine is a AAA engine, used to make some of the biggest names out there. If you're just getting into games development and are unsure about coding, you might be surprised to see it on this list. What UE4 offers to beginners and non-coders, though, is the power of its Blueprints visual scripting. With Blueprints, you can create (reasonably) complex games without typing a single line of C++. Built around a node-based interface, Blueprints gives non-programmers access to gameplay elements including camera control, player input, items and triggers, and more.

Unity - the definitive game engine and a good place to begin

Unity is the tool of pro game developers; in our 2015 Skill Up survey, it was revealed as the tech that was most important for earning the top salaries in the industry. Unity has no built-in visual scripting like Unreal, but what it does have is a massive community and a huge supply of code snippets and assets available for almost every requirement. You can do a lot in its editor just by dragging scripts onto in-game objects. Whilst (like Unreal) you'll want to pick up some coding skills when you start to make your games more complex, you can get quite a way standing on the shoulders of your fellow developers. If that's not working for you, though, why not check out PlayMaker? This visual scripting plugin for Unity offers a whole new array of options for when you want something a little more custom.

GameSalad - an amazing behavior library

Much like GameMaker, GameSalad is an intuitive drag-and-drop game creator. What makes GameSalad stand out from the crowd, though, is its amazing behavior library. This library lets developers implement really complex behaviors, of a kind that would be challenging or even impossible for someone to muddle through without a knowledge of coding. There are thousands of successful games on Google Play and the App Store built with GameSalad - why not add yours to their number?

Lumberyard

Okay, so I lied a bit in the title of this blog: currently, there's nothing to suggest that Amazon's new game engine will be particularly friendly to non-coders. So what is Lumberyard? Why is it on here? Lumberyard is Amazon's new game engine, derived from CryENGINE. It's built to get people deploying their games to Amazon Web Services (AWS) but is otherwise free to use. What's interesting about Lumberyard is its visual scripting tool, supposedly made so that designers and engineers with little to no backend experience can add cloud-connected features to a game. These features can include a "community news feed, daily gifts, or server-side combat resolution", added within minutes through drag-and-drop visual scripting. Lumberyard is still super new, so we'll have to wait and see if it delivers on its promises, but we may well find ourselves with a serious contender to the likes of Unity and Unreal.

Check out other related posts:

Construct Game Development: Platformer Revisited, a 2D Shooter
C++, SFML, Visual Studio, and Starting the first game

What are the best programming languages for building APIs?

Antonio Cucciniello
11 Jun 2017
4 min read
Are you in the process of designing your first web application? Maybe you have built some in the past but are looking for a change of language to increase your skill set, or to try out something new. If you fit into those categories, you are in the right place. With all of the information out there, it can be hard to decide which programming language to select for your next product or project. Although any programming language can ultimately be used to write APIs, some are better and more efficient to use than others. Today we will discuss what you should take into consideration when choosing the programming language to build out APIs for your web app.

Comfort is essential when it comes to programming languages

This goes out to any developer who has experience in a certain language. If you already have experience in a language, you will ultimately have an easier time developing and understanding the concepts involved, and you will be able to make more progress right out of the gate. This translates to improved code and performance as well, because you can spend more time on that rather than on learning a brand new programming language. For example, if I have been developing in Python for a few years and have the option between using PHP or Python for a project, I simply select Python because of the time already spent learning it. This is extremely important because, when trying to do something new, you want to limit the number of unknowns in the project. That will help your learning and help you achieve better results. If you are a brand new developer with zero programming experience, the following section might help you narrow your options.

Libraries and frameworks that support developing APIs

The next question to ask in the process of eliminating potential programming languages for your API is: does the language come with plenty of options for libraries or frameworks that aid in developing APIs? To continue with the Python example from the previous section, there is the Django REST framework, which is built specifically on top of Django, a web development framework for Python, and makes creating an API in that language faster and easier. Did you hear faster and easier? Why yes you did, and that is why this is important. These libraries and frameworks speed up the development process by providing functions and objects that handle plenty of the repetitive or dirty work in building an API.
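To illustrate how little boilerplate such a framework leaves you to write, here is a minimal sketch of a read-only endpoint using Flask, another Python micro-framework; the route and sample data are invented for the example and are not from the article:

```python
from flask import Flask, jsonify

app = Flask(__name__)

BOOKS = [  # stand-in for a real data store
    {"id": 1, "title": "Preventing Ransomware"},
    {"id": 2, "title": "Architectural Patterns"},
]

@app.route("/api/books", methods=["GET"])
def list_books():
    # Routing, JSON serialization, and the HTTP layer are handled by Flask
    return jsonify(BOOKS)

if __name__ == "__main__":
    app.run(port=5000)
```

A framework such as Django REST Framework layers similar conveniences (serializers, viewsets, authentication) on top of Django itself.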
Once you have spent some time researching what is available to you in terms of libraries and frameworks, it is time to check out how active the communities are.

Support and community

The next question to ask yourself in this process is: are the frameworks and libraries for this programming language still being supported? If so, how active is the community of developers? Do they release continuous or regular updates to their software and capabilities? Do the updates help improve security and usability? If not many people use the language, and it is not being updated with bug fixes, you may not want to continue with it. Another thing to pay attention to is the community of users. Are there plenty of resources for you to learn from? How clear and available is the documentation? Are there experienced developers who have blog posts on the necessary topics? Are there questions being asked and answered on Stack Overflow? Are there any hard resources, such as magazines or textbooks, that show you how to use these languages and frameworks?

Potential languages for building APIs

From my experience, a handful of languages stand out. Here is an example framework for each of these languages, which you can use to start developing your next API:

Language - Framework
Java - Spring
JavaScript (Node) - Express
Python - Django
PHP - Laravel
Ruby - Ruby on Rails

Ultimately, the programming language you select depends on several factors: your experience with the language, the frameworks available for API building, and how active both the support and the community are. Do not be afraid to try something new! You can always learn, but if you are concerned about speed and ease of development, use these criteria to help select the language to use. Leave a comment down below and let us know which programming language is your favorite and how you will use it in your future applications!

5 pen testing rules of engagement: What to consider while performing penetration testing

Fatema Patrawala
14 May 2018
7 min read
Penetration testing and ethical hacking are proactive ways of testing web applications by performing attacks that are similar to a real attack that could occur on any given day. They are executed in a controlled way with the objective of finding as many security flaws as possible and providing feedback on how to mitigate the risks posed by such flaws. Security-conscious corporations have integrated penetration testing, vulnerability assessments, and source code reviews into their software development cycle. Thus, when they release a new application, it has already been through various stages of testing and remediation. When planning to execute a penetration testing project, be it for a client as a professional penetration tester or as part of a company's internal security team, there are aspects that always need to be considered before starting the engagement.

This article is an excerpt from the book Web Penetration Testing with Kali Linux - Third Edition, written by Gilberto Najera-Gutierrez and Juned Ahmed Ansari.

Rules of Engagement for pen testing

Rules of Engagement (RoE) is a document that deals with the manner in which the penetration test is to be conducted. Some of the directives that should be clearly spelled out in the RoE before you start the penetration test are as follows:

- The type and scope of testing
- Client contact details
- Client IT team notifications
- Sensitive data handling
- Status meetings and reports

Type and scope of penetration testing

The type of testing can be black box, white box, or an intermediate gray box, depending on how the engagement is performed and the amount of information shared with the testing team. There are things that can and cannot be done in each type of testing. With black box testing, the testing team works from the view of an attacker who is external to the organization, as the penetration tester starts from scratch and tries to identify the network map, the defense mechanisms implemented, the internet-facing websites and services, and so on. Even though this approach may be more realistic in simulating an external attacker, you need to consider that such information may be easily gathered from public sources, or that the attacker may be a disgruntled employee or ex-employee who already possesses it. Thus, it may be a waste of time and money to take a black box approach if, for example, the target is an internal application meant to be used by employees only.

White box testing is where the testing team is provided with all of the available information about the targets, sometimes even including the source code of the applications, so that little or no time is spent on reconnaissance and scanning. A gray box test is when partial information, such as URLs of applications, user-level documentation, and/or user accounts, is provided to the testing team. Gray box testing is especially useful when testing web applications, as the main objective is to find vulnerabilities within the application itself, not in the hosting server or network. Penetration testers can work with user accounts to adopt the point of view of a malicious user or of an attacker who gained access through social engineering.
Note: When deciding on the scope of testing, the client and the testing team need to evaluate what information is valuable and necessary to protect, and based on that, determine which applications and networks need to be tested and with what degree of access to the information.

Client contact details

We can agree that even when we take all of the necessary precautions when conducting tests, things can sometimes go wrong, because testing involves making computers do nasty stuff. Having the right contact information on the client side really helps. A penetration test can sometimes turn into an unintended denial-of-service (DoS) attack, so the technical team on the client side should be available 24/7 in case a computer goes down and a hard reset is needed to bring it back online.

Note: Penetration testing web applications has the advantage that it can be done in an environment that has been specially built for that purpose, allowing the testers to reduce the risk of negatively affecting the client's productive assets.

Client IT team notifications

Penetration tests are also used as a means to check the readiness of the support staff in responding to incidents and intrusion attempts. You should discuss with the client whether it is an announced or unannounced test. If it's an announced test, make sure that you inform the client of the time and date, as well as the source IP addresses from where the testing (attack) will be done, in order to avoid any real intrusion attempts being missed by their IT security team. If it's an unannounced test, discuss with the client what will happen if the test is blocked by an automated system or a network administrator. Does the test end there, or do you continue testing? It all depends on the aim of the test: whether it's conducted to test the security of the infrastructure or to check the response of the network security and incident handling team. Even if you are conducting an unannounced test, make sure that someone in the escalation matrix knows about the time and date of the test. Web application penetration tests are usually announced.

Sensitive data handling

During test preparation and execution, the testing team will be provided with, and may also find, sensitive information about the company, the system, and/or its users. Sensitive data handling needs special attention in the RoE, and proper storage and communication measures should be taken (for example, full disk encryption on the testers' computers, and encrypting reports if they are sent by email). If your client is covered by regulatory laws such as the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), or the European data privacy laws, only authorized personnel should be able to view personal user data.
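As one concrete illustration of the "encrypt reports before they leave the tester's machine" advice, here is a minimal sketch using the third-party Python cryptography package (Fernet symmetric encryption); the file names are placeholders, and in practice the key itself must be exchanged and stored securely, never emailed alongside the report:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # exchange this out-of-band, not by email
fernet = Fernet(key)

# Encrypt the findings report before it is transmitted or archived
with open("pentest_report.pdf", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("pentest_report.pdf.enc", "wb") as f:
    f.write(ciphertext)

# The recipient recovers the report with fernet.decrypt(ciphertext)
```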
Status meetings and reports

Communication is key to a successful penetration test. Regular meetings should be scheduled between the testing team and the client organization, and routine status reports should be issued by the testing team. The testing team should present how far they have got and what vulnerabilities have been found up to that point. The client organization should also confirm whether their detection systems have triggered any alerts resulting from the penetration attempt. If a web server is being tested and a WAF was deployed, it should have logged and blocked the attack attempts. As a best practice, the testing team should also document the time when each test was conducted. This will help the security team correlate their logs with the penetration tests.

Note: WAFs work by analyzing the HTTP/HTTPS traffic between clients and servers, and they are capable of detecting and blocking the most common attacks on web applications.

To build defenses against web attacks with Kali Linux and understand the concepts of hacking and penetration testing, check out the book Web Penetration Testing with Kali Linux - Third Edition.

Top 5 penetration testing tools for ethical hackers
Essential skills required for penetration testing
Approaching a Penetration Test Using Metasploit

Rust as a Game Programming Language: Is it any good?

Amarabha Banerjee
22 Sep 2018
4 min read
We have moved light years away from the handheld gaming days. The good old Tetris and Mario games were easy to play, low on graphics, and surprisingly difficult to program despite their apparently simple appearance. Although it's difficult to trace the language in which every one of these games was written, many of them were written in the C family of languages, which contributed to the difficulty of programming them. Rust has been touted as one of the successors of C, which brings the question back: if C was difficult to code in, then how exactly is Rust going to be different? The answer lies in Rust's approach.

Rust was designed primarily as a systems programming language by the Mozilla Foundation. The primary game development languages over the past 20 years have been C and C++. Rust brings a fresh change in approach: from object-oriented to data-oriented design. The problem with object-oriented programming was summarized nicely by Catherine West from Chucklefish. According to her, treating game elements such as NPCs and game worlds as objects might work well at a small scale. But when you are trying to create your own game engine, treating game elements as objects implies creating a lot of super-sized objects with complex layers of dependencies. The Rust approach, on the other hand, is data oriented: every element is treated as data. This simplifies the process of creating mid-sized game engines a lot. Chucklefish being a significant name in 2D game development, this statement from Catherine West comes as a major boost for developers who want to use Rust for developing 2D games. She has, however, expressed her doubts about using Rust for 3D game development.

Another important personality who has recently come out in support of Rust is Andrea Pessino, CTO of Ready at Dawn. Ready at Dawn is a well-established game studio known for games such as The Order: 1886, Daxter, and various God of War titles. He voiced this support in a tweet, which is another feather in Rust's cap for game development.

The present state of game development in Rust is quite encouraging. There are quite a few low-level graphics libraries, such as gfx, a low-level abstraction layer over platform-specific graphics interfaces (OpenGL, Metal, Vulkan). It offers some handy wrappers over windowing backends (glutin, the Rust one, or wrappers around the Vulkan system, GLFW, and more). gfx is still at a very early stage of development, with the present version being 0.17. Although major game engines like Unity and Unreal are yet to support Rust for game development, there exist a few complete game engines which allow you to create complete games in Rust using their frameworks.

The first one is Piston. It is the oldest game engine for Rust, and also the most stable, with great documentation. However, many people find Piston confusing and hard to use, as it is super-modular by design. Sometimes it is even hard to understand which module to load to achieve a certain goal or to build a certain component of a game.

Amethyst is a more recent game engine/framework inspired by commercial monolithic game engines. It comes with all the necessary dependencies in its package. However, it is evolving quickly, and hence the present documentation is already outdated. There is, though, a vibrant community which is looking to bring more and more developers into its fold. This gives new developers an opportunity to get into game development with Rust and to get involved with a game engine as well.

ggez is a simple 2D game engine inspired by the LÖVE engine. This library is better suited to creating simple 2D games for hobbyists. ggez is also very new and changes quickly, and its design simplicity is an incentive for indie developers and hobbyists to start creating games with it.

Some other popular libraries include:

- noise-rs, a noise generator
- rlua, high-level bindings between Rust and Lua
- sfxr, a reimplementation of DrPetter's "sfxr" sound effect generator as a Rust library

The conclusion that we can draw from here is that Rust has a lot of promise when it comes to game development. With its data-oriented approach, easy memory management, and access to low-level performance enhancement techniques, Rust can become a fully fledged game development language in the near future.

Best game engines for Artificial Intelligence game development
Implementing Unity game engine and assets for 2D game development
How to use arrays, lists, and dictionaries in Unity for 3D game development

Common big data design patterns

Sugandha Lahoti
08 Jul 2018
17 min read
Design patterns have provided many ways to simplify the development of software applications. Now that organizations are beginning to tackle applications that leverage new sources and types of big data, design patterns for big data are needed. These big data design patterns aim to reduce complexity, boost the performance of integration, and improve the results of working with new and larger forms of data. This article introduces readers to the most common big data design patterns, organized by data layer: the data sources and ingestion layer, the data storage layer, and the data access layer. This article is an excerpt from Architectural Patterns by Pethuru Raj, Anupama Raman, and Harihara Subramanian. In this book, you will learn the importance of architectural and design patterns in business-critical applications.

Data sources and ingestion layer

Enterprise big data systems face a variety of data sources with non-relevant information (noise) alongside relevant (signal) data. The noise ratio is very high compared to the signal, so filtering the noise from the pertinent information, handling high volumes, and coping with the velocity of data are significant tasks. This is the responsibility of the ingestion layer. The common challenges in the ingestion layer are as follows:

- Multiple data source load and prioritization
- Ingested data indexing and tagging
- Data validation and cleansing
- Data transformation and compression

The preceding diagram depicts the building blocks of the ingestion layer and its various components. We need patterns to address the challenges of data source to ingestion layer communication that take care of performance, scalability, and availability requirements. In this section, we will discuss the following ingestion and streaming patterns and how they help to address the challenges in the ingestion layer. We will also touch upon some common workload patterns:

- Multisource extractor
- Multidestination
- Protocol converter
- Just-in-time (JIT) transformation
- Real-time streaming pattern

Multisource extractor

An approach to ingesting multiple data types from multiple data sources efficiently is termed a multisource extractor. Efficiency covers many factors, such as data velocity, data size, data frequency, and managing various data formats over an unreliable network, mixed network bandwidth, and different technologies and systems. The multisource extractor system ensures high availability and distribution. It also ensures that the vast volume of data gets segregated into multiple batches across different nodes. A single-node implementation is still helpful for lower volumes from a handful of clients, and, of course, for a significant amount of data from multiple clients processed in batches. Partitioning into small volumes in clusters produces excellent results. Data enrichers help to do initial data aggregation and data cleansing; enrichers ensure file transfer reliability, validation, noise reduction, compression, and transformation from native formats to standard formats. Collection agent nodes represent intermediary cluster systems, which help with final data processing and data loading to the destination systems.
The following are the benefits of the multisource extractor:

- Provides reasonable speed for storing and consuming the data
- Better data prioritization and processing
- Drives improved business decisions
- Decoupled and independent from data production to data consumption
- Data semantics and detection of changed data
- Scalable and fault-tolerant system

The following are the impacts of the multisource extractor:

- Difficult or impossible to achieve near real-time data processing
- Need to maintain multiple copies in enrichers and collection agents, leading to data redundancy and a mammoth data volume in each node
- High availability is traded off against the high cost of managing system capacity growth
- Infrastructure and configuration complexity increases in order to maintain batch processing

Multidestination pattern

In multisourcing, we saw the raw data ingested into HDFS, but in most common cases the enterprise needs to ingest raw data not only into new HDFS systems but also into their existing traditional data storage, such as Informatica or other analytics platforms. In such cases, the additional number of data streams leads to many challenges, such as storage overflow, data errors (also known as data regret), an increase in the time to transfer and process data, and so on. The multidestination pattern is considered a better approach for overcoming all of the challenges mentioned previously. This pattern is very similar to multisourcing until it is ready to integrate with multiple destinations (refer to the following diagram). The router publishes the improved data and then broadcasts it to the subscriber destinations (already registered with a publishing agent on the router). Enrichers can act as publishers as well as subscribers. Deploying routers in the cluster environment is also recommended for high volumes and a large number of subscribers.

The following are the benefits of the multidestination pattern:

- Highly scalable, flexible, fast, resilient to data failure, and cost-effective
- The organization can start to ingest data into multiple data stores, including its existing RDBMS as well as NoSQL data stores
- Allows you to use simple query languages, such as Hive and Pig, along with traditional analytics
- Provides the ability to partition the data for flexible access and decentralized processing
- Possibility of decentralized computation in the data nodes
- Due to replication on HDFS nodes, there are no data regrets
- Self-reliant data nodes can add more nodes without any delay

The following are the impacts of the multidestination pattern:

- Needs complex or additional infrastructure to manage distributed nodes
- Needs to manage distributed data in secured networks to ensure data security
- Needs enforcement, governance, and stringent practices to manage the integrity and consistency of data
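To make the router and subscriber relationship at the heart of the multidestination pattern concrete, here is a toy sketch in Python; the subscriber functions are stand-ins for real sinks such as HDFS, an RDBMS, or an analytics platform, and the sample record is invented:

```python
class Router:
    """Broadcasts every ingested record to all registered destinations."""

    def __init__(self):
        self.subscribers = []

    def register(self, handler):
        self.subscribers.append(handler)

    def publish(self, record):
        for handler in self.subscribers:
            handler(record)


router = Router()
router.register(lambda r: print(f"HDFS sink received: {r}"))
router.register(lambda r: print(f"RDBMS sink received: {r}"))
router.register(lambda r: print(f"Analytics sink received: {r}"))

router.publish({"sensor_id": 42, "reading": 17.3})  # invented sample record
```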
Protocol converter

This is a mediatory approach that provides an abstraction for the incoming data of various systems. The protocol converter pattern provides an efficient way to ingest a variety of unstructured data from multiple data sources and different protocols. The message exchanger handles synchronous and asynchronous messages from various protocols and handlers, as represented in the following diagram. It performs various mediator functions, such as file handling, web services message handling, stream handling, serialization, and so on. In the protocol converter pattern, the ingestion layer holds responsibilities such as identifying the various channels of incoming events, determining incoming data structures, providing mediated services for multiple protocols into suitable sinks, providing one standard way of representing incoming messages, providing handlers to manage various request types, and providing abstraction from the incoming protocol layers.

Just-in-time (JIT) transformation pattern

The JIT transformation pattern is the best fit in situations where raw data needs to be preloaded in the data stores before the transformation and processing can happen. In this kind of business case, the pattern runs independent preprocessing batch jobs that clean, validate, correlate, and transform the data, and then store the transformed information in the same data store (HDFS/NoSQL); that is, the transformed data can coexist with the raw data. The preceding diagram depicts the data store with raw data storage along with the transformed datasets. Note that the data enricher of the multisource pattern is absent in this pattern, and more than one batch job can run in parallel to transform the data as required in the big data storage, such as HDFS, MongoDB, and so on.

Real-time streaming pattern

Most modern businesses need continuous and real-time processing of unstructured data for their enterprise big data applications. Real-time streaming implementations need to have the following characteristics:

- Minimize latency by using a large in-memory capacity
- Event processors are atomic and independent of each other, and so are easily scalable
- Provide an API for parsing the real-time information
- Independently deployable scripts for any node, with no centralized master node implementation

The real-time streaming pattern suggests introducing an optimum number of event processing nodes to consume the different input data from the various data sources, and introducing listeners to process the generated events in the event processing engine. Event processing engines (event processors) have a sizeable in-memory capacity, and the event processors get triggered by a specific event. The trigger or alert is responsible for publishing the results of the in-memory big data analytics to the enterprise business process engines, and these, in turn, get redirected to various publishing channels (mobile, CIO dashboards, and so on).

Big data workload patterns

Workload patterns help to address data workload challenges associated with different domains and business cases efficiently. The big data design pattern manifests itself in the solution construct, so the workload challenges can be mapped to the right architectural constructs, and the constructs can then service the workload. The following diagram depicts a snapshot of the most common workload patterns and their associated architectural constructs. Workload design patterns help to simplify and decompose the business use cases into workloads, and those workloads can then be methodically mapped to the various building blocks of the big data solution architecture.

Data storage layer

The data storage layer is responsible for acquiring all the data gathered from the various data sources, and it is also responsible for converting (if needed) the collected data into a format that can be analyzed. The following sections discuss the data storage layer patterns.
ACID versus BASE versus CAP

A traditional RDBMS follows atomicity, consistency, isolation, and durability (ACID) to provide reliability for any user of the database. However, searching high volumes of big data and retrieving data from those volumes consumes an enormous amount of time if the storage enforces ACID rules. So, big data storage instead follows basically available, soft state, eventually consistent (BASE) semantics for any search in the big data space. Database theory also suggests that a NoSQL big data store can predominantly satisfy only two of three properties and must relax its standards on the third; those properties are consistency, availability, and partition tolerance (CAP). With the ACID, BASE, and CAP paradigms, the big data storage design patterns have gained momentum and purpose. We will look at those patterns in some detail in this section. The patterns are:

- Façade pattern
- NoSQL pattern
- Polyglot pattern

Façade pattern

This pattern provides a way to use existing or traditional data warehouses along with big data storage (such as Hadoop). It can act as a façade for the enterprise data warehouses and business intelligence tools. In the façade pattern, the data from the different data sources gets aggregated into HDFS before any transformation, or even before loading into the traditional existing data warehouses. The façade pattern allows structured data storage even after ingestion into HDFS, in the form of structured storage in an RDBMS, in NoSQL databases, or in a memory cache. The façade pattern ensures a reduced data size, as only the necessary data resides in the structured storage, as well as faster access to the storage.

NoSQL pattern

This pattern entails getting NoSQL alternatives in place of a traditional RDBMS to facilitate the rapid access and querying of big data. The NoSQL database stores data in a columnar, non-relational style. It can store data on local disks as well as in HDFS, as it is HDFS aware; thus, data can be distributed across data nodes and fetched very quickly. Let's look at four types of NoSQL databases in brief:

- Column-oriented DBMS: Simply called a columnar store or big table data store, it has a massive number of columns for each tuple. Each column has a column key. Column family qualifiers represent related columns so that the columns and the qualifiers are retrievable, as each column has a column key as well. These data stores are suitable for fast writes.

- Key-value pair database: A key-value database is a data store that, when presented with a simple string (the key), returns an arbitrarily large piece of data (the value). The key is bound to the value until the value is reassigned or removed from the database. A key-value data store does not need a query language; it provides a way to add and remove key-value pairs. A key-value store is a dictionary kind of data store: it holds a list of words, and each word has one or more definitions.

- Graph database: This is a representation of a system that contains a sequence of nodes and relationships that creates a graph when combined. A graph represents three data fields: nodes, relationships, and properties. Some types of graph store are referred to as triple stores because of their node-relationship-node structure. You may be familiar with applications that provide evaluations of similar or likely characteristics as part of a search (for example, "a user who bought this item also bought..." is a good illustration of a graph store implementation).
- Document database: We can represent a document data store as a tree structure. Document trees have a single root element, or sometimes even multiple root elements. Note that there is a sequence of branches, sub-branches, and values beneath the root element. Each branch can have an expression or relative path to determine the traversal path from the origin node (the root) to any given branch, sub-branch, or value. Each branch may have a value associated with it. Sometimes the existence of a branch of the tree has a specific meaning, and sometimes a branch must have a given value to be interpreted correctly.

The following list summarizes some of the NoSQL use cases, providers, tools, and scenarios that might call for the NoSQL pattern. Most of these pattern implementations are already part of various vendor offerings; they come out of the box and as plug and play, so that any enterprise can start leveraging them quickly.

- Columnar database. Scenario: applications that need to fetch an entire related column family based on a given string, for example, search engines. Vendors/tools: SAP HANA, IBM DB2 BLU, ExtremeDB, EXASOL, IBM Informix, MS SQL Server, MonetDB.

- Key-value pair database. Scenario: needle-in-a-haystack applications (refer to the big data workload patterns given in this section). Vendors/tools: Redis, Oracle NoSQL DB, Linux DBM, Dynamo, Cassandra.

- Graph database. Scenario: recommendation engines, and applications that provide evaluations of "similar to" or "like", for example, a user that bought this item also bought... Vendors/tools: ArangoDB, Cayley, DataStax, Neo4j, Oracle Spatial and Graph, Apache OrientDB, Teradata Aster.

- Document database. Scenario: applications that evaluate churn management of social media data or non-enterprise data. Vendors/tools: CouchDB, Apache Elasticsearch, Informix, Jackrabbit, MongoDB, Apache SOLR.

Polyglot pattern

In the polyglot pattern, traditional storage (an RDBMS) and multiple other storage types (files, CMS, and so on) coexist with big data types (NoSQL/HDFS) to solve business problems. Most modern business cases need legacy databases to coexist with the latest big data techniques; replacing the entire system is neither viable nor practical. The polyglot pattern provides an efficient way to combine and use multiple types of storage mechanism, such as Hadoop and an RDBMS, with big data appliances coexisting in the storage solution. The preceding diagram represents the polyglot way of storing data in different storage types, such as an RDBMS, key-value stores, NoSQL databases, CMS systems, and so on. Unlike the traditional way of storing all the information in a single data source, polyglot persistence routes data coming from all applications across multiple sources (RDBMS, CMS, Hadoop, and so on) into different storage mechanisms, such as in-memory, RDBMS, HDFS, CMS, and so on.
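A minimal sketch of the polyglot idea is shown below, writing the same record to a relational store (Python's built-in sqlite3) and to a key-value cache (a plain dict standing in for Redis or a similar store); the table and field names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

kv_cache = {}  # stand-in for an in-memory key-value store

def save_customer(customer_id, name):
    # System of record: the relational database
    conn.execute("INSERT INTO customers VALUES (?, ?)", (customer_id, name))
    conn.commit()
    # Fast-access copy: the key-value cache
    kv_cache[f"customer:{customer_id}"] = name

save_customer(1, "Acme Corp")
print(conn.execute("SELECT * FROM customers").fetchall())  # [(1, 'Acme Corp')]
print(kv_cache["customer:1"])                              # Acme Corp
```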
Data access layer

Data access in traditional databases involves JDBC connections and HTTP access for documents. In big data, however, conventional data access takes too much time to fetch data, even with cache implementations, because the volume of data is so high. So we need a mechanism to fetch the data efficiently and quickly, with a reduced development life cycle, lower maintenance costs, and so on. Data access patterns mainly focus on accessing big data resources of two primary types:

- End-to-end user-driven API (access through simple queries)
- Developer API (access provided through API methods)

In this section, we will discuss the following data access patterns, which help to achieve efficient data access, improved performance, reduced development life cycles, and low maintenance costs for broader data access:

- Connector pattern
- Lightweight stateless pattern
- Service locator pattern
- Near real-time pattern
- Stage transform pattern

The preceding diagram represents the big data architecture layouts where the big data access patterns help with data access. We discuss the whole of that mechanism in detail in the following sections.

Connector pattern

The developer API approach entails fast data transfer and data access services through APIs. It creates optimized data sets for efficient loading and analysis. Some big data appliances abstract the data in NoSQL databases even though the underlying data is in HDFS or a custom implementation of a filesystem, so that data access is very efficient and fast. The connector pattern entails providing a developer API and an SQL-like query language to access the data, and so gain significantly reduced development time. As we saw in the earlier diagram, big data appliances come with connector pattern implementations. The big data appliance itself is a complete big data ecosystem; it supports virtualization, redundancy, and replication using protocols (RAID), and some appliances host NoSQL databases as well. The preceding diagram shows a sample connector implementation for Oracle big data appliances. The data connector can connect to Hadoop and to the big data appliance as well. It is an example of a custom implementation that we described earlier to facilitate faster data access with less development time.

Lightweight stateless pattern

This pattern entails providing data access through web services, and so it is independent of platform or language implementations. The data is fetched through RESTful HTTP calls, making this pattern the most sought after in cloud deployments. WebHDFS and HttpFS are examples of lightweight stateless pattern implementations for HDFS HTTP access. They use the HTTP REST protocol. The HDFS system exposes a REST API (web services) for consumers who analyze big data. This pattern reduces the cost of ownership (pay-as-you-go) for the enterprise, as the implementations can be part of an integration Platform as a Service (iPaaS). The preceding diagram depicts a sample implementation for HDFS storage that exposes HTTP access through the HTTP web interface.
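As a rough sketch of what this stateless, HTTP-only access looks like in practice, the snippet below lists a directory over the WebHDFS REST API using the Python requests library; the namenode host, port, path, and user name are placeholders for your own cluster:

```python
import requests

NAMENODE = "http://namenode.example.com:9870"  # placeholder host and port
HDFS_PATH = "/data/ingest/events"              # placeholder directory

resp = requests.get(
    f"{NAMENODE}/webhdfs/v1{HDFS_PATH}",
    params={"op": "LISTSTATUS", "user.name": "analyst"},
    timeout=10,
)
resp.raise_for_status()

# Print the name and size of each file in the directory
for status in resp.json()["FileStatuses"]["FileStatus"]:
    print(status["pathSuffix"], status["length"])
```

Because every call is a self-contained HTTP request, the client needs no Hadoop libraries or cluster-side session state.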
Near real-time pattern

For any enterprise that wants to implement real-time or near real-time data access, the key challenges to be addressed are:

- Rapid determination of data: ensure rapid determination of the data and make swift decisions (within a few seconds, not minutes) before the data becomes meaningless
- Rapid analysis: the ability to analyze the data in real time, spot anomalies and relate them to business events, provide visualization, and generate alerts at the moment the data arrives

Some examples of systems that need real-time data analysis are:

- Radar systems
- Customer service applications
- ATMs
- Social media platforms
- Intrusion detection systems

Storm and in-memory applications such as Oracle Coherence, Hazelcast IMDG, SAP HANA, TIBCO, Software AG (Terracotta), VMware, and Pivotal GemFire XD are some of the in-memory computing vendor/technology platforms that can implement near real-time data access pattern applications. As shown in the preceding diagram, with a multi-cache implementation at the ingestion phase, and with filtered, sorted data in multiple storage destinations (here, one of the destinations is a cache), one can achieve near real-time access. The cache can be a NoSQL database, or it can be any in-memory implementation tool, as mentioned earlier. The preceding diagram depicts a typical implementation of a log search with SOLR as the search engine.

Stage transform pattern

In the big data world, a massive volume of data can get into the data store; however, not all of the data is required or meaningful in every business case. The stage transform pattern provides a mechanism for reducing the data scanned and fetching only relevant data. HDFS holds the raw data, and business-specific data sits in a NoSQL database that can provide application-oriented structures and fetch only the relevant data in the required format. Combining the stage transform pattern and the NoSQL pattern is the recommended approach in cases where a reduced data scan is the primary requirement. The preceding diagram depicts one such case for a recommendation engine, where we need a significant reduction in the amount of data scanned for an improved customer experience. The virtualization of data from HDFS to a NoSQL database, integrated with a big data appliance, is a highly recommended mechanism for rapid or accelerated data fetches.

We have discussed big data design patterns by layer: the data sources and ingestion layer, the data storage layer, and the data access layer. To know more about patterns associated with object-oriented, component-based, client-server, and cloud architectures, read our book Architectural Patterns.

Why we need Design Patterns?
Implementing 5 Common Design Patterns in JavaScript (ES8)
An Introduction to Node.js Design Patterns

How to plan a system migration in 10 steps

Hari Vignesh
14 Nov 2017
6 min read
How do I plan a system migration? A system migration refers to the process of moving an application from one environment to another (such as from an on-premises enterprise server to a cloud-based environment, from one server to another, or from cloud to cloud). You might, for example, migrate custom-built applications to or from platforms like Microsoft Azure, the Google App Engine, Force.com, MySQL, or Amazon Web Services. Software migration is always a challenge, but fortunately many system migrations can be managed, and even automated, by a third-party middleware solution. Sometimes a system migration might be something smaller scale. You might want to move installed applications and data from one piece of hardware to another (as you would when you give your team new computers), rather than moving an app's entire development environment. While this is pretty easy in technical terms, making sure it's carefully managed for users is nevertheless important.

Why migration?

Migrations are done to improve efficiency or to bring all applications from a legacy system into a current one. That's why it's becoming such a pressing issue for many organizations as they seek to undergo 'digital transformation' or optimize their existing setup. Often, organizations will want to virtualize their software. This is ultimately about disassociating it from specific operating systems, instead hosting programs in separate environments for sandboxing at runtime. Here are some migration scenarios:

Example 1: You want to move your team using Adobe Creative Cloud (CC) from old PCs to new Macs. You need to ensure that once team members are working on Macs with Adobe CC installed, they're still able to use paths to the server to access all creative assets.

Example 2: Your team uses custom software developed on one type of cloud environment, like Amazon Web Services (AWS), and now your organization is moving en masse to Google Cloud Platform (GCP). You need to map each piece of functionality your app had on AWS to GCP, despite the major differences in how each environment operates.

How to successfully plan a system migration

The hard bit is actually planning and executing a system migration. There's a lot that can go wrong from both a technical and a people perspective. Here are 10 steps you should follow to remain (relatively) calm and in control when you're making a move.

1. Establish your cross-functional representatives. Because of the many hands required to see a software migration project through, and its long timeframe and far-off ROI, you need a champion in your corner from every corner of the business. Get one key representative from each business function relevant to the software that's moving, be it production, sales, accounting, IT, or another department. These people will help you gain continued support for the project as it continues without ROI yet and comes under budget threats during, say, a lean quarter.

2. Frame the project for stakeholders. Be it department heads, the C-suite, or the board of directors, lay out the plan and just how essential it is for growth. Set up what the project entails, what it isn't, and lay out goalposts for each phase. Whenever it comes under review, you'll have this initial framework you and stakeholders agreed upon.

3. Build a team of internal experts. Find technical experts within your organization who can assist with each part of the migration, even if you're ultimately using a third-party vendor or software for the migration.
Put these people in charge of cleaning existing data or writing programs to clean it, knowing where everything is stored, and understanding the limitations of the platforms on each end of the migration. Depending on the size of your organization, each member of this team may lead their own small team to handle their portion of the project.

4. Take inventory of assets. There's no way to judge a migration as successful if you're not sure whether you lost any data along the way. In the case of data, some of your internal experts can check what is stored, make backups, and export to lightweight .CSV files or hard copies (in the case of legal and other vital documentation). For software or applications, take inventory of each action and function possible with the software, how it interfaces with its databases, what it's compatible with and what it isn't, and the unique custom configurations that separate it from off-the-shelf software's documentation.

5. Create a risk assessment report. Using the section above on challenges, determine all relevant risks to the migration, including opportunity costs and compliance issues. This will be vital for getting final approval from stakeholders, and will insulate project runners from being blindsided later. One of the many available risk assessment matrix templates can help you get started.

6. Determine technical, time, and financial requirements. Work with individuals in finance to work out long-term budget needs and rates of approval over the whole project. Work with IT, developers, and engineering to figure out the technical aspects and requirements, what method of migration is appropriate, and who will be forced into downtime at what stages of the project. Compile all of this to figure out realistic timing and checkpoints in the migration.

7. Create a project management system for all parties. With the data you gathered in the previous step, and all the teams you've assembled (technical, cross-functional, and stakeholder teams), create a common project management hub where everyone can see progress, send messages, attach files and findings, and generally lend visibility into the process. It should be intuitive for all users. Set up the project management software with the budget and time expectations agreed upon for each phase. You can present this information to the stakeholders for final approval prior to project kickoff, and use it to submit regular reports to them as they request.

8. Perform the migration in phases. Depending on the appropriate methods, perform the migration and document every step. Use the project management tool to keep everyone informed and gather documentation. Along the way, when some employees inevitably leave or get added to the team, you can use this tool to quickly get them up to speed.

9. Test cases after each phase. After each phase, test whatever you've migrated into the new environment, and document the outcomes. Regular testing and sandboxing will allow your team to catch problems early and regroup or change direction before data is lost and progress is wasted.

10. Results. Once the migration is complete, record the final results and compare them to the goalposts set up and tracked in your project management tool. Combine all documentation and deliver a final report to stakeholders, and begin reaping the rewards of your newer, faster, better software, operating system, cloud environment, or whatever else you migrated.

By following the steps above, you should find your system migration a little more stress-free than it might otherwise be!
Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

Top frameworks for building your Progressive Web Apps (PWA)

Sugandha Lahoti
15 May 2018
6 min read
The hype and rise of progressive web apps is tremendous. A PWA is basically a web application that feels like a native application to the user. By making your app a PWA, not only do you acquire new users, but you can also retain them for longer. Here's a quick rundown of all things good about a PWA.

Reliable: Loads instantly even under poor network conditions.
Lightning fast and app-like: Responds to the user's actions quickly and with smooth interactions.
Engaging and responsive: Gives the feeling that it was made specifically for that device, but it should be able to work across all platforms.
Protected and secure: Served over HTTPS to make sure the contents of the app are not messed with.

If you're not already developing your next app as a PWA, here are 5 reasons why you should do that asap. And if you're confused about choosing the best framework for developing your PWA, here are the top 3 frameworks available to make your next app a PWA.

Ionic

Ionic is one of the most popular frameworks for building a progressive web app. Let's look at a few reasons why you should choose Ionic as your PWA framework.

Free and open-source: Ionic is open source, and licensed under MIT. Open source means developers can manage the code structure easily, saving time, money and effort. They also have a worldwide community forum to connect with other Ionic developers, ask questions, and help out others.

Cross-platform and one codebase: Ionic allows seamless building of apps across popular operating systems, such as Android, iOS and Windows. It has a one-codebase feature. This means apps are deployed through Apache Cordova with a single code base, and the application adapts automatically to the device it is functioning on.

Rich UI: Ionic is equipped with pre-built components that are used to customize design themes and elements. It is based on SASS UI, with rich features to create fast, robust, interactive, native-like applications.

Powerful functionality: Ionic is supported by Angular. The component API of Angular helps developers create interactive hybrid and web apps. Ionic is equipped with Cordova plugins for accessing various native features, like the camera, GPS, and so on. It also features a powerful CLI for building, testing and deploying apps across multiple platforms.

Read our Hybrid Mobile Development with Ionic to build a complete, professional-quality, hybrid mobile application with Ionic. You can also check out Hybrid Mobile apps: What you need to know, for a quick rundown of all that is to know about a hybrid mobile app.

Polymer

Google's Polymer App Toolbox is another contender for the development of PWAs. It is a collection of web components, tools, and templates for building Progressive Web Apps.

Blends PWAs with Web Components: Polymer allows developers to architect a component-based web app using Polymer and Web Components. Web components can form encapsulated and reusable custom HTML elements. They are independent of frameworks because they are made of pure HTML/CSS/JS, unlike framework-dependent UI components in React/Angular. The web components are provided through a lightweight Polymer Library for creating framework-independent, custom components. More features include:

Responsive design using the app layout components.
Modular routing using the <app-route> elements.
Localization with <app-localize-behavior>.
Turnkey support for local storage with app storage elements.
Offline caching as a progressive enhancement, using service workers.
Build tooling to support serving the app multiple ways: unbundled for delivery over HTTP/2 with server push, and bundled for delivery over HTTP/1.

Each component, whether used separately or together, can be used to build a full-featured progressive web app. Most importantly, each component is additive. For a simple app one only needs the app layout. As it gets more complicated, developers can add routing, offline caching, and a high-performance server as required. Read our Getting Started with Polymer book to create responsive web apps using Polymer.

Angular

Angular, probably the most popular front-end web application platform, can also be used to make robust, reliable, and responsive PWAs. Before the release of version 5, supporting progressive web apps in Angular required a lot of expertise on the developers' part. Version 5 comes equipped with a new version of the Angular Service Worker for built-in PWA support. Angular 6 (released a few days back) has two new CLI commands. Both these versions make it very simple to make a web application downloadable and installable, just like a native mobile application.

Service Worker updates: With Angular 5 the development of service workers is becoming significantly easier. By using the Angular CLI, developers can choose to add service worker functionality by default. The Angular service worker functionality is provided by the module @angular/service-worker. A service worker can power up an application with only some JSON configuration instead of writing the code manually. The key difference from other service worker generators (like Workbox and sw-precache) is the fact that you do not re-generate the service worker file itself, you only update its control file.

New CLI commands: Angular 6 also introduces two new commands apart from the service worker updates. The first, ng update, is a CLI command for updating dependencies and code. The second command, ng add, supports turning applications into progressive web apps, which support offline web pages.

Apart from these frameworks, React is also a good alternative. Backed by Facebook, it has the Create React App generator, which is the official scaffolding tool to generate a React.js app. Get started with Scott Domes's Progressive Web Apps with React as your first step for building PWA applications. Yet another popular choice would be Webpack. Webpack plugins can generate the service worker and manifest required for a PWA to be registered. It uses a Google project called Workbox, which provides tools that help make offline support for web apps easier to set up.

The bottom line is that the frameworks for building progressive web apps are growing and expanding at a rapid rate, with regular updates every couple of months. Choosing a particular framework thus doesn't make much difference to the app behavior. It only depends on the developer's area of interest and expertise.

Windows launches progressive web apps… that don't yet work on mobile
How to Secure and Deploy an Android App
How Android app developers can convert iPhone apps

Is Dart programming dead already?

Amarabha Banerjee
26 Jul 2018
3 min read
Dart is an open-source, object-oriented, general-purpose programming language developed by Google in 2011. Dart uses a 'C' style syntax and optionally transcompiles into JavaScript. It is used for both client-side and server-side web development. Dart is also being used for native and cross-platform mobile development. In spite of all the capabilities that Dart possesses, there are a few rumors that Dart is heading nowhere. Is there any truth to these rumors? Through this article, we will see how Dart can turn the tide around with some key new projects and application areas that are being explored recently. But first, let's talk about its recent TIOBE ranking - the de-facto standard for gauging the popularity of programming languages. The latest rankings show Dart at the 24th spot out of the 50 languages that TIOBE tracks, behind languages like R, Delphi, and even Swift, which was released in 2014, three years after Dart was introduced. Even languages like Delphi and R have seen a recent spike in implementation across various application domains. Codementor ranks Dart at #1 in the list of programming languages one should not learn in 2018. They looked at community engagement, growth, and the job market to arrive at this conclusion.

Source: Codementor.io

The job trends for Dart have also been quite stagnant for some time. After a minor dip between 2015 and 2016, the job trends are now back at the 2014 mark. There has been neither growth nor decline. The major cause of concern for Dart has been that very few companies are using Dart in their development stack. Other than Google-backed products such as AdWords, Google Fiber, and Flutter, only a few other major companies like Workiva, Adobe, and Blossom have actively implemented Dart. The job listings are also similar in number to 2014. The developer salaries are pretty high, but the lack of sustained growth balances that advantage out. Google had also shifted to TypeScript as the official language for its most popular front-end development framework, Angular. This was also seen as a minor setback for Dart. When compared to other late entrants like Swift, is this a sign of worry for Dart?

Source: ITJobswatch.uk

The answer is definitely 'No'. The reasons are Dart's ease of use, lack of boilerplate, and extremely lightweight nature. Developers have termed it a language for the long run. These predictions got a major boost when Google recently announced their latest cross-platform mobile development framework, Flutter, which was written in Dart. The reviews of Flutter have also been encouraging, since it paves the way for simpler, native-like mobile apps. The popularity of Flutter could mean a revival of Dart in the mobile development scenario. But Google might have even bigger aspirations with Dart. Google might be thinking of a possible replacement for its flagship Android operating system, which is plagued by irregular update cycles due to the multiple instances of it running on different devices. Google is developing an operating system called Fuchsia, which is also being written in Dart. All of these point in one single direction: Dart is here to stay. If everything goes as per Google's vision, Fuchsia will bring Dart to the forefront of mobile development.

**edited for better clarity on Dart's comparison to other languages like Swift, Delphi and R and elaborated on Dart's real world use cases.

Read Next
Why Google Dart Will Never Win The Battle For The Browser
Building Games with HTML5 and Dart
Google Fuchsia: What's all the fuss about?

5 examples of Artificial Intelligence in Web apps

Sugandha Lahoti
20 Aug 2018
7 min read
Modern-day web app development is increasingly focused on building a customer-facing front-end presence with the use of Artificial Intelligence. Web apps use Artificial Intelligence not just for intelligent automation, but also for building recommendation engines, website implementation, and image recognition, among other application areas. In this post, we look at five key areas, illustrated by real-world examples, where web apps are employing Artificial Intelligence to automate some part of their system.

Recommendation engines of Amazon and Netflix

Curating content based on the user's context is one of the most widely used AI features in web apps. Amazon, for instance, uses item-based collaborative filtering for product classification. Amazon's recommendation system uses a combination of goods-based recommendation (users are recommended items similar to those they liked in the past) and buddy-based recommendation (users are recommended things their Facebook friends like). Not just for its recommendation system, Amazon has been using AI for multiple tasks. Their AI management strategy is called The Flywheel, where one part of Amazon acts as a catalyst for AI and machine learning growth in other areas.

Read more: Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics

Another popular example is Netflix, who revamped their recommendation algorithm based on visual impressions. One of their research projects indicated that the artwork was not only the biggest influencer on a viewer's decision to watch content, but also drew over 82% of their focus while browsing Netflix. This made them develop a new image recommendation algorithm which works in real time to project the image it thinks the user will respond to. They use implicit data (user behavior) and explicit data (user activity) and then feed this data to machine learning algorithms to figure out the relevant content for each user. For each title, users get the image with the highest rank based on their profile. In parallel, it continues collecting data from its 100 million other subscribers to improve its engine's performance.

Read more: What software stack does Netflix use?

Google and Microsoft using image recognition

Image recognition can serve multiple uses for web apps, including object and pattern recognition, locating duplicates (exact or partial), image search by fragments, and more. Two such unique applications of image recognition are Google's Quick, Draw! and Microsoft's Captionbot.ai.

Quick, Draw! is Google's AI-powered web app game, where users have to draw an everyday object that a neural network tries to recognize. Players are given 20 seconds to draw a random item, and Google's neural network tries to match it with 50 million hand-drawn sketches by other players to identify the correct one. Quick, Draw! aims to generate the world's largest doodling data set, which is shared publicly to help further machine learning research. The data preserves user privacy by collecting only anonymous metadata, including timestamp, country code, whether or not the drawing was recognized, and which word the drawing corresponded to. This dataset was used in SketchRNN, a neural network that can draw words and interpolate between drawings.

Another image recognition web app is Microsoft's Captionbot.ai. The system can automatically generate a caption for an uploaded photograph. Users can rate how accurately it has detected what was on display.
The algorithm learns from the ratings to make the captions more accurate. It uses three separate services to process the images. The Computer Vision API identifies the components of the photo, then mixes them with data from the Bing Image API, and runs any faces it spots through the Emotion API. The Emotion API analyses facial expressions to detect anger, contempt, disgust, fear, and other traits. Based on the results from these APIs, the caption is generated.

Google Docs powered by Natural Language Processing

Modern web apps can also be fueled with cognitive capabilities to make them stand apart from other apps. Instances of this include transforming human speech to text or conversing with people in natural language. One such example of a web app which includes natural language processing is Google Docs. Google Docs and Slides have an Explore feature to show text, images, and other content relevant to the document that a user is working on at any given point. Docs can also use natural language to search through data and reports, and automatically generate formulas in Sheets. Google Docs recently incorporated an AI grammar checker, announced at Google Cloud Next. It uses a machine translation algorithm to recognize errors and suggest corrections as users type. Google Docs can also be integrated with the Natural Language API to recognize the sentiment of selected text in a Google Doc and highlight it based on that sentiment.

Web-based Artificial Intelligence chatbots

Web-based chatbots are just like app-based chatbots, except that they interact with users in the website browser. They use AI techniques such as natural language understanding and pattern recognition to store and distinguish the context of the information provided and elicit a suitable response for future replies. An example of web-based chatbots is live chat bots, where the conversation with a visitor on a website is automated using a chatbot. Many live chat software companies are already experimenting with chatbots. Examples include the Operator bot used by Intercom, a company building a customer messaging platform, or Driftbot by Drift, which gives your website a personal assistant.

Read More: Top 4 chatbot development frameworks for developers

Another example is AI-based chatbots which help in creating full websites. Right Click is a startup that introduced a chatbot which uses Artificial Intelligence in a conversational interface to create websites. It asks general questions during the conversation like "What industry you belong to?" and "Why do you want to make a website?" and creates customized templates as per the given answers. Similarly, Wix's Artificial Intelligence Design bot can tailor websites by learning about each person's or business' own needs.

Web-based code helpers using AI

Intelligent coding assistants are gaining popularity with their ability to understand code and provide the right suggestions at the right time. They can analyze code on the web and give fast and smart completions. Codota for Chrome is a smart web-based IDE which can build predictive models of code and suggest code completions and related content based on the current context present in the code. It combines program analysis, natural language processing, and machine learning to learn from the code. Users can look for Codota's icon on every code snippet in their browsers, in GitHub, StackOverflow, and others. Another example is Deep Cognition's Deep Learning Studio – Cloud.
It is not exactly an IDE, but it features an AI-powered drag-and-drop interface to help design deep learning models with ease. It features assisted modeling for automated tensor size calculations and real-time validation. It also has an AutoML feature to automatically build a neural network.

Even though AI is a great choice to enhance your web apps, an important facet to keep in mind is ensuring the fairness, accuracy, and transparency of those apps. For instance, web apps powered by natural language should not discriminate against people based on caste, color, or creed, or hurt user sentiments. Similarly, those using neural networks for recognizing images should ensure the filtering of obscene images. Creating such artificial intelligence systems would require a hybrid team of designers, programmers, ML engineers, and researchers. This collective group will have a good grasp of user experience, be comfortable thinking in abstracts and algorithms, and be equally well versed in the social impacts of artificial intelligence.

Read More: 20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017

Uber introduces Fusion.js, a plugin-based web development framework for high-performance apps.
Electron Fiddle: A 'code playground' for experimenting with cross-platform native apps.
Warp: Rust's new web framework for implementing WAI (Web Application Interface)
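To ground the recommendation-engine discussion above, here is a minimal item-based collaborative filtering sketch using NumPy and cosine similarity. The rating matrix is a toy example; real systems such as Amazon's operate at a vastly larger scale and with far more sophisticated models, so treat this purely as an illustration of the general idea.

```python
import numpy as np

# Toy user x item rating matrix (0 = not rated). Rows: users, columns: items.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(a @ b) / denom

# Item-item similarity matrix built from the rating columns.
n_items = ratings.shape[1]
item_sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                      for j in range(n_items)] for i in range(n_items)])

def recommend(user_idx, top_n=2):
    scores = ratings[user_idx] @ item_sim       # weight items by similarity to rated ones
    scores[ratings[user_idx] > 0] = -np.inf     # drop items the user already rated
    return np.argsort(scores)[::-1][:top_n]

print(recommend(1))  # item indices suggested for user 1
```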

6 common use cases of Reverse Proxy scenarios

Guest Contributor
05 Oct 2018
6 min read
Proxy servers are used as intermediaries between a client and a website or online service. By routing traffic through a proxy server, users can disguise their geographic location and their IP address. Reverse proxies, in particular, can be configured to provide a greater level of control and abstraction, thereby ensuring the flow of traffic between clients and servers remains smooth. This makes them a popular tool for individuals who want to stay hidden online, but they are also widely used in enterprise settings, where they can improve security, allow tasks to be carried out anonymously, and control the way employees are able to use the internet.

What is a Reverse Proxy?

A reverse proxy server is a type of proxy server that usually exists behind the firewall of a private network. It directs any client requests to the appropriate server on the backend. Reverse proxies are also used as a means of caching common content and compressing inbound and outbound data, resulting in a faster and smoother flow of traffic between clients and servers. Furthermore, the reverse proxy can handle other tasks, such as SSL encryption, further reducing the load on web servers. There is a multitude of scenarios and use cases in which having a reverse proxy can make all the difference to the speed and security of your corporate network. By providing you with a point at which you can inspect traffic and route it to the appropriate server, or even transform the request, a reverse proxy can be used to achieve a variety of different goals.

Load balancing to route incoming HTTP requests

This is probably the most familiar use of reverse proxies for many users. Load balancing involves the proxy server being configured to route incoming HTTP requests to a set of identical servers. By spreading incoming requests across these servers, the reverse proxies are able to balance out the load, sharing it amongst them equally. The most common scenario in which load balancing is employed is when you have a website that requires multiple servers. This happens due to the volume of requests, which is too much for one server to handle efficiently. By balancing the load across multiple servers, you can also move away from an architecture that features a single point of failure. Usually, the servers will all be hosting the same content, but there are also situations in which the reverse proxy will retrieve specific information from one of a number of different servers.

Provide security by monitoring and logging traffic

By acting as the mediator between clients and your system's backend, a reverse proxy server can hide the overall structure of your backend servers. This is because the reverse proxy will capture any requests that would otherwise go to those servers and handle them securely. A reverse proxy can also improve security by providing businesses with a point at which they can monitor and log traffic flowing through their network. A common use case in which a reverse proxy is used to bolster the security of a network would be the use of a reverse proxy as an SSL gateway. This allows you to communicate using HTTP behind the firewall without compromising your security. It also saves you the trouble of having to configure security for each server behind the firewall individually. A rotating residential proxy, also known as a backconnect proxy, is a type of proxy that frequently changes the IP addresses and connections that the user uses.
This allows users to hide their identity and generate a large number of requests without setting off alarms. A reverse rotating residential proxy can be used to improve the security of a corporate network or website. This is because the servers in question will display the information for the proxy server while keeping their own information hidden from potential attackers.

No need to install certificates on your backend servers with SSL termination

The SSL termination process occurs when an SSL connection ends at the server and the traffic shifts between encrypted and unencrypted requests. By using a reverse proxy to handle any incoming HTTPS connections, you can have the proxy server decrypt the request, and then pass on the unencrypted request to the appropriate server. Taking this approach offers practical benefits. For example, it eliminates the need to install certificates on your backend servers. It also provides you with a single configuration point for managing SSL/TLS. Removing the need for your web servers to undertake this decryption means that you are also reducing the processing load on the server.

Serve static content on behalf of backend servers

Some reverse proxy servers can be configured to also act as web servers. Websites contain a mixture of dynamic content, which changes over time, and static content, which always remains the same. If you can configure your reverse proxy server to serve static content on behalf of backend servers, you can greatly reduce the load, freeing up more power for dynamic content rendering. Alternatively, a reverse proxy can be configured to behave like a cache. This allows it to store and serve content that is frequently requested, thereby further reducing the load on backend servers.

URL rewriting before requests go on to the backend servers

Anything that a business can do to easily improve its SEO score is worth considering. Without an investment in your SEO, your business or website will remain invisible to search engine users. With URL rewriting, you can compensate for any legacy systems you use which produce URLs that are less than ideal for SEO. With a reverse proxy server, the URLs can be automatically reformatted before they are passed on to the backend servers.

Combine different websites into a single URL space

It is often desirable for a business to adopt a distributed architecture whereby different functions are handled by different components. With a reverse proxy, it is easy to route a single URL to a multitude of components. To anyone who uses your URL, it will simply appear as if they are moving to another page on the website. In fact, each page within that URL might actually be connecting to a completely different backend service. This is an approach that is widely used for web service APIs.

To sum up, the primary function of a reverse proxy is load balancing, ensuring that no individual backend server becomes inundated with more traffic or requests than it can handle. However, there are a number of other scenarios in which a reverse proxy can potentially offer enormous benefits.

About the author

Harold Kilpatrick is a cybersecurity consultant and a freelance blogger. He's currently working on a cybersecurity campaign to raise awareness around the threats that businesses can face online.

Read Next
HAProxy introduces stick tables for server persistence, threat detection, and collecting metrics
How to Configure Squid Proxy Server
Acting as a proxy (HttpProxyModule)
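As a conceptual illustration of the load-balancing use case described above, the short Python sketch below shows round-robin backend selection, the simplest strategy a reverse proxy can apply. The backend addresses are hypothetical, and a real proxy such as NGINX or HAProxy does this (and much more) in its own configuration language; this is only meant to show the idea of spreading requests evenly.

```python
import itertools

# Hypothetical pool of identical backend servers sitting behind the proxy.
BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
_rotation = itertools.cycle(BACKENDS)

def pick_backend():
    """Round-robin selection: each request goes to the next server in turn."""
    return next(_rotation)

for request_id in range(6):
    print(f"request {request_id} -> {pick_backend()}")
```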

9 reasons to choose Agile Methodology for Mobile App Development

Guest Contributor
01 Oct 2018
6 min read
As mobile application development becomes the new trend in the business world, the competition to enter the mobile market has intensified. Business leaders are hunting for every possible way to reach the market at the earliest and outshine the competition, albeit without compromising on the quality and quantity of the opportunities in the market. One such method prevalent in the market is the adoption of the Agile methodology.

Agile mobile app development methodology

The Agile methodology is an incremental and iterative mobile application development approach, where the complete app development process cycle is divided into multiple sub-modules, considered as mini-projects. Every sub-module is assigned to an individual team and subjected to the complete development cycle, right from design to development, testing, and delivery.

Image Source: AppInventiv

Benefits of the Agile mobile app development approach

Due to this iterative nature, the methodology is highly recommended in the market. Here are 9 reasons to choose Agile for your mobile app development project.

Faster development

In the case of the Agile model, the complete mobile app project is divided into smaller modules which are treated like independent sub-projects. These sub-projects are handled by different teams independently, with little to no dependencies on each other. Besides, everyone has a clear idea of what their contribution will be and the associated resources and deadline, which accelerates the development process. Every developer puts their best efforts into completing their part of the mobile app development project, the outcome of which is a more streamlined app development process with faster delivery.

Reduced risks

With the changing market needs and trends, it is quite risky to launch your own application. Many times, it happens that the market data you took into consideration while developing your app gets outdated by the time you launch it. The outcome is poor ROI and a shady future in the market. Agile, in this situation, is a tool that allows you to take calculated risks and improve your project's market scope. In other words, the methodology enables a mobile app development company to make changes to any particular sprint without disturbing the code of the previous sprints, thus making their application more suitable for the market.

Better quality

Agile, unlike the traditional app development models, does not test the app only at the end of the development phase. Rather, it fosters testing of every single module at a granular level. This reduces the risk of encountering a bug at the time of quality testing of the complete project. It also helps mobile app developers inspect the app elements at every stage of the development process and make adjustments as required, eventually helping to deliver a higher quality of service.

Seamless project management

By transforming the complete app development project into multiple individual modules, the Agile methodology provides you with the facility to manage your project easily. You can easily assign the tasks to different teams and reduce the dependencies and discussions at the inter-team level. You can also keep a record of the activities performed on each mini-project and, in this way, determine if something is missing or not working as per the proposed plan. Besides, you can check the productivity of every individual and put your efforts into training or hiring more efficient experts.
Enhanced customer experience

The Agile mobile app development approach puts a strong emphasis on people and collaboration, which gives the development team an opportunity to work closely with their clients and understand their vision. Besides, the projects are delivered to the clients in the form of multiple sprints, which brings transparency to the process. It also enables the team to determine if both parties are on the same page and, if not, allows them to make the required changes before proceeding further. This lessens the chances of launching an app that does not live up to the original idea, and therefore ensures an enhanced customer experience.

Lower development cost

Since every step is well planned, executed, and delivered, you can easily calculate the cost of making an app and thus justify your app budget. Besides, if at any stage of development you feel the need to raise the app budget, you can easily do it with the Agile methodology. In this way, you can avoid leaving the project incomplete due to a lack of required resources and funds.

Customization

The Agile mobile app development approach also provides developers with an opportunity to customize their development process. There are no rules that an app must be created in a particular way. Experts can look for different ways to develop and launch the mobile app, and integrate cutting-edge technologies and tools into the process. In a nutshell, the Agile process enables developers to customize the development timeline as per their choice and deliver a user-centric solution.

Higher ROI

The Agile methodology lets mobile app development companies enter the market with the most basic app (MVP) and update the app with each iteration. This makes it easier for the app owners to test their idea, gather the required data and insights, build a brand presence, and thus deliver the best features as per customer needs and market trends. This helps the app owners and the associated mobile app development company take the right decisions for gaining better ROI in the market.

Earlier market reach

By dividing the complete app project into sub-modules, the Agile mobile app development approach encourages the team to deliver every module within the stipulated deadline. Lagging behind the deadlines is practically impossible. The outcome of this is that the complete app project is designed and delivered on time or even earlier, which means earlier market reach.

The Agile methodology can do wonders for mobile app development. However, it is required that everyone is well aware of the end goal and contributes to this approach wisely. Only then can you enjoy all of the aforementioned benefits. With this, are you ready to go Agile? Are you ready to develop a mobile app with an Agile mobility strategy?

Author Bio

Holding a Bachelor's degree in Technology and 2 years of work experience in a mobile app development company, Bhupinder is focused on making technology digestible to all. Being someone who stays updated with the latest tech trends, she's always armed to write and spread the knowledge. When not found writing, you will find her answering on Quora while sipping coffee.

Top 10 Tools for Computer Vision

Aaron Lazar
05 Apr 2018
7 min read
The adoption of Computer Vision has been steadily picking up pace over the past decade, but there's been a spike in the adoption of various computer vision tools in recent times, thanks to its implementation in fields like IoT, manufacturing, healthcare, security, etc. Computer vision tools have evolved over the years, so much so that computer vision is now also being offered as a service. Moreover, the advancements in hardware like GPUs, as well as machine learning tools and frameworks, make computer vision much more powerful in the present day. Major cloud service providers like Google, Microsoft and AWS have all joined the race towards being the developers' choice. But which tool should you choose? Today I'll take you through a list of the top tools and will help you understand which one to pick up, based on your need.

Computer Vision Tools/Libraries

OpenCV: Any post on computer vision is incomplete without a mention of OpenCV. OpenCV is a high-performing computer vision tool and it works well with C++ as well as Python. OpenCV is prebuilt with all the necessary techniques and algorithms to perform several image and video processing tasks. It's quite easy to use and this makes it clearly the most popular computer vision library on the planet! It is multi-platform, allowing you to build applications for Linux, Windows and Android. At the same time, it does have some drawbacks. It gets a bit slow when working through massive data sets or very large images. Moreover, on its own, it doesn't have GPU support and relies on CUDA for GPU processing.

Matlab: Matlab is a great tool for creating image processing applications and is widely used in research, the reason being that Matlab allows quick prototyping. Another interesting aspect is that Matlab code is quite concise compared to C++, making it easier to read and debug. It tackles errors before execution by proposing some ways to make the code faster. On the downside, Matlab is a paid tool. Also, it can get quite slow during execution time, if that's something that concerns you much. Matlab is not your go-to tool in an actual production environment, as it was basically built for prototyping and research.

AForge.NET/Accord.NET: You'll be excited to know that image processing is possible even if you're a C# and .NET developer, thanks to AForge/Accord. It's a great tool that has a lot of filters and is great for image manipulation and different transforms. The Image Processing Lab allows for filtering capabilities like edge detection and more. AForge is extremely simple to use as all you need to do is adjust parameters from a user interface. Moreover, its processing speeds are quite good. However, AForge doesn't possess the power and capabilities of other tools like OpenCV, like advanced motion picture analysis or even advanced processing of images.

TensorFlow: TensorFlow has been gaining popularity over the past couple of years, owing to its power and ease of use. It lets you bring the power of deep learning to computer vision and has some great tools to perform image processing/classification, such as its API-like graph tensor. Moreover, you can make use of the Python API to perform face and expression detection. You can also perform classification using techniques like regression. TensorFlow also allows you to perform computer vision at tremendous scale. One of the main drawbacks of TensorFlow is that it's extremely resource-hungry and can devour a GPU's capabilities in no time, quite uncalled for.
Moreover, if you wanted to learn how to perform image processing with TensorFlow, you'd have to understand what Machine and Deep Learning is, write your own algorithms and then go forward from there.

CUDA: CUDA is a platform for parallel computing, invented by NVIDIA. It enables great boosts in computing performance by leveraging the power of GPUs. The CUDA Toolkit includes the NVIDIA Performance Primitives library, which is a collection of signal, image, and video processing functions. If you have large, GPU-intensive images to process, you can choose to use CUDA. CUDA is easy to program and is quite efficient and fast. On the downside, it is extremely high on power consumption and you will find yourself reformulating for memory distribution in parallel tasks.

SimpleCV: SimpleCV is a framework for building computer vision applications. It gives you access to a multitude of computer vision tools such as OpenCV, pygame, and more. If you don't want to get into the depths of image processing and just want to get your work done, this is the tool to get your hands on. If you want to do some quick prototyping, SimpleCV will serve you best. Although, if your intention is to use it in heavy production environments, you cannot expect it to perform on the level of OpenCV. Moreover, the community forum is not very active and you might find yourself running into walls, especially with the installation.

GPUImage: GPUImage is a framework, or rather an iOS library, that allows you to apply GPU-accelerated effects and filters to images, live motion video, and movies. It is built on OpenGL ES 2.0. Running custom filters on a GPU calls for a lot of code to set up and maintain. GPUImage cuts down on all of that boilerplate and gets the job done for you.

Computer Vision as a Service

Google Cloud and Mobile Vision APIs: The Google Cloud Vision API enables developers to perform image processing by encapsulating powerful machine learning models in a simple REST API that can be called in an application. Also, its Optical Character Recognition (OCR) functionality enables you to detect text in your images. The Mobile Vision API lets you detect objects in photos and video, using real-time on-device vision technology. It also lets you scan and recognise barcodes and text.

Amazon Rekognition: Amazon Rekognition is a deep learning-based image and video analysis service that makes adding image and video analysis to your applications a piece of cake. The service can identify objects, text, people, scenes and activities, and it can also detect inappropriate content, apart from providing highly accurate facial analysis and facial recognition for sentiment analysis.

Microsoft Azure Computer Vision API: Microsoft's API is quite similar to its peers and allows you to analyse images, read text in them, and analyse video in near real time. You can also flag adult content, generate thumbnails of images and recognise handwriting.

Bonus: SciPy and NumPy: I thought I'd add these in as well, since I've seen quite a few developers use Python to build computer vision applications (without OpenCV, that is). SciPy and NumPy are powerful enough to perform image processing. scikit-image is a Python package that is dedicated to image processing, and uses native NumPy and SciPy arrays as image objects. Moreover, you get to use the cool IPython interactive computing environment and you can also choose to include OpenCV if you want to do some more hardcore image processing.
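Assuming you pick OpenCV with its Python bindings, here is a minimal edge-detection example of the kind of image processing discussed above. The input filename is a placeholder and the Canny thresholds are arbitrary starting values you would tune for your own images.

```python
import cv2

# Hypothetical input file; any local photo will do.
image = cv2.imread("sample.jpg")
if image is None:
    raise SystemExit("Could not read sample.jpg")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # OpenCV loads images as BGR
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # reduce noise before edge detection
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

cv2.imwrite("sample_edges.jpg", edges)
print("Edge map written to sample_edges.jpg")
```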
Well, there you have it: these were the top tools for computer vision and image processing. Head on over and check out these resources to get working with some of the top tools used in the industry.

What is PyTorch and how does it work?

Sunith Shetty
18 Sep 2018
7 min read
PyTorch is a Python-based scientific computing package that uses the power of graphics processing units. It is also one of the preferred deep learning research platforms, built to provide maximum flexibility and speed. It is known for providing two high-level features: tensor computations with strong GPU acceleration support, and building deep neural networks on a tape-based autograd system. There are many existing Python libraries which have the potential to change how deep learning and artificial intelligence are performed, and this is one such library. One of the key reasons behind PyTorch's success is that it is completely Pythonic and one can build neural network models effortlessly. It is still a young player when compared to its other competitors; however, it is gaining momentum fast.

A brief history of PyTorch

Since its release in January 2016, many researchers have continued to increasingly adopt PyTorch. It has quickly become a go-to library because of its ease in building extremely complex neural networks. It is giving tough competition to TensorFlow, especially when used for research work. However, there is still some time before it is adopted by the masses, due to its still "new" and "under construction" tags. PyTorch's creators envisioned this library as highly imperative, allowing users to run all their numerical computations quickly. This is an ideal methodology which fits perfectly with the Python programming style. It has allowed deep learning scientists, machine learning developers, and neural network debuggers to run and test parts of their code in real time. Thus they don't have to wait for the entire code to be executed to check whether it works or not. You can always use your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch functionalities and services when required.

Now you might ask, why PyTorch? What's so special about using it to build deep learning models? The answer is quite simple: PyTorch is a dynamic library (very flexible, and you can use it as per your requirements and changes) which is currently adopted by many researchers, students, and artificial intelligence developers. In the recent Kaggle competitions, the PyTorch library was used by nearly all of the top 10 finishers. Some of the key highlights of PyTorch include:

Simple interface: It offers an easy-to-use API, thus it is very simple to operate and run, like Python.
Pythonic in nature: This library, being Pythonic, smoothly integrates with the Python data science stack. Thus it can leverage all the services and functionalities offered by the Python environment.
Computational graphs: In addition to this, PyTorch provides an excellent platform which offers dynamic computational graphs; thus you can change them during runtime. This is highly useful when you have no idea how much memory will be required for creating a neural network model.

PyTorch Community

The PyTorch community is growing in numbers on a daily basis. In just a short year and a half, it has seen a great amount of development, which has led to its citation in many research papers and groups. More and more people are bringing PyTorch into their artificial intelligence research labs to build quality-driven deep learning models. The interesting fact is, PyTorch is still in early-release beta, but the way everyone is adopting this deep learning framework at a brisk pace shows its real potential and power in the community.
Even though it is in the beta release, there are 741 contributors on the official GitHub repository working on enhancing and providing improvements to the existing PyTorch functionalities. PyTorch isn't limited to specific applications because of its flexibility and modular design. It has seen heavy use by leading tech giants such as Facebook, Twitter, NVIDIA, Uber and more, in multiple research domains such as NLP, machine translation, image recognition, neural networks, and other key areas.

Why use PyTorch in research?

Anyone who is working in the field of deep learning and artificial intelligence has likely worked with TensorFlow before, Google's most popular open source library. However, the newer deep learning framework PyTorch solves major problems in terms of research work. Arguably, PyTorch is TensorFlow's biggest competitor to date, and it is currently a much-favored deep learning and artificial intelligence library in the research community.

Dynamic computational graphs: It avoids the static graphs that are used in frameworks such as TensorFlow, thus allowing developers and researchers to change how the network behaves on the fly. Early adopters prefer PyTorch because it is more intuitive to learn when compared to TensorFlow.

Different back-end support: PyTorch uses different backends for CPU, GPU and various functional features rather than using a single back-end. It uses the tensor backend TH for CPU and THC for GPU, while neural network backends such as THNN and THCUNN are used for CPU and GPU respectively. Using separate backends makes it very easy to deploy PyTorch on constrained systems.

Imperative style: The PyTorch library is specially designed to be intuitive and easy to use. When you execute a line of code, it runs immediately, allowing you to perform real-time tracking of how your neural network models are built. Because of its excellent imperative architecture and its fast, lean approach, overall PyTorch adoption in the community has increased.

Highly extensible: PyTorch is deeply integrated with C++ code, and it shares some C++ backend with the deep learning framework Torch. It thus allows users to program in C/C++ by using an extension API based on cFFI for Python, compiled for CPU or GPU operation. This feature has extended PyTorch usage to new and experimental use cases, making it a preferable choice for research use.

Python approach: PyTorch is a native Python package by design. Its functionalities are built as Python classes, hence all its code can seamlessly integrate with Python packages and modules. Similar to NumPy, this Python-based library enables GPU-accelerated tensor computations plus a rich set of APIs for neural network applications. PyTorch provides a complete end-to-end research framework which comes with the most common building blocks for carrying out everyday deep learning research. It allows chaining of high-level neural network modules because it supports a Keras-like API in its torch.nn package.

PyTorch 1.0: The path from research to production

We have been discussing all the strengths PyTorch offers, and how these make it a go-to library for research work. However, one of its biggest downsides has been its poor production support. But this is expected to change soon. PyTorch 1.0 is expected to be a major release which will overcome the challenges developers face in production.
This new iteration of the framework will merge Python-based PyTorch with Caffe2, allowing machine learning developers and deep learning researchers to move from research to production in a hassle-free way, without the need to deal with any migration challenges. The new version 1.0 will unify research and production capabilities in one framework, thus providing the required flexibility and performance optimization for research and production. This new version promises to handle the tasks one has to deal with while running deep learning models efficiently on a massive scale. Along with the production support, PyTorch 1.0 will have more usability and optimization improvements. With PyTorch 1.0, your existing code will continue to work as-is; there won't be any changes to the existing API. If you want to stay updated with all the progress of the PyTorch library, you can visit the Pull Requests page. The beta release of this long-awaited version is expected later this year. Major vendors like Microsoft and Amazon are expected to provide complete support for the framework across their cloud products.

Summing up, PyTorch is a compelling player in the field of deep learning and artificial intelligence libraries, exploiting its unique niche of being a research-first library. It overcomes all the challenges and provides the necessary performance to get the job done. If you're a mathematician, researcher, or student who is inclined to learn how deep learning is performed, PyTorch is an excellent choice as your first deep learning framework to learn.

Read more
Can a production-ready Pytorch 1.0 give TensorFlow a tough time?
A new geometric deep learning extension library for Pytorch releases!
Top 5 tools for reinforcement learning
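To give a feel for the imperative, define-by-run style described in the PyTorch article above, here is a minimal snippet that builds a computation with ordinary Python code and lets autograd compute gradients. The tensors and the toy loss are arbitrary illustrations, not a complete training loop.

```python
import torch

# A tensor that tracks gradients on the tape-based autograd system.
w = torch.randn(3, requires_grad=True)
x = torch.tensor([1.0, 2.0, 3.0])

# The graph is built dynamically: plain Python expressions define the computation.
loss = ((w * x).sum() - 10.0) ** 2
loss.backward()            # backpropagate through the recorded operations

print(loss.item())
print(w.grad)              # d(loss)/dw, filled in by autograd
```

Because the graph is recorded as the code runs, you can inspect intermediate values or change the network's behavior with regular Python control flow, which is exactly the flexibility researchers cite when choosing PyTorch over static-graph frameworks.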

article-image-bootstrap-vs-material-design-for-your-next-web-or-app-development-project
Guest Contributor
08 Oct 2019
8 min read
Save for later

Should you use Bootstrap or Material Design for your next web or app development project?

Superior user experience is becoming increasingly important for businesses as it helps them engage users and boost brand loyalty. Front-end website and app development frameworks, namely Bootstrap and Material Design, empower developers to create websites with a robust structure and advanced functionality, thereby delivering outstanding business solutions and an unbeatable user experience. Both Twitter's Bootstrap and Google's Material Design are used by developers to create functional, high-quality websites and apps. If you are an aspiring front-end developer, here is a direct comparison between the two, so you can choose the one better suited to your upcoming project.

Bootstrap

Bootstrap is an open-source, intuitive, and powerful framework used for responsive, mobile-first solutions on the web. For several years, Bootstrap has helped developers create splendid mobile-ready front-end websites. In fact, Bootstrap is the most popular CSS framework, as it is easy to learn and offers a consistent design through reusable components. Let's dive deeper into the pros and cons of Bootstrap.

Pros

High speed of development

If you have limited time for website or app development, Bootstrap is an ideal choice. It offers ready-made blocks of code that can get you started in no time, so you don't have to start coding from scratch. Bootstrap also provides ready-made themes, templates, and other resources that can be downloaded and customized to suit your needs, allowing you to create a unique website as quickly as possible.

Bootstrap is mobile first

Since July 1, 2019, Google has used mobile-friendliness as a critical ranking factor for all websites. This is because users prefer sites that are compatible with the screen size of the device they are using; in other words, they prefer responsive sites. Bootstrap is an ideal choice for responsive sites, as it has an excellent fluid grid system and responsive utility classes that make the task at hand easy and quick.

Enjoys strong community support

Bootstrap has a huge number of resources available on its official website and enjoys immense support from the developer community, which helps developers fix issues promptly. At present, Bootstrap is being developed and maintained on GitHub by Mark Otto, currently Principal Design & Brand Architect at GitHub, with nearly 19 thousand commits and 1,087 contributors. The team regularly releases updates to fix new issues and improve the effectiveness of the framework. For instance, the team is currently working on Bootstrap 5, which will drop jQuery in favor of plain JavaScript, primarily because jQuery adds 30KB to the page weight and is tricky to configure with bundlers like Webpack. Similarly, Flexbox was a new feature added in Bootstrap 4; in fact, version 4 is rich with features such as a Flexbox-based grid, responsive sizing and floats, auto margins, vertical centering, and new spacing utilities. Further, you will find plenty of websites offering Bootstrap tutorials, along with a wide collection of themes, templates, plugins, and UI kits that can be used to suit your taste and the nature of the project.

Cons

All Bootstrap sites look the same

The Twitter team introduced Bootstrap with the objective of helping developers use a standardized interface to create websites within a short time.
However, one of the major drawbacks of this framework is that all websites created with it are highly recognizable as Bootstrap sites. Open Airbnb, Twitter, Apple Music, or Lyft: they all look the same, with bold headlines, rounded sans-serif fonts, and lots of negative space.

Bootstrap sites can be heavy

Bootstrap is notorious for adding unnecessary bloat to websites, as the files it generates are huge. This leads to longer loading times and battery-draining issues. Further, if you strip the unused files out manually, it defeats the purpose of using the framework. So, if you use this popular front-end UI library in your project, make sure you pay extra attention to page weight and page speed.

May not be suitable for simple websites

Bootstrap may not be the right front-end framework for all types of websites, especially ones that don't need a full-fledged framework. This is because Bootstrap's theme packages are incredibly heavy, with battery-draining scripts. Also, Bootstrap ships CSS weighing in at 126KB and 29KB of JavaScript, which can increase the site's loading time. In such cases, Bootstrap alternatives, namely Foundation, Skeleton, Pure, and Semantic UI, are adaptable and lightweight frameworks that can meet your development needs and improve your site's user-friendliness.

Material Design

Compared to Bootstrap, Material Design is harder to customize and learn. This design language was introduced by Google in 2014 with the objective of enhancing the design and user interface of Android apps. The language is quite popular among developers as it offers a quick and effective approach to web development. It includes responsive transitions and animations, lighting and shadow effects, and grid-based layouts. When developing a website or app using Material Design, designers should play to its strengths but be wary of its cons. Let's see why.

Pros

Offers numerous components

Material Design offers numerous components that provide a base design, guidelines, and templates. Developers can build on these to create a website or application suited to the business. The Material Design documentation offers the necessary information on how to use each component. Moreover, Material Design Lite is quite popular for its customization, and many designers are creating customized components to take their projects to the next level.

Is compatible across various browsers

Both Bootstrap and Material Design have sound browser compatibility, as they work across most browsers. Material Design supports Angular Material and the React Material User Interface. It also uses the SASS preprocessor.

Doesn't require JavaScript frameworks

Bootstrap depends on JavaScript libraries such as jQuery. Material Design, however, doesn't need any JavaScript frameworks or libraries to design websites or apps. In fact, the platform provides a Material Design framework that allows developers to create innovative components such as cards and badges.

Cons

The animations and vibrant colors can be distracting

Material Design makes extensive use of animated transitions and vibrant colors and images that help bring the interface to life. However, these animations can adversely affect the brain's ability to gather information.

It is affiliated with Google

Since Material Design is a Google-promoted framework, Android is its most prominent adopter. Consequently, developers looking to build apps with a platform-independent UX may find it tough to work with Material Design.
However, when Google introduced the language, it had a broad vision for Material Design that encompassed many platforms, including iOS. The tech giant offers several Material Design components for iOS that can be used to render interesting effects using a flexible header, standard material colors, typography, and sliding tabs.

Carries performance overhead

Material Design makes extensive use of animations that carry a lot of overhead. For instance, effects like drop shadows, color fills, and transform/translate transitions can be jerky and unpleasant for regular users.

Wrapping up: Should you use Bootstrap or Material Design for your next web or app development project?

Bootstrap is great for responsive, simple, and professional websites. It enjoys immense support and documentation, making it easy for developers to work with. So, if you are working on a project that needs to be completed within a short time, opt for Bootstrap. The framework is mainly focused on creating responsive, functional, and high-quality websites and apps that enhance the user experience. Notice how these websites have used Bootstrap to build responsive and mobile-first sites. (Source: cssreel) (Source: Awwwards)

Material Design, on the other hand, is specific as a design language and great for building websites that focus on appearance, innovative designs, and beautiful animations. You can use Material Design for your portfolio sites, for instance. The framework is detailed and straightforward to use and helps you create websites with striking effects. Check out how these websites and apps use the customized themes, popups, and buttons of Material Design. (Source: Nimbus 9) (Source: Digital Trends)

What do you think? Which framework works better for you: Bootstrap or Material Design? Let us know in the comments section below.

Author Bio

Gaurav Belani is a Senior SEO and Content Marketing Analyst at The 20 Media, a content marketing agency that specializes in data-driven SEO. He has more than seven years of experience in digital marketing and loves to read and write about AI, machine learning, data science, and other emerging technologies. In his spare time, he enjoys watching movies and listening to music. Connect with him on Twitter and Linkedin.

Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more

Warp: Rust's new web framework

Learn how to Bootstrap a Spring application [Tutorial]

Bootstrap 5 to replace jQuery with vanilla JavaScript

How to use Bootstrap grid system for responsive website design?