Tech Guides

851 Articles

What makes Salesforce Lightning Platform a powerful, fast and intuitive user interface

Fatema Patrawala
05 Nov 2019
6 min read
Salesforce has always been proactive in developing and bringing to market new features and functionality across its products. Throughout the lifetime of the Salesforce CRM product, there have been several upgrades to the user interface. In 2015, Salesforce began promoting its new platform, Salesforce Lightning. Although long-time users and Salesforce developers may have grown accustomed to the classic user interface, Salesforce Lightning may just convert them. It brings a modern UI with new features, increased productivity, faster deployments, and a seamless transition across desktop and mobile environments. Recently, Salesforce has been actively encouraging its developers, admins and users to migrate from the classic Salesforce user interface to the new Lightning Experience.

Andrew Fawcett, currently VP Product Management and a Salesforce Certified Platform Developer II at Salesforce, writes in his book, Salesforce Lightning Platform Enterprise Architecture, "One of the great things about developing applications on the Salesforce Lightning Platform is the support you get from the platform beyond the core engineering phase of the production process." The book is a comprehensive guide filled with best practices and tailor-made examples developed on the Salesforce Lightning Platform. It is a must-read for all Lightning Platform architects!

Why should you consider migrating to Salesforce Lightning

Earlier this year, Forrester Consulting published a study quantifying the total economic impact and benefits of Salesforce Lightning for Service Cloud. In the study, Forrester found that a composite service organization deploying Lightning Experience obtained a return on investment (ROI) of 475% over three years. Among the other potential benefits, Forrester found that over three years organizations using the Lightning platform saved more than $2.5 million by reducing support handling time, saved $1.1 million by avoiding documentation time, and increased customer satisfaction by 8.5%.

Apart from this, the Salesforce Lightning platform allows organizations to leverage the latest cloud-based features. It includes responsive and visually attractive user interfaces that are not available within the Classic themes. Salesforce Lightning provides significant business process improvements and new technological advances over Classic for Salesforce developers.

What does the Salesforce Lightning architecture look like

While using the Salesforce Lightning platform, developers and users interact with a user interface backed by a robust application layer. This layer runs on the Lightning Component Framework, which comprises services like navigation, Lightning Data Service, and the Lightning Design System.

(Architecture diagram source: Salesforce website)

As part of this application layer, Base Components and Standard Components are the building blocks that enable Salesforce developers to configure their user interfaces via the App Builder and Community Builder. Standard Components are typically built up from one or more Base Components, which are also known as Lightning Components. Developers can build Lightning Components using two programming models: the Lightning Web Components model and the Aura Components model. The Lightning platform is critical for a range of services and experiences in Salesforce:

Navigation Service: The navigation service is supported in Lightning Experience and the Salesforce app. Built with extensive routing, deep linking, and login redirection, it powers app navigation, state changes, and refreshes.

Lightning Data Service: Lightning Data Service is built on top of the User Interface API. It enables developers to load, create, edit, or delete a record in a component without requiring Apex code, and it improves performance and data consistency across components. (A minimal sketch of this pattern appears at the end of this article.)

Lightning Design System: With the Lightning Design System, developers can easily build user interfaces using its component blueprints, markup, CSS, icons, and fonts.

Base Lightning Components: Base Lightning Components are the building blocks for all UI across the platform. Components range from a simple button to a highly functional data table and can be written as an Aura component or a Lightning web component.

Standard Components: Lightning pages are made up of Standard Components, which in turn are composed of Base Lightning Components. Salesforce developers or admins can drag and drop Standard Components in tools like Lightning App Builder and Community Builder.

Lightning App Builder: Lightning App Builder lets developers build and customize interfaces for Lightning Experience, the Salesforce app, Outlook integration, and Gmail integration.

Community Builder: For Communities, developers can use the Community Builder to build and customize communities easily.

Apart from the above, other services are available within the Salesforce Lightning platform, such as Lightning security measures and record detail pages on the platform and the Salesforce app.

How to plan transitioning from Classic to Lightning Experience

As Salesforce admins and developers prepare for the transition to Lightning Experience, they will need to evaluate three things: how the change benefits the company, what work is needed to prepare for it, and how much it will cost. This is the stage to make the case for moving to Lightning Experience by calculating the company's return on investment and defining what a Lightning Experience implementation will look like.

First, they will need to analyze how prepared the organization is for the transition. Salesforce admins and developers can use the Lightning Experience Readiness Check, a tool that produces a personalized Readiness Report showing which users will benefit right away and how to adjust the implementation for Lightning Experience. Further, they can make the case to their leadership team by showing how migrating to Lightning Experience can realize business goals and improve the company's bottom line. Finally, using the results of these assessment activities, they should gauge the level of change required and decide on a suitable approach. If the changes required are relatively small, consider migrating all users and all areas of functionality at the same time. However, if the Salesforce environment is more complex and the amount of change is far greater, consider implementing the migration in phases or starting with an initial pilot.

Overall, the Salesforce Lightning Platform is being increasingly adopted by admins, business analysts, consultants, architects, and especially Salesforce developers. If you want to deliver packaged applications using Salesforce Lightning that cater to enterprise business needs, read Salesforce Lightning Platform Enterprise Architecture by Andrew Fawcett. The book takes you through the architecture of building an application on the Lightning platform, helps you understand its features and best practices, and helps you ensure that the app keeps up with evolving customer and business requirements.

Further reading:
What are the challenges of adopting AI-powered tools in Sales? How Salesforce can help
Salesforce open sources 'Lightning Web Components framework'
"Facebook is the new Cigarettes", says Marc Benioff, Salesforce Co-CEO
Build a custom Admin Home page in Salesforce CRM Lightning Experience
How to create and prepare your first dataset in Salesforce Einstein
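To make the Lightning Data Service description above concrete, here is a minimal Lightning web component sketch. It is illustrative only: the component name, the Contact object and the Name field are assumptions for the example, and an org's actual objects and fields may differ.

```javascript
// contactCard.js -- minimal sketch of a Lightning web component that reads a
// record through Lightning Data Service (the @wire adapter), with no Apex
// controller required. Contact and its Name field are illustrative assumptions.
import { LightningElement, api, wire } from 'lwc';
import { getRecord, getFieldValue } from 'lightning/uiRecordApi';
import NAME_FIELD from '@salesforce/schema/Contact.Name';

export default class ContactCard extends LightningElement {
    // recordId is supplied automatically when the component sits on a record page.
    @api recordId;

    // Lightning Data Service loads (and caches) the record for us.
    @wire(getRecord, { recordId: '$recordId', fields: [NAME_FIELD] })
    contact;

    get name() {
        return this.contact.data ? getFieldValue(this.contact.data, NAME_FIELD) : '';
    }
}
```

Paired with a simple template that renders the name getter, this is the kind of component an admin could then drag onto a page in Lightning App Builder.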


How do AWS developers manage Web apps?

Guest Contributor
04 Jul 2019
6 min read
When it comes to hosting and building a website on the cloud, Amazon Web Services (AWS) is one of the most preferred choices for developers. According to Canalys, AWS dominates the global public cloud market, holding around one-third of the total market share. AWS offers numerous services that can be used for compute power, content delivery, database storage, and more. Developers can use it to build a high-availability production website, whether it is a WordPress site, Node.js web app, LAMP stack web app, Drupal website, or a Python web app. AWS developers need to set up, maintain and evolve the cloud infrastructure of web apps. Aside from this, they are also responsible for applying best practices related to security and scalability. Having said that, let's take a deep dive into how AWS developers manage a web application.

Deploying a website or web app with Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) offers developers secure and scalable computing capacity in the cloud. For hosting a website or web app, developers use virtual app servers called instances. With Amazon EC2 instances, developers gain complete control over computing resources: they can scale capacity based on requirements and pay only for the resources they actually use. Tools like AWS Lambda, Elastic Beanstalk and Lightsail help isolate web apps from common failure cases. Amazon EC2 supports a number of operating systems, including Amazon Linux, Windows Server 2012, CentOS 6.5, and Debian 7.4. Here is how developers get started with Amazon EC2 for deploying a website or web app:

1. Set up an AWS account and log into it.
2. Select "Launch Instance" from the Amazon EC2 Dashboard to create a virtual machine.
3. Configure the instance by choosing an Amazon Machine Image (AMI), instance type and security group.
4. Click Launch. In the next step, choose 'Create a new key pair' and name it. A key pair file is downloaded automatically and needs to be saved; it will be required for logging in to the instance.
5. Click 'Launch Instances' to finish the set-up process.

Once the instance is ready, it can be used to build high-availability websites or web apps.

Using Amazon S3 for cloud storage

Amazon Simple Storage Service, or Amazon S3, is a secure and highly scalable cloud storage solution that makes web-scale computing seamless for developers. It is used for the objects required to build a website, such as HTML pages, images, CSS files, videos and JavaScript. S3 comes with a simple interface so that developers can fetch and store large amounts of data from anywhere on the internet, at any time. The storage infrastructure behind Amazon S3 is known for scalability, reliability, and speed; Amazon itself uses it to host its own websites. Within S3, developers need to create buckets for data storage. Each bucket can hold a large number of objects, and a single object can contain up to 5 TB of data. Objects are stored in and fetched from a bucket using a unique key. A bucket serves several purposes: it organizes the S3 namespace, identifies the account responsible for storage and data transfer, and acts as the unit of aggregation for usage reporting. (A short upload example appears at the end of this article.)

Elastic load balancing

Load balancing is a critical part of a website or web app, distributing the traffic load across multiple targets. AWS provides Elastic Load Balancing, which allows developers to distribute traffic across a number of targets, such as Amazon EC2 instances, IP addresses, Lambda functions and containers. With Elastic Load Balancing, developers can ensure that their projects run efficiently even under heavy traffic. Three kinds of load balancers are available: the Application Load Balancer, the Network Load Balancer and the Classic Load Balancer. The Application Load Balancer is an ideal option for HTTP and HTTPS traffic; it provides advanced request routing suited to the delivery of microservices and containers. For balancing Transmission Control Protocol (TCP), Transport Layer Security (TLS) and User Datagram Protocol (UDP) traffic, developers opt for the Network Load Balancer. The Classic Load Balancer is best suited for typical load distribution across EC2 instances and works at both the request and the connection level.

Debugging and troubleshooting

A web app or website can include numerous features and components. Often, a few of them might face issues or not work as expected because of coding errors or other bugs. In such cases, AWS developers follow a number of processes and consult the resources that help them debug a recipe or troubleshoot the issue:

- Check the Common Debugging and Troubleshooting Issues page for service-level issues.
- Check the Debugging Recipes documentation for issues related to recipes.
- Check the AWS OpsWorks Stacks forum, where other developers discuss their issues; the AWS team also monitors the forum and helps find solutions.
- Get in touch with the AWS OpsWorks Stacks support team to solve the issue.

Traffic monitoring and analysis

Analysing and monitoring traffic and network logs helps in understanding how websites and web apps perform on the internet. AWS provides several tools for traffic monitoring, including Real-Time Web Analytics with Kinesis Data Analytics, Amazon Kinesis, Amazon Pinpoint, Amazon Athena, and more. For tracking website metrics, developers use Real-Time Web Analytics with Kinesis Data Analytics. This tool provides insights into visitor counts, page views, time spent by visitors, actions taken by visitors, channels driving the traffic and more. Additionally, the tool comes with an optional dashboard for monitoring web servers, where developers can view custom server metrics covering performance, average network packet processing, errors, and so on.

Wrapping up

Managing a web application is a tedious task and requires quality tools and technologies. Amazon Web Services makes things easier for web developers, providing them with all the tools required to handle the app.

Author Bio
Vaibhav Shah is the CEO of Techuz, a mobile app and web development company in India and the USA. He is a technology maven, a visionary who likes to explore innovative technologies and has empowered 100+ businesses with sophisticated web solutions.
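As promised in the S3 section above, here is a minimal upload sketch using the AWS SDK for JavaScript (v2). The bucket name, object key and local file path are placeholders, and credentials are assumed to be configured in the environment (for example via AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY or a shared credentials file).

```javascript
// upload-asset.js -- push one website asset into an S3 bucket.
// Bucket, key and file path below are hypothetical placeholders.
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3({ region: 'us-east-1' });

s3.putObject({
  Bucket: 'my-website-assets',        // hypothetical bucket
  Key: 'css/styles.css',              // unique key identifying the object
  Body: fs.readFileSync('./styles.css'),
  ContentType: 'text/css'
})
  .promise()
  .then(() => console.log('Asset uploaded'))
  .catch((err) => console.error('Upload failed:', err));
```

The same pattern, a bucket plus a unique key per object, applies whether the asset is an HTML page, an image or a video.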


5 Reasons to Learn ReactJS

Sam Wood
04 Nov 2016
3 min read
Created by Facebook, ReactJS has been quick to storm onto the JavaScript stage. But is it really worth picking up, especially over more established options like Ember or Angular? Here are five great reasons to learn React.

1. If you want to build high-performance JS mobile apps

If you're a JavaScript developer, there are plenty of options to choose from if you find yourself wanting to develop for mobile. Cordova, Ionic and more all allow you to use your JavaScript coding skills to build apps for Android and iOS. But React Native, React's spin-off platform for mobile development, is very different. Rather than running a JavaScript-powered app in your mobile web browser, React Native compiles to the native code of the respective mobile OS. What does this mean? It means you get to develop entirely with JavaScript without passing any performance compromise on to your users. React Native apps run as swiftly and seamlessly as those built using native tools like Xcode.

2. If your web app regularly changes state

If your single-page web app needs to react regularly to state changes, you'll want to seriously consider React (the clue is in the name). React is built on the idea of minimizing DOM operations: they're expensive, and the fewer you have the better. Instead of the actual DOM, React gives you a virtual DOM to render to, which lets it work out the minimum number of DOM operations needed to reach the new desired state. With React, you can often stop worrying about DOM performance altogether; it's simple to re-render an entire page whenever your state changes. This means your code is smaller, sleeker, and simpler, and simpler code has fewer bugs.

3. If you want to easily reuse your code

One of React's biggest features is container components. What are those? The idea is simple: a container does the data fetching, and then renders the result into a corresponding presentational sub-component that shares the same name. This means you separate your data-fetching concerns from your rendering concerns entirely, making your React code much, much more reusable across different projects. (A short sketch of this pattern appears at the end of this article.)

4. If you like control over your stack

It's a common refrain among those asked to compare React to JavaScript frameworks like Angular: React's not a framework! It's a library. What does this mean? It means you can have complete control over your stack. Don't like a bit of React? You can always swap in another JavaScript library and run things your way.

5. If you want to be in on the ground floor of the next big thing

There are thousands of experienced Angular developers, many of whom are progressing to learn Angular 2. In contrast, React is young, scrappy, and hungry, and you'll be hard pressed to find anyone with more than a year or so's experience using it. Despite that, employer demand is rising fast. If you're looking to stand out from the crowd, React skills are an excellent thing to have on your resume.

Commit to building your next development project with React. Or maybe Angular. Whatever you decide, pick up some of our very best content from 7th to 13th November 2016 here.
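As referenced in point 3, here is a minimal sketch of the container-component pattern. The component names and the /api/comments endpoint are hypothetical; CommentList stands in for any purely presentational component you already have.

```javascript
// CommentListContainer.js -- the container owns data fetching and passes the
// result down to a presentational component. <CommentList> and the
// /api/comments endpoint are hypothetical placeholders.
import React from 'react';
import CommentList from './CommentList';

class CommentListContainer extends React.Component {
  constructor(props) {
    super(props);
    this.state = { comments: [] };
  }

  componentDidMount() {
    // All data concerns live here, not in the presentational component.
    fetch('/api/comments')
      .then((res) => res.json())
      .then((comments) => this.setState({ comments }));
  }

  render() {
    // CommentList only knows how to render the list it is given.
    return <CommentList comments={this.state.comments} />;
  }
}

export default CommentListContainer;
```

Because CommentList never touches the network, it can be reused anywhere the same data shape is available.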


A tale of two tools: Tableau and Power BI

Natasha Mathur
07 Jun 2018
11 min read
Business professionals are on a constant look-out for a powerful yet cost-effective BI tool to ramp up the operational efficiency within organizations. Two tools that are front-runners in the Self-Service Business Intelligence field currently are Tableau and Power BI. Both tools, although quite similar in nature, offer different features. Most experts say that the right tool depends on the size, needs and the budget of an organization, but when compared closely, one of them clearly beats the other in terms of its features. Now, instead of comparing the two based on their pros and cons, we’ll let Tableau and Power BI take over from here to argue their case covering topics like features, usability, to pricing and job opportunities. For those of you who aren’t interested in a good story, there is a summary of the key points at the end of the article comparing the two tools. [box type="shadow" align="" class="" width=""] The clock strikes 2’o'clock for a meeting on a regular Monday afternoon. Tableau, a market leader in Business Intelligence & data analytics and Power BI; another standout performer and Tableau’s opponent in the field of Business Intelligence head off for a meeting with the Vendor. The meeting where the vendor is finally expected to decide to which tool their organization should pick for their BI needs. With Power BI and Tableau joining the Vendor, the conversation starts on a light note with both tools introducing themselves to the Vendor. Tableau: Hi, I am Tableau, I make it easy for companies all around the world to see and understand their data. I provide different visualization tools, drag & drop features, metadata management, data notifications, etc, among other exciting features. Power BI: Hello, I am Power BI, I am a cloud-based analytics and Business Intelligence platform. I provide a full overview of critical data to organizations across the globe. I allow companies to easily share data by connecting the data sources and helping them create reports. I also help create scalable dashboards for visualization. The vendor nods convincingly in agreement while making notes about the two tools. Vendor: May I know what each one of you offers in terms of visualization? Tableau: Sure, I let users create 24 different types of baseline visualizations including heat maps, line charts and scatter plots. Not trying to brag, but you don’t need intense coding knowledge to develop high quality and complex visualizations with me. You can also ask me ‘what if’ questions regarding the data. I also provide unlimited data points for analysis. The vendor seems noticeably pleased with Tableau’s reply. Power BI: I allow users to create visualizations by asking questions in natural language using Cortana. Uploading data sets is quite easy with me. You can select a wide range of visualizations as blueprints. You can then insert data from the sidebar into the visualization. Tableau passes a glittery infectious smirk and throws a question towards Power BI excitedly. Tableau: Wait, what about data points? How many data points can you offer? The Vendor looks at Power BI with a straight face, waiting for a reply. Power BI: For now, I offer 3500 data points for data analysis. Vendor: Umm, okay, but, won’t the 3500 data point limit the effectiveness for the users? Tableau cuts off Power BI as it tries to answer and replies back to the vendor with a distinct sense of rush in its voice. Tableau: It will! 
Due to the 3500 data point limit, many visuals can't display a large amount of data, so filters are added. As the data gets filtered automatically, it leads to outliers getting missed. Power BI looks visibly irritated after Tableau’s response and looks at the vendor for slight hope, while vendor seems more inclined towards Tableau. Vendor: Okay. Noted. What can you tell me about your compatibility with data sources? Tableau: I support hundreds of data connectors. This includes online analytical processing (OLAP), big data options (such as NoSQL, Hadoop) as well as cloud options. I am capable of automatically determining the relationship between data when added from multiple sources. I also let you modify data links or create them manually based on your company’s preferences. Power BI: I help connect to users’ external sources including SAP HANA, JSON, MySQL, and more. When data is added from multiple sources, I can automatically determine the relationships between them. In fact, I let users connect to Microsoft Azure databases, third-party databases, files and online services like Salesforce and Google Analytics. Vendor: Okay, that’s great! Can you tell me what your customer support is like? Tableau jumps in to answer the question first yet again. Tableau: I offer direct support by phone and email. Customers can also login to the customer portal to submit a support ticket. Subscriptions are provided based on three different categories namely desktop, server and online. Also, there are support resources for different subscription version of the software namely Desktop, Server, and Online. Users are free to access the support resources depending upon the version of the software. I provide getting started guides, best practices as well as how to use the platform’s top features. A user can also access Tableau community forum along with attending training events. The vendor seems highly pleased with Tableau’s answer and continues scribbling in his notebook. Power BI: I offer faster customer support to users with a paid account. However, all users can submit a support ticket. I also provide robust support resources and documentation including learning guides, a user community forum and samples of how my partners use the platform.  Though customer support functionality is limited for users with a free Power BI account. Vendor: Okay, got it! Can you tell me about your learning curves? Do you get along well with novice users too or just professionals? Tableau: I am a very powerful tool and data analysts around the world are my largest customer base. I must confess, I am not quite intuitive in nature but given the powerful visualization features that I offer, I see no harm in people getting themselves acquainted with data science a bit before they decide to choose me. In a nutshell, it can be a bit tricky to transform and clean visualizations with me for novices. Tableau looks at the vendor for approval but he is just busy making notes. Power BI: I am the citizen data scientists’ ally. From common stakeholders to data analysts, there are features for almost everyone on board as far as I am concerned. My interface is quite intuitive and depends more on drag and drop features to build visualizations. This makes it easy for the users to play around with the interface a bit. It doesn’t matter whether you’re a novice or pro, there’s space for everyone here. A green monster of jealousy takes over Tableau as it scoffs at Power BI. Tableau: You are only compatible with Windows. 
I, on the other hand, am compatible with both Windows and Mac OS. And let’s be real it’s tough to do even simple calculations with you, such as creating a percent-of-total variable, without learning the DAX language. As the flood of anger rises in Power BI, Vendor interrupts them. Vendor: May I just ask one last question before I get ready with the results? How heavy are you on my pockets? Power BI: I offer three subscription plans namely desktop, pro, and premium. Desktop is the free version. Pro is for professionals and starts at $9.99 per user per month. You get additional features such as data governance, content packaging, and distribution. I also offer a 60 day trial with Pro. Now, coming to Premium, it is built on a capacity pricing. What that means is that I charge you per node per month. You get even more powerful features such as premium version cost calculator for custom quote ranges. This is based on the number of pro, frequent and occasional users that are active on an account’s premium version. The vendor seems a little dazed as he continues making notes. Tableau: I offer three subscriptions as well, namely Desktop, Server, and Online. Prices are charged per user per month but billed annually. Desktop category comes with two options: Personal edition (starting at $35) and professional edition (starting at $70). The server option offers on-premises or public cloud capabilities, starting at $35 while the Online version is fully hosted and starts at $42. I also offer a free version namely Tableau Public with which users can create visualizations, save them and share them on social media or their blog. There is a 10GB storage limit though. I also offer 14 days free trial for users so that they can get a demo before the purchase. Tableau and Power BI both await anxiously for the Vendor’s reply as he continued scribbling in his notebook while making weird quizzical expressions. Vendor: Thank you so much for attending this meeting. I’ll be right back with the results. I just need to check on a few things. Tableau and power BI watch the vendor leave the room and heavy anticipation fills the room. Tableau: Let’s be real, I will always be the preferred choice for data visualization. Power BI: We shall see that. Don’t forget that I also offer data visualization tools along with predictive modeling and reporting. Tableau: I have a better job market! Power BI: What makes you say that? I think you need to re-check the Gartner’s Magic Quadrant as I am right beside you on that. Power BI looks at Tableau with a hot rush of astonishment as the Vendor enters the room. The vendor smiles at Tableau as he continues the discussion which makes Power BI slightly uneasy. Vendor: Tableau and Power BI, you both offer great features but as you know I can only pick one of you as my choice for the organizations. An air of suspense surrounds the atmosphere. Vendor: Tableau, you are a great data visualization tool with versatile built-in features such as user interface layout, visualization sharing, and intuitive data exploration. Power BI, you offer real-time data access along with some pretty handy drag and drop features. You help create visualizations quickly and provide even the novice users an access to powerful data analytics without any prior knowledge. The tension notched up even more as the Vendor kept talking. Vendor: Tableau! You’re a great data visualization tool but the price point is quite high. This is one of the reasons why I choose Microsoft Power BI. 
Microsoft Power BI offers data visualization, connects to external data sources, lets you create reports, and so on, all at low cost. Hence, Power BI, welcome aboard! A sense of infinite peace and pride emanates from Power BI. The meeting ends with Power BI and the Vendor shaking hands as Tableau silently leaves the room. [/box]

We took a peek into the Vendor's notebook and saw this comparison table:

Criterion                                    Power BI     Tableau
Visualization capabilities                   Good         Very Good
Compatibility with multiple data sources     Good         Good
Customer support quality                     Good         Good
Learning curve                               Very Good    Good
System compatibility                         Windows      Windows & Mac OS
Cost                                         Low          Very high
Job market                                   Good         Good
Analytics                                    Very Good    Good

Both Business Intelligence tools are in demand by organizations all over the world. Tableau is fast and agile. It provides a comprehensible interface along with visual analytics where users have the ability to ask and answer questions. Its versatility and success stories make it a good choice for organizations willing to invest in a higher-budget Business Intelligence software. Power BI, on the other hand, offers almost all the same features as Tableau, including data visualization, predictive modeling, reporting and data prep, at one of the lowest subscription prices on the market today. Nevertheless, upgrades are being made to both Business Intelligence tools, and we can only wait to see what's more to come in these technologies.

Further reading:
Building a Microsoft Power BI Data Model
"Tableau is the most powerful and secure end-to-end analytics platform": An interview with Joshua Milligan
Unlocking the secrets of Microsoft Power BI


Is Initiative Q a pyramid scheme or just a really bad idea?

Richard Gall
25 Oct 2018
5 min read
If things seem too good to be true, they probably are. That's a pretty good motto to live by, and one that's particularly pertinent in the days of fake news and crypto-bubbles. However, it seems like advice many people haven't heeded with Initiative Q, a new 'payment system' developed by the brains behind PayPal technology. That's not to say that Initiative Q certainly is too good to be true. But when an organisation appears to be offering almost hundreds of thousands of dollars to users who simply offer an email and then encourage others to offer theirs, caution is essential. If it looks like a pyramid scheme, then do you really want to risk the chance that it might just be a pyramid scheme? What is Initiative Q? Initiative Q, is, according to its founders, "tomorrow's payment network." On its website it says that current methods of payment, such as credit cards, are outdated. They open up the potential for fraud and other bad business practices, as well as not being particularly efficient. Initiative Q claims that is it going to develop an alternative to these systems "which aggregate the best ideas, innovations, and technologies developed in recent years." It isn't specific about which ideas and technological innovations its referring to, but if you read through the payment model it wants to develop, there are elements that sound a lot like blockchain. For example, it talks about using more accurate methods of authentication to minimize fraud, and improving customer protection by "creating a network where buyers don’t need to constantly worry about whether they are being scammed" (the extent to which this turns out to be deliciously ironic remains to be seen). To put it simply, it's a proposed new payment system that borrows lots of good ideas that still haven't been shaped into a coherent whole. Compelling, yes, but alarm bells are probably sounding. Who's behind Initiative Q? There are very few details on who is actually involved in Initiative Q. The only names attached to the project are Saar Wilf, an entrepreneur who founded Fraud Sciences, a payment technology that was bought by PayPal in 2008, and Lawrence White, Professor of Monetary Theory and Policy and George Mason University. The team should grow, however. Once the number of members has grown to a significant level, the Initiative Q team say "we will continue recruiting the world’s top professionals in payment systems, macroeconomics, and Internet technologies." How is Initiative Q supposed to work? Initiative Q explains that for the world to adopt a new payment network is a huge challenge - a fair comment, because after all, for it to work at all, you need actors within that network who believe in it and trust it. This is why the initial model - which looks and feels a hell of a lot like a pyramid or Ponzi scheme - is, according to Initiative Q, so important. To make this work, you need a critical mass of users. Initiative Q actually defends itself from accusations that it is a Pyramid scheme by pointing out that there's no money involved at this stage. All that happens is that when you sign up you receive a specific number of 'Qs' (the name of the currency Initiative Q is proposing). These Qs obviously aren't worth anything at the moment. The idea is that when the project actually does reach critical mass, it will take on actual value. Isn't Initiative Q just another cryptocurrency? Initiative Q is keen to stress that it isn't a cryptocurrency. 
That said, on its website the project urges you to "think of it as getting free bitcoin seven years ago." But the website does go into a little more detail elsewhere, explaining that "cryptocurrencies have failed as currencies" because they "focus on ensuring scarcity" while neglecting to consider how people might actually use them in the real world." The implication, then, is that Initiative Q is putting adoption first. Presumably, it's one of the reasons that it has decided to go with such an odd acquisition strategy. Ultimately though, it's too early to say whether Initiative Q is or isn't a cryptocurrency in the strictest (ie. fully de-centralized etc.) sense. There simply isn't enough detail about how it will work. Of course, there are reasons why Initiative Q doesn't want to be seen as a cryptocurrency. From a marketing perspective, it needs to look distinctly different from the crypto-pretenders of the last decade. Initiative Q: pyramid scheme or harmless vaporware? Because no money is exchanged at any point, it's difficult to call Initiative Q a ponzi or pyramid scheme. In fact it's actually quite hard to know what to call it. As David Gerard wrote in a widely shared post from June, published when Initiative Q had a first viral wave, "the Initiative Q payment network concept is hard to critique — because not only does it not exist, they don’t have anything as yet, except the notion of “build a payment network and it’ll be awesome.” But while it's hard to critique, it's also pretty hard to say that it's actually fraudulent. In truth, at the moment it's relatively harmless. However, as David Gerard points out in the same post, if the data of those who signed up is hacked - or even sold (although the organization says it won't do that) - that's a pretty neat database of people who'll offer their details up in return for some empty promises of future riches.


5 reasons to choose Kotlin over Java

Richa Tripathi
30 Apr 2018
3 min read
Java has been a jack-of-all-trades in almost every field of application development, so Java developers have rarely needed to wander in search of other languages. However, things have changed with the steady evolution of Kotlin. No longer merely the "other JVM language", Kotlin has grown to rival Java in prominence. So, what makes this language stand out, and why is it growing in adoption for application development? What are the benefits of Kotlin vs Java, and how can it help developers? In this article, we're going to look at the top 5 reasons why Kotlin takes a superior stand over Java and why it will work best for your next development project.

Kotlin is more concise

Kotlin is far more concise than Java in many cases, solving the same problems with fewer lines of code. This improves code maintainability and readability, meaning engineers can write, read, and change code more effectively and efficiently. Kotlin features such as type inference, smart casts, data classes, and properties help achieve this conciseness.

Kotlin's null-safety is great

NullPointerExceptions are a huge source of frustration for Java developers. Java allows you to assign null to any variable, but if you try to use an object reference that has a null value, brace yourself for a NullPointerException! Kotlin's type system is designed to eliminate NullPointerExceptions from your code. It helps avoid them by simply refusing to compile code that tries to assign or return null where a non-nullable type is expected.

Combine the best of functional and procedural programming

Each programming paradigm has its own set of pros and cons, and combining the power of functional and procedural programming leads to better development and output. Kotlin offers many useful constructs, including higher-order functions, lambda expressions, operator overloading, lazy evaluation, and much more. Drawing on the strengths of both styles, Kotlin offers an expressive and intuitive coding style.

The power of Kotlin's extension functions

Kotlin's extensions are very useful because they allow developers to add methods to classes without making changes to their source code. You can add methods to existing classes as needed, extending their functionality without inheriting functions and properties from other classes.

Interoperability with Java

When debating Kotlin vs Java, there is always a third option: use them both. Despite all the differences, Kotlin and Java are 100% interoperable; you can literally continue work on your old Java projects using Kotlin. You can call Kotlin code from Java, and you can call Java code from Kotlin. So it's possible to have Kotlin and Java classes side by side within the same project, and everything will still compile.

Undoubtedly, Kotlin has brought many positive changes to the long-established and widely used Java ecosystem. It helps you write safer, more reliable code with less work, making the life of programmers a lot easier. Kotlin is a genuinely good replacement for Java, and with time more advanced features will be added to its ecosystem, helping its popularity grow and making the developer's world more promising.

Also read:
Why are Android developers switching from Java to Kotlin?
Getting started with Kotlin programming

How to protect your VPN from Data Leaks

Guest Contributor
26 Jan 2019
7 min read
The following news story was reported by the Nine Network just a week after New Year's Day: an English teacher from Sydney was surprised to find her Facebook account changing in strange ways. Jennifer Howell first noticed that her profile photo had changed, prompting her to change her password; however, she was abruptly logged out and locked out of her account when she attempted to do so. Later, she noticed that her profile had been hijacked by someone from the Middle East for the purpose of spreading radical propaganda. Nine Network journalists tracked down another Facebook user in Melbourne whose account had been similarly hijacked by hackers in the Middle East, with essentially the same goal. Even though both cases were reported to the Australian Cybercrime Online Reporting Network, nothing could be done about the hijacking, which may have been facilitated by password sniffing over unsecured connections.

The Need for VPN Protection

Seeing such worrisome reports about hacking is prompting many people to use virtual private networking (VPN) technology to secure their internet connections; however, these connections must be checked for potential leaks or they could be a waste of money. In essence, VPN connections protect online privacy by creating a secure tunnel between the client (who typically uses a personal computing device to connect to the internet) and the internet. A reliable VPN connection masks the user's geographical location by providing a different internet protocol (IP) address, which is the calling card of every online connection. Moreover, these connections encrypt data transmitted during sessions and provide a form of anonymous browsing. Like almost all internet tools, though, VPN connections can be subject to certain vulnerabilities that weaken their reliability. Data leaks are a concern among information security researchers who focus on VPN technology, and they have identified the following issues.

WebRTC Leaks

Web Real-Time Communication (WebRTC) is an evolution of the Voice over Internet Protocol (VoIP) for online communications. VoIP is the technology that powers popular mobile apps such as Skype and WhatsApp; it has also replaced legacy PBX telephone systems at many businesses. Say a company is looking to hire new personnel: with WebRTC enabled on their end, they can direct applicants to a website accessible from a desktop, laptop, tablet, or smartphone to conduct job interviews, without anyone having to install Skype. The problem with WebRTC is that it can leak the IP address of users even when a VPN connection is established.

DNS Hijacking

The hijacking of domain name system (DNS) servers is an old malicious hacking strategy that has been appropriated by authoritarian regimes to enact internet censorship. The biggest DNS hijacking operation in the world is conducted by Chinese telecom regulators through the Great Firewall, which restricts access to certain websites and internet services. DNS hijacking is a broad name for a series of attacks on DNS servers; a common one involves taking over a router, server or even an internet connection for the purpose of redirecting traffic. In other words, hackers can impersonate websites, so that when you intend to check ABC News you are instead directed to a page that resembles it but has in reality been coded to steal passwords, compromise your identity or install malware. Some attacks are even more sophisticated than others. There is a connection between WebRTC and DNS hijacking: a malware attack known as DNS Changer can be injected into a system by means of JavaScript execution followed by a WebRTC call that you will not be aware of. This call can be used to determine your IP address even if you are connected through a VPN. The attack may be extended by changing your DNS settings so that your computer or mobile device is enlisted into a botnet to distribute spam, launch denial-of-service attacks or simply hijack your system without your knowledge.

Testing for Leaks

In addition to WebRTC leaks and DNS queries, there are a few other ways your VPN can betray you: public IP address, torrents, and geolocation. The easiest way to assess whether you have a leak is to visit IPLeak.net with your VPN turned off. Let this nifty site work its magic and make a note of the information it offers. Leave the site, turn your VPN on, and repeat the tests. Now compare the results. The torrents and geolocation tests are interesting, but probably not as useful or as likely a culprit as DNS. Your device navigates the internet by communicating with DNS servers that translate web URLs into numeric IP addresses. Most of the time you'll have defaulted to your ISP's servers, which often leak like cheesecloth. The bad news is that, even with a VPN in place, leakage through your local servers can give up your physical location to spying eyes. To combat this, VPN services route their customers through servers separate from their ISP. Now that you've proven your data is leaking, what can you do about it? (A small browser-based WebRTC check appears after the list of preventative measures below.)

Preventing Leaks and Choosing the Right VPN

Something you can do even before installing a VPN solution is to disable WebRTC in your browser. Some developers have already made this the default configuration, but many browsers still ship with the option enabled. If you search for "WebRTC" within the help file of your browser, you may be able to find instructions on how to modify the flags or .config file. However, proceed with caution, and take the time to read and understand reliable guides such as this one from security researcher Paolo Stagno. Here are other preventative measures:

- When configuring your VPN, go with the servers it suggests, which will likely not be those of your ISP but rather servers maintained by the VPN company. Not all VPN companies have their own servers, so be aware of that when considering your options.
- Be aware that the internet is transitioning its IP address naming system from IPv4 to IPv6. Without diving too deep into this topic, just know that if your VPN has not upgraded its protocols, then any site with a new IPv6 address will leak. Look for a VPN service compatible with the new format.
- Make sure your VPN uses the newest version of the OpenVPN protocol.
- Windows 10 has an almost impossible-to-change default setting that chooses the fastest DNS server, which means it might ignore your VPN server and revert back to the ISP. The OpenVPN plugin is a good way to fight this.
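For readers who want to see the WebRTC behaviour described above for themselves, here is a small sketch that can be pasted into a browser console, once with the VPN off and once with it on. It asks the browser to gather ICE candidates against a public STUN server and logs them; if your real public IP shows up in the candidate strings while the VPN is active, WebRTC is leaking it. The STUN address below is just a commonly used public example, not a recommendation.

```javascript
// webrtc-leak-check.js -- log the ICE candidates the browser is willing to
// expose. Run with the VPN off, then on, and compare the IP addresses that
// appear in the candidate strings.
function collectWebRTCCandidates() {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] // public STUN server
  });

  pc.createDataChannel(''); // a data channel is needed to trigger ICE gathering

  pc.onicecandidate = (event) => {
    if (event.candidate) {
      // Candidate strings embed the local/public IPs gathered by the browser.
      console.log(event.candidate.candidate);
    }
  };

  pc.createOffer()
    .then((offer) => pc.setLocalDescription(offer))
    .catch((err) => console.error('WebRTC check failed:', err));
}

collectWebRTCCandidates();
```

Sites such as IPLeak.net automate the same idea, but running the check yourself makes it clear exactly what the browser hands out.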
Final Thoughts

In the end, using a leaky VPN defeats the security purpose of tunneled connections. It is certainly worth your while to evaluate VPN products, read their guides and learn to secure your system against accidental leaks. Keep in mind this is not a 'set it and forget it' problem; you should check for leakage periodically to make sure nothing has changed with your system. The winds of change blow constantly online, and what worked yesterday might not work tomorrow. As a final suggestion, make sure the VPN you use has a kill-switch feature that breaks your connection the moment it detects a data leak.

Author Bio
Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum Foundation, as well as an active GitHub contributor.

Further reading:
Dark Web Phishing Kits: Cheap, plentiful and ready to trick you
How to stop hackers from messing with your home network (IoT)
Privacy Australia - can you be tracked if you use a VPN?
What you need to know about VPNFilter Malware Attack


Exploring Module Development in AngularJS

Patrick Marabeas
29 Oct 2014
5 min read
This started off as an article about building a simple ScrollSpy module. Simplicity got away from me however, so I'll focus on some of the more interesting bits and pieces that make this module tick! You may wish to have the completed code with you as you read this to see how it fits together as a whole - as well as the missing code and logic. Modular applications are those that are "composed of a set of highly decoupled, distinct pieces of functionality stored in modules" (Addy Osmani). By having loose coupling between modules, the application becomes easier to maintain and functionality can be easily swapped in and out. As such, the functionality of our module will be strictly limited to the activation of one element when another is deemed to be viewable by the user. Linking, smooth scrolling, and other features that navigation elements might have, won’t be covered. Let's build a ScrollSpy module! Let's start by defining a new module. Using a chained sequence rather than declaring a variable for the module is preferable so you don't pollute the global scope. This also saves you when other modules have used the same var. 'use strict'; angular.module('ngScrollSpy', []); I'm all about making modules that are dead simple to implement for the developer. We don’t need superfluous parents, attributes, and controller requirements! All we need is: A directive (scrollspyBroadcast) that sits on each content section and determines whether it's been scrolled to (active and added to stack) or not. A directive (scrollspyListen) that sits on each navigation (or whatever) element and listens for changes to the stack—triggering a class if it is the current active element. We'll use a factory (SpyFactory) to deal with the stack (adding to, removing from, and broadcasting change). The major issue with a ScrollSpy module (particularly in Angular) is dynamic content. We could use MutationObservers —but they aren't widely supported and polling is just bad form. Let's just leverage scrolling itself to update element positions. We could also take advantage of $rootScope.$watch to watch for any digest calls received by $rootScope, but it hasn't been included in the version this article will link to. To save every single scrollspyBroadcast directive from calculating documentHeight and window positions/heights, another factory (PositionFactory) will deal with these changes. This will be done via a scroll event in a run block. This is a basic visualization of how our module is going to interact: Adding module-wide configuration By using value, provider, and config blocks, module-wide configuration can be implemented without littering our view with data attributes, having a superfluous parent wrapper, or the developer needing to alter the module file. The value block acts as the default configuration for the module. .value('config', { 'offset': 200, 'throttle': true, 'delay': 100 }) The provider block allows us to expose API for application-wide configuration. Here we are exposing config, which the developer will be able to set in the config block. .provider('scrollspyConfig', function() { var self = this; this.config = {}; this.$get = function() { var extend = {}; extend.config = self.config; return extend; }; return this; }); The user of the ScrollSpy module can now implement a config block in their application. The scrollspyConfig provider is injected into it (note, the injected name requires "Provider" on the end)—giving the user access to manipulate the modules configuration from their own codebase. 
theDevelopersFancyApp.config(['scrollspyConfigProvider', function(scrollspyConfigProvider) { scrollspyConfigProvider.config = { offset: 500, throttle: false, delay: 100 }; }]); The value and provider blocks are injected into the necessary directive—config being extended upon by the application settings. (scrollspyConfig.config). .directive('scrollspyBroadcast', ['config', 'scrollspyConfig', function(config, scrollspyConfig) { return { link: function() { angular.extend(config, scrollspyConfig.config); console.log(config.offset) //500 ... Updating module-wide properties It wouldn't be efficient for all directives to calculate generic values such as the document height and position of the window. We can put this functionality into a service, inject it into a run block, and have it call for updates upon scrolling. .run(['PositionFactory', function(PositionFactory) { PositionFactory.refreshPositions(); angular.element(window).bind('scroll', function() { PositionFactory.refreshPositions(); }); }]) .factory('PositionFactory', [ function(){ return { 'position': [], 'refreshPositions': function() { this.position.documentHeight = //logic this.position.windowTop = //logic this.position.windowBottom = //logic } } }]) PositionFactory can now be injected into the required directive. .directive('scrollspyBroadcast', ['config', 'scrollspyConfig', 'PositionFactory', function(config, scrollspyConfig, PositionFactory) { return { link: function() { console.log(PositionFactory.documentHeight); //1337 ... Using original element types <a data-scrollspyListen>Some text!</a> <span data-scrollspyListen>Some text!</span> <li data-scrollspyListen>Some text!</li> <h1 data-scrollspyListen>Some text!</h1> These should all be valid. The developer shouldn't be forced to use a specific element when using the scrollspyListendirective. Nor should the view fill with superfluous wrappers to allow the developer to retain their original elements. Fortunately, the template property can take a function (which takes two arguments tElement and tAttrs). This gives access to the element prior to replacement. In this example, transclusion could also be replaced by using element[0].innerText instead. This would remove the added child span that gets created. .directive('scrollspyListen', ['$timeout', 'SpyFactory', function($timeout, SpyFactory) { return { replace: true, transclude: true, template: function(element) { var tag = element[0].nodeName; return '<' + tag + ' data-ng-transclude></' + tag + '>'; }, ... Show me all of it! The completed codebase can be found over on GitHub. The version at the time of writing is v3.0.0. About the Author Patrick Marabeas is a freelance frontend developer who loves learning and working with cutting edge web technologies. He spends much of his free time developing Angular Modules, such as ng-FitText, ng-Slider, and ng-YouTubeAPI. You can follow him on Twitter @patrickmarabeas.


Hot Chips 31: IBM Power10, AMD’s AI ambitions, Intel NNP-T, Cerebras largest chip with 1.2 trillion transistors and more

Fatema Patrawala
23 Aug 2019
7 min read
Hot Chips 31, the premiere event for the biggest semiconductor vendors to highlight their latest architectural developments is held in August every year. The event this year was held at the Memorial Auditorium on the Stanford University Campus in California, from August 18-20, 2019. Since its inception it is co-sponsored by IEEE and ACM SIGARCH. Hot Chips is amazing for the level of depth it provides on the latest technology and the upcoming releases in the IoT, firmware and hardware space. This year the list of presentations for Hot Chips was almost overwhelming with a wide range of technical disclosures on the latest chip logic innovations. Almost all the major chip vendors and IP licensees involved in semiconductor logic designs took part: Intel, AMD, NVIDIA, Arm, Xilinx, IBM, were on the list. But companies like Google, Microsoft, Facebook and Amazon also took part. There are notable absences from the likes of Apple, who despite being on the Committee, last presented at the conference in 1994. Day 1 kicked off with tutorials and sponsor demos. On the cloud side, Amazon AWS covered the evolution of hypervisors and the AWS infrastructure. Microsoft described its acceleration strategy with FPGAs and ASICs, with details on Project Brainwave and Project Zipline. Google covered the architecture of Google Cloud with the TPU v3 chip.  And a 3-part RISC-V tutorial rounded off by afternoon, so the day was spent well with insights into the latest cloud infrastructure and processor architectures. The detailed talks were presented on Day 2 and Day 3, below are some of the important highlights of the event: IBM’s POWER10 Processor expected by 2021 IBM which creates families of processors to address different segments, with different models for tasks like scale-up, scale-out, and now NVLink deployments. The company is adding new custom models that use new acceleration and memory devices, and that was the focus of this year’s talk at Hot Chips. They also announced about POWER10 which is expected to come with these new enhancements in 2021, they additionally announced, core counts of POWER10 and process technology. IBM also spoke about focusing on developing diverse memory and accelerator solutions to differentiate its product stack with heterogeneous systems. IBM aims to reduce the number of PHYs on its chips, so now it has PCIe Gen 4 PHYs while the rest of the SERDES run with the company's own interfaces. This creates a flexible interface that can support many types of accelerators and protocols, like GPUs, ASICs, CAPI, NVLink, and OpenCAPI. AMD wants to become a significant player in Artificial Intelligence AMD does not have an artificial intelligence–focused chip. However, AMD CEO Lisa Su in a keynote address at Hot Chips 31 stated that the company is working toward becoming a more significant player in artificial intelligence. Lisa stated that the company had adopted a CPU/GPU/interconnect strategy to tap artificial intelligence and HPC opportunity. She said that AMD would use all its technology in the Frontier supercomputer. The company plans to fully optimize its EYPC CPU and Radeon Instinct GPU for supercomputing. It would further enhance the system’s performance with its Infinity Fabric and unlock performance with its ROCM (Radeon Open Compute) software tools. Unlike Intel and NVIDIA, AMD does not have a dedicated artificial intelligence chip or application-specific accelerators. 
Despite this, Su noted, “We’ll absolutely see AMD be a large player in AI.” AMD is considering whether or not to build a dedicated AI chip; this decision will depend on how artificial intelligence evolves. Su explained that companies have been improving their CPU (central processing unit) performance by leveraging various elements: process technology, die size, TDP (thermal design power), power management, microarchitecture, and compilers. Process technology is the biggest contributor, as it boosts performance by 40%. Increasing die size also boosts performance in the double digits, but it is not cost-effective. AMD, meanwhile, used microarchitecture to boost EPYC Rome server CPU IPC (instructions per cycle) by 15% in single-threaded and 23% in multi-threaded workloads. This IPC improvement is above the industry average IPC improvement of around 5%–8%.

Intel's Nervana NNP-T and Lakefield 3D Foveros hybrid processors

Intel revealed fine-grained details about its much-anticipated Spring Crest Deep Learning Accelerators at Hot Chips 31. The Nervana Neural Network Processor for Training (NNP-T) comes with 24 processing cores and a new take on data movement that's powered by 32GB of HBM2 memory. Its 27 billion transistors are spread across a 688mm2 die. The NNP-T also incorporates leading-edge technology from Intel rival TSMC.

Intel Lakefield 3D Foveros hybrid processors

In another presentation, Intel talked about its Lakefield 3D Foveros hybrid processors, the first to come to market with Intel's new 3D chip-stacking technology. The current design consists of two dies. The lower die houses all of the typical southbridge features, like I/O connections, and is fabbed on the 22FFL process. The upper die is a 10nm CPU that features one large compute core and four smaller Atom-based 'efficiency' cores, similar to an ARM big.LITTLE processor. Intel calls this a "hybrid x86 architecture," and it could denote a fundamental shift in the company's strategy. Finally, the company stacks DRAM atop the 3D processor in a PoP (package-on-package) implementation.

Cerebras' largest-ever chip with 1.2 trillion transistors

California artificial intelligence startup Cerebras Systems introduced its Cerebras Wafer Scale Engine (WSE), the world’s largest-ever chip built for neural network processing. Sean Lie, Co-Founder and Chief Hardware Architect at Cerebras, presented the gigantic chip at Hot Chips 31. The 16nm WSE is a 46,225 mm2 silicon chip, slightly larger than a 9.7-inch iPad. It features 1.2 trillion transistors, 400,000 AI-optimized cores, 18 Gigabytes of on-chip memory, 9 petabytes/s of memory bandwidth, and 100 petabytes/s of fabric bandwidth. It is 56.7 times larger than the largest NVIDIA graphics processing unit, which accommodates 21.1 billion transistors on an 815 mm2 silicon base.

NVIDIA's multi-chip solution for a deep neural network accelerator

NVIDIA, which announced it was designing a test multi-chip solution for DNN computations at a VLSI conference last year, explained the chip technology at Hot Chips 31 this year. It is currently a test chip for multi-chip DL inference. It is designed for CNNs and has a RISC-V chip controller. Each package holds 36 small chips in a 6x6 arrangement, each chip has 12 PEs, and each PE has 8 vector MACs.

A few other notable talks at Hot Chips 31

Microsoft unveiled the silicon in its new product, HoloLens 2.0. It has a holographic processor and custom silicon.
The application processor runs the app, and the HPU modifies the rendered image and sends it to the display.

Facebook presented details on Zion, its next-generation in-memory unified training platform. Zion, which is designed for Facebook's sparse workloads, has a unified BFLOAT16 format across CPUs and accelerators.

Huawei spoke about its Da Vinci architecture; a single Ascend 310 can deliver 16 TeraOPS of 8-bit integer performance, support real-time analytics across 16 channels of HD video, and consume less than 8W of power.

Xilinx Versal AI engine

Xilinx, the manufacturer of FPGAs, announced its new Versal AI engine last year as a way of moving FPGAs into the AI domain. This year at Hot Chips it expanded on the technology and more.

Ayar Labs, an optical chip-making startup, showcased results of its work with DARPA (the U.S. Department of Defense's Defense Advanced Research Projects Agency) and Intel on an FPGA chiplet integration platform.

The final talk on Day 3 was a presentation by Habana, which discussed an innovative approach to scaling AI training systems with its GAUDI AI processor.

AMD competes with Intel by launching EPYC Rome, world’s first 7 nm chip for data centers, luring in Twitter and Google
Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion, reports WSJ
Alibaba’s chipmaker launches open source RISC-V based ‘XuanTie 910 processor’ for 5G, AI, IoT and self-driving applications

8 NoSQL Databases Compared

Janu Verma
17 Jun 2015
5 min read
NoSQL, or non-relational, databases are increasingly used in big data and real-time web applications. These databases are non-relational in nature and provide a mechanism for the storage and retrieval of information that is not tabular. There are many advantages of using a NoSQL database:

Horizontal scalability
Automatic replication (using multiple nodes)
Loosely defined or no schema (a huge advantage, if you ask me!)
Sharding and distribution

Recently we were discussing the possibility of changing our data storage from HDF5 files to some NoSQL system. HDF5 files are great for storage and retrieval purposes. But now, with huge data coming in, we need to scale up, and the hierarchical schema of HDF5 files is not very well suited to all the sorts of data we are using. I am a bioinformatician working on data science applications to genomic data. We have genomic annotation files (GFF format), genotype sequences (FASTA format), phenotype data (tables), and a lot of other data formats. We want to be able to store data in a space- and memory-efficient way, and the framework should also facilitate fast retrieval. I did some research on the NoSQL options and prepared this cheat-sheet. It will be very useful for anyone thinking about moving their storage to non-relational databases. Also, data scientists need to be very comfortable with the basic ideas of NoSQL DBs. In the course Introduction to Data Science by Prof. Bill Howe (University of Washington) on Coursera, NoSQL DBs formed a significant part of the lectures. I highly recommend the lectures on these topics and this course in general. This cheat-sheet should also assist aspiring data scientists in their interviews. Some options for NoSQL databases:

Membase: This is a key-value type database. It is very efficient if you only need to quickly retrieve a value according to a key. It has all of the advantages of memcached when it comes to the low cost of implementation. There is not much emphasis on scalability, but lookups are very fast. It has a JSON format with no predefined schema. The weakness of using it for important data is that it's a pure key-value store, and thus is not queryable on properties.

MongoDB: If you need to associate a more complex structure, such as a document, to a key, then MongoDB is a good option. With a single query you are going to retrieve the whole document, and that can be a huge win. However, using these documents like simple key-value stores would not be as fast and as space-efficient as Membase. Documents are the basic unit. Documents are in JSON format with no predefined schema. It makes integration of data easier and faster.

Berkeley DB: It stores records in key-value pairs. Both key and value can be arbitrary byte strings, and can be of variable lengths. You can put native programming language data structures into the database without converting to a foreign record first. Storage and retrieval are very simple, but the application needs to know what the structure of a key and a value is in advance; it can't ask the DB. Simple data access services. No limit to the data types that can be stored. No special support for binary large objects (unlike some others).

Berkeley DB vs MongoDB: Berkeley DB has no partitioning while MongoDB supports sharding. MongoDB has some predefined data types like float, string, integer, double, boolean, date, and so on. Berkeley DB is a key-value store while MongoDB stores documents. Both are schema-free.
Berkeley DB has no support for Python, for example, although there are many third-party libraries.

Redis: If you need more structures like lists, sets, ordered sets, and hashes, then Redis is the best bet. It's very fast and provides useful data structures. It just works, but don't expect it to handle every use case. Nevertheless, it is certainly possible to use Redis as your primary data store. It is used less for distributed scalability; rather, it optimizes high-performance lookups at the cost of no longer supporting relational queries. (A short Python sketch at the end of this article contrasts this kind of key-value access with MongoDB's document model.)

Cassandra: Each key has values as columns, and columns are grouped together into sets called column families. Thus each key identifies a row of a variable number of elements. A column family contains rows and columns. Each row is uniquely identified by a key, and each row has multiple columns. Think of a column family as a table, with each key-value pair being a row. Unlike an RDBMS, different rows in a column family don't have to share the same set of columns, and a column may be added to one or multiple rows at any time. It is a hybrid between a key-value and a column-oriented database, and has a partially defined schema. It can handle large amounts of data across many servers (clusters), and is fault-tolerant and robust. Example: Cassandra was originally written by Facebook for Inbox search, and was later replaced there by HBase.

HBase: It is modeled after Google's Bigtable DB. The ideal use for HBase is in situations where you need improved flexibility, great performance, and scaling, and you have Big Data. The data structure is similar to Cassandra in that you have column families. It is built on Hadoop (HDFS), and can do MapReduce without any external support. It is very efficient for storing sparse data. Big data (2 billion rows) is easy to deal with. Example: a scalable email/messaging system with search.

HBase vs Cassandra: HBase is more suitable for data warehousing and large-scale data processing and analysis (such as indexing the web as in a search engine), while Cassandra is more apt for real-time transaction processing and the serving of interactive data. Cassandra is more write-centric and HBase is more read-centric. Cassandra has multi-data center support, which can be very useful.

Resources: NoSQL explained, Why NoSQL, Big Table

About the Author: Janu Verma is a Quantitative Researcher at the Buckler Lab, Cornell University, where he works on problems in bioinformatics and genomics. His background is in mathematics and machine learning and he leverages tools from these areas to answer questions in biology.
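To make the key-value versus document-store distinction above concrete, here is a minimal Python sketch. It assumes local Redis and MongoDB servers on their default ports and the third-party redis and pymongo packages; the database, key, and field names are made up for illustration.

```python
# Key-value (Redis) vs document store (MongoDB): a minimal sketch.
# Assumes local servers on default ports and the `redis` and `pymongo`
# packages; all names below are illustrative, not a real schema.
import redis
import pymongo

# Key-value: one opaque value per key, retrievable only by that key.
kv = redis.Redis(host="localhost", port=6379)
kv.set("sample:42:phenotype", "drought-tolerant")
print(kv.get("sample:42:phenotype"))          # b'drought-tolerant'

# Document store: JSON-like documents, queryable by their properties.
client = pymongo.MongoClient("mongodb://localhost:27017")
db = client["genomics"]                        # hypothetical database name
db.samples.insert_one({"sample_id": 42, "phenotype": "drought-tolerant"})
print(db.samples.find_one({"phenotype": "drought-tolerant"}))
```

The difference shows up in the last line: the document store can answer a query on a property ("phenotype"), whereas the key-value store can only return what you ask for by exact key.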

5 lessons public wi-fi can teach us about cybersecurity

Guest Contributor
30 Nov 2018
7 min read
Free, public Wi-Fi is now crucial in ensuring people stay connected where a secure network is absent or mobile data is unavailable. While the advantages of flexible internet access are obvious, the dangers are often less clear. By now, most of us are aware that these networks can pose a risk, but few can articulate exactly what these risks are and how we can protect ourselves. Follow the advice below to find out exactly what dangers lurk within.

The perils of public wi-fi

When you join a public hotspot without protection and begin to access the internet, the packets of data that go from your device to the router are public and open for anyone to intercept. While that sounds scary, technology like SSL/TLS has ensured the danger here isn’t as bad as it was a few years ago. That being said, all a cybercriminal needs to snoop on your connection is some relatively simple Linux software that’s accessible online. This leaves you vulnerable to a variety of attacks. Let's take a look at some of them now.

Data monitoring

Typically, a wi-fi adapter will be set to “managed” mode. This means it acts as a standalone client connecting to a single router for access to the internet. The interface will ignore all data packets except those that are explicitly addressed to it. However, some adapters can be configured into other modes. In “monitor” mode, an adapter will capture all the wireless traffic in a certain channel, regardless of the source or intended recipient. In this mode, the adapter can even capture data packets without being connected to a router – meaning it can sniff and snoop on all the data it gets its hands on. Not all commercial wi-fi adapters are capable of this, as it’s cheaper for manufacturers to make those that only handle “managed” mode. Still, if someone gets their hands on one and pairs it with some simple Linux software, they can see which URLs you are loading and all of the data you’re entering on any website not using HTTPS – including names, addresses, and financial accounts.

Fake hotspots

Catching unencrypted data packets out of the air isn’t the only risk of public wi-fi. When you connect to an unprotected router, you are implicitly trusting the supplier of that connection. Usually this trust is well-founded – it’s unlikely your local café is interested in your private data. However, the carelessness with which we now connect to public routers means that cybercriminals can easily set up a fake network to bait you in. Once an illegitimate hotspot has been created, all of the data flowing through it can be captured, analysed, and manipulated. One of the most common forms of manipulation is simply redirecting your traffic to an imitation of a popular website. The sole purpose of this clone site will be to capture your personal information and card details – the same strategy used in phishing scams.

ARP spoofing

Unfortunately, cybercriminals don’t even need a fake hotspot to interfere with your traffic. Every wi-fi and Ethernet interface has a unique MAC address – an identifying code used to ensure data packets travel to the correct destination. The way that routers – and all other devices – discover this information is using ARP (Address Resolution Protocol). For example, your smartphone might send out a request asking which device on the network is associated with a certain IP address. The requested device responds with its MAC address, ensuring the data packets are physically directed to the correct location. The issue with ARP is that it can be faked.
Your smartphone might send a request for the address of the public wi-fi router, and a different device will answer with a false address. Provided the signal of the false device is stronger than the legitimate one, your smartphone will be fooled. Again, this can be done with simple Linux software. Once the spoofing has taken place, all of your data will be sent to the false router, which can subsequently manipulate the traffic however it likes.

Man-in-the-Middle (MitM) attacks

A man-in-the-middle (MITM) attack refers to any malicious action in which the attacker secretly relays or alters the communication between two parties. On an unprotected connection, a cybercriminal can modify key parts of the network traffic, redirect this traffic elsewhere, or inject content into an existing packet. This could mean displaying a fake login form or website, changing links, text, pictures, or more. This is relatively straightforward to execute; an attacker within reception range of an unencrypted wi-fi point could insert themselves easily.

How to secure your connection

The prevalence and simplicity of these attacks only serves to highlight the importance of basic cybersecurity best practices. Following these foundational rules of cybersecurity should counteract the vast majority of public wi-fi threats.

Firewalls

An effective firewall will monitor and block any suspicious traffic flowing to and from your device. It’s a given that you should always have a firewall in place and your virus definitions updated to protect your device from upcoming threats. Though properly configured firewalls can effectively block some attacks, they’re not infallible, and do not exempt you from danger. They primarily help protect against malicious traffic, not malicious programs, and may not protect you if you inadvertently run malware. Firewalls should always be used in conjunction with other protective measures such as antivirus software.

Software updates

Not to be underestimated, software and system updates are imperative and should be installed as soon as they’re offered. Staying up to date with the latest security patches is the simplest step in protecting yourself against existing and easily exploited system vulnerabilities.

Use a VPN

Whether you’re a regular user of public Wi-Fi or not, a VPN is an essential security tool worth having. This software works by generating an encrypted tunnel that all of your traffic travels through, ensuring your data is secure regardless of the safety of the network you’re on. This is paramount for anyone concerned about their security online, and is arguably the best safeguard against the risks of open networks. That being said, there are dozens of available VPN services, many of which are unreliable or even dangerous. Free VPN providers have been known to monitor and sell users’ data to third parties. It’s important you choose a service provider with a strong reputation and a strict no-logging policy. It’s a crowded market, but most review websites recommend ExpressVPN and NordVPN as reliable options.

Use common sense

If you find yourself with no option but to use public Wi-Fi without a VPN, the majority of attacks can be avoided with old-school safe computing practices. Avoid making purchases or visiting sensitive websites like online banking. It’s best to stay away from any website that doesn’t use HTTPS. Luckily, popular browser extensions like HTTPS Everywhere can help extend your reach.
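As a small illustration of why HTTPS matters here, the sketch below, using only the Python standard library and example.com as a stand-in hostname, opens a TLS connection that fails loudly if the server's certificate cannot be verified: the same kind of check your browser performs before showing the padlock.

```python
# A minimal sketch: verify a host's TLS certificate before sending
# anything sensitive over an untrusted network. Standard library only;
# "example.com" is just a placeholder hostname.
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()   # verifies the chain and hostname

with socket.create_connection((hostname, 443), timeout=5) as sock:
    # wrap_socket raises ssl.SSLCertVerificationError on a bad certificate,
    # e.g. one presented by a spoofed hotspot or an ARP-spoofing attacker.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Negotiated:", tls.version())
        print("Issued to:", dict(pair[0] for pair in cert["subject"]))
```

A check like this doesn't replace a VPN, but it shows how certificate verification protects the content of your traffic even when the network itself is hostile.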
The majority of modern browsers have in-built security features that can identify threats and notify you if they encounter a malicious website. While it’s sensible to heed these warnings, these browsers are not failsafe and are much less likely to spot local interference by an unknown third party.

Simple solutions are often the strongest in cybersecurity

With the rising use of HTTPS and TLS, it’s become much harder for data to be intercepted and exploited. That being said, with a laptop, free Linux software, and a cheap Wi-Fi adapter, you’d be surprised how much damage can be done. Public Wi-Fi is now a staple of modern life. Despite its ubiquity, it’s still exploited with relative ease, and many are oblivious to exactly what these risks entail. Clearly cybersecurity still has a long way to go at the consumer level; for now, old lessons still ring true – the simplest solutions are often the strongest.

William Chalk is a writer and researcher at Top10VPN, a cybersecurity research group and the world’s largest VPN (Virtual Private Network) review site. As well as recommending the best VPN services, they publish independent research to help raise awareness of digital privacy and security risks.

What is Mob Programming?

Pavan Ramchandani
24 Apr 2018
4 min read
Mob Programming is a programming paradigm that is an extension of Pair Programming. The difference is actually quite straightforward. If in Pair Programming engineers work in pairs, in Mob Programming the whole 'mob' of engineers works together. That mob might even include project managers and DevOps engineers. Like any good mob, it can get rowdy, but it can also get things done when you're all focused on the same thing.

What is Mob programming?

The most common definition given to this approach by Woody Zuill (the self-proclaimed father of Mob programming) is as follows: “All the team members working on the same thing, at the same time, in the same space, and on the same computer.” Here are the key principles of Mob Programming:

The team comes together in a meeting room with a set task due for the day. This group working together is called the mob.
The entire code is developed on a single system.
Only one member is allowed to operate the system. This means only the Driver can write the code or make any changes to the code.
The other members are called "Navigators", and the expert among them for the problem at hand guides the Driver to write the code.
Everyone keeps switching roles, meaning no one person will be at the system all the time.
The session ends with all aspects of the task successfully completed.

The Mob Programming strategy

The success of mob programming depends on the collaborative nature of the developers coming together to form the mob. A group of 5-6 members makes a good mob. For a productive session, each member needs to be familiar with software development concepts like testing, design patterns, and the software development life cycle, among others. A project manager can initiate the team into the Mob programming approach in order to make the early stage of software development stress-free. Anyone stuck at a point in the problem will have Navigators who can bring in their expertise and keep the project development moving.

The advantages of Mob Programming

Mob programming might make you nervous about performing in a group. But the outcomes have shown that it tends to make work stress-free and almost error-free, since there are multiple opinions. The ground rules of the mob mean that a single person cannot be at the keyboard, writing code, longer than the others. This reduces the grunt work and provides the opportunity to switch to a different role in the mob. This trait really challenges and intrigues individuals to contribute to the project by using their creativity.

Criticisms of Mob Programming

Mob programming is about cutting the communication barrier in the team. However, in situations where the dynamics of some members are different, the session can turn out to be just a few active members dictating the terms for the task at hand. Many developers out there are set in their own ways. When asked to work on a task/project at the same time, there might be a conflict of interest. Some developers might not participate to their full capacity, and this might lead to the work being sub-standard.

To do Mob Programming well, you need a good mob

Mob programming is a modern approach to software development and comes with its own set of pros and cons. The productivity and fruitfulness of the approach lies in the credibility and dynamics of the members and not in the nature of the problem at hand. Hence the potential of this approach can be leveraged for solving difficult problems, given the best bunch of mobs to deal with it.
More on programming paradigms: What is functional reactive programming? What is the difference between functional and object oriented programming?

Why does Oculus CTO John Carmack prefer 2D VR interfaces over 3D Virtual Reality interfaces?

Sugandha Lahoti
23 May 2019
6 min read
Creating immersive 3D experiences in a virtual reality setup is the new norm. Tech companies around the world are attempting to perfect these 3D experiences to make them as natural, immersive, and realistic as possible. However, a certain portion of Virtual Reality creators still believe that creating a new interaction paradigm in 3D is actually worse than 2D. One of them is John Carmack, CTO of Oculus VR, the maker of the popular Virtual Reality headgear. He has penned a Facebook post highlighting why he thinks 3D interfaces are usually worse than 2D interfaces. Carmack details a number of points to justify his assertion and says that the majority of browsing, configuring, and selecting interactions benefit from being designed in 2D. He wrote an internal post in 2017 clarifying his views. He was recently reviewing a VR development job description before an interview, where he saw that one of the responsibilities for the open Product Management Leader position was to “Create a new interaction paradigm that is 3D instead of 2D based”, which prompted him to write this post.

Splitting information across multiple depths is harmful

Carmack says splitting information across multiple depths makes our eyes re-verge and re-focus. He explains this point with an analogy: “If you have a convenient poster across the room in your visual field above your monitor – switch back and forth between reading your monitor and the poster, then contrast with just switching back and forth with the icon bar at the bottom of your monitor.” Static HMD optics should have their focus point at the UI distance. If we want to be able to scan information as quickly and comfortably as possible, says Carmack, it should all be the same distance from the viewer and it should not be too close. As Carmack observes, you don't see in 3D. You see two 2D planes that your brain extracts a certain amount of depth information from (a short projection sketch at the end of this article illustrates this). A Hacker News user points out, “As a UI goes, you can't actually freely use that third dimension, because as soon as one element obscures another, either the front element is too opaque to see through, in which case the second might as well not be there, or the opacity is not 100% in which case it just gets confusing fast. So you're not removing a dimension, you're acknowledging it doesn't exist. To truly "see in 3D" would require a fourth-dimension perspective. A 4D person could use a 3D display arbitrarily, because they can freely see the entire 3D space, including seeing things inside opaque spheres, etc, just like we can look at a 2D display and see the inside of circles and boxes freely.” However, another user critiqued Carmack's claim that splitting information across multiple depths is harmful. He says, “Frequently jumping between dissimilar depths is harmful. Less frequent, sliding, and similar depths, can be wonderful, allowing the much denser and easily accessible presentation of information.” A general takeaway is that “most of the current commentary about "VR", is coming from a community focused on a particular niche, current VR gaming. One with particular and severe, constraints and priorities that don't characterize the entirety of a much larger design space.”

Visualize a 3D environment as a pair of 2D projections

Carmack says that unless we move significantly relative to the environment, they stay essentially the same 2D projections.
He further adds that even when designing a truly 3D UI, developers would have to consider this notion to keep the 3D elements from overlapping each other when projected onto the view. It can also be difficult for 2D UX/product designers to transfer their thinking over to designing immersive products.
https://twitter.com/SuzanneBorders/status/1130231236243337216
However, building in 3D is important for things which are naturally intuitive in 3D. This, as Carmack mentions, is "true 3D" content, for which you get a 3D interface whether you like it or not. A user on Hacker News points out, “Sometimes things which we struggle to decode in 2D are just intuitive in 3D like knots or the run of wires or pipes.”

Use 3D elements for efficient UI design

Carmack says that 3D may have a small place in efficient UI design as a “treatment” for UI elements. He gives examples such as using slightly protruding 3D buttons sticking out of the UI surface in places where we would otherwise use color changes or faux-3D effects like bevels or drop shadows. He says, “the visual scanning and interaction is still fundamentally 2D, but it is another channel of information that your eye will naturally pick up on.” This doesn’t mean that VR interfaces should just be “floating screens”. The core advantage of VR from a UI standpoint is the ability to use the entire field of view, and to allow it to be extended by “glancing” to the sides. Content selection, Carmack says, should go off the sides of the screens and have a size/count that leaves half of a tile visible at each edge when looking straight ahead. Explaining his statement, he adds, “actually interacting with UI elements at the angles well away from the center is not good for the user, because if they haven’t rotated their entire body, it is a stress on their neck to focus there long, so the idea is to glance, then scroll.” He also advises putting less frequently used UI elements off to the sides or back. A Twitter user agreed with Carmack’s floating screens comment.
https://twitter.com/SuzanneBorders/status/1130233108073144320
Most users agreed with Carmack’s assertion, sharing their own experiences. A comment on Reddit reads, “He makes a lot of good points. There are plenty examples of 'real life' instances where the existence and perception of depth isn't needed to make useful choices or to interact with something, and that in fact, as he points out, it's actually a nuisance to have to focus on multiple planes, back and forth', to get something done.”
https://twitter.com/feiss/status/1130524764261552128
https://twitter.com/SculptrVR/status/1130542662681939968
https://twitter.com/jeffchangart/status/1130568914247856128
However, some users point out that this can also be because the tools for doing full 3D designs are nowhere near as mature as the tools for doing 2D designs.
https://twitter.com/haltor/status/1130600718287683584
A Twitter user aptly observes: “3D is not inherently superior to 2D.”
https://twitter.com/Clarice07825084/status/1130726318763462656
Read the full text of John Carmack's article on Facebook. More insights are in this Twitter thread.

Google announces Glass Enterprise Edition 2: an enterprise-based augmented reality headset
Oculus Rift S: A new VR with inside-out tracking, improved resolution and more!
What’s new in VR Haptics?
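As an aside on the "pair of 2D projections" point above, here is a tiny Python sketch of a pinhole-style perspective projection, computed once per eye; the eye separation and focal length are made-up values, purely for illustration.

```python
# A minimal sketch of the "two 2D planes" idea: project a 3D point onto
# a 2D image plane once per eye. The ~64 mm eye separation and the focal
# length are illustrative values, not anyone's actual HMD parameters.
def project(point, eye_x=0.0, focal=1.0):
    """Perspective-project a 3D point (x, y, z) seen from an eye at (eye_x, 0, 0)."""
    x, y, z = point
    return (focal * (x - eye_x) / z, focal * y / z)

p = (0.3, 0.1, 2.0)                 # a point 2 m in front of the viewer
left = project(p, eye_x=-0.032)
right = project(p, eye_x=+0.032)
print(left, right)                  # two slightly shifted 2D positions
```

The only difference between the two outputs is a small horizontal shift (disparity), which is exactly the extra channel of depth information Carmack describes the brain extracting from two otherwise flat images.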

Defensive Strategies Industrial Organizations Can Use Against Cyber Attacks

Guest Contributor
20 Mar 2019
8 min read
Industrial organizations are prime targets for spies, criminals, hacktivists, and even enemy countries. Spies from rival organizations seek ways to access industrial control systems (ICS) so they can steal intelligence and technology and gain a competitive advantage. Criminals look for ways to ransom companies by locking down IT systems. Hacktivists and terrorists are always looking for ways to disrupt and even endanger life through IT, and international antagonists might want to hack into a public system (e.g. a power plant) to harm a country's economic performance. This article looks at a number of areas where CTOs need to focus their attention when it comes to securing their organizations from cyber attacks.

Third Party Collaboration

The Target breach of November 2013 highlighted the risks of poor vendor management policies when it comes to cybersecurity. A third-party HVAC (Heating, Ventilation, and Air Conditioning) provider was connected into the retailer's IT architecture in such a way that, when it was hacked, cybercriminals could access and steal credit card details from the retailer's customers. Every third party given access to your network – even security vendors – needs to be treated as a possible accidental or deliberate vector of attack. These include catering companies, consultants, equipment rental firms, maintenance service providers, transport providers, and anyone else who requests access to the corporate network. Then there are sub-contractors to think about. The IT team and legal department need to be involved from the start to risk-assess third-party collaborations and ensure access, if granted, is restricted to role-specific activities and reviewed regularly.

Insider and Outsider Threat

An organization's own staff can compromise a system's integrity either deliberately or accidentally. Deliberate attacks can be motivated by money, revenge, ideology, or ego and can be among the most difficult to detect and stop. Organizations should employ a combination of technical and non-technical methods to limit insider threat. Technical measures include granting minimum access privileges and monitoring data flow and user behavior for anomalies (e.g. logging into a system at strange hours or uploading data from a system unrelated to their job role). One solution which can be used for this purpose is a privileged access management (PAM) system. This is a centralized platform usually divided into three parts: an access manager, a session manager, and a password vault manager. The access manager component handles system access requests based on the company’s IAM (Identity and Access Management) policies. It is good practice to assign users to specific roles and to limit access for each user to only those services and areas of the network they need to perform their role. The PAM system automates this process, with any temporary extra permissions requiring senior authorization. The session manager component tracks user activity in real time and also stores it for future audit purposes. Suspicious user activity can be reported to super admins, who can then terminate access. The password vault manager component protects the root passwords of each system and ensures users follow the company’s user password policy. Device management also plays an important part in access security. There is potentially a big security difference between an authorized user logging on to a system from a work desktop and the same user logging on to the same system via their mobile device.
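As a rough illustration of the access-manager component described above, here is a minimal Python sketch of the kind of role-based, least-privilege check it performs before opening a session; all of the role, user, and permission names are hypothetical.

```python
# A minimal sketch of a least-privilege access check, in the spirit of a
# PAM access manager. Roles, users, and permissions are hypothetical.
ROLE_PERMISSIONS = {
    "plant-operator": {"scada:read"},
    "maintenance":    {"scada:read", "plc:update-firmware"},
    "it-admin":       {"scada:read", "scada:write", "audit:read"},
}

USER_ROLES = {"alice": "maintenance", "bob": "plant-operator"}

def is_allowed(user: str, permission: str) -> bool:
    """Grant access only if the user's role explicitly includes the permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

# Temporary elevations would be separate, senior-approved grants, and every
# decision would be logged by the session manager for later audit.
print(is_allowed("bob", "plc:update-firmware"))    # False
print(is_allowed("alice", "plc:update-firmware"))  # True
```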
Non-technical strategies to tackle insider threat might include setting up a confidential forum for employees to report concerns and ensuring high-quality cybersecurity training is provided and regularly reviewed. When designing or choosing training packages, it is important to remember that not all employees will understand or be comfortable with the technical language, so all instructions and training should be stripped of jargon as far as possible. Another tip is to include plenty of hands-on training and real-life simulations. Some companies test employee vulnerability by having their IT department create a realistic phishing email and recording how many clicks it gets from employees. This will highlight which employees or departments need refresher training. Robust policies for any sensitive data physically leaving the premises are also important. Employees should not be able to take work devices, disks, or flash drives off the premises without the company’s knowledge, and this is even more important after an employee leaves the company.

Data Protection

Post-GDPR, data protection is more critical than ever. Failure to protect EU-based customer data from theft can expose organizations to over 20 million Euros worth of fines. Data needs to be secure both during transmission and while being stored. It also needs to be quickly and easily found and deleted if customers need to access their data or request its removal. This can be complex, especially for large organizations using cloud-based services. A full data audit is the first place to start before deciding what type of encryption is needed during data transfer and what security measures are necessary for stored data. For example, if your network has a demilitarized zone (DMZ), data in transit should always terminate there and there should be no protocols capable of spanning it. Sensitive customer data or mission-critical data can be secured at rest by encrypting it and then applying cryptographic hashes. Your audit should look at all components of your security provision. For example, problems with reporting threats can arise due to insufficient storage space for firewall logs.

VPN Vulnerabilities

Some organizations avoid transmitting data over the internet by setting up a VPN (Virtual Private Network). However, this does not mean that data is necessarily safe from cybercriminals. One big problem with most set-ups is that data will be routed over the internet should the VPN connection be dropped. A kill switch or network lock can help avoid this. VPNs may not be configured optimally and some may lack protection from various types of data leaks, including DNS, WebRTC, and IPv6 leaks. DNS leaks can occur if your VPN drops a connection and your browser falls back to its default DNS settings, exposing your IP address. WebRTC, a fairly new technology, enables browsers to talk to one another without using a server. This requires each browser to know the other’s public IP address, and some VPNs are not designed to protect against this type of leak. Finally, IPv6 leaks will happen if your VPN only handles IPv4 requests: any IPv6 requests will be sent on to your PC, which will automatically respond with your IP address. Most VPN leaks can be checked for using free online tools, and your vendor should either be able to solve the issue or you may need to consider a different vendor. If you can, use L2TP (Layer 2 Tunneling Protocol) or OpenVPN rather than the more easily compromised PPTP (Point-to-Point Tunneling Protocol).
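Returning to the point above about securing data at rest by encrypting it and then applying cryptographic hashes, here is a minimal Python sketch; it assumes the third-party cryptography package, and key management (vaults, HSMs, rotation) is deliberately left out.

```python
# A minimal sketch of encrypt-then-hash for data at rest. Assumes the
# third-party `cryptography` package; key handling is out of scope here.
import hashlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, kept in a key vault / HSM
cipher = Fernet(key)

record = b'{"customer_id": 1001, "card_last4": "4242"}'   # illustrative data
ciphertext = cipher.encrypt(record)
digest = hashlib.sha256(ciphertext).hexdigest()           # integrity check value

# Later: verify the stored hash before trusting and decrypting the data.
assert hashlib.sha256(ciphertext).hexdigest() == digest
print(cipher.decrypt(ciphertext))
```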
Network Segmentation

Industrial organizations tend to use network segmentation to isolate individual zones should a compromise happen. For example, this could immediately cut off all access to potentially dangerous machinery if an office-based CRM is hacked. The Purdue Model for Industrial Control Systems is the basis of ISA-99, a commonly referenced standard, which divides a typical ICS architecture into four to five zones and six levels. In the most basic model, an ICS is split into various area or cell zones which sit within an overall industrial zone. A demilitarized zone (DMZ) sits between this industrial zone and the higher-level enterprise zone. Network segmentation is a complex task but is worth the investment. Once it is in place, the attack surface of your network will be reduced, and monitoring for intrusions and responding to cyber incidents will be quicker and easier.

Intrusion Detection

Intrusion detection systems (IDS) are more proactive than simple firewalls, actively searching the network for signs of malicious activity. An IDS can be a hardware device or a software application and can use various detection techniques, from identifying malware signatures to monitoring deviations from normal traffic flow. The two most common classes of IDS are network intrusion detection systems (NIDS) and host-based intrusion detection systems (HIDS). While NIDS focus on incoming traffic, HIDS monitor existing files and folders. Alarm filtering (AF) technology can help to sort genuine threats from false positives. When a system generates a warning for every anomaly it picks up, agents can find it hard to connect failures together to find the cause. This can also lead to alarm fatigue, where the agent becomes desensitized to system alarms and misses a real threat. AF uses various means to pre-process system alarms so they can be better understood and acted upon. For example, related failures may be grouped together and then assigned to a priority list.

System Hardening and Patch Management

System hardening means locking down certain parts of a network or device, or removing features, to prevent access or to stop unwanted changes. Patching is a form of system hardening as it closes up vulnerabilities, preventing them from being exploited. To defend their organization, the IT support team should define a clear patch management policy. Vendor updates should be applied as soon as possible and automated where they can be.

Author Bio

Brent Whitfield is CEO of DCG Technical Solutions, Inc. DCG provides a host of IT services Los Angeles businesses depend upon, whether they deploy in-house, cloud, or hybrid infrastructure. Brent has been featured in Fast Company, CNBC, Network Computing, Reuters, and Yahoo Business.

RSA Conference 2019 Highlights: Top 5 cybersecurity products announced
Cybersecurity researcher withdraws public talk on hacking Apple’s Face ID from Black Hat Conference 2019: Reuters report
5 lessons public wi-fi can teach us about cybersecurity

Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?

Savia Lobo
02 May 2018
5 min read
Arduino Uno and Raspberry Pi 3 are the go-to options for IoT projects. They're tiny computers that can make a big impact in how we connect devices to each other, and to the internet. But they can also be a lot of fun too - at their best, they do both. For example, Arduino Uno and Raspberry Pi were used to make a custom underwater camera solution for filming the Netflix documentary, Chasing Coral. They were also behind an autonomous racing robot. However, how are the two microcomputers different? If you're confused about which one you should start using, here's a look at the key features of both the Arduino Uno and the Raspberry Pi 3. This will give you a clearer view of what fits your project well, or maybe just help you decide what to include on your birthday wishlist.

Comparing the Arduino Uno and Raspberry Pi 3

The Raspberry Pi 3 has a Broadcom BCM2837 SoC, so it can handle multiple tasks at one time. It is a Single Board Computer (SBC), which means it is a fully functional computer with a dedicated processor and memory, and is capable of running an OS - the Raspberry Pi 3 runs Linux. It can run multiple programs as it has its own USB ports, audio output, and a graphics driver for HDMI output. One can also install other operating systems on it, such as Android, Windows 10 IoT Core, or Firefox OS. The Arduino Uno is a microcontroller board based on the ATmega328, an 8-bit microcontroller with 32KB of Flash memory and 2KB of RAM, which is not as powerful as an SBC. However, microcontrollers are a great choice for quick setups. They are a good pick when controlling small devices such as LEDs, motors, and several different types of sensors, but they cannot run a full operating system. The Arduino Uno runs one program at a time. Let's look at the features and how one stands out better than the other:

Speed

The Raspberry Pi 3 (1.2 GHz) is much faster than the Arduino (16 MHz). This means it can complete day-to-day tasks such as web surfing and playing videos with greater ease. From this perspective, the Raspberry Pi is the go-to choice for media-centered applications. Winner: Raspberry Pi 3

Ease of interfacing

The Arduino Uno offers a simplified approach to project building. It interfaces easily with analog sensors, motors, and other components. By contrast, the Raspberry Pi 3 has a more complicated route if you want to set up projects. For example, to take sensor readings you'll need to install libraries and connect to a monitor, keyboard, and mouse. Winner: Arduino Uno

Bluetooth/Internet connectivity

The Raspberry Pi 3 connects to Bluetooth devices and the internet directly using Ethernet or by connecting to Wi-Fi. The Arduino Uno can do that only with the help of a Shield that adds internet or Bluetooth connectivity. HATs (Hardware Attached on Top) and Shields can be used on both devices to give them additional functionality. For example, HATs are used on the Raspberry Pi 3 to control an RGB matrix, add a touchscreen, or even create an arcade system. Shields that can be used on the Arduino Uno include a Relay Shield, a Touchscreen Shield, or a Bluetooth Shield. There are hundreds of Shields and HATs that provide the functionality that you regularly use. Winner: Raspberry Pi 3

Supporting ports

The Raspberry Pi 3 has an HDMI port, audio port, 4 USB ports, camera port, and LCD port, which is ideal for media applications. On the other hand, the Arduino Uno does not have any of these ports on the board. However, some of these ports can be added to the Arduino Uno with the help of Shields.
The Arduino Uno has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button. Winner: Raspberry Pi 3

Other features

Set-up time: The Raspberry Pi 3 takes longer to set up. You'll also probably need additional components such as an HDMI cable, a monitor, and a keyboard and mouse. For the Arduino Uno you simply have to plug it in, and the code then runs immediately (the short Raspberry Pi GPIO sketch at the end of this article illustrates the contrast in workflow). Winner: Arduino Uno

Affordable price: The Arduino Uno is much cheaper: around $20 compared to the Raspberry Pi 3, which is around $35. It's important to note that this excludes the cost of cables, keyboards, mice, and other additional hardware. As mentioned above, you don't need those extras with the Arduino Uno. Winner: Arduino Uno

Both the Arduino Uno and Raspberry Pi 3 are great in their individual offerings. The Arduino Uno would be an ideal board if you want to get started with electronics and begin building fun and engaging hands-on projects. It's great for learning the basics of how sensors and actuators work, and an essential tool for one's rapid prototyping needs. On the other hand, the Raspberry Pi 3 is great for projects that need an online connection and have multiple operations running at the same time. Pick as per your need! You can also check out some of our exciting books on the Arduino Uno and Raspberry Pi:

Raspberry Pi 3 Home Automation Projects: Bringing your home to life using Raspberry Pi 3, Arduino, and ESP8266
Build Supercomputers with Raspberry Pi 3
Internet of Things with Arduino Cookbook

How to build a sensor application to measure Ambient Light
5 reasons to choose AWS IoT Core for your next IoT project
Build your first Raspberry Pi project
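To give a feel for the workflow difference mentioned above, here is a minimal Python sketch that blinks an LED from a Raspberry Pi 3. It assumes the commonly pre-installed RPi.GPIO library and an LED (with a resistor) wired to BCM pin 17; on an Arduino Uno, the equivalent would be a short C++ sketch uploaded through the Arduino IDE.

```python
# A minimal sketch: blink an LED from a Raspberry Pi 3 using RPi.GPIO.
# Assumes an LED plus resistor wired between BCM pin 17 and ground.
import time
import RPi.GPIO as GPIO

LED_PIN = 17                      # illustrative pin choice

GPIO.setmode(GPIO.BCM)            # use Broadcom pin numbering
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    for _ in range(10):           # blink ten times
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()                # release the pins on exit
```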