Tech Guides

851 Articles

Are Recurrent Neural Networks capable of warping time?

Savia Lobo
07 May 2018
2 min read
'Can recurrent neural networks warp time?' is a paper authored by Corentin Tallec and Yann Ollivier, to be presented at ICLR 2018. The paper shows that plain RNNs cannot account for time warpings, that leaky RNNs can account for uniform time scalings but not irregular warpings, and that gated RNNs can adapt to irregular warpings. In short, it connects the gating mechanism of LSTMs (and GRUs) to time invariance and warping.

What problem is the paper trying to solve?

The authors prove that learnable gates in a recurrent model formally provide quasi-invariance to general time transformations in the input data. Further, they recover part of the LSTM architecture from a simple axiomatic approach. This leads to a new way of initializing gate biases in LSTMs and GRUs. Experimentally, this new chrono initialization is shown to greatly improve the learning of long-term dependencies, with minimal implementation effort.

Paper summary

The authors derive the self-loop feedback gating mechanism of recurrent networks from first principles, via a postulate of invariance to time warpings. Gated connections appear to regulate the local time constants in recurrent models. With this in mind, the chrono initialization, a principled way of initializing gate biases in LSTMs, is introduced. Experimentally, chrono initialization is shown to bring notable benefits when facing long-term dependencies.

Key takeaways

The authors show that postulating invariance to time transformations in the data (taking invariance to time warping as an axiom) necessarily leads to a gate-like mechanism in recurrent models. The paper provides precise prescriptions on how to initialize gate biases depending on the range of time dependencies to be captured. The empirical benefits of the new initialization were tested on both synthetic and real-world data: the authors observed a substantial improvement with long-term dependencies, and slight gains or no change when short-term dependencies dominate.

Reviewer comments summary

Overall score: 25/30. Average score: 8. According to one reviewer, the core insight of the paper is the link between recurrent network design and its effect on how the network reacts to time transformations; the reviewer found this insight simple, elegant, and valuable. A minor complaint was an unnecessarily large number of paragraph breaks, which makes reading slightly jarring.

Related reading:
Recurrent neural networks and the LSTM architecture
Build a generative chatbot using recurrent neural networks (LSTM RNNs)
How to recognize Patterns with Neural Networks in Java
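If you want to experiment with the paper's idea, here is a minimal sketch of the chrono initialization in PyTorch. This is our illustration, not code from the paper: the prescription is to draw the forget gate bias as b_f ~ log(U[1, T_max - 1]) and set the input gate bias to b_i = -b_f, where T_max is the longest time dependency you expect to capture.

```python
import torch
import torch.nn as nn

def chrono_init(lstm: nn.LSTM, t_max: float) -> None:
    # Chrono initialization (Tallec & Ollivier, ICLR 2018):
    # b_f ~ log(Uniform[1, t_max - 1]), b_i = -b_f.
    h = lstm.hidden_size
    with torch.no_grad():
        # PyTorch sums bias_ih and bias_hh, so zero both first
        # and write the chrono values into bias_ih only.
        for name, param in lstm.named_parameters():
            if "bias" in name:
                param.zero_()
        b_f = torch.log(torch.empty(h).uniform_(1.0, t_max - 1.0))
        bias = lstm.bias_ih_l0  # gates are packed as [input, forget, cell, output]
        bias[h:2 * h] = b_f     # forget gate bias
        bias[0:h] = -b_f        # input gate bias

lstm = nn.LSTM(input_size=32, hidden_size=64)
chrono_init(lstm, t_max=100.0)  # t_max chosen from the expected dependency range
```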


Behind the scenes: Deep learning evolution and core concepts

Shoaib Dabir
19 Dec 2017
6 min read
Note: This article is an excerpt from a book by Kuntal Ganguly titled Learning Generative Adversarial Networks. The book will help you build and analyze various deep learning models and apply them to real-world problems.

This article will take you through the history of deep learning and how it has grown over time. It will walk you through some of the core concepts of deep learning, such as sigmoid activation and the rectified linear unit (ReLU).

Evolution of deep learning

A lot of the important work on neural networks happened in the 80s and 90s, but back then computers were slow and datasets very tiny, so the research didn't find many applications in the real world. As a result, in the first decade of the 21st century, neural networks almost completely disappeared from the world of machine learning. It's only in the last few years, first with speech recognition around 2009 and then with computer vision around 2012, that neural networks made a big comeback (with architectures such as LeNet and AlexNet). What changed? Lots of data (big data) and cheap, fast GPUs. Today, neural networks are everywhere. So, if you're doing anything with data, analytics, or prediction, deep learning is definitely something you want to get familiar with.

Deep learning is an exciting branch of machine learning that uses data, lots of data, to teach computers how to do things only humans were capable of before, such as recognizing what's in an image, understanding what people are saying when they talk on their phone, translating a document into another language, and helping robots explore the world and interact with it. Deep learning has emerged as a central tool for solving perception problems, and it is the state of the art in computer vision and speech recognition. Today many companies have made deep learning a central part of their machine learning toolkit: Facebook, Baidu, Amazon, Microsoft, and Google are all using deep learning in their products, because deep learning shines wherever there is lots of data and complex problems to solve.

Deep learning is the name we often use for "deep neural networks" composed of several layers. Each layer is made of nodes. The computation happens in the node, which combines input data with a set of parameters, or weights, that either amplify or dampen that input. These input-weight products are then summed, and the sum is passed through an activation function to determine to what extent the value should progress through the network to affect the final prediction, such as an act of classification. A layer consists of a row of nodes that turn on or off as the input is fed through the network. The output of the first layer becomes the input of the second layer, and so on. Let's get familiar with some deep neural network concepts and terminology.

Sigmoid activation

The sigmoid activation function used in neural networks has an output bounded in (0, 1), with an offset parameter α that shifts where the curve is centered. The sigmoid function often works fine for gradient descent, as long as the input data x is kept within a limit. For large values of x, y is nearly constant, so the derivative dy/dx (the gradient) approaches 0; this is often termed the vanishing gradient problem. It is a problem because when the gradient is 0, multiplying it by the loss (actual value minus predicted value) also gives us 0, and ultimately the network stops learning.
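To see both the node computation and the vanishing gradient in numbers, here is a small sketch (our illustration, with made-up values) of a single node passing a weighted sum of inputs through a sigmoid:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # derivative of the sigmoid

# A single node: combine the input with weights, add a bias,
# and pass the sum through the activation function.
x = np.array([0.5, -1.2, 3.0])   # input data (illustrative values)
w = np.array([0.1, 0.4, -0.2])   # weights
b = 0.05                         # bias
y = sigmoid(np.dot(w, x) + b)

# For large inputs the gradient all but vanishes, stalling learning.
print(sigmoid_grad(np.array([0.0, 5.0, 20.0])))  # -> [0.25, ~0.0066, ~2e-9]
```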
Rectified Linear Unit (ReLU)

A neural network can be built by combining linear classifiers with non-linear functions. The Rectified Linear Unit (ReLU) has become very popular in the last few years. It computes the function f(x) = max(0, x); in other words, the activation is simply thresholded at zero. Unfortunately, ReLU units can be fragile during training and can "die": a weight update could change the weights in such a way that the neuron never activates on any datapoint again, so the gradient flowing through the unit will be zero forever from that point on. To overcome this problem, a leaky ReLU has a small negative slope (of 0.01, or so) instead of zero when x < 0:

f(x) = 1(x < 0)(αx) + 1(x >= 0)(x)

where α is a small constant.

Exponential Linear Unit (ELU)

The mean of the ReLU activation is not zero, which sometimes makes learning difficult for the network. The Exponential Linear Unit (ELU) is similar to the ReLU when the input x is positive, but for negative values it is bounded by a fixed value, -1 for α = 1 (the hyperparameter α controls the value at which an ELU saturates for negative inputs). This behavior helps push the mean activation of neurons closer to zero, which helps the network learn representations that are more robust to noise.

Stochastic Gradient Descent (SGD)

Scaling batch gradient descent is cumbersome because it has to compute a lot when the dataset is big. As a rule of thumb, if computing your loss takes n floating-point operations, computing its gradient takes about three times that compute. In practice, we want to train on lots of data, because on real problems we always gain more the more data we use. And because gradient descent is iterative, a single parameter update has to go through all the data samples, and this iteration over the data is repeated tens or hundreds of times. Instead of computing the loss over the entire dataset for every step, we can compute the average loss on a very small random fraction of the training data: think between 1 and 1000 training samples each time. This technique is called Stochastic Gradient Descent (SGD) and is at the core of deep learning, because SGD scales well with both data and model size. SGD has a reputation for being black magic, as it has lots of hyperparameters to play with and tune, such as initialization parameters, learning rate, decay, and momentum, and you have to get them right.

Deep learning has emerged over time through its evolution from neural networks within machine learning. It is an intriguing segment of machine learning that uses huge amounts of data to teach computers how to do things that only humans were capable of. Some key players adopted it at a very early stage: Facebook, Baidu, Amazon, Microsoft, and Google. If deep learning has got you hooked, wait till you learn what GANs are from the book Learning Generative Adversarial Networks.
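As a quick coda, here is how the activations and the SGD loop described above look in NumPy. This is our sketch with synthetic data and a plain linear model, not the book's code:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x < 0, alpha * x, x)   # small negative slope instead of zero

def elu(x, alpha=1.0):
    return np.where(x < 0, alpha * (np.exp(x) - 1.0), x)  # saturates at -alpha

# Stochastic gradient descent on a tiny least-squares problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))            # synthetic dataset
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=10_000)

w = np.zeros(3)
lr, batch_size = 0.1, 32
for step in range(1_000):
    idx = rng.integers(0, len(X), size=batch_size)   # a small random fraction
    Xb, yb = X[idx], y[idx]
    grad = 2.0 * Xb.T @ (Xb @ w - yb) / batch_size   # gradient of the mean squared error
    w -= lr * grad

print(w)  # close to true_w after a few hundred steps
```

Each step touches only 32 samples instead of all 10,000, which is exactly why SGD scales to large datasets.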


The new tech worker movement: How did we get here? And what comes next?

Bhagyashree R
28 Jan 2019
8 min read
Earlier this month, Logic Magazine, a print magazine about technology, hosted a discussion about the past, present, and future of the tech worker movement. The event was co-sponsored by solidarity groups including the Tech Workers Coalition, Coworker.org, the NYC-DSA Tech Action Working Group, and Science for the People.

Among the panelists were Joan Greenbaum, who was involved in organizing tech workers in the mainframe era and was part of Computer People for Peace, and Meredith Whittaker, a research scientist at New York University, co-founder of the AI Now Institute and the Google Open Research group, and one of the organizers of the Google Walkout. Liz Fong-Jones, Developer Advocate at Google Cloud Platform, was also present; she recently tweeted that she will be leaving the company in February because of Google's lack of leadership in response to the demands made by employees during the Google Walkout in November 2018. Also in attendance were Emma Quail, representing Unite Here, and Patricia Rosa, a Facebook food service worker who was inspired to fight for the union after watching a pregnant friend lose her job because she took one day off for a doctor's appointment.

The discussion was held in New York and hosted by Ben Tarnoff, the co-founder of Logic Magazine. It lasted almost an hour, after which the Q&A session started. You can see the full discussion on Logic's Facebook page.

The rise of tech workers organizing

In recent years, we have seen tech workers coming together to stand against unjust decisions taken by their companies. Workers at companies like Google, Amazon, and Microsoft have raised their voices against contracts with government agencies like ICE and the Pentagon, contracts that are purely profit-oriented and can prove harmful to humanity. For instance, there was a huge controversy around Google's Project Maven, which was focused on analyzing drone footage and could eventually have been used to improve drone strikes on the battlefield. More than 3,000 Google employees signed a petition against the project, which led to Google deciding not to renew its contract with the U.S. Department of Defense in 2019. In December 2018, Google workers launched an industry-wide effort focused on ending forced arbitration, which affects at least 60 million workers in the US alone. In June, Amazon employees demanded that Jeff Bezos stop selling Rekognition, Amazon's facial recognition technology, to law enforcement agencies and discontinue partnerships with companies that work with U.S. Immigration and Customs Enforcement (ICE).

We have also seen workers organizing campaigns demanding safer workplaces, free from sexual harassment and gender discrimination, along with better working conditions, retirement plans, professionalism standards, and fairness in equity compensation. In November, there was a massive Google Walkout, with 20,000 Google employees from all over the world protesting how Google handled sexual harassment cases. The backlash was triggered when it came to light that Google had paid millions of dollars in exit packages to male executives accused of sexual misconduct.

Let's look at some of the highlights from the discussion.

What do these issues, ranging from controversial contracts and workplace issues to better benefits and a safe, equitable workplace, have to do with one another?

Most companies today are motivated by the profits they make, which also shows in the technology they produce.
These technologies benefit a small fraction of users while affecting a larger, predictable demographic of people, for instance black and brown people. Meredith Whittaker remarked, "These companies are acting like parallel states right now." The technologies they produce have a significant impact on a number of domains that we are not even aware of. Liz Fong-Jones feels that it is also about us as tech workers taking responsibility for what we build. We feed the profit motive of these companies if we keep participating in building systems that can have bad implications for users, or if we fail to speak up for the workers working alongside us. To hold these companies accountable, to ensure that what we build is used for good, and to see that people are treated fairly, we all need to come together, no matter which part of the company we work in. Joan Greenbaum also believes that these types of movement cannot succeed without forming alliances.

Is there any alliance work between tech workers in different roles?

Emma Quail shared that there have been many collaborations between engineers, tech employees, cafeteria workers, and other service workers in the fights against companies treating their employees differently. These collaborations are important because tech workers and engineers are much more privileged in these companies. "They have more voice, their job is taken more seriously," said Emma Quail. Patricia Rosa, sharing her experience, said, "When some of the tech workers came to one of our negotiations and spoke on our behalf, the company got nervous, and they finally gave them the contract." Liz Fong-Jones mentioned that the main obstacle to eliminating this discrimination is that employers want to keep their workers separate. As an example, she added, "Google prohibits its cafeteria workers from being on campus when they are not on shift, it prohibits them from holding leadership positions and employee resource groups." Companies resort to these policies because they do not want their "valuable employees" to find out about the working conditions of other workers.

In the last few years, the tech worker movement caught society's attention in a big way, but this did not happen overnight. How did we get to this moment?

Liz Fong-Jones credits the Me Too movement as one of the turning points: it made workers realize that they are not alone and that there are people who share the same concerns. Another contributing factor, in her view, was management coming up with proposals that could have negative implications for people and asking employees to keep them secret. Now tech workers are more informed about what exactly they are building. In the last few years, tech companies have also come under public attention and scrutiny because of the many tech scandals, whether related to data, software, or workplace rights. One of the root causes was the requirement of endless growth. Meredith Whittaker shared, "Over the last few years, we saw a series of relentless, embarrassing answers to substantially serious questions. They cannot keep going like this."

What's in the future?

Joan Greenbaum rightly mentioned that tech companies should actually "look to work with people, what the industry calls users." They should adopt participatory design instead of user-centered design.
Participatory design is an approach in which all stakeholders, from employees and partners to local business owners and customers, are involved in the design process. Meredith Whittaker remarked, "The people who are getting harmed by these technologies are not the people who are going to get a paycheck from these companies. They are not going to check tech power or tech culture unless we learn how to know each other and form alliances that also connect corporate." Once we come together and form alliances, we will be able to press these companies on the updates and products they are building and understand their implications. The future, then, lies in doing our homework, knowing how these companies work, building relationships, and coming together against any unjust decisions by these companies. Liz Fong-Jones added, "The Google Walkout was just the beginning. The labor movement will spread into other companies and also have more visible effects beyond a walkout." Emma Quail believes that companies will need to address issues related to housing, immigration, and people's rights. Patricia Rosa shared that, going forward, we need to spread awareness among other workers that there are people who care about their rights and how they are treated at the workplace. If workers know there are people who will support them, they will not be scared to speak up, as Patricia herself was when she started her journey.

Some of the questions asked in the Q&A session were:

What's different politically about tech than any other industry?
How was the Google Walkout organized? I was a tech contractor and didn't hear about it until it happened.
Are there any possibilities of creating a single union of all tech workers, no matter what their roles are? Is that a desirable long-term goal?
How can tech workers in one state relate to workers internationally?

Watch the full discussion on Logic's Facebook page.

Related reading:
Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley
Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus
How far will Facebook go to fix what it broke: Democracy, Trust, Reality


Raspberry Pi v Arduino - which one's right for me?

Raka Mahesa
20 Sep 2017
5 min read
Okay, so you've decided to be a maker and you've created a little electronic project for yourself. Maybe an automatic garage door opener, or maybe a simple media server for your home theatre. As you make your way further into the DIY world, you realize that you need to decide on the hardware that will be the basis of your project. You've checked the Internet for help and found the two popular hardware choices for DIY projects: the Raspberry Pi and the Arduino.

Since you're just starting out, both choices seem to serve the same purpose: they can both run the program your project needs, and they both have a big community that can help you. So, which hardware should you choose? Before we can make that decision, we need to understand what each board actually is.

Let's start with the Raspberry Pi. To put it simply, the Raspberry Pi is a computer with a very, very small physical size. Despite its size, the Raspberry Pi is a full-fledged computer capable of running an operating system and executing various programs. By connecting the mini-computer to a screen via an HDMI cable, and to an input device like a keyboard or a mouse, people can use the Raspberry Pi just like any other computer out there. The latest version even has wireless connectivity built right into the device, making it very easy to connect the hardware to the Internet.

So, what about the Arduino? The Arduino is a microcontroller board: an integrated circuit with a computing chipset capable of running a simple program. If smart devices are run by computer processors, then "dumb devices" are run by microcontrollers. These dumb devices include things like TV remotes, air conditioners, calculators, and other simple devices.

Now that we have completed our crash course on both platforms, let's actually compare them, starting with the hardware. The Raspberry Pi is a full-blown computer, so it has most of the things you'd expect from a computer system: a quad-core ARM-based CPU running at 1,200 MHz, 1 GB of RAM, a microSD card slot for storage, four USB 2.0 ports, and even a GPU to drive the display output via an HDMI port. The Raspberry Pi is also equipped with a variety of modules that let the hardware easily connect to other devices, like cameras and touchscreens. Meanwhile, the Arduino is a simple microcontroller board. It has a processor running at 16 MHz, a built-in LED, and a bunch of digital and analog pins for interfacing with other devices. It also has a USB port that's used to upload a custom program to the board.

From the hardware specifications alone, we can see that the two are on totally different levels. The Raspberry Pi's CPU runs at 1,200 MHz, roughly similar to a low-end smartphone, whereas the Arduino's processor runs at only 16 MHz. This means an Arduino board is only capable of running a simple program, while a Raspberry Pi can handle much more complex ones.

So far it seems that the Raspberry Pi is a much better choice for DIY projects. But we all know that a smartphone is also much more limited and slower than a desktop computer, yet no one would say that smartphones are useless. To understand the strength of the Arduino, we need to look at and compare the software running on the hardware we're discussing. Since the Raspberry Pi is a computer, the device requires an operating system to function.
An operating system offers many benefits, like a built-in file system and multitasking, but it also has disadvantages: it needs to be booted up first, and programs require additional configuration to run automatically. An Arduino, on the other hand, runs its own firmware that executes a custom, user-uploaded program as soon as the device is turned on. The software on the Arduino is much more limited, but that also means using it is simple and straightforward.

This theme of simplicity versus complexity extends to software development for both platforms. Developing software for the Raspberry Pi is complex, just like developing any computer software. Meanwhile, the Arduino provides a development tool that allows you to quickly write a program on your desktop computer and easily upload it to the Arduino board via a USB cable.

So with all that said, which hardware platform is the right choice? Well, it depends on your project. If your project simply reads sensor data and processes it, then the simplicity of the Arduino will help the development of your project immensely. If your project involves a lot of tasks and processes, like uploading data to the Internet, sending you e-mails, or reading image data, then the power of the Raspberry Pi will help your project accomplish all of those tasks. And if you're just starting out and haven't really decided on your future project, I'd suggest you go with the Arduino. The simplicity and ease of use of an Arduino board make it a really great learning tool, where you can focus on making new things instead of making your things work together.

About the author: Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.


Introducing Intelligent Apps

Amarabha Banerjee
19 Oct 2017
6 min read
We have been a species obsessed with 'intelligence' since gaining consciousness. We have always been inventing ways to make our lives better through sheer imagination and the application of our intelligence. So it comes as no surprise that we want our modern-day creations to be smart as well, be it a web app or a mobile app. The first question that comes to mind, then, is: what makes an application 'intelligent'? A simple answer for budding developers is that intelligent apps are apps that can take intuitive decisions or provide customized recommendations and experiences to their users, based on insights drawn from data collected from their interactions with humans. This brings up a whole set of new questions: how can intelligent apps be implemented, what are the challenges, what are the primary application areas of these so-called intelligent apps, and so on. Let's start with the first question.

How can intelligence be infused into an app?

The answer has many layers, just like an app does. The monumental growth in data science and its underlying data infrastructure has allowed machines to process, segregate, and analyze huge volumes of data in limited time. Now, it looks set to enable machines to glean meaningful patterns and insights from that very same data. One interesting example is predicting user behavior patterns: what movies, food, or brands of clothing the user might be interested in, what songs they might like to listen to at different times of their day, and so on. These are, of course, on the simpler side of the spectrum of intelligent tasks that we would like our apps to perform. Many apps from Amazon, Google, Apple, and others are implementing and perfecting these tasks on a day-to-day basis.

Complex tasks are a series of simple tasks performed in an intelligent manner. One such complex task would be the ability to perform facial recognition and speech recognition, and then use them for relevant daily tasks, be it at home or in the workplace. This is where we enter the realm of science fiction: your mobile app recognises your voice command while you are driving back home and sends automated instructions to different home appliances, like your microwave, AC, and PC, so that your food is served hot when you reach home, your room is set at just the right temperature, and your PC has automatically opened the next project you would like to work on. All of this happens while you enter your home keys-free, thanks to facial recognition software that can map your face and identify you with more than 90% accuracy, even in low lighting conditions. APIs like IBM Watson, AT&T Speech, the Google Speech API, and the Microsoft Face API provide developers with tools to incorporate features such as these into their apps, to create smarter apps. It sounds almost magical! But is it that simple? This brings us to the next question.

What are some development challenges for an intelligent app?

The challenges are different for web and mobile apps.

Challenges for intelligent web apps

For web apps, choosing the right mix of algorithms and APIs that can turn your machine learning code into a working web app is the primary challenge. Plenty of web APIs, like IBM Watson and AT&T Speech, are available to do this. But not all APIs can perform all the complex tasks we discussed earlier. Suppose you want an app that successfully performs both voice and speech recognition, and then also performs reinforcement learning by learning from your interactions with it. You will have to use multiple APIs to achieve this.
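Each of these APIs is individually quite approachable. As a taste, here is a hedged sketch using the open source speech_recognition Python package as a stand-in for the cloud speech APIs named above (the file name is a placeholder; the cloud APIs follow a similar request/response pattern):

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("command.wav") as source:    # placeholder audio file
    audio = recognizer.record(source)          # read the entire file

try:
    # Send the audio to Google's free web speech endpoint for transcription.
    text = recognizer.recognize_google(audio)
    print("You said:", text)
except sr.UnknownValueError:
    print("Could not understand the audio")
except sr.RequestError as err:
    print("API request failed:", err)
```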
Their integration into a single app then becomes a key challenge. Here is why: every API has its own data transfer protocols and backend integration requirements and challenges. Thus, our backend requirements increase significantly, both in terms of data persistence and in terms of dynamic data availability and security. Also, the fact that each of these smart apps needs a customized user interface design poses a challenge to the front-end developer: the challenge of making a user interface so fluid and adaptive that it supports the different preferences of different smart apps. Clearly, putting together a smart web app is no child's play. That's why, perhaps, smart voice-controlled apps like Alexa are still merely working as assistants and providing only predefined solutions to you. Their ability to execute complex voice-based tasks and commands is fairly low, let alone any task that isn't based on a voice command.

Challenges for intelligent mobile apps

For intelligent mobile apps, the challenges are manifold. A key reason is network dependency for data transfer. Although the advent of 4G and 5G mobile networks has greatly improved mobile network speed, network availability and data transfer speeds still pose a major challenge, due to the high volumes of data that intelligent mobile apps require to perform efficiently. To circumvent this limitation, vendors like Google are trying to implement smarter APIs in the mobile device's local storage. But this approach requires a huge increase in the computational capabilities of mobile chips, something that's not currently available. Maybe that's why Google has also hinted at jumping into the chip manufacturing business if its computation needs are not met. Apart from these issues, running multiple intelligent apps at the same time would also require a significant increase in the battery life of mobile devices.

Finally, the last question.

What are some key applications of intelligent apps?

We explored some areas of application in the previous sections, keeping our focus on web and mobile apps. Broadly speaking, whatever makes our daily life easier is potentially an application area for intelligent apps. From controlling the AC temperature automatically, to operating the oven and microwave remotely, to using the vacuum cleaner (provided, of course, that the vacuum cleaner has robotic AI capabilities), to driving the car: everything falls within the domain of intelligent apps. The real questions for us are: What can we achieve with our modern computation resources and data handling capabilities? How can mobile computation capabilities and chip architecture be improved drastically, so that smart apps can perform complex tasks faster and ease our daily workflows? Only the future holds the answer. We are rooting for the day when we rise to become a smarter race, by delegating less important yet intelligent tasks to our smarter systems and by creating intelligent web and mobile apps efficiently and effectively. The culmination of these apps, along with hardware-driven AI systems, could eventually lead to independent smart systems: a topic we will explore in the coming days.


Keep your serverless AWS applications secure [Tutorial]

Savia Lobo
18 Jun 2018
11 min read
Handling security is an extensive and complex topic. If not done right, you open up your app to dangerous hacks and breaches; and even if everything is done right, it may still be hacked. So it's important that we understand the common security mechanisms that avoid exposing websites to vulnerabilities, and follow the recommended practices and methodologies that have been extensively tested and proven to be robust. In this tutorial, we will learn how to secure serverless applications using AWS. We will cover the security basics and then move on to handling authorization and authentication using AWS. This article is an excerpt from the book Building Serverless Web Applications, written by Diego Zanon.

Security basics in AWS

One of the mantras of security experts is this: don't roll your own. It means you should never use, in a production system, any kind of crypto algorithm or security model that you developed by yourself. Always use solutions that have been widely used, tested, and recommended by trusted sources. Even experienced people can make errors that expose a solution to attacks, especially in the cryptography field, which requires advanced math. However, when a proposed solution is analyzed and tested by a great number of specialists, errors are much less frequent.

In the security world, there is a term called security through obscurity. It is defined as a security model in which the implementation mechanism is not publicly known, so there is a belief that it is secure because no one has prior information about its flaws. It can indeed be secure, but if used as the only form of protection, it is considered a poor security practice. If a hacker is persistent enough, he or she can discover flaws even without knowing the internal code. In this case, again, it's better to use a highly tested algorithm than your own. Security through obscurity can be compared to someone trying to protect their own money by burying it in the backyard, when the common security mechanism would be to put the money in a bank. The money may be safe while buried, but only until someone finds out about its existence and starts to look for it. For this reason, when dealing with security, we usually prefer to use open source algorithms and tools: everyone can access and discover flaws in them, but there is also a great number of specialists involved in finding the vulnerabilities and fixing them. In this section, we will discuss other security concepts that everyone must know when building a system.

Information security

When dealing with security, there are several attributes that need to be considered. The most important ones are the following:

- Authentication: confirm the user's identity by validating that the user is who they claim to be
- Authorization: decide whether the user is allowed to execute the requested action
- Confidentiality: ensure that data can't be understood by third parties
- Integrity: protect the message against undetectable modifications
- Non-repudiation: ensure that someone can't deny the authenticity of their own message
- Availability: keep the system available when needed

These terms are explained in the following sections.

Authentication

Authentication is the ability to confirm the user's identity. It can be implemented with a login form that asks the user to type their username and password. If the hashed password matches what was previously saved in the database, you have enough proof that the user is who they claim to be.
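As a minimal sketch of that hashed-password check (our illustration, using only Python's standard library; production systems often prefer a dedicated library such as bcrypt):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt=None):
    salt = salt or os.urandom(16)                 # per-user random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest                           # store both in the database

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)    # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```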
This model is good enough, at least for typical applications: you confirm the identity by asking the user to provide what they know. Another kind of authentication is to ask the user to provide what they have, such as a physical device (like a dongle) or access to an e-mail account or phone number. However, you can't ask the user to type their credentials for every request. As long as you authenticate them in the first request, you must create a security token that will be used in the subsequent requests. This token is saved on the client side as a cookie and is automatically sent to the server in all requests. On AWS, this token can be created using the Cognito service; how this is done is described later in this chapter.

Authorization

When a request is received in the backend, we need to check whether the user is allowed to execute the requested action. For example, if the user wants to check out the order with ID 123, we need to query the database to identify the owner of the order and compare it with the requesting user. Another scenario is when we have multiple roles in an application and need to restrict data access. For example, a system developed to manage school grades might be implemented with two roles, student and teacher. Teachers access the system to insert or update grades, while students access the system to read those grades. In this case, the authorization system must restrict the insert and update actions to users in the teachers group, while users in the students group must be restricted to reading their own grades. Most of the time we handle authorization in our own backend, but some serverless services don't require a backend and are responsible by themselves for properly checking authorization. For example, in the next chapter we are going to see how serverless notifications are implemented on AWS: when we use AWS IoT and want a private channel of communication between two users, we must give them access to one specific resource known by both and restrict access by other users, to avoid the disclosure of private messages.
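To ground the order-ownership example, here is a hypothetical AWS Lambda handler in Python. The table name, field names, and event shape are illustrative assumptions, not the book's actual code:

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
orders = dynamodb.Table("Orders")  # assumed table name

def checkout_order(event, context):
    # User identity as set by an API Gateway Cognito authorizer (assumed event shape).
    user_id = event["requestContext"]["authorizer"]["claims"]["sub"]
    order_id = event["pathParameters"]["orderId"]

    item = orders.get_item(Key={"orderId": order_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "order not found"})}
    if item["ownerId"] != user_id:
        # The authorization check: only the owner may check out this order.
        return {"statusCode": 403, "body": json.dumps({"error": "forbidden"})}
    # ... proceed with the checkout ...
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```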
Confidentiality

Developing a website that uses HTTPS for all requests is the main way to achieve confidentiality in the communication between users and your site. As the data is encrypted, it's very hard for malicious users to decrypt and understand its contents. Although there are attacks that can intercept the communication and forge certificates (man-in-the-middle), those require the malicious user to have access to the machine or network of the victim. From our side, adding HTTPS support is the best thing that we can do to minimize the chance of attacks.

Integrity

Integrity is related to confidentiality. While confidentiality relies on encrypting a message to prevent other users from accessing its contents, integrity deals with protecting messages against modification, by signing messages digitally (with TLS certificates). Integrity is an important concept when designing low-level network systems, but all that matters for us is adding HTTPS support.

Non-repudiation

Non-repudiation is a term that is often confused with authentication, since both have the objective of proving who sent a message. The main difference is that authentication takes a technical view, while non-repudiation is concerned with legal terms, liability, and auditing. When you have a login form with username and password inputs, you can authenticate the user who correctly knows the combination, but you can't be 100% certain, since the credentials can be guessed or stolen by a third party. A stricter access mechanism, such as biometric entry, has more credibility. It is not perfect either; it's just a better non-repudiation mechanism.

Availability

Availability is also a concept of interest in the information security field, because availability is not just a matter of provisioning hardware to meet your users' needs: it can also suffer interruptions caused by malicious users. There are attacks, such as Distributed Denial of Service (DDoS), that aim to create bottlenecks that disrupt a site's availability. In a DDoS attack, the targeted website is flooded with superfluous requests with the objective of overloading the system, usually from a controlled network of infected machines called a botnet. On AWS, all services run under the AWS Shield service, which was designed to protect against DDoS attacks at no additional charge. However, if you run a very large and important service, you may be a direct target of advanced and large DDoS attacks. In this case, there is a premium tier of the AWS Shield service to ensure your website's availability even in worst-case scenarios. This requires an investment of US$ 3,000 per month; with it, you get 24x7 support from a dedicated team and access to other tools for the mitigation and analysis of DDoS attacks.

Security on AWS

We use AWS credentials, roles, and policies, but security on AWS is much more than handling the authentication and authorization of users. This is what we will discuss in this section.

Shared responsibility model

Security on AWS is based on a shared responsibility model: while Amazon is responsible for keeping the infrastructure safe, customers are responsible for patching security updates to software and protecting their own user accounts.

AWS's responsibilities include the following:

- Physical security of the hardware and facilities
- Infrastructure of networks, virtualization, and storage
- Availability of services respecting Service Level Agreements (SLAs)
- Security of managed services such as Lambda, RDS, DynamoDB, and others

A customer's responsibilities are as follows:

- Applying security patches to the operating system on EC2 machines
- Security of installed applications
- Avoiding disclosure of user credentials
- Correct configuration of access policies and roles
- Firewall configurations
- Network traffic protection (encrypting data to avoid disclosure of sensitive information)
- Encryption of server-side data and databases

In the serverless model, we rely only on managed services. In this case, we don't need to worry about applying security patches to the operating system or runtime, but we do need to worry about the third-party libraries that our application depends on to execute. And of course, we need to worry about everything that we configure ourselves (firewalls, user policies, and so on), the network traffic (supporting HTTPS), and how data is manipulated by the application.

The Trusted Advisor tool

AWS offers a tool named Trusted Advisor, which can be accessed through https://console.aws.amazon.com/trustedadvisor. It was created to offer help on how you can optimize costs or improve performance, but it also helps identify security breaches and common misconfigurations.
It searches for unrestricted access to specific ports on your EC2 machines, whether Multi-Factor Authentication is enabled on the root account, and whether IAM users have been created in your account. You need to pay for AWS premium support to unlock other features, such as cost optimization advice; security checks, however, are free.

Pen testing

A penetration test (or pen test) is a good practice that all big websites should perform periodically. Even if you have a good team of security experts, the usual recommendation is to hire a specialized third-party company to perform pen tests and find vulnerabilities, because they will most likely have tools and procedures that your own team has not tried yet. The caveat here is that you can't execute these tests without contacting AWS first. To respect their terms of use, you can only probe for breaches on your own account and assets, in scheduled time frames (so AWS can disable their intrusion detection systems for your assets), and only on restricted services, such as EC2 instances and RDS.

AWS CloudTrail

AWS CloudTrail is a service designed to record all AWS API calls executed on your account. The output of this service is a set of log files that register the API caller, the date/time, the source IP address of the caller, the request parameters, and the response elements that were returned. This kind of service is very important for security analysis in case of data breaches, and for systems that need auditing mechanisms for compliance standards.

MFA

Multi-Factor Authentication (MFA) is an extra security layer that everyone should add to their AWS root account to protect against unauthorized access. Besides knowing the username and password, a malicious user would also need physical access to your smartphone or security token, which greatly reduces the risk. On AWS, you can use MFA through the following means:

- Virtual devices: an application installed on Android, iPhone, or Windows phones
- Physical devices: six-digit tokens or OTP cards
- SMS: messages received on your phone

We have discussed the basic security concepts and how to apply them in a serverless project. If you've enjoyed reading this article, do check out Building Serverless Web Applications to implement signup, sign-in, and logout features using Amazon Cognito.

Related reading:
Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
Analyzing CloudTrail Logs using Amazon Elasticsearch
How to create your own AWS CloudTrail

What we learnt from IBM Research’s ‘5 in 5’ predictions presented at Think 2018

Amey Varangaonkar
16 Apr 2018
4 min read
IBM's mission has always been to innovate and, in the process, change the way the world operates. With this objective in mind, IBM Research started a conversation termed '5 in 5' back in 2012, giving their top 5 predictions every year on how technology will change the world; this year's predictions were presented at IBM Think 2018. These predictions usually drive their research and innovation, with the aim of eventually solving the underlying problems through efficient solutions. Here are the 5 predictions made by IBM Research for 2018:

1. More secure blockchain products: In order to avoid counterfeit blockchain products, the technology will be coupled with cryptographic solutions to develop decentralized solutions. Digital transactions are often subject to fraud, and securing them with crypto-anchors is seen as the way forward. Want to know how this can be achieved? You might want to check out IBM's blog on crypto-anchors and their real-world applications. If you are like me, you'd rather watch IBM researcher Andres Kind explain what crypto-anchors are in a fast-paced science slam session.

2. Sophisticated cyber attacks will continue to happen: Cyber attacks resulting in data leaks or the theft of confidential data are not news to us. The bigger worry, though, is that the current methodologies for preventing these attacks are not proving good enough. IBM predicts this is only going to get worse, with more advanced and sophisticated cyber attacks breaking into current secure systems with ease. IBM Research also predicted the rise of 'lattice cryptography', a new security mechanism offering a more sophisticated layer of protection for these systems. You can read more about lattice cryptography on IBM's official blog, or you can watch IBM researcher Cecilia Boschini explain lattice cryptography in 5 minutes in one of IBM's famous science slam sessions.

3. Artificial Intelligence-powered bots will help clean the oceans: Our marine ecosystem seems to be going from bad to worse, mainly due to the pollution and toxic waste being dumped into it. IBM predicts that AI-powered autonomous bots, deployed and controlled from the cloud, can help relieve this situation by monitoring water bodies for water quality and pollution levels. You can learn more about how these autonomous bots will help save the seas in this interesting talk by Tom Zimmerman.

4. Unbiased AI systems: Artificially designed systems are only as good as the data used to build them. This data may be impure, or may contain flaws or biases pertaining to color, race, gender, and so on. Going forward, new models will be designed that mitigate these biases and ensure more standard, bias-free predictions. With these models, certain human values and principles will be considered for effective decision-making. IBM researcher Francesca Rossi talks about bias in AI and the importance of building fair systems that help us make better decisions.

5. Quantum computing will go mainstream: IBM predicts that quantum computing will move out of research labs and gain mainstream adoption in the next 5 years. Problems considered difficult or unsolvable today, due to their sheer scale or complexity, could be tackled with the help of quantum computing. To know more, let IBM researcher Talia Gershon take you through the different aspects of quantum computing and why it is expected to be a massive hit.

Amazingly, most of the predictions from past years have turned out to be true.
For instance, IBM predicted the rise of computer vision technology in 2012, where computers would be able not only to process images but also to understand their 'features'. It remains to be seen how true this year's predictions turn out to be. However, considering the rate at which research in AI and other tech domains is progressing and being put to practical use, we won't be surprised if they all become reality soon. What do you think?


Python’s new asynchronous statements and expressions

Daniel Arbuckle
12 Oct 2015
5 min read
As part of Packt's Python Week, Daniel Arbuckle, author of our Mastering Python video, explains Python's journey from generators, to schedulers, to cooperative multithreading and beyond.

My romance with cooperative multithreading in Python began in December of 2001, with the release of Python version 2.2. That version of Python contained a fascinating new feature: generators. Fourteen years later, generators are old hat to Python programmers, but at the time, they represented a big conceptual improvement. While I was playing with generators, I noticed that they were in essence first-class objects that represented both the code and state of a function call. On top of that, the code could pause an execution and then later resume it. That meant that generators were practically coroutines!

I immediately set out to write a scheduler to execute generators as lightweight threads. I wasn't the only one! While the schedulers that I and others wrote worked, there were some significant limitations imposed on them by the language. For example, back then generators didn't have a send() method, so it was necessary to come up with some other way of getting data from one generator to another. My scheduler got set aside in favor of more productive projects.

Fortunately, that's not where the story ends. With Python 2.5, Guido van Rossum and Phillip J. Eby added the send() method to generators, turned yield into an expression (it had been a statement before), and made several other changes that made it easier and more practical to treat generators as coroutines and combine them into cooperatively scheduled threads. Python 3.3 added yield from expressions, which didn't make much of a difference to end users of cooperative coroutine scheduling, but made the internals of the schedulers dramatically simpler.

The next step in the story is Python 3.4, which included the asyncio coroutine scheduler and asynchronous I/O package in the Python standard library. Cooperative multithreading wasn't just a clever trick anymore. It was a tool in everyone's box.

All of which brings us to the present, and the recent release of Python 3.5, which includes an explicit coroutine type, distinct from generators; new async def, async for, and async with statements; and an await expression that takes the place of yield from for coroutines.

So, why does Python need explicit coroutines and new syntax, if generator-based coroutines had gotten good enough for inclusion in the standard library? The short answer is that generators are primarily for iteration, so using them for something else, no matter how well it works conceptually, introduces ambiguities. For example, if you hand a generator to Python's for loop, it's not going to treat it as a coroutine; it's going to treat it as an iterable.

There's another problem, related to Python's special protocol methods, such as __enter__ and __exit__, which are called by the code in the Python interpreter, leaving the programmer with no opportunity to yield from them. That meant that generator-based coroutines were not compatible with various important bits of Python syntax, such as the with statement. A coroutine couldn't be called from anything that was called by a special method, whether directly or indirectly, nor was it possible to wait on a future value. The new changes to Python are meant to address these problems.

So, what exactly are these changes? async def is used to define a coroutine.
Apart from the async keyword, the syntax is almost identical to a normal def statement. The big differences are, first, that coroutines can contain await, async for, and async with syntaxes, and, second, that they are not generators, so they're not allowed to contain yield or yield from expressions. It's impossible for a single function to be both a generator and a coroutine.

await is used to pause the current coroutine until the requested coroutine returns. In other words, an await expression is just like a function call, except that the called function can pause, allowing the scheduler to run some other thread for a while. If you try to use an await expression outside of an async def, Python will raise a SyntaxError.

async with is used to interface with an asynchronous context manager, which is just like a normal context manager except that instead of __enter__ and __exit__ methods, it has __aenter__ and __aexit__ coroutine methods. Because they're coroutines, these methods can do things like wait for data to come in over the network without blocking the whole program.

async for is used to get data from an asynchronous iterable. Asynchronous iterables have an __aiter__ method, which functions like the normal __iter__ method. __aiter__ should return an object with an __anext__ coroutine method, which can participate in coroutine scheduling and asynchronous I/O before returning the next iterated value, or raising StopAsyncIteration.

This being Python, all of these new features, and the convenience they represent, are 100% compatible with the existing asyncio scheduler. Further, as long as you use the @asyncio.coroutine decorator, your existing asyncio code is also forward compatible with these features without any overhead.
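Here is a minimal sketch of ours (not from the video) pulling these pieces together, runnable on Python 3.5.2 or later with the standard asyncio scheduler:

```python
import asyncio

class Ticker:
    """An asynchronous iterable: __aiter__/__anext__ drive an async for loop."""
    def __init__(self, n):
        self.n = n

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.n <= 0:
            raise StopAsyncIteration
        self.n -= 1
        await asyncio.sleep(0.1)   # pause here; the scheduler runs other coroutines
        return self.n

async def main():
    async for i in Ticker(3):      # consumes the asynchronous iterable
        print("tick", i)

loop = asyncio.get_event_loop()    # the Python 3.5-era idiom
loop.run_until_complete(main())
```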


How can cybersecurity keep up with the rapid pace of technological change?

Richard Gall
15 Jun 2016
6 min read
Security often gets forgotten as the technological world quenches its thirst for innovation. Many of the most exciting developments, for both consumers and businesses, have raced way ahead of security concerns. Trends such as IoT and wearables, for example, both present challenges for cybersecurity strategists and white hat hackers. As we bridge the gap between hardware and software, new challenges emerge. Arguably, as software becomes more and more embedded in everyday life, both literally and metaphorically, the lines between cybersecurity and traditional security and surveillance become much more blurred too.

Recent high-profile cases have put cybersecurity on the agenda, even if it remains a subject that's little understood. The celebrity iCloud hack was a reminder that even the largest systems, built by tech giants and used by hundreds of millions of people, can be attacked. In the UK in 2015, the broadband company TalkTalk suffered a massive security attack, with 157,000 customers' data put at risk. The fact that two teenage boys were later arrested for the incident only underlines that cybersecurity is a strange place: on the one hand, it demonstrates the disturbing power of a couple of kids with exceptional hacking skills; on the other, the vulnerability of entire systems and infrastructures.

Managing security in a world built on Open Source software and cloud

The paradoxes of cybersecurity mirror those of the wider software world. On the one hand, we've never felt more powerful: when it comes to technology, change feels inevitable and exhilarating. Yet the more powerful we feel, as both consumers and programmers, the more we are reminded that the tools that build awesome products and help us manage huge projects and infrastructures are largely built and maintained by communities. It is precisely because those tools of apparent progress seem democratic and open that they can be undone, used against the very things they build.

While Open Source may be one reason that cybersecurity has become more challenging and complex, it's worth also thinking about how we use software. The role of the cloud in managing software and delivering services, in particular, has had an impact. Our devices, our digital lives, are no longer intermittently connected to the 'world wide web' (a phrase that sounds somewhat dated today) but rather in continuous communication with 'the cloud': information and services are always available. Frictionless user experiences are great for users, but they're pretty useful for cybercriminals too.

We feel powerless when it comes to security. Yet this lack of power is paradoxically wedded to our contemporary experience of complete control and connectivity, and the ability to craft and cultivate our lives online exactly as we want. We want software built around our lifestyles with minimum friction, yet that often comes at a price. Consider, for example, what security looked like 10 years ago. The most essential step to being 'safe' online was to make sure your firewall was active and your antivirus was up to date. Today that can be difficult, as multiple devices access a range of networks even in the space of a single day (from airport Wi-Fi to mobile data). It's hard to keep up. The issue isn't just one for everyday consumers; it's also a problem for the cybersecurity teams developing the products we need. Security is all about stability, and this is antithetical to today's technological ethos. But what can we do to keep systems and infrastructure safe?
To keep our information and data secure?

How we learned to stop worrying and love hackers

But perhaps we've found a solution: the emerging phenomenon of the cybersecurity hackathon, in which security experts and friendly hackers are invited to test and expose vulnerabilities, finding ways in and around a huge range of software infrastructures. Perhaps the best recent example was the 'Hack the Pentagon' program, the U.S. Government's 'bug bounty', in which more than a thousand security experts (mercenary hackers, if you want to be impolite) uncovered hundreds of vulnerabilities in the Pentagon's software infrastructure. You can find similar events all around the world – organizations whose infrastructure is built on Open Source software are effectively open sourcing their own security capabilities. These sorts of events prove that developing security skills (pentesting in particular) can be invaluable, and are also a great way to learn more about how software works and how people have decided to build things.

It makes sense. Gone are the days when you could guarantee security with your software package, when you could rely on your contact at Oracle for support. Insofar as most attacks and security risks are external to a system or an organization, it makes sense to replicate those external threats when trying to identify vulnerabilities and protect yourself. It's also important to acknowledge that you can't cover everything internally. It's easy to be hubristic, but hubris is very often the first sign of weakness.

The way forward – UX, IA and security

But are hackathons enough? Should cybersecurity in fact be something to which we give greater consideration, as individuals and organizations? Instead of viewing security as a problem to consider at the end of a development process, an irritating inconvenience, by focusing on questions of accessibility and security as design and user experience issues we can begin to use software in a much smarter and hopefully safer way.

For individuals, that might mean thinking more carefully about your digital footprint and the data we allow organizations to access. (And yes, maybe we could manage our passwords a little better, but I really didn't want to include such trite advice…) For businesses and other institutions, it may mean aligning cybersecurity with UX and Information Architecture questions. While cybersecurity is definitely a software problem – one which anyone with an inclination towards code should learn more about – it's also much more than that. It's a design problem, an issue about where complex software systems interact with the real world in all its complexity and chaos.

Four 2018 Facebook patents to battle fake news and improve news feed

Sugandha Lahoti
18 Aug 2018
7 min read
The past few months saw Facebook struggling to maintain its integrity given the number of fake news and data scandals linked to it – Alex Jones, accusations of discriminatory advertising, and more. Not to mention, Facebook's stock fell $120 billion in market value after its Q2 2018 earnings call. Amidst these allegations of providing fake news and allowing discriminatory content on its news feed, Facebook patented its news feed filter tool last week to provide more relevant news to its users. In the past too, Facebook has filed several interesting patents to enhance its news feed algorithm in order to curb fake news. This made us look into what other recent patents Facebook has been granted around news feeds and fake news.

Facebook's News Feed has always been one of its signature features. The news feed is generated algorithmically (instead of chronologically), with a mix of status updates, page updates, and app updates that Facebook believes are interesting and relevant to you. Facebook officially patented its News Feed in 2012, after filing for it in 2006. The patent gave the company a stronghold on the ability to let users see not only status messages, pictures, and links to videos of online friends, but also the actions those friends take.

[box type="shadow" align="" class="" width=""]Note: According to the United States Patent and Trademark Office (USPTO), a patent is an exclusive right to an invention and "the right to exclude others from making, using, offering for sale, or selling the invention in the United States or 'importing' the invention into the United States".[/box]

Here are four Facebook patents in 2018 pertaining to news feeds that we found interesting.

Dynamically providing a feed of stories

Date of Patent: April 10, 2018
Filed: December 10, 2015

Features: Facebook filed this patent to present its news feed in a more dynamic manner, suited to a particular person. Facebook's news feed automatically generates a display that contains information relevant to a user about another user. This patent is titled Dynamically providing a feed of stories about a user of a social networking system. As per the patent application, social networking websites have developed systems for tailoring connections between various users; typically, however, these news items are disparate and disorganized. The proposed method generates news items regarding activities associated with a user. It attaches an informational link associated with at least one of the activities to at least one of the news items. The method limits access to the news items to a predetermined set of viewers and assigns an order to the news items.

Source: USPTO

This patent is a viable solution to limit access to news items which a particular section of users may find obscene. For instance, Facebook users below the age of 18 may be restricted from viewing graphic content. The patent received criticism, with people ridiculing it for seeming to go against everything that the patent system is supposed to do. They say that such automatically generated news feeds are found in all sorts of systems and social networks these days. But now Facebook may have the right to prevent others from doing what other social networks are inherently supposed to do.

Generating a feed of content items from multiple sources

Date of Patent: July 3, 2018
Filed: June 6, 2014

Features: Facebook filed a patent allowing a feed of content items associated with a topic to be generated from multiple content sources.
Per the Facebook patent, the newsfeed generation system receives content items from one or more content sources. It matches the content items to topics based on a measure of each content item's affinity for one or more objects. These objects form a database that is associated with various topics. The feed associated with the topic is communicated to a user, allowing the user to readily identify content items associated with the topic.

Source: USPTO

Consider the example of sports. A sports database will contain an ontology defining relationships between objects such as teams, athletes, and coaches. The news feed for a particular user interested in sports (an athlete, a coach, or a player) will cover all content items associated with sports.

Selecting organic content and advertisements based on user engagement

Date of Patent: July 3, 2018
Filed: June 6, 2014

Features: Facebook wants to dynamically adjust the organic content items and advertisements presented to a user by modifying a ranking. Partial engagement scores are generated for organic content items based on an expected amount of user interaction with each organic content item. Advertisement scores are generated based on expected user interaction and the bid amounts associated with each advertisement. These advertisement and partial engagement scores are then used to determine two separate engagement scores measuring the user's estimated interaction with a content feed: one engagement score for organic content items with advertisements, and one without them. A difference between these two scores modifies a conversion factor used to combine expected user interaction and bid amounts to generate advertisement scores. This mechanism has been patented by Facebook as Selecting organic content and advertisements for presentation to social networking system users based on user engagement.

For example, if a large number of advertisements are presented to a user, the user may become frustrated with the increased difficulty in viewing stories and interact less with the social networking system. However, advertisements also generate additional revenue for the social networking system, so a balance is necessary. If the engagement score is greater than the additional engagement score by at least a threshold amount, the conversion factor is modified (e.g., decreased) to increase the number of organic content items included in the feed. If the engagement score is greater than the additional engagement score but by less than the threshold amount, the conversion factor is modified (e.g., increased) to decrease the number of organic content items included in the feed.

Source: USPTO
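To make the balancing mechanism easier to follow, here is a rough sketch in Python of the feedback loop the patent describes. This is our own illustrative reading of the patent text, not Facebook's implementation; the function names, threshold, and step size are all hypothetical.

    # Illustrative sketch of the patent's feedback loop -- not Facebook's
    # actual code. Names, threshold, and step size are hypothetical.

    def advertisement_score(expected_interaction, bid, conversion_factor):
        # The conversion factor combines expected user interaction with
        # the advertiser's bid to produce an ad's ranking score.
        return expected_interaction + conversion_factor * bid

    def update_conversion_factor(engagement_score, additional_engagement_score,
                                 conversion_factor, threshold=0.2, step=0.05):
        # Mirrors the conditional described in the patent text.
        gap = engagement_score - additional_engagement_score
        if gap >= threshold:
            # Large gap: decrease the factor, so the feed includes
            # more organic items and fewer ads.
            return conversion_factor - step
        elif gap > 0:
            # Small gap: ads are tolerable, so increase the factor
            # and include fewer organic items.
            return conversion_factor + step
        return conversion_factor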
Displaying news ticker content in a social networking system

Date of Patent: January 9, 2018
Filed: February 10, 2016

Features: Facebook has also patented Displaying news ticker content in a social networking system. This patent describes a system that displays stories about a user's friends in a news ticker, as those friends perform actions. The system monitors, in real time, actions associated with users connected to the target user. The news ticker is updated so that stories including the identified actions and the associated connected users are displayed within a news ticker interface. The news ticker interface may be a dedicated portion of the website's interface, for example a column next to the newsfeed. Additional information related to a selected story may be displayed in a separate interface.

Source: USPTO

For example, a user may select a story displayed in the news ticker; let's say movies. In response, additional information associated with movies (such as actors, directors, and songs) may be displayed in an additional interface. The additional information can also depend on the movies liked by the friends of the target user.

These patents speak volumes about how Facebook is trying to repair its image and amend its news feed algorithms to curb fake and biased news. The dynamic algorithm may restrict content, the news ticker content and multiple-source extraction will keep the feed relevant, and the balance between organic content and advertisements could persuade users to stay on the site. As such, there are no details currently on when or if these features will hit the Facebook feed, but once implemented they could bring Zuckerberg's vision of "bringing the world close together" closer to reality.

Read Next

Four IBM facial recognition patents in 2018, we found intriguing
Facebook patents its news feed filter tool to provide more relevant news to its users
Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics

4 Ways You Can Use Machine Learning for Enterprise Security

Kartikey Pandey
29 Aug 2017
6 min read
Cyber threats continue to cost companies money and reputation. Yet security seems to be undervalued, or maybe it's just misunderstood. With a series of large-scale cyberattacks and the menace of ransomware – earlier WannaCry and now Petya – continuing to affect millions globally, it's time you reimagined how your organization stays ahead of the game when it comes to software security. Fortunately, machine learning can help support a more robust, reliable, and efficient security initiative. Here are just four ways machine learning can support your software security strategy.

Revamp your company's endpoint protection with machine learning

We have seen in the past how a single gap in endpoint protection resulted in serious data breaches. In May this year, Mexican fast-food giant Chipotle learned the hard way when cybercriminals exploited the company's point-of-sale systems to steal credit card information. The Chipotle incident was a very real reminder for many retailers to patch critical endpoints on a regular basis. It is crucial to guard your company's endpoints, which are virtual front doors to your organization's precious information. Your cybersecurity strategy must consider a holistic endpoint protection strategy to secure against a variety of threats, both known and unknown.

Traditional endpoint security approaches are proving to be ineffective and are costing businesses millions in terms of poor detection and wasted time. The changing landscape of the cybersecurity market brings with it its own set of unique challenges (Palo Alto Networks have highlighted some of these challenges in their whitepaper). Sophisticated machine learning techniques can help fight back against threats that aren't easy to defend against in traditional ways. One could achieve this by adopting any of three ML approaches: supervised machine learning, unsupervised machine learning, and reinforcement learning.

Establishing the right machine learning approach entails a significant understanding of your expectations from the endpoint protection product. You might consider checking the speed, accuracy, and efficiency of a machine learning based endpoint protection solution with the vendor to make an informed choice. We recommend the use of a supervised machine learning approach for endpoint protection, as it's a proven way of detecting malware and delivers accurate results. The only catch is that these algorithms require relevant data in sufficient quantity to work on, and the training rounds need to be speedy and effective to guarantee efficient malware detection. Some of the popular ML-based endpoint protection options on the market are Symantec Endpoint Protection 14, CrowdStrike, and Trend Micro's XGen.
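To make the supervised approach concrete, below is a minimal sketch of training a malware classifier. It assumes you have already extracted labeled numeric feature vectors from files (in practice the hard part); the random data, feature count, and model choice here are purely illustrative assumptions.

    # Minimal sketch of supervised malware detection. Assumes labeled
    # feature vectors (e.g., API call counts, section entropy, header
    # fields) already extracted from files. The data below is random
    # noise for illustration only -- a real pipeline needs real samples.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    X = rng.random((1000, 20))           # 1,000 samples, 20 features each
    y = rng.integers(0, 2, size=1000)    # 0 = benign, 1 = malicious

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)

    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X_train, y_train)

    # On real data, precision/recall here tell you how the detector
    # trades false alarms against missed malware.
    print(classification_report(y_test, clf.predict(X_test)))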
Use machine learning techniques to predict security threats based on historical data

Predictive analytics is no longer restricted to data science. By adopting predictive analytics, you can take a proactive approach to cybersecurity too. Predictive analytics makes it possible not only to identify infections and threats after they have caused damage, but also to raise an alarm about future incidents or attacks. Predictive analytics is a crucial part of the learning process for the system. With sophisticated detection techniques, the system can monitor network activities and report real-time data.

One incredibly effective technique organizations are now beginning to use is a combination of advanced predictive analytics with a red team approach. This enables organizations to think like the enemy and model a broad range of threats. The process mines and captures large sets of data, which are then processed. The real value here is the ability to generate meaningful insights out of the large data set collected and then let the red team work on processing and identifying potential threats. The organization then uses this to evaluate its capabilities, prepare for future threats, and mitigate potential risks.

Harness the power of behavior analytics to detect security intrusions

Behavior analytics is a top trending area in the cybersecurity space today. Traditional systems such as antiviruses are skilled at identifying attacks based on historical data and signature matching. Behavior analytics, on the other hand, detects anomalies by making a judgement against what would be considered normal behaviour. As such, behavior analytics in enterprises is proving very effective at detecting intrusions that otherwise evade firewalls or antivirus software. It complements existing security measures such as firewalls and antivirus software rather than replacing them.

Behavior analytics works well within private clouds and infrastructures and is able to detect threats within internal networks. One popular example is the Enterprise Immune System, by the vendor Darktrace, which uses machine learning to detect abnormal behavior in the system. It helps IT staff narrow down their perimeter of search and look out for specific security events through a visual console. What's really promising is that because Darktrace uses machine learning, the system is not just learning from events within internal systems, but from events happening globally as well.

Use machine learning to close down IoT vulnerabilities

Trying to manage the large amounts of data and logs generated by millions of IoT devices manually could be overwhelming if your company relies on the Internet of Things. Many a time, IoT devices are directly connected to the network, which makes it fairly easy for attackers and hackers to take advantage of inadequately protected networks. It could therefore be next to impossible to build a secure IoT system if you set out to identify and fix vulnerabilities manually.

Machine learning can help you analyze and make sense of the millions of data logs generated by IoT-capable devices. Machine learning powered cybersecurity systems placed directly inside your system can learn about security events as they happen. They can then monitor both incoming and outgoing IoT traffic in devices connected to the network and generate profiles for appropriate and inappropriate behavior inside your IoT ecosystem. This way the security system is able to react to even the slightest of irregularities and detect anomalies that were not experienced before. Currently, only a handful of tools use machine learning or artificial intelligence for IoT security, but we are already seeing development on this front by major security vendors such as Symantec. Surveys on IoT continue to highlight security as a major barrier to adoption, and we are hopeful that machine learning will come to the rescue.
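As a minimal sketch of the kind of anomaly detection that both behavior analytics and IoT traffic monitoring rely on, the following trains an isolation forest on per-device traffic features. The feature choices, synthetic data, and contamination rate are illustrative assumptions, not a production configuration.

    # Minimal sketch of unsupervised anomaly detection on device traffic.
    # Hypothetical per-device features: [bytes out, connections per minute,
    # distinct destination IPs], already scaled. The data is synthetic.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(7)
    normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))   # typical traffic
    unusual = rng.normal(loc=5.0, scale=1.0, size=(5, 3))    # outliers
    traffic = np.vstack([normal, unusual])

    detector = IsolationForest(contamination=0.01, random_state=7)
    labels = detector.fit_predict(traffic)  # -1 = anomaly, 1 = normal

    print("Flagged device windows:", np.where(labels == -1)[0])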
Cyber crimes are evolving at breakneck speed while businesses remain slow in adapting their IT security strategies to keep up with the times. Machine learning can help businesses make that leap to proactively address cyber threats and attacks by:

- Intelligently revamping your company's endpoint protection
- Investing in machine learning techniques that predict threats based on historical data
- Harnessing the power of behavior analytics to detect intrusions
- Using machine learning to close down IoT vulnerabilities

And that's just the beginning. Have you used machine learning in your organization to enhance cybersecurity? Share your best practices and tips for using machine learning in cybersecurity in the comments below!

Why Android Will Rule Our Future

Sam Wood
11 Mar 2016
5 min read
We've been joking for years that in the future we'll be ruled by Android overlords – we just didn't think it would be an operating system. In 2015, Android is predicted to have shipped over one billion devices – a share of the mobile market equating to almost 80%. In our 2015 Skill Up survey, we also discovered that Android developers were by far the highest paid of mobile application developers. Android dominates our present – so why is it likely to be vital to the world of tomorrow too?

IoT Will Run On Android (Probably)

Ask any group of developers what the Next Big Thing will be, and I bet you more than one of them will say the Internet of Things. In 2015, Google announced Android stepping into the ring of IoT operating systems when it showed us Brillo. Based on the Android kernel but 'scrubbed down with a Brillo pad', Brillo offers the possibility of a Google-backed cohesive platform for IoT – something potentially vital to a tech innovation increasingly marred by small companies attempting to blaze their own trails in different directions. If IoT needs to be standardized, what better solution than Android, the operating system that's already the go-to choice for open source mobile devices? We've already got smart fridges running on Android, smart cars running on Android, and tons of smartwatches running on Android – the rest of the Internet of Things is likely just around the corner.

Android is Colonizing Desktop

Microsoft is still the king of desktop, and Windows isn't going anywhere any time soon. However, its attempts to enter the mobile space have been miserable at best – a 2.8% share of the mobile market in 2015. What has been more successful is the idea of hybridizing the desktop and the mobile, in particular with the successful line of Surface laptops-cum-tablets. But is the reverse likely to happen? Just as we're seeing Android move from being a mobile OS to being used for IoT, we're also seeing the rise of the idea of Android on the desktop.

Remix OS for PC, created by former Google developers, promises an "Android for PC" experience. Google's own experiments in desktop are currently all based on Chrome OS – which is growing fast in its market share, particularly in the education and student sectors. I'm an enthusiastic Chromebook owner and user, and when it falls short of meeting the full requirements of a major desktop OS, I'll often turn to my Android device to bridge the gap. According to the Wall Street Journal, Google may be thinking something similar and is considering folding Chrome OS and Android into one product. Consider the general praise that Microsoft received for Windows 10 Mobile, and the successful unification of their platforms under a single OS. It's easy to imagine the integration of Google's mobile and desktop projects into a similar single user experience – and that this hybrid Android would make a serious impact in the marketplace.

For Apple, the Only Way Is Down

Apple has banked on being the market in luxury for its mobile devices – and that might spell its doom. The pool of new buyers in the smartphone market is shrinking, and those late adopters are more likely to be price-conscious and enamored with the cheaper options available on Android. (After all, if your grandmother still complains about how much milk costs these days, is she really going to want to shell out $650 for an iPhone?)
If Apple wants a bigger share of the market, it's going to need to consider a 'budget option' – and as any brand consultant will tell you, nothing damages the image of luxury like the idea that there's a 'cheap version'. Apple is aware of this, and has historically insisted it will never happen. But in 2015, we saw the number of people switching from Android to iOS fall from 13% to 11%. Larger still, the share of first-time smartphone buyers contributing to Apple's overall sales went from 20% to 11% over the same period. Those are worrying figures – especially when it also looks like more people switched from iOS to Android than switched from Android to iOS.

Apple may be a little damned-if-it-does, damned-if-it-doesn't in the face of Android. You can get a lot for your money if you're willing to buy something that doesn't carry an Apple logo. It's easy to see Android's many producers creating high-powered luxury devices; it's harder to see Apple succeeding by doing the opposite. And are we really ever going to see something like the iFridge?

Android's Strength is its Ubiquity

Central to Android's success in the future is its ubiquity. In just six years, it's gone from being a new and experimental venture to over a billion downloads, used across almost every kind of smart device out there. As an open source OS, the possibilities of Android are only going to get wider. When Androids rule our future, it may be on far more than just our phones.

Dive into developing for Android all this week with our exclusive Android Week deals! Get 50% off selected titles, or build your own bundle of any five promoted Android books for only $50.

5 reasons why your business should adopt cloud computing

Vijin Boricha
11 Jun 2018
6 min read
Businesses are shifting their focus to using existing technology to accomplish their 2018 business targets. Although cloud services have been around for a while, many organizations hesitated to make the move. But recent enhancements in cost-effectiveness, portability, agility, and connectivity have grabbed the attention of organizations new and not so famous. So, if your organization is looking for ways to achieve greater heights and is exploring healthy investments, your first choice should be cloud computing, as the on-premises server system is fading away.

You don't need any proof to agree that cloud computing is playing a vital role in changing the way businesses work today. Organizations have started looking at cloud options to widen their businesses' reach (read revenue, growth, sales) and to run more efficiently (read cost savings, bottom line, ROI). There are three major cloud options that growing businesses can look at:

- Public Cloud
- Private Cloud
- Hybrid Cloud

A Gartner report states that by the year 2020, big vendors will shift from cloud-first to cloud-only policies. If you are wondering what could fuel this predicted rise in cloud adoption, look no further. Below are some factors contributing to this trend of businesses adopting cloud computing.

Cloud offers increased flexibility

One of the most beneficial aspects of adopting cloud computing is its flexibility, no matter the size of the organization or the location of your employees. Cloud computing comes with a wide range of options, from modifying storage space to supporting both in-office and remote employees. This makes it easy for businesses to increase and decrease server loads, and gives employees the benefit of working from anywhere, at any time, with zero timezone restrictions. Cloud computing services, in a way, help businesses focus on revenue growth rather than spending time and resources on building hardware and software capabilities.

Cloud computing is cost effective

Cloud-backed businesses definitely benefit on cost, as there is no need to maintain expensive in-house servers and other expensive devices, given that everything is handled on the cloud. If you want your business to grow, you just need to spend on storage space and pay for the services you use. Cost transparency helps organizations plan their expenditure, and pay-per-use is one of the biggest advantages businesses can leverage. With cloud adoption, you eliminate spending on increasing processing power, hard drive space, or building a large data center. When there are fewer hardware facilities to manage, you do not need a large IT team to handle them. Software licensing costs are also eliminated, as the software is already stored on the cloud and businesses have the option of paying as per their use.

Scalability is easier with cloud

The best part about cloud computing is its support for unpredictable requirements, which helps businesses scale up or down quickly and efficiently. It's all about modifying your subscription plan, which allows you to upgrade your storage or bandwidth as your business needs change. This kind of scalability helps increase business performance and minimizes up-front investment and the risk of operational and maintenance issues.

Better availability means less downtime and better productivity

With cloud adoption you need not worry about downtime, as providers are reliable and maintain close to 100% uptime.
This means whatever you own on the cloud is available to your customers at any point. For every server breakdown, cloud service providers make sure a backup server is in place to avoid missing out on essential data. This can barely be achieved by traditional on-premises infrastructure, which is another reason businesses should switch to cloud.

All of these mechanisms make it easy to share files and documents with teammates, thanks to flexible accessibility. Teams can collaborate more effectively when they can access documents anytime and anywhere. This obviously improves workflow and gives businesses a competitive edge. Being present in the office is no longer a requirement for productivity; a better work/life balance is an added side effect of such an arrangement. In short, you need not worry about operational disasters, and you can get the job done without physically being present in the office.

Automated Backups

One major problem with an on-premises data center is that everything depends on the functioning of your physical systems. In cases where you lose your device, or some kind of disaster befalls your physical system, it may lead to loss of data as well. This is never the case with cloud, as you can access your files and documents from any device or location. Organizations otherwise have to bear a massive expense for regular backups, whereas cloud computing comes with automatic backups and provides enterprise-grade functionality to businesses of all sizes.

If you're thinking about data security, cloud is a safer option, as each of the cloud computing variants (private, public, and hybrid) has its own set of benefits. If you are not dealing with sensitive data, choosing public cloud would be the best option, whereas for sensitive data, businesses should opt for private cloud, where they have total control of the security policies. On the other hand, hybrid cloud allows you to benefit from both worlds. So, if you are looking for scalable solutions along with a much more controlled architecture for data security, a hybrid cloud architecture will blend well with your business needs. It allows users to pick and choose the public or private cloud services they require to fulfill their business requirements.

Migrating your business to cloud definitely has more advantages than disadvantages. It helps increase organizational efficiency and fuels business growth. Cloud computing helps reduce time-to-market, facilitates product development, keeps employees happy, and builds a desired workflow. This in the end helps your organization achieve greater success. It doesn't hurt that the cost you save is available for you to invest in areas that are in dire need of some cash inflow!

Read Next:

What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
Serverless computing wars: AWS Lambdas vs Azure Functions
How machine learning as a service is transforming cloud

How to beat Cyber Interference in an Election process

Guest Contributor
05 Sep 2018
6 min read
The battle for political influence and power is transcending all boundaries and borders. There are many interests at stake, and some parties, organizations, and groups are willing to pull out the "big guns" in order to get what they want. "Hacktivists" are gaining steam and prominence these days. However, governmental surveillance and even criminal (or, at the very least, morally questionable) activity can happen too, and when it does, the scandal rises to the most relevant headlines in the world's most influential papers.

That was the case in the United States' presidential election of 2016 and in France's most recent process. Speaking of the former, Congress and federal investigators revealed horrifying details about Russian espionage activity in the heat of the battle between Democrat Hillary Clinton and Republican Donald Trump, who ended up taking the honors. As for the latter, the French had better luck in their quest to prevent the Russians from wreaking havoc in the digital world. In fact, it wasn't luck: it was due diligence, a sense of responsibility, and a clever way of using past experiences (such as what happened to the Americans) to learn and adjust.

Russia's objective was to influence the outcome of the process by publishing top secret and compromising conversations between high-ranking officials. In their attempt to interfere with the American elections, they managed to get into networks and systems controlled by the state to publish fake news, buy Facebook ads, and employ bots to spread the fake news pieces.

How to stop cyber interference during elections

Everything should start with awareness about how to avoid hacking attacks, as well as smoother communication and integration between security layers. Since the foundation of it all is the law, each country needs to continually make upgrades to have all systems ready to avoid and fight cyber interference in elections and in all facets of life. Diplomatic relationships need to establish just how far a nation state can go in defending its sovereignty against such crimes.

Pundits and experts in the matter state that until the system is hacking-proof and reliable, every state needs to gather and count hand votes as a backup to digital votes. Regarding this, some advocates recently told Congress that the United States should implement paper ballots, prepared to provide physical evidence of every vote, effectively replacing the unreliable and vulnerable machines currently used. According to J. Alex Halderman, a computer science professor, such ballots might look "low tech" to the average eye, but they represent a "reliable and cost-effective defense."

Paying due attention to every detail

Government authorities need to pay better attention to propaganda (especially Russian propaganda), because it may reveal patterns in the nation's intentions. By now, we all know what the Russians are capable of, and figuring out their intentions would go a long way toward helping the country prepare for future attacks. The American government may also require Russian media and social platforms to register under FARA, the Foreign Agents Registration Act. That way, there will be a more reliable database of who is a foreign agent of influence. One of the most critical corrective measures to be taken in the future is prohibiting the purchase of advertising that directly influences the outcome of certain processes and elections.
Handing out diplomatic sanctions just isn't enough

Lately, the US Congress, with President Trump's approval, has been handing out sanctions to people involved in the 2016 cyber attack. However, a far more effective measure would be enhancing cyber defense, because it offers immediate detection of threats and is well equipped to bring network intrusions to an end. According to economist Thomas Schelling, the fear of the consequences of any given situation can be a powerful motivator, but it can be difficult to deter individuals or organizations that can't be easily tracked and identified, and that act behind irrational national ideologies and political goals. Instead, adopting cyber defense can stop an intrusion in time and offer more efficient punishments.

Active defense is legally viable and a very capable solution because it can disrupt the perpetrators' outside networks. Enabling the "hack back" approach can allow countries to take justice into their own hands in case of any cyber attack attempt. The next step would be working on lowering the threshold required to enable this kind of response.

Cyber defense is the way to go

Cyber defense measures are versatile and have proven effectiveness. Take the example of France: in the most recent elections, French intelligence watched Russian cyber activity for the duration of Emmanuel Macron's election campaign. Some strategies include letting the hackers steal fake files and documents, misleading them and making them waste their time. Cyber defense can also embed beacons that disclose the attackers' current location, or mess with their networks. There is even the possibility of erasing stolen information. In the case of France, cyber defense specialists were one step ahead of the Russians: they created false email accounts and introduced numerous fake documents and files that discouraged the Russians.

Known systems, networks, and platforms

The automated capabilities of cyber defense can trump any malicious attempt or digital threat. For example, the LightCyber Magna platform can process large amounts of information. Such a system may have been able to stop Russian hackers from installing malware at the DNC (Democratic National Committee). Another cyber defense tool, Palo Alto Networks Traps, is known to block malware as strong as the WannaCry ransomware attack, which encrypted more than 200,000 computers in some 150 countries. Numerous people lost their data or had to pay thousands of dollars to recover it.

VPN: an efficient cybersecurity tool

Another perfectly usable cyber defense tool is the Virtual Private Network. VPNs such as Surfshark can encrypt all traffic shared online and mask the user's IP address, effectively providing anonymous browsing as well as privacy. Cyber defense isn't a luxury that only a handful of countries can afford: it is a necessity, and a tool that helps combat cyber interference not only in elections but in every facet of life and international relationships.

Author Bio

Harold is a cybersecurity consultant and a freelance blogger. He's currently working on a cybersecurity campaign to raise awareness around the threats that businesses can face online.

Top 5 cybersecurity myths debunked
Skepticism welcomes Germany's DARPA-like cybersecurity agency – The federal agency tasked with creating cutting-edge defense technology
How cybersecurity can help us secure cyberspace

Relations In Backbone

Andrew Burgess
05 Jan 2015
7 min read
In this Backbone tutorial from Andrew Burgess, author of Backbone.js Blueprints, we'll be looking at two key extensions that you can use when working with models and collections. As you will see, both give more rigidity to what is an incredibly flexible framework. This can be extremely valuable when you want a reliable way to perform certain front-end web development tasks.

Relations In Backbone

Backbone is an extremely flexible front-end framework. It is very unopinionated, as frameworks go, and can be bent to build anything you want. However, for some things you want your framework to be a little more opinionated and give you a reliable way to perform some operation. One of those things is relating models and collections. Sure, a collection is a group of models; but what if you want to relate them the other way: what if you want a model to have a "child" collection? You could roll your own implementation of model associations, or you could use one of the Backbone extension libraries made expressly for this purpose. In this article, we'll look at two extension options: Backbone Associations and Backbone Relational.

Backbone Associations

We'll use the example of an employer with a collection of employees. In plain Backbone, you might have an Employer model and an Employees collection, but how can we relate an Employer instance to an Employees instance? For starters, let's create the Employee model:

    var Employee = Backbone.Model.extend({});

Notice that we are extending the regular Backbone.Model, not a special "class" that Backbone Associations gives us. However, we'll use a special class next:

    var Employer = Backbone.AssociatedModel.extend({
        relations: [{
            type: Backbone.Many,
            key: 'employees',
            relatedModel: Employee
        }]
    });

The Employer will be an extension of the Backbone.AssociatedModel class. We give it a special property: relations. It's an array, because a model can have multiple associations, but we'll just give it one for now. There are several properties that we could give a relation object, but only three are required. The first is type: it must be either Backbone.Many (if we are creating a 1:N relation) or Backbone.One (if we are creating a 1:1 relation). The second required parameter is key, which is the name of the property that will appear as the collection on the model instance. Finally, we have relatedModel, which is a reference to the model class.

Now, we can create Employee instances:

    var john = new Employee({ name: "John" }),
        jane = new Employee({ name: "Jane" }),
        paul = new Employee({ name: "Paul" }),
        kate = new Employee({ name: "Kate" });

Then, we can create an Employer instance and relate the Employee instances to it:

    var boss = new Employer({
        name: 'Winston',
        employees: [john, jane, paul, kate]
    });

Notice that we've used the special relation key name employees. Even though we've assigned a regular array, it will be converted to a full Backbone.Collection object behind the scenes. This is great, because now you can use collection-specific functions, like this:

    boss.get('employees').pluck('name'); // ['John', 'Jane', 'Paul', 'Kate']

We can even use some special syntax with the get method to get properties on our nested models:

    boss.get('employees[0].name'); // 'John'
    boss.get('employees[3].name'); // 'Kate'

Unfortunately, relations with Backbone Associations are a one-way thing: there's no relation going from Employee back to Employer. We can, of course, set an attribute on our model instances:

    john.set({ employer: boss });

But there's nothing special about this.
We can make it an association, however, if we change our Employee from a regular Backbone.Model class to a Backbone.AssociatedModel class:

    var Employee = Backbone.AssociatedModel.extend({
        relations: [{
            type: Backbone.One,
            key: 'employer',
            relatedModel: 'Employer'
        }]
    });

Two things are different about this relation. First, it's a one-to-one (1:1) relation, so we use the type Backbone.One. Second, we make relatedModel a string instead of a reference to the Employer object; this is necessary because Employer comes after Employee in our code, and hence can't be referenced directly at this point; the model class will look it up later, when it's needed.

Now, we'll still have to set our employer attribute on Employee models, like so:

    john.set({ employer: boss });

The difference now is that we can use the features Backbone Associations provides us with, like nested getting:

    john.get('employer.name'); // Winston

One more thing: I mentioned that the array attribute with the special key name becomes a collection object. By default, it's a generic Backbone.Collection. However, if you want to make your own collection class with some special features, you can add a collectionType property to the relation:

    var EmployeeList = Backbone.Collection.extend({
        model: Employee
    });

    var Employer = Backbone.AssociatedModel.extend({
        relations: [{
            type: Backbone.Many,
            collectionType: EmployeeList,
            key: 'employees',
            relatedModel: Employee
        }]
    });

Backbone Relational

Backbone Relational has very similar syntax to Backbone Associations; however, I prefer this library because it makes our relations two-way affairs from the beginning. First, both models are required to extend Backbone.RelationalModel:

    var Employee = Backbone.RelationalModel.extend({});

    var EmployeeList = Backbone.Collection.extend({
        model: Employee
    });

    var Employer = Backbone.RelationalModel.extend({
        relations: [{
            type: Backbone.HasMany,
            key: 'employees',
            relatedModel: Employee,
            collectionType: EmployeeList,
            reverseRelation: {
                key: 'employer'
            }
        }]
    });

Notice that our Employer class has a relations attribute. The key, relatedModel, and type properties are required and perform the same duties as their Backbone Associations counterparts. Optionally, the collectionType property is also available. The big difference with Backbone Relational – and the reason I prefer it – is the reverseRelation property: we can use this to make the relationship act two ways. We're giving it a single property here, a key: this value will be the attribute given to model instances on the other side of the relationship. In this case, it means that Employee model instances will have an employer attribute. We can see this in action if we create our employer and employees, just as we did before:

    var john = new Employee({ name: "John" }),
        jane = new Employee({ name: "Jane" }),
        paul = new Employee({ name: "Paul" }),
        kate = new Employee({ name: "Kate" });

    var boss = new Employer({
        name: 'Winston',
        employees: [john, jane, paul, kate]
    });

And now, we have a two-way relationship. Based on the above code, this part should be obvious:

    boss.get('employees').pluck('name'); // ['John', 'Jane', 'Paul', 'Kate']

But we can also do this:

    john.get('employer').get('name'); // Winston

Even though we never gave the john instance an employer attribute, Backbone Relational did it for us, because we gave an object on one side of the relationship knowledge of the connection.
We could have done it the other way as well, by giving the employees an employer:

    var boss = new Employer({ name: 'Winston' });

    var john = new Employee({ name: "John", employer: boss }),
        jane = new Employee({ name: "Jane", employer: boss }),
        paul = new Employee({ name: "Paul", employer: boss }),
        kate = new Employee({ name: "Kate", employer: boss });

    boss.get('employees').pluck('name'); // ['John', 'Jane', 'Paul', 'Kate']

It's this immediate two-way connection that makes me prefer Backbone Relational. But both libraries have other great features, so check them out in full before making a decision:

Backbone Associations
Backbone Relational

I've found that the best way to get better at using Backbone is to really understand what's going on behind the curtain, to get a feel for how Backbone "thinks." With this in mind, I wrote Backbone.js Blueprints. As you build seven different web applications, you'll learn how to use and abuse Backbone to the max.

About the Author

Andrew Burgess is primarily a JavaScript developer, but he dabbles in as many languages as he can find. He's written several eBooks on Git, JavaScript, and PHP, as well as Backbone.js Blueprints, published by Packt Publishing. He's also a web development instructor at Tuts+, where he produces videos on everything from JavaScript to the command line.