
How-To Tutorials


Highlights from Jack Dorsey’s live interview by Kara Swisher on Twitter: on lack of diversity, tech responsibility, physical safety and more

Natasha Mathur
14 Feb 2019
7 min read
Kara Swisher, Recode co-founder, interviewed Jack Dorsey, Twitter CEO, yesterday over Twitter. The interview (or 'Twitterview') was conducted in tweets using the hashtag #KaraJack. It started at 5 pm ET and lasted around 90 minutes. Let's take a look at the top highlights from the interview.

https://twitter.com/karaswisher/status/1095440667373899776

On fixing what is broken on social media, and physical safety

Swisher asked Dorsey why he isn't moving faster in his efforts to fix the disaster that has been caused so far on social media. Dorsey replied that Twitter was trying to do "too much" in the past, but that they have become better at prioritizing now. The number one focus for them now is a person's "physical safety", i.e. the offline ramifications of the platform for Twitter users: "What people do offline with what they see online", says Dorsey. Some examples of 'offline ramifications' are "doxxing" (a harassment technique that reveals a person's personal information on the internet) and coordinated harassment campaigns. Dorsey further added that replies, searches, trends, and mentions on Twitter are where most of the abuse happens, and are the shared spaces people take advantage of.

"We need to put our physical safety above all else. We don't have all the answers just yet. But that's the focus. I think it clarifies a lot of the work we need to do. Not all of it of course", said Dorsey.

On tech responsibility and improving the health of digital conversation on Twitter

When Swisher asked Dorsey what grade he would give Silicon Valley and himself for embodying tech responsibility, he gave himself a "C". He said that Twitter has made progress, but it's scattered and 'not felt enough'. He did not comment on what he thought of Silicon Valley's work in this area.

Swisher further highlighted that the goal of improving Twitter conversations has so far remained empty talk. She asked Dorsey if Twitter has made any actual progress in the last 18-24 months when it comes to addressing the issues around the "health of conversation" (which eventually plays into safety). Dorsey said these issues are the most important thing for Twitter to fix right now, and that it's a failure on Twitter's part to 'put the burden on victims'. He did not share a specific example of improvements made to the platform to further this goal.

When Swisher questioned him on how he intends to fix the issue, Dorsey mentioned that Twitter intends to be more proactive when it comes to enforcing healthy conversations, so that reporting and blocking become the last resort. Twitter takes action against all offenders who go against its policies, but the system currently works reactively, responding to whoever reports the abuse: "If they don't report, we don't see it. Doesn't scale. Hence the need to focus on proactive", said Dorsey. Since Twitter is constantly evolving its policies to address the 'current issues', it is rooting them in fundamental human rights (UN) and making physical safety the top priority, alongside privacy.

On lack of diversity

https://twitter.com/jack/status/1095459084785004544

Swisher questioned Dorsey on his negligence in addressing these issues: "I think it is because many of the people who made Twitter never ever felt unsafe," she adds. Dorsey admits that the "lack of diversity" didn't help with empathy for what people (especially women) experience on Twitter every day.
He further adds that Twitter should be reflective of the people that it's trying to serve, which is why it established a trust and safety council to get feedback. Swisher then asks him to provide three concrete examples of what Twitter has done to fix this. Dorsey mentioned that Twitter has: evolved its policies (e.g. the misgendering policy); prioritized proactive enforcement, using machine learning to downrank bad actors by looking at the probability of abuse from any one account (someone who is abusive on one account is probably doing the same on other accounts); given users more control in the product, such as the muting of accounts with no profile picture; and put more focus on coordinated behavior and gaming of the platform.

On Dorsey's dual CEO role

Swisher asked him why he insists on being the CEO of two publicly traded companies (Twitter and Square Inc.) that both require maximum effort at the same time. Dorsey said that his main focus is on building leadership in both and that it's not his ambition to be CEO of multiple companies "just for the sake of that". She further asked whether he has any plans to hire someone as his "number 2". Dorsey said it's better to spread that kind of responsibility across several people, as it reduces dependencies and gives the company more options for future leadership. "I'm doing everything I can to help both. Effort doesn't come down to one person. It's a team", he said.

On Twitter breaks, Donald Trump, and Elon Musk

When asked how he feels about people coming away from Twitter feeling worse after spending time on it, Dorsey said he feels "terrible" and that it's depressing.

https://twitter.com/jack/status/1095457041844334593

"We made something with one intent. The world showed us how it wanted to use it. A lot has been great. A lot has been unexpected. A lot has been negative. We weren't fast enough to observe, learn, and improve", said Dorsey. He further added that he does not feel good about how Twitter tends to incentivize outrage, fast takes, short-term thinking, echo chambers, and fragmented conversations.

Swisher then questioned Dorsey on whether Twitter has ever considered suspending Donald Trump, and whether Twitter's business and engagement would suffer once Trump is no longer president. Dorsey replied that Twitter is independent of any account or person, and that although the number of political conversations on Twitter has increased, that's just one experience. He further added that Twitter is ready for the 2020 elections and that it has partnered with government agencies to improve communication around threats.

https://twitter.com/jack/status/1095462610462433280

On being asked who he finds the most exciting and influential person on Twitter, Dorsey named Elon Musk, saying he likes how Elon is focused on solving existential problems and sharing his thinking openly. On being asked what he thought of how Alexandria Ocasio-Cortez is using Twitter, he replied that she is 'mastering the medium'.

Although Swisher managed to interview Dorsey over Twitter, the 'Twitterview' soon got quite confusing and went out of order. The conversations seemed all over the place and, as Kurt Wagner, tech journalist at Recode, puts it, "in order to find a permanent thread of the chat, you had to visit one of either Kara or Jack's pages and continually refresh". This made for a difficult experience overall and points to the current flaws in Twitter's conversation system.
Many users tweeted their opinions about the experience:

https://twitter.com/RTKumaraSwamy/status/1095542363890446336
https://twitter.com/waltmossberg/status/1095454665305739264
https://twitter.com/kayvz/status/1095472789870436352
https://twitter.com/sukienniko/status/1095520835861864448
https://twitter.com/LauraGaviriaH/status/1095641232058011648

Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company
Twitter CEO, Jack Dorsey slammed by users after a photo of him holding 'smash Brahminical patriarchy' poster went viral
Jack Dorsey discusses the rumored 'edit tweet' button and tells users to stop caring about followers


Implementing a non-blocking cross-service communication with WebClient [Tutorial]

Amrata Joshi
13 Feb 2019
10 min read
The WebClient is the reactive replacement for the old RestTemplate. However, in WebClient, we have a functional API that fits better with the reactive approach and offers built-in mapping to Project Reactor types such as Flux or Mono.

This article is an excerpt taken from the book Hands-On Reactive Programming in Spring 5 written by Oleh Dokuka and Igor Lozynskyi. This book covers the difference between a reactive system and reactive programming, the basics of reactive programming in Spring 5, and much more. In this article, you will understand the basics of non-blocking cross-service communication with WebClient, the reactive WebSocket API, the server-side WebSocket API, and much more.

WebClient.create("http://localhost/api")    // (1)
    .get()                                  // (2)
    .uri("/users/{id}", userId)             // (3)
    .retrieve()                             // (4)
    .bodyToMono(User.class)                 // (5)
    .map(...)                               // (6)
    .subscribe();

In the preceding example, we create a WebClient instance using a factory method called create, shown at point (1). Here, the create method allows us to specify the base URI, which is used internally for all future HTTP calls. Then, in order to start building a call to a remote server, we may execute one of the WebClient methods that corresponds to an HTTP method. In the previous example, we used WebClient#get, shown at point (2).

Once we call the WebClient#get method, we operate on the request builder instance and can specify the relative path in the uri method, shown at point (3). In addition to the relative path, we can specify headers, cookies, and a request body. However, for simplicity, we have omitted those settings in this case and moved on to composing the request by calling the retrieve or exchange methods. In this example, we use the retrieve method, shown at point (4). This option is useful when we are only interested in retrieving the body and performing further processing.

Once the request is set up, we may use one of the methods that help us with the conversion of the response body. Here, we use the bodyToMono method, which converts the incoming User payload into a Mono, shown at point (5). Finally, we can build the processing flow of the incoming response using the Reactor API, and execute the remote call by calling the subscribe method.

WebClient follows the behavior described in the Reactive Streams specification. This means that only by calling the subscribe method will WebClient wire the connection and start sending the data to the remote server. Even though body processing is the most common way of handling a response, there are some cases where we need to process the response status, headers, or cookies.
For example, let's build a call to our password checking service and process the response status in a custom way using the WebClient API:

class DefaultPasswordVerificationService                         // (1)
        implements PasswordVerificationService {

    final WebClient webClient;                                   // (2)

    public DefaultPasswordVerificationService(
        WebClient.Builder webClientBuilder
    ) {
        this.webClient = webClientBuilder                        // (2.1)
            .baseUrl("http://localhost:8080")
            .build();
    }

    @Override                                                    // (3)
    public Mono<Void> check(String raw, String encoded) {
        return webClient
            .post()                                              // (3.1)
            .uri("/check")
            .body(BodyInserters.fromPublisher(                   // (3.2)
                Mono.just(new PasswordDTO(raw, encoded)),
                PasswordDTO.class
            ))
            .exchange()                                          // (3.3)
            .flatMap(response -> {                               // (3.4)
                if (response.statusCode().is2xxSuccessful()) {   // (3.5)
                    return Mono.empty();
                }
                else if (response.statusCode() == EXPECTATION_FAILED) {
                    return Mono.error(                           // (3.6)
                        new BadCredentialsException(...)
                    );
                }
                return Mono.error(new IllegalStateException());
            });
    }
}

The following numbered list describes the preceding code sample:

1. This is the implementation of the PasswordVerificationService interface.
2. This is the initialization of the WebClient instance. It is important to note that we use a WebClient instance per class here, so we do not have to initialize a new one on each execution of the check method. Such a technique reduces the need to initialize a new instance of WebClient and decreases the method's execution time. However, the default implementation of WebClient uses the Reactor-Netty HttpClient, which in its default configuration shares a common pool of resources among all the HttpClient instances. Hence, the creation of a new HttpClient instance does not cost that much. Once the constructor of DefaultPasswordVerificationService is called, we start initializing webClient and use a fluent builder, shown at point (2.1), in order to set up the client.
3. This is the implementation of the check method. Here, we use the webClient instance in order to execute a post request, shown at point (3.1). In addition, we send the body, using the body method, and prepare to insert it using the BodyInserters#fromPublisher factory method, shown at point (3.2). We then execute the exchange method at point (3.3), which returns Mono<ClientResponse>. We may, therefore, process the response using the flatMap operator, shown at point (3.4). If the password is verified successfully, as shown at point (3.5), the check method returns Mono.empty. Alternatively, in the case of an EXPECTATION_FAILED (417) status code, we may return a Mono of BadCredentialsException, as shown at point (3.6).

As we can see from the previous example, in a case where it is necessary to process the status code, headers, cookies, and other internals of the common HTTP response, the most appropriate method is the exchange method, which returns ClientResponse. As mentioned, DefaultWebClient uses the Reactor-Netty HttpClient in order to provide asynchronous and non-blocking interaction with the remote server. However, DefaultWebClient is designed to be able to change the underlying HTTP client easily. For that purpose, there is a low-level reactive abstraction around the HTTP connection, called org.springframework.http.client.reactive.ClientHttpConnector. By default, DefaultWebClient is preconfigured to use ReactorClientHttpConnector, which is an implementation of the ClientHttpConnector interface.
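To make the connector wiring concrete, here is a minimal, hypothetical sketch of building a WebClient through its builder and passing a ClientHttpConnector explicitly; the factory class, method name, and base URL are illustrative assumptions, and the connector shown is simply the default Reactor Netty one made explicit:

import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;

public class WebClientFactory {

    // Builds a WebClient and states the (default) Reactor Netty connector explicitly.
    // Swapping the HTTP engine later only means passing a different ClientHttpConnector here.
    public static WebClient passwordServiceClient() {
        return WebClient.builder()
            .baseUrl("http://localhost:8080")                    // assumed base URL
            .clientConnector(new ReactorClientHttpConnector())   // explicit default connector
            .build();
    }
}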
Starting from Spring WebFlux 5.1, there is a JettyClientHttpConnector implementation, which uses the reactive HttpClient from Jetty. In order to change the underlying HTTP client engine, we may use the WebClient.Builder#clientConnector method, as in the sketch above, and pass the desired instance, which might be either a custom implementation or an existing one. In addition to being a useful abstraction layer, ClientHttpConnector may be used in a raw format. For example, it may be used for downloading large files, on-the-fly processing, or just simple byte scanning. We will not go into details about ClientHttpConnector; we will leave this for curious readers to look into themselves.

Reactive WebSocket API

We have now covered most of the new features of the new WebFlux module. However, one of the crucial parts of the modern web is a streaming interaction model, where both the client and server can stream messages to each other. In this section, we will look at one of the most well-known protocols for duplex client-server communication, called WebSocket.

Despite the fact that communication over the WebSocket protocol was introduced in the Spring Framework in early 2013 and designed for asynchronous message sending, the actual implementation still has some blocking operations. For instance, both writing data to I/O and reading data from I/O are still blocking operations, and both therefore impact the application's performance. Therefore, the WebFlux module has introduced an improved version of the infrastructure for WebSocket. WebFlux offers both client and server infrastructure. We are going to start by analyzing the server-side WebSocket support and will then cover the client-side possibilities.

Server-side WebSocket API

WebFlux offers WebSocketHandler as the central interface for handling WebSocket connections. This interface has a method called handle, which accepts WebSocketSession. The WebSocketSession class represents a successful handshake between the client and server and provides access to information, including information about the handshake, session attributes, and the incoming stream of data. In order to learn how to deal with this information, let's consider the following example of responding to the sender with echo messages:

class EchoWebSocketHandler implements WebSocketHandler {    // (1)
    @Override
    public Mono<Void> handle(WebSocketSession session) {    // (2)
        return session                                      // (3)
            .receive()                                      // (4)
            .map(WebSocketMessage::getPayloadAsText)        // (5)
            .map(tm -> "Echo: " + tm)                       // (6)
            .map(session::textMessage)                      // (7)
            .as(session::send);                             // (8)
    }
}

As we can see from the previous example, the new WebSocket API is built on top of the reactive types from Project Reactor. Here, at point (1), we provide an implementation of the WebSocketHandler interface and override the handle method at point (2). Then, we use the WebSocketSession#receive method at point (3) in order to build the processing flow of the incoming WebSocketMessage using the Flux API, shown at point (4). WebSocketMessage is a wrapper around DataBuffer and provides additional functionality, such as translating the payload represented in bytes to text, shown at point (5). Once the incoming message is extracted, we prepend the "Echo: " prefix to that text, shown at point (6), wrap the new text message in a WebSocketMessage, and send it back to the client using the WebSocketSession#send method. Here, the send method accepts Publisher<WebSocketMessage> and returns Mono<Void> as the result.
Therefore, using the as operator from the Reactor API, we may treat the Flux as a Mono<Void> and use session::send as a transformation function.

Apart from the WebSocketHandler interface implementation, setting up the server-side WebSocket API requires configuring additional HandlerMapping and WebSocketHandlerAdapter instances. Consider the following code as an example of such a configuration:

@Configuration                                          // (1)
public class WebSocketConfiguration {

    @Bean                                               // (2)
    public HandlerMapping handlerMapping() {
        SimpleUrlHandlerMapping mapping =
            new SimpleUrlHandlerMapping();              // (2.1)
        mapping.setUrlMap(Collections.singletonMap(     // (2.2)
            "/ws/echo",
            new EchoWebSocketHandler()
        ));
        mapping.setOrder(-1);                           // (2.3)
        return mapping;
    }

    @Bean                                               // (3)
    public HandlerAdapter handlerAdapter() {
        return new WebSocketHandlerAdapter();
    }
}

The preceding example can be described as follows:

1. This is the class that is annotated with @Configuration.
2. Here, we have the declaration and setup of the HandlerMapping bean. At point (2.1), we create SimpleUrlHandlerMapping, which allows us to set up path-based mapping, shown at point (2.2), to WebSocketHandler. In order for SimpleUrlHandlerMapping to be handled prior to other HandlerMapping instances, it should have a higher priority, which is what setOrder at point (2.3) takes care of.
3. This is the declaration of the HandlerAdapter bean, which is WebSocketHandlerAdapter. Here, WebSocketHandlerAdapter plays the most important role, since it upgrades the HTTP connection to a WebSocket one and then calls the WebSocketHandler#handle method.

Client-side WebSocket API

Unlike the WebSocket module (which is based on WebMVC), WebFlux provides us with client-side support too. In order to send a WebSocket connection request, we have the WebSocketClient class. WebSocketClient has two central methods to execute WebSocket connections, as shown in the following code sample:

public interface WebSocketClient {

    Mono<Void> execute(
        URI url,
        WebSocketHandler handler
    );

    Mono<Void> execute(
        URI url,
        HttpHeaders headers,
        WebSocketHandler handler
    );
}

As we can see, WebSocketClient uses the same WebSocketHandler interface in order to process messages from the server and send messages back. There are a few WebSocketClient implementations that are related to the server engine, such as the TomcatWebSocketClient implementation or the JettyWebSocketClient implementation. In the following example, we will look at ReactorNettyWebSocketClient:

WebSocketClient client = new ReactorNettyWebSocketClient();

client.execute(
    URI.create("http://localhost:8080/ws/echo"),
    session -> Flux
        .interval(Duration.ofMillis(100))
        .map(String::valueOf)
        .map(session::textMessage)
        .as(session::send)
);

The preceding example shows how we can use ReactorNettyWebSocketClient to wire a WebSocket connection and start sending periodic messages to the server.

To summarize, we learned the basics of non-blocking cross-service communication with WebClient, the reactive WebSocket API, the server-side WebSocket API, and much more. To know more about reactive systems and reactive programming, check out the book Hands-On Reactive Programming in Spring 5, written by Oleh Dokuka and Igor Lozynskyi.

Getting started with React Hooks by building a counter with useState and useEffect
Implementing Dependency Injection in Swift [Tutorial]
Reactive programming in Swift with RxSwift and RxCocoa [Tutorial]


Getting started with React Hooks by building a counter with useState and useEffect

Guest Contributor
12 Feb 2019
7 min read
React 16 added waves of new features, improving the way we build web applications. The most impactful update is the new Hooks feature in version 16.8. Hooks allow us to write functional React components that manage state and side effects, making our code cleaner and making it easy to share functionality. React is not removing class components, but they cause many problems and are a detriment to upcoming code optimizations. The vision for Hooks is that all new components will be written using the API, resulting in more scalable web applications with better code. This tutorial will walk you through Hooks step by step and teach the core hook functionality by building a counter app.

An overview of hooks

Hooks provide the ability to manage state and side effects in functional components while also providing a simple interface to control the component lifecycle. The four core hooks provided by React are useState, useEffect, useReducer, and useContext:

useState replaces the need for this.state used in class components.
useEffect manages the side effects of the app, taking over the roles of the componentDidMount, componentDidUpdate, and componentWillUnmount lifecycle methods.
useContext allows us to subscribe to the React context.
useReducer is similar to useState but allows for more complex state updates.

The two main hook functions that you will use are useState and useEffect, which manage the standard React state and lifecycle. useReducer is used to manage more complex state, and useContext is a hook to pass values from the global React context to a component. With the core specification updating frequently, it's essential to find good tutorials to learn React.

You can also build your own custom hooks, which can contain the primitive hooks exposed by React. You are able to extract component state into reusable functions that can be accessed by any component. Higher-order components and render props have traditionally been the way to share functionality, but these methods can lead to a bloated component tree with a confusing glob of nested React elements. Hooks offer a straightforward way to DRY out your code by simply importing the custom hook function into your component.

Building the counter with hooks

To build our counter, we will use Create React App to bootstrap the application. You can install the package globally or use npx from the command line:

npx create-react-app react-hooks-counter
cd react-hooks-counter

React Hooks is a brand-new feature, so ensure you have v16.8.x installed. Inside your package.json, the versions of react and react-dom should be 16.8.0 or higher; if not, update them and reinstall with yarn.

The foundation of hooks is that they are utilized inside functional components. To start, let's convert the boilerplate file inside src/App.js to a functional component and remove the content. At the top of the file, we can import useState and useEffect from React:

import React, { useState, useEffect } from 'react';

The most straightforward hook is useState, since its purpose is to maintain a single value, so let's begin there. The function takes an initial value and returns an array, with the item at the 0 index containing the state value, and the item at the 1 index containing a function to update the value. We will initialize our count to 0 and name the returned variables count and setCount:

const [count, setCount] = useState(0);

NOTE: The returned value of useState is an array.
To simplify the syntax, we use array destructuring to extract the elements at the 0 and 1 index. Inside our rendered React component, we will display the count and provide a button to increment it by 1 using setCount. With a single function, we have eliminated the need for a class component along with this.state and this.setState to manage our data. Every time you click the increment button, the count will increase by 1. Since we are using a hook, React recognizes this change in state and re-renders the DOM with the updated value.

To demonstrate the extensibility of the state updates, we will add buttons to increment the count by 2, 5, and 10 as well. We will also DRY out our code by storing these values in an array. We iterate over this array using the .map() function, which returns an array of React components that React treats as sibling elements in the DOM. You are now able to increment the count by different values.

Now we will integrate the useEffect hook. This hook enables you to manage side effects and handle asynchronous events. The most notable and frequently used side effect is an API call. We will mimic the async nature of an API call using a setTimeout function. We will make a fake API request on the component's mount that will initialize our count to a random integer between 1 and 10 after waiting 1 second. We will also have an additional useEffect that updates the document title (a side effect) with the current count, to show how it responds to a change in state.

The useEffect hook takes a function as an argument. useEffect replaces the componentDidMount, componentDidUpdate, and componentWillUnmount class methods. When the component mounts or its state updates, React will execute the callback function. If your callback function returns a function itself, React will execute this during componentWillUnmount.

First, let's create our effect to update the document title. Inside the body of our function, we declare useEffect, which sets document.title = 'Count = ' + count in the callback. When the state count updates, you should see your tab title updating simultaneously.

For the final step, we will create a mock API call that returns an integer to update the state count. We use setTimeout and a function that returns a Promise, because this simulates the time required to wait for an API request to return and the associated return value of a promise, which allows us to handle the response asynchronously. To mock an API, we create a mockApi function above our component. It returns a promise that resolves to a random integer between 1 and 10.

A common pattern is to make fetch requests in componentDidMount. To reproduce this in our functional component, we will add another useState to manage a hasFetched variable: const [hasFetched, setFetch] = useState(false). This is used to prevent the mockApi from being executed on subsequent updates. Our fetch hook will be an async function, so we will use async/await to handle the result. Inside our useEffect function, we first check whether the fetch has already happened; if it has not, we call mockApi, call setCount with the result to initialize our value, and then flip the hasFetched flag to true.

Visual indicators are essential for UX and provide feedback on the status of the application. Since we are waiting for an initial count value, we want to hide our buttons and display "Loading…" text on the screen while hasFetched is false.
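Putting all of the pieces together, here is one possible sketch of the finished src/App.js. It is a reconstruction of the steps described above rather than the article's verbatim code, and the markup and the mockApi helper are illustrative:

import React, { useState, useEffect } from 'react';

// Simulate an API call that resolves to a random integer between 1 and 10 after 1 second.
const mockApi = () =>
  new Promise(resolve =>
    setTimeout(() => resolve(Math.floor(Math.random() * 10) + 1), 1000)
  );

function App() {
  const [count, setCount] = useState(0);
  const [hasFetched, setFetch] = useState(false);

  // Side effect: keep the document title in sync with the count.
  useEffect(() => {
    document.title = 'Count = ' + count;
  });

  // Side effect: fetch the initial count once, mimicking componentDidMount.
  useEffect(() => {
    const fetchInitialCount = async () => {
      if (!hasFetched) {
        const initialCount = await mockApi();
        setCount(initialCount);
        setFetch(true);
      }
    };
    fetchInitialCount();
  });

  // Hide the buttons and show a loading indicator until the mock API has responded.
  if (!hasFetched) {
    return <p>Loading...</p>;
  }

  return (
    <div>
      <p>Count: {count}</p>
      {[1, 2, 5, 10].map(value => (
        <button key={value} onClick={() => setCount(count + value)}>
          Increment by {value}
        </button>
      ))}
    </div>
  );
}

export default App;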
With that in place, the counter shows the loading text until the mock API resolves, then displays the count and the increment buttons, while the document title stays in sync with the state.

Wrapping Up

This article introduced hooks and showed how to implement useState and useEffect to simplify your class components into simple functional components. While this is a big win for React developers, the power of hooks is fully realized with the ability to combine them to create custom hooks. This allows you to extract logic and build modular functionality that can seamlessly be shared among React components without the overhead of HOCs or render props. You simply import your custom hook function, and any component can implement it. The only caveat is that all hook functions must follow the rules of hooks.

Author Bio

Trey Huffine is a JavaScript fanatic and a software engineer in Silicon Valley building products using React, Node, and Go. He is passionate about making the world a better place through code.

Reactive programming in Swift with RxSwift and RxCocoa [Tutorial]
React 16.8 releases with the stable implementation of Hooks
PrimeReact 3.0.0 is now out with Babylon create-react-app template


5 blog posts that could make you a better Python programmer

Sam Wood
11 Feb 2019
2 min read
Python is one of the most important languages to master. It's top rated, fast growing, and in demand by businesses around the globe. There's a host of excellent insight across the web about how to become a better programmer with Python. Here are five blogs we think you need to read to upgrade your skills and knowledge.

1. A Brief History of Python

Did you know Python is actually older than Java, R, and JavaScript? If you want to be a better Python programmer, it pays to know your history. This quick blog post takes you through the language's journey from Christmas hobby project to its modern ascendancy with version 3.

2. Do you write Python Code or Pythonic Code?

Are you writing code in Python, or code for Python? When people talk about Pythonic code, they mean code that uses Python idioms well, that is natural, and that displays fluency in the language. Are you writing code like you would write Java or C++? This 4-minute blog post gives quick tips on how to make your code Pythonic.

3. The Singleton Python Design Pattern in Depth

The singleton pattern is a powerful design pattern that ensures only one instance of a class is ever created. You'd generally use it for things like a logging class and its subclasses, managing a connection to a database, or read-only singletons that store some global state. This in-depth blog post takes you through the three principal ways to implement singletons, for better Python code (there is also a short sketch of the idea right after this list).

4. Why is Python so good for artificial intelligence and machine learning? 5 Experts Explain.

Python is the breakout language of data, zooming ahead of rival R to be dominant in the field of artificial intelligence and machine learning. But what is it about the programming language that makes it so well suited to this fast-growing field? In this blog post, five artificial intelligence experts weigh in on what they think makes Python perfect for AI and machine learning.

5. Top 7 Python Programming Books You Need To Read

That's right - we put a list in our list. But if you really want to become a better Python programmer, you'll want to get to grips with this stack of amazing Python books. Whether you're a complete beginner or more experienced, these seven Python titles are the perfect way to upgrade your knowledge.
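As a quick taste of point 3, here is a minimal sketch, not taken from the linked post, of a singleton implemented by overriding __new__; the Logger class is purely illustrative:

class Logger:
    _instance = None

    def __new__(cls, *args, **kwargs):
        # Create the single instance on first use, then always return it.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def log(self, message):
        print(f"[LOG] {message}")


a = Logger()
b = Logger()
assert a is b  # both names refer to the one and only instance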


Implementing Dependency Injection in Swift [Tutorial]

Bhagyashree R
11 Feb 2019
14 min read
In software development, it's always recommended to split the system into loosely coupled modules that can work as independently as possible. Dependency Injection (DI) is a pattern that helps to reach this goal, creating a maintainable and testable system. It is often confused with the complex and over-configurable frameworks that let us add DI to our code; in reality, it is a simple pattern that can be adopted without too much effort.

This article is taken from the book Hands-On Design Patterns with Swift by Florent Vilmart, Giordano Scalzo, and Sergio De Simone. This book demonstrates how to apply design patterns and best practices in real-life situations, whether that's for new or already existing Swift projects. You'll begin with a quick refresher on Swift, the compiler, the standard library, and Foundation, followed by the Cocoa design patterns, and then the creational, structural, and behavioral patterns as defined by the GoF. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository.

In this article, we'll see what Dependency Injection is, where it comes from, and how it's defined, so that we can then discuss various methods of implementing it, with a clear understanding of its principles.

Dependency Injection, a primer

Dependency Injection is one of the most misunderstood concepts in computer programming, because its borders are quite blurry and they overlap with other object-oriented programming concepts. Let's start with a formal definition given by Wikipedia:

"In software engineering, Dependency Injection is a software design pattern that implements inversion of control for resolving dependencies."

To be honest, this is not really clear: what is Inversion of Control? Why is it useful for resolving dependencies?

In procedural programming, each object interacts with all of its collaborators in a direct way and also instantiates them directly. In Inversion of Control, this flow is managed by a third party, usually a framework that calls the objects and receives notifications. An example of this is the implementation of a UI engine. In a UI engine, there are two parts: the Views part and the Models part. The Views part handles all the interaction with the users, such as tapping buttons and rendering labels, whereas the Models part is responsible for the business logic. Usually, the application code goes in the Models part, and the connections with the Views are made via callbacks that the engine calls when the user interacts with a button or a text field. The paradigm changes from an imperative style, where the algorithm is a sequence of actions ("do this, then do that"), to an event style ("when the button is tapped, call the server"). The control of the actions is thus inverted: instead of the model doing things, the model now receives calls.

Inversion of Control is often called the Hollywood Principle. The essence of this principle is, "Don't call us, we'll call you," which is a response you might hear after auditioning for a role in Hollywood. In procedural programming, the flow of the program is determined by modules that are statically connected together: ContactsView talks to ContactsCoreData and ContactsProductionRemoteService, and each object instantiates its next collaborator. In Inversion of Control, ContactsView talks to a generic ContactsStore and a generic ContactsRemoteService whose concrete implementations could change depending on the context.
In a test context, for example, those implementations can be swapped for test doubles. An important role is played by the entity that manages how to create and connect all the objects together.

After having defined the concept of IoC, let's give a simpler definition of DI, by James Shore:

"Dependency Injection" is a 25-dollar term for a 5-cent concept. [...] Dependency Injection means giving an object its instance variables. Really. That's it."

The first principle of the book Design Patterns by the Gang of Four is "Program to an interface, not an implementation", which means that the objects need to know each other only by their interfaces and not by their implementations. After having defined how all the classes in a piece of software will collaborate with each other, this collaboration can be designed as a graph. The graph could be implemented by connecting the actual implementations of the classes, but following the first principle mentioned previously, we can do it using the interfaces of the same objects: Dependency Injection is a way of building this graph by passing the concrete classes to the objects.

Four ways to use Dependency Injection

Dependency Injection is used ubiquitously in Cocoa too, and in the following examples, we'll see code snippets both from Cocoa and from typical client-side code. Let's take a look at the following four sections to learn how to use Dependency Injection.

Constructor Injection

The first way to do DI is to pass the collaborators in the constructor, where they are then saved in private properties. As an example, let's take an e-commerce app whose basket is handled both locally and remotely. The BasketClient class orchestrates the logic, saves locally in BasketStore, and synchronizes remotely with BasketService:

protocol BasketStore {
    func loadAllProduct() -> [Product]
    func add(product: Product)
    func delete(product: Product)
}

protocol BasketService {
    func fetchAllProduct(onSuccess: ([Product]) -> Void)
    func append(product: Product)
    func remove(product: Product)
}

struct Product {
    let id: String
    let name: String
    //...
}

Then, in the constructor of BasketClient, the concrete implementations of the protocols are passed:

class BasketClient {
    private let service: BasketService
    private let store: BasketStore

    init(service: BasketService, store: BasketStore) {
        self.service = service
        self.store = store
    }

    func add(product: Product) {
        store.add(product: product)
        service.append(product: product)
        calculateAppliedDiscount()
        //...
    }

    // ...

    private func calculateAppliedDiscount() {
        // ...
    }
}

In Cocoa and Cocoa Touch, the Apple foundation libraries, there are a few examples of this pattern. A notable example is NSPersistentStore in CoreData:

class NSPersistentStore: NSObject {
    init(persistentStoreCoordinator root: NSPersistentStoreCoordinator?,
         configurationName name: String?,
         URL url: NSURL,
         options: [NSObject: AnyObject]?)

    var persistentStoreCoordinator: NSPersistentStoreCoordinator? { get }
}

In the end, Dependency Injection as defined by James Shore is all here: define the collaborators with protocols and then pass them in the constructor. This is the best way to do DI. After construction, the object is fully formed and has a consistent state. Also, just by looking at the signature of init, the dependencies of the object are clear. In fact, Constructor Injection is not only the most effective technique, it's also the easiest. The only open question is: who has to create the object graph? The parent object? The AppDelegate? We'll discuss that point in the Where to bind the dependencies section.
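To make the testing benefit concrete, here is a minimal sketch, not taken from the book, of how Constructor Injection lets us pass test doubles to BasketClient; the InMemoryBasketStore and NoopBasketService types are hypothetical, and the example assumes Product has only the two stored properties shown above:

// Test doubles conform to the same protocols as the production implementations.
final class InMemoryBasketStore: BasketStore {
    private var products: [Product] = []
    func loadAllProduct() -> [Product] { return products }
    func add(product: Product) { products.append(product) }
    func delete(product: Product) { products = products.filter { $0.id != product.id } }
}

final class NoopBasketService: BasketService {
    func fetchAllProduct(onSuccess: ([Product]) -> Void) { onSuccess([]) }
    func append(product: Product) {}
    func remove(product: Product) {}
}

// The production composition would pass the real store and service instead.
let client = BasketClient(service: NoopBasketService(), store: InMemoryBasketStore())
client.add(product: Product(id: "42", name: "A book")) // assumes Product's memberwise initializer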
Property Injection

We have already agreed that Constructor Injection is the best way to do DI, so why bother finding other methods? Well, it is not always possible to define the constructor the way we want. A notable example is doing DI with view controllers that are defined in storyboards. Given a BasketViewController that orchestrates the service and the store, we must pass them as properties:

class BasketViewController: UIViewController {
    var service: BasketService?
    var store: BasketStore?
    // ...
}

This pattern is less elegant than the previous one:

- The view controller isn't in a valid state until all the properties are set.
- Properties introduce mutability, and immutable classes are simpler and more efficient.
- The properties must be defined as optionals, leading us to add question marks everywhere.
- They are set by an external object, so they must be writeable, and this could potentially permit something else to overwrite the values set at the beginning.
- There is no way to enforce the validity of the setup at compile time.

However, something can be done:

- The properties can be defined as implicitly unwrapped optionals and then required in viewDidLoad. This isn't a static check, but at least they are checked at the first sensible opportunity, which is when the view controller has been loaded.
- A function that sets all the properties at once prevents us from partially defining the collaborator list.

The class BasketViewController must then be written as:

class BasketViewController: UIViewController {
    private var service: BasketService!
    private var store: BasketStore!

    func set(service: BasketService, store: BasketStore) {
        self.service = service
        self.store = store
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        precondition(service != nil, "BasketService required")
        precondition(store != nil, "BasketStore required")
        // ...
    }
}

Property Injection also permits us to have overridable properties with a default value, which can be useful for testing. Let's consider a dependency on a wrapper around the current time:

class CheckoutViewController: UIViewController {
    var time: Time = DefaultTime()
}

protocol Time {
    func now() -> Date
}

struct DefaultTime: Time {
    func now() -> Date {
        return Date()
    }
}

In the production code, we don't need to do anything, while in the testing code we can inject a particular date instead of always returning the current time. This permits us to test how the software will behave in the future, or in the past. A dependency defined in the same module or framework is Local; when it comes from another module or framework, it's Foreign. A Local dependency can be used as a default value, but a Foreign one cannot, as that would introduce a strong dependency between the modules.

Method Injection

This pattern just passes a collaborator in the method:

class BasketClient {
    func add(product: Product, to store: BasketStore) {
        store.add(product: product)
        calculateAppliedDiscount()
        //...
    }

    // ...

    private func calculateAppliedDiscount() {
        // ...
    }
}

This is useful when the object has several collaborators, but most of them are just temporary and it isn't worth setting up the relationship for the whole life cycle of the object.

Ambient Context

The final pattern, Ambient Context, is similar to the Singleton.
We still have a single instance as a static variable, but the class has multiple subclasses with different behaviors, and the static variable is writeable through a static function:

class Analytics {
    static private(set) var instance: Analytics = NoAnalytics()

    static func setAnalytics(analytics: Analytics) {
        self.instance = analytics
    }

    func track(event: Event) {
        fatalError("Implement in a subclass")
    }
}

class NoAnalytics: Analytics {
    override func track(event: Event) {}
}

class GoogleAnalytics: Analytics {
    override func track(event: Event) {
        //...
    }
}

class AdobeAnalytics: Analytics {
    override func track(event: Event) {
        //...
    }
}

struct Event {
    //...
}

This pattern should be used only for universal dependencies that represent cross-cutting concerns, such as analytics, logging, and times and dates. The pattern has some advantages: the dependencies are always accessible and don't require changing the API. It works well for cross-cutting concerns, but it doesn't fit other cases, where the object isn't unique. Also, it makes the dependency implicit, and it represents a global mutable state that can sometimes lead to issues that are difficult to debug.

DI anti-patterns

When we try to implement a new technique, it is quite easy to lose control and implement it in the wrong way. Let's see, then, the most common anti-patterns in Dependency Injection.

Control Freak

The first one is pretty easy to spot: we are not using Injection at all. Instead of being injected, the dependency is instantiated inside the object that depends on it:

class FeaturedProductsController {
    private let restProductsService: ProductsService

    init() {
        self.restProductsService = RestProductsService(
            configuration: Configuration.loadFromBundleId())
    }
}

In this example, ProductsService could have been injected in the constructor, but it is instantiated there instead. Mark Seemann, in his book Dependency Injection in .NET, Chapter 5.1, DI anti-patterns, calls it Control Freak because it describes a class that will not relinquish its dependencies. The Control Freak is the dominant DI anti-pattern, and it happens every time a class directly instantiates its dependencies instead of relying on Inversion of Control for that. In the case of the example, even though the rest of the class is programmed against an interface, there is no way of changing the actual implementation of ProductsService: the concrete class will always be RestProductsService. The only way to change it is to modify the code and compile it again, but with DI it should be possible to change the behavior at runtime. Sometimes, someone tries to fix the Control Freak anti-pattern using the factory pattern, but the reality is that the only way to fix it is to apply Inversion of Control to the dependency and inject it in the constructor:

class FeaturedProductsController {
    private let productsService: ProductsService

    init(service: ProductsService) {
        self.productsService = service
    }
}

As already mentioned, Control Freak is the most common DI anti-pattern; pay particular attention so you don't slip into its trap.

Bastard Injection

Constructor overloads are fairly common in Swift codebases, but these can lead to the Bastard Injection anti-pattern.
A common scenario is when we have a constructor that lets us inject a Test Double, but also has a default parameter:

class TodosService {
    let repository: TodosRepository

    init(repository: TodosRepository = SqlLiteTodosRepository()) {
        self.repository = repository
    }
}

The biggest problem here arises when the default implementation is a Foreign dependency, that is, a class defined in another module; this creates a strong relationship between the two modules, making it impossible to reuse the class without also including the dependent module. The reason someone is tempted to write a default implementation is pretty obvious, since it is an easy way to instantiate the class with just TodosService(), without the need for a Composition Root or something similar. However, this nullifies the benefits of DI, and it should be avoided by removing the default implementation and injecting the dependency.

Service Locator

The final anti-pattern that we will explore is the most dangerous one: the Service Locator. It's funny, because this is often considered a good pattern and is widely used, even in the famous Spring framework. Originally, the Service Locator pattern was defined in Microsoft patterns & practices' Enterprise Library, as Mark Seemann writes in his book Dependency Injection in .NET, Chapter 5.4, Service Locator, but now he advocates strongly against it. Service Locator is a common name for a service that we can query for different objects that were previously registered in it. As mentioned, it is a tricky one, because it makes everything seem OK, but in fact it nullifies all the advantages of Dependency Injection:

let locator = ServiceLocator.instance
locator.register(
    SqlLiteTodosRepository(),
    forType: TodosRepository.self)

class TodosService {
    private let repository: TodosRepository

    init() {
        let locator = ServiceLocator.instance
        self.repository = locator.resolve(TodosRepository.self)
    }
}

Here we have a service locator as a singleton, with which we register the classes we want to resolve. Instead of injecting the class into the constructor, we just query it from the service. It looks like the Service Locator has all the advantages of Dependency Injection: it provides testability and extensibility, since we can use different implementations without changing the client, and it enables parallel development and separates configuration from usage. But it has some major disadvantages. With DI, the dependencies are explicit; it's enough to look at the signature of the constructor or the exposed properties to understand what the dependencies of a class are. With a Service Locator, these dependencies are implicit, and the only way to find them is to inspect the implementation, which breaks encapsulation. Also, every class depends on the Service Locator, and this makes the code tightly coupled to it. If we want to reuse a class, we also need to add the Service Locator to our project, and since it could live in a different module, we end up adding a whole module as a dependency when we wanted to use just one class. The Service Locator can also give us the impression that we are not using DI at all, because all the dependencies are hidden inside the classes.

In this article, we covered the different flavors of Dependency Injection and examined how each can solve a particular set of problems in real-world scenarios. If you found this post useful, do check out the book, Hands-On Design Patterns with Swift.
From learning about the most sought-after design patterns to comprehensive coverage of architectural patterns and code testing, this book is all you need to write clean, reusable code in Swift.

Implementing Dependency Injection in Google Guice [Tutorial]
Implementing Dependency Injection in Spring [Tutorial]
Dagger 2.17, a dependency injection framework for Java and Android, is now out!


Reactive programming in Swift with RxSwift and RxCocoa [Tutorial]

Bhagyashree R
10 Feb 2019
10 min read
The basic idea behind Reactive Programming (RP) is that of asynchronous data streams, such as the stream of events generated by mouse clicks, or a piece of data coming through a network connection. Anything can be a stream; there are really no constraints. The only property that makes it sensible to model an entity as a stream is its ability to change at unpredictable times. The other half of the picture is the idea of observers, which you can think of as agents that subscribe to receive notifications of new events in a stream. In between, you have ways of transforming those streams, combining them, creating new streams, filtering them, and so on.

You could look at RP as a generalization of Key-Value Observing (KVO), a mechanism that has been present in the macOS and iOS SDKs since their inception. KVO enables objects to receive notifications about changes to other objects' properties to which they have subscribed as observers. An observer object can register by providing a keypath (hence the name) into the observed object.

This article is taken from the book Hands-On Design Patterns with Swift by Florent Vilmart, Giordano Scalzo, and Sergio De Simone. This book demonstrates how to apply design patterns and best practices in real-life situations, whether that's for new or already existing Swift projects. You'll begin with a quick refresher on Swift, the compiler, the standard library, and Foundation, followed by the Cocoa design patterns, and then the creational, structural, and behavioral patterns as defined by the GoF. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository.

In this article, we will give a brief introduction to one popular framework for RP in Swift, RxSwift, and its Cocoa counterpart, RxCocoa, which makes Cocoa ready for use with RP. RxSwift is not the only RP framework for Swift. Another popular one is ReactiveCocoa, but we think that, once you have understood the basic concepts behind one, it won't be hard to switch to the other.

Using RxSwift and RxCocoa in reactive programming

RxSwift aims to be fully compatible with Rx, Reactive Extensions for Microsoft .NET, a mature reactive programming framework that has been ported to many languages, including Java, Scala, JavaScript, and Clojure. Adopting RxSwift thus has the advantage that it will be quite natural for you to use the same approach and concepts in another language for which Rx is available, in case you need to.

If you want to play with RxSwift, the first step is creating an Xcode project and adding the RxSwift dependency. If you use the Swift Package Manager, just make sure your Package.swift file lists RxSwift among its dependencies. If you use CocoaPods, add the following dependencies to your Podfile:

pod 'RxSwift', '~> 4.0'
pod 'RxCocoa', '~> 4.0'

Then, run this command:

pod install

Finally, if you use Carthage, add this to your Cartfile:

github "ReactiveX/RxSwift" ~> 4.0

Then, run this command to finish:

carthage update

As you can see, we have also included RxCocoa as a dependency. RxCocoa is a framework that extends Cocoa to make it ready to be used with RxSwift. For example, RxCocoa will make many properties of your Cocoa objects observable without requiring you to add a single line of code. So if you have a UI object whose position changes depending on some user action, you can observe its center property and react to its evolution.
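As a small taste of what that looks like, here is a minimal sketch, not taken from the book, that observes a UIKit control through RxCocoa; the view controller, the slider, and the label are illustrative assumptions:

import UIKit
import RxSwift
import RxCocoa

final class SliderViewController: UIViewController {
    private let slider = UISlider()
    private let label = UILabel()
    private let disposeBag = DisposeBag()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(slider)
        view.addSubview(label)

        // RxCocoa exposes the slider's value as an observable stream:
        // every change is mapped to a string and pushed into the label.
        slider.rx.value
            .map { "Current value: \($0)" }
            .bind(to: label.rx.text)
            .disposed(by: disposeBag)
    }
}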
Observables and observers

Now that RxSwift is set up in our project, let's start with a few basic concepts before diving into some code:

- A stream in RxSwift is represented by Observable<ObservableType>, which is equivalent to Sequence, with the added capability of being able to receive new elements asynchronously.
- An observable stream in Rx can emit three different events: next, error, and completed. When an observer registers for a stream, the stream begins to emit next events, and it does so until an error or completed event is generated, in which case the stream stops emitting events.
- You subscribe to a stream by calling ObservableType.subscribe, which is equivalent to Sequence.makeIterator. However, you do not use that iterator directly, as you would to iterate a sequence; rather, you provide a callback that will receive new events.
- When you are done with a stream, you should release it, along with all the resources it allocated, by calling dispose. To make it easier not to forget to release streams, RxSwift provides DisposeBag and takeUntil. Make sure that you use one of them in your production code.

All of this can be translated into the following code snippet:

let aDisposableBag = DisposeBag()
let thisIsAnObservableStream = Observable.from([1, 2, 3, 4, 5, 6])

let subscription = thisIsAnObservableStream.subscribe(
    onNext: { print("Next value: \($0)") },
    onError: { print("Error: \($0)") },
    onCompleted: { print("Completed") })

// add the subscription to the disposable bag
// when the bag is collected, the subscription is disposed
subscription.disposed(by: aDisposableBag)

// if you do not use a disposable bag, do not forget this!
// subscription.dispose()

Usually, your view controller is where you create your subscriptions, while, in our example, thisIsAnObservableStream, observers, and observables fit into your view model. In general, you should make all of your model properties observable, so your view controller can subscribe to those observables to update the UI when need be. In addition to being observable, some properties of your view model could also be observers. For example, you could have a UITextField or UISearchBar in your app UI, and a property of your view model could observe its text property. Based on that value, you could display some relevant information, for example, the result of a query.

When a property of your view model is at the same time an observable and an observer, RxSwift provides you with a different role for your entity: that of a Subject. There exist multiple categories of subjects, categorized based on their behavior, so you will see BehaviorSubject, PublishSubject, ReplaySubject, and Variable. They only differ in the way that they make past events available to their observers. Before looking at how these new concepts may be used in your program, we need to introduce two further concepts: transformations and schedulers.

Transformations

Transformations allow you to create new observable streams by combining, filtering, or transforming the events emitted by other observable streams. The available transformations include the following:

- map: This transforms each event in a stream into another value before any observer can observe that value. For example, you could map the text property of a UISearchBar into a URL to be used to query some remote service.
- flatMap: This transforms each event into another Observable. For example, you could map the text property of a UISearchBar into the result of an asynchronous query.
scan: This is similar to the reduce Swift operator on sequences. It will accumulate each new event into a partial result, based on all previously emitted events, and emit that result.
filter: This enables filtering of emitted events, based on a condition to be verified.
merge: This merges two streams of events while preserving their ordering.
zip: This combines two streams of events by creating a new stream whose events are tuples made from the successive events of the two original streams.

Schedulers

Schedulers allow you to control the queue to which RxSwift operators are dispatched. By default, all RxSwift operations are executed on the same queue where the subscription was made, but by using schedulers with observeOn and subscribeOn, you can alter that behavior. For example, you could subscribe to a stream whose events are emitted from a background queue, possibly the results of some lengthy tasks, and observe those events from the main thread so as to be able to update the UI based on those tasks' outcomes. Recalling our previous example, this is how we could use observeOn and subscribeOn as described:

let aDisposableBag = DisposeBag()

let thisIsAnObservableStream = Observable.from([1, 2, 3, 4, 5, 6])
    .observeOn(MainScheduler.instance).map { n in
        print("This is performed on the main scheduler")
    }

let subscription = thisIsAnObservableStream
    .subscribeOn(ConcurrentDispatchQueueScheduler(qos: .background))
    .subscribe(onNext: { event in
        print("Handle \(event) on main thread? \(Thread.isMainThread)")
    }, onError: {
        print("Error: \($0). On main thread? \(Thread.isMainThread)")
    }, onCompleted: {
        print("Completed. On main thread? \(Thread.isMainThread)")
    })

subscription.disposed(by: aDisposableBag)

Asynchronous networking – an example

Now we can take a look at a slightly more compelling example, showing off the power of reactive programming. Let's get back to our previous example: a UISearchBar collects user input that a view controller observes in order to update a table displaying the result of a remote query. This is a pretty standard UI design. Using RxCocoa, we can observe the text property of the search bar and map it into a URL. For example, if the user inputs a GitHub username, the URLRequest could retrieve a list of all their repositories. We then further transform the URLRequest into another observable using flatMap. The remoteStream function is defined in the following snippet, and simply returns an observable containing the result of the network query. Finally, we bind the stream returned by flatMap to our tableView, again using one of the methods provided by RxCocoa, to update its content based on the JSON data passed in record:

searchController.searchBar.rx.text.asObservable()
    .map(makeURLRequest)
    .flatMap(remoteStream)
    .bind(to: tableView.rx.items(cellIdentifier: cellIdentifier)) { index, record, cell in
        cell.textLabel?.text = "" // update here the table cells
    }
    .disposed(by: disposeBag)

This all looks pretty clear and linear. The only bit left out is the networking code. This is pretty standard code, with the major difference being that it returns an observable wrapping a URLSession.dataTask call. The following code shows the standard way to create an observable stream, by calling observer.onNext with the result of the asynchronous task:

func remoteStream<T: Codable>(_ request: URLRequest) -> Observable<T> {
    return Observable<T>.create { observer in
        let task = URLSession.shared.dataTask(with: request) { (data, response, error) in
            do {
                // decode the JSON payload and emit it as a single next event
                let records: T = try JSONDecoder().decode(T.self, from: data ?? Data())
                observer.onNext(records)
                observer.onCompleted()
            } catch let error {
                observer.onError(error)
            }
        }
        task.resume()

        return Disposables.create {
            task.cancel()
        }
    }
}

As a final bit, we could consider the following variant: we want to store the UISearchBar text property value in our model, instead of simply retrieving the information associated with it from our remote service. To do so, we add a username property to our view model and recognize that it should, at the same time, be an observer of the UISearchBar text property as well as an observable, since it will be observed by the view controller to retrieve the associated information whenever it changes. This is the relevant code for our view model:

import Foundation
import RxSwift
import RxCocoa

class ViewModel {
    var username = Variable<String>("")

    init() {
        setup()
    }

    func setup() {
        ...
    }
}

The view controller will need to be modified as in the following code block, where you can see that we bind the UISearchBar text property to our view model's username property; then, we observe the latter, as we did previously with the search bar:

searchController.searchBar.rx.text.orEmpty
    .bind(to: viewModel.username)
    .disposed(by: disposeBag)

viewModel.username.asObservable()
    .map(makeURLRequest)
    .flatMap(remoteStream)
    .bind(to: tableView.rx.items(cellIdentifier: cellIdentifier)) { index, record, cell in
        cell.textLabel?.text = "" // update here the table cells
    }
    .disposed(by: disposeBag)

With this last example, our short introduction to RxSwift is complete. There is much more to be said, though. A whole book could be devoted to RxSwift/RxCocoa and how they can be used to write Swift apps!

If you found this post useful, do check out the book, Hands-On Design Patterns with Swift. The book provides a complete overview of how to implement classic design patterns in Swift and will guide you in building Swift applications that are scalable, faster, and easier to maintain.

Reactive Extensions: Ways to create RxJS Observables [Tutorial]
What's new in Vapor 3, the popular Swift based web framework
Exclusivity enforcement is now complete in Swift 5

Creating a basic Julia project for loading and saving data [Tutorial]

Prasad Ramesh
09 Feb 2019
11 min read
In this article, we take a look at the common Iris dataset using simple statistical methods. Then we create a simple Julia project to load and save data from the Iris dataset. This article is an excerpt from a book written by Adrian Salceanu titled Julia Programming Projects. In this book, you will develop and run a web app using Julia and the HTTP package, among other things.

To start, we'll load the Iris flowers dataset from the RDatasets package and manipulate it using standard data analysis functions. Then we'll look more closely at the data by employing common visualization techniques. And finally, we'll see how to persist and (re)load our data. But, in order to do that, first, we need to take a look at some of the language's most important building blocks.

Here are the external packages used in this tutorial and their specific versions:

[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]
[email protected]

In order to install a specific version of a package you need to run:

pkg> add [email protected]

For example:

pkg> add [email protected]

Alternatively, you can install all the used packages by downloading the Project.toml file and using pkg> instantiate, as follows:

julia> download("https://raw.githubusercontent.com/PacktPublishing/Julia-Programming-Projects/master/Chapter02/Project.toml", "Project.toml")
pkg> activate .
pkg> instantiate

Using simple statistics to better understand our data

Now that it's clear how the data is structured and what is contained in the collection, we can get a better understanding by looking at some basic stats. To get us started, let's invoke the describe function:

julia> describe(iris)

The output is as follows:

This function summarizes the columns of the iris DataFrame. If a column contains numerical data (such as SepalLength), it will compute the minimum, median, mean, and maximum. The number of missing and unique values is also included. The last column reports the type of data stored in the column.

A few other stats are available, including the 25th and the 75th percentiles, and the first and the last values. We can ask for them by passing an extra stats argument, in the form of an array of symbols:

julia> describe(iris, stats=[:q25, :q75, :first, :last])

The output is as follows:

Any combination of stats labels is accepted. These are all the options—:mean, :std, :min, :q25, :median, :q75, :max, :eltype, :nunique, :first, :last, and :nmissing. In order to get all the stats, the special :all value is accepted:

julia> describe(iris, stats=:all)

The output is as follows:

We can also compute these individually by using Julia's Statistics package. For example, to calculate the mean of the SepalLength column, we'll execute the following:

julia> using Statistics
julia> mean(iris[:SepalLength])
5.843333333333334

In this example, we use iris[:SepalLength] to select the whole column. The result, not at all surprisingly, is the same as that returned by the corresponding describe() value. The short snippet below shows how to apply the same idea to every numeric column at once.
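This is only a sketch, reusing the packages and column names already loaded above; the comprehension skips the last column, Species, which is not numeric:

julia> using Statistics
julia> [mean(iris[col]) for col in names(iris)[1:end-1]]

Each element of the resulting array is the mean of the corresponding column, and the values match the ones reported by describe().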
In a similar way we can compute the median(): julia> median(iris[:SepalLength]) 5.8 And there's (a lot) more, such as, for instance, the standard deviation std(): julia> std(iris[:SepalLength]) 0.828066127977863 Or, we can use another function from the Statistics package, cor(), in a simple script to help us understand how the values are correlated: julia> for x in names(iris)[1:end-1] for y in names(iris)[1:end-1] println("$x \t $y \t $(cor(iris[x], iris[y]))") end println("-------------------------------------------") end Executing this snippet will produce the following output: SepalLength SepalLength 1.0 SepalLength SepalWidth -0.11756978413300191 SepalLength PetalLength 0.8717537758865831 SepalLength PetalWidth 0.8179411262715759 ------------------------------------------------------------ SepalWidth SepalLength -0.11756978413300191 SepalWidth SepalWidth 1.0 SepalWidth PetalLength -0.42844010433053953 SepalWidth PetalWidth -0.3661259325364388 ------------------------------------------------------------ PetalLength SepalLength 0.8717537758865831 PetalLength SepalWidth -0.42844010433053953 PetalLength PetalLength 1.0 PetalLength PetalWidth 0.9628654314027963 ------------------------------------------------------------ PetalWidth SepalLength 0.8179411262715759 PetalWidth SepalWidth -0.3661259325364388 PetalWidth PetalLength 0.9628654314027963 PetalWidth PetalWidth 1.0 ------------------------------------------------------------ The script iterates over each column of the dataset with the exception of Species (the last column, which is not numeric), and generates a basic correlation table. The table shows strong positive correlations between SepalLength and PetalLength (87.17%), SepalLength and PetalWidth (81.79%), and PetalLength and PetalWidth (96.28%). There is no strong correlation between SepalLength and SepalWidth. We can use the same script, but this time employ the cov() function to compute the covariance of the values in the dataset: julia> for x in names(iris)[1:end-1] for y in names(iris)[1:end-1] println("$x \t $y \t $(cov(iris[x], iris[y]))") end println("--------------------------------------------") end This code will generate the following output: SepalLength SepalLength 0.6856935123042507 SepalLength SepalWidth -0.04243400447427293 SepalLength PetalLength 1.2743154362416105 SepalLength PetalWidth 0.5162706935123043 ------------------------------------------------------- SepalWidth SepalLength -0.04243400447427293 SepalWidth SepalWidth 0.189979418344519 SepalWidth PetalLength -0.3296563758389262 SepalWidth PetalWidth -0.12163937360178968 ------------------------------------------------------- PetalLength SepalLength 1.2743154362416105 PetalLength SepalWidth -0.3296563758389262 PetalLength PetalLength 3.1162778523489933 PetalLength PetalWidth 1.2956093959731543 ------------------------------------------------------- PetalWidth SepalLength 0.5162706935123043 PetalWidth SepalWidth -0.12163937360178968 PetalWidth PetalLength 1.2956093959731543 PetalWidth PetalWidth 0.5810062639821031 ------------------------------------------------------- The output illustrates that SepalLength is positively related to PetalLength and PetalWidth, while being negatively related to SepalWidth. SepalWidth is negatively related to all the other values. 
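Incidentally, Statistics can also produce the whole correlation or covariance matrix in a single call when given a numeric matrix. The following is a sketch of ours, reusing the convert(Array, ...) pattern that appears later in this article; float.() simply forces the values to Float64:

julia> using Statistics
julia> numeric = float.(convert(Array, iris[:, 1:4]))  # the four numeric columns
julia> cor(numeric)  # 4x4 matrix with the same values as the table above
julia> cov(numeric)  # the corresponding covariance matrix

This avoids the nested loop entirely, at the cost of losing the column labels in the printed output.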
Moving on, if we want a random data sample, we can ask for it like this: julia> rand(iris[:SepalLength]) 7.4 Optionally, we can pass in the number of values to be sampled: julia> rand(iris[:SepalLength], 5) 5-element Array{Float64,1}: 6.9 5.8 6.7 5.0 5.6 We can convert one of the columns to an array using the following: julia> sepallength = Array(iris[:SepalLength]) 150-element Array{Float64,1}: 5.1 4.9 4.7 4.6 5.0 # ... output truncated ... Or we can convert the whole DataFrame to a matrix: julia> irisarr = convert(Array, iris[:,:]) 150×5 Array{Any,2}: 5.1 3.5 1.4 0.2 CategoricalString{UInt8} "setosa" 4.9 3.0 1.4 0.2 CategoricalString{UInt8} "setosa" 4.7 3.2 1.3 0.2 CategoricalString{UInt8} "setosa" 4.6 3.1 1.5 0.2 CategoricalString{UInt8} "setosa" 5.0 3.6 1.4 0.2 CategoricalString{ UInt8} "setosa" # ... output truncated ... Loading and saving our data Julia comes with excellent facilities for reading and storing data out of the box. Given its focus on data science and scientific computing, support for tabular-file formats (CSV, TSV) is first class. Let's extract some data from our initial dataset and use it to practice persistence and retrieval from various backends. We can reference a section of a DataFrame by defining its bounds through the corresponding columns and rows. For example, we can define a new DataFrame composed only of the PetalLength and PetalWidth columns and the first three rows: julia> iris[1:3, [:PetalLength, :PetalWidth]] 3×2 DataFrames.DataFrame │ Row │ PetalLength │ PetalWidth │ ├─────┼─────────────┼────────────┤ │ 1 │ 1.4 │ 0.2 │ │ 2 │ 1.4 │ 0.2 │ │ 3 │ 1.3 │ 0.2 │ The generic indexing notation is dataframe[rows, cols], where rows can be a number, a range, or an Array of boolean values where true indicates that the row should be included: julia> iris[trues(150), [:PetalLength, :PetalWidth]] This snippet will select all the 150 rows since trues(150) constructs an array of 150 elements that are all initialized as true. The same logic applies to cols, with the added benefit that they can also be accessed by name. Armed with this knowledge, let's take a sample from our original dataset. It will include some 10% of the initial data and only the PetalLength, PetalWidth, and Species columns: julia> test_data = iris[rand(150) .<= 0.1, [:PetalLength, :PetalWidth, :Species]] 10×3 DataFrames.DataFrame │ Row │ PetalLength │ PetalWidth │ Species │ ├─────┼─────────────┼────────────┼──────────────┤ │ 1 │ 1.1 │ 0.1 │ "setosa" │ │ 2 │ 1.9 │ 0.4 │ "setosa" │ │ 3 │ 4.6 │ 1.3 │ "versicolor" │ │ 4 │ 5.0 │ 1.7 │ "versicolor" │ │ 5 │ 3.7 │ 1.0 │ "versicolor" │ │ 6 │ 4.7 │ 1.5 │ "versicolor" │ │ 7 │ 4.6 │ 1.4 │ "versicolor" │ │ 8 │ 6.1 │ 2.5 │ "virginica" │ │ 9 │ 6.9 │ 2.3 │ "virginica" │ │ 10 │ 6.7 │ 2.0 │ "virginica" │ What just happened here? The secret in this piece of code is rand(150) .<= 0.1. It does a lot—first, it generates an array of random Float values between 0 and 1; then, it compares the array, element-wise, against 0.1 (which represents 10% of 1); and finally, the resultant Boolean array is used to filter out the corresponding rows from the dataset. It's really impressive how powerful and succinct Julia can be! In my case, the result is a DataFrame with the preceding 10 rows, but your data will be different since we're picking random rows (and it's quite possible you won't have exactly 10 rows either). Saving and loading using tabular file formats We can easily save this data to a file in a tabular file format (one of CSV, TSV, and others) using the CSV package. 
We'll have to add it first and then call the write method: pkg> add CSV julia> using CSV julia> CSV.write("test_data.csv", test_data) And, just as easily, we can read back the data from tabular file formats, with the corresponding CSV.read function: julia> td = CSV.read("test_data.csv") 10×3 DataFrames.DataFrame │ Row │ PetalLength │ PetalWidth │ Species │ ├─────┼─────────────┼────────────┼──────────────┤ │ 1 │ 1.1 │ 0.1 │ "setosa" │ │ 2 │ 1.9 │ 0.4 │ "setosa" │ │ 3 │ 4.6 │ 1.3 │ "versicolor" │ │ 4 │ 5.0 │ 1.7 │ "versicolor" │ │ 5 │ 3.7 │ 1.0 │ "versicolor" │ │ 6 │ 4.7 │ 1.5 │ "versicolor" │ │ 7 │ 4.6 │ 1.4 │ "versicolor" │ │ 8 │ 6.1 │ 2.5 │ "virginica" │ │ 9 │ 6.9 │ 2.3 │ "virginica" │ │ 10 │ 6.7 │ 2.0 │ "virginica" │ Just specifying the file extension is enough for Julia to understand how to handle the document (CSV, TSV), both when writing and reading. Working with Feather files Feather is a binary file format that was specially designed for storing data frames. It is fast, lightweight, and language-agnostic. The project was initially started in order to make it possible to exchange data frames between R and Python. Soon, other languages added support for it, including Julia. Support for Feather files does not come out of the box, but is made available through the homonymous package. Let's go ahead and add it and then bring it into scope: pkg> add Feather julia> using Feather Now, saving our DataFrame is just a matter of calling Feather.write: julia> Feather.write("test_data.feather", test_data) Next, let's try the reverse operation and load back our Feather file. We'll use the counterpart read function: julia> Feather.read("test_data.feather") 10×3 DataFrames.DataFrame │ Row │ PetalLength │ PetalWidth │ Species │ ├─────┼─────────────┼────────────┼──────────────┤ │ 1 │ 1.1 │ 0.1 │ "setosa" │ │ 2 │ 1.9 │ 0.4 │ "setosa" │ │ 3 │ 4.6 │ 1.3 │ "versicolor" │ │ 4 │ 5.0 │ 1.7 │ "versicolor" │ │ 5 │ 3.7 │ 1.0 │ "versicolor" │ │ 6 │ 4.7 │ 1.5 │ "versicolor" │ │ 7 │ 4.6 │ 1.4 │ "versicolor" │ │ 8 │ 6.1 │ 2.5 │ "virginica" │ │ 9 │ 6.9 │ 2.3 │ "virginica" │ │ 10 │ 6.7 │ 2.0 │ "virginica" │ Yeah, that's our sample data all right! In order to provide compatibility with other languages, the Feather format imposes some restrictions on the data types of the columns. You can read more about Feather in the package's official documentation at https://juliadata.github.io/Feather.jl/latest/index.html. Saving and loading with MongoDB Let's also take a look at using a NoSQL backend for persisting and retrieving our data. In order to follow through this part, you'll need a working MongoDB installation. You can download and install the correct version for your operating system from the official website, at https://www.mongodb.com/download-center?jmp=nav#community. I will use a Docker image which I installed and started up through Docker's Kitematic (available for download at https://github.com/docker/kitematic/releases). Next, we need to make sure to add the Mongo package. The package also has a dependency on LibBSON, which is automatically added. LibBSON is used for handling BSON, which stands for Binary JSON, a binary-encoded serialization of JSON-like documents. While we're at it, let's add the JSON package as well; we will need it. I'm sure you know how to do that by now—if not, here is a reminder: pkg> add Mongo, JSON At the time of writing, Mongo.jl support for Julia v1 was still a work in progress. This code was tested using Julia v0.6. Easy! 
Let's let Julia know that we'll be using all these packages: julia> using Mongo, LibBSON, JSON We're now ready to connect to MongoDB: julia> client = MongoClient() Once successfully connected, we can reference a dataframes collection in the db database: julia> storage = MongoCollection(client, "db", "dataframes") Julia's MongoDB interface uses dictionaries (a data structure called Dict in Julia) to communicate with the server. For now, all we need to do is to convert our DataFrame to such a Dict. The simplest way to do it is to sequentially serialize and then deserialize the DataFrame by using the JSON package. It generates a nice structure that we can later use to rebuild our DataFrame: julia> datadict = JSON.parse(JSON.json(test_data)) Thinking ahead, to make any future data retrieval simpler, let's add an identifier to our dictionary: julia> datadict["id"] = "iris_test_data" Now we can insert it into Mongo: julia> insert(storage, datadict) In order to retrieve it, all we have to do is query the Mongo database using the "id" field we've previously configured: Julia> data_from_mongo = first(find(storage, query("id" => "iris_test_data"))) We get a BSONObject, which we need to convert back to a DataFrame. Don't worry, it's straightforward. First, we create an empty DataFrame: julia> df_from_mongo = DataFrame() 0×0 DataFrames.DataFrame Then we populate it using the data we retrieved from Mongo: for i in 1:length(data_from_mongo["columns"]) df_from_mongo[Symbol(data_from_mongo["colindex"]["names"][i])] = Array(data_from_mongo["columns"][i]) end julia> df_from_mongo 10×3 DataFrames.DataFrame │ Row │ PetalLength │ PetalWidth │ Species │ ├─────┼─────────────┼────────────┼──────────────┤ │ 1 │ 1.1 │ 0.1 │ "setosa" │ │ 2 │ 1.9 │ 0.4 │ "setosa" │ │ 3 │ 4.6 │ 1.3 │ "versicolor" │ │ 4 │ 5.0 │ 1.7 │ "versicolor" │ │ 5 │ 3.7 │ 1.0 │ "versicolor" │ │ 6 │ 4.7 │ 1.5 │ "versicolor" │ │ 7 │ 4.6 │ 1.4 │ "versicolor" │ │ 8 │ 6.1 │ 2.5 │ "virginica" │ │ 9 │ 6.9 │ 2.3 │ "virginica" │ │ 10 │ 6.7 │ 2.0 │ "virginica" │ And that's it! Our data has been loaded back into a DataFrame. In this tutorial, we looked at the Iris dataset and worked on loading and saving the data in a simple Julia project.  To learn more about machine learning recommendation in Julia and testing the model check out this book Julia Programming Projects. Julia for machine learning. Will the new language pick up pace? Announcing Julia v1.1 with better exception handling and other improvement GitHub Octoverse: top machine learning packages, languages, and projects of 2018

How to make machine learning based recommendations using Julia [Tutorial]

Prasad Ramesh
08 Feb 2019
8 min read
In this article, we will look at machine learning based recommendations using Julia. We will make recommendations using a Julia package called 'Recommendation'. This article is an excerpt from a book written by Adrian Salceanu titled Julia Programming Projects. In this book, you will learn how to build simple-to-advanced applications through examples in Julia Lang 1.x using modern tools. In order to ensure that your code will produce the same results as described in this article, it is recommended to use the same package versions. Here are the external packages used in this tutorial and their specific versions: [email protected] [email protected] [email protected] [email protected] [email protected]+ In order to install a specific version of a package you need to run: pkg> add [email protected] For example: pkg> add [email protected] Alternatively, you can install all the used packages by downloading the Project.toml file provided on GitHub. You can use pkg> instantiate as follows: julia> download("https://raw.githubusercontent.com/PacktPublishing/Julia-Projects/master/Chapter07/Project.toml", "Project.toml") pkg> activate . pkg> instantiate Julia's ecosystem provides access to Recommendation.jl, a package that implements a multitude of algorithms for both personalized and non-personalized recommendations. For model-based recommenders, it has support for SVD, MF, and content-based recommendations using TF-IDF scoring algorithms. There's also another very good alternative—the ScikitLearn.jl package (https://github.com/cstjean/ScikitLearn.jl). This implements Python's very popular scikit-learn interface and algorithms in Julia, supporting both models from the Julia ecosystem and those of the scikit-learn library (via PyCall.jl). The Scikit website and documentation can be found at http://scikit-learn.org/stable/. It is very powerful and definitely worth keeping in mind, especially for building highly efficient recommenders for production usage. For learning purposes, we'll stick to Recommendation, as it provides for a simpler implementation. Making recommendations with Recommendation For our learning example, we'll use Recommendation. It is the simplest of the available options, and it's a good teaching device, as it will allow us to further experiment with its plug-and-play algorithms and configurable model generators. Before we can do anything interesting, though, we need to make sure that we have the package installed: pkg> add Recommendation#master julia> using Recommendation Please note that I'm using the #master version, because the tagged version, at the time of writing this book, was not yet fully updated for Julia 1.0. The workflow for setting up a recommender with Recommendation involves three steps: Setting up the training data Instantiating and training a recommender using one of the available algorithms Once the training is complete, asking for recommendations Let's implement these steps. Setting up the training data Recommendation uses a DataAccessor object to set up the training data. This can be instantiated with a set of Event objects. A Recommendation.Event is an object that represents a user-item interaction. It is defined like this: struct Event user::Int item::Int value::Float64 end In our case, the user field will represent the UserID, the item field will map to the ISBN, and the value field will store the Rating. 
However, a bit more work is needed to bring our data in the format required by Recommendation: First of all, our ISBN data is stored as a string and not as an integer. Second, internally, Recommendation builds a sparse matrix of user *  item and stores the corresponding values, setting up the matrix using sequential IDs. However, our actual user IDs are large numbers, and Recommendation will set up a very large, sparse matrix, going all the way from the minimum to the maximum user IDs. What this means is that, for example, we only have 69 users in our dataset (as confirmed by unique(training_data[:UserID]) |> size), with the largest ID being 277,427, while for books we have 9,055 unique ISBNs. If we go with this, Recommendation will create a 277,427 x 9,055 matrix instead of a 69 x 9,055 matrix. This matrix would be very large, sparse, and inefficient. Therefore, we'll need to do a bit more data processing to map the original user IDs and the ISBNs to sequential integer IDs, starting from 1. We'll use two Dict objects that will store the mappings from the UserID and ISBN columns to the recommender's sequential user and book IDs. Each entry will be of the form dict[original_id] = sequential_id: julia> user_mappings, book_mappings = Dict{Int,Int}(), Dict{String,Int}() We'll also need two counters to keep track of, and increment, the sequential IDs: julia> user_counter, book_counter = 0, 0 We can now prepare the Event objects for our training data: julia> events = Event[] julia> for row in eachrow(training_data) global user_counter, book_counter user_id, book_id, rating = row[:UserID], row[:ISBN], row[:Rating] haskey(user_mappings, user_id) || (user_mappings[user_id] = (user_counter += 1)) haskey(book_mappings, book_id) || (book_mappings[book_id] = (book_counter += 1)) push!(events, Event(user_mappings[user_id], book_mappings[book_id], rating)) end This will fill up the events array with instances of Recommendation.Event, which represents a unique UserID, ISBN, and Rating combination. To give you an idea, it will look like this: julia> events 10005-element Array{Event,1}: Event(1, 1, 10.0) Event(1, 2, 8.0) Event(1, 3, 9.0) Event(1, 4, 8.0) Event(1, 5, 8.0) # output omitted # Please remember this very important aspect—in Julia, the for loop defines a new scope. This means that variables defined outside the for loop are not accessible inside it. To make them visible within the loop's body, we need to declare them as global. Now, we are ready to set up our DataAccessor: julia> da = DataAccessor(events, user_counter, book_counter) Building and training the recommender At this point, we have all that we need to instantiate our recommender. A very efficient and common implementation uses MF—unsurprisingly, this is one of the options provided by the Recommendation package, so we'll use it. Matrix Factorization The idea behind MF is that, if we're starting with a large sparse matrix like the one used to represent user x profile ratings, then we can represent it as the product of multiple smaller and denser matrices. The challenge is to find these smaller matrices so that their product is as close to our original matrix as possible. Once we have these, we can fill in the blanks in the original matrix so that the predicted values will be consistent with the existing ratings in the matrix: Our user x books rating matrix can be represented as the product between smaller and denser users and books matrices. 
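Before we train the real thing, here is a tiny, self-contained sketch (purely illustrative, and not part of the Recommendation package) of the idea behind matrix factorization:

# A small users x items ratings matrix; 0.0 marks an unknown rating
R = [5.0 3.0 0.0 1.0;
     4.0 0.0 0.0 1.0;
     1.0 1.0 0.0 5.0;
     1.0 0.0 0.0 4.0]

k = 2            # number of latent features
U = rand(4, k)   # users x features
V = rand(4, k)   # items x features

# U * V' has the same shape as R. Training (for example, with SGD) repeatedly
# adjusts U and V so that U * V' matches the known entries of R as closely as
# possible; the remaining cells then act as predicted ratings.
approx = U * V'

The Recommendation package takes care of all of this for us; we only need to build the recommender, as shown next.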
To perform the matrix factorization, we can use a couple of algorithms, among which the most popular are SVD and Stochastic Gradient Descent (SGD). Recommendation uses SGD to perform matrix factorization. The code for this looks as follows: julia> recommender = MF(da) julia> build(recommender) We instantiate a new MF recommender and then we build it—that is, train it. The build step might take a while (a few minutes on a high-end computer using the small dataset that's provided on GitHub). If we want to tweak the training process, since SGD implements an iterative approach for matrix factorization, we can pass a max_iter argument to the build function, asking it for a maximum number of iterations. The more iterations we do, in theory, the better the recommendations—but the longer it will take to train the model. If you want to speed things up, you can invoke the build function with a max_iter of 30 or less—build(recommender, max_iter = 30). We can pass another optional argument for the learning rate, for example, build (recommender, learning_rate=15e-4, max_iter=100). The learning rate specifies how aggressively the optimization technique should vary between each iteration. If the learning rate is too small, the optimization will need to be run a lot of times. If it's too big, then the optimization might fail, generating worse results than the previous iterations. Making recommendations Now that we have successfully built and trained our model, we can ask it for recommendations. These are provided by the recommend function, which takes an instance of a recommender, a user ID (from the ones available in the training matrix), the number of recommendations, and an array of books ID from which to make recommendations as its arguments: julia> recommend(recommender, 1, 20, [1:book_counter...]) With this line of code, we retrieve the recommendations for the user with the recommender ID 1, which corresponds to the UserID 277427 in the original dataset. We're asking for up to 20 recommendations that have been picked from all the available books. We get back an array of a Pair of book IDs and recommendation scores: 20-element Array{Pair{Int64,Float64},1}: 5081 => 19.1974 5079 => 19.1948 5078 => 19.1946 5077 => 17.1253 5080 => 17.1246 # output omitted # In this article, we learned how to make recommendations with machine learning in Julia.  To learn more about machine learning recommendation in Julia and testing the model check out this book Julia Programming Projects. YouTube to reduce recommendations of ‘conspiracy theory’ videos that misinform users in the US How to Build a music recommendation system with PageRank Algorithm How to build a cold-start friendly content-based recommender using Apache Spark SQL

How to create a desktop application with Electron [Tutorial]

Bhagyashree R
06 Feb 2019
15 min read
Electron is an open source framework, created by GitHub, that lets you develop desktop executables that bring together Node and Chrome to provide a full GUI experience. Electron has been used for several well-known projects, including developer tools such as Visual Studio Code, Atom, and Light Table. Basically, you can define the UI with HTML, CSS, and JS (or using React, as we'll be doing), but you can also use all of the packages and functions in Node. So, you won't be limited to a sandboxed experience, being able to go beyond what you could do with just a browser. This article is taken from the book  Modern JavaScript Web Development Cookbook by Federico Kereki.  This problem-solving guide teaches you popular problems solving techniques for JavaScript on servers, browsers, mobile phones, and desktops. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository. In this article, we will look at how we can use Electron together with the tools like, React and Node, to create a native desktop application, which you can distribute to users. Setting up Electron We will start with installing Electron, and then in the later recipes, we'll see how we can turn a React app into a desktop program. You can install Electron by executing the following command: npm install electron --save-dev Then, we'll need a starter JS file. Taking some tips from the main.js file, we'll create the following electron-start.js file: // Source file: electron-start.js /* @flow */ const { app, BrowserWindow } = require("electron"); let mainWindow; const createWindow = () => { mainWindow = new BrowserWindow({ height: 768, width: 1024 }); mainWindow.loadURL("http://localhost:3000"); mainWindow.on("closed", () => { mainWindow = null; }); }; app.on("ready", createWindow); app.on("activate", () => mainWindow === null && createWindow()); app.on( "window-all-closed", () => process.platform !== "darwin" && app.quit() ); Here are some points to note regarding the preceding code snippet: This code runs in Node, so we are using require() instead of import The mainWindow variable will point to the browser instance where our code will run We'll start by running our React app, so Electron will be able to load the code from http://localhost:3000 In our code, we also have to process the following events: "ready" is called when Electron has finished its initialization and can start creating windows. "closed" means your window was closed; your app might have several windows open, so at this point, you should delete the closed one. "window-all-closed" implies your whole app was closed. In Windows and Linux, this means quitting, but for macOS, you don't usually quit applications, because of Apple' s usual rules. "activate" is called when your app is reactivated, so if the window had been deleted (as in Windows or Linux), you have to create it again. We already have our React app (you can find the React app in the GitHub repository) in place, so we just need a way to call Electron. Add the following script to package.json, and you'll be ready: "scripts": { "electron": "electron .", . . . How it works... To run the Electron app in development mode, we have to do the following: Run our restful_server_cors server code from the GitHub repository. Start the React app, which requires the server to be running. Wait until it's loaded, and then and only then, move on to the next step. Start Electron. 
So, basically, you'll have to run the following two commands, but you'll need to do so in separate terminals:

// in the directory for our restful server:
node out/restful_server_cors.js

// in the React app directory:
npm start

// and after the React app is running, in another terminal:
npm run electron

After starting Electron, a screen quickly comes up, and we again find our countries and regions app, now running independently of a browser. The app works as always; as an example, I selected a country, Canada, and correctly got its list of regions. We are done! You can see that everything is interconnected, as before, in the sense that if you make any changes to the React source code, they will be instantly reflected in the Electron app.

Adding Node functionality to your app

In the previous recipe, we saw that, with just a few small configuration changes, we can turn our web page into an application. However, you're still restricted in terms of what you can do, because you are still using only those features available in a sandboxed browser window. You don't have to think this way, though, for you can add basically all Node functionality using functions that let you go beyond the limits of the web. Let's see how to do it in this recipe.

How to do it

We want to add some functionality to our app of the kind that a typical desktop app would have. The key to adding Node functions to your app is to use the remote module in Electron. With it, your browser code can invoke methods of the main process, and thus gain access to extra functionality.

Let's say we wanted to add the possibility of saving the list of a country's regions to a file. We'd require access to the fs module to be able to write a file, and we'd also need to open a dialog box to select which file to write to. In our serviceApi.js file, we would add the following functions:

// Source file: src/regionsApp/serviceApi.js

/* @flow */

const electron = window.require("electron").remote;
. . .
const fs = electron.require("fs");

export const writeFile = fs.writeFile.bind(fs);

export const showSaveDialog = electron.dialog.showSaveDialog;

Having added this, we can now write files and show dialog boxes from our main code. To use this functionality, we could add a new action to our world.actions.js file:

// Source file: src/regionsApp/world.actions.js

/* @flow */

import {
    getCountriesAPI,
    getRegionsAPI,
    showSaveDialog,
    writeFile
} from "./serviceApi";
. . .
export const saveRegionsToDisk = () => async (
    dispatch: ({}) => any,
    getState: () => { regions: [] }
) => {
    showSaveDialog((filename: string = "") => {
        if (filename) {
            writeFile(filename, JSON.stringify(getState().regions), e =>
                e && window.console.log(`ERROR SAVING ${filename}`, e)
            );
        }
    });
};

When the saveRegionsToDisk() action is dispatched, it will show a dialog to prompt the user to select what file is to be written, and will then write the current set of regions, taken from getState().regions, to the selected file in JSON format.
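As a side note, dialog.showSaveDialog() also accepts an options object, so the body of saveRegionsToDisk() could suggest a default name and restrict the choice to JSON files. This is a sketch of ours, not code from the book, and it is meant as a drop-in variant of the call shown above:

// Variant of the showSaveDialog() call inside saveRegionsToDisk():
// the first argument configures the native dialog, the second is the
// same callback as before.
showSaveDialog(
    {
        title: "Save regions",
        defaultPath: "regions.json",
        filters: [{ name: "JSON files", extensions: ["json"] }]
    },
    (filename = "") => {
        if (filename) {
            writeFile(filename, JSON.stringify(getState().regions), e =>
                e && window.console.log(`ERROR SAVING ${filename}`, e)
            );
        }
    }
);

Everything else in the action stays the same; only the first argument to showSaveDialog() changes.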
We just have to add the appropriate button to our <RegionsTable> component to be able to dispatch the necessary action:

// Source file: src/regionsApp/regionsTableWithSave.component.js

/* @flow */

import React from "react";
import PropTypes from "prop-types";

import "../general.css";

export class RegionsTable extends React.PureComponent<{
    loading: boolean,
    list: Array<{
        countryCode: string,
        regionCode: string,
        regionName: string
    }>,
    saveRegions: () => void
}> {
    static propTypes = {
        loading: PropTypes.bool.isRequired,
        list: PropTypes.arrayOf(PropTypes.object).isRequired,
        saveRegions: PropTypes.func.isRequired
    };

    static defaultProps = {
        list: []
    };

    render() {
        if (this.props.list.length === 0) {
            return <div className="bordered">No regions.</div>;
        } else {
            const ordered = [...this.props.list].sort(
                (a, b) => (a.regionName < b.regionName ? -1 : 1)
            );

            return (
                <div className="bordered">
                    {ordered.map(x => (
                        <div key={x.countryCode + "-" + x.regionCode}>
                            {x.regionName}
                        </div>
                    ))}
                    <div>
                        <button onClick={() => this.props.saveRegions()}>
                            Save regions to disk
                        </button>
                    </div>
                </div>
            );
        }
    }
}

We are almost done! When we connect this component to the store, we'll simply add the new action, as follows:

// Source file: src/regionsApp/regionsTableWithSave.connected.js

/* @flow */

import { connect } from "react-redux";

import { RegionsTable } from "./regionsTableWithSave.component";
import { saveRegionsToDisk } from "./world.actions";

const getProps = state => ({
    list: state.regions,
    loading: state.loadingRegions
});

const getDispatch = (dispatch: any) => ({
    saveRegions: () => dispatch(saveRegionsToDisk())
});

export const ConnectedRegionsTable = connect(
    getProps,
    getDispatch
)(RegionsTable);

How it works

The code we added showed how we could gain access to a Node package (fs, in our case) and some extra functions, such as showing a Save to disk dialog. When we run our updated app and select a country, we'll see our newly added button, as in the following screenshot:

Clicking on the button will pop up a dialog, allowing you to select the destination for the data:

If you click Save, the list of regions will be written in JSON format, as we specified earlier in our saveRegionsToDisk() function.

Building a more windowy experience

In the previous recipe, we added the possibility of using any and all of the functions provided by Node. In this recipe, let's now focus on making our app more window-like, with icons, menus, and so on. We want the user to really believe that they're using a native app, with all the features that they would be accustomed to. The following list of interesting Electron API modules is just a short list of highlights, but there are many more available options:

clipboard: To do copy and paste operations using the system's clipboard
dialog: To show the native system dialogs for messages, alerts, opening and saving files, and so on
globalShortcut: To detect keyboard shortcuts
Menu, MenuItem: To create a menu bar with menus and submenus
Notification: To add desktop notifications
powerMonitor, powerSaveBlocker: To monitor power state changes, and to disable entering sleep mode
screen: To get information about the screen, displays, and so on
Tray: To add icons and context menus to the system's tray

Let's add a few of these functions so that we can get a better-looking app that is more integrated with the desktop.
How to do it Any decent app should probably have at least an icon and a menu, possibly with some keyboard shortcuts, so let's add those features now, and just for the sake of it, let's also add some notifications for when regions are written to disk. Together with the Save dialog we already used, this means that our app will include several native windowing features. To start with, let's add an icon. Showing an icon is the simplest thing because it just requires an extra option when creating the BrowserWindow() object. I'm not very graphics-visual-designer oriented, so I just downloaded the Alphabet, letter, r Icon Free file from the Icon-Icons website. Implement the icon as follows: mainWindow = new BrowserWindow({ height: 768, width: 1024, icon: "./src/regionsApp/r_icon.png" }); You can also choose icons for the system tray, although there's no way of using our regions app in that context, but you may want to look into it nonetheless. To continue, the second feature we'll add is a menu, with some global shortcuts to boot. In our App.regions.js file, we'll need to add a few lines to access the Menu module, and to define our menu itself: // Source file: src/App.regions.js . . . import { getRegions } from "./regionsApp/world.actions"; . . . const electron = window.require("electron").remote; const { Menu } = electron; const template = [ { label: "Countries", submenu: [ { label: "Uruguay", accelerator: "Alt+CommandOrControl+U", click: () => store.dispatch(getRegions("UY")) }, { label: "Hungary", accelerator: "Alt+CommandOrControl+H", click: () => store.dispatch(getRegions("HU")) } ] }, { label: "Bye!", role: "quit" } ]; const mainMenu = Menu.buildFromTemplate(template); Menu.setApplicationMenu(mainMenu); Using a template is a simple way to create a menu, but you can also do it manually, adding item by item. I decided to have a Countries menu with two options to show the regions for Uruguay and Hungary. The click property dispatches the appropriate action. I also used the accelerator property to define global shortcuts. See the accelerator.md for the list of possible key combinations to use, including the following: Command keys, such as Command (or Cmd), Control (or Ctrl), or both (CommandOrControl or CmdOrCtrl) Alternate keys, such as Alt, AltGr, or Option Common keys, such as Shift, Escape (or Esc), Tab, Backspace, Insert, or Delete Function keys, such as F1 to F24 Cursor keys, including Up, Down, Left, Right, Home, End, PageUp, and PageDown Media keys, such as MediaPlayPause, MediaStop, MediaNextTrack, MediaPreviousTrack, VolumeUp, VolumeDown, and VolumeMute I also want to be able to quit the application. A complete list of roles is available at Electron docs. With these roles, you can do a huge amount, including some specific macOS functions, along with the following: Work with the clipboard (cut, copy, paste, and pasteAndMatchStyle) Handle the window (minimize, close, quit, reload, and forceReload) Zoom (zoomIn, zoomOut, and resetZoom) To finish, and really just for the sake of it, let's add a notification trigger for when a file is written. Electron has a Notification module, but I opted to use node-notifier, which is quite simple to use. First, we'll add the package in the usual fashion: npm install node-notifier --save In serviceApi.js, we'll have to export the new function, so we'll able to import from elsewhere, as we'll see shortly: const electron = window.require("electron").remote; . . . 
export const notifier = electron.require("node-notifier"); Finally, let's use this in our world.actions.js file: import { notifier, . . . } from "./serviceApi"; With all our setup, actually sending a notification is quite simple, requiring very little code: // Source file: src/regionsApp/world.actions.js . . . export const saveRegionsToDisk = () => async ( dispatch: ({}) => any, getState: () => { regions: [] } ) => { showSaveDialog((filename: string = "") => { if (filename) { writeFile(filename, JSON.stringify(getState().regions), e => { if (e) { window.console.log(`ERROR SAVING ${filename}`, e); } else { notifier.notify({ title: "Regions app", message: `Regions saved to ${filename}` }); } }); } }); }; How it works First, we can easily check that the icon appears: Now, let's look at the menu. It has our options, including the shortcuts: Then, if we select an option with either the mouse or the global shortcut, the screen correctly loads the expected regions: Finally, let's see if the notifications work as expected. If we click on the Save regions to disk button and select a file, we'll see a notification, as in the following screenshot: Making a distributable package Now that we have a full app, all that's left to do is package it up so that you can deliver it as an executable file for Windows, Linux, or macOS users. How to do it. There are many ways of packaging an app, but we'll use a tool, electron-builder, that will make it even easier, if you can get its configuration right! First of all, we'll have to begin by defining the build configuration, and our initial step will be, as always, to install the tool: npm install electron-builder --save-dev To access the added tool, we'll require a new script, which we'll add in package.json: "scripts": { "dist": "electron-builder", . . . } We'll also have to add a few more details to package.json, which are needed for the build process and the produced app. In particular, the homepage change is required, because the CRA-created index.html file uses absolute paths that won't work later with Electron: "name": "chapter13", "version": "0.1.0", "description": "Regions app for chapter 13", "homepage": "./", "license": "free", "author": "Federico Kereki", Finally, some specific building configuration will be required. You cannot build for macOS with a Linux or Windows machine, so I'll leave that configuration out. We have to specify where the files will be found, what compression method to use, and so on: "build": { "appId": "com.electron.chapter13", "compression": "normal", "asar": true, "extends": null, "files": [ "electron-start.js", "build/**/*", "node_modules/**/*", "src/regionsApp/r_icon.png" ], "linux": { "target": "zip" }, "win": { "target": "portable" } } We have completed the required configuration, but there are also some changes to do in the code itself, and we'll have to adapt the code for building the package. When the packaged app runs, there won't be any webpack server running; the code will be taken from the built React package. 
The starter code will require the following changes:

// Source file: electron-start.for.builder.js

/* @flow */

const { app, BrowserWindow } = require("electron");
const path = require("path");
const url = require("url");

let mainWindow;

const createWindow = () => {
    mainWindow = new BrowserWindow({
        height: 768,
        width: 1024,
        icon: path.join(__dirname, "./build/r_icon.png")
    });

    mainWindow.loadURL(
        url.format({
            pathname: path.join(__dirname, "./build/index.html"),
            protocol: "file",
            slashes: true
        })
    );

    mainWindow.on("closed", () => {
        mainWindow = null;
    });
};

app.on("ready", createWindow);

app.on("activate", () => mainWindow === null && createWindow());

app.on(
    "window-all-closed",
    () => process.platform !== "darwin" && app.quit()
);

Mainly, we are taking the icon and the code from the build/ directory. An npm run build command will take care of generating that directory, so we can proceed with creating our executable app.

How it works

After doing this setup, building the app is essentially trivial. Just run the dist script we defined earlier, and all the distributable files will be found in the dist/ directory:

npm run dist

Now that we have the Linux app, we can run it by unzipping the .zip file and clicking on the chapter13 executable. (The name came from the "name" attribute in package.json, which we modified earlier.) The result should be like what's shown in the following screenshot:

I also wanted to try out the Windows EXE file. Since I didn't have a Windows machine, I made do by downloading a free VirtualBox virtual machine. After downloading the virtual machine, setting it up in VirtualBox, and finally running it, the result that was produced was the same as for Linux.

So, we've managed to develop a React app, enhance it with Node and Electron features, and finally package it for different operating systems. With that, we are done!

If you found this post useful, do check out the book, Modern JavaScript Web Development Cookbook. You will learn how to create native mobile applications for Android and iOS with React Native, build client-side web applications using React and Redux, and much more.

How to perform event handling in React [Tutorial]
Flutter challenges Electron, soon to release a desktop client to accelerate mobile development
Electron 3.0.0 releases with experimental textfield, and button APIs

6 signs you need containers

Richard Gall
05 Feb 2019
9 min read
I’m not about to tell you containers is a hot new trend - clearly, it isn’t. Today, they are an important part of the mainstream software development industry that probably won't be disappearing any time soon. But while containers certainly can’t be described as a niche or marginal way of deploying applications, they aren’t necessarily ubiquitous. There are still developers or development teams yet to fully appreciate the usefulness of containers. You might know them - you might even be one of them. Joking aside, there are often many reasons why people aren’t using containers. Sometimes these are good reasons: maybe you just don’t need them. Often, however, you do need them, but the mere thought of changing your systems and workflow can feel like more trouble than it’s worth. If everything seems to be (just about) working, why shake things up? Well, I’m here to tell you that more often than not it is worthwhile. But to know that you’re not wasting your time and energy, there are a few important signs that can tell you if you should be using containers. Download Containerize Your Apps with Docker and Kubernetes for free, courtesy of Microsoft.  Your codebase is too complex There are few developers in the world who would tell you that their codebase couldn’t do with a little pruning and simplification. But if your code has grown into a beast that everyone fears and doesn’t really understand, containers could probably help you a lot. Why do containers help simplify your codebase? Let’s think about how spaghetti code actually happens. Yes, it always happens by accident, but usually it’s something that evolves out of years of solving intractable problems with knock on effects and consequences that only need to be solved later. By using containers you can begin to think differently about your code. Instead of everything being tied up together, like a complex concrete network of road junctions, containers allow you to isolate specific parts of it. When you can better isolate your code, you can also isolate different problems and domains. This is one of the reasons that containers is so closely aligned with microservices. Software testing is nightmarish The efficiency benefits of containers are well documented, but the way containers can help the software testing process is often underplayed - this probably says more about a general inability to treat testing with the respect and time it deserves as much as anything else. How do containers make testing easier? There are a number of reasons containers make software testing easier. On the one hand, by using containers you’re reducing that gap between the development environment and production, which means you shouldn’t be faced with as many surprises once your code hits production as you sometimes might. Containers also make the testing process faster - you only need to test against a container image, you don’t need a fully-fledged testing environment for every application you do tests on. What this all boils down to is that testing becomes much quicker and easier. In theory, then, this means the testing process fits much more neatly within the development workflow. Code quality should never be seen as a bottleneck; with containers it becomes much easier to embed the principle in your workflow. Read next: How to build 12 factor microservices on Docker Your software isn’t secure - you’ve had breaches that could have been prevented Spaghetti code, lack of effective testing can lead to major security risks. 
If no one really knows what’s going on inside your applications and inside your code it’s inevitable that you’ll have vulnerabilities. And, in turn, it’s highly likely these vulnerabilities will be exploited. How can containers make software security easier? Because containers allow you to make changes to parts of your software infrastructure (rather than requiring wholesale changes), this makes security patches much easier to achieve. Essentially, you can isolate the problem and tackle it. Without containers, it becomes harder to isolate specific pieces of your infrastructure, which means any changes could have a knock on effect on other parts of your code that you can’t predict. That all being said, it probably is worth mentioning that containers do still pose a significant set of security challenges. While simplicity in your codebase can make testing easier, you are replacing simplicity at that level with increased architectural complexity. To really feel the benefits of container security, you need a strong sense of how your container deployments are working together and how they might interact. Your software infrastructure is expensive (you feel the despair of vendor lock-in) Running multiple virtual machines can quickly get expensive. In terms of both storage and memory, if you want to scale up, you’re going to be running through resources at a rapid rate. While you might end up spending big on more traditional compute resources, the tools around container management and automation are getting cheaper. One of the costs of many organization’s software infrastructure is lock-in. This isn’t just about price, it’s about the restrictions that come with sticking with a certain software vendor - you’re spending money on software systems that are almost literally restricting your capacity for growth and change. How do containers solve the software infrastructure problem and reduce vendor lock-in? Traditional software infrastructure - whether that’s on-premise servers or virtual ones - is a fixed cost - you invest in the resources you need, and then either use it or you don’t. With containers running on, say, cloud, it becomes a lot easier to manage your software spend alongside strategic decisions about scalability. Fundamentally, it means you can avoid vendor lock-in. Yes, you might still be paying a lot of money for AWS or Azure, but because containers are much more portable, moving your applications between providers is much less hassle and risk. Read next: CNCF releases 9 security best practices for Kubernetes, to protect a customer’s infrastructure DevOps is a war, not a way of working Like containers, DevOps could hardly be considered a hot new trend any more. But this doesn’t mean it’s now part of the norm. There are plenty of organizations that simply don’t get DevOps, or, at the very least, seem to be stumbling their way through sprint meetings with little real alignment between development and operations. There could be multiple causes for this conflict (maybe people just don’t get on), but DevOps often fails where the code that’s being written and deployed is too complicated for anyone to properly take accountability. This takes us back to the issue of the complex codebase. Think of it this way - if code is a gigantic behemoth that can’t be easily broken up, the unintended effects and consequences of every new release and update can cause some big problems - both personally and technically. How do containers solve DevOps challenges? 
Containers can help solve the problems that DevOps aims to tackle by breaking up software into different pieces. This means that developers and operations teams have much more clarity on what code is being written and why, as well as what it should do. Indeed, containers arguably facilitate DevOps practices more effectively than DevOps proponents managed to in pre-container years.

Adding new product features is a pain

The issue of adding features or improving applications is a complaint that reaches far beyond the development team. Product management, marketing - these departments will all bemoan the inability to make necessary changes or add new features that they will argue are business critical. Often, developers will take the heat. But traditional monolithic applications make life difficult for developers - you simply can't make changes or updates easily. It's like wanting to replace a radiator and having to redo your house's plumbing.

This actually returns us to the earlier point about DevOps - containers make DevOps easier because they enable faster delivery cycles. You can make changes to an application at the level of a container or set of containers. Indeed, you might even simply kill one container and replace it with a new one (the short sketch at the end of this article shows roughly what that looks like). In turn, this means you can change and build things much more quickly.

How do containers make it easier to update or build new features?

To continue with the radiator analogy: containers would allow you to replace or change an individual radiator without having to gut your home. Essentially, if you want to add a new feature or change an element, you wouldn't need to go into your application and make wholesale changes - which may have unintended consequences - instead, you can simply make the change by running the resources you need inside a new container (or set of containers).

Watch for the warning signs

As with any technology decision, it's well worth paying careful attention to your own needs and demands. So, before fully committing to containers, or containerizing an application, keep a close eye on the signs that they could be a valuable option. Containers may well force you to come face to face with the reality of technical debt - and if they do, so be it. There's no time like the present, after all.

Of course, all of the problems listed above are ultimately symptoms of broader issues or challenges you face as a development team or wider organization. Containers shouldn't be seen as a sure-fire corrective, but they can be an important element in changing your culture and processes.

Learn how to containerize your apps with a new eBook, free courtesy of Microsoft. Download it here.
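To make the "kill one container and replace it with a new one" idea a little more concrete, here is a minimal Python sketch that rebuilds an image, runs the tests against it, and swaps a single running container using the Docker CLI. The image and container names (todo-api, todo-api-1), the test command, and the published port are all hypothetical; treat this as an illustration rather than a deployment script.

```python
import subprocess

IMAGE = "todo-api:latest"   # hypothetical image name
CONTAINER = "todo-api-1"    # hypothetical container name

def run(cmd):
    # Echo the command and raise if it fails
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Rebuild the image from the local Dockerfile
run(["docker", "build", "-t", IMAGE, "."])

# 2. Run the test suite against the freshly built image
#    (pytest is an assumption about how this project runs its tests)
run(["docker", "run", "--rm", IMAGE, "pytest"])

# 3. Replace only this container: remove the old one, start a new one
subprocess.run(["docker", "rm", "-f", CONTAINER])   # ignore error if it doesn't exist
run(["docker", "run", "-d", "--name", CONTAINER, "-p", "8080:8080", IMAGE])
```

The rest of the application keeps running while this single piece is rebuilt and replaced, which is the point the radiator analogy above is making.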

Understanding Address spaces and subnetting in IPv4 [Tutorial]

Melisha Dsouza
05 Feb 2019
13 min read
In any network, Internet Protocol (IP) addressing is needed to ensure that data is sent to the correct recipient or device. Both IPv4 and IPv6 address schemes are managed by the Internet Assigned Numbers Authority (IANA). Most of the internet that we know today is based on the IPv4 addressing scheme, and it is still the predominant method of communication on both the internet and private networks.

This tutorial is an excerpt from a book written by Glen D. Singh and Rishi Latchmepersad titled CompTIA Network+ Certification Guide. This book is a practical certification guide that covers all CompTIA certification exam topics in an easy-to-understand manner, along with self-assessment scenarios for better preparation.

Public IPv4 addresses

There are two main IPv4 address spaces—the public address space and the private address space. The primary difference between the two is that public IPv4 addresses are routable on the internet, which means that any device that requires communication with other devices on the internet will need a public IPv4 address assigned to the interface that is connected to the internet.

The public address space is divided into five classes:

Class A: 0.0.0.0 – 126.255.255.255
Class B: 128.0.0.0 – 191.255.255.255
Class C: 192.0.0.0 – 223.255.255.255
Class D: 224.0.0.0 – 239.255.255.255
Class E: 240.0.0.0 – 255.255.255.255

Class D addresses are used for multicast traffic. These addresses are not assignable. Class E addresses are reserved for experimental usage and are not assignable.

On the internet, classes A, B, and C are commonly used on devices that are directly connected to the internet, such as layer 3 switches, routers, firewalls, servers, and any other network-related device. As mentioned earlier, there are approximately four billion public IPv4 addresses. However, in a lot of organizations and homes, only one public IPv4 address is assigned to the router or modem's publicly facing interface. The following diagram shows how a public IP address is seen by internet users:

So, what about the devices that require internet access from within the organization or home? There may be a few devices, or hundreds or even thousands of devices, that require an internet connection and an IP address to communicate with the internet from within a company. If ISPs give their customers a single public IPv4 address on their modem or router, how can this single public IPv4 address serve more than one device from within the organization or home?

The internet gateway or router is usually configured with Network Address Translation (NAT), which is the method of mapping either a group of IP addresses or a single IP address on the internet-facing interface to the local area network (LAN). For any devices behind the internet gateway that want to communicate with another device on the internet, NAT will translate the sender's source IP address to the public IPv4 address. Therefore, all of the devices on the internet will see the public IPv4 address and not the sender's actual IP address.

Private IPv4 addresses

As defined by RFC 1918, there are three classes of private IPv4 addresses that are allocated for private use only, meaning within a private network such as a LAN. The benefit of using the private address space (RFC 1918) is that the classes are not unique to any particular organization or group; they can be used within any organization or private network. However, on the internet, a public IPv4 address is unique to a device.
This means that if a device is directly connected to the internet with a private IPv4 address, there will be no network connectivity to devices on the internet. Most ISPs usually have a filter to prevent any private addresses (RFC 1918) from entering their network.

The private address space is divided into three classes:

Class A—10.0.0.0/8 network block: 10.0.0.0 – 10.255.255.255
Class B—172.16.0.0/12 network block: 172.16.0.0 – 172.31.255.255
Class C—192.168.0.0/16 network block: 192.168.0.0 – 192.168.255.255

Subnetting in IPv4

What is subnetting and why do we need to subnet a network? First, subnetting is the process of breaking down a single IP address block into smaller subnetworks (subnets). Second, the reason we need to subnet is to distribute IP addresses efficiently, with the result of less wastage. This brings us to other questions, such as why do we need to break down a single IP address block, and why is least wastage so important? Could we simply assign a Class A, B, or C address block to a network of any size? To answer these questions, we will go more in depth into this topic by using practical examples and scenarios.

Let's assume that you are a network administrator at a local company and one day the IT manager assigns a new task to you. The task is to redesign the IP scheme of the company. He has also told you to use an address class that is suitable for the company's size and to ensure that there is minimal wastage of IP addresses. The first thing you decide to do is draw a high-level network diagram indicating each branch, which shows the number of hosts per branch office and the Wide Area Network (WAN) links between each branch router:

Network diagram

As we can see from the preceding diagram, each building has a branch router, and each router is connected to another using a WAN link. Each branch location has a different number of host devices that require an IP address for network communication.

Step 1 – determining an appropriate class of address and why

The subnet mask can tell us a lot about a network, such as the following:

The network and host portion of an IP address
The number of hosts within a network

If we use a network block from either of the address classes, we will get the following available hosts:

As you may remember, the network portion of an address is represented by 1s in the subnet mask, while the 0s represent the host portion. We can use the following formula to calculate the total number of IP addresses within a subnet when we know the number of host bits in the subnet mask. Using the formula 2^H, where H represents the number of host bits, we get the following results:

Class A = 2^24 = 16,777,216 total IPs
Class B = 2^16 = 65,536 total IPs
Class C = 2^8 = 256 total IPs

In IPv4, there are two IPs that cannot be assigned to any devices. These are the Network ID and the Broadcast IP address. Therefore, you need to subtract two addresses from the total IP formula. Using the formula 2^H – 2 to calculate usable IPs, we get the following:

Class A = 2^24 – 2 = 16,777,214 usable IPs
Class B = 2^16 – 2 = 65,534 usable IPs
Class C = 2^8 – 2 = 254 usable IPs

Looking back at the network diagram, we can identify the following seven networks:

Branch A LAN: 25 hosts
Branch B LAN: 15 hosts
Branch C LAN: 28 hosts
Branch D LAN: 26 hosts
WAN R1-R2: 2 IPs are needed
WAN R2-R3: 2 IPs are needed
WAN R3-R4: 2 IPs are needed

Determining the appropriate address class depends on the largest network and the number of networks needed.
Currently, the largest network is Branch C, which has 28 host devices that need an IP address. We can use the smallest available class, which is any Class C address, because it will be able to support the largest network we have. However, to do this, we need to choose a Class C address block. Let's use the 192.168.1.0/24 block.

Remember, the subnet mask is used to identify the network portion of the address. This also means that we are unable to modify the network portion of the IP address when we are subnetting, but we can modify the host portion:

The first 24 bits represent the network portion and the remaining 8 bits represent the host portion. Using the formula 2^H – 2 to calculate the number of usable host IPs, we get the following:

2^H – 2
2^8 – 2 = 256 – 2 = 254 usable IP addresses

If we assign this single network block to any one of the seven networks, a lot of IP addresses will be wasted. Therefore, we need to apply our subnetting techniques to this Class C address block.

Step 2 – creating subnets (subnetworks)

To create more subnets or subnetworks, we need to borrow bits from the host portion of the network. The formula 2^N is used to calculate the number of subnets, where N is the number of bits borrowed from the host portion. Once these bits are borrowed, they will become part of the network portion and a new subnet mask will be produced.

So far, we have a Network ID of 192.168.1.0/24. We need to get seven subnets, and each subnet should be able to fit our largest network (which is Branch C—28 hosts). Let's create our subnets. Remember that we need to borrow bits on the host portion, starting where the 1s end in the subnet mask. Let's borrow two host bits and apply them to our formula to determine whether we are able to get the seven subnets:

When bits are borrowed from the host portion, those bits are changed to 1s in the subnet mask. This produces a new subnet mask for all of the subnets that have been created. Let's use our formula for calculating the number of networks:

Number of Networks = 2^N
2^2 = 2 x 2 = 4 networks

As we can see, two host bits are not enough, as we need at least seven networks. Let's borrow one more host bit:

Once again, let's use our formula for calculating the number of networks:

Number of Networks = 2^N
2^3 = 2 x 2 x 2 = 8 networks

Using 3 host bits, we are able to get a total of 8 subnets. In this situation, we have one additional network, and this additional network can be placed aside for future use if there's an additional branch in the future. Since we borrowed 3 bits, we have 5 host bits remaining. Let's use our formula for calculating usable IP addresses:

Usable IP addresses = 2^H – 2
2^5 – 2 = 32 – 2 = 30 usable IPs

This means that each of the 8 subnets will have a total of 32 IP addresses, with 30 usable IP addresses each. Now we have a perfect match. Let's work out our 8 new subnets. The guidelines we must follow at this point are as follows:

We cannot modify the network portion of the address (red)
We cannot modify the host portion of the address (black)
We can only modify the bits that we borrowed (green)

Starting with the Network ID, we get the following eight subnets:

We can't forget about the subnet mask:

As we can see, there are twenty-seven 1s in the subnet mask, which gives us 255.255.255.224, or /27, as the new subnet mask for all eight subnets we've just created. Take a look at each of the subnets: they all have a fixed increment of 32. A quick method to calculate the increment is to use the formula 2^x, where x is the number of remaining host bits. The short Python sketch below reproduces these eight subnets and their usable host counts.
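As a quick sanity check of the arithmetic above, Python's standard ipaddress module can enumerate the /27 subnets of 192.168.1.0/24 and count the usable hosts in each. This is only a verification aid, not part of the original exam-prep workflow.

```python
import ipaddress

block = ipaddress.ip_network("192.168.1.0/24")

# Borrowing 3 host bits turns the /24 into /27 subnets (2**3 = 8 of them)
subnets = list(block.subnets(new_prefix=27))

for net in subnets:
    usable = net.num_addresses - 2          # subtract Network ID and broadcast
    print(f"{net}  mask={net.netmask}  "
          f"first={net.network_address + 1}  "
          f"last={net.broadcast_address - 1}  usable={usable}")

print("Subnets:", len(subnets))             # 8
print("Increment:", 2 ** (32 - 27))         # 32 addresses per subnet
```

Running it lists 192.168.1.0/27 through 192.168.1.224/27, each with 30 usable addresses, matching the manual calculation.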
The 2^x shortcut makes it much easier to work out the decimal notation of each subnet than calculating the binary. The last network in any subnet scheme always ends with the customized last octet of the new subnet mask. From our example, the new subnet mask 255.255.255.224 ends with 224, and the last subnet also ends with the same value, 192.168.1.224.

Step 3 – assigning each network an appropriate subnet and calculating the ranges

To determine the first usable IP address within a subnet, the first host bit from the right must be 1. To determine the last usable IP address within a subnet, all of the host bits except the first bit from the right should be 1s. The broadcast IP of any subnet is when all of the host bits are 1s.

Let's take a look at the first subnet. We will assign subnet 1 to the Branch A LAN:

The second subnet will be allocated to the Branch B LAN:

The third subnet will be allocated to the Branch C LAN:

The fourth subnet will be allocated to the Branch D LAN:

At this point, we have successfully allocated subnets 1 to 4 to each of the branch's LANs. During our initial calculation for determining the size of each subnet, we saw that each of the eight subnets is equal, and that we have 32 total IPs with 30 usable IP addresses. Currently, we have subnets 5 to 8 left for allocation, but if we allocate subnets 5, 6, and 7 to the WAN links between the branches R1-R2, R2-R3, and R3-R4, we would be wasting 28 IP addresses per link, since each WAN link (point-to-point) only requires 2 IP addresses.

What if we could take one of our existing subnets and create even more, but smaller, networks to fit each WAN (point-to-point) link? We can do this with a process known as Variable Length Subnet Masking (VLSM). By using this process, we are subnetting a subnet. For now, we will place aside subnets 5, 6, and 7 as a future reservation for any future branches:

Step 4 – VLSM and subnetting a subnet

For the WAN links, we need at least three subnets. Each must have a minimum of two usable IP addresses. To get started, let's use the following formula to determine the number of host bits that are needed so that we have at least two usable IP addresses: 2^H – 2, where H is the number of host bits.

We are going to use one bit: 2^1 – 2 = 2 – 2 = 0 usable IP addresses. Let's add an extra host bit to our formula, that is, 2^2 – 2 = 4 – 2 = 2 usable IP addresses. At this point, we have a perfect match, and we know that only two host bits are needed to give us our WAN (point-to-point) links. We are going to use the following guidelines:

We cannot modify the network portion of the address (red)
Since we know that two host bits are needed to represent two usable IP addresses, we can lock them into place (purple)
The bits between the network portion (red) and the locked-in host bits (purple) will be the new network bits (black)

To calculate the number of networks, we can use 2^N = 2^3 = 8 networks. Even though we got a lot more networks than we actually needed, the remainder of the networks can be set aside for future use. To calculate the total IPs and the increment, we can use 2^H = 2^2 = 4 total IP addresses (inclusive of the Network ID and broadcast IP addresses). To calculate the number of usable IP addresses, we can use 2^H – 2 = 2^2 – 2 = 2 usable IP addresses per network.

Let's work out our eight new subnets for any existing and future WAN (point-to-point) links. Now that we have eight new subnets, let's allocate them accordingly; first, the short sketch below reproduces this VLSM step in Python.
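For completeness, here is the same VLSM step sketched with the ipaddress module. The tutorial doesn't state which /27 gets carved up for the WAN links, so this sketch assumes the last one, 192.168.1.224/27, purely for illustration.

```python
import ipaddress

# Assumption: the last /27 from the earlier step is the one subdivided for WAN links
wan_block = ipaddress.ip_network("192.168.1.224/27")

# Two host bits per point-to-point link means /30 subnets (2**3 = 8 of them)
for link in wan_block.subnets(new_prefix=30):
    hosts = list(link.hosts())              # the 2 usable addresses
    print(f"{link}  usable={hosts[0]} - {hosts[1]}")
```

Each /30 has 4 total addresses and 2 usable ones, which is exactly what a point-to-point WAN link needs.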
The first subnet will be allocated to WAN 1, R1-R2:

The second subnet will be allocated to WAN 2, R2-R3:

The third subnet will be allocated to WAN 3, R3-R4:

Now that we have allocated the first three subnets to each of the WAN links, the remaining subnets can be set aside for any future branches which may need another WAN link. These will be assigned for future reservation:

Summary

In this tutorial, we understood public and private IPv4 addresses. We also learned the importance of having a subnet and saw the 4 simple steps needed to complete the subnetting process.

To learn from industry experts and implement their practices to resolve complex IT issues and effectively pass and achieve this certification, check out our book CompTIA Network+ Certification Guide.

AWS announces more flexibility its Certification Exams, drops its exam prerequisites
Top 10 IT certifications for cloud and networking professionals in 2018
What matters on an engineering resume? Hacker Rank report says skills, not certifications

Snopes will no longer do fact-checking work for Facebook, ends its partnership with the firm

Alan Thorn
04 Feb 2019
4 min read
A leading fact-checking agency, Snopes, announced last week that it's terminating its partnership with Facebook and will no longer aid in reducing the spread of misinformation and fake news on the platform. "We are evaluating the ramifications and costs of providing third-party fact-checking services, and we want to determine with certainty that our efforts to aid any particular platform are a net positive for our online community, publication, and staff", reads the statement by David Mikkelson, CEO, Snopes, and Vinny Green, VP operations, Snopes.

Facebook had decided to partner with third-party fact-checking firms at the end of 2016, to get help in combating false news on its platform following the 2016 US elections. One such firm to partner with Facebook was Snopes, which contributed to Facebook for two years.

The Snopes team mentions that when they contributed to Facebook's initial fact-checking effort in December 2016, there were no financial benefits (payment offer) involved. However, Facebook did offer a lump-sum $100,000 payment for their work in 2017.

Other than that, Green told Poynter that part of the reason why Snopes withdrew its partnership with Facebook is that third-party fact-checking for Facebook didn't seem practical to the publishers within Snopes. He mentions that fact-checkers had to manually enter the false news posts on Facebook that they flag into the dashboard, which in turn requires a lot of time and is not possible for a team that has only 16 people.

"It doesn't seem like we're striving to make third-party fact-checking more practical for publishers — it seems like we're striving to make it easier for Facebook. The work that fact-checkers are doing doesn't need to be just for Facebook — we can build things for fact-checkers that benefit the whole web, and that can also help Facebook", Green told Poynter.

Offering fact-checking services for Facebook has been a subject of controversy within Snopes, as The Guardian recently quoted Brooke Binkowski, Snopes' former managing editor, and Kim LaCapria, a former fact-checker, in a report published in December last year. As per the reports, the former Snopes employees mentioned that Facebook 'didn't care' about the fact-checking firms. "They've essentially used us for crisis PR. They're not taking anything seriously. They are more interested in making themselves look good and passing the buck … They clearly don't care," said Binkowski.

Regarding the current news, a Facebook spokesperson told Poynter that, despite Snopes pulling out of the partnership, Facebook will continue to improve its platform and work with fact-checkers around the world.

Another agency, the Associated Press (AP), is also currently in negotiations over its role as a fact-checking agency on Facebook. An AP spokesperson told TechCrunch it's not doing any fact-checking work for Facebook currently and is in an ongoing discussion with Facebook about opportunities related to doing more important fact-checking work on Facebook. AP doesn't plan on leaving Facebook but is in talks with the company and hopes to start the fact-checking work soon, as reported by TechCrunch.

The Snopes team also mentioned that they've not entirely ruled out working with Facebook and are willing to have an open dialogue and discussion with Facebook over its approaches to fighting misinformation. "We will continue to be pioneers in a challenging digital media landscape, forever looking for opportunities to cultivate our publication and increase our impact.
Our extremely talented and dedicated staff stands ready for the challenges ahead", states the Snopes team.

Read Next

Facebook faces multiple data-protection investigations in Ireland
Facebook pays users $20/month to install a 'Facebook Research' VPN that spies on their phone and web activities, TechCrunch reports
Facebook hires top EFF lawyer and Facebook critic as Whatsapp privacy policy manager

How to recover deleted data from an Android device [Tutorial]

Sugandha Lahoti
04 Feb 2019
11 min read
In this tutorial, we are going to learn about data recovery techniques that enable us to view data that has been deleted from a device. Deleted data could contain highly sensitive information and thus data recovery is a crucial aspect of mobile forensics. This article will cover the following topics:

Data recovery overview
Recovering data deleted from an SD card
Recovering data deleted from a phone's internal storage

This article is taken from the book Learning Android Forensics by Oleg Skulkin, Donnie Tindall, and Rohit Tamma. This book is a comprehensive guide to Android forensics, from setting up the workstation to analyzing key artifacts.

Data recovery overview

Data recovery is a powerful concept within digital forensics. It is the process of retrieving deleted data from a device or SD card when it cannot be accessed normally. Being able to recover data that has been deleted by a user could help solve civil or criminal cases. This is because many accused persons simply delete data from their device hoping that the evidence will be destroyed. Thus, in most criminal cases, deleted data could be crucial because it may contain information the user wanted to erase from their Android device.

For example, consider the scenario where a mobile phone has been seized from a terrorist. Wouldn't it be of the greatest importance to know which items were deleted by them? Access to any deleted SMS messages, pictures, dialed numbers, and so on could be of critical importance as they may reveal a lot of sensitive information.

From a normal user's point of view, recovering data that has been deleted would usually mean referring to the operating system's built-in solutions, such as the Recycle Bin in Windows. While it's true that data can be recovered from these locations, due to an increase in user awareness, these options often don't work. For instance, on a desktop computer, people now use Shift + Del whenever they want to delete a file completely from their desktop. Similarly, in mobile environments, users are aware of the restore operations provided by apps and so on. In spite of these situations, data recovery techniques allow a forensic investigator to access the data that has been deleted from the device.

With respect to Android, it is possible to recover most of the deleted data, including SMS, pictures, application data, and so on. But it is important to seize the device in a proper manner and follow certain procedures; otherwise, data might be deleted permanently. To ensure that the deleted data is not lost forever, it is recommended to keep the following points in mind:

Do not use the phone for any activity after seizing it. A deleted text message exists on the device until the space is needed by some other incoming data, so the phone must not be used for any sort of activity, to prevent the data from being overwritten.
Even when the phone is not used, without any intervention from our end, data can be overwritten. For instance, an incoming SMS would automatically occupy space, which overwrites the deleted data. Also, remote wipe commands can wipe the content present on the device. To prevent such events, you can consider the option of placing the device in a Faraday bag. Thus, care should be taken to prevent delivery of any new messages or data through any means of communication.

How can deleted files be recovered?

When a user deletes any data from a device, the data is not actually erased from the device and continues to exist on it. What gets deleted is the pointer to that data.
All filesystems contain metadata, which maintains information about the hierarchy of files, filenames, and so on. Deletion will not really erase the data but instead removes the file system metadata. Thus, when text messages or any other files are deleted from a device, they are just made invisible to the user, but the files are still present on the device as long as they are not overwritten by some other data. Hence, there is the possibility of recovering them before new data is added and occupies the space. Deleting the pointer and marking the space as available is an extremely fast operation compared to actually erasing all the data from the device. Hence, to increase performance, operating systems just delete the metadata. Recovering deleted data on an Android device involves three scenarios: Recovering data that is deleted from the SD card such as pictures, videos, and so on Recovering data that is deleted from SQLite databases such as SMS, chats, web history, and so on Recovering data that is deleted from the device's internal storage The following sections cover the techniques that can be used to recover deleted data from SD cards, and the internal storage of the Android device. Recovering deleted data from SD cards Data present on an SD card can reveal lots of information that is useful during a forensic investigation. The fact that pictures, videos, voice recordings, and application data are stored on the SD card adds weight to this. As mentioned in the previous chapters, Android devices often use FAT32 or exFAT file systems on their SD card. The main reason for this is that these file systems are widely supported by most operating systems, including Windows, Linux, and macOS X. The maximum file size on a FAT32 formatted drive is around 4 GB. With increasingly high-resolution formats now available, this limit is commonly reached, that's why newer devices support exFAT: this file system doesn't have such limitations. Recovering the data deleted from an external SD is pretty easy if it can be mounted as a drive. If the SD card is removable, it can be mounted as a drive by connecting it to a computer using a card reader. Any files can be transferred to the SD card while it's mounted. Some of the older devices that use USB mass storage also mount the device to a drive when connected through a USB cable. In order to make sure that the original evidence is not modified, a physical image of the disk is taken and all further experimentation is done on the image itself. Similarly, in the case of SD card analysis, an image of the SD card needs to be taken. Once the imaging is done, we have a raw image file. In our example, we will use FTK Imager by AccessData, which is an imaging utility. In addition to creating disk images, it can also be used to explore the contents of a disk image. The following are the steps that can be followed to recover the contents of an SD card using this tool: Start FTK Imager and click on File and then Add Evidence Item... in the menu, as shown in the following screenshot: Adding evidence source to FTK Imager Select Image File in the Select Source dialog and click on Next. In the Select File dialog, browse to the location where you downloaded the sdcard.dd file, select it, and click on Finish, as shown in the following screenshot: Selecting the image file for analysis in FTK Imager FTK Imager's default display will appear with the contents of the SD card visible in the View pane at the lower right. 
You can also click on the Properties tab below the lower left pane to view the properties for the disk image. Now, on the left pane, the drive has opened. You can open folders by clicking on the + sign. When highlighting the folder, contents are shown on the right pane. When a file is selected, its contents can be seen on the bottom pane. As shown in the following screenshot, the deleted files will have a red X over the icon derived from their file extension: Deleted files shown with red X over the icons As shown in the following screenshot, to export the file, right-click on the file that contains the picture and select Export Files...: Sometimes, only a fragment of the file is recoverable, which cannot be read or viewed directly. In that case, we need to look through free or unallocated space for more data. Carving can be used to recover files from free and unallocated space. PhotoRec is one of the tools that can help you to do that. You will learn more about file carving with PhotoRec in the following sections. Recovering deleted data from internal memory Recovering files deleted from Android's internal memory, such as app data and so on, is not as easy as recovering such data from SD cards and SQLite databases, but, of course, it's not impossible. Many commercial forensic tools are capable of recovering deleted data from Android devices, of course, if a physical acquisition is possible and the user data partition isn't encrypted.  But this is not very common for modern devices, especially those running most recent versions of the operating system, such as Oreo and Pie. Most Android devices, especially modern smartphones, and tablets, use the EXT4 file system to organize data in their internal storage. This file system is very common for Linux-based devices. So, if we want to recover deleted data from the device's internal storage, we need a tool capable of recovering deleted files from the EXT4 file system. One such tool is extundelete. The tool is available for downloading here: http://extundelete.sourceforge.net/. To recover the contents of an inode, extundelete searches a file system's journal for an old copy of that inode. Information contained in the inode helps the tool to locate the file within the file system. To recover not only the file's contents, but also its name, extundelete is able to search the deleted entries in a directory to match the inode number of a file to a file name. To use this tool, you will need a Linux workstation. Most forensic Linux distributions have it already on board. For example, the following is a screenshot from SIFT Workstation—a popular digital forensics and incident response Linux distribution created by Rob Lee and his team from the SANS Institute (https://digital-forensics.sans.org/community/downloads): extundelete command-line options Before you can start the recovery process, you will need to mount a previously imaged userdata partition. In this example, we are going to use an Android device imaged via the chip-off technique. First of all, we need to determine the location of the userdata partition within the image. To do this, we can use mmls from the Sleuth Kit, as shown in the following screenshot: Android device partitions As you can see in the screenshot, the userdata partition is the last one and starts in sector 9199616. 
To make sure the userdata partition is EXT4 formatted, let's use fsstat, as shown in the following example:

A part of fsstat output

All you need to do now is mount the userdata partition and run extundelete against it, as shown in the following example:

extundelete /userdata/partition/mount/point --restore-all

All recovered files will be saved to a subdirectory of the current directory named RECOVERED_FILES. If you are interested in recovering files created before or after a specified date, you can use the --before and --after options. It's important to note that these dates must be in UNIX Epoch format. There are quite a lot of both online and offline tools capable of converting timestamps; for example, you can use https://www.epochconverter.com/ (a short Python sketch at the end of this article shows another way to do the conversion, and how to work out the mount offset from the sector number reported by mmls).

As you can see, this method isn't very easy or fast, but there is a better way: using Autopsy, an open source digital forensic tool. In the following example, we used a built-in file extension filter to find all the images on the Android device, and found a lot of deleted artifacts:

Recovering deleted files from an EXT4 partition with Autopsy

Summary

Data recovery is the process of retrieving deleted data from the device and thus is a very important concept in forensics. In this chapter, we have seen various techniques to recover deleted data from both the SD card and the internal memory. While recovering the data from a removable SD card is easy, recovering data from internal memory involves a few complications. SQLite file parsing and file carving techniques aid a forensic analyst in recovering the deleted items present in the internal memory of an Android device.

In order to understand the forensic perspective and the analysis of Android apps, read our book Learning Android Forensics.

What role does Linux play in securing Android devices?
How the Titan M chip will improve Android security
Getting your Android app ready for the Play Store [Tutorial]
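As a small companion to the steps above, here is a hedged Python sketch that converts a human-readable cut-off date into the UNIX Epoch value that extundelete's date options expect, and computes the byte offset for mounting a partition from a full disk image based on the starting sector reported by mmls. It assumes 512-byte sectors (the unit mmls used in this walkthrough) and a hypothetical image file name; verify both against your own evidence before relying on the numbers.

```python
from datetime import datetime, timezone

# Convert a cut-off date to UNIX Epoch seconds for extundelete's date options
cutoff = datetime(2019, 2, 1, 0, 0, 0, tzinfo=timezone.utc)
epoch = int(cutoff.timestamp())
print(f"pass --after {epoch} to extundelete")

# Work out the byte offset of the userdata partition inside the disk image.
# Assumption: mmls reported the partition table in 512-byte sectors.
SECTOR_SIZE = 512
start_sector = 9199616                      # taken from the mmls output above
offset = start_sector * SECTOR_SIZE
# 'android_image.dd' is a placeholder name for the chip-off image
print(f"mount -o ro,loop,offset={offset} android_image.dd /mnt/userdata")
```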

How to extract SIM card data from Android devices [Tutorial]

Sugandha Lahoti
03 Feb 2019
9 min read
This tutorial discusses logical data extraction and one of its subtopics, Android SIM card extraction. This article is taken from the book Learning Android Forensics by Oleg Skulkin, Donnie Tindall, and Rohit Tamma. This book explores open source and commercial forensic tools and teaches readers the basic skills of Android malware identification and analysis.

Logical extraction overview

In digital forensics, the term logical extraction is typically used to refer to extractions that don't recover deleted data or do not include a full bit-by-bit copy of the evidence. However, a more correct definition of logical extraction is any method that requires communication with the base operating system. Because of this interaction with the operating system, a forensic examiner cannot be sure that they have recovered all of the data possible; the operating system is choosing which data it allows the examiner to access.

In traditional computer forensics, logical extraction is analogous to copying and pasting a folder in order to extract data from a system; this process will only copy files that the user can access and see. If any hidden or deleted files are present in the folder being copied, they won't be in the pasted version of the folder.

As you'll see, however, the line between logical and physical extractions in mobile forensics is somewhat blurrier than in traditional computer forensics. For example, deleted data can routinely be recovered from logical extractions on mobile devices due to the prevalence of SQLite databases being used to store data. Furthermore, almost every mobile extraction will require some form of interaction with the Android OS; there's no simple equivalent to pulling a hard drive and imaging it without booting the drive.

What data can be recovered logically?

For the most part, any and all user data may be recovered logically:

Contacts
Call logs
SMS/MMS
Application data
System logs and information

The bulk of this data is stored in SQLite databases, so it's even possible to recover large amounts of deleted data through a logical extraction.

Root access

When forensically analyzing an Android device, the limiting factor is often not the type of data being sought, but rather whether or not the examiner has the ability to access the data. All of the data listed previously, when stored on the internal flash memory, is protected and requires root access to read. The exception to this is application data that is stored on the SD card, which will be discussed later in this book.

Without root access, a forensic examiner cannot simply copy information from the /data partition. The examiner will have to find some method of escalating privileges in order to gain access to the contacts, call logs, SMS/MMS, and application data. These methods often carry many risks, such as the potential to destroy or brick the device (making it unable to boot), and may alter data on the device in order to gain permanence. The methods commonly vary from device to device, and there is no universal, one-click method to gain root access to every device. Commercial mobile forensic tools such as Oxygen Forensic Detective and Cellebrite UFED have built-in capabilities to temporarily and safely root many devices, but they do not cover the full range of Android devices. The decision to root a device should be in accordance with your local operating procedures and court opinions in your jurisdiction. The legal acceptance of evidence obtained by rooting varies by jurisdiction.
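To make the point about root access and SQLite-backed user data more concrete, here is a rough Python sketch of pulling the contacts database over adb and listing its tables. The database path shown is a common location on many Android builds but varies by device and OS version, and adb must already have root privileges on the device (for example, a rooted test unit); treat every path and command here as an assumption to verify against your own device and procedures.

```python
import sqlite3
import subprocess

# Assumed path; it differs between devices and Android versions
DB_REMOTE = "/data/data/com.android.providers.contacts/databases/contacts2.db"
DB_LOCAL = "contacts2.db"

# Requires adb running with root access on the device
subprocess.run(["adb", "pull", DB_REMOTE, DB_LOCAL], check=True)

# Open the pulled copy read-only and list its tables
conn = sqlite3.connect(f"file:{DB_LOCAL}?mode=ro", uri=True)
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
).fetchall()
print([t[0] for t in tables])
conn.close()
```

Without root, the adb pull step simply fails with a permission error, which is exactly the limitation described above.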
Android SIM card extractions

Traditionally, SIM cards were used for transferring data between devices. SIM cards in the past were used to store many different types of data, such as the following:

User data:
Contacts
SMS messages
Dialed calls

Network data:
Integrated Circuit Card Identifier (ICCID): Serial number of the SIM
International Mobile Subscriber Identity (IMSI): Identifier that ties the SIM to a specific user account
MSISDN: Phone number assigned to the SIM
Location Area Identity (LAI): Identifies the cell that a user is in
Authentication Key (Ki): Used to authenticate the mobile network
Various other network-specific information

With the rise in capacity of device storage, SD cards, and cloud backups, the necessity for storing data on a SIM card has decreased. As such, most modern smartphones typically do not store much, if any, user data on the SIM card. All of the network data listed previously does still reside on the SIM, as a SIM is necessary to connect to all modern (4G) cellular networks.

As with all Android devices, though, there is no concrete stipulation that user data can't be stored on a SIM; it simply doesn't happen by default. Individual device manufacturers can easily decide to write user data to the SIM, and individual users can download applications to provide that functionality. This means that a device's SIM card should always be examined during a forensic examination. It is a very quick process, and should never be overlooked.

Acquiring SIM card data

The SIM card should always be removed from the device and examined separately. While some tools claim to read the SIM card through the device interface, this may not recover deleted data or all data on the SIM; the only way for an examiner to be certain all data was acquired is to read the SIM through a standalone SIM card reader with a tool that has been tested and verified.

The location of the SIM will vary by device, but it is typically either stored beneath the battery or in a tray located on the side of the device. Once the SIM is removed, it should be placed in a SIM card reader. There are hundreds of SIM card readers available in the marketplace, and all major mobile forensics tools come with an included reader that will work with their software. Oftentimes, the forensic tools will also support third-party SIM readers as well.

There is a surprising lack of thorough, free SIM card reading software available. Any software used should always be tested and validated on a SIM card that has been populated with known data prior to being used in an actual forensic investigation. Also, keep in mind that much of the free software available works for older 2G/3G SIMs, but may not work properly on a modern 4G SIM. We used Mobiledit! Lite, a free version of Mobiledit!, for the following screenshots. It is available at: http://www.mobiledit.com/downloads.
The following is a sample 4G SIM card extraction from an Android phone running version 4.4.4; note that nothing that could be considered user data was acquired despite the SIM being used actively for over a year, though fields such as the ICCID, IMSI, and MSISDN (own phone number) could be useful for subpoenas/warrants or other aspects of an investigation:

SIM card extraction overview

The following screenshot highlights SMS messages on the SIM card:

The following screenshot highlights the phonebook of the SIM card:

The following screenshot highlights the phone number of the SIM card (also called the MSISDN):

SIM Security

Due to the fact that SIM cards conform to established, international standards, all SIM cards provide the same security functionality: a 4- to 8-digit PIN. Generally, this PIN must be set through a menu on the device. On Android devices, this setting is found at Settings | Security | Set up SIM card lock. The SIM PIN is completely independent of any lock screen security settings and only has to be entered when the device boots. The SIM PIN only protects user data on the SIM; all network information is still recoverable even if the SIM is PIN locked.

The SIM card will allow three attempts to enter the PIN; if one of these attempts is correct, the counter will reset. On the other hand, if all of these attempts are incorrect, the SIM will enter Personal Unblocking Key (PUK) mode. The PUK is an 8-digit number assigned by the carrier and is frequently found on documentation when the SIM is purchased. Bypassing a PUK is not possible with any commercial forensic software; because of this, an examiner should never attempt to enter the PIN on the device, as the device will not indicate how many attempts remain before the PUK is activated. An examiner could unwittingly PUK lock the SIM and be unable to access the device. Forensic tools, however, will show how many attempts remain before the PUK is activated, as seen in the previous screenshots.

Common carrier defaults for SIM PINs are 0000 and 1234. If three tries remain before activating the PUK, an examiner may successfully unlock the SIM with one of these defaults. Carriers frequently retain PUK keys when a SIM is issued. These may be available through a subpoena or warrant issued to the carrier.

SIM cloning

The SIM PIN itself provides almost no additional security, and can easily be bypassed through SIM cloning. SIM cloning is a feature provided in almost all commercial mobile forensic software, although the term cloning is somewhat misleading. SIM cloning, in the case of mobile forensics, is the process of copying the network data from a locked SIM onto a forensically sterile SIM that does not have the PIN activated. The phone will identify the cloned SIM based on this network data (typically the ICCID and IMSI) and think that it is the same SIM that was inserted previously, but this time there will be no SIM PIN. This cloned SIM will also be unable to access the cellular network, which makes it an effective solution similar to Airplane Mode. Therefore, SIM cloning will allow an examiner to access the device, but the user data on the original SIM is still inaccessible as it remains protected by the PIN.

We are unaware of any free software that performs forensic SIM cloning. It is supported by almost all commercial mobile forensic kits, however. These kits will typically include a SIM card reader, software to perform the clone, as well as multiple blank SIM cards for the cloning process.
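Because identifiers such as the ICCID end up in reports, subpoenas, and cloning workflows, it can be handy to sanity-check that a transcribed value is at least plausible. ICCIDs carry a Luhn check digit, so a few lines of Python can validate one; the sample number below is made up for illustration (it is not a real SIM serial number).

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn check."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Double every second digit from the right, subtracting 9 if it exceeds 9
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Hypothetical ICCID constructed to carry a valid Luhn check digit
print(luhn_valid("8914800000000000006"))
```

A failed check doesn't prove the SIM data is wrong, but it is a cheap way to catch transcription errors before they propagate into paperwork.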
This article has covered SIM card extraction, which is a subtopic of logical extractions of Android devices. To know more about the other methods of logical extractions in Android devices, read our book Learning Android Forensics.

What role does Linux play in securing Android devices?
How the Titan M chip will improve Android security
Getting your Android app ready for the Play Store [Tutorial]

Creating views in Odoo 12 - List, Form, Search [Tutorial]

Sugandha Lahoti
02 Feb 2019
10 min read
Odoo provides a rapid application development framework that's particularly suited to building business applications. This type of application is usually concerned with keeping business records, centered around create, read, update, and delete (CRUD) operations. Not only does Odoo make it easy to build this type of application, but it also provides rich components to create compelling user interfaces, such as kanban, calendar, and graph views.

In this tutorial, we will create list, form, and search views, the basic building blocks for the user interface. This article is taken from the book Odoo 12 Development Essentials by Daniel Reis. This book will teach you to build a business application from scratch using Odoo 12.

Technical requirements

The minimal requirement is for you to have a modern web browser, such as Firefox, Chrome, or Edge. You may go a little further and use a packaged Odoo distribution to have it locally installed on your computer. For that, you only need an operating system such as Windows, macOS, Debian-based Linux (such as Ubuntu), or Red Hat-based Linux (such as Fedora). Windows, Debian, and Red Hat have installation packages available. Another option is to use Docker, available for all these systems and for macOS.

In this article, we will mostly have point-and-click interaction with the user interface. You will find the code snippets used and a summary of the steps performed in the book's code repository, under the ch01 folder.

It's important to note that Odoo databases are incompatible between Odoo major versions. If you run an Odoo 11 server against a database created for a previous major version of Odoo, it won't work. Non-trivial migration work is needed before a database can be used with a later version of the product. The same is true for add-on modules: as a general rule, an add-on module developed for an Odoo major version will not work on other versions. When downloading a community module from the web, make sure it targets the Odoo version you are using.

On the other hand, major releases (10.0, 11.0) are expected to receive frequent updates, but these should be mostly bug fixes. They are assured to be API-stable, meaning that model data structures and view element identifiers will remain stable. This is important because it means there will be no risk of custom modules breaking due to incompatible changes in the upstream core modules.

Creating a new Model

Models are the basic components for applications, providing the data structures and storage to be used. We will create the Model for To-do Items. It will have three fields:

Description
Is done? flag
Work team partner list

Model definitions are accessed in the Settings app, in the Technical | Database Structure | Models menu. To create a Model, follow these steps:

Visit the Models menu, and click on the upper-left Create button.
Fill in the new Model form with these values:
Model Description: To-do Item
Model: x_todo_item

We should save it before we can properly add new fields to it. So, click on Save and then Edit it again. You can see that a few fields were automatically added. The ORM includes them in all Models, and they can be useful for audit purposes.

The x_name (or Name) field is a title representing the record in lists or when it is referenced in other records. It makes sense to use it for the To-do Item title. You may edit it and change the Field Label to a more meaningful description.

Adding the Is Done? flag to the Model should be straightforward now.
In the Fields list, click on Add a line, at the bottom of the list, to create a new field with these values:

Field Name: x_is_done
Field Label: Is Done?
Field Type: boolean

The new Fields form should look like this:

Now, something a little more challenging is to add the Work Team selection. Not only is it a relation field, referring to a record in the res.partner Model, it is also a multiple-value selection field. In many frameworks, this is not a trivial task, but fortunately, that's not the case in Odoo, because it supports many-to-many relations. This is the case because one to-do can have many people, and each person can participate in many to-do items. In the Fields list, click again on Add a line to create the new field:

Field Name: x_work_team_ids
Field Label: Work Team
Field Type: many2many
Object Relation: res.partner
Domain: [('x_is_work_team', '=', True)]

The many-to-many field has a few specific definitions—the Relation Table, Column 1, and Column 2 fields. These are automatically filled out for you and the defaults are good for most cases, so we don't need to worry about them now.

The domain attribute is optional, but we used it so that only eligible work team members are selectable from the list. Otherwise, all partners would be available for selection. The Domain expression defines a filter for the records to be presented. It follows an Odoo-specific syntax—it is a list of triplets, where each triplet is a filter condition, indicating the Field Name to filter, the filter operator to use, and the value to filter against. Odoo has an interactive domain filter wizard that can be used as a helper to generate Domain expressions. You can use it at Settings | User Interface | User-defined Filters. Once a target Model is selected in the form, the Domain field will display an add filter button, which can be used to add filter conditions, and the text box below it will dynamically show the corresponding Domain expression code. The short Python sketch after this section shows the same kind of domain being used from the server-side ORM.

Creating views

We have created the To-do Items Model. Next, we will be creating the two essential views for it—a list (also called a tree) and a form.

List views

We will now create a list view:

In Settings, navigate to Technical | User Interface | Views and create a new record with the following values:

View Name: To-do List View
View Type: Tree
Model: x_todo_item

This is how the View definition is expected to look:

In the Architecture tab, we should write XML with the view structure. Use the following XML code:

<tree>
  <field name="x_name" />
  <field name="x_is_done" />
</tree>

The basic structure of a list view is quite simple—a <tree> element containing one or more <field> elements for each of the columns to display in the list view.

Form views

Next, we will create the form view:

Create another View record, using the following values:

View Name: To-do Form View
View Type: Form
Model: x_todo_item

If we don't specify the View Type, it will be auto-detected from the view definition. In the Architecture tab, type the following XML code:

<form>
  <group>
    <field name="x_name" />
    <field name="x_is_done" />
    <field name="x_work_team_ids"
           widget="many2many_tags"
           context="{'default_x_is_work_team': True}" />
  </group>
</form>

The form view structure has a root <form> element, containing elements such as <field>. Here, we also chose a specific widget for the work team field, to be displayed as tag buttons instead of a list grid. We added the widget attribute to the Work Team field, to have the team members presented as button-like tags.
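As a side note, the same triplet-style domain typed into the field definition can also be used from Odoo's server-side ORM, for example in an odoo shell session or in module code. The minimal sketch below assumes the custom x_is_work_team field and x_todo_item model from this tutorial exist; it is only meant to show the domain syntax in action.

```python
# Run inside an 'odoo shell' session, where 'env' is already available.
# Assumes the x_is_work_team field and x_todo_item model from this tutorial exist.

domain = [('x_is_work_team', '=', True)]          # same triplet syntax as the view

team_members = env['res.partner'].search(domain)  # records matching the filter
for partner in team_members:
    print(partner.id, partner.name)

# Domains can combine several triplets; by default they are AND-ed together
open_items = env['x_todo_item'].search([
    ('x_is_done', '=', False),
    ('x_work_team_ids', 'in', team_members.ids),
])
print(len(open_items), "to-do items not done for these team members")
```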
By default, relational fields allow you to directly create a new record to be used in the relationship. This means that we are allowed to create a new Partner directly from the Work Team field. But if we do so, they won't have the Is Work Team? flag enabled, which can cause inconsistencies. For a better user experience, we can have this flag set by default for these cases.

This is done with the context attribute, used to pass session information to the next View, such as default values to be used. This will be discussed in detail in later chapters; for now, we just need to know that it is a dictionary of key-value pairs. Values prefixed with default_ provide the default value for the corresponding field. So in our case, the expression needed to set a default value for the partner's Is Work Team? flag is {'default_x_is_work_team': True}.

That's it. If we now try the To-do menu option, and create a new item or open an existing one from the list, we will see the form we just added.

Search views

We can also make predefined filter and grouping options available, in the search box in the upper-right corner of the list view. Odoo considers these view elements also, and so they are defined in View records, just like lists and forms are.

As you may already know by now, Views can be edited either in the Settings | Technical | User Interface menu, or from the contextual Developer Tools menu. Let's go for the latter now; navigate to the to-do list, click on the Developer Tools icon in the upper-right corner, and select Edit Search view from the available options:

Since no search view is yet defined for the To-do Items Model, we will see an empty form, inviting us to create the first one. Fill in these values and save it:

View Name: Some meaningful description, such as To-do Items Filter
View Type: Search
Model: x_todo_item
Architecture: Add this XML code:

<search>
  <filter name="item_not_done"
          string="Not Done"
          domain="[('x_is_done', '=', False)]" />
</search>

If we now open the to-do list from the menu, so that it is reloaded, we will see that our predefined filter is now available from the Filters button below the search box. If we type Not Done inside the search box, it will also show a suggested selection.

It would be nice to have this filter enabled by default and disable it only when needed. Just like default field values, we can also use the context to set default filters. When we click on the To-do menu option, it runs a Window Action to open the To-do list view. This Window Action can set a context value, signaling the Views to enable a search filter by default. Let's try this:

Click on the To-do menu option to go to the To-do list.
Click on the Developer Tools icon and select the Edit Action option. This will open the Window Action used to open the current Views.
In the lower-right corner, there is a Filter section, where we have the Domain and Context fields. The Domain allows setting a fixed filter on the records shown, which can't be removed by the user. We don't want to use that. Instead, we want to enable the item_not_done filter created before by default, which can be deselected whenever the user wishes to. To enable a filter by default, add a context key with its name prefixed with search_default_, in this case {'search_default_item_not_done': True}.

If we click on the To-do menu option now, we should see the Not Done filter enabled by default in the search box.

In this article, we created list, form, and search views, the basic building blocks of the user interface for our model.
To learn more about Odoo development in depth, read our book Odoo 12 Development Essentials.

"Everybody can benefit from adopting Odoo, whether you're a small start-up or a giant tech company" - An interview by Yenthe van Ginneken
Implement an effective CRM system in Odoo 11 [Tutorial]
Handle Odoo application data with ORM API [Tutorial]