
How-To Tutorials


Debugging and Profiling Python Scripts [Tutorial]

Melisha Dsouza
21 Mar 2019
12 min read
Debugging and profiling play an important role in Python development. A debugger helps programmers analyze their code by setting breakpoints, whereas a profiler runs the code and reports details of the execution time, identifying the bottlenecks in your programs. In this tutorial, we'll learn about the pdb Python debugger, the cProfile module, and the timeit module for timing the execution of Python code.

This tutorial is an excerpt from a book written by Ganesh Sanjiv Naik titled Mastering Python Scripting for System Administrators. This book will show you how to leverage Python for tasks ranging from text processing, network administration, building GUIs, and web scraping to database administration, including data analytics and reporting.

Python debugging techniques

Debugging is the process of resolving the issues that occur in your code and prevent your software from running properly. In Python, debugging is very easy. The Python debugger sets conditional breakpoints and steps through the source code one line at a time. We'll debug our Python scripts using the pdb module, which is part of the Python standard library.

Various techniques are available to debug a Python program. We're going to look at four of them:

- print() statement: This is the simplest way of knowing what's exactly happening, so you can check what has been executed.
- logging: This is like a print statement, but with more contextual information, so you can understand it fully.
- pdb debugger: This is a commonly used debugging technique. The advantage of pdb is that you can use it from the command line, within an interpreter, and within a program.
- IDE debugger: IDEs ship with an integrated debugger. It allows developers to execute their code and inspect it while the program runs.

Error handling (exception handling)

In this section, we're going to learn how Python handles exceptions. An exception is an error that occurs during program execution. Whenever an error occurs, Python generates an exception that can be handled using a try…except block. Some exceptions can't be handled by programs, so they result in error messages.

Now, let's see some exception examples. In your Terminal, start the python3 interactive console:

student@ubuntu:~$ python3
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> 50 / 0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero
>>>
>>> 6 + abc*5
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'abc' is not defined
>>>
>>> 'abc' + 2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Can't convert 'int' object to str implicitly
>>>
>>> import abcd
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named 'abcd'
>>>

These are some examples of exceptions. Now, we will see how we can handle them. Whenever errors occur in your Python program, exceptions are raised. We can also raise an exception forcefully using the raise keyword. Next, we are going to look at a try…except block that handles an exception. In the try block, we write code that may generate an exception; in the except block, we write the handling for that exception.
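As a small illustrative sketch of the raise keyword mentioned above (the variable name and message are our own example, not from the book), you can force an exception on demand like this:

x = -5
if x < 0:
    # raise produces a ValueError with our own message;
    # a caller can catch it with a try...except block as shown next
    raise ValueError("x must be non-negative, got {}".format(x))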
The syntax for try…except is as follows:

try:
    statement(s)
except:
    statement(s)

A try block can have multiple except statements. We can also handle specific exceptions by entering the exception name after the except keyword. The syntax for handling a specific exception is as follows:

try:
    statement(s)
except exception_name:
    statement(s)

We are going to create an exception_example.py script to catch ZeroDivisionError. Write the following code in your script:

a = 35
b = 57
try:
    c = a + b
    print("The value of c is: ", c)
    d = b / 0
    print("The value of d is: ", d)
except:
    print("Division by zero is not possible")
print("Out of try...except block")

Run the script as follows and you will get the following output:

student@ubuntu:~$ python3 exception_example.py
The value of c is: 92
Division by zero is not possible
Out of try...except block

Debugging tools

There are many debugging tools supported in Python:

- winpdb
- pydev
- pydb
- pdb
- gdb
- pyDebug

In this section, we are going to learn about the pdb Python debugger. The pdb module is part of Python's standard library and is always available to use.

The pdb debugger

The pdb module is used to debug Python programs. It provides an interactive source code debugger that sets breakpoints, inspects stack frames, and lists source code. Now we will learn how to use the pdb debugger. There are three ways to use it:

- Within an interpreter
- From a command line
- Within a Python script

We are going to create a pdb_example.py script and add the following content to it:

class Student:
    def __init__(self, std):
        self.count = std

    def print_std(self):
        for i in range(self.count):
            print(i)
        return

if __name__ == '__main__':
    Student(5).print_std()

Using this script as an example, we will see how we can start the debugger in detail.

Within an interpreter

To start the debugger from the Python interactive console, we use run() or runeval(). Start your python3 interactive console by running the following command:

$ python3

Import our pdb_example script and the pdb module. Now, we are going to use run(), passing a string expression as an argument that will be evaluated by the Python interpreter itself:

student@ubuntu:~$ python3
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import pdb_example
>>> import pdb
>>> pdb.run('pdb_example.Student(5).print_std()')
> <string>(1)<module>()
(Pdb)

To continue debugging, enter continue after the (Pdb) prompt and press Enter. If you want to know which options are available here, press the Tab key twice after the (Pdb) prompt. After entering continue, we will get the output as follows:

student@ubuntu:~$ python3
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import pdb_example
>>> import pdb
>>> pdb.run('pdb_example.Student(5).print_std()')
> <string>(1)<module>()
(Pdb) continue
0
1
2
3
4
>>>

From a command line

The simplest and most straightforward way to run the debugger is from the command line. Our program acts as input to the debugger. You can use the debugger from the command line as follows:

$ python3 -m pdb pdb_example.py

When you run the debugger from the command line, the source code is loaded and execution stops on the first line it finds. Enter continue to continue debugging.
Here's the output:

student@ubuntu:~$ python3 -m pdb pdb_example.py
> /home/student/pdb_example.py(1)<module>()
-> class Student:
(Pdb) continue
0
1
2
3
4
The program finished and will be restarted
> /home/student/pdb_example.py(1)<module>()
-> class Student:
(Pdb)

Within a Python script

The previous two techniques start the debugger at the beginning of a Python program. This third technique is best for long-running processes. To start the debugger within a script, use set_trace(). Now, modify your pdb_example.py file as follows:

import pdb

class Student:
    def __init__(self, std):
        self.count = std

    def print_std(self):
        for i in range(self.count):
            pdb.set_trace()
            print(i)
        return

if __name__ == '__main__':
    Student(5).print_std()

Now, run the program as follows:

student@ubuntu:~$ python3 pdb_example.py
> /home/student/pdb_example.py(10)print_std()
-> print(i)
(Pdb) continue
0
> /home/student/pdb_example.py(9)print_std()
-> pdb.set_trace()
(Pdb)

set_trace() is a Python function, so you can call it at any point in your program. These are the three ways by which you can start a debugger.

Debugging basic program crashes

In this section, we are going to look at the trace module. The trace module helps in tracing program execution, so whenever your Python program crashes, we can understand where it crashed. We can use the trace module by importing it into a script as well as from the command line.

Now, we will create a script named trace_example.py and write the following content in it:

class Student:
    def __init__(self, std):
        self.count = std

    def go(self):
        for i in range(self.count):
            print(i)
        return

if __name__ == '__main__':
    Student(5).go()

The output will be as follows:

student@ubuntu:~$ python3 -m trace --trace trace_example.py
--- modulename: trace_example, funcname: <module>
trace_example.py(1): class Student:
--- modulename: trace_example, funcname: Student
trace_example.py(1): class Student:
trace_example.py(2): def __init__(self, std):
trace_example.py(5): def go(self):
trace_example.py(10): if __name__ == '__main__':
trace_example.py(11): Student(5).go()
--- modulename: trace_example, funcname: init
trace_example.py(3): self.count = std
--- modulename: trace_example, funcname: go
trace_example.py(6): for i in range(self.count):
trace_example.py(7): print(i)
0
trace_example.py(6): for i in range(self.count):
trace_example.py(7): print(i)
1
trace_example.py(6): for i in range(self.count):
trace_example.py(7): print(i)
2
trace_example.py(6): for i in range(self.count):
trace_example.py(7): print(i)
3
trace_example.py(6): for i in range(self.count):
trace_example.py(7): print(i)
4

So, by using trace --trace at the command line, the developer can trace the program line by line, and whenever the program crashes, the developer will know where it crashed.

Profiling and timing programs

Profiling a Python program means measuring its execution time; it measures the time spent in each function. Python's cProfile module is used for profiling a Python program.

The cProfile module

As discussed previously, profiling means measuring the execution time of a program. We are going to use the cProfile Python module for profiling a program.
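Before the command-line invocation shown next, note that cProfile can also be called from inside a script; here is a minimal sketch (the profiled function is our own illustration, not part of the book's example):

import cProfile

def build_squares():
    # a small workload to profile
    return [n * n for n in range(100000)]

# run the statement under the profiler and print the per-function statistics table
cProfile.run('build_squares()')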
Now, we will write a cprof_example.py script with the following code in it:

mul_value = 0

def mul_numbers(num1, num2):
    mul_value = num1 * num2
    print("Local Value: ", mul_value)
    return mul_value

mul_numbers(58, 77)
print("Global Value: ", mul_value)

Run the program and you will see the output as follows:

student@ubuntu:~$ python3 -m cProfile cprof_example.py
Local Value:  4466
Global Value:  0
         6 function calls in 0.000 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.000    0.000 cprof_example.py:1(<module>)
        1    0.000    0.000    0.000    0.000 cprof_example.py:2(mul_numbers)
        1    0.000    0.000    0.000    0.000 {built-in method builtins.exec}
        2    0.000    0.000    0.000    0.000 {built-in method builtins.print}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}

So, using cProfile, every function that is called gets printed along with the time spent in it. Now, let's see what these column headings mean:

- ncalls: Number of calls
- tottime: Total time spent in the given function
- percall: Quotient of tottime divided by ncalls
- cumtime: Cumulative time spent in this and all subfunctions
- percall: Quotient of cumtime divided by primitive calls
- filename:lineno(function): Provides the respective data of each function

timeit

timeit is a Python module used to time small pieces of your Python script. You can call timeit from the command line as well as import the timeit module into your script. We are going to write a script to time a piece of code. Create a timeit_example.py script and write the following content into it:

import timeit

prg_setup = "from math import sqrt"

prg_code = '''
def timeit_example():
    list1 = []
    for x in range(50):
        list1.append(sqrt(x))
'''

# timeit statement
print(timeit.timeit(setup=prg_setup, stmt=prg_code, number=10000))

Using timeit, we can decide what piece of code we want to measure the performance of, so we can define the setup code and the code snippet to test separately. By default the statement runs 1 million times, but here we set number=10000, whereas the setup code runs only once.

Making programs run faster

There are various ways to make your Python programs run faster, such as the following:

- Profile your code so you can identify the bottlenecks
- Use built-in functions and libraries so the interpreter doesn't need to execute loops
- Avoid using globals, as Python is very slow at accessing global variables
- Use existing packages

Summary

In this tutorial, we learned about the importance of debugging and profiling programs. We learned about the different techniques available for debugging, the pdb Python debugger, how to handle exceptions, and how to use Python's cProfile and timeit modules while profiling and timing our scripts. We also learned how to make your scripts run faster.

To learn how to use the latest features of Python and build powerful tools that can solve challenging, real-world tasks, check out our book Mastering Python Scripting for System Administrators.

5 blog posts that could make you a better Python programmer
Using Python Automation to interact with network devices [Tutorial]
4 tips for learning Data Visualization with Python


How to remotely monitor hosts over Telnet and SSH [Tutorial]

Melisha Dsouza
20 Mar 2019
14 min read
In this tutorial, you will learn how to carry out basic configurations on a server with Telnet and SSH configured. We will begin by using the Telnet module, after which we will implement the same configurations using the preferred method: SSH, using different modules in Python. You will also learn how the telnetlib, subprocess, fabric, Netmiko, and paramiko modules work.

This tutorial is an excerpt from a book written by Ganesh Sanjiv Naik titled Mastering Python Scripting for System Administrators. This book will take you through a set of specific software patterns and you will learn, in detail, how to apply these patterns and build working software on top of a serverless system.

The telnetlib module

In this section, we are going to learn about the Telnet protocol and then perform Telnet operations on a remote server using the telnetlib module.

Telnet is a network protocol that allows a user to communicate with remote servers. It is mostly used by network administrators to remotely access and manage devices. To access a device, run the Telnet command with the IP address or hostname of the remote server in your Terminal. Telnet uses TCP on the default port number 23.

To use Telnet, make sure it is installed on your system. If not, run the following command to install it:

$ sudo apt-get install telnetd

To run Telnet using a simple Terminal, you just have to enter the following command:

$ telnet ip_address_of_your_remote_server

Python has the telnetlib module to perform Telnet functions through Python scripts. Before telnetting your remote device or router, make sure it is configured properly and, if not, you can do a basic configuration by using the following commands in the router's Terminal:

configure terminal
enable password 'set_Your_password_to_access_router'
username 'set_username' password 'set_password_for_remote_access'
line vty 0 4
login local
transport input all
interface f0/0
ip add 'set_ip_address_to_the_router' 'put_subnet_mask'
no shut
end
show ip interface brief

Now, let's see an example of telnetting into a remote device. For that, create a telnet_example.py script and write the following content in it:

import telnetlib
import getpass
import sys

HOST_IP = "your host ip address"
host_user = input("Enter your telnet username: ")
password = getpass.getpass()

t = telnetlib.Telnet(HOST_IP)
t.read_until(b"Username:")
t.write(host_user.encode("ascii") + b"\n")
if password:
    t.read_until(b"Password:")
    t.write(password.encode("ascii") + b"\n")

t.write(b"enable\n")
t.write(b"enter_remote_device_password\n")  # password of your remote device
t.write(b"conf t\n")
t.write(b"int loop 1\n")
t.write(b"ip add 10.1.1.1 255.255.255.255\n")
t.write(b"int loop 2\n")
t.write(b"ip add 20.2.2.2 255.255.255.255\n")
t.write(b"end\n")
t.write(b"exit\n")
print(t.read_all().decode("ascii"))

Run the script and you will get the output as follows:

student@ubuntu:~$ python3 telnet_example.py
Output:
Enter your telnet username: student
Password:

server>enable
Password:
server#conf t
Enter configuration commands, one per line. End with CNTL/Z.
server(config)#int loop 1
server(config-if)#ip add 10.1.1.1 255.255.255.255
server(config-if)#int loop 23
server(config-if)#ip add 20.2.2.2 255.255.255.255
server(config-if)#end
server#exit

In the preceding example, we accessed and configured a Cisco router using the telnetlib module. In this script, first, we took the username and password from the user to initialize the Telnet connection with a remote device.
When the connection was established, we did further configuration on the remote device. After telnetting, we are able to access a remote server or device. But there is one very important disadvantage of the Telnet protocol: all data, including usernames and passwords, is sent over the network in plain text, which poses a security risk. Because of that, Telnet is rarely used nowadays and has been replaced by a very secure protocol named Secure Shell, known as SSH.

Install SSH by running the following command in your Terminal:

$ sudo apt install ssh

Also, an SSH server must be installed and running on the remote server the user wants to communicate with. SSH uses the TCP protocol and works on port number 22 by default. You can run the ssh command through the Terminal as follows:

$ ssh host_name@host_ip_address

Now, you will learn how to do SSH using different modules in Python, such as subprocess, fabric, Netmiko, and Paramiko. We will look at these modules one by one.

The subprocess.Popen() module

The Popen class handles process creation and management. Using this module, developers can handle less common cases. The child program is executed in a new process. To execute a child program on Unix/Linux, the class uses the os.execvp() function. To execute a child program on Windows, the class uses the CreateProcess() function. Now, let's see some useful arguments of subprocess.Popen():

class subprocess.Popen(args, bufsize=0, executable=None, stdin=None,
                       stdout=None, stderr=None, preexec_fn=None,
                       close_fds=False, shell=False, cwd=None, env=None,
                       universal_newlines=False, startupinfo=None,
                       creationflags=0)

Let's look at each argument:

- args: It can be a sequence of program arguments or a single string. If args is a sequence, the first item in args is executed. If args is a string, it is recommended to pass args as a sequence.
- shell: The shell argument is set to False by default and specifies whether to use the shell to execute the program. If shell is True, it is recommended to pass args as a string. In Linux, if shell=True, the shell defaults to /bin/sh. If args is a string, the string specifies the command to execute through the shell.
- bufsize: If bufsize is 0 (the default), it means unbuffered, and if bufsize is 1, it means line buffered. If bufsize is any other positive value, a buffer of the given size is used. If bufsize is a negative value, it means fully buffered.
- executable: It specifies the replacement program to be executed.
- stdin, stdout, and stderr: These arguments define the standard input, standard output, and standard error, respectively.
- preexec_fn: This is set to a callable object and will be called just before the child is executed, in the child process.
- close_fds: In Linux, if close_fds is true, all file descriptors except 0, 1, and 2 will be closed before the child process is executed. In Windows, if close_fds is true, the child process will inherit no handles.
- env: If the value is not None, the mapping will define environment variables for the new process.
- universal_newlines: If the value is True, stdout and stderr will be opened as text files in newline mode.

Now, we are going to see an example of subprocess.Popen().
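Before the SSH example that follows, here is a minimal local sketch of subprocess.Popen using the arguments just described (the ls command is only an illustration):

import subprocess

# run a local command, capturing stdout and stderr as described above
proc = subprocess.Popen(["ls", "-l"], shell=False,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()   # waits for the process to finish and reads both streams
print(out.decode())
print("Return code:", proc.returncode)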
For the SSH example, create an ssh_using_sub.py script and write the following content in it:

import subprocess
import sys

HOST = "your host username@host ip"
COMMAND = "ls"

ssh_obj = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                           shell=False,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)

result = ssh_obj.stdout.readlines()
if result == []:
    err = ssh_obj.stderr.readlines()
    print(sys.stderr, "ERROR: %s" % err)
else:
    print(result)

Run the script and you will get the output as follows:

student@ubuntu:~$ python3 ssh_using_sub.py
Output :
[email protected]'s password:
[b'Desktop\n', b'Documents\n', b'Downloads\n', b'examples.desktop\n', b'Music\n', b'Pictures\n', b'Public\n', b'sample.py\n', b'spark\n', b'spark-2.3.1-bin-hadoop2.7\n', b'spark-2.3.1-bin-hadoop2.7.tgz\n', b'ssh\n', b'Templates\n', b'test_folder\n', b'test.txt\n', b'Untitled1.ipynb\n', b'Untitled.ipynb\n', b'Videos\n', b'work\n']

In the preceding example, first, we imported the subprocess module, then we defined the host address where we want to establish the SSH connection. After that, we gave one simple command to execute on the remote device. Once all this was set up, we passed this information to the subprocess.Popen() function, which executed the arguments defined inside it to create a connection with the remote device. After the SSH connection was established, our defined command was executed and the result was returned, which we printed on the Terminal, as shown in the output.

SSH using the fabric module

Fabric is a Python library as well as a command-line tool for using SSH. It is used for system administration and application deployment over the network. We can also execute shell commands over SSH. To use the fabric module, first install it using the following command:

$ pip3 install fabric3

Now, we will see an example. Create a fabfile.py script and write the following content in it:

from fabric.api import *

env.hosts = ["host_name@host_ip"]
env.password = 'your password'

def dir():
    run('mkdir fabric')
    print('Directory named fabric has been created on your host network')

def diskspace():
    run('df')

Run the script and you will get the output as follows:

student@ubuntu:~$ fab dir
Output:
[[email protected]] Executing task 'dir'
[[email protected]] run: mkdir fabric

Done.
Disconnecting from 192.168.0.106... done.

In the preceding example, first, we imported the fabric.api module, then we set the hostname and password to connect to the host network. After that, we defined the different tasks to perform over SSH. Therefore, to execute our program, instead of python3 fabfile.py, we used the fab utility (fab dir), stating the required task to be performed from our fabfile.py. In our case, we performed the dir task, which creates a directory with the name 'fabric' on your remote network. You can add your own specific tasks to the Python file; they can then be executed using the fab utility of the fabric module.

SSH using the Paramiko library

Paramiko is a library that implements the SSHv2 protocol for secure connections to remote devices. Paramiko is a pure-Python interface around SSH. Before using Paramiko, make sure you have installed it properly on your system. If it is not installed, you can install it by running the following command in your Terminal:

$ sudo pip3 install paramiko

Now, we will see an example of using paramiko. For this paramiko connection, we are using a Cisco device.
Paramiko supports both password-based and key-pair-based authentication for a secure connection with the server. In our script, we are using password-based authentication, which means we check for a password and, if it is available, authentication is attempted using plain username/password authentication. Before doing SSH to your remote device or multi-layer router, make sure it is configured properly and, if not, you can do a basic configuration by using the following commands in the multi-layer router's Terminal:

configure t
ip domain-name cciepython.com
crypto key generate rsa
How many bits in the modulus [512]: 1024
interface range f0/0 - 1
switchport mode access
switchport access vlan 1
no shut
int vlan 1
ip add 'set_ip_address_to_the_router' 'put_subnet_mask'
no shut
exit
enable password 'set_Your_password_to_access_router'
username 'set_username' password 'set_password_for_remote_access'
username 'username' privilege 15
line vty 0 4
login local
transport input all
end

Now, create a pmiko.py script and write the following content in it:

import paramiko
import time

ip_address = "host_ip_address"
usr = "host_username"
pwd = "host_password"

c = paramiko.SSHClient()
c.set_missing_host_key_policy(paramiko.AutoAddPolicy())
c.connect(hostname=ip_address, username=usr, password=pwd)

print("SSH connection is successfully established with ", ip_address)

rc = c.invoke_shell()
for n in range(2, 6):
    print("Creating VLAN " + str(n))
    rc.send("vlan database\n")
    rc.send("vlan " + str(n) + "\n")
    rc.send("exit\n")
    time.sleep(0.5)

time.sleep(1)
output = rc.recv(65535)
print(output)
c.close()

Run the script and you will get the output as follows:

student@ubuntu:~$ python3 pmiko.py
Output:
SSH connection is successfully established with  192.168.0.70
Creating VLAN 2
Creating VLAN 3
Creating VLAN 4
Creating VLAN 5

In the preceding example, first, we imported the paramiko module, then we defined the SSH credentials required to connect to the remote device. After providing the credentials, we created an instance, 'c', of paramiko.SSHClient(), which is the primary client used to establish connections with the remote device and execute commands or operations. Creating an SSHClient object allows us to establish remote connections using the .connect() function.

Then, we set the paramiko connection policy because, by default, paramiko.SSHClient uses a reject policy, which rejects any SSH connection without validation. In our script, we avoid the possibility of the SSH connection being dropped by using the AutoAddPolicy() function, which automatically adds the server's host key without prompting. This policy should be used for testing purposes only; it is not a good option in a production environment, for security reasons.

When an SSH connection is established, you can do any configuration or operation you want on your device. Here, we created a few virtual LANs on a remote device. After creating the VLANs, we simply closed the connection.

SSH using the Netmiko library

In this section, we will learn about Netmiko. The Netmiko library is an advanced version of Paramiko. It is a multi-vendor library that is based on Paramiko. Netmiko simplifies SSH connections to network devices and performs particular operations on the device. Before doing SSH to your remote device or multi-layer router, make sure it is configured properly and, if not, you can do a basic configuration using the commands mentioned in the Paramiko section. Now, let's see an example.
Create a nmiko.py script and write the following code in it:

from netmiko import ConnectHandler

remote_device = {
    'device_type': 'cisco_ios',
    'ip': 'your remote_device ip address',
    'username': 'username',
    'password': 'password',
}

remote_connection = ConnectHandler(**remote_device)
# net_connect.find_prompt()

for n in range(2, 6):
    print("Creating VLAN " + str(n))
    commands = ['exit', 'vlan database', 'vlan ' + str(n), 'exit']
    output = remote_connection.send_config_set(commands)
    print(output)

command = remote_connection.send_command('show vlan-switch brief')
print(command)

Run the script and you will get the output as follows:

student@ubuntu:~$ python3 nmiko.py
Output:
Creating VLAN 2
config term
Enter configuration commands, one per line. End with CNTL/Z.
server(config)#exit
server #vlan database
server (vlan)#vlan 2
VLAN 2 modified:
server (vlan)#exit
APPLY completed.
Exiting....
server #
..
..
..
..
switch#
Creating VLAN 5
config term
Enter configuration commands, one per line. End with CNTL/Z.
server (config)#exit
server #vlan database
server (vlan)#vlan 5
VLAN 5 modified:
server (vlan)#exit
APPLY completed.
Exiting....

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active    Fa0/0, Fa0/1, Fa0/2, Fa0/3, Fa0/4, Fa0/5, Fa0/6, Fa0/7, Fa0/8, Fa0/9, Fa0/10, Fa0/11, Fa0/12, Fa0/13, Fa0/14, Fa0/15
2    VLAN0002                         active
3    VLAN0003                         active
4    VLAN0004                         active
5    VLAN0005                         active
1002 fddi-default                     active
1003 token-ring-default               active
1004 fddinet-default                  active
1005 trnet-default                    active

In the preceding example, we used the Netmiko library to do SSH, instead of Paramiko. In this script, first, we imported ConnectHandler from the Netmiko library, which we used to establish an SSH connection to the remote network device by passing in the device dictionary. In our case, that dictionary is remote_device. After the connection was established, we executed configuration commands to create a number of virtual LANs using the send_config_set() function. When we use this type of function (.send_config_set()) to pass commands to a remote device, it automatically puts our device into configuration mode. After sending the configuration commands, we also passed a simple command to get information about the configured device.

Summary

In this tutorial, you learned about Telnet and SSH, and about the different Python modules, such as telnetlib, subprocess, fabric, Netmiko, and Paramiko, with which we perform Telnet and SSH. SSH uses public key encryption for security purposes and is more secure than Telnet.

To learn how to leverage the features and libraries of Python to administrate your environment efficiently, check out our book Mastering Python Scripting for System Administrators.

5 blog posts that could make you a better Python programmer
"With Python, you can create self-explanatory, concise, and engaging data visuals, and present insights that impact your business" – Tim Großmann and Mario Döbler [Interview]
Using Python Automation to interact with network devices [Tutorial]


Women win all open board director seats in Open Source Initiative 2019 board elections

Savia Lobo
19 Mar 2019
3 min read
The recently held Open Source Initiative’s 2019 Board elections elected six Board of Directors to its eleven-person Board. Two were elected from the affiliate membership, and four from the individual membership. If it wasn’t incredible enough that many women ran for the seats,  they have won all the seats! The six seats include two returning directors, Carol Smith and Molly de Blanc; and three new directors Pamela Chestek, Elana Hashman, and Hong Phuc Dang. Pamela Chestek (nominated by The Document Foundation) and Molly de Blanc (nominated by the Debian Project) captured the most votes from OSI Affiliate Members. The last seat is a tie between Christine Hall and Mariatta Wijaya and hence a runoff election will be required to identify the final OSI Board Director. The run off election started yesterday, March 18th (opening at 12:00 a.m. / 00:00) and will end on Monday, March 25th (closing at 12:00 a.m. / 00:00). Mariatta Wijaya, a core Python developer and a platform engineer at Zapier, told in a statement to Business Insider that she found not all open source projects were as welcoming, especially to women. That's one reason why she's running for the board of the Open Source Initiative, an influential organization that promotes and protects open source software communities. Wijaya also said, "I really want to see better diversity across the people who contribute to open source. Not just the users, the creators of open source. I would love to see that diversity improve. I would like to see a better representation. I did find it a barrier initially, not seeing more people who look like me in this space, and I felt like an outsider." A person discussed six female candidates in misogynistic language on Slashdot, which is a tech-focussed social news website. The post also then labeled each woman with how much of a "threat" they were. Slashdot immediately took down this post “shortly afterward the OSI started seeing inappropriate comments posted on its website”. https://twitter.com/alicegoldfuss/status/1102609189342371840 Molly de Blanc and Patrick Masson said this was the first time they saw such type of harassment of female OSI board candidates. They also said that such harassments in open source are not uncommon. Joshua R. Simmons, an Open source advocate, and web developer tweeted, “women winning 100% of the open seats in an election that drew attention from a cadre of horrible misogynists” https://twitter.com/joshsimmons/status/1107303020293832704 OSI President, Simon Phipps said that the OSI committee is “thrilled the electorate has picked an all-female cohort to the new Board” https://twitter.com/webmink/status/1107367907825274886 To know more about these elections in detail, head over to the OSI official blog post. UPDATED: In the previous draft, Pamela Chestek who was listed as returning board member, is a new board member; and Carol Smith who was listed as a new board member, is a returning member. #GoogleWalkout demanded a ‘truly equitable culture for everyone’; Pichai shares a “comprehensive” plan for employees to safely report sexual harassment MongoDB withdraws controversial Server Side Public License from the Open Source Initiative’s approval process Google’s pay equity analysis finds men, not women, are underpaid; critics call out design flaws in the analysis  


How social media enabled and amplified the Christchurch terrorist attack

Fatema Patrawala
19 Mar 2019
11 min read
The recent horrifying terrorist attack in New Zealand has cast new blame on how technology platforms police content. There are now questions about whether global internet services are designed to work this way? And if online viral hate is uncontainable? Fifty one people so far have been reported to be dead and 50 more injured after the terrorist attacks on two New Zealand mosques on Friday. The victims included children as young as 3 and 4 years old, and elderly men and women. The alleged shooter is identified as a 28 year old Australian man named Brenton Tarrant. Brenton announced the attack on the anonymous-troll message board 8chan. There, he posted images of the weapons days before the attack, and made an announcement an hour before the shooting. On 8chan, Facebook and Twitter, he also posted links to a 74-page manifesto, titled “The Great Replacement,” blaming immigration for the displacement of whites in Oceania and elsewhere. The manifesto cites “white genocide” as a motive for the attack, and calls for “a future for white children” as its goal. Further he live-streamed the attacks on Facebook, YouTube; and posted a link to the stream on 8chan. It’s terrifying and disgusting, especially when 8chan is one of the sites where disaffected internet misfits create memes and other messages to provoke dismay and sow chaos among people. “8chan became the new digital home for some of the most offensive people on the internet, people who really believe in white supremacy and the inferiority of women,” Ethan Chiel wrote. “It’s time to stop shitposting,” the alleged shooter’s 8chan post reads, “and time to make a real-life effort post.” Many of the responses, anonymous by 8chan’s nature, celebrate the attack, with some posting congratulatory Nazi memes. A few seem to decry it, just for logistical quibbles. And others lament that the whole affair might destroy the site, a concern that betrays its users’ priorities. Social media encourages performance crime The use of social media technology and livestreaming marks the attack as different from many other terrorist incidents. It is a form of violent “performance crime”. That is, the video streaming is a central component of the violence itself, it’s not somehow incidental to the crime, or a trophy for the perpetrator to re-watch later. In the past, terrorism functioned according to what has been called the “theatre of terror”, which required the media to report on the spectacle of violence created by the group. Nowadays with social media in our hands it's much easier for someone to both create the spectacle of horrific violence and distribute it widely by themselves. There is a tragic and recent history of performance crime videos that use live streaming and social media video services as part of their tactics. In 2017, for example, the sickening murder video of an elderly man in Ohio was uploaded to Facebook, and the torture of a man with disabilities in Chicago was live streamed. In 2015, the murder of two journalists was simultaneously broadcast on-air, and live streamed. Tech companies on the radar Social-media companies scrambled to take action as the news—and the video—of the attack spread. Facebook finally managed to pull down Tarrant’s profiles and the video, but only after New Zealand police brought the live-stream to the company’s attention. It has been working "around the clock" to remove videos of the incident shared on its platform. 
In a statement posted to Twitter on Sunday, the tech company said that within 24 hours of Friday’s shooting it had removed 1.5 million videos of the attack from its platform globally. YouTube said it had also removed an “unprecedented volume” of videos of the shooting. Twitter also suspended Tarrant’s account, where he had posted links to the manifesto from several file-sharing sites. The chaotic aftermath mostly took place while many North Americans slept unaware, waking up to the news and its associated confusion. By morning on the East Coast, news outlets had already weighed in on whether technology companies might be partly to blame for catastrophes such as the New Zealand massacre because they have failed to catch offensive content before it spreads. One of the tweets say Google, Twitter and Facebook made a choice to not use tools available to them to stop white supremacist terrorism. https://twitter.com/samswey/status/1107055372949286912 Countries like Germany and France already have a law in place that demands social media sites move quickly to remove hate speech, fake news and illegal material. Sites that do not remove "obviously illegal" posts could face fines of up to 50m euro (£44.3m). In the wake of the attack, a consortium of New Zealand’s major companies has pledged to pull their advertising from Facebook. In a joint statement, the Association of New Zealand Advertisers (ANZA) and the Commercial Communications Council asked domestic companies to think about where “their advertising dollars are spent, and carefully consider, with their agency partners, where their ads appear.” They added, “We challenge Facebook and other platform owners to immediately take steps to effectively moderate hate content before another tragedy can be streamed online.” Additionally internet service providers like Vodafone, Spark and Vocus in New Zealand are blocking access to websites that do not respond or refuse to comply to requests to remove reuploads of the shooter’s original live stream. The free speech vs safety debate puts social media platforms in the crosshairs Tech Companies are facing new questions on content moderation following the New Zealand attack. The shooter posted a link to the live stream, and soon after he was apprehended, reuploads were found on other platforms like YouTube and Twitter. “Tech companies basically don’t see this as a priority,” the counter-extremism policy adviser Lucinda Creighton commented. “They say this is terrible, but what they’re not doing is preventing this from reappearing.” Others affirmed the importance of quelling the spread of the manifesto, video, and related materials, for fear of producing copycats, or of at least furthering radicalization among those who would be receptive to the message. The circulation of ideas might have motivated the shooter as much as, or even more than, ethnic violence. As Charlie Warzel wrote at The New York Times, the New Zealand massacre seems to have been made to go viral. Tarrant teased his intentions and preparations on 8chan. When the time came to carry out the act, he provided a trove of resources for his anonymous members, scattered to the winds of mirror sites and repositories. Once the live-stream started, one of the 8chan user posted “capped for posterity” on Tarrant’s thread, meaning that he had downloaded the stream’s video for archival and, presumably, future upload to other services, such as Reddit or 4chan, where other like-minded trolls or radicals would ensure the images spread even further. 
As Warzel put it, “Platforms like Facebook, Twitter, and YouTube … were no match for the speed of their users.” The internet is a Pandora’s box that never had a lid. Camouflaging stories is easy but companies trying hard in building AI to catch it Last year, Mark Zuckerberg defended himself and Facebook before Congress against myriad failures, which included Russian operatives disrupting American elections and permitting illegal housing ads that discriminate by race. Mark Zuckerberg repeatedly invoked artificial intelligence as a solution for the problems his and other global internet companies have created. There’s just too much content for human moderators to process, even when pressed hard to do so under poor working conditions. The answer, Zuckerberg has argued, is to train AI to do the work for them. But that technique has proved insufficient. That’s because detecting and scrubbing undesirable content automatically is extremely difficult. False positives enrage earnest users or foment conspiracy theories among paranoid ones, thanks to the black-box nature of computer systems. Worse, given a pool of billions of users, the clever ones will always find ways to trick any computer system, for example, by slightly modifying images or videos in order to make them appear different to the computer but identical to human eyes. 8chan, as it happens, is largely populated by computer-savvy people who have self-organized to perpetrate exactly those kinds of tricks. The primary sources of content are only part of the problem. Long after the deed, YouTube users have bolstered conspiracy theories about murders, successfully replacing truth with lies among broad populations of users who might not even know they are being deceived. Even stock-photo providers are licensing stills from the New Zealand shooter’s video; a Reuters image that shows the perpetrator wielding his rifle as he enters the mosque is simply credited, “Social media.” Interpreting real motives is difficult on social The video is just the tip of the iceberg. Many smaller and less obviously inflamed messages have no hope of being found, isolated, and removed by technology services. The shooter praised Donald Trump as a “symbol of renewed white identity” and incited the conservative commentator Candace Owens, who took the bait on Twitter in a post that got retweeted thousands of times by the morning after the attack. The shooter’s forum posts and video are littered with memes and inside references that bear special meaning within certain communities on 8chan, 4chan, Reddit, and other corners of the internet, offering tempting receptors for consumption and further spread. Perhaps worst of all, the forum posts, the manifesto, and even the shooting itself might not have been carried out with the purpose that a literal read of their contents suggests. At the first glance, it seems impossible to deny that this terrorist act was motivated by white-extremist hatred, an animosity that authorities like the FBI expert and the Facebook officials would want to snuff out before it spreads. But 8chan is notorious for users with an ironic and rude behaviour under the shades of anonymity.They use humor, memes and urban slang to promote chaos and divisive rhetoric. As the internet separates images from context and action from intention, and then spreads those messages quickly among billions of people scattered all around the globe. That structure makes it impossible to even know what individuals like Tarrant “really mean” by their words and actions. 
As it spreads, social-media content neuters earnest purpose entirely, putting it on the same level as anarchic randomness. What a message means collapses into how it gets used and interpreted. For 8chan trolls, any ideology might be as good as any other, so long as it produces chaos. We all have a role to play It’s easy to say that technology companies can do better. They can, and they should. But ultimately, content moderation is not the solution by itself. The problem is the media ecosystem they have created. The only surprise is that anyone would still be surprised that social media produce this tragic abyss, for this is what social media are supposed to do, what they were designed to do: spread the images and messages that accelerate interest and invoke raw emotions, without check, and absent concern for their consequences. We hope that social media companies get better at filtering out violent content and explore alternative business models, and governments think critically about cyber laws that protect both people and speech. But until they do we should reflect on our own behavior too. As news outlets, we shape the narrative through our informed perspectives which makes it imperative to publish legitimate & authentic content. Let’s as users too make a choice of liking and sharing content on social platforms. Let’s consider how our activities could contribute to an overall spectacle society that might inspire future perpetrator-produced videos of such gruesome crime – and act accordingly. In this era of social spectacle, we all have a role to play in ensuring that terrorists aren’t rewarded for their crimes with our clicks and shares. The Indian government proposes to censor social media content and monitor WhatsApp messages Virality of fake news on social media: Are weaponized AI bots to blame, questions Destin Sandlin Mastodon 2.7, a decentralized alternative to social media silos, is now out!


Application server clustering using various cloud providers [Tutorial]

Melisha Dsouza
19 Mar 2019
11 min read
In this tutorial, we will illustrate how applications are clustered using different cloud providers and frameworks. You can set up many applications in a highly available way and spread their workloads between on-premises and cloud environments. You can also set them up across different cloud environments if the technical requirements are met. We will describe solutions for doing this and learn how to implement them so that you remain independent of any one cloud vendor and avoid vendor lock-in.

This tutorial is an excerpt from a book written by Florian Klaffenbach, Markus Klein, and Suresh Sundaresan titled Multi-Cloud for Architects. This book will be your go-to guide for finding solutions that adapt completely to any cloud and its services, no matter the size of your infrastructure.

Technical requirements for cross-cloud application servers

To design a cross-cloud application server environment, the requirements are:

- Network connectivity between the different clouds
- A single identity management solution for all servers
- Applications that support georedundancy

Networking connectivity between different clouds

No matter what cloud you need to connect to, networking and networking security are always the key. This means that you will need reliable and secured networking connectivity, as there is a possibility that not all of the traffic is encrypted, depending on the software architecture for high availability. While the Windows cluster environment originally had all of its nodes in one physical network location, each newer release can work with nodes in physically different locations. So, for example, one part of the server nodes could be in cloud A, and the other in cloud B.

The software requirements are set for applications using different cloud vendors. As each cloud vendor runs different connection gateways, the most common solution is to have the same network virtual appliance as a VM instance (single, or even double redundant) in each environment, and to design each cloud as an independent data center location. There is no requirement to have direct connectivity (using MPLS) or remote connectivity, but you will need to make sure that the network package round trips are as quick as possible.

Single identity management solutions for all servers

The second pillar of cross-platform application design is the IDM solution. As not every cloud vendor offers a managed IDM solution, the valid options are setting one up yourself (for example, Active Directory running on VMs, with all servers in the cloud joined to this domain) or using a managed IDM solution such as Azure AD, which is not only supported in Azure but also works in other public cloud environments, so that all servers join the one Microsoft Azure AD.

Supported applications for georedundancy

Finally, the application itself needs to support georedundancy in its node placement design. The application either needs to be designed to work in low-latency environments (for example, Microsoft SQL Server or Microsoft Exchange Server), or it can be set up quite easily by placing a georedundant load balancer in front of it (for example, for a web server).

A sample design of cross-cloud virtual machines running clustered application nodes would show Azure on one side and AWS on the other, both connected to a single network environment and using Azure AD as the single identity management solution.
Whichever of the two you choose as your preferred cloud vendor, the result will be the same, and the service will be reachable from both cloud environments.

Examples of clustered application servers

Let's take a look at some sample application servers (based on virtual machines) in a multi-cloud design:

- Microsoft SQL Server
- Microsoft Exchange Server
- Web servers

Microsoft SQL Server

Microsoft SQL Server is a robust relational database application server that can run either on Microsoft Windows Server or on Linux (starting with the 2017 version). While you could set up a single SQL Server as a virtual machine in your preferred cloud, we need to look at the high availability features of the application.

With Microsoft SQL Server, the basic keyword for HA is availability groups. Using this feature, you can set up your application server to have one or more replicas of the database itself, organized in an availability group. You can design the availability groups based on your needs and split them between servers, no matter where the virtual machine really lives; then, you can configure database replication. One availability group supports one primary database and up to eight secondary databases. Keep in mind that a secondary database is not equal to a database backup, as the replica contains the same information as the primary database.

Since the 2017 release, there have been two options for availability groups, as follows:

- Always On for redundancy: If a database replica fails, one or more are still available for the application requesting the data.
- Always On for read optimization: If an application needs to read data, it uses its nearest database server. For write operations, the primary database replica needs to be available.

The replication itself can be synchronous or asynchronous, depending on the requirements and design of the application working with the database(s). In the availability group design of Microsoft SQL Server, the different nodes can each reside in the same cloud or in different ones. Regarding risk management, a single cloud vendor can go offline without affecting the availability of your database servers. When you take a look at SQL Server Management Studio, you will see the availability groups and their health statuses.

Microsoft Exchange Server

Microsoft Exchange Server is a groupware and email solution that, in Microsoft cloud technology, forms the technical basis of the Office 365 SaaS email functionality. If you decide to run Exchange Server on your own, the most recent release is Exchange Server 2019, and, if needed, there is full support for running Exchange Server as virtual machines in your cloud environment.

Of course, it is possible to run a single VM with all Exchange services on it but, as with almost every company groupware solution that requires high availability, Exchange Server environments can be set up as multi-server environments. As Exchange Server has to support low-latency network environments by default, it supports running an Exchange Server environment across different networking regions. This means that you can set up some nodes of Exchange Server in cloud A, and the others in cloud B. This feature is called availability groups, too.
A typical design uses two Azure AD sites with redundancy, implemented with Database Availability Groups (DAGs). There are corresponding best-practice designs both for running Exchange Server on AWS and for Azure in hybrid cloud environments. A primary database residing on one Exchange mailbox server can have up to 16 database copies; the placement of these copies depends on customer requirements. Configuring high availability within Exchange Server is the default, and it is quite easy to handle, too, from either the Exchange Server Management Console or from PowerShell.

Supporting cross-cloud implementations using geo load balancers

If an application that needs to be redesigned for a multi-cloud environment works based on port communications, the redesign will be quite easy, as you will just need to set up a georedundant load balancer to support the different cloud targets and route the traffic correspondingly. A georedundant load balancer is a more complex solution than a default load balancer, which just routes traffic between different servers in one region or cloud environment. It generally works with the same technology and uses DNS name resolution for redundancy and traffic routing but, in comparison to DNS round-robin technologies, a load balancer knows the available targets for resolving requests and can work with technologies such as IP range mapping.

Azure Traffic Manager

Azure Traffic Manager is the Microsoft solution for georedundant traffic routing. It is available in each Azure region, and it provides transparent load balancing for services that coexist in different Azure regions, non-Microsoft clouds, or on premises. It provides the following features:

- Flexible traffic routing
- Reduced application downtime
- Improved performance and content delivery
- Traffic distribution over multiple locations
- Support for all available cloud solutions (private and public clouds)

Azure Traffic Manager is a flexible solution for traffic routing and can point to each target that you need in your application design. Incoming traffic is routed to the appropriate site using Traffic Manager metrics, and if a site is down or degraded, Traffic Manager routes the traffic to another available site. You can compare Traffic Manager to an intelligent router that knows the origin of the traffic and reroutes requests to the nearest available service.

AWS Route 53

In AWS, the Route 53 service provides an easy solution for routing traffic based on load and availability. It is a PaaS service, like Azure Traffic Manager, and it also works based on DNS name resolution; it is fully integrated into the DNS service. The Route 53 design is quite comparable to Azure Traffic Manager. If you need to decide which service to use in your design, it is not a technical decision at all, as the technology is nearly the same; rather, the choice is based on other requirements, involving technology and pricing.

Managing multi-cloud virtual machines for clustered application servers

If you decide to design your applications in a multi-cloud environment, it does not make designing automation and replayable configurations any easier. Azure works with ARM templates and AWS with AWS CloudFormation. Both languages are JSON based, but they are different.
If you plan to use cloud solutions to transform your on-premises solutions, you should think about automation and ways to replay configurations. If you need to deal with two (or even more) different dialects, you will need to set up a process to create and update the corresponding templates. Therefore, implementing another layer of templating will be required if you do not want to rely on manual processes. Only a small number of vendors provide technology that avoids relying on different dialects. A common one is Terraform, but Ansible and Puppet are other options.

Terraform works based on a language called HashiCorp Configuration Language (HCL). It is designed for human consumption, so users can quickly interpret and understand infrastructure configurations. HCL also includes a full JSON parser for machine-generated configurations. Compared to plain JSON, an HCL configuration looks as follows:

# An AMI
variable "ami" {
  description = "the AMI to use"
}

/* A multi
   line comment. */
resource "aws_instance" "web" {
  ami               = "${var.ami}"
  count             = 2
  source_dest_check = false

  connection {
    user = "root"
  }
}

Terraform gives us providers to translate the deployments into the corresponding cloud vendor languages. There are a lot of providers available, as you can see in the following screenshot:

A provider works as follows:

If you decide to work with Terraform to make your cloud automation processes smooth and independent of your cloud vendor, you can install it from most cloud vendor marketplaces as a virtual machine.

Troubleshooting cross-cloud application servers

If you have decided on a multi-cloud design for your applications, you will need a plan for troubleshooting; network connectivity and identities shared between the different cloud environments could be reasons for unavailability issues. Otherwise, the troubleshooting mechanisms are the same ones that you're already familiar with, and, in general, they are included in the application servers themselves.

Summary

In this tutorial, we learned that it is quite easy to design a multi-cloud environment. In case there is a need to change the components in this solution, you can even decide to move services that are part of your solution from one cloud vendor to another. To learn how to architect a multi-cloud solution for your organization, check out our book Multi-Cloud for Architects.

Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1
The 10 best cloud and infrastructure conferences happening in 2019
VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform

article-image-the-u-s-dod-wants-to-dominate-russia-and-china-in-artificial-intelligence-last-week-gave-us-a-glimpse-into-that-vision
Savia Lobo
18 Mar 2019
9 min read

The U.S. DoD wants to dominate Russia and China in Artificial Intelligence. Last week gave us a glimpse into that vision.

In a hearing on March 12, the sub-committee on emerging threats and capabilities received testimonies on Artificial Intelligence initiatives within the Department of Defense (DoD). The panel included Peter Highnam, Deputy Director of the Defense Advanced Research Projects Agency; Michael Brown, DoD Defense Innovation Unit Director; and Lieutenant General John Shanahan, director of the Joint Artificial Intelligence Center (JAIC). The panel broadly testified to senators that AI will significantly transform DoD's capabilities and that it is critical the U.S. remain competitive with China and Russia in developing AI applications.

Dr. Peter T. Highnam on DARPA's achievements and future goals

Dr. Peter T. Highnam, Deputy Director, Defense Advanced Research Projects Agency, talked about DARPA's significant role in the development of AI technologies that have produced game-changing capabilities for the Department of Defense and beyond. In his testimony, he mentions, "DARPA's AI Next effort is simply a continuing part of its historic investment in the exploration and advancement of AI technologies."

Dr. Highnam highlighted different waves of AI technologies. The first wave, which began nearly 70 years ago, emphasized handcrafted knowledge: computer scientists constructed so-called expert systems that captured rules that the system could then apply to situations of interest. However, handcrafting rules was costly and time-consuming. The second wave brought in machine learning, which applies statistical and probabilistic methods to large data sets to create generalized representations that can be applied to future samples. This, however, requires training deep (artificial) neural networks on a variety of classification and prediction tasks, which only works when adequate historical data is available. Therein lies the rub: collecting, labelling, and vetting the data on which to train is prohibitively costly and time-consuming as well.

He says, "DARPA envisions a future in which machines are more than just tools that execute human programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools." Towards this end, DARPA is focusing its investments on a "third wave" of AI technologies that brings forth machines that can reason in context. Incorporating these technologies in military systems that collaborate with warfighters will facilitate better decisions in complex, time-critical, battlefield environments; enable a shared understanding of massive, incomplete, and contradictory information; and empower unmanned systems to perform critical missions safely and with high degrees of autonomy. DARPA's more than $2 billion "AI Next" campaign, announced in September 2018, includes providing robust foundations for second wave technologies, aggressively applying second wave AI technologies in appropriate systems, and exploring and creating third wave AI science and technologies. DARPA's third wave research efforts will forge new theories and methods that will make it possible for machines to adapt contextually to changing situations, advancing computers from tools to true collaborative partners. Furthermore, the agency will be fearless about exploring these new technologies and their capabilities – DARPA's core function – pushing critical frontiers ahead of our nation's adversaries. To know more about this in detail, read Dr. Peter T. Highnam's complete statement.
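To make the contrast between the first two waves a little more concrete, here is a minimal, purely illustrative Python sketch (not taken from the testimony): the first function encodes handcrafted expert rules, while the second learns a decision boundary from labelled examples. The feature names, thresholds, and toy data are invented for illustration only.

# A minimal illustration of "first wave" (handcrafted rules) versus
# "second wave" (statistical machine learning) AI. All data, thresholds,
# and feature names here are invented purely for demonstration.

from sklearn.linear_model import LogisticRegression

# --- First wave: an expert system hand-codes the decision rules ---
def rule_based_classifier(size_m: float, speed_mps: float) -> str:
    # Handcrafted knowledge: an expert wrote these thresholds by hand.
    if size_m > 10 and speed_mps < 30:
        return "cargo"
    if speed_mps >= 30:
        return "fast mover"
    return "unknown"

# --- Second wave: the same decision is learned from labelled samples ---
X = [[12.0, 20.0], [15.0, 25.0], [3.0, 80.0], [2.5, 95.0]]  # [size, speed]
y = ["cargo", "cargo", "fast mover", "fast mover"]          # historical labels

model = LogisticRegression().fit(X, y)  # statistics replace handwritten rules

print(rule_based_classifier(11.0, 22.0))  # answer from the handwritten rules
print(model.predict([[11.0, 22.0]])[0])   # answer learned from the data

The point of the contrast is the one Dr. Highnam makes: the rules on the left are cheap to run but expensive to author and brittle, while the learned model shifts the cost to collecting and labelling the training data.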
Michael Brown on (Defense Innovation Unit) DIU’s efforts in Artificial Intelligence Michael Brown, Director of the Defense Innovation Unit, started the talk by highlighting on the fact how China and Russia are investing heavily to become dominant in AI.  “By 2025, China will aim to achieve major breakthroughs in AI and increase its domestic market to reach $59.6 billion (RMB 400 billion) To achieve these targets, China’s National Development and Reform Commission (China’s industrial policy-making agency) funded the creation of a national AI laboratory, and Chinese local governments have pledged more than $7 billion in AI funding”, Brown said in his statement. He said that these Chinese firms are in a way leveraging U.S. talent by setting up research institutes in the state, investing in U.S. AI-related startups and firms, recruiting U.S.-based talent, and commercial and academic partnerships. Brown said that DIU will engage with DARPA and JAIC(Joint Artificial Intelligence Center) and also make its commercial knowledge and relationships with potential vendors available to any of the Services and Service Labs. DIU also anticipates that with its close partnership with the JAIC, DIU will be at the leading edge of the Department’s National Mission Initiatives (NMIs), proving that commercial technology can be applied to critical national security challenges via accelerated prototypes that lay the groundwork for future scaling through JAIC. “DIU looks to bring in key elements of AI development pursued by the commercial sector, which relies heavily on continuous feedback loops, vigorous experimentation using data, and iterative development, all to achieve the measurable outcome, mission impact”, Brown mentions. DIU’s AI portfolio team combines depth of commercial AI, machine learning, and data science experience from the commercial sector with military operators. However, they have specifically prioritized projects that address three major impact areas or use cases which employ AI technology, including: Computer vision The DIU is prototyping computer vision algorithms in humanitarian assistance and disaster recovery scenarios. “This use of AI holds the potential to automate post-disaster assessments and accelerate search and rescue efforts on a global scale”, Brown said in his statement. Large dataset analytics and predictions DIU is prototyping predictive maintenance applications for Air Force and Army platforms. For this DIU plans to partner with JAIC to scale this solution across multiple aircraft platforms, as well as ground vehicles beginning with DIU’s complementary predictive maintenance project focusing on the Army’s Bradley Fighting Vehicle. Brown says this is one of DIU’s highest priority projects for FY19 given its enormous potential for impact on readiness and reducing costs. Strategic reasoning DIU is prototyping an application from Project VOLTRON that leverages AI to reason about high-level strategic questions, map probabilistic chains of events, and develop alternative strategies. This will make DoD owned systems more resilient to cyber attacks and inform program offices of configuration errors faster and with fewer errors than humans. Know more about what more DIU plans in partnership with DARPA and JAIC, in detail, in Michael Brown’s complete testimony. 
Lieutenant General Jack Shanahan on making JAIC “AI-Ready” Lieutenant General Jack Shanahan, Director, Joint Artificial Intelligence Center, touches upon  how the JAIC is partnering with the Under Secretary of Defense (USD) Research & Engineering (R&E), the role of the Military Services, the Department’s initial focus areas for AI delivery, and how JAIC is supporting whole-of-government efforts in AI. “To derive maximum value from AI application throughout the Department, JAIC will operate across an end-to-end lifecycle of problem identification, prototyping, integration, scaling, transition, and sustainment. Emphasizing commerciality to the maximum extent practicable, JAIC will partner with the Services and other components across the Joint Force to systematically identify, prioritize, and select new AI mission initiatives”, Shanahan mentions in his testimony. The AI capability delivery efforts that will go through this lifecycle will fall into two categories including National Mission Initiatives (NMI) and Component Mission Initiatives (CMI). NMI is an operational or business reform joint challenge, typically identified from the National Defense Strategy’s key operational problems and requiring multi-service innovation, coordination, and the parallel introduction of new technology and new operating concepts. On the other hand, Component Mission Initiatives (CMI) is a component-level challenge that can be solved through AI. JAIC will work closely with individual components on CMIs to help identify, shape, and accelerate their Component-specific AI deployments through: funding support; usage of common foundational tools, libraries, cloud infrastructure; application of best practices; partnerships with industry and academia; and so on. The Component will be responsible for identifying and implementing the organizational structure required to accomplish its project in coordination and partnership with the JAIC. Following are some examples of early NMI’s by JAIC to deliver mission impact at speed, demonstrate the proof of concept for the JAIC operational model, enable rapid learning and iterative process refinement, and build their library of reusable tools while validating JAIC’s enterprise cloud architecture. Perception Improve the speed, completeness, and accuracy of Intelligence, Surveillance, Reconnaissance (ISR) Processing, Exploitation, and Dissemination (PED). Shanahan says Project Maven’s efforts are included here. Predictive Maintenance (PMx) Provide computational tools to decision-makers to help them better forecast, diagnose, and manage maintenance issues to increase availability, improve operational effectiveness, and ensure safety, at a reduced cost. Humanitarian Assistance/Disaster Relief (HA/DR) Reduce the time associated with search and discovery, resource allocation decisions, and executing rescue and relief operations to save lives and livelihood during disaster operations. Here, JAIC plans to apply lessons learned and reusable tools from Project Maven to field AI capabilities in support of federal responses to events such as wildfires and hurricanes—where DoD plays a supporting role. Cyber Sensemaking Detect and deter advanced adversarial cyber actors who infiltrate and operate within the DoD Information Network (DoDIN) to increase DoDIN security, safeguard sensitive information, and allow warfighters and engineers to focus on strategic analysis and response. 
Shanahan states, “Under the DoD CIO’s authorities and as delineated in the JAIC establishment memo, JAIC will coordinate all DoD AI-related projects above $15 million annually.” “It does mean that we will start to ensure, for example, that they begin to leverage common tools and libraries, manage data using best practices, reflect a common governance framework, adhere to rigorous testing and evaluation methodologies, share lessons learned, and comply with architectural principles and standards that enable scale”, he further added. To know more about this in detail, read Lieutenant General Jack Shanahan’s complete testimony. To know more about this news in detail, watch the entire hearing on 'Artificial Intelligence Initiatives within the Department of Defense' So, you want to learn artificial intelligence. Here’s how you do it. What can happen when artificial intelligence decides on your loan request Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant
article-image-applying-modern-css-to-create-react-app-projects-tutorial
Bhagyashree R
18 Mar 2019
13 min read

Applying Modern CSS to Create React App Projects [Tutorial]

Previously with Create React App, you actually didn't have a lot of options to be able to clean things up visually. You were frequently at the whims and mercy of random Cascading Style Sheets (CSS) project maintainers, and trying to get other libraries, frameworks, or preprocessors involved in the project compilation process was frequently a nightmare. A preprocessor in the context of Create React App is basically one of the steps in the build process. In this case, we're talking about something that takes some of the style code (CSS or another format), compiles it down to basic CSS, and adds it to the output of the build process. This article is taken from the book  Create React App 2 Quick Start Guide by Brandon Richey. This book is intended for those who want to get intimately familiar with the Create React App tool. It covers all the commands in Create React App and all of the new additions in version 2.  To follow along with the examples implemented in this article, you can download the code from the book’s GitHub repository. Over the span of this article, we'll be covering materials that span the gamut of style-related functionality and highlighting what is, in my mind, one of the best new features in Create React App: support for CSS Modules and SASS. Introducing CSS Modules CSS Modules give you the ability to modularize any CSS code that you import in a way that prevents introducing global, overlapping namespaces, despite the fact that the end result is still just one giant CSS file. Better project organization Let's start off by cleaning up our directory structure in our project a little bit better. What we're going to do is just separate out each component that has CSS and JavaScript code into their own folders. Let's first create NewTodo, Todo, App, TodoList, and Divider folders and place all of their related code in each of those. We'll also need to create a new file in each of these directories called index.js, which will be responsible for only importing and exporting the appropriate component. For example, the App index file (src/App/index.js) will look like this: import App from "./App"; export default App; The new index file of Todo (src/Todo/index.js) will look like this: import Todo from "./Todo"; export default Todo; You can probably guess what the index files NewTodo, TodoList, and Divider will look like as well, based on this pattern. Next, we'll need to change each place that these files are referenced to make it easier to import all of them. This will unfortunately be a little bit of grunt work, but we'll need to do it all the same to make sure we don't break anything in the process. First, in src/App/App.js, change the TodoList import component to the following: import TodoList from "../TodoList"; There's nothing we need to do for Divider since it is a component with no imports. NewTodo and Todo are of a similar type, so we can skip them as well. src/TodoList/TodoList.js, on the other hand, has a lot we need to deal with, since it's one of our highest-level components and imports a lot: import Todo from "../Todo"; import NewTodo from "../NewTodo"; import Divider from "../Divider"; But that's not all. Our test file, src/TodoList/TodoList.test.js, also needs to be modified to include these new paths for our files or else our tests will fail! 
We'll need nearly the same list of imports as earlier: import TodoList from "./TodoList"; import NewTodo from "../NewTodo"; import Todo from "../Todo"; Now, when you reload your application, your code should still be working just fine, your tests should all pass, and everything should be cleanly separated out! Our full project structure should now look like this: src/ App/ App.css App.js App.test.js index.js Divider/ Divider.css Divider.js index.js NewTodo/ NewTodo.css NewTodo.js NewTodo.test.js index.js Todo/ Todo.css Todo.js Todo.test.js index.js TodoList/ TodoList.css TodoList.js TodoList.test.js index.js index.css index.js setupTests.js ... etc ... Introducing CSS Modules to our application If we want to use CSS Modules, there are a few simple guidelines that we need to follow. The first is that we need to name our files [whatever].module.css, instead of [whatever].css. The next thing we need to do is to make sure that our styles are named simply and are easy to reference. Let's start off by following these conventions and by renaming our CSS file for Todo as src/Todo/Todo.module.css, and then we'll change the contents a tiny bit: .todo { border: 2px solid black; text-align: center; background: #f5f5f5; color: #333; margin: 20px; padding: 20px; } .done { background: #f5a5a5; } Next, we'll open up src/Todo/Todo.js to take advantage of CSS Modules instead. We created a helper function in our Todo component called cssClasses(), which returns the styles we should be using in our component, and there's not much we need to change to make this all work exactly the same as earlier. We'll need to change our import statement at the top as well, since we renamed the file and are changing how our CSS is getting loaded into our code! Take a look at the following: import styles from "./Todo.module.css"; This enables our code to take advantage of any class names defined in Todo.module.css by referencing them as styles.[className]. For example, in the previous file, we defined two CSS class names: todo and done, so we can now reference them in our component via styles.Todo and styles.done. We'll need to change the cssClasses() function to use this, so let's make those exact changes now. In src/Todo/Todo.js, our cssClasses() function should now read as follows: cssClasses() { let classes = [styles.todo]; if (this.state.done) { classes = [...classes, styles.done]; } return classes.join(' '); } Save and reload, and our application should be back to normal! Next, let's change the hr tags inside of the todo components to have their own styles and effects. Head back into src/Todo/Todo.module.css and add the following block for our hr tag, which we'll give a new class of redDivider: .redDivider { border: 2px solid red; } And finally, return back to our render() function in src/Todo/Todo.js, and change our render() function's hr tag inclusion to the following: <hr className={styles.redDivider} /> Save and reload, and now we should have fully compartmentalized CSS code without worrying about collisions and global namespaces! Here's how the output looks like: Composability with CSS Modules That's not all that CSS Modules give us, although it's certainly one of the great parts of CSS Modules that we get immediately and with no fuss. We also get CSS composability, which is the ability to inherit CSS classes off of other classes, whether they're in the main file or not. 
This can be incredibly useful when you're setting up more complicated nested components that all need to handle slightly different style sheets, but are not wildly different from each other. Let's say we want to have the ability to mark some components as critical instead of just regular Todos. We don't want to change too much about the component; we want it to inherit the same basic rules as all of the other Todos. We'll need to set up some code to make this happen. Back in src/Todo/Todo.js, we're going to make some modifications to allow a new state property named critical. We'll start off in the constructor component, where we'll add our new state property and a bind tag for a function: constructor(props) { super(props); this.state = { done: false, critical: false }; this.markAsDone = this.markAsDone.bind(this); this.removeTodo = this.removeTodo.bind(this); this.markCritical = this.markCritical.bind(this); } We add a new critical property in our state property and set it to a default value of false. Then we also reference a function (which we haven't written yet) called markCritical, and we bind this, since we'll be using it in an event handler later. Next, we'll tackle the markCritical() function: markCritical() { this.setState({ critical: true }); } We'll also need to modify our cssClasses() function so that it can react to this new state property. To demonstrate the composability function of CSS Modules, we'll set it so that classes is originally an empty array, and then the first item either becomes critical or todo, depending on whether or not the item is marked as critical: cssClasses() { let classes = []; if (this.state.critical) { classes = [styles.critical]; } else { classes = [styles.todo]; } if (this.state.done) { classes = [...classes, styles.done]; } return classes.join(' '); } And finally, in our render function, we'll create the button tag to mark items as critical: render() { return ( <div className={this.cssClasses()}> {this.props.description} <br /> <hr className={styles.hr} /> <button className="MarkDone" onClick={this.markAsDone}> Mark as Done </button> <button className="RemoveTodo" onClick={this.removeTodo}> Remove Me </button> <button className="MarkCritical" onClick={this.markCritical}> Mark as Critical </button> </div> ); } We're not quite done yet, although we're at least 90% of the way there. We'll also want to go back to src/Todo/Todo.module.css and add a new block for the critical class name, and we'll use our composable property as well: .critical { composes: todo; border: 4px dashed red; } To use composition, all you need to do is add a new CSS property called composes and give it a class name (or multiple class names) that you want it to compose. Compose, in this case, is a fancy way of saying that it inherits the behavior of the other class names and allows you to override others. In the previous case, we're saying critical is a CSS module class that is composed of a todo model as the base, and adds a border component of a big red dashed line since, well, we'll just say that this means it is critical. Save and reload, as always, and you should be able to mark items as Mark as Done, Mark as Critical, or both, or remove them by clicking Remove Me, as in the following screenshot: And that about covers it for our brief introduction to CSS Modules! Before you move on, you'll also want to quickly update your snapshots for your tests by hitting U in the yarn test screen. Introducing SASS to our project SASS is essentially CSS with extended feature support. 
When I say extended feature support here, though, I mean it! SASS supports the following feature set, which is missing in CSS: Variables Nesting Partial CSS files Import support Mixins Extensions and inheritance Operators and calculations Installing and configuring SASS The good news is that getting SASS support working in a Create React App project is incredibly simple. We first need to install it via yarn, or npm. $ yarn add node-sass We'll see a ton of output from it, but assuming there are no errors and everything goes well, we should be able to restart our development server and get started with some SASS. Let's create a more general utility SASS file that will be responsible for storing standardized colors that we'll want to use throughout our application, and something to store that neat gradient hr pattern in case we want to use it elsewhere. We'll also change some of the colors that we're using so that there is some red, green, and blue, depending on whether the item is critical, done, or neither, respectively. In addition, we'll need to change up our project a little bit and add a new file to have a concept of some shared styles and colors. So, let's begin: Create a new file, src/shared.scss, in our project and give it the following body: $todo-critical: #f5a5a5; $todo-normal: #a5a5f5; $todo-complete: #a5f5a5; $fancy-gradient: linear-gradient( to right, rgba(0, 0, 0, 0), rgba(0, 0, 0, 0.8), rgba(0, 0, 0, 0) ); Next, hop over to src/Divider/Divider.css and rename the file to src/Divider/Divider.scss. Next, we'll change the reference to Divider.css in src/Divider/Divider.js, as follows: import "./Divider.scss"; Now we'll need to change up the code in Divider.scss to import in our shared variables file and use a variable as part of it: @import "../shared"; hr { border: 0; height: 1px; background-image: $fancy-gradient; } So, we import in our new shared SASS file in src/, and then the background-image value just references the $fancy-gradient variable that we created, which means we can now recreate that fancy gradient whenever we need to without having to rewrite it over and over. Save and reload, and you should see that nothing major has changed. Mixing SASS and CSS Modules The good news is that it's basically no more complicated to introduce SASS to CSS Modules in Create React App. In fact, the steps are borderline identical! So, if we want to start mixing the two, all we need to do is rename some files and change how our imports are handled. Let's see this in action: First, head back to our src/Todo/Todo.module.css file and make a very minor modification. Specifically, let's rename it src/Todo/Todo.module.scss. Next, we need to change our import statement in src/Todo/Todo.js, otherwise the whole thing will fall apart: import styles from "./Todo.module.scss"; Now, we should have our SASS working for CSS Modules with the Todo component, so let's start taking advantage of it. Again, we'll need to import our shared file into this SASS file as well. Note the following back in src/Todo/Todo.module.scss: @import '../shared'; Next, we'll need to start changing the references to our various background colors. We'll change the background for regular Todos to $todo-normal. Then, we'll change the finished Todo background to $todo-complete. 
Finally, we'll want to change the critical items to $todo-critical: .todo { border: 2px solid black; text-align: center; background: $todo-normal; color: #333; margin: 20px; padding: 20px; } .done { background: $todo-complete; } .hr { border: 2px solid red; } .critical { composes: todo; background: $todo-critical; } Save and reload our project, and let's make sure the new color scheme is being respected: Now, we have CSS Modules and SASS  integrated nicely in our Create React App project without having to install a single new dependency. We have them playing nicely together even, which is an even greater achievement! If you found this post useful, do check out the book, Create React App 2 Quick Start Guide. In addition to getting familiar with Create React App 2, you will also build modern, React projects with, SASS, and progressive web applications. React Native 0.59 is now out with React Hooks, updated JavaScriptCore, and more! React Native Vs Ionic: Which one is the better mobile app development framework? How to create a native mobile app with React Native [Tutorial]

article-image-designing-a-multi-cloud-environment-with-iaas-paas-and-saas-tutorial
Melisha Dsouza
17 Mar 2019
15 min read

Designing a Multi-Cloud Environment with IaaS, PaaS, and SaaS [Tutorial] 

In this tutorial, you will understand a scenario that describes how to use solutions from different cloud providers and frameworks. You will learn how to interact with and create a design to fit into the requirements that will be as transparent as possible to the end customer. We will conclude the tutorial by designing a real-world scenario with Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), in multi-cloud environments (private, public, and hybrid). This tutorial is an excerpt from a book written by Florian Klaffenbach, Markus Klein, Suresh Sundaresan titled Multi-Cloud for Architects. This book is a practical step-by-step guide that will teach you to architect effective Cloud computing solutions and services efficiently. Design guidelines for multi-cloud solutions To design a multi-cloud environment you will need: Network connectivity between the different clouds A single identity management solution for all servers Supported application for georedundancy Containerization As virtual machines are complex and not easy to manage, there is often a requirement to bring in flexibility to custom IT services. This is where containerization comes into play. The concept of containers is defined as follows: a container is an infrastructure independent sandbox running in a container environment, without an operating system in the container. You can easily move containers between environments, and you can scale by adding another container to an existing environment. Items that are usually bundled into a container include the following: Applications Dependencies Libraries Binaries Configuration files Container services have been an approach of different solutions in the past and even came from the community. In the meantime, the following container solutions survived. Cloud Foundry Cloud Foundry was mainly developed by VMware and later by Pivotal. As you can see in the following diagram, the basic design is easy to understand. There are different layers of services that are split to scale and communicate between them: Cloud Foundry provides the ability to run containers independent of the underlying programming language or framework. It provides so-called service brokers that provide a defined connection to existing PaaS solutions from cloud vendors (for example, MySQL from Azure, DB2 from AWS, and so on). It is completely integrated into the CI/CD pipeline of development, and therefore, it has a lot of users from the DevOps parties. Docker Docker is a software solution that works based on containers. A container is defined as an isolated environment containing applications, tools, configuration files, and libraries. All of them run on a single operating system Kernel without guest operating systems, and we know from virtual machines. If you run container in scale, you will need to define an orchestration solution. In today's Docker environments provided by public cloud vendors, you will find Kubernetes as the management, orchestration, and scaling solution shown as follows: As you can see in the preceding diagram, there are different layers that ensure that the Docker Engine can communicate to the rest of the services and provide a defined API, internally and externally. Each of these layers and design parts is responsible for an important part of the product. OpenShift OpenShift is a container solution, with Kubernetes as the orchestrator, that runs on the RedHat operating system. 
It is owned by IBM: OpenShift is comparable to Docker itself, but it has some modifications that were designed by RedHat. They have been explicitly implemented into the OS itself. Microservices The technical basis for working with containers is a microservice application architecture. This means that each application should be sliced into the smallest possible (but scalable) services. These services are then moved to containers. To scale a specific part of an application, another container is switched on, and a load balancer, sitting before the microservices container, is responsible for integrating the new container into the application life cycle. The concept of microservices is illustrated in the following diagram: As you can see in the preceding diagram, there is an application frontend, API services, and a background database in this sample design. So we have services talking to the user and acting as the frontend. There is a layer in between for communication and translation and we can find a third layer, which is the database service itself. Docker Services as a Service Depending on the cloud vendor, you will find some, or even all, of the following services as PaaS offerings: Container registries for hosting the container images Container hosts/instances for running the container images Container orchestrators (generally based on Kubernetes) to orchestrate the images Regardless of the overall design of your cloud solution, you will be able to integrate these from different cloud vendors. This means that you can spread the same containers to different clouds, hosted by different vendors, and decide where you would like to place your registry, and where your Kubernetes orchestrator should have its home address. Best practices Regarding best practices for your PaaS container design, you should make sure that you find all the required parts for a solution at one or many public cloud vendors. If we set the networking connectivity with good performance and low latency, for example, there will be no need to, place our container hosts in different environments to provide better availability for customers consuming the cloud services. For example, if a customer is consuming his cloud services from Azure, it could connect to your service within the Azure backbone. If it comes from Amazon Web Services, this may be its connectivity target. Bear in mind that redundancy requirements cannot be solved with more complexity. Even here, Terraforms can help to design a descriptive language that is cloud independent. A real-world design scenario Now let's take a look at a real-world scenario that involves designing a multi-cloud architecture for a mid-size European company with locations in the United States, Japan, and Moscow, in addition to their worldwide headquarters in Munich. They are working in the medical and health care area and decided to completely move every IT service to the cloud, except their data center in Moscow, as their IT team is located there. But even for this location, their goal is to minimize the on-premises servers and even work in the data center environment using public cloud technology, as this would give them the chance to move the last on-premise servers to a public cloud, if somehow, in the future, their data center needs to move. As of today, the company is running the following services: Active Directory on Windows Server 2012 R2 with four domain controllers' in the IT headquarters in Moscow. Each location has two domain controllers' on-premises. 
480 member servers, running Windows Server 2008 and higher. 280 Ubuntu 17 servers. VMware, as a virtualization layer. Each server is a virtual machine; even their firewalls (checkpoint) are virtual machines. The company's network is MPLS-based, operated by AT&T. They have a central internet breakout in Moscow. There are about 500 SQL server instances running on Windows and Linux. Each of them is already in the most recent release. About 50 servers are running custom installations developed by the company's development team. They are using Visual Studio for managing their code. About 4200 client devices are being used. Each of them is running Windows 7 and Office 2010. For mobile devices, they are using Apple iPhones. The central solution for business services is SAP, which is currently hosted in their local data center in Germany, and in Moscow as a redundant hot standby environment. They are currently managing their environment by using system center 2012 R2 tools. Their ITSM solution is already running as a SaaS solution in the ServiceNow cloud. This is the only solution that will survive the redesign and even in 2021 and later will run in the ServiceNow cloud. The CEO has decided to have a cloud-first strategy, and all of the cloud migration must be done by the end of 2021, as all the existing data center contracts would be cancelled by then. In addition, they have already decided to implement a two cloud vendor strategy using Microsoft Azure and Amazon Web Services. AWS should mainly be used by their development team, as the chief of development is a former employee of AWS. The head of IT has decided to move all of the infrastructure services mainly to Microsoft Azure. Suppose that you are the responsible, external consultant, helping to design the new IT environment. Solution design This section will describe the project and the final design of the new company's IT environment, based on Azure and AWS technology. Preparations One of the most important steps, when starting to use cloud services is to define cloud governance. Regarding which cloud vendor you decide to use, basically, they are all the same. With Amazon Web Services, the Cloud Adoption Framework looks as follows: With AWS, as the customer, have to work through each of the points, in order to be happy with your cloud governance. With Microsoft Azure Services, there is the Azure Governance Scaffold, as follows: These are the main points that you will need to define your governance and work with Azure properly. Looking at Azure in more detail, we will need to decide on a concept for the following components: As you can see in the preceding diagram, there are different portals on the business side (the Enterprise Portal and the Account Portal), and then a third one to manage the technologies themselves (the Management Portal). If you would like to work with code (JSON), the APIS, and CLI, Visual Studio will be your ideal solution to work with. We will need to merge both of the cloud vendors. For this example, the governance has already been set and we can have a look at the technology itself. Networking Looking at the networking portion of the multi-cloud design, the company decided to work with a partner that supports multi-cloud connections. This means that they, themselves, do not have to manage connectivity. This is being done via remote peering with their networking partner. Our company decided to go with Equinix and Interxion. 
The following diagram shows the Equinix Cloud Exchange Framework: As you can see in the preceding diagram, the customer has connectivity to Equinix and Equinix will provide the customer with connectivity to the cloud vendors of your choice. Let's take a look at Interxion: Interxion works the same way that Equinix does, and it is another vendor to help you solve your multi-cloud networking configurations. The result will be redundancy and independency and even a connection to the local data center in Moscow without any issues to both cloud vendors. Identity management The company has decided to have a single identity management solution based on the technology, they already run on both public clouds, which is Azure Active Directory: As you can see in the preceding diagram, each cloud service (either public or hybrid, using Azure Stack or a similar service) is using Azure AD as a single IDM solution. Based on their security policies, the company has decided to go with Azure AD Connect, using pass through authentication (PTA): The PTA agent is monitoring the IDM queues in the cloud and authenticating the requests locallys transferring back the authentication token. As Azure AD works with AWS, too, there is single identity management solution in place, as follows: For their on-premises cloud environment, the company has decided to go with Azure Stack in a connected mode, in order to leverage Azure AD, too. The design is illustrated in the following diagram: As you can see in the preceding diagram, Azure Stack and Azure behave the same way technologically, and can therefore be integrated into the express route configuration as another Azure cloud. Modern workplace services With the basic cloud environment in place, the company has decided to go with Office 365 for all workplace services, on all client devices. They will be migrated to Office applications on the client computers, using Office servers as SaaS directly from the cloud. This will be a seamless migration for the user accounts, and everything will work as usual, even when the migration is taking place: As the Office 365 license can also be used on iPhones, all of the employees will be fine. Regarding the company's on-premises exchange server, Skype, and SharePoint, they will move these to Office 365 completely and will get rid of the virtual machines that are present today. Infrastructure services Regarding the existing infrastructure services, you have decided to move most of them to Microsoft Azure and to prepare the migration by first identifying which solution can exist as a PaaS, and what needs to reside on a VM in IaaS. To automatically collect all of the required information, you decide to perform an analysis using Azure Migrate, as follows: The vCenter Service will be connected to Azure and will host the migration service itself. I will be responsible for synchronizing, and later managing, the switch of each VM from on-premises to the cloud. For all of the SQL services, there is a solution called Azure SQL Migrate, as follows: As a result of these cloud services, you will be able to figure out if there are still any virtual machines running SQL. In general, about 80-90% of the SQL servers in the existing design can be moved to PaaS solutions. Using the results of the migration service, you can get an idea of what the cloud SQL solution will look like. It will also help you to work through each migrating step in an informed way. 
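Before moving on to the SAP environment, here is a small, purely hypothetical Python sketch of what the triage step after such an assessment run could look like: it walks a server inventory and decides which entries are PaaS candidates and which need a lift-and-shift VM. The inventory records, field names, and decision rules are invented for illustration and do not reflect the actual Azure Migrate or Azure SQL Migrate output format.

# Hypothetical post-assessment triage: route each server to a migration target.
# All records and rules below are invented for illustration only.

inventory = [
    {"name": "sql-erp-01", "engine": "SQL Server 2017", "uses_clr": False},
    {"name": "sql-bi-02",  "engine": "SQL Server 2016", "uses_clr": True},
    {"name": "app-web-07", "engine": None,              "uses_clr": False},
]

def migration_target(server: dict) -> str:
    """Very rough routing logic: databases without blocking features go to
    PaaS; everything else stays on a VM (lift and shift)."""
    if server["engine"] is None:
        return "IaaS VM (lift and shift via Azure Site Recovery)"
    if server["uses_clr"]:  # example of a feature that may block a PaaS move
        return "IaaS VM (SQL Server on a VM)"
    return "PaaS (managed SQL database)"

for server in inventory:
    print(f'{server["name"]:>12} -> {migration_target(server)}')

In practice, the real assessment reports would drive rules like these, but the principle is the same: encode the blocking features once, and the inventory sorts itself into PaaS and lift-and-shift buckets.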
For the SAP environment that is currently running on-premises, you decide to migrate to SAP HEC on Azure, based on the existing blueprint design of Microsoft, as follows: About 68% of the VMs can be migrated to Azure seamlessly, without requiring running them as virtual machines anymore. Another 25% of the VMs need to be migrated to Azure using the lift and shift model. The service that you will need to migrate them to Azure is Azure Site Recovery. This service works as follows: For all of the VMs that need to run in the on-premises data centers that will be left after the move to the cloud, you decide to use Azure Stack. As Azure Stack is the on-premises solution of Azure, the process to migrate the VM is nearly the same. As the company's CEO has decided that a temporary placement of a virtual machine in a public Azure cloud for migration tasks is fine, you three-step migration: Migrate the VM from the on-premises VMware to Azure, using Azure Site Recovery Move the VM from Azure to Azure Stack, using Azure Storage Explorer Bring the VM online again, on Azure Stack From a sizing perspective, you decide to use an eight-node Azure Stack environment from the company's favorite hardware vendor. The sizing has been created using the Azure Stack Capacity Planner. As you can see in the following screenshot, it is an Excel sheet with input and output parameters: Setting up new cloud services For each new cloud service that will be deployed, the CIO has decided to go with Terraforms, in order to have a unique description language for all of the resources, regardless of the cloud flavor of a specific vendor. Terraforms provides an easy way to automate the deployment, and to be flexible when moving resources, even between the clouds. Development environment As the CDO is a former employee of Amazon Web Services, and as all of the existing development code is in AWS, there is no need for him to change this: As Jenkins is supported in Azure, too, the development is flexible. The main task is to design the build pipeline using stage environments. If DevOps decides to implement virtual machines for their services, these may also reside on AWS, but due to the underlying single identity and networking design, this really does not matter at all. The only requirement from the CIO is that if the VM is a Windows server and not Linux, it must be placed on Azure, as in Azure, there is an option to save license costs by using Azure Hybrid Benefits. As you can see in the preceding diagram, there are 41% savings using the Hybrid Benefits and reusing the Windows server licenses in the cloud. So, the plan is to demote a server on-premises and enable it in Azure. With this switch of each VM, you will be able to transfer the license itself. Summary In this tutorial, we learned how to use solutions from different cloud providers and frameworks and create a design to fit into the requirements that will be as transparent as possible to the end customer. If you are looking at completely adapting to any Cloud and its services, Multi-Cloud for Architects will be your go-to guide to find perfect solutions irrespective the size of your infrastructure.  Microsoft Cloud services’ DNS outage results in deleting several Microsoft Azure database records VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform MariaDB CEO says big proprietary cloud vendors “strip-mining open-source technologies and companies”

article-image-interpretation-of-functional-apis-in-deep-neural-networks-by-rowel-atienza
Guest Contributor
16 Mar 2019
6 min read

Interpretation of Functional APIs in Deep Neural Networks by Rowel Atienza

Deep neural networks have shown excellent performance in terms of classification accuracy on more challenging established datasets like ImageNet, CIFAR10, and CIFAR100.  This article is an excerpt taken from the book Advanced Deep Learning with Keras authored by Rowel Atienza. This book covers advanced deep learning techniques to create successful AI by using MLPs, CNNs, and RNNs as building blocks to more advanced techniques. You’ll also study deep neural network architectures, Autoencoders, Generative Adversarial Networks (GANs), Variational AutoEncoders (VAEs), and Deep Reinforcement Learning (DRL) critical to many cutting-edge AI results. For conciseness, we’ll discuss two deep networks, ResNet and DenseNet. ResNet introduced the concept of residual learning that enabled it to build very deep networks by addressing the vanishing gradient problem in deep convolutional networks. DenseNet improved this technique further by having every convolution to have direct access to inputs, and lower layers feature maps. Furthermore, DenseNet managed to keep the number of parameters low in deep networks with the use of Bottleneck and Transition layers. Numerous models such as ResNeXt and FractalNet have been inspired by the technique used by these two networks. With the understanding of ResNet and DenseNet, we can use their design guidelines to build our own models. By using transfer learning, we can also take advantage of pre-trained ResNet and DenseNet models for our purposes. In this article, we’ll discuss an important feature of Keras called Functional API. This is an alternative method for building networks in Keras. Functional API enables us to build more complex networks that cannot be accomplished by a sequential model. Functional API is useful in building deep networks such as ResNet and DenseNet. Functional API model in Keras In the sequential model, a layer is stacked on top of another layer. Generally, the model is accessed through its input and output layers. There is no simple mechanism if we want to add an auxiliary input at the middle of the network or extract an auxiliary output before the last layer. Furthermore, the sequential model does not support graph-like models or models that behave like Python functions. It is also not straightforward to share layers between the two models. Such limitations are addressed by functional API. Functional API is guided by the following concepts: A layer is an instance that accepts a tensor as an argument. The output of a layer is another tensor. To build a model, layer instances are objects that are chained to one another through input and output tensors. This has a similar end-result as stacking multiple layers in the sequential model. However, using layer instances makes it easier for models to have auxiliary or multiple inputs and outputs since the input/output of each layer is readily accessible. A model is a function between one or more input tensors and one or more output tensors. In between the model input and output, tensors are the layer instances that are chained to one another by layer input and output tensors. A model is, therefore, a function of one or more input layers and one or more output layers. The model instance formalizes the computational graph on how the data flows from input(s) to output(s). After building the functional API model, training and evaluation are performed by the same functions used in the sequential model. 
To illustrate, in functional API, a 2D convolutional layer, Conv2D, with 32 filters and with x as the layer input tensor and y as the layer output tensor can be written as: y = Conv2D(32)(x) We can stack multiple layers to build our models. For example, we can rewrite the CNN on MNIST code as shown in Listing 2.1.1. Listing 2.1.1 cnn-functional-2.1.1.py: Converting cnn-mnist-1.4.1.py code using functional API: import numpy as np from keras.layers import Dense, Dropout, Input from keras.layers import Conv2D, MaxPooling2D, Flatten from keras.models import Model from keras.datasets import mnist from keras.utils import to_categorical # load MNIST dataset (x_train, y_train), (x_test, y_test) = mnist.load_data() # from sparse label to categorical num_labels = np.amax(y_train) + 1 y_train = to_categorical(y_train) y_test = to_categorical(y_test) # reshape and normalize input images image_size = x_train.shape[1] x_train = np.reshape(x_train,[-1, image_size, image_size, 1]) x_test = np.reshape(x_test,[-1, image_size, image_size, 1]) x_train = x_train.astype('float32') / 255 x_test = x_test.astype('float32') / 255 # network parameters input_shape = (image_size, image_size, 1) batch_size = 128 kernel_size = 3 filters = 64 dropout = 0.3 # use functional API to build cnn layers inputs = Input(shape=input_shape) y = Conv2D(filters=filters, kernel_size=kernel_size, activation='relu')(inputs) y = MaxPooling2D()(y) y = Conv2D(filters=filters, kernel_size=kernel_size, activation='relu')(y) y = MaxPooling2D()(y) y = Conv2D(filters=filters, kernel_size=kernel_size, activation='relu')(y) # image to vector before connecting to dense layer y = Flatten()(y) # dropout regularization y = Dropout(dropout)(y) outputs = Dense(num_labels, activation='softmax')(y) # build the model by supplying inputs/outputs model = Model(inputs=inputs, outputs=outputs) # network model in text model.summary() # classifier loss, Adam optimizer, classifier accuracy model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # train the model with input images and labels model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=20, batch_size=batch_size) # model accuracy on test dataset score = model.evaluate(x_test, y_test, batch_size=batch_size) print("\nTest accuracy: %.1f%%" % (100.0 * score[1])) By default, MaxPooling2D uses pool_size=2, so the argument has been removed. In Listing 2.1.1, every layer is a function of a tensor. Every layer generates a tensor as output which becomes the input to the next. To create the model, we can call Model() and supply the inputs and outputs tensors or lists of tensors. Everything else is the same. The model in Listing 2.1.1 can be trained and evaluated using fit() and evaluate() functions similar to the sequential model. The sequential class is, in fact, a subclass of Model class. Please note that we inserted the validation_data argument in the fit() function to see the progress of validation accuracy during training. The accuracy ranges from 99.3% to 99.4% in 20 epochs. To learn how to create a model with two inputs and one output you can head over to the book. In this article, we have touched base with an important feature of Keras, the functional API model. We simply covered the necessary materials needed to build deep networks like ResNet and DenseNet. To learn more about the function API model and Keras in deep learning, you can explore the book Advanced Deep Learning with Keras by Rowel Atienza. 
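Before moving on, here is a quick, illustrative sketch of the two-input, one-output case mentioned above. The input shapes, layer sizes, and activations are arbitrary choices for demonstration, and this is not the book's own Y-network example; it simply shows how multiple input tensors are wired into a single Model.

# A minimal two-input, one-output model built with the Keras functional API.
from keras.layers import Input, Dense, concatenate
from keras.models import Model

# Two independent input branches (shapes chosen arbitrarily for illustration)
left_input = Input(shape=(32,))
right_input = Input(shape=(32,))

left_branch = Dense(16, activation='relu')(left_input)
right_branch = Dense(16, activation='relu')(right_input)

# Merge the two branches and map them to a single output
merged = concatenate([left_branch, right_branch])
outputs = Dense(1, activation='sigmoid')(merged)

# A model is a function of its input and output tensors
model = Model(inputs=[left_input, right_input], outputs=outputs)
model.summary()

# Training later takes a list of arrays, one per input, for example:
# model.fit([x_left, x_right], y, epochs=10, batch_size=32)

Because each layer instance exposes its input and output tensors, branching and merging like this is straightforward in the functional API, whereas it cannot be expressed with a purely sequential model.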
Build a Neural Network to recognize handwritten numbers in Keras and MNIST Train a convolutional neural network in Keras and improve it with data augmentation [Tutorial] Generative Adversarial Networks: Generate images using Keras GAN [Tutorial]  

article-image-keeping-animations-running-at-60-fps-in-a-react-native-app-tutorial
Sugandha Lahoti
15 Mar 2019
4 min read

Keeping animations running at 60 FPS in a React Native app [Tutorial]

An important aspect of any quality mobile app is the fluidity of the user interface. Animations are used to provide a rich user experience, and any jank or jitter can negatively affect this. Animations will likely be used for all kinds of interactions, from changing between views, to reacting to a user's touch interaction on a component. The second most important factor for high-quality animations is to make sure that they do not block the JavaScript thread. To keep animations fluid and not interrupt UI interactions, the render loop has to render each frame in 16.67 ms, so that 60 FPS can be achieved. In this recipe, we will take a look at several techniques for improving the performance of animations in a React Native mobile app. These techniques focus in particular on preventing JavaScript execution from interrupting the main thread. This article is taken from the book React Native Cookbook, Second Edition by Dan Ward.  In this book, you will improve your React Native mobile development skills and learn how to transition from web development to mobile development. For this post, we'll assume that you have a React Native app that has some animations defined. How to do it First and foremost, when debugging animation performance in React Native, we'll want to enable the performance monitor. To do so, show the Dev Menu (shake the device or cmd + D from the simulator) and tap Show Perf Monitor. The output in iOS will look something like the following screenshot: The output in Android will look something like the following screenshot: If you are looking to animate a component's transition (opacity) or dimensions (width, height), then make sure to use LayoutAnimation. If you want to use LayoutAnimation on Android, you need to add the following code when your application starts: UIManager.setLayoutAnimationEnabledExperimental && UIManager.setLayoutAnimationEnabledExperimental(true). If you need finite control over the animations, it is recommended that you use the Animated library that comes with React Native. This library allows you to offload all of the animation work onto the native UI thread. To do so, we have to add the useNativeDriver property to our Animated call. Let's take a sample Animated example and offload it to the native thread: componentWillMount() { this.setState({ fadeAnimimation: new Animated.Value(0) }); } componentDidMount() { Animated.timing(this.state.fadeAnimimation, { toValue: 1, useNativeDriver: true }).start(); } Currently, only a subset of the functionality of the Animated library supports native offloading. Please refer to the There's more section for a compatibility guide. If you are unable to offload your animation work onto the native thread, there is still a solution for providing a smooth experience. We can use the InteractionManager to execute a task after the animations have completed: componentWillMount() { this.setState({ isAnimationDone: false }); } componentWillUpdate() { LayoutAnimation.easeInAndOut(); } componentDidMount() { InteractionManager.runAfterInteractions(() => { this.setState({ isAnimationDone: true }); }) } render() { if (!this.state.isAnimationDone) { return this.renderPlaceholder(); } return this.renderMainScene(); } Finally, if you are still suffering from poor performance, you'll have to either rethink your animation strategy or implement the poorly performing view as a custom UI view component on the target platform(s). You will have to implement both your view and animation natively using the iOS and/or Android SDK. 
How it works

The tips in this recipe focus on the simple goal of preventing the JavaScript thread from locking. The moment our JavaScript thread begins to drop frames (lock), we lose the ability to interact with our application, even if it's only for a fraction of a second. It may seem inconsequential, but the effect is felt immediately by a savvy user. The focus of the tips in this post is to offload animations onto the GPU. When the animation is running on the main thread (the native layer, rendered by the GPU), the user can interact with the app freely without stuttering, hanging, jank, or jitters.

There's more

Here's a quick reference for where useNativeDriver is usable (a short sketch putting a few of these to work follows the reference links below):

Function                   iOS    Android
style, value, properties   √      √
decay                      √
timing                     √      √
spring                     √
add                        √      √
multiply                   √      √
modulo                     √
diffClamp                  √      √
interpolate                √      √
event                      √
division                   √      √
transform                  √      √

If you liked this post, support the author by reading the book React Native Cookbook, Second Edition for enhancing your React Native mobile development skills.

React Native 0.59 is now out with React Hooks, updated JavaScriptCore, and more!
React Native community announce March updates, post sharing the roadmap for Q4
How to create a native mobile app with React Native [Tutorial]
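As promised above, here is a brief sketch that exercises several rows from the table (event, diffClamp, interpolate, and transform) to build the classic collapsing-header pattern. The component, heights, and colors are our own illustrative choices, not code from the book, and note that, per the table, native-driven Animated.event may not be available on every platform in the React Native version the book targets:

import React, { Component } from 'react';
import { Animated, StyleSheet, Text, View } from 'react-native';

const HEADER_HEIGHT = 56;

// Illustrative list screen with a header that hides as the user scrolls.
// event, diffClamp, interpolate, and transform all stay on the native side
// once useNativeDriver is enabled.
export default class CollapsingHeaderScreen extends Component {
  scrollY = new Animated.Value(0);
  // diffClamp keeps the value inside 0..HEADER_HEIGHT no matter how far
  // the list has been scrolled.
  clampedScrollY = Animated.diffClamp(this.scrollY, 0, HEADER_HEIGHT);

  render() {
    const headerTranslate = this.clampedScrollY.interpolate({
      inputRange: [0, HEADER_HEIGHT],
      outputRange: [0, -HEADER_HEIGHT],
    });

    return (
      <View style={styles.container}>
        <Animated.ScrollView
          scrollEventThrottle={16}
          onScroll={Animated.event(
            [{ nativeEvent: { contentOffset: { y: this.scrollY } } }],
            { useNativeDriver: true }
          )}
        >
          {/* ...list content... */}
        </Animated.ScrollView>
        <Animated.View
          style={[styles.header, { transform: [{ translateY: headerTranslate }] }]}
        >
          <Text style={styles.title}>Collapsing header</Text>
        </Animated.View>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: { flex: 1 },
  header: {
    position: 'absolute',
    top: 0,
    left: 0,
    right: 0,
    height: HEADER_HEIGHT,
    backgroundColor: '#343a40',
    justifyContent: 'center',
    paddingHorizontal: 16,
  },
  title: { color: '#ffffff', fontSize: 18 },
});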

#GooglePayoutsForAll: A digital protest against Google’s $135 million execs payout for misconduct

Natasha Mathur
14 Mar 2019
6 min read
The Google Walkout for Real Change group tweeted out their protest on Twitter earlier this week against the news of Google confirming that it paid $135 million in exit packages to the two top execs accused of sexual assault. The group castigated the 'multi-million dollar payouts' and asked people to use the hashtag #GooglePayoutsForAll to demonstrate different and better ways this obscenely large amount of 'hush money' could have been used.

https://twitter.com/GoogleWalkout/status/1105556617662214145

The news of Google paying its senior execs, namely, Amit Singhal (former Senior VP of Google search) and Andy Rubin (creator of Android), high exit packages was first highlighted in a report by the New York Times last October. As per the report, Google paid $90 million to Rubin and $15 million to Singhal. A lawsuit filed on Monday this week by James Martin, an Alphabet shareholder, further confirmed this news. The lawsuit states that this decision taken by the directors of Alphabet caused significant financial harm to the company, apart from deteriorating its reputation, goodwill, and market capitalization.

Meredith Whittaker, one of the early organizers of the Google Walkout last November, tweeted, "$135 million could fix Flint's water crisis and still have $80 million left." Vicki Tardif, another Googler, summed up the sentiments in her tweet, "$135M is 1.35 times what Google.org gave out in grants in 2016." An ACLU researcher pointed out that, in addition to feeding the hungry, housing the homeless, and paying off some student loans, $135M could also support local journalism killed by online ads.

The public support for the call to protest using the hashtag #GooglePayoutsForAll has been awe-inspiring. Some shared their stories of injustice in cases of sexual assault, some condemned Google for its handling of sexual misconduct, while others put the amount of money Google wasted on these execs into a larger perspective.

Better ways Google could have used the $135 million it wasted on exec payouts, according to Twitter

Invest in people to reduce structural inequities in the company

$135M could have been paid to the actual victims who faced harassment and sexual assault.
https://twitter.com/xzzzxxzx/status/1105681517584572416

Google could have used the money to fix the wage and level gap for women of color within the company.
https://twitter.com/sparker2/status/1105511306465992705

$135 million could be used to adjust the 16% median pay gap of the 1,240 women working in Google's UK offices.
https://twitter.com/crschmidt/status/1105645484104998913

$135M could have been used by Google for TVC benefits. It could also be used to provide rigorous training to Google employees on the impact misinformation within the company can have on women and other marginalized groups.
https://twitter.com/EricaAmerica/status/1105546835526107136

For $135M, Google could have paid the 114 creators featured in its annual "YouTube Rewind", who are otherwise unpaid for their time and participation.
https://twitter.com/crschmidt/status/1105641872033230848

Improve communities by supporting social causes

Google could have paid $135M to RAINN, the largest American nonprofit anti-sexual assault organization, covering its expenses for the next 18 years.
https://twitter.com/GoogleWalkout/status/1105450565193121792

For funding 1,800 school psychologists for 1 year in public schools.
https://twitter.com/markfickett/status/1105640930936324097

To build real, affordable housing solutions in collaboration with London Breed, SFGOV, and other Bay Area officials.
https://twitter.com/jillianpuente/status/1105922474930245636

$135M could provide insulin for nearly 10,000 people with Type 1 diabetes in the US.
https://twitter.com/GoogleWalkout/status/1105585078590210051

To pay for the first year for 1,000 people with stage IV breast cancer.
https://twitter.com/GoogleWalkout/status/1105845951938347008

Be a responsible corporate citizen

To fund approximately 5,300 low-cost electric vehicles for Google staff, saving around 25,300 metric tons of carbon dioxide from vehicle emissions per year.
https://twitter.com/crschmidt/status/1105698893361233926

Providing free Google Fiber internet to 225,000 homes for a year.
https://twitter.com/markfickett/status/1105641215389773825

To give a $5/hr raise to 12,980 service workers at Silicon Valley tech campuses.
https://twitter.com/LAuerhahn/status/1105487572069801985

$135M could have been used for the construction of affordable homes, protecting 1,100 low-income families in San Jose from coming rent hikes around Google's planned mega-campus.
https://twitter.com/JRBinSV/status/1105478979543154688

#GooglePayoutsForAll: Another initiative to promote awareness of structural inequities in tech

The core idea behind launching #GooglePayoutsForAll on Twitter by the Google walkout group was to promote awareness among people regarding the real issues within the company. It urged people to discuss how Google is failing at maintaining the 'open culture' that it promises to the outside world. It also highlights how mottos such as "Don't be Evil" and "Do the right thing" that Google stood by only make for pretty wall decor, and that there's still a long way to go to see those ideals in action.

The group gained its name when more than 20,000 Google employees, along with vendors, contractors, and temps, organized the Google "walkout for real change" and walked out of their offices in November 2018. The walkout was a protest against the hushed and unfair handling of sexual misconduct within Google. Ever since then, Googlers have been consistently taking initiatives to bring more transparency, accountability, and fairness within the company. For instance, the team launched an industry-wide awareness campaign to fight against forced arbitration in January, where they shared information about arbitration on their Twitter and Instagram accounts throughout the day. The campaign was a success, as Google finally ended its forced arbitration policy, which goes into effect this month for all employees (including contractors, temps, and vendors) and for all kinds of discrimination. Also, last month, House and Senate members in the US proposed a bipartisan bill to prohibit companies from using forced arbitration clauses.

Although many found the #GooglePayoutsForAll idea praiseworthy, some believe this initiative doesn't put any real pressure on Google to bring about a real change within the company.
https://twitter.com/Jeffanie16/status/1105541489722081290
https://twitter.com/Jeffanie16/status/1105546783063752709
https://twitter.com/Jeffanie16/status/1105547341862457344

Now, we don't necessarily disagree with this opinion; however, the initiative can't be completely disregarded, as it managed to make people who'd otherwise hesitate to open up talk extensively regarding the real issues within the company. As Liz Fong-Jones puts it, "Strikes and walkouts are more sustainable long-term than letting Google drive each organizer out one by one. But yes, people *are* taking action in addition to speaking up. And speaking up is a bold step in companies where workers haven't spoken up before".

The Google Walkout group have not yet announced what they intend to do next following this digital protest. However, the group has been organizing meetups, such as the one earlier this month on March 6th, where it invited tech contract workers for a discussion about building solidarity to make work better for everyone. We are only seeing the beginning of a powerful worker movement take shape in Silicon Valley.

Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company
Liz Fong Jones, prominent ex-Googler shares her experience at Google and 'grave concerns' for the company
Google's pay equity analysis finds men, not women, are underpaid; critics call out design flaws in the analysis


React Native development tools: Expo, React Native CLI, CocoaPods [Tutorial]

Sugandha Lahoti
14 Mar 2019
10 min read
There are a large number of React Native development tools, with Expo, React Native CLI, and CocoaPods being among the more popular ones. As with any development tools, there is going to be a trade-off between flexibility and ease of use. I encourage you to start by using Expo for your React Native development workflow unless you're sure you'll need access to the native code.

This article is taken from the book React Native Cookbook, Second Edition by Dan Ward. In this book, you will improve your React Native mobile development skills or transition from web development to mobile development. In this article, we will learn about the various React Native development tools: Expo, React Native CLI, and CocoaPods. We will also learn how to set up Expo and the React Native CLI.

Expo

This was taken from the expo.io site:

"Expo is a free and open source toolchain built around React Native to help you build native iOS and Android projects using JavaScript and React."

Expo is becoming an ecosystem of its own, and is made up of five interconnected tools:

Expo CLI: The command-line interface for Expo. We'll be using the Expo CLI to create, build, and serve apps. A list of all the commands supported by the CLI can be found in the official documentation at the following link: https://docs.expo.io/versions/latest/workflow/expo-cli

Expo developer tools: This is a browser-based tool that automatically runs whenever an Expo app is started from the Terminal via the expo start command. It provides active logs for your in-development app, and quick access to running the app locally and sharing the app with other developers.

Expo Client: An app for Android and iOS. This app allows you to run your React Native project within the Expo app on the device, without the need for installing it. This allows developers to hot reload on a real device, or share development code with anyone else without the need for installing it.

Expo Snack: Hosted at https://snack.expo.io, this web app allows you to work on a React Native app in the browser, with a live preview of the code you're working on. If you've ever used CodePen or JSFiddle, Snack is the same concept applied to React Native applications.

Expo SDK: This is the SDK that houses a wonderful collection of JavaScript APIs that provide native functionality not found in the base React Native package, including working with the device's accelerometer, camera, notifications, geolocation, and many others. This SDK comes baked in with every new project created with Expo.

These tools together make up the Expo workflow. With the Expo CLI, you can create and build new applications with Expo SDK support baked in. The XDE/CLI also provides a simple way to serve your in-development app by automatically pushing your code to Amazon S3 and generating a URL for the project. From there, the CLI generates a QR code linked to the hosted code. Open the Expo Client app on your iPhone or Android device, scan the QR code, and BOOM there's your app, equipped with live/hot reload! And since the app is hosted on Amazon S3, you can even share the in-development app with other developers in real time.

React Native CLI

The original method for bootstrapping a new React Native app, provided by the React Native CLI, is the following command:

react-native init

You'll likely only be using this method of bootstrapping a new app if you're sure you'll need access to the native layer of the app.
In the React Native community, an app created with this method is said to be a pure React Native app, since all of the development and native code files are exposed to the developer. While this provides the most freedom, it also forces the developer to maintain the native code. If you're a JavaScript developer that's jumped onto the React Native bandwagon because you intend on writing native applications solely with JavaScript, having to maintain the native code in a React Native project is probably the biggest disadvantage of this method.

On the other hand, when working on an app that's been bootstrapped with react-native init, you'll have access to third-party plugins and get direct access to the native portion of the code base. You'll also be able to sidestep a few of the limitations in Expo currently, particularly the inability to use background audio or background GPS services.

CocoaPods

Once you begin working with apps that have components that use native code, you're going to be using CocoaPods in your development as well. CocoaPods is a dependency manager for Swift and Objective-C Cocoa projects. It works nearly the same as npm, but manages open source dependencies for native iOS code instead of JavaScript code.

We won't be using CocoaPods much in this book, but React Native makes use of CocoaPods for some of its iOS integration, so having a basic understanding of the manager can be helpful. Just as the package.json file houses all of the packages for a JavaScript project managed with npm, CocoaPods uses a Podfile for listing a project's iOS dependencies. Likewise, these dependencies can be installed using the command:

pod install

Ruby is required for CocoaPods to run. Run the following command at the command line to verify Ruby is already installed:

ruby -v

If not, it can be installed with Homebrew with the command:

brew install ruby

Once Ruby has been installed, CocoaPods can be installed via the command:

sudo gem install cocoapods

If you encounter any issues while installing, you can read the official CocoaPods Getting Started guide at https://guides.cocoapods.org/using/getting-started.html.

Planning your app and choosing your workflow

When trying to choose which development workflow best fits your app's needs, here are a few things you should consider:

Will I need access to the native portion of the code base?
Will I need any third-party packages in my app that are not supported by Expo?
Will my app need to play audio while it is not in the foreground?
Will my app need location services while it is not in the foreground?
Will I need push notification support?
Am I comfortable working, at least nominally, in Xcode and Android Studio?

In my experience, Expo usually serves as the best starting place. It provides a lot of benefits to the development process, and gives you an escape hatch in the eject process if your app grows beyond the original requirements. I would recommend only starting development with the React Native CLI if you're sure your app needs something that cannot be provided by an Expo app, or if you're sure you will need to work on the native code.

I also recommend browsing the Native Directory hosted at http://native.directory. This site has a very large catalog of the third-party packages available for React Native development. Each package listed on the site has an estimated stability, popularity, and links to documentation.
Arguably the best feature of the Native Directory, however, is the ability to filter packages by what kind of device/development they support, including iOS, Android, Expo, and web. This will help you narrow down your package choices and better indicate which workflow should be adopted for a given app.

React Native CLI setup

We'll begin with the React Native CLI setup of our app, which will create a new pure React Native app, giving us access to all of the native code, but also requiring that Xcode and Android Studio are installed.

First, we'll install all the dependencies needed for working with a pure React Native app, starting with the Homebrew (https://brew.sh/) package manager for macOS. As stated on the project's home page, Homebrew can be easily installed from the Terminal via the following command:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Once Homebrew is installed, it can be used to install the dependencies needed for React Native development: Node.js and Watchman. If you're a JavaScript developer, you've likely already got Node.js installed. You can check whether it's installed via the following command:

node -v

This command will list the version of Node.js that's installed, if any. Note that you will need Node.js version 8 or higher for React Native development. If Node.js is not already installed, you can install it with Homebrew via the following command:

brew install node

We also need Watchman, a file-watching service that React Native uses behind the scenes to enable features like live reload during development. Install Watchman with Homebrew via the following command:

brew install watchman

We'll also of course need the React Native CLI for running the commands that bootstrap the React Native app. This can be installed globally with npm via the following command:

npm install -g react-native-cli

With the CLI installed, all it takes to create a new pure React Native app is the following:

react-native init name-of-project

This will create a new project in a new name-of-project directory. This project has all native code exposed, and requires Xcode for running the iOS app and Android Studio for running the Android app.

Luckily, installing Xcode for supporting iOS React Native development is a simple process. The first step is to download Xcode from the App Store and install it. The second step is to install the Xcode command-line tools. To do this, open Xcode, choose Preferences... from the Xcode menu, open the Locations panel, and install the most recent version from the Command Line Tools dropdown:

Unfortunately, setting up Android Studio for supporting Android React Native development is not as cut and dried, and requires some very specific steps for installing it. Since this process is particularly involved, and since there is some likelihood that the process will have changed by the time you read this chapter, I recommend referring to the official documentation for in-depth, up-to-date instructions on installing all Android development dependencies. These instructions are hosted at the following URL: https://facebook.github.io/react-native/docs/getting-started.html#java-development-kit

Now that all dependencies have been installed, we're able to run our pure React Native project via the command line.
The iOS app can be executed via the following:

react-native run-ios

And the Android app can be started with this:

react-native run-android

Each of these commands should start up the associated emulator for the correct platform, install our new app, and run the app within the emulator. If you have any trouble with either of these commands not behaving as expected, you might be able to find an answer in the React Native troubleshooting docs, hosted here: https://facebook.github.io/react-native/docs/troubleshooting.html#content

Expo CLI setup

The Expo CLI can be installed using the Terminal with npm via the following command:

npm install -g expo-cli

The Expo CLI can be used to do all the great things the Expo GUI client can do. For all the commands that can be run with the CLI, check out the docs here: https://docs.expo.io/versions/latest/workflow/expo-cli

A tiny smoke-test component you can use to verify that either setup works end to end follows the reference links below.

If you liked this post, support the author by reading the book React Native Cookbook, Second Edition for enhancing your React Native mobile development skills.

React Native 0.59 is now out with React Hooks, updated JavaScriptCore, and more!
React Native community announce March updates, post sharing the roadmap for Q4
How to create a native mobile app with React Native [Tutorial]
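Once either toolchain is installed, it helps to have a trivial component on hand to confirm that the bundler, emulator, and reload cycle are all wired up. The App.js below is our own minimal sketch for that purpose, not the template that react-native init or Expo generates:

import React, { Component } from 'react';
import { StyleSheet, Text, View } from 'react-native';

// Minimal smoke-test component: if this renders in the emulator,
// the CLI, bundler, and native build are all working.
export default class App extends Component {
  render() {
    return (
      <View style={styles.container}>
        <Text style={styles.welcome}>The toolchain is up and running!</Text>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: { flex: 1, alignItems: 'center', justifyContent: 'center' },
  welcome: { fontSize: 18 },
});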


The seven deadly sins of web design

Guest Contributor
13 Mar 2019
7 min read
Just 30 days before the debut of "Captain Marvel," the latest cinematic offering by the successful and prolific Marvel Studios, a delightful and nostalgia-filled website was unveiled to promote the movie. Since the story of "Captain Marvel" is set in the 1990s, the brilliant minds at the marketing department of Marvel Studios decided to design a website with the right look and feel, which in this case meant using FrontPage and hosting on Angelfire.

The "Captain Marvel" promo website is filled with the typography, iconography, glitter, and crudely animated GIFs you would expect from a 1990s creation, including a guestbook, hidden easter eggs, flaming borders, a hit counter, and even headers made with Microsoft WordArt.

(Image courtesy of Marvel)

The site is delightful not just for the dead-on nostalgia trip it provides to visitors, but also because it is very well developed. This is a site with a lot to explore, and it is clearly evident that the website developers met client demands while at the same time thinking about users. This site may look and feel like it was made during the GeoCities era, but it does not make any of the following seven mistakes:

Sin #1: Non-Responsiveness

In 2019, it is simply inconceivable to think of a web development firm that neglects to make a responsive site. Since 2016, internet traffic flowing through mobile devices has been higher than the traffic originating from desktops and laptops. Current rates are about 53 percent smartphones and tablets versus 47 percent desktops, laptops, kiosks, and smart TVs. Failure to develop responsive websites means potentially alienating more than 50 percent of prospective visitors. As for the "Captain Marvel" website, it is amazingly responsive when considering that internet users in the 1990s barely dreamed about the day when they would be able to access the web from handheld devices (mobile phones were yet to be mass distributed back then).

Sin #2: Way too much Jargon

(Image courtesy of the Botanical Linguist)

Not all website developers have a good sense of readability, and this is something that often shows up when completed projects result in products that visitors struggle to comprehend. We're talking about jargon. There's a lot of it online, not only in the usual places like the privacy policy and terms of service sections but sometimes in content too. Regardless of how jargon creeps onto your website, it should be rooted out. The "Captain Marvel" website features legal notices written by The Walt Disney Company, and they are very reader-friendly with minimal jargon. The best way to handle jargon is to avoid it as much as possible unless the business developer has good reasons to include it.

Sin #3: A noticeable lack of content

No content means no message, and this is the reason 46 percent of visitors who land on B2B websites end up leaving without further exploration or interaction. Quality content that is relevant to the intention of a website is crucial in terms of establishing credibility, and this goes beyond B2B websites. In the case of "Captain Marvel," the amount of content is reduced to match the retro sensibility, but there are enough photos, film trailers, character bios, and games to keep visitors entertained. Modern website development firms that provide full-service solutions can either provide or advise clients on the content they need to get started. Furthermore, they can also offer lessons on how to operate content management systems.
Sin #4: Making essential information hard to find

There was a time when the "mystery meat navigation" issue of website development was thought to have been eradicated through the judicious application of recommended practices, but then mobile apps came around. Even technology giant Google fell victim to mystery meat navigation with its 2016 release of Material Design, which introduced bottom navigation bars intended to offer a more clarifying alternative to hamburger menus.

Unless there is a clever purpose for prompting visitors to click or tap on a button, link, or page element that does not explain next steps, mystery meat navigation should be avoided, particularly when it comes to essential information. When the 1990s "Captain Marvel" page loads, visitors can click or tap on labeled links to get information about the film, enjoy multimedia content, play games, interact with the guestbook, or get tickets. There is a mysterious old woman that pops up every now and then from the edges of the screen, but the reason behind this mysterious element is explained in the information section.

Sin #5: Website loads too slow

(Image courtesy of Horton Marketing Solutions)

There is an anachronism related to the "Captain Marvel" website that users who actually used Netscape in the 1990s will notice: all pages load very fast. This is one retro aspect that Marvel Studios decided to not include on this site, and it makes perfect sense. For a fast-loading site, a web design rule of thumb is to simplify, and this responsibility lies squarely with the developer. It stands to reason that the more "stuff" you have on a page (images, forms, videos, widgets, shiny things), the longer it takes the server to send over the site files and the longer it takes the browser to render them. Here are a few design best practices to keep in mind (a short, purely illustrative server snippet for point 2 appears at the end of this article):

1. Make the site light - get rid of non-essential elements, especially if they are bandwidth-sucking images or video.
2. Compress your pages - it's easy with Gzip.
3. Split long pages into several shorter ones.
4. Write clean code that doesn't rely on external sources.
5. Optimize images.

For more web design tips that help your site load in the sub-three second range, like Google expects in 2019, check out our article on current design trends.

Once you have design issues under control, investigate your web host. They aren't all created equal. Cheap, entry-level shared packages are notoriously slow and unpredictable, especially as your traffic increases. But even beyond that, the reality is that some companies spend money buying better, faster servers and don't overload them with too many clients. Some do. Recent testing from review site HostingCanada.org checked load times across the leading providers and found variances from a 'meh' 2,850 ms all the way down to a speedy 226 ms. With pricing amongst credible competitors roughly equal, web developers should know which hosts are the fastest and point clients in that direction.

Sin #6: Outdated information

Functional and accurate information will always triumph over form. The "Captain Marvel" website is garish to look at by 2019 standards, but all the information is current. The film's theater release date is clearly displayed, and should something happen that would require this date to change, you can be sure that Marvel Studios will fire up FrontPage to promptly make the adjustment.

Sin #7: No clear call to action

Every website should compel visitors to do something.
Even if the purpose is to provide information, the call-to-action or CTA should encourage visitors to remember it and return for updates. The CTA should be as clear as the navigation elements, otherwise the purpose of the visit is lost. Creating enticements is acceptable, but the CTA message should be explained nonetheless. In the case of "Captain Marvel," visitors can click on the "Get Tickets" link to be taken to a Fandango.com page with geolocation redirection for their region.

The Bottom Line

In the end, the seven mistakes listed herein are easy to avoid. Whenever developers run into clients whose instructions may result in one of these mistakes, proper explanations should be given.

Author Bio

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum foundation as well as an active Github contributor.

7 Web design trends and predictions for 2019
How to create a web designer resume that lands you a Job
Will Grant's 10 commandments for effective UX Design
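As a follow-up to Sin #5's "compress your pages" advice, here is a purely illustrative sketch of how small that change can be on the server side. It assumes a Node.js server built with Express and the widely used compression middleware, neither of which the article itself prescribes:

// Hypothetical Node.js/Express server showing one way to act on Sin #5:
// gzip-compress responses and cache static assets aggressively.
const express = require('express');
const compression = require('compression');

const app = express();

// Compress every response body that benefits from it (HTML, CSS, JS, JSON).
app.use(compression());

// Serve static assets with long-lived cache headers so repeat visits are fast.
app.use(express.static('public', { maxAge: '30d' }));

app.get('/', (req, res) => {
  res.send('<h1>Fast-loading page</h1>');
});

app.listen(3000, () => console.log('Listening on http://localhost:3000'));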

WWW turns 30: Tim Berners-Lee, its inventor, shares his plan to save the Web from its current dysfunctions

Bhagyashree R
13 Mar 2019
6 min read
The World Wide Web turned 30 years old yesterday. As a part of the celebration, its creator, Tim Berners-Lee, published an open letter on Monday sharing his vision for the future of the web. In this year's letter, he also expressed his concerns about the direction in which the web is heading and how we can make it the one he envisioned.

To celebrate #Web30, Tim Berners-Lee is on a 30-hour trip, and his first stop was the birthplace of the WWW, the European Organization for Nuclear Research, CERN.

https://twitter.com/timberners_lee/status/1105400740112203777

Back in 1989, Tim Berners-Lee, then a research fellow at CERN, wrote a proposal to his boss titled Information Management: A Proposal. This proposal was for building an information system that would allow researchers to share general information about accelerators and experiments. Initially, he named the project "The Mesh", which combined hypertext with the internet TCP and domain name system. The project did not go that well, but Berners-Lee's boss, Mike Sendall, did remark that the idea was "vague but exciting". Later on, in 1990, he actually started coding for the project, and this time he named it what we know today as the World Wide Web. Fast forward to now, and the simple, innocent system that he built has become enormous, connecting millions and millions of people across the globe.

If you are curious to know how the WWW looked back then, check out its revived version by a CERN team:

https://twitter.com/CERN/status/1105457772626358273

The three dysfunctions the Web is now facing

The World Wide Web has come a long way. It has opened up various opportunities, given voice to marginalized groups, and made our daily lives much more convenient and easier. At the same time, it has also given opportunities to scammers, provided a platform for hate speech, and made it extremely easy to commit crimes while sitting behind a computer screen. Berners-Lee listed three sources of problems that are affecting today's web and also suggested a few ways we can minimize or prevent them:

"Deliberate, malicious intent, such as state-sponsored hacking and attacks, criminal behavior, and online harassment." Though it is really not possible to completely eliminate this dysfunction, policymakers can come up with laws and developers can take the responsibility to write code that will help minimize this behavior.

"System design that creates perverse incentives where user value is sacrificed, such as ad-based revenue models that commercially reward clickbait and the viral spread of misinformation." These types of systems introduce the wrong kinds of rewards, encouraging others to sacrifice users' interests. To prevent this problem, developers need to rethink the incentives and accordingly redesign the systems so that they are not promoting these wrong behaviors.

"Unintended negative consequences of benevolent design, such as the outraged and polarised tone and quality of online discourse." These are systems that are created thoughtfully and with good intent but still result in negative outcomes. The problem is that it is really difficult to tell what all the outcomes of the system you are building will be. Berners-Lee, in an interview with The Guardian, said, "Given there are more web pages than there are neurons in your brain, it's a complicated thing. You build Reddit, and people on it behave in a particular way. For a while, they all behave in a very positive, constructive way.
And then you find a subreddit in which they behave in a nasty way." This problem could be eliminated by researching and understanding existing systems. Based on this research, we can then model possible new systems or enhance those we already have.

Contract for the Web

Berners-Lee further explained that we can't just put the blame on the government or a social network for all the loopholes and dysfunctions that are affecting the Web. He said, "You can't generalise. You can't say, you know, social networks tend to be bad, tend to be nasty." We need to find the root causes, and to do exactly that, we all need to come together as a global web community. "As the web reshapes, we are responsible for making sure that it is seen as a human right and is built for the public good", he wrote in the open letter.

To address these problems, Berners-Lee has a radical solution. Back in November last year at the Web Summit, he, with The Web Foundation, introduced the Contract for the Web. The contract aims to bring together governments, companies, and citizens who believe that there is a need for setting clear norms, laws, and standards that underpin the web. "Governments, companies, and citizens are all contributing, and we aim to have a result later this year," he shared. In theory, the contract defines people's online rights and lists the key principles and duties governments, companies, and citizens should follow. In Berners-Lee's mind, it will restore some degree of equilibrium and transparency to the digital realm.

The contract is part of a broader project that Berners-Lee believes is essential if we are to 'save' the web from its current problems. First, we need to create an open web for the users who are already connected to the web and give them the power to fix the issues that we have with the existing web. Secondly, we need to bring online the other half of the world, which is not yet connected to the web.

Many people agree with the points Berners-Lee discussed in the open letter. Here is what some Twitter users are saying:

https://twitter.com/girlygeekdom/status/1105375206829256704
https://twitter.com/solutionpoint/status/1105366111678279681

The Contract for the Web, as Berners-Lee says, is about "going back to the values". His idea of bringing together governments, companies, and citizens to make the Web safer and accessible to everyone looks pretty solid. Read the full open letter by Tim Berners-Lee on the Web Foundation's website.

Web Summit 2018: day 2 highlights
Tim Berners-Lee is on a mission to save the web he invented
UN on Web Summit 2018: How we can create a safe and beneficial digital future for all


Building a Progressive Web Application with Create React App 2 [Tutorial]

Bhagyashree R
13 Mar 2019
12 min read
The beauty of building a modern web application is being able to take advantage of functionalities such as a Progressive Web App (PWA)! But they can be a little complicated to work with. As always, the Create React App tool makes a lot of this easier for us, but it does carry some significant caveats that we'll need to think about.

This article is taken from the book Create React App 2 Quick Start Guide by Brandon Richey. This book is intended for those that want to get intimately familiar with the Create React App tool. It covers all the commands in Create React App and all of the new additions in version 2. To follow along with the examples implemented in this article, you can download the code from the book's GitHub repository.

In this article, we will learn what exactly PWAs are and how we can configure our Create React App project into a custom PWA. We will also explore service workers, their life cycle, and how to use them with Create React App.

Understanding and building PWAs

Let's talk a little bit about what a PWA is because there is, unfortunately, a lot of misinformation and confusion about precisely what a PWA does! In very simple words, it's simply a website that does the following:

Only uses HTTPS
Adds a JSON manifest (a web app manifest) file
Has a Service Worker

A PWA, for us, is a React application that would be installable/runnable on a mobile device or desktop. Essentially, it's just your app, but with capabilities that make it a little more advanced, a little more effective, and a little more resilient to poor/no internet. A PWA accomplishes these via a few tenets, tricks, and requirements that we'd want to follow:

The app must be usable by mobile and desktop users alike
The app must operate over HTTPS
The app must implement a web app JSON manifest file
The app must implement a service worker

Now, the first one is a design question. Did you make your design responsive? If so, congratulations, you built the first step toward having a PWA! The next one is also more of an implementation question that's maybe not as relevant to us here: when you deploy your app to production, did you make it HTTPS only? I hope the answer to this is yes, of course, but it's still a good question to ask! The next two, though, are things we can do as part of our Create React App project, and we'll make those the focus of this article.

Building a PWA in Create React App

Okay, so we identified the two items that we need to build to make this all happen: the JSON manifest file and the service worker! Easy, right? Actually, it's even easier than that. You see, Create React App will populate a JSON manifest file for us as part of our project creation by default. That means we have already completed this step! Let's celebrate, go home, and kick off our shoes, because we're all done now, right?

Well, sort of. We should take a look at that default manifest file because it's very unlikely that we want our fancy TodoList project to be called "Create React App Sample". Let's take a look at the manifest file, located in public/manifest.json:

{
  "short_name": "React App",
  "name": "Create React App Sample",
  "icons": [
    {
      "src": "favicon.ico",
      "sizes": "64x64 32x32 24x24 16x16",
      "type": "image/x-icon"
    }
  ],
  "start_url": ".",
  "display": "standalone",
  "theme_color": "#000000",
  "background_color": "#ffffff"
}

Some of these keys are pretty self-explanatory, or at least have a little bit of information that you can infer from them as to what they accomplish. Some of the other keys, though, might be a little stranger.
For example, what does "start_url" mean? What are the different options we can pick for display? What's a "theme_color" or "background_color"? Aren't those just decided by the CSS of our application? Not really. Let's dive deeper into this world of JSON manifest files and turn it into something more useful!

Viewing our manifest file in action with Chrome

First, to be able to test this, we should have something where we can verify the results of our changes. We'll start off with Chrome, where if you go into the Developer tools section, you can navigate to the Application tab and be brought right to the Service Workers section! Let's take a look at what it all looks like for our application:

Exploring the manifest file options

Having a manifest file with no explanation of what the different keys and options mean is not very helpful. So, let's learn about each of them, the different configuration options available to us, and some of the possible values we could use for each.

name and short_name

The first key we have is short_name. This is a shorter version of the name that might be displayed when, for example, the title can only display a smaller bit of text than the full app or site name. The counterpart to this is name, which is the full name of your application. For example:

{
  "short_name": "Todos",
  "name": "Best Todoifier"
}

icons

Next is the icons key, which is a list of sub-objects, each of which has three keys. This contains a list of icons that the PWA should use, whether it's for displaying on someone's desktop, someone's phone home screen, or something else. Each "icon" object should contain an "src", which is a link to the image file that will be your icon. Next, you have the "type" key, which should tell the PWA what type of image file you're working with. Finally, we have the "sizes" key, which tells the PWA the size of the icon. For best results, you should have at least a "512x512" and a "192x192" icon.

start_url

The start_url key is used to tell the application at what point it should start in your application in relation to your server. While we're not using it for anything as we have a single-page, no-route app, that might be different in a much larger application, so you might just want the start_url key to be something indicating where you want users to start off from. Another option would be to add a query string on to the end of the URL, such as a tracking link. An example of that would be something like this:

{
  "start_url": "/?source=AB12C"
}

background_color

This is the color used when a splash screen is displayed when the application is first launched. This is similar to when you launch an application from your phone for the first time; that little page that pops up temporarily while the app loads is the splash screen, and background_color would be the background of that. This can either be a color name like you'd use in CSS, or it can be a hex value for a color.

display

The display key affects the browser's UI when the application is launched. There are ways to make the application full-screen, to hide some of the UI elements, and so on. Here are the possible options, with their explanations:

Value        Description
browser      A normal web browser experience.
fullscreen   No browser UI, and takes up the entire display.
standalone   Makes the web app look like a native application. It will run in its own window and hides a lot of the browser UI to make it look and feel more native.

orientation

If you want to run your application in landscape orientation, you would specify it here.
Otherwise, you would leave this option out of your manifest entirely. Specifying it looks like this:

{
  "orientation": "landscape"
}

scope

Scope helps to determine where the PWA in your site lies and where it doesn't. This prevents your PWA from trying to load things outside of where your PWA runs. start_url must be located inside your scope for it to work properly! This is optional, and in our case, we'll be leaving it out.

theme_color

This sets the color of the toolbar, again to make it feel and look a little more native. If we specify a meta theme color, we'd set this to be the same as that specification. Much like background_color, this can either be a color name, as you'd use in CSS, or it can be a hex value for a color.

Customizing our manifest file

Now that we're experts on manifest files, let's customize our manifest file! We're going to change a few things here and there, but we won't make any major changes. Let's take a look at how we've set up the manifest file in public/manifest.json:

{
  "short_name": "Todos",
  "name": "Best Todoifier",
  "icons": [
    {
      "src": "favicon.ico",
      "sizes": "64x64 32x32 24x24 16x16",
      "type": "image/x-icon"
    }
  ],
  "start_url": "/",
  "display": "standalone",
  "theme_color": "#343a40",
  "background_color": "#a5a5f5"
}

So we've set our short_name and name keys to match the actual application. We've left the icons key alone completely since we don't really need to do much of anything with that anyway. Next, we've changed start_url to just be "/", since we're working under the assumption that this application is the only thing running on its domain.

We've set the display key to standalone since we want our application to have the ability to be added to someone's home screen and be recognized as a true PWA. Finally, we set the theme color to #343a40, which matches the color of the nav bar and will give a more seamless look and feel to the PWA. We also set the background_color key, which is for our splash screen, to #a5a5f5, which is the color of our normal Todo items!

If you think back to the explanation of keys, you'll remember we also need to change our meta theme tag in our public/index.html file, so we'll open that up and quickly make that change:

<meta name="theme-color" content="#343a40" />

And that's it! Our manifest file has been customized! If we did it all correctly, we should be able to verify the changes again in our Chrome Developer tools:

Hooking up service workers

A service worker is a script that your browser runs behind the scenes, separate from the main browser thread. It can intercept network requests, interact with a cache (either storing or retrieving information from a cache), or listen to and deliver push messages.

The service worker life cycle

The life cycle for a service worker is pretty simple. There are three main stages:

Registration
Installation
Activation

Registration is the process of letting the browser know where the service worker is located and how to install it into the background. The code for registration may look something like this:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/service-worker.js')
    .then(registration => {
      console.log('Service Worker registered!');
    })
    .catch(error => {
      console.log('Error registering service worker! Error is:', error);
    });
}

Installation is the process that happens after the service worker has been registered, and it only happens if the service worker either hasn't already been installed, or the service worker has changed since the last time.
In a service-worker.js file, you'd add something like this to be able to listen to this event:

self.addEventListener('install', event => {
  // Do something after install
});

Finally, Activation is the step that happens after all of the other steps have completed. The service worker has been registered and then installed, so now it's time for the service worker to start doing its thing:

self.addEventListener('activate', event => {
  // Do something upon activation
});

How can we use a service worker in our app?

So, how do we use a service worker in our application? Well, it's simple to do with Create React App, but there is a major caveat: you can't configure the service-worker.js file generated by Create React App by default without ejecting your project! Not all is lost, however; you can still take advantage of some of the highlights of PWAs and service workers by using the default Create React App-generated service worker (a sketch of what a hand-rolled worker might look like, should you ever eject, follows the reference links below).

To enable this, hop over into src/index.js, and, at the final line, change the service worker unregister() call to register() instead:

serviceWorker.register();

And now we're opting into our service worker! Next, to actually see the results, you'll need to run the following:

$ yarn build

This will create a production build, and you'll see some output that we'll want to follow as part of this:

The build folder is ready to be deployed.
You may serve it with a static server:

yarn global add serve
serve -s build

As per the instructions, we'll install serve globally, and run the command as instructed:

$ serve -s build

We will get the following output:

Now open up http://localhost:5000 in your local browser and you'll be able to see, again in the Chrome Developer tools, the service worker up and running for your application:

Hopefully, we've explored at least enough of PWAs that they have been partially demystified! A lot of the confusion and trouble with building PWAs tends to stem from the fact that there's not always a good starting point for building one. Create React App limits us a little bit in how we can implement service workers, which admittedly limits the functionality and usefulness of our PWA. It doesn't hamstring us by any means, but it does keep us from doing fun tricks such as pre-caching network and API responses, or loading up our application instantly even if the browser doing the loading is offline in the first place. That being said, it's like many other things in Create React App: an amazing stepping stone and a great way to get moving with PWAs in the future!

If you found this post useful, do check out the book, Create React App 2 Quick Start Guide. In addition to getting familiar with Create React App 2, you will also build modern React projects with SASS and progressive web applications.

ReactOS 0.4.11 is now out with kernel improvements, manifests support, and more!
React Native community announce March updates, post sharing the roadmap for Q4
React Native Vs Ionic: Which one is the better mobile app development framework?
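As mentioned above, if you ever eject (or wire a custom worker into another build setup), a hand-rolled service-worker.js can pre-cache an app shell and serve it cache-first. The sketch below is our own illustration of that idea using the standard Cache API; the cache name and file list are hypothetical, and this is not the worker that Create React App generates:

// Hypothetical hand-rolled service-worker.js: pre-caches a small app shell
// at install time and serves it cache-first afterwards.
const CACHE_NAME = 'todoifier-shell-v1';
const SHELL_FILES = ['/', '/index.html', '/manifest.json', '/favicon.ico'];

self.addEventListener('install', event => {
  // Wait until the shell files are stored before considering install done.
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache => cache.addAll(SHELL_FILES))
  );
});

self.addEventListener('activate', event => {
  // Drop caches left over from older versions of the worker.
  event.waitUntil(
    caches.keys().then(keys =>
      Promise.all(keys.filter(key => key !== CACHE_NAME).map(key => caches.delete(key)))
    )
  );
});

self.addEventListener('fetch', event => {
  // Cache-first: answer from the cache when possible, fall back to the network.
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});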