
How-To Tutorials


18 people in tech every programmer and software engineer needs to follow in 2019

Richard Gall
02 Jan 2019
9 min read
After a tumultuous 2018 in tech, it's vital that you surround yourself with a variety of opinions and experiences in 2019 if you're to understand what the hell is going on. While there are thousands of incredible people working in tech, I've decided to make life a little easier for you by bringing together 18 of the best people from across the industry to follow on Twitter. From engineers at Microsoft and AWS, to researchers and journalists, this list is by no means comprehensive, but it does give you a wide range of people who have been influential, interesting, and important in 2018.

(A few of) the best people in tech on Twitter

April Wensel (@aprilwensel)

April Wensel is the founder of Compassionate Coding, an organization that aims to bring emotional intelligence and ethics into the tech industry. In April 2018, Wensel wrote an essay arguing that "it's time to retire RTFM" (read the fucking manual). The essay was well received by many in the tech community tired of a culture of caustic machismo, and it played a part in making conversations around community accessibility an important part of 2018. Watch her keynote at NodeJS Interactive: https://www.youtube.com/watch?v=HPFuHS6aPhw

Liz Fong-Jones (@lizthegrey)

Liz Fong-Jones is an SRE and Dev Advocate at Google Cloud Platform, but over the last couple of years she has become an important figure within tech activism. First helping to create the NeverAgain pledge in response to the election of Donald Trump in 2016, then helping to bring to light Google's fraught internal struggle over diversity, Fong-Jones has effectively laid the foundations for the mainstream appearance of tech activism in 2018. In an interview with Fast Company, Fong-Jones said she has accepted her role as a spokesperson for the movement that has emerged, but she's committed to "equipping other employees to fight for change in their workplaces - whether at Google or not - so that I'm not a single point of failure."

Ana Medina (@Ana_M_Medina)

Ana Medina is a chaos engineer at Gremlin. Since moving to the chaos engineering platform from Uber (where she was part of the class action lawsuit against the company), Medina has played an important part in explaining what chaos engineering looks like in practice all around the world. She is also an important voice in discussions around diversity and mental health in the tech industry - if you get a chance to hear her talk, make sure you take it, and if you don't, you've still got Twitter...

Sarah Drasner (@sarah_edo)

Sarah Drasner does everything. She's a Developer Advocate at Microsoft, part of the VueJS core development team, the organizer behind Concatenate (a free conference for Nigerian developers), and an author too. https://twitter.com/sarah_edo/status/1079400115196944384 Although Drasner specializes in front end development and JavaScript, she's a great person to follow on Twitter for her broad insights on how we learn and evolve as software developers. Do yourself a favour and follow her.

Mark Imbriaco (@markimbriaco)

Mark Imbriaco is the technical director at Epic Games. Given the company's truly, er, epic year thanks to Fortnite, Imbriaco can offer an insight into how one of the most important and influential technology companies on the planet is thinking.

Corey Quinn (@QuinnyPig)

Corey Quinn is an AWS expert.
As the brain behind the Last Week in AWS newsletter and the voice behind the Screaming in the Cloud podcast (possibly the best cloud computing podcast on the planet), he is without a doubt the go-to person if you want to know what really matters in cloud. The range of guests that Quinn gets on the podcast is really impressive, and sums up his online persona: open, engaged, and always interesting.

Yasmine Evjen (@YasmineEvjen)

Yasmine Evjen is a Design Advocate at Google. That means she is not only one of the minds behind Material Design, she is also someone who is helping to demonstrate the importance of human centered design around the world. She presents Centered, a web series by the Google Design team about the ways human centered design is used for a range of applications. If you haven't seen it, it's well worth a watch. https://www.youtube.com/watch?v=cPBXjtpGuSA&list=PLJ21zHI2TNh-pgTlTpaW9kbnqAAVJgB0R&index=5&t=0s

Suz Hinton (@noopkat)

Suz Hinton works on IoT programs at Microsoft. That's interesting in itself, but when she's not developing fully connected smart homes (possibly), Hinton also streams code tutorials on Twitch (also as noopkat).

Chris Short (@ChrisShort)

If you want to get the lowdown on all things DevOps, you could do a lot worse than Chris Short. He boasts outstanding credentials - he's a CNCF ambassador with experience at Red Hat and Ansible - but more important is the quality of his insights. A great place to begin is with DevOpsish, a newsletter Short produces, which features some really valuable discussions on the biggest issues and talking points in the field.

Dan Abramov (@dan_abramov)

Dan Abramov is one of the key figures behind ReactJS. Along with @sophiebits, @jordwalke, and @sebmarkbage, Abramov is quite literally helping to define front end development as we know it. If you're a JavaScript developer, or simply have any kind of passing interest in how we'll be building front ends over the next decade, he is an essential voice to have on your timeline. As you'd expect from someone who has helped put together one of the most popular JavaScript libraries in the world, Dan is very good at articulating some of the biggest challenges we face as developers, and he can provide useful insights on how to approach problems you might face, whether day to day or career changing.

Emma Wedekind (@EmmaWedekind)

As well as working at GoTo Meeting, Emma Wedekind is the founder of Coding Coach, a platform that connects developers to mentors to help them develop new skills. This experience makes Wedekind an important authority on developer learning. And at a time when deciding what to learn and how to do it can feel like such a challenging and complex process, surrounding yourself with people taking those issues seriously can be immensely valuable.

Jason Lengstorf (@jlengstorf)

Jason Lengstorf is a Developer Advocate at GatsbyJS (a cool project that makes it easier to build projects with React). His writing - on Twitter and elsewhere - is incredibly good at helping you discover new ways of working and approaching problems.

Bridget Kromhout (@bridgetkromhout)

Bridget Kromhout is another essential voice in cloud and DevOps. Currently working at Microsoft as Principal Cloud Advocate, Bridget also organizes DevOps Days and presents the Arrested DevOps podcast with Matty Stratton and Trevor Hess. Follow Bridget for her perspective on DevOps, as well as her experience in DevRel.
Ryan Burgess (@burgessdryan)

Netflix hasn't faced the scrutiny of many of its fellow tech giants this year, which means it's easy to forget the extent to which the company is at the cutting edge of technological innovation. This is why it's well worth following Ryan Burgess - as an engineering manager he's well placed to provide an insight into how the company is evolving from a tech perspective. His talk at Real World React on A/B testing user experiences is well worth watching: https://youtu.be/TmhJN6rdm28

Anil Dash (@anildash)

Okay, so chances are you probably already follow Anil Dash - he does have half a million followers already, after all - but if you don't follow him, you most definitely should. Dash is a key figure in new media and digital culture, but he's not just another thought leader; he's someone who understands what it takes to actually build this stuff. As CEO of Glitch, a platform for building (and 'remixing') cool apps, he's having an impact on the way developers work and collaborate. Six years ago, Dash wrote an essay called 'The Web We Lost'. In it, he laments how the web was becoming colonized by a handful of companies who built the key platforms on which we communicate and engage with one another online. Today, after a year of protest and controversy, Dash's argument is as salient as ever - it's one of the reasons it's vital that we listen to him.

Jessie Frazelle (@jessfraz)

Jessie Frazelle is a bit of a superstar. Which shouldn't really be that surprising - she's someone who seems to have a natural ability to pull things apart and put them back together again, and have the most fun imaginable while doing it. Formerly part of the core Docker team, Frazelle now works at GitHub, where her knowledge and expertise are helping to develop the next Microsoft-tinged chapter in GitHub's history. I was lucky enough to see Jessie speak at ChaosConf in September - check out her talk: https://youtu.be/1hhVS4pdrrk

Rachel Coldicutt (@rachelcoldicutt)

Rachel Coldicutt is the CEO of Doteveryone, a think tank based in the U.K. that champions responsible tech. If you're interested in how technology interacts with other aspects of society and culture, as well as how it is impacting and being impacted by policymakers, Coldicutt is a vital person to follow.

Kelsey Hightower (@kelseyhightower)

Kelsey Hightower is another superstar in the tech world - when he talks, you need to listen. Hightower currently works at Google Cloud, but he spends a lot of time at conferences evangelizing for more effective cloud native development. https://twitter.com/mattrickard/status/1073285888191258624 If you're interested in anything infrastructure or cloud related, you need to follow Kelsey Hightower.

Who did I miss?

That's just a list of a few people in tech I think you should follow in 2019 - but who did I miss? Which accounts are essential? What podcasts and newsletters should we subscribe to?


“All of my engineering teams have a machine learning feature on their roadmap” - Will Ballard talks artificial intelligence in 2019 [Interview]

Packt Editorial Staff
02 Jan 2019
3 min read
The huge advancements in deep learning and artificial intelligence were perhaps the biggest story in tech in 2018. But we wanted to know what the future might hold - luckily, we were able to speak to Packt author Will Ballard about what he sees in store for artificial intelligence in 2019 and beyond.

Will Ballard is the chief technology officer at GLG, responsible for engineering and IT. He was also responsible for the design and operation of large data centers that helped run site services for customers including Gannett, Hearst Magazines, NFL, NPR, The Washington Post, and Whole Foods. He has held leadership roles in software development at NetSolve (now Cisco), NetSpend, and Works (now Bank of America). Explore Will Ballard's Packt titles here.

Packt: What do you think the biggest development in deep learning / AI was in 2018?

Will Ballard: I think attention models beginning to take the place of recurrent networks is a pretty impressive breakout on the algorithm side.

In Packt's 2018 Skill Up survey, developers across disciplines and job roles identified machine learning as the thing they were most likely to be learning in the coming year. What do you think of that result? Do you think machine learning is becoming a mandatory multidiscipline skill, and why?

Almost all of my engineering teams have an active, or a planned, machine learning feature on their roadmap. We've been able to get all kinds of engineers with different backgrounds to use machine learning -- it really is just another way to make functions -- probabilistic functions -- but functions.

What do you think the most important new deep learning/AI technique to learn in 2019 will be, and why?

In 2019 -- I think it is going to be all about PyTorch and TensorFlow 2.0, and learning how to host these on cloud PaaS.

The benefits of automated machine learning and metalearning

How important do you think automated machine learning and metalearning will be to the practice of developing AI/machine learning in 2019? What benefits do you think they will bring?

Even 'simple' automation techniques like grid search and running multiple different algorithms on the same data are big wins when mastered. There is almost no telling which model is 'right' till you try it, so why not let a cloud of computers iterate through scores of algorithms and models to give you the best available answer?

Artificial intelligence and ethics

Do you think ethical considerations will become more relevant to developing AI/machine learning algorithms going forwards? If yes, how do you think this will be implemented?

I think the ethical issues are important on outcomes, and on how models are used, but aren't the place of algorithms themselves.

If a developer was looking to start working with machine learning/AI, what tools and software would you suggest they learn in 2019?

Python and PyTorch.
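Ballard's point about 'simple' automation such as grid search is easy to make concrete. The short sketch below is purely illustrative and not something Ballard describes in the interview: the generated dataset, the choice of RandomForestClassifier, and the parameter values are all assumptions. It simply shows scikit-learn trying every combination in a small grid and reporting the best one.

# Hypothetical grid-search sketch with scikit-learn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic data standing in for a real problem
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Hypothetical parameter grid to sweep
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10],
}

# Try every combination with 5-fold cross-validation, in parallel
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, n_jobs=-1)
search.fit(X, y)

print(search.best_params_, search.best_score_)

The same pattern extends to "running multiple different algorithms on the same data": wrap several estimators in the same loop and keep whichever cross-validates best.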


Introduction to Open Shortest Path First (OSPF) [Tutorial]

Amrata Joshi
02 Jan 2019
14 min read
The OSPF interior routing protocol is a very popular protocol in enterprise networks. OSPF does a very good job of calculating cost values to choose the shortest path to its destinations. OSPF operations can be separated into three categories: neighbor and adjacency initialization, LSA flooding, and SPF tree calculation. This article is an excerpt taken from the book CCNA Routing and Switching 200-125 Certification Guide by Lazaro (Laz) Diaz. This book covers the understanding of networking using routers and switches, layer 2 technology and its various configurations and connections, VLANs and inter-VLAN routing, and more. In this article, we will cover the basics of OSPF, its features and configuration, and much more.

Neighbor and adjacency initialization

This is the very first part of OSPF operations. The router at this point will allocate memory for this function, as well as for the maintenance of both the neighbor and topology tables. Once the router discovers which interfaces are configured with OSPF, it will begin sending hello packets out of those interfaces in the hope of finding other routers using OSPF. Let's look at a visual representation: Remember, this would be considered a broadcast network between the routers, so an election needs to run to choose the DR and BDR.

00:03:06: OSPF: DR/BDR election on FastEthernet0/0
00:03:06: OSPF: Elect BDR 10.1.1.5
00:03:06: OSPF: Elect DR 10.1.1.6
00:03:06: OSPF: Elect BDR 10.1.1.5
00:03:06: OSPF: Elect DR 10.1.1.6
00:03:06: DR: 10.1.1.6 (Id) BDR: 10.1.1.5 (Id)

One thing to keep in mind is that if you are using Ethernet, as we are, the hello packet timer is set to 10 seconds. On non-broadcast network types such as NBMA, the hello timer is set to 30 seconds. Why is this so important to know? Because the hello timer must be identical on adjacent routers or they will never become neighbors.

Link State Advertisements and flooding

Before we begin with LSA flooding and how it uses LSUs to create the OSPF routing table, let's elaborate on this term. There is not just one type of LSA either. Let's have a look at the following table: By no means are these the only LSAs that exist. There are 11 LSA types, but for the CCNA, you must know about the ones that I highlighted; do not dismiss the rest. LSA updates are sent to multicast addresses, and which address is used depends on the type of network topology you have. On point-to-point networks, updates are sent to 224.0.0.5. In a broadcast environment, where a DR and BDR are elected, routers send their updates to the DR and BDR on 224.0.0.6, and the DR floods them to all other OSPF routers on 224.0.0.5. In any case, remember that these two multicast addresses are used within OSPF. The network topology is created via LSA updates, for which the information is acquired through LSUs, or link state updates. So, OSPF routers, after they have converged, keep sending hellos to maintain their adjacencies. If any new change happens, it is the job of the LSU to update the LSAs of the routers in order to keep routing tables current.

Configuring the basics of OSPF

You have already had a sneak peek into the configuration of OSPF, but let's take it back to the basics. The following diagram shows the topology: Yes, this is the basic topology, but we will do a dual stack, shown as follows: Configuration of R1: Configuration of R2: Configuration of R3: So, what did we do?
We put the IP addresses on each interface, and since we are using serial cables, on the DCE side of the cable we must use the clock rate command to set the clock rate for synchronization and encapsulation. Then we configured OSPF with a basic configuration, which means that all we did was advertise the networks we are attached to, using a process ID number, which is local to the router. We advertise the complete network ID together with a wildcard mask, and since this is the first area, we must use area 0. We can verify in several ways: use the ping command, sh ip protocols, or sh ip route. Let's look at how this would look. Verifying from R1, you will get the following: There are three simple commands that we could use to verify that our configuration of OSPF is correct.

One thing you need to know very well is wildcard masking, so let me show you a couple of examples. Before we begin, let me present a very simple way of doing wildcard masking: all you must do is take the constant number 255.255.255.255 and subtract your subnet mask from it. So, as you can plainly see, your subnet mask determines the wildcard mask. The network ID may look the same, but you will have three different wildcard masks. That would be a lot of different hosts pointing to a specific interface. Finally, let's look at another example, which is a subnetted Class A address: It's extremely simple, with no physics needed.

So, that was a basic configuration of OSPF, but you can configure OSPF in many ways. I just explained wildcard masking, but remember that zeros need to match exactly, so what can you tell me about the following configuration, using a different topology?

R1(config)#router ospf 1
R1(config-router)#net 0.0.0.0 0.0.0.0 area 0

R2(config)#router ospf 2
R2(config-router)#net 10.1.1.6 0.0.0.0 area 0
R2(config-router)#net 10.1.1.9 0.0.0.0 area 0
R2(config-router)#net 2.2.2.2 0.0.0.0 area 0

R3(config)#router ospf 3
R3(config-router)#net 10.1.1.0 0.0.0.255 area 0
R3(config-router)#net 3.3.3.0 0.0.0.255 area 0

We configured OSPF in three different ways, so let's explain each one. In this new topology, we are playing around with the wildcard mask. You can see in the first configuration that when we create the network statement, we use all zeros, 0.0.0.0 0.0.0.0, and then we put in the area number. Using all zeros means matching all interfaces, so any IP address that exists on the router will be matched by OSPF, placed in area 0, and advertised to the neighbor routers. In the second example, when we create our network statement, we put the actual IP address of the interface and then use a wildcard mask of all zeros, 192.168.1.254 0.0.0.0. In this case, OSPF will know exactly which interface is going to participate in the OSPF process, because we are matching each octet exactly. In the last example, the network statement was created using the network ID, and we only matched the first three octets; we used 255 on the last octet, which matches any value in that octet. So, OSPF has tremendous flexibility in its configurations to meet your needs on the network. You just need to know what those needs are. By the way, I hope you spotted that I used a different process ID number on each router. Keep in mind for the CCNA, and even most "real-world" networks, that the process ID number is only locally significant. The other routers do not care, so this number can be whatever you want it to be.
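If you want to sanity-check the subtraction trick described above, the few lines of Python below compute a wildcard mask from a subnet mask, octet by octet. This is only an illustrative sketch to go with that arithmetic; the example masks are assumptions and are not taken from the topology in this excerpt.

def wildcard_mask(subnet_mask):
    # Subtract each octet of the subnet mask from 255
    return ".".join(str(255 - int(octet)) for octet in subnet_mask.split("."))

print(wildcard_mask("255.255.255.0"))    # 0.0.0.255
print(wildcard_mask("255.255.255.252"))  # 0.0.0.3
print(wildcard_mask("255.240.0.0"))      # 0.15.255.255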
To further prove that the three new ways of configuring OSPF work, here are the routers' output: R1#sh ip route Gateway of last resort is not set 1.0.0.0/32 is subnetted, 1 subnets C 1.1.1.1 is directly connected, Loopback1 2.0.0.0/32 is subnetted, 1 subnets O 2.2.2.2 [110/2] via 10.1.1.6, 18:41:09, FastEthernet0/0 3.0.0.0/32 is subnetted, 1 subnets O 3.3.3.3 [110/3] via 10.1.1.6, 18:41:09, FastEthernet0/0 10.0.0.0/30 is subnetted, 2 subnets O 10.1.1.8 [110/2] via 10.1.1.6, 18:41:09, FastEthernet0/0 C 10.1.1.4 is directly connected, FastEthernet0/0 R1#sh ip protocols Routing Protocol is "ospf 1" Outgoing update filter list for all interfaces is not set Incoming update filter list for all interfaces is not set Router ID 1.1.1.1 Number of areas in this router is 1. 1 normal 0 stub 0 nssa Maximum path: 4 Routing for Networks: 0.0.0.0 255.255.255.255 area 0 Reference bandwidth unit is 100 mbps Routing Information Sources: Gateway Distance Last Update 3.3.3.3 110 18:41:42 2.2.2.2 110 18:41:42 Distance: (default is 110) R1#ping 2.2.2.2 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds: !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 16/20/24 ms R1#ping 3.3.3.3 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 3.3.3.3, timeout is 2 seconds: !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 36/52/72 ms As you can see, I have full connectivity and by looking at my routing table, I am learning about all the routes. But I want to show the differences in the configuration of the network statements for the three routers using the sh ip protocols command: R2#sh ip protocols Routing Protocol is "ospf 2" Outgoing update filter list for all interfaces is not set Incoming update filter list for all interfaces is not set Router ID 2.2.2.2 Number of areas in this router is 1. 1 normal 0 stub 0 nssa Maximum path: 4 Routing for Networks: 2.2.2.2 0.0.0.0 area 0 10.1.1.6 0.0.0.0 area 0 10.1.1.9 0.0.0.0 area 0 Reference bandwidth unit is 100 mbps Routing Information Sources: Gateway Distance Last Update 3.3.3.3 110 18:31:18 1.1.1.1 110 18:31:18 Distance: (default is 110) R3#sh ip protocols Routing Protocol is "ospf 3" Outgoing update filter list for all interfaces is not set Incoming update filter list for all interfaces is not set Router ID 3.3.3.3 Number of areas in this router is 1. 1 normal 0 stub 0 nssa Maximum path: 4 Routing for Networks: 3.3.3.0 0.0.0.255 area 0 10.1.1.0 0.0.0.255 area 0 Reference bandwidth unit is 100 mbps Routing Information Sources: Gateway Distance Last Update 2.2.2.2 110 18:47:13 1.1.1.1 110 18:47:13 Distance: (default is 110) To look at other features that OSPF uses, we are going to explore the passive-interface command. This is very useful in preventing updates being sent out. But be warned, this command works differently with other routing protocols. For example, if you were to configure it on EIGRP, it will not send or receive updates. In OSPF, it simply prevents updates from being sent out, but will receive updates for neighbor routers. It will not update its routing table, so essentially that interface is down. Let's look from the perspective of R2: R2(config-router)#passive-interface f1/0 *Oct 3 04:47:01.763: %OSPF-5-ADJCHG: Process 2, Nbr 1.1.1.1 on FastEthernet1/0 from FULL to DOWN, Neighbor Down: Interface down or detached Almost immediately, it took the F1/0 interface down. What's happening is that the router is not sending any hellos. 
Let's further investigate by using the debug ip ospf hello command: R2#debug ip ospf hello OSPF hello events debugging is on R2# *Oct 3 04:49:40.319: OSPF: Rcv hello from 3.3.3.3 area 0 from FastEthernet1/1 10.1.1.10 *Oct 3 04:49:40.319: OSPF: End of hello processing R2# *Oct 3 04:49:43.723: OSPF: Send hello to 224.0.0.5 area 0 on FastEthernet1/1 from 10.1.1.9 R2# *Oct 3 04:49:50.319: OSPF: Rcv hello from 3.3.3.3 area 0 from FastEthernet1/1 10.1.1.10 *Oct 3 04:49:50.323: OSPF: End of hello processing R2# *Oct 3 04:49:53.723: OSPF: Send hello to 224.0.0.5 area 0 on FastEthernet1/1 from 10.1.1.9 R2# *Oct 3 04:50:00.327: OSPF: Rcv hello from 3.3.3.3 area 0 from FastEthernet1/1 10.1.1.10 *Oct 3 04:50:00.331: OSPF: End of hello processing It is no longer sending updates out to the F1/0 interface, so let's look at the routing table now and see what networks we know about: R2#sh ip route Gateway of last resort is not set 2.0.0.0/32 is subnetted, 1 subnets C 2.2.2.2 is directly connected, Loopback2 3.0.0.0/32 is subnetted, 1 subnets O 3.3.3.3 [110/2] via 10.1.1.10, 00:05:12, FastEthernet1/1 10.0.0.0/30 is subnetted, 2 subnets C 10.1.1.8 is directly connected, FastEthernet1/1 C 10.1.1.4 is directly connected, FastEthernet1/0 R2#ping 2.2.2.2 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds: !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms R2#ping 3.3.3.3 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 3.3.3.3, timeout is 2 seconds: !!!!! Success rate is 100 percent (5/5), round-trip min/avg/max = 20/24/40 ms So, what are we looking at? We are only learning about the 3.3.3.3 network, which is the loopback address on R3. We have stopped learning about the 1.1.1.1 network, and we do not have connectivity to it. We can ping our own loopback, obviously, and we can ping the loopback on R3. Okay, let's remove the passive interface command and compare the difference: R2(config)#router ospf 2 R2(config-router)#no passive-interface f1/0 R2(config-router)# *Oct 3 04:57:34.343: %OSPF-5-ADJCHG: Process 2, Nbr 1.1.1.1 on FastEthernet1/0 from LOADING to FULL, Loading Done We have now recreated our neighbor relationship with R1 once more. Let's debug again: R2#debug ip ospf hello OSPF hello events debugging is on R2# *Oct 3 05:03:48.527: OSPF: Send hello to 224.0.0.5 area 0 on FastEthernet1/0 from 10.1.1.6 R2# *Oct 3 05:03:50.303: OSPF: Rcv hello from 3.3.3.3 area 0 from FastEthernet1/1 10.1.1.10 *Oct 3 05:03:50.303: OSPF: End of hello processing R2# *Oct 3 05:03:52.143: OSPF: Rcv hello from 1.1.1.1 area 0 from FastEthernet1/0 10.1.1.5 *Oct 3 05:03:52.143: OSPF: End of hello processing R2# *Oct 3 05:03:53.723: OSPF: Send hello to 224.0.0.5 area 0 on FastEthernet1/1 from 10.1.1.9 Once again, we are sending and receiving hellos from R1, so let's ping the loopback on R1, but also look at the routing table: R2#sh ip route Gateway of last resort is not set 1.0.0.0/32 is subnetted, 1 subnets O 1.1.1.1 [110/2] via 10.1.1.5, 00:06:50, FastEthernet1/0 2.0.0.0/32 is subnetted, 1 subnets C 2.2.2.2 is directly connected, Loopback2 3.0.0.0/32 is subnetted, 1 subnets O 3.3.3.3 [110/2] via 10.1.1.10, 00:06:50, FastEthernet1/1 10.0.0.0/30 is subnetted, 2 subnets C 10.1.1.8 is directly connected, FastEthernet1/1 C 10.1.1.4 is directly connected, FastEthernet1/0 R2#ping 1.1.1.1 Type escape sequence to abort. Sending 5, 100-byte ICMP Echos to 1.1.1.1, timeout is 2 seconds: !!!!! 
Once more, we have connectivity, so with the passive-interface be very careful how you are going to use it and which protocol you are going to use it with. Now let's explore another feature, which is the default-information originate. This is used in conjunction with a static-default route to create an OSPF default static route. It is like advertising a static default route. To let all the routers know if you want to get to a destination network, this is the way to go. So, how would you configure something like that? Let's take a look. Use the following topology: R1(config)# ip route 0.0.0.0 0.0.0.0 GigabitEthernet2/0 R1(config)#router ospf 1 R1(config-router)#default-information originate Now that we have created a static route to an external network and we did the default-information originate command, what would the routing tables of the other routers look like? R2#sh ip route Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2 E1 - OSPF external type 1, E2 - OSPF external type 2 i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2 ia - IS-IS inter area, * - candidate default, U - per-user static route o - ODR, P - periodic downloaded static route Gateway of last resort is 10.1.1.5 to network 0.0.0.0 1.0.0.0/32 is subnetted, 1 subnets O 1.1.1.1 [110/2] via 10.1.1.5, 00:16:35, FastEthernet1/0 2.0.0.0/32 is subnetted, 1 subnets C 2.2.2.2 is directly connected, Loopback2 3.0.0.0/32 is subnetted, 1 subnets O 3.3.3.3 [110/2] via 10.1.1.10, 00:16:35, FastEthernet1/1 10.0.0.0/30 is subnetted, 2 subnets C 10.1.1.8 is directly connected, FastEthernet1/1 C 10.1.1.4 is directly connected, FastEthernet1/0 O 192.168.1.0/24 [110/2] via 10.1.1.5, 00:16:35, FastEthernet1/0 O*E2 0.0.0.0/0 [110/1] via 10.1.1.5, 00:16:35, FastEthernet1/0 R3#sh ip route Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2 E1 - OSPF external type 1, E2 - OSPF external type 2 i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2 ia - IS-IS inter area, * - candidate default, U - per-user static route o - ODR, P - periodic downloaded static route Gateway of last resort is 10.1.1.9 to network 0.0.0.0 1.0.0.0/32 is subnetted, 1 subnets O 1.1.1.1 [110/3] via 10.1.1.9, 00:17:17, FastEthernet0/0 2.0.0.0/32 is subnetted, 1 subnets O 2.2.2.2 [110/2] via 10.1.1.9, 00:17:17, FastEthernet0/0 3.0.0.0/32 is subnetted, 1 subnets C 3.3.3.3 is directly connected, Loopback3 10.0.0.0/30 is subnetted, 2 subnets C 10.1.1.8 is directly connected, FastEthernet0/0 O 10.1.1.4 [110/2] via 10.1.1.9, 00:17:17, FastEthernet0/0 O 192.168.1.0/24 [110/3] via 10.1.1.9, 00:17:17, FastEthernet0/0 O*E2 0.0.0.0/0 [110/1] via 10.1.1.9, 00:17:17, FastEthernet0/0 R4#sh ip route Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2 E1 - OSPF external type 1, E2 - OSPF external type 2 i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2 ia - IS-IS inter area, * - candidate default, U - per-user static route o - ODR, P - periodic downloaded static route Gateway of last resort is 192.168.1.1 to network 0.0.0.0 1.0.0.0/32 is subnetted, 1 subnets D EX 1.1.1.1 [170/5376] via 192.168.1.1, 00:12:38, GigabitEthernet2/0 
2.0.0.0/32 is subnetted, 1 subnets D EX 2.2.2.2 [170/5376] via 192.168.1.1, 00:12:38, GigabitEthernet2/0 3.0.0.0/32 is subnetted, 1 subnets D EX 3.3.3.3 [170/5376] via 192.168.1.1, 00:12:38, GigabitEthernet2/0 10.0.0.0/30 is subnetted, 2 subnets D EX 10.1.1.8 [170/5376] via 192.168.1.1, 00:12:38, GigabitEthernet2/0 D EX 10.1.1.4 [170/5376] via 192.168.1.1, 00:12:38, GigabitEthernet2/0 C 192.168.1.0/24 is directly connected, GigabitEthernet2/0 D*EX 0.0.0.0/0 [170/5376] via 192.168.1.1, 00:12:38, GigabitEthernet2/0

So, this is how you can advertise a default route to an external network using OSPF. Obviously, you must configure EIGRP on R1 and R4 and do some redistribution - that is why all the routes are external - but you are advertising a way out using a static default route. To summarize, this article covered OSPF configuration, features of OSPF, and different ways of advertising networks. To know more about multi-area OSPF configuration, check out the book CCNA Routing and Switching 200-125 Certification Guide.

Brute forcing HTTP applications and web applications using Nmap [Tutorial]
Discovering network hosts with 'TCP SYN' and 'TCP ACK' ping scans in Nmap [Tutorial]
How to build a convolution neural network based malware detector using malware visualization [Tutorial]


How to build a Relay React App [Tutorial]

Bhagyashree R
01 Jan 2019
12 min read
Relay is used with both web and mobile React applications. It relies on a language called GraphQL, which is used to fetch resources and to mutate those resources. The premise of Relay is that it can scale in ways that Redux and other approaches to handling state cannot, because they become limiting. It does this by eliminating them and keeping the focus on the data requirements of the component.

In this article, we will build a Todo React Native application using Relay. By the end of this article, you should feel comfortable with how data moves around in a GraphQL-centric architecture. At a high level, you can think of Relay as an implementation of Flux architecture patterns, and you can think of GraphQL as the interface that describes how the Flux stores within Relay work. At a more practical level, the value of Relay is ease of implementation. For example, with Redux, you have a lot of implementation work to do just to populate the stores with data. This gets verbose over time. It's this verbosity that makes Redux difficult to scale beyond a certain point.

This article is taken from the book React and React Native - Second Edition by Adam Boduch. This book guides you through building applications for web and native mobile platforms with React, JSX, Redux, and GraphQL. To follow along with the examples implemented in this article, you can find the code in the GitHub repository of the book.

TodoMVC and Relay

The TodoMVC example for Relay will be a robust yet concise example. We will walk through an example React Native implementation of a Todo app. The key is that it'll use the same GraphQL backend as the web UI. I've included the web version of the TodoMVC app in the code that ships with this book, but I won't dwell on the details of how it works. If you've worked on web development in the past 5 years, you've probably come across a sample Todo app. Here's what the web version looks like: Even if you haven't used any of the TodoMVC apps before, I would recommend playing with this one before trying to implement the native version, which is what you'll be doing for the remainder of the article. The goal of the native version that you're about to implement isn't functional parity. In fact, you're shooting for a very minimal subset of todo functionality. The aim is to show you that Relay works mostly the same on native platforms as it does on web platforms, and that the GraphQL backend can be shared between web and native apps.

The GraphQL schema

The schema is the vocabulary used by the GraphQL backend server and the Relay components in the frontend. The GraphQL type system enables the schema to describe the data that's available, and how to put it all together when a query request comes in. This is what makes the whole approach so scalable: the fact that the GraphQL runtime figures out how to put data together. All you need to supply are functions that tell GraphQL where the data is; for example, in a database or in some remote service endpoint. Let's take a look at the types used in the GraphQL schema for the TodoMVC app. You can find the code in this section on GitHub.
import { GraphQLBoolean, GraphQLID, GraphQLInt, GraphQLList, GraphQLNonNull, GraphQLObjectType, GraphQLSchema, GraphQLString } from 'graphql'; import { connectionArgs, connectionDefinitions, connectionFromArray, cursorForObjectInConnection, fromGlobalId, globalIdField, mutationWithClientMutationId, nodeDefinitions, toGlobalId } from 'graphql-relay'; import { Todo, User, addTodo, changeTodoStatus, getTodo, getTodos, getUser, getViewer, markAllTodos, removeCompletedTodos, removeTodo, renameTodo } from './database'; const { nodeInterface, nodeField } = nodeDefinitions( globalId => { const { type, id } = fromGlobalId(globalId); if (type === 'Todo') { return getTodo(id); } if (type === 'User') { return getUser(id); } return null; }, obj => { if (obj instanceof Todo) { return GraphQLTodo; } if (obj instanceof User) { return GraphQLUser; } return null; } ); const GraphQLTodo = new GraphQLObjectType({ name: 'Todo', fields: { id: globalIdField(), complete: { type: GraphQLBoolean }, text: { type: GraphQLString } }, interfaces: [nodeInterface] }); const { connectionType: TodosConnection, edgeType: GraphQLTodoEdge } = connectionDefinitions({ nodeType: GraphQLTodo }); const GraphQLUser = new GraphQLObjectType({ name: 'User', fields: { id: globalIdField(), todos: { type: TodosConnection, args: { status: { type: GraphQLString, defaultValue: 'any' }, ...connectionArgs }, resolve: (obj, { status, ...args }) => connectionFromArray(getTodos(status), args) }, numTodos: { type: GraphQLInt, resolve: () => getTodos().length }, numCompletedTodos: { type: GraphQLInt, resolve: () => getTodos('completed').length } }, interfaces: [nodeInterface] }); const GraphQLRoot = new GraphQLObjectType({ name: 'Root', fields: { viewer: { type: GraphQLUser, resolve: getViewer }, node: nodeField } }); const GraphQLAddTodoMutation = mutationWithClientMutationId({ name: 'AddTodo', inputFields: { text: { type: new GraphQLNonNull(GraphQLString) } }, outputFields: { viewer: { type: GraphQLUser, resolve: getViewer }, todoEdge: { type: GraphQLTodoEdge, resolve: ({ todoId }) => { const todo = getTodo(todoId); return { cursor: cursorForObjectInConnection(getTodos(), todo), node: todo }; } } }, mutateAndGetPayload: ({ text }) => { const todoId = addTodo(text); return { todoId }; } }); const GraphQLChangeTodoStatusMutation = mutationWithClientMutationId({ name: 'ChangeTodoStatus', inputFields: { id: { type: new GraphQLNonNull(GraphQLID) }, complete: { type: new GraphQLNonNull(GraphQLBoolean) } }, outputFields: { viewer: { type: GraphQLUser, resolve: getViewer }, todo: { type: GraphQLTodo, resolve: ({ todoId }) => getTodo(todoId) } }, mutateAndGetPayload: ({ id, complete }) => { const { id: todoId } = fromGlobalId(id); changeTodoStatus(todoId, complete); return { todoId }; } }); const GraphQLMarkAllTodosMutation = mutationWithClientMutationId({ name: 'MarkAllTodos', inputFields: { complete: { type: new GraphQLNonNull(GraphQLBoolean) } }, outputFields: { viewer: { type: GraphQLUser, resolve: getViewer }, changedTodos: { type: new GraphQLList(GraphQLTodo), resolve: ({ changedTodoIds }) => changedTodoIds.map(getTodo) } }, mutateAndGetPayload: ({ complete }) => { const changedTodoIds = markAllTodos(complete); return { changedTodoIds }; } }); const GraphQLRemoveCompletedTodosMutation = mutationWithClientMutationId( { name: 'RemoveCompletedTodos', outputFields: { viewer: { type: GraphQLUser, resolve: getViewer }, deletedIds: { type: new GraphQLList(GraphQLString), resolve: ({ deletedIds }) => deletedIds } }, mutateAndGetPayload: () => { const 
deletedTodoIds = removeCompletedTodos(); const deletedIds = deletedTodoIds.map( toGlobalId.bind(null, 'Todo') ); return { deletedIds }; } } ); const GraphQLRemoveTodoMutation = mutationWithClientMutationId({ name: 'RemoveTodo', inputFields: { id: { type: new GraphQLNonNull(GraphQLID) } }, outputFields: { viewer: { type: GraphQLUser, resolve: getViewer }, deletedId: { type: GraphQLID, resolve: ({ id }) => id } }, mutateAndGetPayload: ({ id }) => { const { id: todoId } = fromGlobalId(id); removeTodo(todoId); return { id }; } }); const GraphQLRenameTodoMutation = mutationWithClientMutationId({ name: 'RenameTodo', inputFields: { id: { type: new GraphQLNonNull(GraphQLID) }, text: { type: new GraphQLNonNull(GraphQLString) } }, outputFields: { todo: { type: GraphQLTodo, resolve: ({ todoId }) => getTodo(todoId) } }, mutateAndGetPayload: ({ id, text }) => { const { id: todoId } = fromGlobalId(id); renameTodo(todoId, text); return { todoId }; } }); const GraphQLMutation = new GraphQLObjectType({ name: 'Mutation', fields: { addTodo: GraphQLAddTodoMutation, changeTodoStatus: GraphQLChangeTodoStatusMutation, markAllTodos: GraphQLMarkAllTodosMutation, removeCompletedTodos: GraphQLRemoveCompletedTodosMutation, removeTodo: GraphQLRemoveTodoMutation, renameTodo: GraphQLRenameTodoMutation } }); export default new GraphQLSchema({ query: GraphQLRoot, mutation: GraphQLMutation }); There are a lot of things being imported here, so I'll start with the imports. I wanted to include all of these imports because I think they're contextually relevant for this discussion. First, there's the primitive GraphQL types from the graphql library. Next, you have helpers from the graphql-relay library that simplify defining a GraphQL schema. Lastly, there's imports from your own database module. This isn't necessarily a database, in fact, in this case, it's just mock data. You could replace database with api for instance, if you needed to talk to remote API endpoints, or we could combine the two; it's all GraphQL as far as your React components are concerned. Then, you define some of your own GraphQL types. For example, the GraphQLTodo type has two fields—text and complete. One is a Boolean and one is a string. The important thing to note about GraphQL fields is the resolve() function. This is how you tell the GraphQL runtime how to populate these fields when they're required. These two fields simply return property values. Then, there's the GraphQLUser type. This field represents the user's entire universe within the UI, hence the name. The todos field, for example, is how you query for todo items from Relay components. It's resolved using the connectionFromArray() function, which is a shortcut that removes the need for more verbose field definitions. Then, there's the GraphQLRoot type. This has a single viewer field that's used as the root of all queries. Now let's take a closer look at the add todo mutation, as follows. 
I'm not going to go over every mutation that's used by the web version of this app, in the interests of space:

const GraphQLAddTodoMutation = mutationWithClientMutationId({
  name: 'AddTodo',
  inputFields: {
    text: { type: new GraphQLNonNull(GraphQLString) }
  },
  outputFields: {
    viewer: { type: GraphQLUser, resolve: getViewer },
    todoEdge: {
      type: GraphQLTodoEdge,
      resolve: ({ todoId }) => {
        const todo = getTodo(todoId);
        return {
          cursor: cursorForObjectInConnection(getTodos(), todo),
          node: todo
        };
      }
    }
  },
  mutateAndGetPayload: ({ text }) => {
    const todoId = addTodo(text);
    return { todoId };
  }
});

All mutations have a mutateAndGetPayload() method, which is how the mutation actually makes a call to some external service to change the data. The returned payload can be the changed entity, but it can also include data that's changed as a side effect. This is where the outputFields come into play. This is the information that's handed back to Relay in the browser so that it has enough information to properly update components based on the side effects of the mutation. Don't worry, you'll see what this looks like from Relay's perspective shortly. The mutation type that you've created here is used to hold all application mutations. Lastly, here's how the entire schema is put together and exported from the module:

export default new GraphQLSchema({
  query: GraphQLRoot,
  mutation: GraphQLMutation
});

Don't worry about how this schema is fed into the GraphQL server for now.

Bootstrapping Relay

At this point, you have the GraphQL backend up and running. Now, you can focus on your React components in the frontend. In particular, you're going to look at Relay in a React Native context, which really only has minor differences. For example, in web apps, it's usually react-router that bootstraps Relay. In React Native, it's a little different. Let's look at the App.js file that serves as the entry point for your native app. You can find the code in this section on GitHub. Let's break down what's happening here, starting with the environment constant:

const environment = new Environment({
  network: Network.create({ schema }),
  store: new Store(new RecordSource())
});

This is how you communicate with the GraphQL backend, by configuring a network. In this example, you're importing Network from relay-local-schema, which means that no network requests are being made. This is really handy when you're getting started - especially when building a React Native app. Next, there's the QueryRenderer component. This Relay component is used to render other components that depend on GraphQL queries. It expects a query property:

query={graphql`
  query App_Query($status: String!) {
    viewer {
      ...TodoList_viewer
    }
  }
`}

Note that queries are prefixed by the module that they're in - in this case, App. This query uses a GraphQL fragment from another module, TodoList, and is named TodoList_viewer. You can pass variables to the query:

variables={{ status: 'any' }}

Then, the render property is a function that renders your components when the GraphQL data is ready: If something went wrong, error will contain information about the error. If there's no error and no props, it's safe to assume that the GraphQL data is still loading.

Adding todo items

In the TodoInput component, there's a text input that allows the user to enter new todo items. When they're done entering the todo, Relay will need to send a mutation to the backend GraphQL server. Here's what the component code looks like. You can find the code in this section on GitHub.
It doesn't look that different from your typical React Native component. The piece that stands out is the mutation - AddTodoMutation. This is how you tell the GraphQL backend that you want a new todo node created. Let's see what the application looks like so far: The textbox for adding new todo items is just above the list of todo items. Now, let's look at the TodoList component, which is responsible for rendering the todo item list.

Rendering todo items

It's the job of the TodoList component to render the todo list items. When AddTodoMutation takes place, the TodoList component needs to be able to render this new item. Relay takes care of updating the internal data stores where all of our GraphQL data lives. Here's a look at the item list again, with several more todos added: Here's the TodoList component itself. You can find the code in this section on GitHub. The relevant GraphQL to get the data you need is passed as the second argument to createFragmentContainer(). This is the declarative data dependency for the component. When you render the <Todo> component, you're passing it the edge.todo data. Now, let's see what the Todo component itself looks like.

Completing todo items

The last piece of this application is rendering each todo item and providing the ability to change the status of the todo. Let's take a look at this code. You can find the code in this section on GitHub. The actual component that's rendered is a switch control and the item text. When the user marks the todo as complete, the item text is styled as crossed off. The user can also uncheck items. The ChangeTodoStatusMutation mutation sends the request to the GraphQL backend to change the todo state. The GraphQL backend then talks to any microservices that are needed to make this happen. Then, it responds with the fields that this component depends on. The important part of this code that I want to point out is the fragments used in the Relay container. This container doesn't actually use them directly. Instead, they're used by the todos query in the TodoList component (Todo.getFragment()). This is useful because it means that you can use the Todo component in another context, with another query, and its data dependencies will always be satisfied.

In this article, we implemented some specific Relay and GraphQL ideas. Starting with the GraphQL schema, we learned how to declare the data that's used by the application and how these data types resolve to specific data sources, such as microservice endpoints. Then, we learned about bootstrapping GraphQL queries from Relay in your React Native app. Next, we walked through the specifics of adding, changing, and listing todo items. The application itself uses the same schema as the web version of the Todo application, which makes things much easier when you're developing web and native React applications.

If you found this post useful, do check out the book React and React Native - Second Edition. This book guides you through building applications for web and native mobile platforms with React, JSX, Redux, and GraphQL.

JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript
React introduces Hooks, a JavaScript function to allow using React without classes
npm JavaScript predictions for 2019: React, GraphQL, and TypeScript are three technologies to learn


Building your own Snapchat-like AR filter on Android using TensorFlow Lite [Tutorial]

Natasha Mathur
31 Dec 2018
13 min read
Augmented Reality (AR) filters that are used on applications such as Snapchat and Instagram have gained worldwide popularity. This tutorial is an excerpt taken from the book 'Machine Learning Projects for Mobile Applications' written by Karthikeyan NG. In this tutorial, we will look at how you can build your own Augmented Reality (AR) filter using TensorFlow Lite, a platform that allows you to run machine learning models on mobile and embedded devices. With this application, we will place AR filters on top of a real-time camera view. Using AR filters, we can add a mustache to a male's facial key points, and we can add a relevant emotional expression on top of the eyes. The TensorFlow Lite model is used to detect gender and emotion from the camera view. We will look at concepts such as MobileNet models and building the dataset required for model conversion, before looking at how to build the Android application.

MobileNet models

We use the MobileNet model to identify gender, while the AffectNet model is used to detect emotion. Facial key point detection is achieved using Google's Mobile Vision API. TensorFlow offers various pre-trained models, such as drag-and-drop models, in order to identify approximately 1,000 default objects. When compared with other similar models, such as the Inception model datasets, MobileNet works better in terms of latency, size, and accuracy. In terms of output performance, there is a significant amount of lag with a full-fledged model. However, the trade-off is acceptable when the model is deployable on a mobile device and for real-time offline model detection. The MobileNet architecture deals with a 3 x 3 convolution layer in a different way from a typical CNN. For a more detailed explanation of the MobileNet architecture, please visit https://arxiv.org/pdf/1704.04861.pdf. Let's look at an example of how to use MobileNet. Let's not build one more generic dataset in this case. Instead, we will write a simple classifier to find Pikachu in an image. The following are sample pictures showing an image of Pikachu and an image without Pikachu:

Building the dataset

To build our own classifier, we need to have datasets that contain images with and without Pikachu. You can start with 1,000 images in each dataset, and you can pull down such images here: https://search.creativecommons.org/. Let's create two folders named pikachu and no-pikachu and drop those images in accordingly. Always ensure that you have the appropriate licenses to use any images, especially for commercial purposes. Image scraper for the Google and Bing APIs: https://github.com/rushilsrivastava/image_search. Now we have an image folder, which is structured as follows:

/dataset/
  /pikachu/[image1,..]
  /no-pikachu/[image1,..]

Retraining of images

We can now start labeling our images. With TensorFlow, this job becomes easier.
Assuming that you have installed TensorFlow already, download the retraining script:

curl https://github.com/tensorflow/hub/blob/master/examples/image_retraining/retrain.py

Let's retrain the images with the Python script now:

python retrain.py \
  --image_dir ~/MLmobileapps/Chapter5/dataset/ \
  --learning_rate=0.0001 \
  --testing_percentage=20 \
  --validation_percentage=20 \
  --train_batch_size=32 \
  --validation_batch_size=-1 \
  --eval_step_interval=100 \
  --how_many_training_steps=1000 \
  --flip_left_right=True \
  --random_scale=30 \
  --random_brightness=30 \
  --architecture mobilenet_1.0_224 \
  --output_graph=output_graph.pb \
  --output_labels=output_labels.txt

If you set validation_batch_size to -1, it will validate the whole dataset; learning_rate=0.0001 works well. You can adjust this and try it for yourself. In the architecture flag, we choose which version of MobileNet to use, from versions 1.0, 0.75, 0.50, and 0.25. The suffix number 224 represents the image resolution. You can specify 224, 192, 160, or 128 as well.

Model conversion from GraphDef to TFLite

TocoConverter is used to convert a TensorFlow GraphDef file or SavedModel into either a TFLite FlatBuffer or a graph visualization. TOCO stands for TensorFlow Lite Optimizing Converter. We need to pass the data through command-line arguments. The following command-line arguments are available in TensorFlow 1.10.0:

--output_file OUTPUT_FILE  Filepath of the output tflite model.
--graph_def_file GRAPH_DEF_FILE  Filepath of input TensorFlow GraphDef.
--saved_model_dir  Filepath of directory containing the SavedModel.
--keras_model_file  Filepath of HDF5 file containing tf.Keras model.
--output_format {TFLITE,GRAPHVIZ_DOT}  Output file format.
--inference_type {FLOAT,QUANTIZED_UINT8}  Target data type in the output.
--inference_input_type {FLOAT,QUANTIZED_UINT8}  Target data type of real-number input arrays.
--input_arrays INPUT_ARRAYS  Names of the input arrays, comma-separated.
--input_shapes INPUT_SHAPES  Shapes corresponding to --input_arrays, colon-separated.
--output_arrays OUTPUT_ARRAYS  Names of the output arrays, comma-separated.

We can now use the toco tool to convert the TensorFlow model into a TensorFlow Lite model:

toco \
  --graph_def_file=/tmp/output_graph.pb \
  --output_file=/tmp/optimized_graph.tflite \
  --input_arrays=Mul \
  --output_arrays=final_result \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,${224},${224},3 \
  --inference_type=FLOAT \
  --input_data_type=FLOAT

Similarly, we have two model files used in this application: the gender model and the emotion model. These will be explained in the following two sections. To convert ML models in TensorFlow 1.9.0 to TensorFlow 1.11.0, use TocoConverter; TocoConverter is semantically identical to TFLiteConverter. To convert models prior to TensorFlow 1.9, use the toco_convert function. Run help(tf.contrib.lite.toco_convert) to get details about acceptable parameters.
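For reference, here is a minimal sketch of the same conversion done from Python rather than the toco command line. It assumes TensorFlow 1.9 to 1.11, where the converter is exposed as tf.contrib.lite.TocoConverter; the file paths and tensor names simply mirror the command shown above and may differ in your setup.

import tensorflow as tf

# Build a converter from the frozen graph produced by retrain.py
converter = tf.contrib.lite.TocoConverter.from_frozen_graph(
    "/tmp/output_graph.pb",
    input_arrays=["Mul"],
    output_arrays=["final_result"],
    input_shapes={"Mul": [1, 224, 224, 3]})

# convert() returns the TFLite FlatBuffer as bytes
tflite_model = converter.convert()

with open("/tmp/optimized_graph.tflite", "wb") as f:
    f.write(tflite_model)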
Gender model

This is built on the IMDB WIKI dataset, which contains 500k+ celebrity faces. It uses the MobileNet_V1_224_0.5 version of MobileNet. The link to the data model project can be found here: https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/. It is very rare to find public datasets with thousands of images. This dataset is built on top of a large collection of celebrity faces, drawn from two common places: one is IMDb and the other is Wikipedia. More than 100K celebrities' details were retrieved from their profiles from both sources through scripts, and the data was then organized by removing noise (irrelevant content).

Emotion model

This is built on the AffectNet model with more than 1 million images. It uses the MobileNet_V2_224_1.4 version of MobileNet. The link to the data model project can be found here: http://mohammadmahoor.com/affectnet/. The AffectNet model is built by collecting and annotating facial images of more than 1 million faces from the internet. The images were sourced from three search engines, using around 1,250 related keywords in six different languages.

Comparison of MobileNet versions

In both of our models, we use different versions of the MobileNet model. MobileNet V2 is mostly an updated version of V1 that makes it even more efficient and powerful in terms of performance. Let's look at a few factors that differ between the two models: The numbers above are for the MobileNet V1 and V2 model versions with a 1.0 depth multiplier, and lower numbers are better in this comparison. Looking at the results, we can assume that V2 is almost twice as fast as the V1 model. On a mobile device, where memory access is more limited than computational capability, V2 works very well. MACs are multiply-accumulate operations; this measures how many calculations are needed to perform inference on a single 224 x 224 RGB image. As the image size increases, more MACs are required. From the number of MACs alone, V2 should be almost twice as fast as V1. However, it's not just about the number of calculations. On mobile devices, memory access is much slower than computation. But here V2 has the advantage too: it only has 80% of the parameter count that V1 has. Now, let's look at the performance in terms of accuracy: The figures shown above were tested on the ImageNet dataset. These numbers can be misleading, as they depend on all the constraints that were taken into account while deriving them. The IEEE paper behind the model can be found here: http://mohammadmahoor.com/wp-content/uploads/2017/08/AffectNet_oneColumn-2.pdf.
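To make the MAC comparison above a little more concrete, here is a rough Python sketch of the arithmetic behind it: a standard 3 x 3 convolution versus the depthwise-separable block that MobileNet uses. The layer sizes are hypothetical, chosen only to illustrate the ratio; they are not taken from the actual MobileNet architecture.

def standard_conv_macs(h, w, c_in, c_out, k=3):
    # Every output position applies a k x k x c_in filter for each output channel
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k=3):
    depthwise = h * w * c_in * k * k   # one k x k filter per input channel
    pointwise = h * w * c_in * c_out   # 1 x 1 convolution to mix channels
    return depthwise + pointwise

h = w = 112
c_in, c_out = 64, 128

std = standard_conv_macs(h, w, c_in, c_out)
sep = depthwise_separable_macs(h, w, c_in, c_out)
print(std, sep, round(std / sep, 1))  # the separable block needs roughly 8-9x fewer MACs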
Here, the user selects which model to utilize in this application. We have two models for gender and emotion detection, whose details we discussed earlier. In this activity, we will add two buttons and their corresponding model classifiers, shown as follows: With the selection of the corresponding model, we will launch the next activity accordingly using a clickListener event with the ModelSelectionActivity class as follows. Based on the clicks on the buttons on gender identification or emotion identification, we will pass on the information to the ARFilterActivity. So that the corresponding model will be loaded into memory: @Override public void onClick(View view) { int id = view.getId(); if(id==R.id.genderbtn){ Intent intent = new Intent(this, ARFilterActivity.class); intent.putExtra(ARFilterActivity.MODEL_TYPE,"gender"); startActivity(intent); } else if(id==R.id.emotionbtn){ Intent intent = new Intent(this,ARFilterActivity.class); intent.putExtra(ARFilterActivity.MODEL_TYPE,"emotion"); startActivity(intent); } } Intent: An Intent is a messaging object you can use to request an action from another app component. Although intents facilitate communication between components in several ways, there are three fundamental use cases such as starting an Activity, starting a service and delivering a broadcast. In ARFilterActivity, we will have the real-time view classification. The object that has been passed on will be received inside the filter activity, where the corresponding classifier will be invoked as follows. Based on the classifier selected from the previous activity, the corresponding model will be loaded into ARFilterActivity inside the OnCreate() method as shown as follows: public static String classifierType(){ String type = mn.getIntent().getExtras().getString("TYPE"); if(type!=null) { if(type.equals("gender")) return "gender"; else return "emotion"; } else return null; } The UI will be designed accordingly in order to display the results in the bottom part of the layout via the activity_arfilter layout as follows. CameraSourcePreview initiates the Camera2 API for a view inside that we will add GraphicOverlay class. It is a view which renders a series of custom graphics to be overlayed on top of an associated preview (that is the camera preview). The creator can add graphics objects, update the objects, and remove them, triggering the appropriate drawing and invalidation within the view. It supports scaling and mirroring of the graphics relative the camera's preview properties. The idea is that detection item is expressed in terms of a preview size but need to be scaled up to the full view size, and also mirrored in the case of the front-facing camera: <com.mlmobileapps.arfilter.CameraSourcePreview android:id="@+id/preview" android:layout_width="wrap_content" android:layout_height="wrap_content"> <com.mlmobileapps.arfilter.GraphicOverlay android:id="@+id/faceOverlay" android:layout_width="match_parent" android:layout_height="match_parent" /> </com.mlmobileapps.arfilter.CameraSourcePreview> We use the CameraPreview class from the Google open source project and the CAMERA object needs user permission based on different Android API levels: Link to Google camera API: https://github.com/googlesamples/android-Camera2Basic. Once we have the Camera API ready, we need to have the appropriate permission from the user side to utilize the camera as shown below. 
We need the following permissions:

Manifest.permission.CAMERA
Manifest.permission.WRITE_EXTERNAL_STORAGE

private void requestPermissionThenOpenCamera() {
  if(ContextCompat.checkSelfPermission(context, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
    if (ContextCompat.checkSelfPermission(context, Manifest.permission.WRITE_EXTERNAL_STORAGE) == PackageManager.PERMISSION_GRANTED) {
      Log.e(TAG, "requestPermissionThenOpenCamera: "+Build.VERSION.SDK_INT);
      useCamera2 = (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP);
      createCameraSourceFront();
    } else {
      ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE}, REQUEST_STORAGE_PERMISSION);
    }
  } else {
    ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, REQUEST_CAMERA_PERMISSION);
  }
}

With this, we now have an application with a screen where we can choose which model to load, and on the next screen we have the camera view ready. We now have to load the appropriate model, detect the face on the screen, and apply the filter accordingly. Face detection on the real camera view is done through the Google Vision API. This can be added to your build.gradle as a dependency as follows. You should always use the latest version of the API:

api 'com.google.android.gms:play-services-vision:15.0.0'

The image classification object is initialized inside the OnCreate() method of the ARFilterActivity and inside the ImageClassifier class. The corresponding model is loaded based on user selection as follows:

private void initPaths(){
  String type = ARFilterActivity.classifierType();
  if(type!=null) {
    if(type.equals("gender")){
      MODEL_PATH = "gender.lite";
      LABEL_PATH = "genderlabels.txt";
    }
    else{
      MODEL_PATH = "emotion.lite";
      LABEL_PATH = "emotionlabels.txt";
    }
  }
}

Once the model is decided, we will read the file and load it into memory. In this article, we looked at concepts such as MobileNet models and building the dataset required for the model application, and we then looked at how to start building a Snapchat-like AR filter. If you want to know the further steps to build the AR filter, such as loading the model and so on, be sure to check out the book 'Machine Learning Projects for Mobile Applications'. Snapchat source code leaked and posted to GitHub Snapchat is losing users – but revenue is up 15 year old uncovers Snapchat's secret visual search function

article-image-how-to-perform-event-handling-in-react-tutorial
Bhagyashree R
31 Dec 2018
11 min read

How to perform event handling in React [Tutorial]

React has a unique approach to handling events: declaring event handlers in JSX. The differentiating factor with event handling in React components is that it's declarative. Contrast this with something like jQuery, where you have to write imperative code that selects the relevant DOM elements and attaches event handler functions to them. The advantage with the declarative approach to event handlers in JSX markup is that they're part of the UI structure. Not having to track down the code that assigns event handlers is mentally liberating. [box type="shadow" align="" class="" width=""]This article is taken from the book React and React Native - Second Edition by Adam Boduch. This book guides you through building applications for web and native mobile platforms with React, JSX, Redux, and GraphQL. To follow along with the examples implemented in this article, you can find the code in the GitHub repository of the book.[/box] In this tutorial, we will look at how event handlers for particular elements are declared in JSX. It will walk you through the implementation of inline and higher-order event handler functions. Then you'll learn how React actually maps event handlers to DOM elements under the hood. Finally, you'll learn about the synthetic events that React passes to event handler functions, and how they're pooled for performance purposes. Declaring event handlers In this section, you'll write a basic event handler, so that you can get a feel for the declarative event handling syntax found in React applications. Then, we will see how to use generic event handler functions. Declaring handler functions Let's take a look at a basic component that declares an event handler for the click event of an element: Find the code for this section in GitHub. The event handler function, this.onClick(), is passed to the onClick property of the <button> element. By looking at this markup, it's clear what code is going to run when the button is clicked. Multiple event handlers What I really like about the declarative event handler syntax is that it's easy to read when there's more than one handler assigned to an element. Sometimes, for example, there are two or three handlers for an element. Imperative code is difficult to work with for a single event handler, let alone several of them. When an element needs more handlers, it's just another JSX attribute. This scales well from a code maintainability perspective: Find the code for this section in GitHub. This input element could have several more event handlers, and the code would be just as readable. As you keep adding more event handlers to your components, you'll notice that a lot of them do the same thing. Next, you'll learn how to share generic handler functions across components. Importing generic handlers Any React application is likely going to share the same event handling logic for different components. For example, in response to a button click, the component should sort a list of items. It's these types of generic behaviors that belong in their own modules so that several components can share them. Let's implement a component that uses a generic event handler function: Find the code for this section on GitHub. Let's walk through what's going on here, starting with the imports. You're importing a function called reverse(). This is the generic event handler function that you're using with your <button> element. When it's clicked, the list should reverse its order. The onReverseClick method actually calls the generic reverse() function. 
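Since the code listings for this section live in the book's GitHub repository, here is a minimal sketch of how such a component might be wired up. The component and file names (ReversibleList, handlers.js) and the item values are illustrative assumptions, not the book's exact code:

// handlers.js – a generic, shareable event handler.
// It assumes the component it is bound to keeps an `items` array in its state.
export function reverse() {
  this.setState(state => ({ items: [...state.items].reverse() }));
}

// ReversibleList.js
import React, { Component } from 'react';
import { reverse } from './handlers';

export default class ReversibleList extends Component {
  constructor(props) {
    super(props);
    this.state = { items: ['First', 'Second', 'Third'] };
    // Bind the generic handler to this component instance
    this.onReverseClick = reverse.bind(this);
  }

  render() {
    return (
      <section>
        <button onClick={this.onReverseClick}>Reverse</button>
        <ul>
          {this.state.items.map(item => (
            <li key={item}>{item}</li>
          ))}
        </ul>
      </section>
    );
  }
}
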
It is created using bind() to bind the context of the generic function to this component instance. Finally, looking at the JSX markup, you can see that the onReverseClick() function is used as the handler for the button click. So how does this work, exactly? Do you have a generic function that somehow changes the state of this component because you bound context to it? Well, pretty much, yes, that's it. Let's look at the generic function implementation now: Find the code for this section on GitHub. This function depends on a this.state property and an items array within the state. The key is that the state is generic; an application could have many components with an items array in its state. Here's what our rendered list looks like: As expected, clicking the button causes the list to sort, using your generic reverse() event handler: Next, you'll learn how to bind the context and the argument values of event handler functions. Event handler context and parameters In this section, you'll learn about React components that bind their event handler contexts and how you can pass data into event handlers. Having the right context is important for React event handler functions because they usually need access to component properties or state. Being able to parameterize event handlers is also important because they don't pull data out of DOM elements. Getting component data In this section, you'll learn about scenarios where the handler needs access to component properties, as well as argument values. You'll render a custom list component that has a click event handler for each item in the list. The component is passed an array of values as follows: Find the code for this section on GitHub. Each item in the list has an id property, used to identify the item. You'll need to be able to access this ID when the item is clicked in the UI so that the event handler can work with the item. Here's what the MyList component implementation looks like: Find the code for this section on GitHub. Here is what the rendered list looks like: You have to bind the event handler context, which is done in the constructor. If you look at the onClick() event handler, you can see that it needs access to the component so that it can look up the clicked item in this.props.items. Also, the onClick() handler is expecting an id parameter. If you take a look at the JSX content of this component, you can see that calling bind() supplies the argument value for each item in the list. This means that when the handler is called in response to a click event, the id of the item is already provided. Higher-order event handlers A higher-order function is a function that returns a new function. Sometimes, higher-order functions take functions as arguments too. In the preceding example, you used bind() to bind the context and argument values of your event handler functions. Higher-order functions that return event handler functions are another technique. The main advantage of this technique is that you don't call bind() several times. Instead, you just call the function where you want to bind parameters to the function. Let's look at an example component: Find the code for this section on GitHub. This component renders three buttons and has three pieces of state—a counter for each button. The onClick() function is automatically bound to the component context because it's defined as an arrow function. It takes a name argument and returns a new function. The function that is returned uses this name value when called. 
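Here is a rough sketch of that pattern; the component name, button labels, and state keys are illustrative assumptions, and the book's actual listing is in its GitHub repository:

import React, { Component } from 'react';

export default class CounterButtons extends Component {
  state = { first: 0, second: 0, third: 0 };

  // Higher-order handler: takes a name and returns the actual click handler
  onClick = name => () => {
    this.setState(state => ({ [name]: state[name] + 1 }));
  };

  render() {
    return (
      <section>
        <button onClick={this.onClick('first')}>First {this.state.first}</button>
        <button onClick={this.onClick('second')}>Second {this.state.second}</button>
        <button onClick={this.onClick('third')}>Third {this.state.third}</button>
      </section>
    );
  }
}
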
It uses computed property syntax (variables inside []) to increment the state value for the given name. Here's what that component content looks like after each button has been clicked a few times: Inline event handlers The typical approach to assigning handler functions to JSX properties is to use a named function. However, sometimes you might want to use an inline function. This is done by assigning an arrow function directly to the event property in the JSX markup: Find the code for this section on GitHub. The main use of inlining event handlers like this is when you have a static parameter value that you want to pass to another function. In this example, you're calling console.log() with the string clicked. You could have set up a special function for this purpose outside of the JSX markup by creating a new function using bind(), or by using a higher-order function. But then you would have to think of yet another name for yet another function. Inlining is just easier sometimes. Binding handlers to elements When you assign an event handler function to an element in JSX, React doesn't actually attach an event listener to the underlying DOM element. Instead, it adds the function to an internal mapping of functions. There's a single event listener on the document for the page. As events bubble up through the DOM tree to the document, the React handler checks to see whether any components have matching handlers. The process is illustrated here: Why does React go to all of this trouble, you might ask? To keep the declarative UI structures separated from the DOM as much as possible. For example, when a new component is rendered, its event handler functions are simply added to the internal mapping maintained by React. When an event is triggered and it hits the document object, React maps the event to the handlers. If a match is found, it calls the handler. Finally, when the React component is removed, the handler is simply removed from the list of handlers. None of these DOM operations actually touch the DOM. It's all abstracted by a single event listener. This is good for performance and the overall architecture (keep the render target separate from the application code). Synthetic event objects When you attach an event handler function to a DOM element using the native addEventListener() function, the callback will get an event argument passed to it. Event handler functions in React are also passed an event argument, but it's not the standard Event instance. It's called SyntheticEvent, and it's a simple wrapper for native event instances. Synthetic events serve two purposes in React: Provides a consistent event interface, normalizing browser inconsistencies Synthetic events contain information that's necessary for propagation to work Here's an illustration of the synthetic event in the context of a React component: Event pooling One challenge with wrapping native event instances is that this can cause performance issues. Every synthetic event wrapper that's created will also need to be garbage collected at some point, which can be expensive in terms of CPU time. For example, if your application only handles a few events, this wouldn't matter much. But even by modest standards, applications respond to many events, even if the handlers don't actually do anything with them. This is problematic if React constantly has to allocate new synthetic event instances. React deals with this problem by allocating a synthetic instance pool. 
Whenever an event is triggered, it takes an instance from the pool and populates its properties. When the event handler has finished running, the synthetic event instance is released back into the pool, as shown here: This prevents the garbage collector from running frequently when a lot of events are triggered. The pool keeps a reference to the synthetic event instances, so they're never eligible for garbage collection. React never has to allocate new instances either. However, there is one gotcha that you need to be aware of. It involves accessing the synthetic event instances from asynchronous code in your event handlers. This is an issue because, as soon as the handler has finished running, the instance goes back into the pool. When it goes back into the pool, all of its properties are cleared. Here's an example that shows how this can go wrong: Find the code for this section on GitHub. The second call to  console.log() is attempting to access a synthetic event property from an asynchronous callback that doesn't run until the event handler completes, which causes the event to empty its properties. This results in a warning and an undefined value. This tutorial introduced you to event handling in React. The key differentiator between React and other approaches to event handling is that handlers are declared in JSX markup. This makes tracking down which elements handle which events much simpler. We learned that it's a good idea to share event handling functions that handle generic behavior.  We saw the various ways to bind the event handler function context and parameter values. Then, we discussed the inline event handler functions and their potential use, as well as how React actually binds a single DOM event handler to the document object. Synthetic events are an abstraction that wraps the native event, and you learned why they're necessary and how they're pooled for efficient memory consumption. If you found this post useful, do check out the book, React and React Native - Second Edition. This book guides you through building applications for web and native mobile platforms with React, JSX, Redux, and GraphQL. JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript React introduces Hooks, a JavaScript function to allow using React without classes React 16.6.0 releases with a new way of code splitting, and more!
article-image-why-moving-from-a-monolithic-architecture-to-microservices-is-so-hard-gitlabs-jason-plum-breaks-it-down-kubeconcnc-talk
Amrata Joshi
19 Dec 2018
12 min read

Why moving from a monolithic architecture to microservices is so hard, Gitlab’s Jason Plum breaks it down [KubeCon+CNC Talk]

Last week, at the KubeCon+CloudNativeCon North America 2018, Jason Plum, Sr. software engineer, distribution at GitLab spoke about GitLab, Omnibus, and the concept of monolith and its downsides. He spent the last year working on the cloud native helm charts and breaking out a complicated pile of code. This article highlights few insights from Jason Plum’s talk on Monolith to Microservice: Pitchforks Not Included at the KubeCon + CloudNativeCon. Key takeaways “You could not have seen the future that you live in today, learn from what you've got in the past, learn what's available now and work your way to it.” - Jason Plum GitLab’s beginnings as the monolithic project provided the means for focused acceleration and innovation. The need to scale better and faster than the traditional models caused to reflect on our choices, as we needed to grow beyond the current architecture to keep up. New ways of doing things require new ways of looking at them. Be open minded, and remember your correct choices in the past could not see the future you live in. “So the real question people don't realize is what is GitLab?”- Jason Plum Gitlab is the first single application to have the entire DevOps lifecycle in a single Interface. Omnibus - The journey from a package to a monolith “We had a group of people working on a single product to binding that and then we took that, we bundled that. And we shipped it and we shipped it and we shipped it and we shipped it and all the twenties every month for the entire lifespan of this company we have done that, that's not been easy. Being a monolith made that something that was simple to do at scale.”- Jason Plum In the beginning it was simple as Ruby on Rails was on a single codebase and users had to deploy it from source. Just one gigantic code was used but that's not the case these days. Ruby on Rails is still used for the primary application but now a shim proxy called workhorse is used that takes the heavy lifting away from Ruby. It ensures the users and their API’s are are responsive. The team at GitLab started packaging this because doing everything from source was difficult. They created the Omnibus package which eventually became the gigantic monolith. Monoliths make sense because... Adding features is simple It’s easy as everything is one bundle Clear focus for Minimum Viable Product (MVP) Advantages of Omnibus Full-stack bundle provides all components necessary to use every feature of GitLab. Simple to install. Components can be individually enabled/disabled. East to distribute. Highly controlled, version locked components. Guaranteed configuration stability. The downsides of monoliths “The problem is this thing is massive” - Jason Plum The Omnibus package can work on any platform, any cloud and under any distribution. But the question is how many of us would want to manage fleets of VMs? This package has grown so much that it is 1.5 gigabytes and unpacked. It has all the features and is still usable. If a user downloads 500 megabytes as an installation package then it unpacks almost a gigabyte and a half. This package contains everything that is required to run the SaaS but the problem is that this package is massive. “The trick is Git itself is the reason that moving to cloud native was hard.” - Jason Plum While using Git, the users run a couple of commands, they push them and deploy the app. But at the core of that command is how everything is handled and how everything is put together. Git works with snapshots of the entire file. 
The number of files include, every file the user has and every version the user had. It also involves all the indexes and references and some optimizations. But the problem is the more the files, the harder it gets. “Has anybody ever checked out the Linux tree? You check out that tree, get your coffee, come back check out, any branch I don't care what it is and then dip that against current master. How many files just got read on the file system?” - Jason Plum When you come back you realize that all the files that are marked as different and between the two of them when you do diff, that information is not stored, it's not greeting and it is not even cutting it out. It is running differently on all of those files. Imagine how bad that gets when you have 10 million lines of code in a repository that's 15 years old ?  That’s expensive in terms of performance.  - Jason Plum Traditional methods - A big problem “Now let's actually go and make a branch make some changes and commit them right. Now you push them up to your fork and now you go into add if you on an M R. Now it's my job to do the thing that was already hard on your laptop, right? Okay cool, that's one of you, how about 10,000 people a second right do you see where this is going? Suddenly it's harder but why is this the problem?” - Jason Plum The answer is traditional methods, as they are quite slow. If we have hundreds of things in the fleet, accessing tens of machines that are massive and it still won’t work because the traditional methods are a problem. Is NFS a solution to this problem? NFS (Network File System) works well when there are just 10 or 100 people. But if a user is asked to manage an NFS server for 5,000 people, one might rather choose pitchfork. NFS is capable but it can’t work at such a scale. The Git team now has a mount that has to be on every single node, as the API code and web code and other processes which needs to be functional enough to read the files. The team has previously used Garrett, Lib Git to read the files on the file system. Every time, one reads the file, the whole file used to get pulled. This gave rise to another problem, disk i/o problems. Since, everybody tries to read the disparate set of files, the traffic increases. “Okay so we have definitely found a scaling limit now we can only push the traditional methods of up and out so far before we realize that that's just not going to work because we don't have big enough pipes, end of line. So now we've got all of this and we've just got more of them and more of them and more of them. And all of a sudden we need to add 15 nodes to the fleet and another 15 nodes to the fleet and another 15 nodes to the fleet to keep up with sudden user demand. With every single time we have to double something the choke points do not grow - they get tighter and tighter” - Jason Plum The team decided to take a second look at the problem and started working on a project called Gitaly. They took the API calls that the users would make to live Git. So the Git mechanics was sent over a GRPC and then Gitaly was put on the actual file servers. Further the users were asked to call for a diff on whatever they want and then Gitaly was asked for the response. There is no need of NFS now. “I can send a 1k packet get a 4k response instead of NFS and reading 10,000 files. 
We centralized everything across and this gives us the ability to actually meet throughput because that pipe that's not getting any bigger suddenly has 1/10 of the traffic going through it.” - Jason Plum This leaves more space for users to easily get to the file servers and further removes the need of NFS mounts for everything. Incase one node is lost then half of the fleet is not lost in an instant. How is Gitaly useful? With Gitaly the throughput requirement significantly reduced. The service nodes no more need disk access. It provides optimization for specific problems. How to solve Git’s performance related issue? For better optimization and performance it is important to treat it like a service or like a database. The file system is still in use and all of the accesses to the files are on the node where we have the best performance and best caching and there is no issue with regards to the network. “To take the monolith and rip a chunk out make it something else and literally prop the thing up, but how long are we going to be able to do this?” - Jason Plum If a user plans to upload something then he/she has to use a file system and which means that NFS hasn't gone away. Do we really need to have NFS because somebody uploaded a cat picture? Come on guys we can do better than that right?- Jason Plum The next solution was to take everything as a traditional file that does not get and move into object store as an option. This matters because there is no need to have a file system locally. The files can be handed over to a service that works well. And it could run on Prem in a cloud and can be handled by any number of men and service providers. Pets cattle is a popular term by CERN which means anything that can be replaced easily is cattle and anything that you have to care and feed for on a regular basis is a pet. The pet could be the stateful information, for example, database. The problem can be better explained with configuring the Omnibus at scale. If there are  hundreds of the VM’s and they are getting installed, further which the entire package is getting installed. So now there are 20 gigabytes per VM. The package needs to be downloaded for all the VM’s which means almost 500 megabytes. All the individual components can be configured out of the Omnibus. But even the load gets spreaded, it will still remain this big. And each of the nodes will at least take two minutes to come up from. So to speed up this process, the massive stack needs to be broken down into chunks and containers so they can be treated as individualized services. Also, there is no need of NFS as the components are no longer bound to the NFS disk. And this process would now take just five seconds instead of two minutes. A problem called legacy debt, a shared file system expectation which was a bugger. If there are separate containers and there is no shared disk then it could again give rise to a problem. “I can't do a shared disk because if we do shared disk through rewrite many. What's the major provider that will do that for us on every platform, anybody remember another three-letter problem.” - Jason Plum Then there came an interesting problem called workhorse, a smart proxy that talks to the UNIX sockets and not TCP. Though this problem got fixed. 
Time constraints - another problem “We can't break existing users and we can't have hiccups we have to think about everything ahead of time plan well and execute.” - Jason Plum Time constraints is a serious problem for a project’s developers, the development resources milestones, roadmaps deliverables. The new features would keep on coming into the project. The project would keep on functioning in the background but the existing users can’t be kept waiting. Is it possible to define individual component requirements? “Do you know how much CPU you need when idle versus when there's 10 people versus literally some guy clicking around and if files because he's one to look at what the kernel would like in 2 6 2 ?”- Jason Plum Monitoring helps to understand the component requirements. Metrics and performance data are few of the key elements for getting the exact component requirements. Other parameters like network, throughput, load balance, services etc also play an important role. But the problem is how to deal with throughput? How to balance the services? How to ensure that those services are always up? Then the other question comes up regarding the providers and load balancers as everyone doesn’t want to use the same load balancers or the same services. The system must support all the load balancers from all the major cloud providers and which is difficult. Issues with scaling “Maybe 50 percent for the thing that needs a lot of memory is a bad idea. I thought 50 percent was okay because when I ran a QA test against it, it didn't ever use more than 50 percent of one CPU. Apparently when I ran three more it now used 115 percent and I had 16 pounds and it fell over again.” - Jason Plum It's important to know what things needs to be scaled horizontally and which ones needs to be scaled vertically. To go automated or manual is also a crucial question. Also, it is equally important to understand which things should be configurable and how to tweak them as the use cases may vary from project to project. So, one should know how to go about a test and how to document a test. Issues with resilience “What happens to the application when a node, a whole node disappears off the cluster? Do you know how that behaves?” - Jason Plum It is important to understand which things shouldn't be on the same nodes. But the problem is how to recover it. These things are not known and by the time one understands the problem and the solution, it is too late. We need new ways of examining these issues and for planning the solution. Jason’s insightful talk on Monolith to Microservice gives a perfect end to the KubeCon + CloudNativeCon and is a must watch for everyone. Kelsey Hightower on Serverless and Security on Kubernetes at KubeCon + CloudNative RedHat contributes etcd, a distributed key-value store project, to the Cloud Native Computing Foundation at KubeCon + CloudNativeCon Oracle introduces Oracle Cloud Native Framework at KubeCon+CloudNativeCon 2018

article-image-8-programming-languages-to-learn-in-2019
Richard Gall
19 Dec 2018
9 min read

8 programming languages to learn in 2019

Learning new skills takes time - that's why, before learning something, you need to know that what you're learning is going to be worthwhile. This is particularly true when deciding which programming language to learn. As we approach the new year, it's a good time to reflect on our top learning priorities for 2019. But which programming should you learn in 2019? We’ve put together a list of the top programming languages to learn in the new year - as well as reasons you should learn them, and some suggestions for how you can get started. This will help you take steps to expand your skill set in 2019 in the way that’s right for you. Want to learn a new programming language? We have thousands of eBooks and videos in our $5 sale to help you get to grips with everything from Python to Rust. Python Python has been a growing programming language for some time and it shows no signs of disappearing. There are a number of reasons for this, but the biggest is the irresistible allure of artificial intelligence. Once you know Python, performing some relatively sophisticated deep learning tasks becomes relatively easy, not least because of the impressive ecosystem of tools that surround it, like TensorFlow. But Python’s importance isn’t just about machine learning. It’s flexibility means it has a diverse range of applications. If you’re a full-stack developer, for example, you might find Python useful for developing backend services and APIs; equally, if you’re in security or SRE, Python can be useful for automating aspects of your infrastructure to keep things secure and reliable. Put simply, Python is a useful addition to your skill set. Learn Python in 2019 if... You’re new to software development You want to try your hand at machine learning You want to write automation scripts Want to get started? Check out these titles: Clean Code in Python Learning Python Learn Programming in Python with Cody Jackson Python for Everyday Life [Video]       Go Go isn’t quite as popular as Python, but it is growing rapidly. And its fans are incredibly vocal about why they love it: it’s incredibly simple, yet also amazingly powerful. The reason for this is its creation: it was initially developed by Google that wanted a programming language that could handle the complexity of the systems they were developing, without adding to complexity in terms of knowledge and workflows. Combining the best aspects of functional and object oriented programming, as well as featuring a valuable set of in-built development tools, the language is likely to only go from strength to strength over the next 12 months. Learn Go in 2019 if… You’re a backend or full-stack developer looking to increase your language knowledge You’re working in ops or SRE Looking for an alternative to Python Learn Go with these eBooks and videos: Mastering Go Cloud Native programming with Golang Hands-On Go Programming Hands-On Full-Stack Development with Go       Rust In Stack Overflow’s 2018 developer survey Rust was revealed to be the best loved language among the developers using it. 80% of respondents said they loved using it or wanted to use it. Now, while Rust lacks the simplicity of Go and Python, it does do what it sets out to do very well - systems programming that’s fast, efficient, and secure. In fact, developers like to debate the merits of Rust and Go - it seems they occupy the minds of very similar developers. However, while they do have some similarities, there are key differences that should make it easier to decide which one you learn. 
At a basic level, Rust is better for lower level programming, while Go will allow you to get things done quickly. Rust does have a lot of rules, all of which will help you develop incredibly performant applications, but this does mean it has a steeper learning curve than something like Go. Ultimately it will depend on what you want to use the language for and how much time you have to learn something new. Learn Rust in 2019 if… You want to know why Rust developers love it so much You do systems programming You have a bit of time to tackle its learning curve Learn Rust with these titles: Rust Quick Start Guide Building Reusable Code with Rust [Video] Learning Rust [Video] Hands-On Concurrency with Rust       TypeScript TypeScript has quietly been gaining popularity over recent years. But it feels like 2018 has been the year that it has really broke through to capture the imagination of the wider developer community. Perhaps it’s just Satya Nadella’s magic... More likely, however, it’s because we’re now trying to do too much with plain old JavaScript. We simply can’t build applications of the complexity we want without drowning in lines of code. Essentially, TypeScript bulks up JavaScript, and makes it suitable for building applications of the future. It’s no surprise that TypeScript is now fundamental to core JavaScript frameworks - even Google decided to use it in Angular. But it’s not just for front end JavaScript developers - there are examples of Java and C# developers looking closely at TypeScript, as it shares many features with established statically typed languages. Learn TypeScript in 2019 if… You’re a JavaScript developer You’re a Java or C# developer looking to expand their horizons Learn TypeScript in 2019: TypeScript 3.0 Quick Start Guide TypeScript High Performance Introduction to TypeScript [Video]         Scala Scala has been around for some time, but its performance gains over Java have seen it growing in popularity in recent years. It isn’t the easiest language to learn - in comparison with other Java-related languages, like Kotlin, which haven’t strayed far from its originator, Scala is almost an attempt to rewrite the rule book. It’s a good multi-purpose programming language that brings together functional programming principles and the object oriented principles you find in Java. It’s also designed for concurrency, giving you a scale of power that just isn’t possible. The one drawback of Scala is that it doesn’t have the consistency in its ecosystem in the way that, say, Java does. This does mean, however, that Scala expertise can be really valuable if you have the time to dedicate time to really getting to know the language. Learn Scala in 2019 if… You’re looking for an alternative to Java that’s more scalable and handles concurrency much better You're working with big data Learn Scala: Learn Scala Programming Professional Scala [Video] Scala Machine Learning Projects Scala Design Patterns - Second Edition       Swift Swift started life as a replacement for Objective-C for iOS developers. While it’s still primarily used by those in the Apple development community, there are some signs that Swift could expand beyond its beginnings to become a language of choice for server and systems programming. The core development team have consistently demonstrated they have a sense of purpose is building a language fit for the future, with versions 3 and 4 both showing significant signs of evolution. 
Fast, relatively easy to learn, and secure, not only has Swift managed to deliver on its brief to offer a better alternative to Objective-C, it also looks well-suited to many of the challenges programmers will be facing in the years to come. Learn Swift in 2019 if… You want to build apps for Apple products You’re interested in a new way to write server code Learn Swift: Learn Swift by Building Applications Hands-On Full-Stack Development with Swift Hands-On Server-Side Web Development with Swift         Kotlin It makes sense for Kotlin to follow Swift. The parallels between the two are notable; it might be crude, but you could say that Kotlin is to Java what Swift is to Objective-C. There are, of course, some who feel that the comparison isn’t favorable, with accusations that one language is merely copying the other, but perhaps the similarities shouldn’t really be that surprising - they’re both trying to do the same things: provide a better alternative to what already exists. Regardless of the debates, Kotlin is a particularly compelling language if you’re a Java developer. It works extremely well, for example, with Spring Boot to develop web services. Certainly as monolithic Java applications shift into microservices, Kotlin is only going to become more popular. Learn Kotlin in 2019 if… You’re a Java developer that wants to build better apps, faster You want to see what all the fuss is about from the Android community Learn Kotlin: Kotlin Quick Start Guide Learning Kotlin by building Android Applications Learn Kotlin Programming [Video]         C Most of the languages on this list are pretty new, but I’m going to finish with a classic that refuses to go away. C has a reputation for being complicated and hard to learn, but it remains relevant because you can find it in so much of the software we take for granted. It’s the backbone of our operating systems, and used in everyday objects that have software embedded in them. Together, this means C is a language worth learning because it gives you an insight into how software actually works on machines. In a world where abstraction and accessibility rules the software landscape, getting beneath everything can be extremely valuable. Learn C in 2019 if… You’re looking for a new challenge You want to gain a deeper understanding of how software works on your machine You’re interested in developing embedded systems and virtual reality projects Learn C: Learn and Master C Programming For Absolute Beginners! [Video]

article-image-4-things-in-tech-that-might-die-in-2019
Richard Gall
19 Dec 2018
10 min read

4 things in tech that might die in 2019

If you’re in and around the tech industry, you’ve probably noticed that hype is an everyday reality. People spend a lot of time talking about what trends and technologies are up and coming and what people need to be aware of - they just love it. Perhaps second only to the fashion industry, the tech world moves through ideas quickly, with innovation piling up upon the next innovation. For the most part, our focus is optimistic: what is important? What’s actually going to shape the future? But with so much change there are plenty of things that disappear completely or simply shift out of view. Some of these things may have barely made an impression, others may have been important but are beginning to be replaced with other, more powerful, transformative and relevant tools. So, in the spirit of pessimism, here is a list of some of the trends and tools that might disappear from view in 2019. Some of these have already begun to sink, while others might leave you pondering whether I’ve completely lost my marbles. Of course, I am willing to be proven wrong. While I will not be eating my hat or any other item of clothing, I will nevertheless accept defeat with good grace in 12 months time. Blockchain Let’s begin with a surprise. You probably expected Blockchain to be hyped for 2019, but no, 2019 might, in fact, be the year that Blockchain dies. Let’s consider where we are right now: Blockchain, in itself, is a good idea. But so far all we’ve really had our various cryptocurrencies looking ever so slightly like pyramid schemes. Any further applications of Blockchain have, by and large, eluded the tech world. In fact, it’s become a useful sticker for organizations looking to raise funds - there are examples of apps out there that support Blockchain backed technologies in the early stages of funding which are later dropped as the company gains support. And it’s important to note that the word Blockchain doesn’t actually refer to one thing - there are many competing definitions as this article on The Verge explains so well. At risk of sounding flippant, Blockchain is ultimately a decentralized database. The reason it’s so popular is precisely because there is a demand for a database that is both scalable and available to a variety of parties - a database that isn’t surrounded by the implicit bureaucracy and politics that even the most prosaic ones do. From this perspective, it feels likely that 2019 will be a search for better ways of managing data - whether that includes Blockchain in its various forms remains to be seen. What you should learn instead of Blockchain A trend that some have seen as being related to Blockchain is edge computing. Essentially, this is all about decentralized data processing at the ‘edge’ of a network, as opposed to within a centralized data center (say, for example, cloud). Understanding the value of edge computing could allow us to better realise what Blockchain promises. Learn edge computing with Azure IoT Development Cookbook. It’s also worth digging deeper into databases - understanding how we can make these more scalable, reliable, and available, are essentially the tasks that anyone pursuing Blockchain is trying to achieve. So, instead of worrying about a buzzword, go back to what really matters. Get to grips with new databases. Learn with Seven NoSQL Databases in a Week Why I could be wrong about Blockchain There’s a lot of support for Blockchain across the industry, so it might well be churlish to dismiss it at this stage. 
Blockchain certainly does offer a completely new way of doing things, and there are potentially thousands of use cases. If you want to learn Blockchain, check out these titles: Mastering Blockchain, Second Edition Foundations of Blockchain Blockchain for Enterprise   Hadoop and big data If Blockchain is still receiving considerable hype, then big data has been slipping away quietly for the last couple of years. Of course, it hasn’t quite disappeared - data is now a normal part of reality. It’s just that trends like artificial intelligence and cloud have emerged to take its place and place even greater emphasis on what we’re actually doing with that data, and how we’re doing it. Read more: Why is Hadoop dying? With this change in emphasis, we’ve also seen the slow death of Hadoop. In a world that increasingly cloud native, it simply doesn’t make sense to run data on a cluster of computers - instead, leveraging public cloud makes much more sense. You might, for example, use Amazon S3 to store your data and then Spark, Flink, or Kafka for processing. Of course, the advantages of cloud are well documented. But in terms of big data, cloud allows for much greater elasticity in terms of scale, greater speed, and makes it easier to perform machine learning thanks to in built features that a number of the leading cloud vendors provide. What you should learn instead of Hadoop The future of big data largely rests in tools like Spark, Flink and Kafka. But it’s important to note it’s not really just about a couple of tools. As big data evolves, focus will need to be on broader architectural questions about what data you have, where it needs to be stored and how it should be used. Arguably, this is why ‘big data’ as a concept will lose valence with the wider community - it will still exist, but will be part of parcel of everyday reality, it won’t be separate from everything else we do. Learn the tools that will drive big data in the future: Apache Spark 2: Data Processing and Real-Time Analytics [Learning Path] Apache Spark: Tips, Tricks, & Techniques [Video] Big Data Processing with Apache Spark Learning Apache Flink Apache Kafka 1.0 Cookbook Why I could be wrong about Hadoop Hadoop 3 is on the horizon and could be the saving grace for Hadoop. Updates suggest that this new iteration is going to be much more amenable to cloud architectures. Learn Hadoop 3: Apache Hadoop 3 Quick Start Guide Mastering Hadoop 3         R 12 to 18 months ago debate was raging over whether R or Python was the best language for data. As we approach the end of 2018, that debate seems to have all but disappeared, with Python finally emerging as the go-to language for anyone working with data. There are a number of reasons for this: Python has the best libraries and frameworks for developing machine learning models. TensorFlow, for example, which runs on top of Keras, makes developing pretty sophisticated machine and deep learning systems relatively easy. R, however, simply can’t match Python in this way. With this ease comes increased adoption. If people want to ‘get into’ machine learning and artificial intelligence, Python is the obvious choice. This doesn’t mean R is dead - instead, it will continue to be a language that remains relevant for very specific use cases in research and data analysis. If you’re a researcher in a university, for example, you’ll probably be using R. But it at least now has to concede that it will never have the reach or levels of growth that Python has. 
What you should learn instead of R This is obvious - if you’re worried about R’s flexibility and adaptability for the future, you need to learn Python. But it’s certainly not the only option when it comes to machine learning - the likes of Scala and Go could prove useful assets on your CV, for machine learning and beyond. Learn a new way to tackle contemporary data science challenges: Python Machine Learning - Second Edition Hands-on Supervised Machine Learning with Python [Video] Machine Learning With Go Scala for Machine Learning - Second Edition       Why I could be wrong about R R is still an incredibly useful language when it comes to data analysis. Particularly if you’re working with statistics in a variety of fields, it’s likely that it will remain an important part of your skill set for some time. Check out these R titles: Getting Started with Machine Learning in R [Video] R Data Analysis Cookbook - Second Edition Neural Networks with R         IoT IoT is a term that has been hanging around for quite a while now. But it still hasn’t quite delivered on the hype that it originally received. Like Blockchain, 2019 is perhaps IoT’s make or break year. Even if it doesn’t develop into the sort of thing it promised, it could at least begin to break down into more practical concepts - like, for example edge computing. In this sense, we’d stop talking about IoT as if it were a single homogenous trend about to hit the modern world, but instead a set of discrete technologies that can produce new types of products, and complement existing (literal) infrastructure. The other challenge that IoT faces in 2019 is that the very concept of a connected world depends upon decision making - and policy - beyond the world of technology and business. If, for example, we’re going to have smart cities, there needs to be some kind of structure in place on which some degree of digital transformation can take place. Similarly, if every single device is to be connected in some way, questions will need to be asked about how these products are regulated and how this data is managed. Essentially, IoT is still a bit of a wild west. Given the year of growing scepticism about technology, major shifts are going to be unlikely over the next 12 months. What to learn One way of approaching IoT is instead to take a step back and think about the purpose of IoT, and what facets of it are most pertinent to what you want to achieve. Are you interested in collecting and analyzing data? Or developing products that have in built operational intelligence. Once you think about it from this perspective, IoT begins to sound less like a conceptual behemoth, and something more practical and actionable. Why I could be wrong about IoT Immediate shifts in IoT might be slow, but it could begin to pick up speed in organizations that understand it could have a very specific value. In this sense, IoT is a little like Blockchain - it’s only really going to work if we can move past the hype, and get into the practical uses of different technologies. Check out some of our latest IoT titles: Internet of Things Programming Projects Industrial Internet Application Development Introduction to Internet of Things [Video] Alexa Skills Projects       Does anything really die in tech? You might be surprised at some of the entries on this list - others, not so much. But either way, it’s worth pointing out that ultimately nothing ever really properly disappears in tech. 
From a legacy perspective, change and evolution often happen slowly, and in terms of innovation, buzzwords and hype don't simply vanish; they mature and influence developments in ways we might not have initially expected. What will really be important in 2019 is to be alive to these shifts and to give yourself the best chance of taking advantage of change when it really matters.

article-image-quantum-computing-edge-analytics-and-meta-learning-key-trends-in-data-science-and-big-data-in-2019
Richard Gall
18 Dec 2018
11 min read

Quantum computing, edge analytics, and meta learning: key trends in data science and big data in 2019

When historians study contemporary notions of data in the early 21st century, 2018 might well be a landmark year. In many ways this was the year when Big and Important Issues - from the personal to the political - began to surface. The techlash, a term which has defined the year, arguably emerged from conversations and debates about the uses and abuses of data. But while cynicism casts a shadow on the brightly lit data science landcape, there’s still a lot of optimism out there. And more importantly, data isn’t going to drop off the agenda any time soon. However, the changing conversation in 2018 does mean that the way data scientists, analysts, and engineers use data and build solutions for it will change. A renewed emphasis on ethics and security is now appearing, which will likely shape 2019 trends. But what will these trends be? Let’s take a look at some of the most important areas to keep an eye on in the new year. Meta learning and automated machine learning One of the key themes of data science and artificial intelligence in 2019 will be doing more with less. There are a number of ways in which this will manifest itself. The first is meta learning. This is a concept that aims to improve the way that machine learning systems actually work by running machine learning on machine learning systems. Essentially this allows a machine learning algorithm to learn how to learn. By doing this, you can better decide which algorithm is most appropriate for a given problem. Find out how to put meta learning into practice. Learn with Hands On Meta Learning with Python. Automated machine learning is closely aligned with meta learning. One way of understanding it is to see it as putting the concept of automating the application of meta learning. So, if meta learning can help better determine which machine learning algorithms should be applied and how they should be designed, automated machine learning makes that process a little smoother. It builds the decision making into the machine learning solution. Fundamentally, it’s all about “algorithm selection, hyper-parameter tuning, iterative modelling, and model assessment,” as Matthew Mayo explains on KDNuggets. Automated machine learning tools What’s particularly exciting about automated machine learning is that there are already a number of tools that make it relatively easy to do. AutoML is a set of tools developed by Google that can be used on the Google Cloud Platform, while auto-sklearn, built around the scikit-learn library, provides a similar out of the box solution for automated machine learning. Although both AutoML and auto-sklearn are very new, there are newer tools available that could dominate the landscape: AutoKeras and AdaNet. AutoKeras is built on Keras (the Python neural network library), while AdaNet is built on TensorFlow. Both could be more affordable open source alternatives to AutoML. Whichever automated machine learning library gains the most popularity will remain to be seen, but one thing is certain: it makes deep learning accessible to many organizations who previously wouldn’t have had the resources or inclination to hire a team of PhD computer scientists. But it’s important to remember that automated machine learning certainly doesn’t mean automated data science. While tools like AutoML will help many organizations build deep learning models for basic tasks, for organizations that need a more developed data strategy, the role of the data scientist will remain vital. 
Quantum computing

Quantum computing, even as a concept, feels almost fantastical. It's not just cutting-edge, it's mind-bending. But in real-world terms it also continues the theme of doing more with less. Explaining quantum computing can be tricky, but the fundamentals are this: instead of a binary system (the foundation of computing as we currently know it), in which a bit can be either 0 or 1, a quantum system has qubits, which can be 0, 1, or both simultaneously. (If you want to learn more, read this article).

What quantum computing means for developers

So, what does this mean in practice? Essentially, because the qubits in a quantum system can be multiple things at the same time, you are able to run much more complex computations. Think about the difference in scale: running a deep learning system on a binary system has clear limits. Yes, you can scale up processing power, but you're nevertheless constrained by the foundational fact of zeros and ones. In a quantum system, where that restriction no longer exists, the scale of the computing power at your disposal increases astronomically.

Once you understand the fundamental proposition, it becomes much easier to see why the likes of IBM and Google are clamouring to develop and deploy quantum technology. One of the most talked about use cases is using quantum computers to factor very large numbers into their prime components (a move which carries risks, given that the difficulty of such factoring underpins much modern encryption). But there are other applications, such as in chemistry, where complex subatomic interactions are too detailed to be modelled by a traditional computer.

It's important to note that quantum computing is still very much in its infancy. While Google and IBM are leading the way, they are really only researching the area. It certainly hasn't been deployed or applied in any significant or sustained way. But this isn't to say that it should be ignored. It's going to have a huge impact on the future, and, more importantly, it's plain interesting. Even if you don't think you'll be getting to grips with quantum systems at work for some time (a decade at best), understanding the principles and how they work in practice will not only give you a solid foundation for major changes in the future, it will also help you better understand some of the existing challenges in scientific computing. And, of course, it will also make you a decent conversationalist at dinner parties.

Who's driving quantum computing forward?

If you want to get started, Microsoft has put together the Quantum Development Kit, which includes the first quantum-specific programming language, Q#. IBM, meanwhile, has developed its own Quantum Experience, which allows engineers and researchers to run quantum computations in the IBM cloud. As you investigate these tools you'll probably get the sense that no one's quite sure what to do with these technologies. And that's fine - if anything it makes it the perfect time to get involved and help further research and thinking on the topic.

Get a head start in the quantum computing revolution. Pre-order Mastering Quantum Computing with IBM QX.
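If you want a feel for what quantum programming actually looks like, here is a minimal sketch using Qiskit, the Python SDK behind IBM's Quantum Experience. It simply entangles two qubits and measures them on the bundled local simulator; it assumes Qiskit is installed, and import paths may differ between versions.

```python
# A minimal quantum circuit sketch with Qiskit (IBM's Python SDK).
# Assumes Qiskit is installed; runs on the local simulator rather than real hardware.
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, Aer, execute

q = QuantumRegister(2)
c = ClassicalRegister(2)
circuit = QuantumCircuit(q, c)

circuit.h(q[0])         # put the first qubit into superposition (0 and 1 at once)
circuit.cx(q[0], q[1])  # entangle it with the second qubit
circuit.measure(q, c)   # collapse both qubits into classical bits

backend = Aer.get_backend('qasm_simulator')
result = execute(circuit, backend, shots=1024).result()
print(result.get_counts(circuit))  # roughly half '00' and half '11'
```

Even this toy example shows the shift in mindset: you describe operations on qubits and then sample the probabilistic outcomes, rather than stepping through deterministic instructions.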
Edge analytics and digital twins

While quantum computing lingers on the horizon, the concept of the edge has quietly planted itself at the very center of the IoT revolution. IoT might still be the term that business leaders and, indeed, wider society are talking about, but for technologists and engineers, none of its advantages would be possible without the edge. Edge computing, or edge analytics, is essentially about processing data at the edge of a network rather than within a centralized data warehouse. Again, as you can begin to see, the concept of the edge allows you to do more with less: more speed, less bandwidth (as devices no longer need to communicate with data centers), and, in theory, more data. In the context of IoT, where just about every object in existence could be a source of data, moving processing and analytics to the edge can only be a good thing.

Will the edge replace the cloud?

There's a lot of conversation about whether the edge will replace the cloud. It won't. But it probably will replace the cloud as the place where we run artificial intelligence. For example, instead of running powerful analytics models in one centralized space, you can run them at different points across the network. This will dramatically improve speed and performance, particularly for applications that run on artificial intelligence.

A more distributed world

Think of it this way: just as software has become more distributed in the last few years, thanks to the emergence of the edge, data itself is going to be more distributed. We'll have billions of pockets of activity, whether from consumers or industrial machines, each a locus of data generation.

Find out how to put the principles of edge analytics into practice: Azure IoT Development Cookbook

Digital twins

An emerging part of the edge computing and analytics trend is the concept of digital twins. This is, admittedly, still something in its infancy, but in 2019 it's likely that you'll be hearing a lot more about digital twins. A digital twin is a digital replica of a device that engineers and software architects can monitor, model, and test. For example, if you have a digital twin of a machine, you could run tests on it to better understand its points of failure. You could also investigate ways to make the machine more efficient. More importantly, a digital twin can be used to help engineers manage the relationship between the centralized cloud and systems at the edge - the digital twin is essentially a layer of abstraction that allows you to understand what's happening at the edge without needing to go into the detail of the system. For those of us working in data science, digital twins provide better clarity and visibility on how disconnected parts of a network interact. If we're going to make 2019 the year we use data more intelligently - maybe even more humanely - then this is precisely the sort of thing we need.
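As an illustration only, here is a minimal, hypothetical sketch of the digital twin idea in Python. The MachineTwin class, its fields, and the failure rule are all invented for this example; real digital twin platforms (Azure IoT Hub's device twins, for instance) expose far richer APIs for syncing reported and desired state.

```python
# A hypothetical, minimal digital twin: a cloud-side replica of an edge device
# that mirrors reported telemetry and can be queried or "tested" without
# touching the physical machine. All names and thresholds here are invented.
from dataclasses import dataclass, field

@dataclass
class MachineTwin:
    device_id: str
    reported: dict = field(default_factory=dict)   # last state reported from the edge
    desired: dict = field(default_factory=dict)    # configuration we want pushed down

    def update_from_edge(self, telemetry: dict) -> None:
        """Mirror the latest telemetry sent by the physical device."""
        self.reported.update(telemetry)

    def likely_to_fail(self) -> bool:
        """Run a simple what-if check against the replica instead of the real machine."""
        return (self.reported.get("temperature_c", 0) > 90
                or self.reported.get("vibration_mm_s", 0) > 7.0)

twin = MachineTwin(device_id="pump-42")
twin.update_from_edge({"temperature_c": 93, "vibration_mm_s": 4.2})
print(twin.likely_to_fail())  # True: we learn about the failure risk from the twin, not the pump
```

The design point is the layer of abstraction described above: analysis and testing happen against the replica in the cloud, while the physical device at the edge keeps doing its job.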
Interpretability, explainability, and ethics

Doing more with less might be one of the ongoing themes in data science and big data in 2019, but we can't ignore the fact that ethics and security will remain firmly on the agenda. It's easy to dismiss these issues as separate from the technical aspects of data mining, processing, and analytics, but they are, in fact, deeply integrated into them. One of the key facets of ethics here is a pair of related concepts: explainability and interpretability. The two terms are often used interchangeably, but there are some subtle differences. Explainability is the extent to which the inner workings of an algorithm can be explained in human terms, while interpretability is the extent to which one can understand the way in which it is working (eg. predict the outcome in a given situation). So, an algorithm can be interpretable, but you might not quite be able to explain why something is happening. (Think about this in the context of scientific research: sometimes scientists know that a thing is definitely happening, but they can't provide a clear explanation for why it is.)

Improving transparency and accountability

Either way, interpretability and explainability are important because they can help to improve transparency in machine learning and deep learning algorithms. In a world where deep learning algorithms are being applied to problems in areas from medicine to justice - where the problem of accountability is particularly fraught - this transparency isn't an option, it's essential. In practice, this means engineers must tweak the algorithm development process to make it easier for those outside the process to understand why certain things are happening and why they aren't. To a certain extent, this ultimately requires the data science world to take the scientific method more seriously than it has done. Rather than just aiming for accuracy (which is itself often open to contestation), the aim is to constantly manage the gap between what we're trying to achieve with an algorithm and how it actually goes about doing that.

You can learn the basics of building explainable machine learning models in the Getting Started with Machine Learning in Python video. A short illustrative sketch also appears at the end of this article.

Transparency and innovation must go hand in hand in 2019

So, there are two fundamental things for data science in 2019: improving efficiency and improving transparency. Although the two concepts might look like they conflict with each other, it's actually a bit of a false dichotomy. Had we realised that twelve months ago, we might have avoided many of the issues that have come to light this year. Transparency has to be a core consideration for anyone developing systems for analyzing and processing data. Without it, the work you're doing might be flawed or unnecessary, and you'll only need further iterations to rectify your mistakes or mitigate the impact of your biases. With this in mind, now is the time to learn the lessons of 2018's techlash. We need to commit to stopping the miserable conveyor belt of scandal and failure. Now is the time to find new ways to build better artificial intelligence systems.
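To ground the interpretability discussion above, here is a minimal sketch of one common (if basic) technique: inspecting the feature importances of a tree-based scikit-learn model. It is only an illustration of the idea, not a complete explainability toolkit; dedicated libraries such as LIME or SHAP go considerably further.

```python
# A minimal interpretability sketch: which inputs drive a model's predictions?
# Uses scikit-learn's built-in feature importances as a simple, rough signal.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(data.data, data.target)

# Rank features by how much the forest relied on them when splitting.
importances = sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in importances[:5]:
    print(f"{name}: {score:.3f}")  # the five most influential inputs
```

Being able to say which inputs a model leans on most heavily is a small but concrete step towards the kind of transparency the section above argues for.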
How IRA hacked American democracy using social media and meme warfare to promote disinformation and polarization: A new report to Senate Intelligence Committee

Natasha Mathur
18 Dec 2018
9 min read
A new report prepared for the Senate Intelligence Committee by the cybersecurity firm New Knowledge was released yesterday. The report, titled "The Tactics & Tropes of the Internet Research Agency", provides insight into how the IRA, a group of Russian agents, used and continues to use social media to influence politics in America by exploiting the political and racial divisions in American society.

"Throughout its multi-year effort, the Internet Research Agency exploited divisions in our society by leveraging vulnerabilities in our information ecosystem. We hope that our work has resulted in a clearer picture for policymakers, platforms, and the public alike and thank the Senate Select Committee on Intelligence for the opportunity to serve", says the report.

Russian interference during the 2016 presidential elections comprised Russian agents trying to hack online voting systems, cyber-attacks aimed at the Democratic National Committee, and social media influence tactics designed to exacerbate political and social divisions in the US. As part of the SSCI's investigation into the IRA's social media activities, some of the social platform companies misused by the IRA, such as Twitter, Facebook, and Alphabet, provided data related to the IRA's influence tactics. However, none of these platforms provided complete sets of related data to the SSCI. "Some of what was turned over was in PDF form; other datasets contained extensive duplicates. Each lacked core components that would have provided a fuller and more actionable picture. The data set provided to the SSCI for this analysis includes data previously unknown to the public... and... is the first comprehensive analysis by entities other than the social platforms", reads the report.

The report brings to light the IRA's strategy, which involved deciding on certain themes, primarily social issues, and then reinforcing these themes across its Facebook, Instagram, and YouTube content. Different topics, such as Black culture, anti-Clinton, pro-Trump, anti-refugee, Muslim culture, LGBT culture, Christian culture, feminism, veterans, ISIS, and so on, were grouped thematically on Facebook Pages and Instagram accounts to reinforce the culture and to foster feelings of pride. Here is a look at some key highlights from the report.

Key takeaways

IRA used Instagram as the biggest tool for influence

As per the report, Facebook executives, during the Congressional testimony held in April this year, hid the fact that Instagram played a major role in the IRA's influence operation. There were about 187 million engagements on Instagram compared to 76.5 million on Facebook and 73 million on Twitter, according to a data set of posts between 2015 and 2018. In 2017, the IRA moved much of its activity and influence operations to Instagram as the media started looking into its Facebook and Twitter operations. Instagram was the most effective platform for the Internet Research Agency: approximately 40% of its Instagram accounts achieved over 10,000 followers (a level referred to as "micro-influencers" by marketers) and twelve of these accounts had over 100,000 followers ("influencer" level).

Source: The Tactics & Tropes of the Internet Research Agency

"Instagram engagement outperformed Facebook, which may indicate its strength as a tool in image-centric memetic (meme) warfare. Our assessment is that Instagram is likely to be a key battleground on an ongoing basis," reads the report. Apart from social media posts, another feature of the IRA's Instagram activity was merchandise.
This merchandise promotion aimed to build partnerships, boost audience growth, and gather audience data. It was especially evident in the Black-targeted communities, with hashtags #supportblackbusiness and #buyblack appearing quite frequently. In fact, sometimes these IRA Pages also offered coupons in exchange for sharing content.

Source: The Tactics & Tropes of the Internet Research Agency

IRA promoted voter suppression operations

The report states that although Twitter and Facebook were still debating whether any voter suppression content was present on their platforms, three major variants of voter suppression narratives were found to be widespread on Twitter, Facebook, Instagram, and YouTube. These included malicious misdirection (eg. tweets promoting false voting rules), candidate support redirection, and turnout depression (eg. claims that there is no need to vote or that your vote doesn't matter).

Source: The Tactics & Tropes of the Internet Research Agency

For instance, a few days before the 2016 presidential elections in the US, the IRA started to implement voter suppression tactics on its Black-community-targeted accounts. It spread content about voter fraud and delivered warnings that the "election would be stolen and violence might be necessary". These suppression narratives were targeted almost exclusively at the Black community on Instagram and Facebook. There was also the promotion of other kinds of content, on topics such as alienation and violence, to divert people's attention away from politics. Other varieties of voter suppression narratives included: "don't vote, stay home", "this country is not for Black people", "these candidates don't care about Black people", etc. Voter suppression narratives aimed at non-Black communities focused primarily on promoting identity and pride for communities like Native Americans, LGBT+, and Muslims.

Source: The Tactics & Tropes of the Internet Research Agency

Then there were narratives that directly and broadly called for voting for candidates other than Hillary Clinton, and Facebook Pages that posted repeatedly about voter fraud, stolen elections, conspiracies about voting machines provided by Soros, and rigged votes.

IRA largely targeted Black American communities

The IRA's major efforts on Facebook and Instagram were targeted at Black communities in America and involved developing and recruiting Black Americans as assets. The report states that the IRA adopted a cross-platform "media mirage" strategy which shared authentic Black-related content to build a strong influence on the Black community over social media. An example presented in the report is a case study of "Black Matters", which illustrates the extent to which the IRA created an "inauthentic media property" by creating different accounts across the social platforms to "reinforce its brand" and widely distribute its content. "Using only the data from the Facebook Page posts and memes, we generated a map of the cross-linked properties – other accounts that the Pages shared from, or linked to – to highlight the complex web of IRA-run accounts designed to surround Black audiences," reads the report. So, an individual who followed or liked one of the Black-community-targeted IRA Pages would be exposed to content from a dozen or more other Pages.

Apart from the IRA's media mirage strategy, there was also a human asset recruitment strategy. It involved posts encouraging Americans to perform different types of tasks for IRA handlers.
Some of these tasks included requests for contact with preachers from Black churches, soliciting volunteers to hand out fliers, offering free self-defense classes (Black Fist/Fit Black), requests for speakers at protests, etc. These posts appeared in the Black-, Left-, and Right-targeted groups, although they were most prevalent in the Black groups and communities. "The IRA exploited the trust of their Page audiences to develop human assets, at least some of whom were not aware of the role they played. This tactic was substantially more pronounced on Black-targeted accounts", reads the report. The IRA also created domain names such as blackvswhite.info, blackmattersusa.com, blacktivist.info, blacktolive.org, and so on, as well as YouTube channels like "Cop Block US" and "Don't Shoot" to spread anti-Clinton videos.

In response to these reports of specific targeting of Black users on Facebook, the National Association for the Advancement of Colored People (NAACP) returned a donation from Facebook and yesterday called on users to log out of all Facebook-owned products, such as Facebook, Instagram, and WhatsApp, today. "NAACP remains concerned about the data breaches and numerous privacy mishaps that the tech giant has encountered in recent years, and is especially critical about those which occurred during the last presidential election campaign", reads the NAACP announcement.

IRA promoted pro-Trump and anti-Clinton operations

As per the report, the IRA focused on promoting pro-Donald Trump political content across different channels and Pages, regardless of whether those Pages targeted conservatives, liberals, or racial and ethnic groups.

Source: The Tactics & Tropes of the Internet Research Agency

On the other hand, large volumes of political content articulated anti-Hillary Clinton sentiments in both the Right- and Left-leaning communities created by the IRA. Moreover, there weren't any IRA communities or Pages on Instagram and Facebook that favored Clinton. There were some pro-Clinton Twitter posts; however, most of the tweets were still largely anti-Clinton.

Source: The Tactics & Tropes of the Internet Research Agency

Additionally, there were different YouTube channels created by the IRA, such as Williams & Kalvin, Cop Block US, and Don't Shoot; 25 videos across these channels contained election-related keywords in their titles, and all of them were anti-Hillary Clinton. One example presented in the report is of a political channel, Paul Jefferson, which solicited videos for a #PeeOnHillary video challenge (a hashtag that also appeared on Twitter and Instagram) and shared the submissions it received. Other videos promoted by these YouTube channels included "The truth about elections", "HILLARY RECEIVED $20,000 DONATION FROM KKK TOWARDS HER CAMPAIGN", and so on. Also, on the IRA's Facebook accounts, the post with the most shares and engagement was a conspiracy theory about President Barack Obama refusing to ban Sharia Law, encouraging Trump to take action.

Source: The Tactics & Tropes of the Internet Research Agency

The number one post on Facebook featuring Hillary Clinton was likewise a conspiratorial post, made public a month before the election.

Source: The Tactics & Tropes of the Internet Research Agency

These were some of the major highlights from the report. However, the report states that there is still a lot to be done with regard to the IRA specifically. There is a need for further investigation of subscription and engagement pathways, and only the social media platforms currently have that data.
The New Knowledge team hopes that these platforms will provide more data that can speak to the impact among the targeted communities. For more information on the IRA's tactics, read the full report here.

Facebook, Twitter takes down hundreds of fake accounts with ties to Russia and Iran, suspected to influence the US midterm elections
Facebook plans to change its algorithm to demote "borderline content" that promotes misinformation and hate speech on the platform
Facebook's outgoing Head of communications and policy takes the blame for hiring PR firm 'Definers' and reveals more
Troll Patrol Report: Amnesty International and Element AI use machine learning to understand online abuse against women

Sugandha Lahoti
18 Dec 2018
5 min read
Amnesty International has partnered with Element AI to release a Troll Patrol report on online abuse against women on Twitter. The study was part of their Troll Patrol project, which invites human rights researchers, technical experts, and online volunteers to build a crowd-sourced dataset of online abuse against women.

https://twitter.com/amnesty/status/1074946094633836544

Abuse of women on social media websites has been rising at an unprecedented rate. Social media companies have a responsibility to respect human rights and to ensure that women using their platforms are able to express themselves freely and without fear. However, this has not been the case with Twitter, and Amnesty has unearthed some troubling findings.

Amnesty's methodology was powered by machine learning

Amnesty and Element AI studied tweets mentioning 778 journalists and politicians from the UK and US throughout 2017 and then used machine learning techniques to qualitatively analyze abuse against women. The first step was to design a large, unbiased dataset of tweets mentioning the 778 women politicians and journalists. Next, over 6,500 volunteers (aged between 18 and 70 and from over 150 countries) analyzed 288,000 unique tweets to create a labeled dataset of abusive or problematic content. The labeling was based on simple questions, such as whether the tweets were abusive or problematic, and if so, whether they revealed misogynistic, homophobic, or racist abuse or other types of violence. Three experts also categorized a sample of 1,000 tweets to assess the quality of the labels produced by the digital volunteers. Element AI then used data science techniques, specifically a subset of the Decoders' and experts' categorization of the tweets, to extrapolate the abuse analysis.

Key findings from the report

Per the findings of the Troll Patrol report, 7.1% of tweets sent to the women in the study were "problematic" or "abusive". This amounts to 1.1 million tweets mentioning the 778 women across the year, or one every 30 seconds. Women of color (Black, Asian, Latinx, and mixed-race women) were 34% more likely to be mentioned in abusive or problematic tweets than white women. Black women were disproportionately targeted, being 84% more likely than white women to be mentioned in abusive or problematic tweets.

Source: Amnesty

Women from across the political spectrum faced similar levels of online abuse: liberals and conservatives alike, as well as left- and right-leaning media organizations, were targeted.

Source: Amnesty

What does this mean for people in tech

Social media organizations are repeatedly failing in their responsibility to protect women's rights online. They fall short of adequately investigating and responding to reports of violence and abuse in a transparent manner, which leads many women to silence or censor themselves on the platform. Such abuse also hinders freedom of expression online and undermines women's mobilization for equality and justice, particularly for groups who already face discrimination and marginalization.

What can tech platforms do?

One of the recommendations of the report is that social media platforms should publicly share comprehensive and meaningful information about reports of violence and abuse against women, as well as other groups, on their platforms. They should also explain in detail how they are responding to it.
Although Twitter and other platforms are using machine learning for content moderation and flagging, they should be transparent about the algorithms they use. They should publish information about training data, methodologies, moderation policies, and technical trade-offs (such as the trade-off between precision and recall) for public scrutiny. Machine learning automation should ideally be part of a larger content moderation system characterized by human judgment, greater transparency, rights of appeal, and other safeguards. A short illustrative sketch of such a moderation pipeline appears at the end of this article.

Amnesty, in collaboration with Element AI, also developed a machine learning model to better understand the potential and risks of using machine learning in content moderation systems. This model was able to achieve results comparable to their digital volunteers at predicting abuse, although it is "far from perfect still", Amnesty notes. It achieves about a 50% accuracy level when compared to the judgment of experts: it correctly identified 2 in every 14 tweets as abusive or problematic, whereas experts identified 1 in every 14 tweets as abusive or problematic.

"Troll Patrol isn't about policing Twitter or forcing it to remove content. We are asking it to be more transparent, and we hope that the findings from Troll Patrol will compel it to make that change. Crucially, Twitter must start being transparent about how exactly they are using machine learning to detect abuse, and publish technical information about the algorithms they rely on", said Milena Marin, senior advisor for tactical research at Amnesty International.

Read more: The full list of Amnesty's recommendations to Twitter.

People on Twitter (the irony) are shocked at the release of Amnesty's report, and #ToxicTwitter is trending.

https://twitter.com/gregorystorer/status/1074959864458178561
https://twitter.com/blimundaseyes/status/1074954027287396354
https://twitter.com/MikeWLink/status/1074500992266354688
https://twitter.com/BethRigby/status/1074949593438265344

Check out the full Troll Patrol report on Amnesty. Also, check out their machine learning based methodology in detail.

Amnesty International takes on Google over Chinese censored search engine, Project Dragonfly.
Twitter CEO, Jack Dorsey slammed by users after a photo of him holding 'smash Brahminical patriarchy' poster went viral
Twitter plans to disable the 'like' button to promote healthy conversations; should retweet be removed instead?
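To make the precision/recall trade-off discussed above concrete, here is a minimal, hypothetical sketch of the kind of text-classification pipeline used in abuse detection. It is not Amnesty's or Element AI's actual model; the tiny inline dataset and the scikit-learn components are chosen purely for illustration.

```python
# A hypothetical, minimal abuse-detection classifier to illustrate the
# precision/recall trade-off in content moderation. Not Amnesty's or
# Element AI's model; the tiny dataset below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples: 1 = abusive/problematic, 0 = fine.
tweets = [
    "you are a disgrace, get off the internet",
    "great reporting on the election results",
    "nobody wants to hear from someone like you",
    "thanks for sharing this, really insightful",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

# Scores are probabilities; where you set the threshold decides the trade-off:
# a low threshold catches more abuse (higher recall) but flags more innocent
# tweets (lower precision), and vice versa.
new_tweets = ["you are brilliant", "get off the internet"]
probabilities = model.predict_proba(new_tweets)[:, 1]
for text, p in zip(new_tweets, probabilities):
    flagged = p > 0.5  # illustrative threshold
    print(f"{text!r}: abuse score {p:.2f}, flagged={flagged}")
```

Publishing exactly this kind of detail, which features, which thresholds, and which trade-offs a platform has chosen, is what the report's transparency recommendation amounts to in practice.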
Python governance vote results are here: The steering council model is the winner

Prasad Ramesh
18 Dec 2018
3 min read
The election to select a governance model for Python, following Guido van Rossum's stepping down as BDFL earlier this year, has ended, and PEP 8016 was selected as the winner. PEP 8016 is the steering council model, which focuses on providing a minimal and solid foundation for governance decisions. The vote has chosen a governance PEP that will now be implemented on the Python project.

The winner: PEP 8016, the steering council model

Authored by Nathaniel J. Smith and Donald Stufft, this proposal describes a model for Python governance based on a steering council. The council has vast authority, which it intends to use as rarely as possible; instead, it plans to use this power to establish standard processes. The steering council consists of five people. A general philosophy is followed: it's better to split up large changes into a series of small changes that can be reviewed independently. As opposed to trying to do everything in one PEP, the focus is on providing a minimal and solid foundation for future governance decisions. The PEP was accepted on December 17, 2018.

Goals of the steering council model

The main goals of this proposal are:

Sticking to the basics, aka 'be boring'. The authors don't think Python is a good place to experiment with new and untested governance models. Hence, this proposal sticks to mature, well-known processes that have been tested previously. A high-level approach, in which the council stays out of most day-to-day decisions, is very common in large, successful open source projects. The low-level details are directly derived from Django's governance.

Being simple enough for 'minimum viable governance'. The proposal attempts to slim things down to the minimum required, just enough to make it workable. The trimming covers the council, the core team, and the process for changing the governance document itself.

'Be comprehensive'. The things that need to be defined are covered well for future use. Having a clear set of rules will also help minimize confusion.

'Be flexible and light-weight'. The authors are aware that finding the best processes for working together will take time and experimentation. Hence, they keep the document as minimal as possible, for maximal flexibility to adjust things later. The need for heavy-weight processes like whole-project votes is also minimized.

The council will work towards maintaining the quality and stability of the Python language and the CPython interpreter, making the contribution process easy, maintaining relations with the core team, establishing a decision-making process for PEPs, and so on. It has the power to make decisions on PEPs, enforce the project code of conduct, and more.

To know more about the election to the committee, visit the Python website.

NumPy drops Python 2 support. Now you need Python 3.5 or later.
NYU and AWS introduce Deep Graph Library (DGL), a python package to build neural network graphs
Python 3.7.2rc1 and 3.6.8rc1 released
MIPS open sourced under ‘MIPS Open Program’, makes the semiconductor space and SoC, ones to watch for in 2019

Melisha Dsouza
18 Dec 2018
4 min read
On 17th December, Wave Computing announced that it will open source MIPS, with the MIPS Instruction Set Architecture (ISA) and MIPS' latest core, R6, to be made available in the first quarter of 2019. With a vision to "accelerate the ability for semiconductor companies, developers and universities to adopt and innovate using MIPS for next-generation system-on-chip (SoC) designs", Wave Computing's MIPS Open program will give participants full access to the most recent versions of the 32-bit and 64-bit MIPS ISA free of charge, without any licensing or royalty fees. Additionally, participants in the MIPS Open program will be licensed under MIPS' existing worldwide patents.

Addressing the "lack of open source access to true industry-standard, patent-protected and silicon-proven RISC architectures", Art Swift, president of Wave Computing's MIPS IP Business, claims that MIPS will bring to the open source community "commercial-ready" instruction sets with "industrial-strength" architecture, where "Chip designers will have opportunities to design their own cores based on proven and well-tested instruction sets for any purposes."

Lee Flanagin, Wave's senior vice president and chief business officer, further added in the post that the MIPS Open initiative is a key part of Wave's 'AI for All' vision. He says, "The MIPS-based solutions developed under MIPS Open will complement our existing and future MIPS IP cores that Wave will continue to create and license globally as part of our overall portfolio of systems, solutions and IP. This will ensure current and new MIPS customers will have a broad array of solutions from which to choose for their SoC designs, and will also have access to a vibrant MIPS development community and ecosystem."

The MIPS Open initiative will further encourage the adoption of MIPS while helping customers develop new, MIPS-compatible solutions for a variety of emerging market applications, with support from third-party tool vendors, software developers, and universities.

RISC-V versus MIPS?

Considering that the RISC-V instruction set architecture is also free and open for anyone to use, the internet went abuzz with speculation about competition between RISC-V and MIPS and the potential future of both. Hacker News saw comments like: "Had this happened two or three years ago, RISC-V would have never been born."

In an interview with EE Times, Rupert Baines, CEO of UltraSoC, said, "Given RISC-V's momentum, MIPS going open source is an interesting, shrewd move." He observed, "MIPS already has a host of quality tools and software environment. This is a smart way to amplify MIPS' own advantage, without losing much."

Linley Gwennap, principal analyst at the Linley Group, compared the two and stated, "The MIPS ISA is more complete than RISC-V. For example, it includes DSP and SIMD extensions, which are still in committee for RISC-V." Calling the MIPS software development tools more mature than RISC-V's, he went on to list the benefits of MIPS over RISC-V: "MIPS also provides patent protection and a central authority to avoid ISA fragmentation, both of which RISC-V lacks. These factors give MIPS an advantage for commercial implementations, particularly for customer-facing cores."

Hacker News and Twitter are bustling with comments on this move by Wave Computing.
Opinions are split over which architecture is preferable. For the most part, customers appear excited about this news.

https://twitter.com/corkmork/status/1074857920293027840
https://twitter.com/plessl/status/1074778310025076736

You can head over to Wave Computing's official blog to learn more about this announcement.

The Linux and RISC-V foundations team up to drive open source development and adoption of RISC-V instruction set architecture (ISA)
Arm releases free Cortex-M processor cores for FPGAs, includes measures to combat FOSSi threat
SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and ARM chipsets
NeurIPS 2018: How machine learning experts can work with policymakers to make good tech decisions [Invited Talk]

Bhagyashree R
18 Dec 2018
6 min read
At the 32nd annual NeurIPS conference held earlier this month, Edward William Felten, a professor of computer science and public affairs at Princeton University, spoke about how decision makers and tech experts can work together to make better policies. The talk was aimed at answering questions such as why public policy should matter to AI researchers, what role researchers can play in policy debates, and how researchers can help bridge divides between the research and policy communities.

While AI and machine learning are being used in high-impact areas and have seen heavy adoption in every field, in recent years they have also gained a lot of attention from policymakers. Technology has become a huge topic of discussion among policymakers mainly because of its cases of failure and how it is being used or misused. They have now started formulating laws and regulations, and holding discussions about how society will govern the development of these technologies. Prof. Felten explained how constructive engagement with policymakers will lead to better outcomes for technology, government, and society.

Why should tech be regulated?

Regulating tech is important, and for that, researchers, data scientists, and other people in tech fields have to close the gap between their research labs, cubicles, and society. Prof. Felten emphasizes that it is up to people in tech to bridge this gap, as we not only have the opportunity but also a duty to be more active and productive participants in public life.

Many people are coming to the conclusion that tech should be regulated before it is too late. In a piece published by the Wall Street Journal, three experts debated whether the government should regulate AI. One of them, Ryan Calo, explains, "One of the ironies of artificial intelligence is that proponents often make two contradictory claims. They say AI is going to change everything, but there should be no changes to the law or legal institutions in response."

Prof. Felten points out that laws and policies are meant to change in order to adapt to current conditions. They are not written once and for all for the cases of today and the future; rather, law is a living system that adapts to what is going on in society. And if we believe that technology is going to change everything, we can expect that law will change too.

Prof. Felten also said that not only tech researchers and policymakers but society as a whole should have some say in how technology is developed: "After all the people who are affected by the change that we are going to cause deserve some say in how that change happens, how it is used. If we believe in a society which is fundamentally democratic in which everyone has a stake and everyone has a voice then it is only fair that those lives we are going to change have some say in how that change come about and what kind of changes are going to happen and which are not."

How experts can work with decision makers to make good tech decisions

There are three key approaches we can take to engage with policymakers in decisions about technology:

Engage in a two-way dialogue with policymakers

As researchers, we might think that we are tech experts or scientists who do not need to get involved in politics; we just need to share the facts we know and our job is done. But if researchers really want to maximize their impact in policy debates, they need to combine the knowledge and preferences of policymakers with their own knowledge and preferences.
This means they need to take into account what policymakers might already have heard about a particular subject and the issues or approaches that resonate with them. Prof. Felten explains that this type of understanding and exchange of ideas happens in two stages. Researchers need to ask policymakers several questions; this is not a one-time thing, but a multi-round protocol. They have to go back and forth with the person, building engagement and mutual trust over time. Then they need to put themselves in the shoes of a decision maker and understand how to structure the decision space for them.

Be present in the room when decisions are being made

To have influence on the decisions that get made, researchers need to have "boots on the ground." Though not everyone has to engage in this deep, long-term process of decision making, we need some people from the community to engage on behalf of the community. Researchers need to be present in the room when decisions are being made. This means taking posts as advisers or civil servants. We already have a range of such posts at both local and national government levels, alongside a range of opportunities to engage less formally in policy development and consultations.

Create a career path that rewards policy engagement

To drive this engagement, we need to create a career path which rewards policy engagement. We should have a way for researchers to move between policy and research careers. Prof. Felten pointed to a range of US-based initiatives that seek to bring those with technical expertise into policy-oriented roles, such as the US Digital Service. He adds that if we do not create these career paths, and if this becomes something that people can do only by sacrificing their careers, then very few people will do it. This needs to be an activity that we learn to respect when people in the community do it well. We need to build incentives, whether career incentives in academia or an understanding that working in government or on policy issues is a valuable part of an academic career, and not think of it as a detour or a dead end.

To watch the full talk, check out the NeurIPS Facebook page.

NeurIPS 2018: Rethinking transparency and accountability in machine learning
NeurIPS 2018: Developments in machine learning through the lens of Counterfactual Inference [Tutorial]
Accountability and algorithmic bias: Why diversity and inclusion matters [NeurIPS Invited Talk]