
Tech News

3709 Articles

Red Hat releases Red Hat Enterprise Linux 8 beta; deprecates Btrfs filesystem

Sugandha Lahoti
16 Nov 2018
3 min read
Earlier this month, Red Hat released RHEL 7.6. Now, the Red Hat Enterprise Linux (RHEL) 8 beta is available, with more container friendliness than ever. This release is based on Fedora 28, the Red Hat community Linux release from May 2018, and uses the upstream Linux kernel 4.18 as its foundation.

RHEL 8 beta introduces the concept of Application Streams. With this, userspace components can update more quickly than core operating system packages, without having to wait for the next major version of the operating system. Application Streams also let you keep multiple versions of the same package around.

RHEL 8 beta features

RHEL 8 beta introduces a single, consistent control panel through the RHEL Web Console, with which systems admins of all experience levels can easily manage RHEL servers locally and remotely, including virtual machines.

RHEL 8 beta uses IPVLAN to support efficient Linux networking in containers by connecting containers nested in virtual machines (VMs) to networking hosts. It also has a new TCP/IP stack with Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control, which increases performance and minimizes latency for services like streaming video or hosted storage.

RHEL 8 is made more secure with OpenSSL 1.1.1 and TLS 1.3 support and system-wide cryptographic policies. Red Hat's lightweight, open standards-based container toolkit comes with Buildah (container building), Podman (running containers), and Skopeo (sharing/finding containers).

RPM's YUM package manager has also been updated. Yum 4 delivers faster performance, fewer installed dependencies, and more choice of package versions to meet specific workload requirements.

File Systems in RHEL 8 beta

Red Hat has deprecated the Btrfs filesystem. This has confused developers, who wonder why Red Hat would opt out of it, especially considering that it is also used for ChromeOS's Crostini Linux application container. From Hacker News:

"I'm still incredibly sad about that, especially as Btrfs has become a really solid filesystem over the last year or so in the upstream kernel."

"Indeed, Btrfs is uniquely capable and important. It has lightweight snapshots of directory trees, and fully supports NFS exports and kernel namespaces, so it can easily solve technical problems that currently can't be easily solved using ZFS or other filesystems."

Stratis is the new volume-managing file system in RHEL 8 beta. Stratis abstracts away the complexities inherent to data management via an API. File System Snapshots also provide a faster way of conducting file-level tasks, like cloning virtual machines, while saving space by consuming new storage only when data changes.

Existing customers and subscribers can test Red Hat Enterprise Linux 8 beta. You can also view the README file for instructions on how to download and install the software.

RedHat shares what to expect from next week's first-ever DNSSEC root key rollover
Soon, RHEL (Red Hat Enterprise Linux) won't support KDE
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available


Fastly open sources Lucet, a native WebAssembly compiler and runtime

Bhagyashree R
29 Mar 2019
2 min read
Yesterday, Fastly, a US-based cloud computing service provider, open-sourced its native WebAssembly compiler and runtime, Lucet. Lucet is built on top of Cranelift, Mozilla's low-level retargetable code generator. It already powers Fastly's Terrarium project, their experimental platform for edge computation using WebAssembly, and now it is coming to their edge cloud platform as well.

How does Lucet work?

Lucet splits the responsibility of executing WebAssembly programs between two components: a compiler, which compiles WebAssembly modules to native code, and a runtime, which manages resources and traps runtime faults. Because it uses an ahead-of-time compilation strategy, it simplifies the design and overhead of the runtime compared to the just-in-time (JIT) compilation that browser engines use.

What are its advantages?

Faster and safer execution of WebAssembly programs: WebAssembly allows web browsers to safely execute programs with near-native performance. It is supported by the most commonly used browsers, including Google Chrome, Mozilla Firefox, and Safari. With Lucet, Fastly aims to take WebAssembly "beyond the browser" by providing users a platform for faster and safer execution of programs on Fastly's edge cloud.

More languages to choose from: Since WebAssembly is supported by an impressive list of programming languages, including Rust, TypeScript, C, and C++, Lucet users will be able to work with the language they prefer. They are no longer restricted to Fastly's Varnish Configuration Language (VCL).

Simultaneous execution of programs: The Lucet compiler and runtime ensure that each WebAssembly program is allocated its own resources. This enables Fastly's edge cloud to simultaneously execute a large number of WebAssembly programs without compromising on security.

Support for the WebAssembly System Interface (WASI): Lucet supports WASI, an API that provides access to various operating-system-like features, including files and filesystems, Berkeley sockets, clocks, and random numbers.

At the moment, Lucet supports running WebAssembly programs written in C, Rust, and AssemblyScript, and its runtime only supports x86-64-based Linux systems. To read the official announcement, visit Fastly's official website.

Introducing CT-Wasm, a type-driven extension to WebAssembly for secure, in-browser cryptography
Creating and loading a WebAssembly module with Emscripten's glue code [Tutorial]
The elements of WebAssembly – Wat and Wasm, explained [Tutorial]


Australia’s Assistance and Access (A&A) bill, popularly known as the anti-encryption law, opposed by many including the tech community

Savia Lobo
10 Dec 2018
6 min read
Last week, Australia's Assistance and Access (A&A) anti-encryption law was passed through Parliament, giving Australian police and government agencies the power to issue technical notices. The A&A law requires tech companies to help law enforcement agencies break into individuals' encrypted data. Using secret warrants, the government can even compel a company to serve malware remotely to a target's device.

The Labor party, which had planned to amend the legislation, later pulled its amendments in the Senate, and the bill was passed even though Labor itself considered it flawed. The Australian Human Rights Commission wrote to Parliament, "The definition of 'acts or things' in the Bill is so vague as to potentially permit almost limitless forms of assistance". Several lawmakers had looked set to reject the bill, criticizing the government's efforts to rush it through before the holiday.

The anti-encryption bill has been slammed by many. ProtonMail, a Swiss-based end-to-end email encryption company, has also condemned the new law in a blog post, saying that it will remain committed to protecting its users anywhere in the world, including in Australia.

Protonmail against the Assistance and Access (A&A) law

Although ProtonMail has data centers only in Switzerland and is not under Australian jurisdiction, any request for assistance from Australian agencies under the A&A law would need to pass the scrutiny of Switzerland's criminal procedure and data protection laws. According to ProtonMail, "just because this particular law does not affect ProtonMail and ProtonVPN does not mean we are indifferent. A&A is one of the most significant attacks on digital security and privacy since the NSA's PRISM program. But the Australian measure is more brazen, hastily forced through Parliament over the loud objections of every sector of society, from businesses to lawyers groups."

In a letter to Parliament, the Australian Computer Society, a trade association for IT professionals, outlined several problems in the law, including:

- Not every company has the technical know-how to safely implement malware that won't accidentally backdoor the entire product (particularly with IoT devices), putting the security of people's homes and organizations at risk.
- Businesses can't easily plan or budget for possible covert surveillance work with the government.
- A companion "explanatory document" outlines some safeguards to protect civil rights and privacy that don't actually appear in the law itself.
- Once police have gained access to a suspect's device, they could easily remove evidence from the device that could prove the person's innocence. There would be no way to know.

These are just a few of the issues, and that's barely scratching the surface. According to ProtonMail, "the widespread use of encryption can actually further governments' national security goals. It is critical that we strike the right balance. In our opinion, the A&A law does not do this, and in the long run, will make us all less safe." To know more about this in detail, visit ProtonMail's official blog post.

The tech community also opposes the Australian bill in an open letter

The tech community wrote an open letter, titled "You bunch of Idiots!", to Bill Shorten and the Australian Labor party. They mention, "Every tech expert agrees that the so-called "Assistance and Access Bill" will do significant damage to Australia's IT industry." The letter highlights three key points.

The community members state that the law weakens security for users. "We do not want to deliberately build backdoors or make our products insecure. This means everyone else's data will be vulnerable. People have an expectation that we protect their personal data to the best of our ability. We cannot continue to guarantee this unless we go against the technical capability notices issued by law enforcement - which will become a criminal offence", according to the letter.

They also said, "You have made it harder for international companies to hire Australian talent, or have offices in Australia filled with Australian talent. Companies such as Amazon, Apple, Atlassian, Microsoft, Slack, Zendesk and others now have to view their Australian staff and teams as "potentially compromised". This is because law enforcement can force a person to build a backdoor and they cannot tell their bosses. They might sack them and leave Australia because of the law you just passed."

"You have also just made it almost impossible to export Australian tech services because no-one wants a potentially vulnerable system that might contain a backdoor. Who in their right mind will buy a product like that? Look at the stock price of one of Australia's largest tech companies, Atlassian. It's down because of what you have voted for. In addition, because it violates the EU's General Data Protection Regulations (GDPR), you have just locked Australian companies and startups out of a huge market."

The tech community strongly opposed the bill, calling it a destructive and short-sighted law. They said, "In all good conscience, we can no longer support Labor. We will be advocating for people to choose those who protect digital rights."

The 'blackout' move on GitHub to block Australia for everyone's safety

After the bill was passed, many Australian users suggested that the world block Australia for everyone's safety. Following this, users have created a repository on GitHub that provides easy-to-use solutions to black out Australia, in solidarity with Australians who oppose the Assistance and Access Bill. On GNU/Linux systems, the main script periodically downloads a blocklist and updates the rules in a dedicated BLACKOUT chain in iptables. The repo also includes scripts to set up the dedicated BLACKOUT chain in the iptables filter table and a privileged cron job for updating the iptables rules, as well as to stop any running cron job, remove it, and tear down the BLACKOUT chain. (A rough sketch of this mechanism appears at the end of this article.)

Australia's ACCC publishes a preliminary report recommending Google, Facebook be regulated and monitored for discriminatory and anti-competitive behavior
Australia's Facial recognition and identity system can have "chilling effect on freedoms of political discussion, the right to protest and the right to dissent": The Guardian report
Dark Web Phishing Kits: Cheap, plentiful and ready to trick you
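To make the blackout repo's mechanism concrete, here is a rough Python sketch of the approach described above. This is illustrative only, not the repo's actual shell scripts: the blocklist URL and function names are hypothetical, and it assumes root privileges and a blocklist of CIDR ranges, one per line.

# Illustrative sketch only -- not the actual scripts from the GitHub repo.
# Assumes root privileges; BLOCKLIST_URL is a hypothetical placeholder.
import subprocess
import urllib.request

BLOCKLIST_URL = "https://example.com/au-blocklist.txt"  # hypothetical

def iptables(*args):
    subprocess.run(["iptables", *args], check=True)

def setup_chain():
    iptables("-N", "BLACKOUT")                 # create the dedicated chain
    iptables("-I", "INPUT", "-j", "BLACKOUT")  # route inbound traffic through it

def update_rules():
    with urllib.request.urlopen(BLOCKLIST_URL) as resp:
        cidrs = resp.read().decode().split()
    iptables("-F", "BLACKOUT")                 # flush stale rules
    for cidr in cidrs:
        iptables("-A", "BLACKOUT", "-s", cidr, "-j", "DROP")

def teardown():
    iptables("-D", "INPUT", "-j", "BLACKOUT")  # detach the chain
    iptables("-F", "BLACKOUT")                 # empty it
    iptables("-X", "BLACKOUT")                 # delete it

if __name__ == "__main__":
    setup_chain()
    update_rules()  # in practice, re-run periodically from a cron job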


Why scepticism is important in computer security: Watch James Mickens at USENIX 2018 argue for thinking over blindly shipping code

Melisha Dsouza
21 Nov 2018
6 min read
"Technology, in general, and computer science in particular, have been hyped up to such an extreme level that we've ignored the importance of not only security but broader notions of ethical computing." -James Mickens We like to think that things are going to get better. That, after all, is why we get up in the morning and go to work, in the hope that we might just be making a difference, that we’re working towards something. That’s certainly true across the technology landscape. And in cybersecurity in particular, the belief that you’re building a more secure world - even if it’s on a small scale - is an energizing and motivating thought. However, at this year’s USENIX Conference back in August, Harvard Professor James Mickens attempted to put that belief to rest. His talk - titled ‘Why Do Keynote Speakers Keep Suggesting That Improving Security Is Possible?’ - was an argument for scepticism in a field that is by nature optimistic (not least when it has a solution to sell). So, what exactly does Mickens have against keynote speakers? Quite a lot, actually: he jokingly calls them people who have made bad life decisions aand poorrole models. Although his tongue is firmly in his cheek, he does have a number of serious points. Fundamentally, he suggests developers do not invest time in questioning anything since any degree ofintrospection would “reduce the frequency of git commits”. Mickens argument is essentially thatsoftware developers are deploying new systems without a robust understanding of those systems. Why machine learning highlights the problem with computer science today Mickens stresses that such is the hype and optimism around modern technology and computer science  that the field has largely forgotten the value of scepticism. In turn, this can be dangerous for issues such as security and ethics. Take Machine Learning for instance. Machine learning is, Mickens sayss  “the oxygen that Silicon Valley is trying to force into our lungs.” It’s everywhere, we seem to need it - but it’s also being forced on us, almost blindly, Using the example of machine learning he illustrates his point about domain knowledge: Computer scientists do not have a deep understanding of the mathematics used in machine learning systems. There is no reason or incentive for computer scientists to even invest their time in learning those things. This lack of knowledge means ethical issues and security issues that may be hidden at a conceptual level - not a technical one - are simply ignored. Mickens compares machine learning to the standard experiment used in America since 8th grade: the egg drop experiment. This is where students desperately search for a solution to prevent the egg from breaking when dropped from 20 feet in the air. When they finally come up with a technique that is successful, Mickens explains, they don’t really care to understand the logic/math behind it. This is exactly the same as developers in the context of machine learning. Machine learning is complex, yes, but often, Mickens argues, developers will have no understanding as to why models generate a particular output on being provided with a specific input. When this inscrutable AI used in models connected with real life mission critical systems (financial markets, healthcare systems, news systems etc) and the internet, security issues arise. Indeed, it begins to raise even more questions than provide answers. 
Now that AI is practically used everywhere - even to detect anomalies in cybersecurity, it is somewhat scary that a technology which is so unpredictable can be used to protect our systems. Examples of poor machine learning design Some of the examples James presented that caught our attention were: Microsoft chatbot Tay- Tay was originally intended to learn language by interacting with humans on Twitter. That sounds all good and very noble - until you realise that given the level of toxic discourse on Twitter, your chatbot will quickly turn into a raving Nazi with zero awareness it is doing so.  Machine learning used for risk assessment and criminal justice systems have incorrectly labelled Black defendants as “high risk” -  at twice the rate of white defendants. It’s time for a more holistic approach to cybersecurity Mickens further adds that we need a more holistic perspective when it comes to security. To do this,, developers should ask themselves not only if a malicious actor can perform illicit actions on a system,  but also should a particular action on a system be possible and how can the action achieve societally-beneficial outcomes. He says developers have 3 major assumptions  while deploying a new technology: #1 Technology is Value-Neutral, and will therefore automatically lead to good outcomes for everyone #2 New kinds of technology should be deployed as quickly as possible, even if we lack a general idea of how the technology works, or what the societal impact will be #3 History is generally uninteresting, because the past has nothing to teach us According to Mickens developers assume way too much.  In his assessment, those of us working in the industry take it for granted that technology will always lead to good outcomes for everyone. This optimism goes hand in hand with a need for speed - in turn, this can lead us to miss important risk assessments, security testing, and a broader view on the impact of technology not just on individual users but wider society too. Most importantly, for Mickens, is that we are failing to learn from mistakes. In particular, he focuses on IoT security. Here, Mickens points out, security experts are failing to learn lessons from traditional network security issues. The Harvard Professor has written extensively on this topic - you can go through his paperon IoT security here. Perhaps Mickens talk was intentionally provocative, but there are certainly lessons - if 2018 has taught us anything, it’s that a dose of scepticism is healthy where tech is concerned. And maybe it’s time to take a critical eye to the software we build. If the work we do is to actually matter and make a difference, maybe a little negative is a good thing. What do you think? Was Mickens assessment of the tech world correct? You can watch James Mickens whole talk at Youtube UN on Web Summit 2018: How we can create a safe and beneficial digital future for all 5 ways artificial intelligence is upgrading software engineering “ChromeOS is ready for web development” – A talk by Dan Dascalescu at the Chrome Web Summit 2018


WordPress 5.0 (Bebo) released with improvements in design, theme and more

Amrata Joshi
07 Dec 2018
4 min read
Yesterday, the team at WordPress released WordPress 5.0, also known as Bebo, aiming to give users a seamless experience in building a website, revamping a blog, or writing code.

Major improvements in WordPress 5.0

Blocks make it easier to work with WordPress: The new release lets users insert any type of multimedia in a snap and rearrange it to fit the content. Content pieces are arranged in blocks, which makes the process of uploading easy.

Design: WordPress 5.0 brings improvements to design and content. While building client sites, users can create reusable blocks, which let clients add new content anytime.

Theme: WordPress 5.0 comes with the new Twenty Nineteen theme, which features custom styles for the blocks; the editor styles are used in the theme to enhance it. Twenty Nineteen features ample whitespace and modern sans-serif headlines along with serif body text. It also uses system fonts to increase loading speed. Twenty Nineteen can work for a wide variety of use cases, be it a photo blog, launching a new business, or supporting a non-profit cause.

Classic editor plugin: The classic editor plugin is very useful, as it restores the previous WordPress editor and the Edit Post screen.

WordPress 5.0 for developers: Blocks let users change content directly, while ensuring that the content structure doesn't get disturbed by accidental code edits. This lets developers control the output and build polished, semantic markup that is preserved through edits. WordPress 5.0 offers a wide collection of APIs and interface components for creating blocks with intuitive controls for clients. These APIs speed up development work while providing a usable, consistent, and accessible interface to all users.

WordPress 5.0 is named Bebo in homage to the pioneering Cuban jazz musician Bebo Valdés.

This new release has received some negative reactions from users. Some think that blocks won't make things easier but will make the entire process more complicated. The release has also taken heat for its timing: with Christmas around the corner, retailers won't have time to train their staff on the new editor, and developers will have to fix client sites broken by the new editor on short notice, which might well create chaos. One user said, "Okay, I've now tested it on my main site, and I can definitely confirm that it's not a good fit for blog posts/news articles. Took me forever to post a simple 300 word article, in part because of all the random spaces it kept removing when I copied in paragraphs from my text editor."

https://twitter.com/MyFunkyTravel/status/1070848742738276352
https://twitter.com/anandnU/status/1070947019735425025
https://twitter.com/niall_flynn/status/1070762641700937728

The new editor is also causing trouble for existing sites and breaking them. A few businesses have planned to move away from WordPress, as they do not find the change convincing. Users are also unhappy with the UI.

Gutenberg: A disappointment?

Last month's Gutenberg release was met with disappointment, and many ended up uninstalling it, the major issue being the lack of Markdown support. Usually, before posting an article, a user writes it in Google Docs or Microsoft Word and then copies it into WordPress. Gutenberg makes copy-pasting content difficult, as users must create blocks over and over given that every element is treated as a block. Also, it still sits somewhere between a post editor and a site-builder plugin. One has to rewrite everything in Gutenberg, as the blocks are complex. It could work best for large publishers who are comfortable with complicated layouts. Those working with HTML and CSS might find the jump to Gutenberg, which is based on JavaScript and the React framework, very complicated. The idea of Gutenberg getting integrated with Core won't be accomplished anytime soon, as a lot of documentation and work is still pending.

https://twitter.com/_l3m35_/status/1070768052202033159

But there is still hope for Gutenberg, as the page-builder market might appreciate the effort that has gone into the editor. It could work well for those aiming for static content. Read more about this news on WordPress.

Introduction to WordPress Plugin
WordPress as a Web Application Framework
WordPress Management with WP-CLI


Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias

Bhagyashree R
11 Mar 2019
3 min read
Last week, when Janelle Shane, a research scientist in optics, fed the famous rabbit-duck illusion to the Google Cloud Vision API, it returned "rabbit". However, when the image was rotated to a different angle, the Google Cloud Vision API predicted "duck".

https://twitter.com/JanelleCShane/status/1103420287519866880

Inspired by this, Max Woolf, a data scientist at BuzzFeed, tested further and concluded that the result really does vary based on the orientation of the image:

https://twitter.com/minimaxir/status/1103676561809539072

Google Cloud Vision provides pretrained API models that allow you to derive insights from input images. The API classifies images into thousands of categories, detects individual objects and faces within images, and reads printed words within images. You can also train custom vision models with AutoML Vision Beta.

Woolf used Python for rotating the image and getting predictions from the API for each rotation. He built the animations with R, ggplot2, and gganimate, and used ffmpeg to render them.

In deep learning, a model is often trained using a strategy in which the input images are rotated to help the model generalize better. Seeing the results of the experiment, Woolf concluded, "I suppose the dataset for the Vision API didn't do that as much / there may be an orientation bias of ducks/rabbits in the training datasets."

The reaction to this experiment was divided. While many Reddit users felt that there might be an orientation bias in the model, others felt that, as the image is ambiguous, there is no "right answer" and hence no problem with the model. One Redditor said, "I think this shows how poorly many neural networks are at handling ambiguity." Another Redditor commented, "This has nothing to do with a shortcoming of deep learning, failure to generalize, or something not being in the training set. It's an optical illusion drawing meant to be visually ambiguous. Big surprise, it's visually ambiguous to computer vision as well. There's not 'correct' answer, it's both a duck and a rabbit, that's how it was drawn. The fact that the Cloud vision API can see both is actually a strength, not a shortcoming."

Woolf has open-sourced the code used to generate this visualization on his GitHub page, which also includes a CSV of the prediction results at every rotation. In case you are more curious, you can test the Cloud Vision API with the drag-and-drop UI provided by Google.

Google Cloud security launches three new services for better threat detection and protection in enterprises
Generating automated image captions using NLP and computer vision [Tutorial]
Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
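For readers who want to try a similar experiment, a minimal sketch of the rotate-and-classify loop is below. It is not Woolf's actual code: it assumes the google-cloud-vision Python client and Pillow are installed, that Cloud credentials are configured, and that an RGB JPEG named duck.jpg exists (a hypothetical file name).

# Minimal sketch (not Woolf's code): label an image at several rotations.
# Requires: pip install google-cloud-vision pillow, plus GCP credentials.
import io
from PIL import Image
from google.cloud import vision  # on older clients, use vision.types.Image below

client = vision.ImageAnnotatorClient()
img = Image.open("duck.jpg")  # hypothetical input file

for angle in range(0, 360, 30):
    buf = io.BytesIO()
    img.rotate(angle, expand=True).save(buf, format="JPEG")
    response = client.label_detection(image=vision.Image(content=buf.getvalue()))
    top = response.label_annotations[0]  # labels are ordered by confidence
    print(f"{angle:3d} deg -> {top.description} ({top.score:.2f})")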

NYU and AWS introduce Deep Graph Library (DGL), a python package to build neural network graphs

Prasad Ramesh
13 Dec 2018
2 min read
The NYU and AWS Shanghai teams have introduced a new library called Deep Graph Library (DGL). DGL is a Python package built to simplify deep learning on graphs, on top of existing deep learning frameworks. It serves as an interface between any existing tensor library and data that is expressed as graphs, making it easy to implement graph neural networks such as Graph Convolution Networks, TreeLSTM, and others, while maintaining high computation efficiency.

This new Python library is an effort to make graph implementations in deep learning simpler. According to the results they report, the speed-up on some models is as high as 10x, with better accuracy in some cases. Check out the results on GitHub. Their website states: "We are keen to bring graphs closer to deep learning researchers. We want to make it easy to implement graph neural networks model family. We also want to make the combination of graph based modules and tensor based modules (PyTorch or MXNet) as smooth as possible."

As of now, DGL supports PyTorch v1.0, and its autobatching is up to 4x faster than DyNet. DGL is tested on Ubuntu 16.04, macOS X, and Windows 10, and will work on any newer versions of these OSes. Python 3.5 or later is required, while Python 3.4 or older is not tested. Support for Python 2 is in the works.

Installing it is the same as any other Python package. With pip:

pip install dgl

And with conda:

conda install -c dglteam dgl

https://twitter.com/aussetg/status/1072897828677144582

DGL is currently in the beta stage, licensed under Apache 2.0, and has a Twitter page. You can check out DGL at their website.

UK researchers have developed a new PyTorch framework for preserving privacy in deep learning
OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners
Deep Learning Indaba presents the state of Natural Language Processing in 2018
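As a quick taste of the API described above (a hypothetical sketch based on the early DGL releases with the PyTorch backend; names may differ in later versions), building a small graph and running one round of neighbor aggregation looks roughly like this:

# Rough sketch of early-DGL usage (PyTorch backend); APIs may have changed since.
import torch
import dgl
import dgl.function as fn

g = dgl.DGLGraph()
g.add_nodes(4)                        # 4 nodes, numbered 0..3
g.add_edges([0, 1, 2], [1, 2, 3])     # edges 0->1, 1->2, 2->3
g.ndata["h"] = torch.randn(4, 5)      # a 5-dim feature vector per node

# One message-passing step: each node sums its in-neighbors' features.
g.update_all(fn.copy_src("h", "m"), fn.sum("m", "h"))
print(g.ndata["h"].shape)             # torch.Size([4, 5])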


How Reinforcement Learning works

Pravin Dhandre
14 Nov 2017
5 min read
This article is an excerpt from the book Machine Learning for Developers by Rodolfo Bonnin.

Reinforcement learning is a field that has resurfaced recently, and it has become popular in control and in solving games and situational problems, where a number of steps have to be taken to solve a problem. A formal definition of reinforcement learning is as follows:

"Reinforcement learning is the problem faced by an agent that must learn behavior through trial-and-error interactions with a dynamic environment." (Kaelbling et al. 1996)

In order to have a reference frame for the type of problem we want to solve, we will start by going back to a mathematical concept developed in the 1950s, called the Markov decision process.

Markov decision process

Before explaining reinforcement learning techniques, we will explain the type of problem we will attack with them. When talking about reinforcement learning, we want to optimize the problem of a Markov decision process: a mathematical model that aids decision making in situations where the outcomes are partly random and partly under the control of an agent. The main elements of this model are an Agent, an Environment, and a State (a simplified scheme of a reinforcement learning process).

The agent can perform certain actions (such as moving the paddle left or right). These actions can sometimes result in a reward rt, which can be positive or negative (such as an increase or decrease in the score). Actions change the environment and can lead to a new state st+1, where the agent can perform another action at+1. The set of states, actions, and rewards, together with the rules for transitioning from one state to another, make up a Markov decision process.

Decision elements

To understand the problem, let's situate ourselves in the problem-solving environment and look at the main elements:

- The set of states
- The action to take (to go from one place to another)
- The reward function (the value represented by the edge)
- The policy (the way to complete the task)
- A discount factor, which determines the importance of future rewards

The main difference from traditional forms of supervised and unsupervised learning is the time taken to calculate the reward, which in reinforcement learning is not instantaneous; it comes after a set of steps. Thus, the next state depends on the current state and the decision maker's action, and the state is not dependent on all the previous states (it doesn't have memory), so it complies with the Markov property. Since this is a Markov decision process, the probability of state st+1 depends only on the current state st and action at:

P(st+1 | st, at, st-1, at-1, ...) = P(st+1 | st, at)

The goal of the whole (unrolled) reinforcement mechanism is to generate a policy P that maximizes rewards. The training samples are tuples <s, a, r>.

Optimizing the Markov process

Reinforcement learning is an iterative interaction between an agent and the environment. The following occurs at each timestep:

- The process is in a state, and the decision-maker may choose any action that is available in that state.
- The process responds at the next timestep by randomly moving into a new state and giving the decision-maker a corresponding reward.
- The probability that the process moves into its new state is influenced by the chosen action, in the form of a state transition function.

Basic RL techniques: Q-learning

One of the most well-known reinforcement learning techniques, and the one we will be implementing in our example, is Q-learning. Q-learning can be used to find an optimal action for any given state in a finite Markov decision process. Q-learning tries to maximize the value of the Q-function, which represents the maximum discounted future reward when we perform action a in state s.

Once we know the Q-function, the optimal action a in state s is the one with the highest Q-value. We can then define a policy π(s) that gives us the optimal action in any state:

π(s) = argmax_a Q(s, a)

We can define the Q-function for a transition point (st, at, rt, st+1) in terms of the Q-function at the next point (st+1, at+1, rt+1, st+2), similar to what we did with the total discounted future reward. This equation is known as the Bellman equation for Q-learning:

Q(st, at) = rt + γ max_a Q(st+1, a)

In practice, we can think of the Q-function as a lookup table (called a Q-table) where the states (denoted by s) are rows and the actions (denoted by a) are columns, and the elements (denoted by Q(s, a)) are the rewards that you get if you are in the state given by the row and take the action given by the column. The best action to take at any state is the one with the highest reward:

initialize Q-table Q
observe initial state s
while (!game_finished):
    select and perform action a
    get reward r
    advance to state s'
    Q(s, a) = Q(s, a) + α(r + γ max_a' Q(s', a') - Q(s, a))
    s = s'

You will realize that the algorithm is basically doing stochastic gradient descent on the Bellman equation, backpropagating the reward through the state space (or episode) and averaging over many trials (or epochs). Here, α is the learning rate that determines how much of the difference between the previous Q-value and the discounted new maximum Q-value should be incorporated.

We have successfully reviewed Q-learning, one of the most important and innovative reinforcement learning architectures to have appeared in recent years. Every day, such reinforcement models are applied in innovative ways, whether to generate feasible new elements from a selection of previously known classes or even to win against professional players in strategy games. If you enjoyed this excerpt from the book Machine Learning for Developers, check out the book below.
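To complement the pseudocode above, here is a small, self-contained Python version of tabular Q-learning. The corridor environment is invented purely for illustration; the update line is the same Bellman update shown in the pseudocode.

# Tabular Q-learning on a toy 5-cell corridor; reaching cell 4 yields reward 1.
# The environment is invented for illustration.
import random

N_STATES, ACTIONS = 5, [-1, +1]          # actions: move left / move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    s = 0
    while s != N_STATES - 1:                        # until the goal is reached
        if random.random() < epsilon:               # epsilon-greedy exploration
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[s][i])
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# Greedy policy: action 1 (move right) should be preferred in every
# non-terminal state after training.
print([max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES)])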


Google AI introduces Snap, a microkernel approach to ‘Host Networking’

Savia Lobo
29 Oct 2019
4 min read
A few days ago, the Google AI team introduced Snap, a microkernel-inspired approach to host networking, at the 27th ACM Symposium on Operating Systems Principles. Snap is a userspace networking system with flexible modules that implement a range of network functions, including edge packet switching, virtualization for Google's cloud platform, traffic shaping policy enforcement, and a high-performance reliable messaging and RDMA-like service. The Google AI team says, "Snap has been running in production for over three years, supporting the extensible communication needs of several large and critical systems."

Why Snap?

Prior to Snap, the Google AI team says, they were limited in their ability to develop and deploy new network functionality and performance optimizations in several ways. First, developing kernel code was slow and drew on a smaller pool of software engineers. Second, feature release through kernel module reloads covered only a subset of functionality and often required disconnecting applications, while the more common case of requiring a machine reboot necessitated draining the machine of running applications.

Unlike prior microkernel systems, Snap benefits from multi-core hardware for fast IPC and does not require the entire system to adopt the approach wholesale, as it runs as a userspace process alongside Google's standard Linux distribution and kernel (see the architecture diagram in the Snap research paper).

Using Snap, the Google researchers also created a new communication stack called Pony Express that implements a custom reliable transport and communications API. Pony Express provides significant communication efficiency and latency advantages to Google applications, supporting use cases ranging from web search to storage.

Features of the Snap userspace networking system

Snap's architecture draws on recent ideas in userspace networking, in-service upgrades, centralized resource accounting, programmable packet processing, kernel-bypass RDMA functionality, and optimized co-design of transport, congestion control, and routing. With these, Snap:

- Enables a high rate of feature development with a microkernel-inspired approach of developing in userspace with transparent software upgrades. It also retains the centralized resource allocation and management capabilities of monolithic kernels and improves upon accounting gaps in existing Linux-based systems.
- Implements a custom kernel packet injection driver and a custom CPU scheduler that enable interoperability without requiring the adoption of new application runtimes, while maintaining high performance across use cases that simultaneously require packet processing through both Snap and the Linux kernel networking stack.
- Encapsulates packet processing functions into composable units called "engines", which enable both modular CPU scheduling and incremental, minimally disruptive state transfer during upgrades.
- Through Pony Express, provides support for OSI layer 4 and 5 functionality through an interface similar to an RDMA-capable "smart" NIC. This enables transparently leveraging offload capabilities in emerging hardware NICs as a means to further improve server efficiency and throughput.
- Delivers 3x better transport processing efficiency than the baseline Linux kernel, and supports RDMA-like functionality at speeds of 5M ops/sec/core.

MicroQuanta: Snap's new lightweight kernel scheduling class

To dynamically scale CPU resources, Snap works in conjunction with a new lightweight kernel scheduling class called MicroQuanta. It provides a flexible way to share cores between latency-sensitive Snap engine tasks and other tasks, limiting the CPU share of latency-sensitive tasks while maintaining low scheduling latency. A MicroQuanta thread runs for a configurable runtime out of every period of time units, with the remaining CPU time available to other CFS-scheduled tasks, using a variation of a fair queuing algorithm for high- and low-priority tasks (rather than more traditional fixed time slots).

MicroQuanta is a robust way for Snap to get priority on cores runnable by CFS tasks while avoiding starvation of critical per-core kernel threads. While other Linux real-time scheduling classes use both per-CPU tick-based and global high-resolution timers for bandwidth control, MicroQuanta uses only per-CPU high-resolution timers. This allows scalable time-slicing at microsecond granularity.

Snap is being received positively by many in the community.

https://twitter.com/copyconstruct/status/1188514635940421632

To know more about Snap in detail, you can read the complete research paper.

Amazon announces improved VPC networking for AWS Lambda functions
Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels
ReactOS 0.4.12 releases with kernel improvements, Intel e1000 NIC driver support, and more


Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV

Prasad Ramesh
19 Oct 2018
2 min read
Yesterday the Raspberry Pi Foundation launched a new device called the Raspberry Pi TV HAT: a small board that lets you decode and stream live TV. The TV HAT is roughly the size of a Raspberry Pi Zero board. It connects to the Raspberry Pi via a GPIO connector and has a port for a TV antenna connector. The new addon follows a new HAT (Hardware Attached on Top) form factor: the addon itself is a half-sized HAT matching the outline of Raspberry Pi Zero boards (image on the Raspberry Pi website).

TV HAT specifications and requirements

The board carries a Sony CXD2880 TV tuner. It supports TV standards like DVB-T2 (1.7 MHz, 5 MHz, 6 MHz, 7 MHz, 8 MHz channel bandwidth) and DVB-T (5 MHz, 6 MHz, 7 MHz, 8 MHz channel bandwidth). The frequencies it can receive are VHF III, UHF IV, and UHF V.

Raspbian Stretch (or later) is required for using the Raspberry Pi TV HAT, and TVHeadend is the recommended software to start with TV streams. There is a 'Getting Started' guide on the Raspberry Pi website.

Watch on the Raspberry Pi

With the TV HAT, you can receive and view television on a Raspberry Pi board. The Pi can also be used as a server to stream television over a network to other devices. When running as a server, the TV HAT works with all 40-pin GPIO Raspberry Pi boards. Watching TV on the Pi itself needs more processing, so a Pi 2, 3, or 3B+ is recommended (a photo of the TV HAT connected to a Raspberry Pi board is on the Raspberry Pi website).

Streaming over a network

Connecting a TV HAT to your network allows viewing streams on any device connected to the network, including computers, smartphones, and tablets. Initially, the TV HAT will be available only in Europe.

The Raspberry Pi TV HAT is now on sale for $21.50; visit the Raspberry Pi website for more details.

Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts
How to secure your Raspberry Pi board [Tutorial]
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?

How to improve interpretability of machine learning systems

Sugandha Lahoti
12 Mar 2018
6 min read
Advances in machine learning have greatly improved products, processes, and research, as well as how people interact with computers. One of the factors lacking in machine learning processes is the ability to give an explanation for their predictions. The inability to properly explain results leads to end-users losing trust in the system, which ultimately acts as a barrier to the adoption of machine learning. Hence, along with the impressive results from machine learning, it is also important to understand why and where it works, and when it won't. In this article, we will talk about some ways to increase machine learning interpretability and make predictions from machine learning models understandable.

3 interesting methods for interpreting Machine Learning predictions

According to Miller, interpretability is the degree to which a human can understand the cause of a decision. Interpretable predictions lead to better trust and provide insight into how the model may be improved. The kind of machine learning developments happening at present requires a lot of complex models, which lack interpretability. Simpler models (e.g. linear models), on the other hand, often give a correct interpretation of a prediction model's output, but they are often less accurate than complex models, creating a tension between accuracy and interpretability. Complex models are less interpretable because their relationships are generally not concisely summarized. However, if we focus on a prediction made for a particular sample, we can describe the relationships more easily. Balancing the trade-off between model complexity and interpretability lies at the heart of research into interpretable deep learning and machine learning models. We will discuss a few methods that increase the interpretability of complex ML models by summarizing model behavior with respect to a single prediction.

LIME, or Local Interpretable Model-Agnostic Explanations, is a method developed in the paper "Why should I trust you?" for interpreting individual model predictions by locally approximating the model around a given prediction. LIME uses two approaches to explain specific predictions: perturbation and linear approximation. With perturbation, LIME takes a prediction that requires explanation and systematically perturbs its inputs; these perturbed inputs become new, labeled training data for a simpler approximate model. It then does local linear approximation by fitting a linear model to describe the relationships between the (perturbed) inputs and outputs. Thus a simple linear algorithm approximates the more complex, nonlinear function. (A minimal code sketch of this approach appears at the end of this article.)

DeepLIFT (Deep Learning Important FeaTures) is another method, serving as a recursive prediction explanation method for deep learning. It decomposes the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. DeepLIFT assigns contribution scores based on the difference between the activation of each neuron and its 'reference activation'. DeepLIFT can also reveal dependencies missed by other approaches by optionally giving separate consideration to positive and negative contributions.

Layer-wise relevance propagation is another method for interpreting the predictions of deep learning models. It determines which features in a particular input vector contribute most strongly to a neural network's output. It defines a set of constraints to derive a number of different relevance propagation functions.

Thus we saw three different ways of summarizing model behavior with respect to a single prediction to increase model interpretability. Another important avenue for interpreting machine learning models is to understand (and rethink) generalization.

What is generalization and how it affects Machine learning interpretability

Machine learning algorithms are trained on certain datasets, called training sets. During training, a model learns intrinsic patterns in the data and updates its internal parameters to better understand it. Once training is over, the model is tried on test data to predict results based on what it has learned. In an ideal scenario, the model would always accurately predict the results for the test data. In reality, the model identifies all the relevant information in the training data but sometimes fails when presented with new data. This difference between "training error" and "test error" is called the generalization error.

The ultimate aim of turning a machine learning system into a scalable product is generalization. Every task in ML aims to create a generalized algorithm that acts in the same way for all kinds of distributions. The ability to distinguish models that generalize well from those that do not will not only help make ML models more interpretable, but might also lead to more principled and reliable model architecture design.

According to conventional statistical theory, small generalization error is due either to properties of the model family or to the regularization techniques used during training. A recent paper at ICLR 2017, "Understanding deep learning requires rethinking generalization", shows that current machine learning theoretical frameworks fail to explain the impressive results of deep learning approaches, and why understanding deep learning requires rethinking generalization. The authors support their findings through extensive systematic experiments.

Developing human understanding through visualizing ML models

Interpretability also means creating models that support human understanding of machine learning. Human interpretation is enhanced when visual and interactive diagrams and figures are used to explain the results of ML models. This is why a tight interplay of UX design with machine learning is essential for increasing machine learning interpretability.

Walking along the lines of human-centered machine learning, researchers at Google, OpenAI, DeepMind, YC Research, and others have come up with Distill. This open science journal features articles with clear expositions of machine learning concepts using excellent interactive visualization tools. Most of these articles are aimed at understanding the inner workings of various machine learning techniques. Some of them include:

- An article on attention and Augmented Recurrent Neural Networks, which has a beautiful visualization of attention distribution in RNNs.
- Another on feature visualization, which talks about how neural networks build up their understanding of images.

Google has also launched the PAIR initiative to study and design the most effective ways for people to interact with AI systems. It helps researchers understand ML systems through work on interpretability and by expanding the community of developers. R2D3 is another website which provides an excellent visual introduction to machine learning. Facets is another tool for visualizing and understanding training datasets, providing a human-centered approach to ML engineering.

Conclusion

Human-centered machine learning is all about increasing the interpretability of ML systems and developing human understanding of them. It is about ML and AI systems understanding how humans reason, communicate, and collaborate. As algorithms are used to make decisions in more areas of everyday life, it's important for data scientists to train them thoughtfully to ensure the models make decisions for the right reasons. As more progress is made in this area, ML systems will avoid commonsense errors, violated user expectations, and situations that can lead to conflict and harm, making such systems safer to use. As research continues, machines will soon be able to fully explain their decisions and results in the most humane way possible.
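As promised above, here is a minimal sketch of the LIME approach using the open-source lime package, explaining one random-forest prediction on the Iris dataset. The dataset, model, and parameter choices are illustrative, not from the article's sources.

# Minimal LIME sketch: explain a single prediction of a random forest.
# Requires: pip install lime scikit-learn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(n_estimators=100).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,                          # training data used to sample perturbations
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Perturb the inputs around one sample, fit a local linear model, and report
# per-feature contributions to this single prediction.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())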


Dropbox walks back its own decision; brings back support for ZFS, XFS, Btrfs, and eCryptFS on Linux

Vincy Davis
23 Jul 2019
3 min read
Today, Dropbox notified users that it has brought back support for ZFS and XFS on 64-bit Linux systems, and Btrfs and eCryptFS on all Linux systems, in its Beta Build 77.3.127. The support note in the Dropbox forum reads, "Add support for zfs (on 64-bit systems only), eCryptFS, xfs (on 64-bit systems only), and btrfs filesystems in Linux."

Last November, Dropbox had notified users that it was "ending support for Dropbox syncing to drives with certain uncommon file systems. The supported file systems are Ext4 filesystem on Linux, NTFS for Windows, and HFS+ or APFS for Mac." Dropbox explained that a supported file system is necessary because Dropbox uses extended attributes (X-attrs) to identify files in its folder and keep them in sync. The post also mentioned that Dropbox would support only the most common file systems that support X-attrs, to ensure stability and consistency for its users.

After Dropbox discontinued support for these Linux formats, many developers switched to other services such as Google Drive and Box. This is speculated to be one of the reasons why Dropbox has changed its previous decision; however, no official statement explaining the reversal has been released yet.

Many users have expressed resentment over Dropbox's back-and-forth. A user on Hacker News says, "Too late. I have left Dropbox because of their stance on Linux filesystems, price bump with unnecessary features, and the continuous badgering to upgrade to its business. It's a great change though for those who are still on Dropbox. Their sync is top-notch."

A Redditor comments, "So after I stopped using Dropbox they do care about me as a user after all? Linux users screamed about how nonsensical the original decision was. Maybe ignoring your users is not such a good idea after all? I moved to Cozy Drive - it's not perfect, but has native Linux client, is Europe based (so I am protected by EU privacy laws) and is pretty good as almost drop-in replacement."

Another Redditor said, "Too late for me, I was a big dropbox user for years, they dropped support for modern file systems and I dropped them. I started using Syncthing to replace the functionality I lost with them."

A few developers are still happy to see that Dropbox will again support the popular Linux filesystems. A user on Hacker News comments, "That's good news. Happy to see Dropbox thinking about the people who stuck with them from day 1. In the past few years they have been all over the place, trying to find their next big thing and in the process also neglecting their non-enterprise customers. Their core product is still the best in the market and an important alternative to Google."

Fedora announces the first preview release of Fedora CoreOS as an automatically updating Linux OS for containerized workloads
Linux Mint 19.2 beta releases with Update Manager, improved menu and much more!
Linux kernel announces a patch to allow 0.0.0.0/8 as a valid address range
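Since the whole dispute hinges on extended-attribute support, here is a small, hypothetical Python probe (Linux-only, using the standard library's os.setxattr) that checks whether the filesystem backing a directory honours user xattrs the way Dropbox requires. The attribute name is an invented placeholder.

# Hypothetical probe (Linux-only): does this filesystem support user xattrs?
import errno
import os
import tempfile

def supports_xattrs(directory):
    fd, probe = tempfile.mkstemp(dir=directory)
    os.close(fd)
    try:
        os.setxattr(probe, "user.xattr_probe", b"1")   # placeholder attr name
        return os.getxattr(probe, "user.xattr_probe") == b"1"
    except OSError as e:
        if e.errno == errno.ENOTSUP:   # filesystem rejects extended attributes
            return False
        raise
    finally:
        os.remove(probe)

print(supports_xattrs(os.path.expanduser("~")))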


Go 2 design drafts include plans for better error handling and generics

Prasad Ramesh
29 Aug 2018
3 min read
In the annual Go user survey, the three top requests made by users for Go version 2 were better package management, error handling, and the inclusion of generics. Following these requests, the Go 2 draft designs were shared yesterday, covering error handling, error values, and generics. Note that these are not official proposals. Error handling and generics are at step 2 of the Go release cycle (see the diagram on the Go Blog).

Yesterday, Google developer Russ Cox gave a talk on the design drafts for Golang 2; the Go 2 draft designs were also previewed at Gophercon 2018. In his talk, he mentions that the current boilerplate contains too much code for error checks and that error reporting is not precise enough. For example, an error from os.Open does not mention the name of the file that could not be opened. As proper error reporting only adds to the code, most programmers don't bother with it, despite knowing that such a practice may create confusion. The new idea, therefore, is to add a check expression that shortens the checks while keeping them explicit.

Cox also stresses the importance of experience reports. These reports are difficult but necessary for implementing new features: they turn abstract problems into concrete ones and are needed for changes to be implemented in Golang. They serve as test cases for evaluating a proposed solution and its effects on real-life use cases.

Regarding the inclusion of generics, Cox mentions: "I've been examining generics recently, but I don't have in my mind a clear picture of the detailed, concrete problems that Go users need generics to solve. As a result, I can't answer a design question like whether to support generic methods, which is to say methods that are parameterized separately from the receiver. If we had a large set of real-world use cases, we could begin to answer a question like this by examining the significant ones."

Go 2 is not going to be a single release but a sequence of releases, adding features as and when they are ready. The approach is to first make features backward compatible with Go 1. Minor changes could appear in Go 1 in a year or so, and if there are no backward-incompatible changes, Go 1.20 may simply be declared Go 2. The conversation for Go 2 has started, and there is a call for community help and contributions to turn the drafts into official proposals. Visit the Go page and the GitHub repository for more details.

Why Golang is the fastest growing language on GitHub
Golang 1.11 is here with modules and experimental WebAssembly port among other updates
GoMobile: GoLang's Foray into the Mobile World
Why Golang is the fastest growing language on GitHub
Golang 1.11 is here with modules and experimental WebAssembly port among other updates
GoMobile: GoLang’s Foray into the Mobile World

Microsoft Azure VP demonstrates Holoportation, a reconstructed transmittable 3D technology

Vincy Davis
18 Jul 2019
3 min read
One of the major highlights of the ongoing Microsoft Inspire 2019 in Las Vegas was the demonstration of Holoportation by Azure Corporate Vice President Julia White. Holoportation is a 3D capture technology that allows high-quality 3D models of people to be reconstructed, compressed, and transmitted anywhere in the world in real time. Microsoft researchers have been working on this technology for several years, using Mixed Reality (MR) devices such as HoloLens, a pair of mixed reality smart glasses. People wearing these devices can see each other virtually, giving the impression that they are in the same room at the same time.

Yesterday, on Day 3 of the conference, White demonstrated the technology using Mixed Reality and Azure AI. White wore a HoloLens 2 headset, which generated a ‘mini-me’ version of herself that she could almost hold in her hand. After a little sparkle of green special effects, the miniature version transformed into a full-size hologram of White. The hologram spoke in Japanese, even though the real White does not speak the language, and its voice was an exact replica of her “unique voice signature”.

https://www.youtube.com/watch?time_continue=169&v=auJJrHgG9Mc

White said this “mind blowing” demo chained several services: Mixed Reality technology to create her hologram and render it live, Azure speech-to-text to transcribe her English speech, Azure Translator to translate the transcript into Japanese, and finally neural text-to-speech to make the result sound exactly like White, just speaking in Japanese. (A hypothetical sketch of this pipeline follows below.)

This is not the first time Microsoft has demonstrated its holographic technology. During the Microsoft Inspire 2018 event, the Microsoft team remotely collaborated in real time on a 3D hologram. The demo participants used advanced hand gestures and voice commands to collectively assess and dissect a 3D hologram of the Rigado Cascade IoT gateway.

Azure text-to-speech allows users to turn a custom voice into natural, human-like synthesized speech. The technology thus gives people the ability to converse with anybody, anywhere in the world, in real time, without any language barrier, and in their own voice texture.

The audience expressed their amazement during the demo, and the seamless technology has also impressed many Twitterati.

https://twitter.com/tendaidongo/status/1151567203428384773

https://twitter.com/KamaraSwaby/status/1151528144198705158

https://twitter.com/bobbyschang/status/1151526620362002432

https://twitter.com/_dimpy_/status/1151526775404429312

With Microsoft showcasing its prowess in virtual and augmented reality, devices like 3D cameras and HoloLens headsets might become the new norm in smartphones, video games, and many other applications.
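As a rough illustration of the three-stage pipeline described above, here is a hypothetical Go sketch. The SpeechToText, Translator, and TextToSpeech interfaces are stand-ins invented for this example; they are not real Azure SDK types, and the actual services are called over Azure's own APIs.

```go
// Package pipeline sketches the transcribe -> translate -> synthesize
// chain from the Holoportation demo. All types here are hypothetical.
package pipeline

import "context"

// SpeechToText stands in for a speech-recognition service.
type SpeechToText interface {
	Transcribe(ctx context.Context, audio []byte) (string, error)
}

// Translator stands in for a text-translation service.
type Translator interface {
	Translate(ctx context.Context, text, fromLang, toLang string) (string, error)
}

// TextToSpeech stands in for a neural TTS service that can render
// text in a previously trained custom voice.
type TextToSpeech interface {
	Synthesize(ctx context.Context, text, voiceID string) ([]byte, error)
}

// HoloTranslate chains the three services: English speech in,
// Japanese speech in the speaker's own voice signature out.
func HoloTranslate(ctx context.Context, stt SpeechToText, tr Translator,
	tts TextToSpeech, audio []byte, voiceID string) ([]byte, error) {

	text, err := stt.Transcribe(ctx, audio) // 1. speech -> English text
	if err != nil {
		return nil, err
	}
	jp, err := tr.Translate(ctx, text, "en", "ja") // 2. English -> Japanese
	if err != nil {
		return nil, err
	}
	return tts.Synthesize(ctx, jp, voiceID) // 3. text -> custom-voice speech
}
```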
Microsoft adds Telemetry files in a “security-only update” without prior notice to users
Microsoft introduces passwordless feature in its Windows 10 devices, replaces it with Windows Hello face authentication, fingerprints, or a PIN
Microsoft will not support Windows registry backup by default, to reduce disk footprint size from Windows 10 onwards

15 million jobs in Britain at stake with Artificial Intelligence robots set to replace humans in the workforce

Natasha Mathur
23 Aug 2018
3 min read
Earlier this week, the Bank of England’s chief economist, Andy Haldane, warned that the UK needs a skills revolution, with up to 15 million jobs in Britain at stake. This is apparently due to a “third machine age” in which Artificial Intelligence is making obsolete a huge number of jobs that were previously the preserve of humans.

Haldane says this potential “Fourth Industrial Revolution” could cause disruption on a “much greater scale” than the damage experienced during the first three industrial revolutions, which were mainly about machines replacing humans at manual tasks. The fourth will be different. As Haldane told BBC Radio 4’s Today programme, “the 20th-century machines have substituted not just for manual human tasks, but cognitive ones too -- human skills machines could reproduce, at lower cost, has both widened and deepened”. With robots becoming more intelligent, this revolution will hollow out jobs to a deeper degree than past ones did.

The Bank of England classifies jobs into three categories: those with a high (greater than 66%), medium (33-66%), or low (less than 33%) chance of automation. Administrative, clerical, and production jobs are at the highest risk of being replaced by robots, whereas jobs centred on human interaction, face-to-face conversation, and negotiation are less likely to suffer. (The banding is illustrated in the short snippet below.)

(Chart: probability of automation by occupation)

This “hollowing out” poses a risk not only to low-paid jobs but also to mid-level ones. Meanwhile, the UK’s Artificial Intelligence Council Chair, Tabitha Goldstaub, mentioned that the “challenge will be ensuring that people are prepared for the cultural and economic shifts”, with a focus on creating “the new jobs of the future” in order to avoid mass replacement by robots. Haldane echoed Goldstaub’s sentiments and told the BBC that “we will need even greater numbers of new jobs to be created in the future if we are not to suffer this longer-term feature called technological unemployment”.

Every cloud has a silver lining

Although the automation of these tasks could lead to mass unemployment, Goldstaub is positive, saying “there are great opportunities ahead as well as significant challenges”. The challenge is bracing the UK workforce for the coming change. The silver lining, according to Goldstaub, is that “there is a hopeful view -- that a lot of these jobs (existing) are boring, mundane, unsafe, drudgery - there could be -- liberation from -- these jobs and a move towards a brighter world.”
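For illustration only, the Bank's three bands can be expressed as a tiny Go function using the thresholds quoted above; the function and its labels are ours, not the Bank's.

```go
package main

import "fmt"

// automationRisk maps a probability of automation (0..1) to the
// Bank of England's band labels: high (>66%), medium (33-66%), low (<33%).
func automationRisk(p float64) string {
	switch {
	case p > 0.66:
		return "high"
	case p >= 0.33:
		return "medium"
	default:
		return "low"
	}
}

func main() {
	// Hypothetical probabilities for three occupations.
	for _, p := range []float64{0.80, 0.50, 0.10} {
		fmt.Printf("p=%.2f -> %s risk of automation\n", p, automationRisk(p))
	}
}
```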
OpenAI builds reinforcement learning based system giving robots human like dexterity
OpenAI Five bots beat a team of former pros at Dota 2
What if robots get you a job! Enter Helena, the first artificial intelligence recruiter