
How-To Tutorials

6719 Articles

NIPS 2017 Special: 6 Key Challenges in Deep Learning for Robotics by Pieter Abbeel

Aaron Lazar
13 Dec 2017
10 min read
Pieter Abbeel is a professor at UC Berkeley and a former Research Scientist at OpenAI. His current research focuses on robotics and machine learning, with a particular focus on deep reinforcement learning, deep imitation learning, deep unsupervised learning, meta-learning, learning-to-learn, and AI safety. This article brings our readers the highlights of Pieter's keynote speech at NIPS 2017. It discusses the implementation of deep reinforcement learning in robotics, the challenges that exist, and how these challenges can be overcome. Once you've been through this article, we're certain you'll want to watch the entire video on the NIPS Facebook page. All images in this article come from his presentation slides and do not belong to us.

Robotics in ML has been growing in leaps and bounds, with several companies investing huge amounts to tie the two technologies together in the best way possible. Even so, several aspects of AI robotics are still far from solved. Here are a few of them:

Maximize signal extracted from real-world experience
Faster / data-efficient reinforcement learning
Long-horizon reasoning
Taskability (imitation learning)
Lifelong learning (continuous adaptation)
Leverage simulation

Maximise signal extracted from real world experience

We need more real-world data, so we need to extract as much signal from it as possible. The diagram below shows the different layers of machine learning that engineers perform. Some engineers look at the entire cake and train the agent to learn both from the reward and from auxiliary signals, because using only the reinforcement learning reward does not give you a lot of signal. Is there, then, a way of defining a reward signal that ties more reinforcement learning into the system?

This is where Hindsight Experience Replay comes in. The idea is to get a reward signal from any experience by assuming the goal equals whatever actually happened, not just from successes as in usual RL. For this, we assume that whatever the agent does is a success. We use Q-learning, and instead of a standard Q-function we train on multiple goals, even though they were not really the goal when the agent was acting. A replay buffer collects experience, Q-learning is applied, and a hindsight replay is performed to infuse a new reward for everything the agent has done. For various robotic tasks such as pushing, sliding, and pick-and-place, this works very well.

Faster Reinforcement Learning

When we talk about faster RL, we are talking about much more data-efficient RL. The standard RL setup is as follows: an agent takes actions in an environment in order to achieve a reward, and the goal is to maximise that reward. Unlike supervised learning, there is no supervision telling the agent whether the actions it took were right or wrong. That brings in a few additional challenges:

Credit assignment: this is a major problem, and it is where the signal in RL comes from
Stability: because of the feedback loop, the system can destabilize and destroy itself
Exploration: doing things you have never done before, when the only way to learn is from what you have done before

Despite this, there have been great improvements in reinforcement learning in the past few years, enabling AI systems to play games like Go and Dota. It has also been used by NASA in building robots for planetary exploration.
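To ground the standard RL loop and the exploration challenge described above, here is a small, self-contained toy sketch (not from the keynote): tabular Q-learning with epsilon-greedy exploration on a hypothetical five-state chain environment.

import random

# Toy chain environment (hypothetical): states 0..4, start at 0,
# reward 1.0 only on reaching the goal state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left or right along the chain

def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Tabular Q-learning with epsilon-greedy exploration.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.2

for episode in range(500):
    state, done = 0, False
    while not done:
        if random.random() < epsilon:                      # exploration
            action = random.choice(ACTIONS)
        else:                                              # exploitation of current estimates
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # credit assignment: bootstrap from the best next-state value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print({k: round(v, 2) for k, v in Q.items()})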
But the question remains: how good is this learning? In the game of Pong, a human takes roughly two hours to learn what a Deep Q-Network (DQN) learns in 40 hours. A more careful study reveals that after 15 minutes, humans tend to outperform a DDQN that has trained for 115 hours. This is a tremendous gap in learning efficiency.

So how do we overcome the challenge? Several fully general algorithms are available, such as Trust Region Policy Optimization (TRPO), DQN, Asynchronous Actor-Critic Agents (A3C), and Rainbow, meaning they can be applied to any kind of environment. However, only a very small subset of possible environments is actually encountered in the real world. Can we develop fast RL algorithms that take advantage of this?

RL algorithms can be reused to train various policies: the algorithm is developed to train a policy to adapt to a particular environment A, and the same algorithm can then be applied to environment B, and so on. Humans develop the RL algorithm and then rely on it to train the policy. Despite this, none of these algorithms are as good as human learners. Is there an alternative? Indeed, yes: why not let the system learn not just the policy but the algorithm as well, in other words the entire agent?

Enter Meta-Reinforcement Learning

In meta-RL, the learning algorithm itself is learned. You could relate this to meta-programming, where one program is written to write another. This approach helps a system understand the world better so that it can pick up a new situation more quickly. How does it work? The system is faced with many environments, so that it learns the algorithm and outputs a faster RL agent. When faced with a new environment, that agent adapts to it quickly.

To evaluate actual performance, consider the multi-armed bandits problem. The setting: each bandit has its own distribution over payouts, and in each episode you can choose one bandit. A good RL agent should explore a sufficient number of bandits and exploit the best ones; we want an algorithm that pulls the arms with a higher probability of payoff rather than a low one. Several asymptotically optimal algorithms already exist for this problem, such as the Gittins index, UCB1, and Thompson sampling. Comparing some of them with the meta-RL algorithm, the result is quite impressive: meta-RL is competitive with Gittins. In a task where the agent has to run in a target direction while attaining maximum speed, the agent, when dropped into an environment, masters the task almost instantly.

However, meta-learning succeeds only about two thirds of the time. It fails the rest of the time for two main reasons:

Overfitting: the learner fits the current situation rather than fitting situations generically
Underfitting: the learner does not extract enough signal to get any reward

The solution is to put a different structure underneath the system. Instead of using an RNN, we can use a WaveNet-like architecture, or the Simple Neural Attentive Meta-Learner (SNAIL). SNAIL performs somewhat better than RL2 on the same bandits problem.
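As an illustrative aside (not from the keynote), here is a minimal sketch of Thompson sampling, one of the classical bandit algorithms mentioned above, on hypothetical Bernoulli-payout bandits:

import random

# Hypothetical bandits: each arm pays 1 with a fixed, unknown probability.
true_payout_probs = [0.2, 0.5, 0.75]

# Beta(1, 1) prior per arm; successes and failures parameterize the posterior.
successes = [1] * len(true_payout_probs)
failures = [1] * len(true_payout_probs)

total_reward = 0
for t in range(2000):
    # Sample a plausible payout probability per arm and play the best sample.
    samples = [random.betavariate(successes[i], failures[i])
               for i in range(len(true_payout_probs))]
    arm = samples.index(max(samples))
    reward = 1 if random.random() < true_payout_probs[arm] else 0
    # Update the posterior of the arm that was played.
    successes[arm] += reward
    failures[arm] += 1 - reward
    total_reward += reward

print("total reward:", total_reward)
print("posterior means:", [round(successes[i] / (successes[i] + failures[i]), 2)
                           for i in range(len(true_payout_probs))])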
Longer Horizon Reasoning

We need to learn to reason over longer horizons than canonical algorithms do, and for this we need hierarchy. For example, suppose a robot has to perform 10 tasks in a day; that is roughly 10 decision steps per day. Each of those tasks has subtasks under it, which might bring the total to about 1,000 time steps. Performing these tasks requires footstep planning, which brings it to around 100,000 time steps, and footsteps in turn require commands to be sent to the motors, taking it to around 100,000,000 time steps. This is a very long horizon. We can formulate this as a meta-learning problem: the agent has to solve a distribution of related long-horizon tasks, with the goal of learning new tasks in the distribution quickly. If that is the objective, hierarchy falls out of it.

Taskability (Imitation Learning)

There are several things we want from robots. We need to be able to tell them what to do, and one way is to give them examples. This is called imitation learning, and it can be applied successfully to a variety of use cases. The idea is to collect many demonstrations, train a policy from those demonstrations, and then deploy the learned policy. The problem is that every time there is a new task, you start from scratch. The solution is to build experience across many demonstrations, as humans do. Instead of running the agent through several demos of the new task, it is trained on a large collection of demonstrations and then shown a single frame of a new demo, from which it predicts what the outcome should be. This is known as one-shot imitation learning, a supervised learning setup in which many demonstrations are used to train the system to handle any new environment it is put into.

Lifelong learning (Continuous Adaptation)

What we usually do in ML can be divided into two broad steps: run machine learning, then deploy it. That is the canonical way, and all the learning happens ahead of time, before deployment. In real-world settings, however, what you learned from past data might not work in the future. There is a need to keep learning during deployment, which is the spirit of lifelong learning. This brings us to continuous adaptation: can we train an agent to be good in non-stationary environments? At meta-training time, we need to check whether the agent can adapt to a new or changing task. Since it is hard to do ML training in the real world, we can change the dynamics in simulation. We can also use competitive environments, where your agent shares the environment with other agents that are trying to beat it; the only way to succeed is to continuously adapt more quickly than the others.

Leverage Simulation

Simulation is very helpful, and it is not that expensive. It is fast and scalable, and it makes labelling easier. The challenge, however, is how to get useful things out of the simulator. One approach is to build realistic simulators, which is quite expensive. Another is to use a close-enough simulator together with a small amount of real-world data, through domain confusion or domain adaptation; this has been quite successful. A further approach is domain randomisation, which is also working well in the real world: if the model sees enough simulated variation, the real world may appear to be just the next simulator. This has worked in the context of using simulator data to train a quadcopter to avoid collisions. Moreover, a model pre-trained on ImageNet and a model trained purely in simulation performed similarly after around 8,000 examples.

To conclude, the beauty of meta-learning is that it enables the discovery of algorithms that are data-driven, as opposed to those created from pure human ingenuity.
This requires more compute power, but companies such as Nvidia and Intel are working hard to overcome that constraint, which should help meta-learning reach the scale needed for robotics. While we work through the technical challenges above, other significant challenges we must focus on in parallel include safe learning and value alignment, among others.

3 simple steps to convert a mathematical formula to a model

Vedika Naik
13 Dec 2017
3 min read
[box type="note" align="" class="" width=""]The following is an excerpt from the book Scala for Machine Learning, Second Edition, written by Patrick R. Nicolas. The book will help you leverage Scala and machine learning to study and construct systems that can learn from data.[/box]

This article shows how a mathematical formula can be converted into a machine learning model in three easy steps. Stackable traits enable developers to follow a strict mathematical formalism while implementing a model in Scala. Scientists use a universally accepted template to solve mathematical problems:

Declare the variables relevant to the problem.
Define a model (equation, algorithm, formulas...) as the solution to the problem.
Instantiate the variables and execute the model to solve the problem.

Let's consider the example of kernel functions, a model that consists of the composition of two mathematical functions, and its potential implementation in Scala.

Step 1 – variable declaration

The implementation consists of wrapping (scoping) the two functions into traits and defining these functions as abstract values. The mathematical formalism is as follows, with the Scala implementation shown here:

type V = Vector[Double]
trait F { val f: V => V }
trait G { val g: V => Double }

Step 2 – model definition

The model is defined as the composition of the two functions. The stack of traits G, F describes the type of compatible functions that can be composed, using the self-referenced constraint self: G with F.

Formalism: h(v) = g(f(v))

The implementation is as follows:

class H { self: G with F => def apply(v: V): Double = g(f(v)) }

Step 3 – instantiation

The model is executed once the variables f and g are instantiated. The implementation is as follows:

import scala.math.exp

val h = new H with G with F {
  val f: V => V = (v: V) => v.map(exp(_))
  val g: V => Double = (v: V) => v.sum
}

Note: Lazy value trigger
In the preceding example, the value of h(v) = g(f(v)) can be computed automatically as soon as g and f are initialized, by declaring h as a lazy value.

Clearly, Scala preserves the formalism of mathematical models, making it easier for scientists and developers to migrate existing projects written in scientifically oriented languages such as R.

Note: Emulation of R
Most data scientists use the language R to create models and apply learning strategies. They may consider Scala as an alternative to R in some cases, as Scala preserves the mathematical formalism used in models implemented in R.

In this article we explained the preservation of mathematical formalism in Scala. This needs to be further extended to the dynamic creation of workflows using traits; design patterns, such as the cake pattern, can be used to do this. If you enjoyed this excerpt, be sure to check out the book Scala for Machine Learning, Second Edition to gain solid foundational knowledge in machine learning with Scala.

Tinkering with ticks in Matplotlib 2.0

Sugandha Lahoti
13 Dec 2017
6 min read
[box type="note" align="" class="" width=""]This is an excerpt from the book Matplotlib 2.x By Example written by Allen Chi Shing Yu, Claire Yik Lok Chung, and Aldrin Kay Yuen Yim. The book covers the basic know-how of creating and customizing plots with Matplotlib, and will help you learn to visualize geographical data on maps and implement interactive charts.[/box]

This article talks about how you can manipulate ticks in Matplotlib 2.0. It includes steps to adjust tick spacing, customize tick formats, try out the ticker locators and formatters, and rotate tick labels.

What are ticks?

Ticks are dividers on an axis that help readers locate the coordinates. Tick labels allow estimation of values or, sometimes, labeling of a data series, as in bar charts and box plots.

Adjusting tick spacing

Tick spacing can be adjusted by calling the locator methods:

ax.xaxis.set_major_locator(xmajorLocator)
ax.xaxis.set_minor_locator(xminorLocator)
ax.yaxis.set_major_locator(ymajorLocator)
ax.yaxis.set_minor_locator(yminorLocator)

Here, ax refers to an axes object in a Matplotlib figure. Since set_major_locator() or set_minor_locator() cannot be called from the pyplot interface but requires an axis, we call pyplot.gca() to get the current axes. We can also store a figure and its axes as variables at initiation, which is especially useful when we want multiple axes.

Removing ticks
NullLocator: No ticks

Drawing ticks in multiples
Spacing ticks in multiples of a given number is the most intuitive way. This can be done using MultipleLocator, which spaces ticks in multiples of a given value.

Automatic tick settings
MaxNLocator: Finds the maximum number of ticks that will display nicely
AutoLocator: MaxNLocator with simple defaults
AutoMinorLocator: Adds minor ticks uniformly when the axis is linear

Setting ticks by the number of data points
IndexLocator: Sets ticks by index (x = range(len(y)))

Setting the scaling of ticks by mathematical functions
LinearLocator: Linear scale
LogLocator: Log scale
SymmetricalLogLocator: Symmetrical log scale (log with a range of linearity)
LogitLocator: Logit scaling

Locating ticks by datetime
There is a series of locators dedicated to displaying date and time:
MinuteLocator: Locate minutes
HourLocator: Locate hours
DayLocator: Locate days of the month
WeekdayLocator: Locate days of the week
MonthLocator: Locate months, for example, 8 for August
YearLocator: Locate years in multiples
RRuleLocator: Locate using matplotlib.dates.rrulewrapper; rrulewrapper is a simple wrapper around dateutil.rrule that allows almost arbitrary date tick specifications
AutoDateLocator: On autoscale, this class picks the best MultipleDateLocator to set the view limits and the tick locations
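As a quick illustration of the date locators listed above (not from the book), the following minimal sketch places a major tick at the start of every month and a minor tick on every Monday; the data here are made up:

import datetime
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

# Hypothetical daily data covering half a year
dates = [datetime.date(2017, 1, 1) + datetime.timedelta(days=i) for i in range(180)]
values = [i ** 0.5 for i in range(180)]

fig, ax = plt.subplots()
ax.plot(dates, values)

# One major tick per month, labelled with month and year
ax.xaxis.set_major_locator(mdates.MonthLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%b %Y'))
# Weekly minor ticks (every Monday), left unlabelled
ax.xaxis.set_minor_locator(mdates.WeekdayLocator(byweekday=mdates.MO))

plt.show()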
Customizing tick formats

Tick formatters control the style of tick labels. They can be called to set the major and minor tick formats on the x and y axes as follows:

ax.xaxis.set_major_formatter(xmajorFormatter)
ax.xaxis.set_minor_formatter(xminorFormatter)
ax.yaxis.set_major_formatter(ymajorFormatter)
ax.yaxis.set_minor_formatter(yminorFormatter)

Removing tick labels
NullFormatter: No tick labels

Fixing labels
FixedFormatter: Labels are set manually

Setting labels with strings
IndexFormatter: Take labels from a list of strings
StrMethodFormatter: Use the string format method

Setting labels with user-defined functions
FuncFormatter: Labels are set by a user-defined function

Formatting axes by numerical values
ScalarFormatter: The format string is automatically selected for scalars by default

The following formatters set values for log axes:
LogFormatter: Basic log axis
LogFormatterExponent: Log axis using exponent = log_base(value)
LogFormatterMathtext: Log axis using exponent = log_base(value) with math text
LogFormatterSciNotation: Log axis with scientific notation
LogitFormatter: Probability formatter

Trying out the ticker locator and formatter

To demonstrate the ticker locator and formatter, here we use Netflix subscriber data as an example. Business performance is often measured seasonally, and television shows are even more "seasonal". Can we show this better on the timeline?

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

"""
Numbers of Netflix streaming subscribers from 2012-2017.
Data were obtained from Statista
(https://www.statista.com/statistics/250934/quarterly-number-of-netflix-streaming-subscribers-worldwide/)
on May 10, 2017. The data were originally published by Netflix in April 2017.
"""

# Prepare the data set
x = range(2011, 2018)
y = [26.48, 27.56, 29.41, 33.27, 36.32, 37.55, 40.28, 44.35,
     48.36, 50.05, 53.06, 57.39, 62.27, 65.55, 69.17, 74.76, 81.5,
     83.18, 86.74, 93.8, 98.75]  # quarterly subscriber count in millions

# Plot lines with different line styles
plt.plot(y, '^', label='Netflix subscribers', ls='-')

# get current axes and store it to ax
ax = plt.gca()

# set ticks in multiples for both labels
ax.xaxis.set_major_locator(ticker.MultipleLocator(4))  # set major marks every 4 quarters, i.e. once a year
ax.xaxis.set_minor_locator(ticker.MultipleLocator(1))  # set minor marks for each quarter
ax.yaxis.set_major_locator(ticker.MultipleLocator(10))
# ax.yaxis.set_minor_locator(ticker.MultipleLocator(2))

# label the start of each year by FixedFormatter
ax.get_xaxis().set_major_formatter(ticker.FixedFormatter(x))

plt.legend()
plt.show()

From this plot, we see that Netflix enjoyed fairly linear subscriber growth from 2012 to 2017. After formatting the x axis in a quarterly manner, the seasonal growth is easier to read: in 2016, Netflix did better in the latter half of the year. Any TV shows you watched in each season?

Rotating tick labels

A figure can get too crowded, or some tick labels may get skipped, when we have too many tick labels or when the label strings are too long.
We can solve this by rotating the ticks, for example with pyplot.xticks(rotation=60):

import matplotlib.pyplot as plt
import numpy as np
import matplotlib as mpl

mpl.style.use('seaborn')

techs = ['Google Adsense', 'DoubleClick.Net', 'Facebook Custom Audiences',
         'Google Publisher Tag', 'App Nexus']
y_pos = np.arange(len(techs))

# Number of websites using the advertising technologies
# Data were quoted from builtwith.com on May 8th 2017
websites = [14409195, 1821385, 948344, 176310, 283766]

plt.bar(y_pos, websites, align='center', alpha=0.5)

# set x-axis tick rotation
plt.xticks(y_pos, techs, rotation=25)
plt.ylabel('Live site count')
plt.title('Online advertising technologies usage')
plt.show()

Use pyplot.tight_layout() to avoid image clipping. Rotated labels can sometimes be clipped if you save the figure with pyplot.savefig(); calling pyplot.tight_layout() before pyplot.savefig() ensures a complete image output.

We saw how ticks can be adjusted, customized, rotated, and formatted in Matplotlib 2.0 for easy readability, labelling, and estimation of values. To become well-versed with Matplotlib for your day-to-day work, check out the book Matplotlib 2.x By Example.
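For reference, a minimal sketch of the tight_layout-before-savefig pattern mentioned above (the data and file name are arbitrary):

import matplotlib.pyplot as plt

plt.bar([0, 1, 2], [5, 3, 7])
plt.xticks([0, 1, 2], ['a fairly long label', 'another long label', 'yet another'], rotation=25)
plt.tight_layout()                # make room for the rotated tick labels
plt.savefig('rotated_ticks.png')  # the saved image is no longer clipped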

How to write effective Stored Procedures in PostgreSQL

Amey Varangaonkar
12 Dec 2017
8 min read
[box type="note" align="" class="" width=""]This article is an excerpt from the book Mastering PostgreSQL 9.6 written by Hans-Jürgen Schönig. PostgreSQL is an open source database used not only for handling large datasets (big data) but also as a JSON document database. It finds applications in the software as well as the web domains. This book will enable you to build better PostgreSQL applications and administer databases more efficiently.[/box]

In this article, we explain the concept of stored procedures, and how to write them effectively in PostgreSQL 9.6.

When it comes to stored procedures, PostgreSQL differs quite significantly from other database systems. Most database engines force you to use a certain programming language to write server-side code: Microsoft SQL Server offers Transact-SQL, while Oracle encourages you to use PL/SQL. PostgreSQL does not force you to use a certain language; it allows you to decide on what you know best and what you like best.

The reason PostgreSQL is so flexible is actually quite interesting in a historical sense. Many years ago, one of the most well-known PostgreSQL developers (Jan Wieck), who had written countless patches back in its early days, came up with the idea of using TCL as the server-side programming language. The trouble was simple: nobody wanted to use TCL and nobody wanted to have this stuff in the database engine. The solution to the problem was to make the language interface so flexible that basically any language can be integrated with PostgreSQL easily. Then, the CREATE LANGUAGE clause was born:

test=# \h CREATE LANGUAGE
Command: CREATE LANGUAGE
Description: define a new procedural language
Syntax:
CREATE [ OR REPLACE ] [ PROCEDURAL ] LANGUAGE name
CREATE [ OR REPLACE ] [ TRUSTED ] [ PROCEDURAL ] LANGUAGE name
    HANDLER call_handler [ INLINE inline_handler ] [ VALIDATOR valfunction ]

Nowadays, many different languages can be used to write stored procedures. The flexibility added to PostgreSQL back in the early days has really paid off, and you can choose from a rich set of programming languages.

How exactly does PostgreSQL handle languages? If you take a look at the syntax of the CREATE LANGUAGE clause, you will see a couple of keywords:

HANDLER: This function is the glue between PostgreSQL and any external language you want to use. It is in charge of mapping PostgreSQL data structures to whatever is needed by the language and helps to pass the code around.
VALIDATOR: This is the policeman of the infrastructure. If it is available, it is in charge of delivering tasty syntax errors to the end user. Many languages are able to parse the code before actually executing it; PostgreSQL can use that to tell you whether a function is correct when you create it. Unfortunately, not all languages can do this, so in some cases you will still be left with problems showing up at runtime.
INLINE: If it is present, PostgreSQL will be able to run anonymous code blocks using this function.

The anatomy of a stored procedure

Before actually digging into a specific language, I want to talk a bit about the anatomy of a typical stored procedure. For demo purposes, I have written a function that just adds up two numbers:

test=# CREATE OR REPLACE FUNCTION mysum(int, int)
RETURNS int AS
' SELECT $1 + $2; '
LANGUAGE 'sql';
CREATE FUNCTION

The first thing you can see is that the procedure is written in SQL. PostgreSQL has to know which language we are using, so we have to specify that in the definition.
Note that the code of the function is passed to PostgreSQL as a string ('...'). That is noteworthy because it allows a function to become a black box to the execution machinery. In other database engines, the code of the function is not a string but is directly attached to the statement. This simple abstraction layer is what gives the PostgreSQL function manager all its power.

Inside the string, you can basically use everything the programming language of your choice has to offer. In my example, I am simply adding up the two numbers passed to the function; for this example, two integer variables are in use. The important part here is that PostgreSQL provides you with function overloading. In other words, the mysum(int, int) function is not the same as the mysum(int8, int8) function; PostgreSQL sees these as two distinct functions. Function overloading is a nice feature; however, you have to be very careful not to accidentally deploy too many functions if your parameter list happens to change from time to time. Always make sure that functions that are no longer needed are really deleted. The CREATE OR REPLACE FUNCTION clause will not change the parameter list; you can therefore use it only if the signature does not change. Otherwise, it will either error out or simply deploy a new function.

Let's run the function:

test=# SELECT mysum(10, 20);
 mysum
-------
    30
(1 row)

The result is not really surprising.

Introducing dollar quoting

Passing code to PostgreSQL as a string is very flexible. However, using single quotes can be an issue: in many programming languages, single quotes show up frequently, and to be able to use them, people have to escape them when passing the string to PostgreSQL. For many years this was the standard procedure. Fortunately, those old times have passed, and new means of passing code to PostgreSQL are available:

test=# CREATE OR REPLACE FUNCTION mysum(int, int)
RETURNS int AS
$$
  SELECT $1 + $2;
$$ LANGUAGE 'sql';
CREATE FUNCTION

The solution to the problem of quoting strings is called dollar quoting. Instead of using quotes to start and end strings, you can simply use $$. Currently, I am only aware of two languages that have assigned a meaning to $$: in Perl as well as in bash scripts, $$ represents the process ID. To overcome even this little obstacle, you can use $almost_anything$ to start and end the string. The following example shows how that works (the tag name used here is arbitrary, as long as the opening and closing tags match):

test=# CREATE OR REPLACE FUNCTION mysum(int, int)
RETURNS int AS
$body$
  SELECT $1 + $2;
$body$ LANGUAGE 'sql';
CREATE FUNCTION

All this flexibility allows you to overcome the problem of quoting once and for all. As long as the start string and the end string match, there won't be any problems left.

Making use of anonymous code blocks

So far, you have learned to write the most simplistic stored procedures possible, and you have learned to execute code. However, there is more to code execution than full-blown stored procedures. In addition to full-blown procedures, PostgreSQL allows the use of anonymous code blocks. The idea is to run code that is needed only once, which is especially useful for administrative tasks. Anonymous code blocks don't take parameters and are not permanently stored in the database, as they don't have a name anyway.
Here is a simple example:

test=# DO $$
BEGIN
  RAISE NOTICE 'current time: %', now();
END; $$ LANGUAGE 'plpgsql';
NOTICE:  current time: 2016-12-12 15:25:50.678922+01
CONTEXT:  PL/pgSQL function inline_code_block line 3 at RAISE
DO

In this example, the code only issues a message and quits. Again, the code block has to know which language it uses, and the string is again passed to PostgreSQL using simple dollar quoting.

Using functions and transactions

As you know, everything that PostgreSQL exposes in user land is a transaction, and the same applies when you are writing stored procedures. The procedure is always part of the transaction you are in; it is not autonomous, it is just like an operator or any other operation. Here is an example: all three function calls happen in the same transaction.

This is important to understand because it implies that you cannot do much transactional flow control inside a function. Suppose the second function call commits; what would happen in such a case? It cannot work. However, Oracle has a mechanism that allows for autonomous transactions. The idea is that even if a transaction rolls back, some parts might still be needed and should be kept. The classical example is as follows:

Start a function to look up secret data
Add a log line recording that somebody has accessed this important secret data
Commit the log line but roll back the change
You still want to know that somebody attempted to change the data

To solve problems like this one, autonomous transactions can be used. The idea is to be able to commit a transaction inside the main transaction independently. In this case, the entry in the log table will prevail while the change will be rolled back.

As of PostgreSQL 9.6, autonomous transactions are not available. However, I have already seen patches floating around that implement this feature; we will see when they make it to the core. To give you an impression of how things will most likely work, here is a code snippet based on the first patches:

AS $$
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  FOR i IN 0..9 LOOP
    START TRANSACTION;
    INSERT INTO test1 VALUES (i);
    IF i % 2 = 0 THEN
      COMMIT;
    ELSE
      ROLLBACK;
    END IF;
  END LOOP;
  RETURN 42;
END;
$$;
...

The point of this example is that we can decide on the fly whether to commit or to roll back the autonomous transaction.

More about stored procedures, including how to write them in different languages and how to improve their performance, can be found in the book Mastering PostgreSQL 9.6. Make sure you order your copy now!
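As an illustrative aside (not from the book), the following Python sketch calls the mysum function defined above from client code; with autocommit off, all three calls run inside one transaction, as described in the section on functions and transactions. The connection parameters are assumptions for a local test database.

import psycopg2

# Assumed connection parameters; adjust for your own setup.
conn = psycopg2.connect(dbname="test", user="postgres", password="postgres", host="localhost")
cur = conn.cursor()

# With autocommit disabled (the psycopg2 default), these three calls
# share a single transaction, just like the three calls discussed above.
for a, b in [(10, 20), (1, 2), (40, 2)]:
    cur.execute("SELECT mysum(%s, %s)", (a, b))
    print(cur.fetchone()[0])

conn.commit()   # the transaction ends here
cur.close()
conn.close()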

How to integrate Sequence Generator transformation in Informatica PowerCenter 10.x

Savia Lobo
12 Dec 2017
7 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book by Rahul Malewar titled Learning Informatica PowerCenter 10.x. This book will guide you through the core functionalities offered by Informatica PowerCenter version 10.x.[/box]

Our article covers how Sequence Generators are used to generate primary key values or a sequence of unique numbers. It also showcases the associated ports and the properties of Sequence Generators.

A Sequence Generator transformation is used to generate a sequence of unique numbers. The unique values are generated based on the properties defined in the Sequence Generator transformation. A sample mapping showing the Sequence Generator transformation is shown in the following screenshot:

As you can notice in the mapping, the Sequence Generator transformation does not have any input port. You need to define the Start value, the Increment by value, and the End value in the properties; based on these properties, the Sequence Generator generates values. In the preceding mapping, as soon as the first record enters the target from the Source Qualifier transformation, NEXTVAL generates its first value, and so on for the other records. The Sequence Generator is built for generating numbers.

Ports of the Sequence Generator transformation

The Sequence Generator transformation has only two ports, NEXTVAL and CURRVAL, and both are output ports. You cannot add or delete any port in the Sequence Generator. It is recommended that you always use the NEXTVAL port first and only use the CURRVAL port once the NEXTVAL port is utilized. You can define the value of CURRVAL in the properties of the Sequence Generator transformation.

Consider a scenario where we pass two records to the transformation. In our case, we have defined the Start value as 0, the Increment by value as 1, and the End value as the default in the properties; the current value defined in the properties is 1. The following sequence of events occurs inside the Sequence Generator transformation:

When the first record enters the target from the Filter transformation, the current value, which is set to 1 in the properties of the Sequence Generator, is assigned to the NEXTVAL port. This gets loaded into the target through the connected link, so for the first record, SEQUENCE_NO in the target gets the value 1. The Sequence Generator increments CURRVAL internally and assigns that value to the current value, which is 2 in this case.
When the second record enters the target, the current value, now set to 2, is assigned to NEXTVAL. The Sequence Generator increments CURRVAL internally to make it 3. So at the end of processing record 2, the NEXTVAL port has value 2 and the CURRVAL port has value 3.

This cycle keeps running until you reach the end of the records from the source. How the NEXTVAL and CURRVAL ports behave can be a little confusing, but after reading the preceding example you should have a proper understanding of the process.

Properties of the Sequence Generator transformation

There are multiple values that you need to define inside the Sequence Generator transformation. Double-click on the Sequence Generator, and click on the Properties tab as shown in the following screenshot. Let's discuss the properties in detail:

Start Value: This comes into the picture only if you select the Cycle option in the properties.
It tells the Integration Service to start over from this value as soon as the End value is reached, when the Cycle option is checked. The default value is 0, and the maximum value is 9223372036854775806.

Increment By: This is the value by which you wish to increment the consecutive numbers from the NEXTVAL port. The default value is 1, and the maximum value is 2147483647.

End Value: This is the maximum value the Integration Service can generate. If the Sequence Generator reaches the End value and is not configured for Cycle, the session fails with a data overflow error. The maximum value is 9223372036854775807.

Current Value: This indicates the value assigned to the CURRVAL port. Specify the current value you wish to have for the first record. As mentioned in the preceding section, the CURRVAL value is assigned to NEXTVAL, and CURRVAL is then incremented. The CURRVAL port stores its value after the session is over, and when you run the session the next time, it starts incrementing from the stored value if you have not checked the Reset option. If you check the Reset option, the Integration Service resets the value to 1. Suppose you have not checked the Reset option and you pass 17 records: at the end of the session, the current value is set to 18 and stored internally, and the next run starts generating values from 18. The maximum value is 9223372036854775807.

Cycle: If you check this option, the Integration Service cycles through the defined sequence. If you do not check it, the process stops at the defined End value; if your source has more records than the End value allows, the session fails with an overflow error.

Number of Cached Values: This option indicates how many sequential values the Integration Service can cache at a time. It is useful only when you are using a reusable Sequence Generator transformation. The default value for a non-reusable transformation is 0, the default for a reusable transformation is 1,000, and the maximum value is 9223372036854775807.

Reset: If you do not check this option, the Integration Service stores the value from the previous run and generates values from the stored value. Otherwise, it resets to the defined current value and generates values from the initial value defined. This property is disabled for reusable Sequence Generator transformations.

Tracing Level: This indicates the level of detail you wish to write into the session log. We will discuss this option in detail later in the chapter.

With this, we have seen all the properties of the Sequence Generator transformation. Let's talk about its usage:

Generating primary/foreign keys: The Sequence Generator can be used to generate primary and foreign keys, which must be unique and not null; the Sequence Generator transformation does this easily, as seen in the preceding section. Connect the NEXTVAL port to both the targets for which you wish to generate the primary and foreign keys, as shown in the following screenshot:

Replace missing values: You can use the Sequence Generator transformation to replace missing values by using the IIF and ISNULL functions. Consider some data with a JOB_ID column for each employee, where some records do not have a JOB_ID in the table.
Use the following function to replace those missing values:

IIF( ISNULL(JOB_ID), NEXTVAL, JOB_ID)

The preceding function says: if JOB_ID is null, assign NEXTVAL; otherwise, keep JOB_ID as it is. The following screenshot indicates the preceding requirement:

With this, you have learned all the options present in the Sequence Generator transformation. If you liked the above article, check out our book Learning Informatica PowerCenter 10.x to explore Informatica PowerCenter's other powerful features, such as working with sources and targets, performance optimization, scheduling, deploying for processing, and managing your data at speed.
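As an illustrative aside (this is not Informatica code), the following toy Python sketch mimics the NEXTVAL/CURRVAL behaviour and the Start value, Increment by, End value, and Cycle properties described above, and applies the same logic as the IIF/ISNULL expression:

class ToySequenceGenerator:
    """Toy model of the Sequence Generator behaviour described in the article."""

    def __init__(self, start_value=0, increment_by=1, end_value=10, current_value=1, cycle=False):
        self.start_value = start_value
        self.increment_by = increment_by
        self.end_value = end_value
        self.currval = current_value
        self.cycle = cycle

    def nextval(self):
        # NEXTVAL receives the current value; CURRVAL is then incremented internally.
        value = self.currval
        if value > self.end_value:
            if not self.cycle:
                raise OverflowError("data overflow: end value reached")
            value = self.start_value           # start over when Cycle is enabled
        self.currval = value + self.increment_by
        return value


seq = ToySequenceGenerator(start_value=0, increment_by=1, end_value=100, current_value=1)
rows = [{"JOB_ID": "SA_REP"}, {"JOB_ID": None}, {"JOB_ID": "IT_PROG"}, {"JOB_ID": None}]
for row in rows:
    # Same idea as IIF(ISNULL(JOB_ID), NEXTVAL, JOB_ID)
    row["JOB_ID"] = seq.nextval() if row["JOB_ID"] is None else row["JOB_ID"]
print(rows)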

Working with Targets in Informatica PowerCenter 10.x

Savia Lobo
11 Dec 2017
8 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book by Rahul Malewar titled Learning Informatica PowerCenter 10.x. The book harnesses the power and simplicity of Informatica PowerCenter 10.x to build and manage efficient data management solutions.[/box]

This article guides you through working with the Target Designer in Informatica PowerCenter, which provides a user interface for the creation and customization of the logical target schema. PowerCenter is capable of working with different types of targets to load data:

Database: PowerCenter supports all the relational databases, such as Oracle, Sybase, DB2, Microsoft SQL Server, SAP HANA, and Teradata.
File: This includes flat files (fixed-width and delimited files), COBOL copybook files, XML files, and Excel files.
High-end applications: PowerCenter also supports applications such as Hyperion, PeopleSoft, TIBCO, WebSphere MQ, and so on.
Mainframe: Additional mainframe features, such as IBM DB2 OS/390, IBM DB2 OS/400, IDMS, IDMS-X, IMS, and VSAM, can be purchased.
Other: PowerCenter also supports Microsoft Access and external web services.

Let's start!

Working with target relational database tables - the Import option

Just as we discussed importing and creating source files and source tables, we need to work on target definitions. The process of importing a target table is exactly the same as importing a source table; the only difference is that you need to work in the Target Designer. You can import or create the table structure in the Target Designer. After you add these target definitions to the repository, you can use them in a mapping. Follow these steps to import a table target definition:

In the Designer, go to Tools | Target Designer to open the Target Designer.
Go to Targets | Import from Database.
From the ODBC data source button, select the ODBC data source that you created to access source tables. We have already added the data source while working on the sources.
Enter the username and password to connect to the database. Click on Connect.
In the Select tables list, expand the database owner and the TABLE heading. Select the tables you wish to import, and click on OK.

The structure of the selected tables will appear in the Target Designer workspace. As mentioned, the process is the same as importing a source in the Source Analyzer; refer to the preceding steps in case of issues.

Working with target flat files - the Import option

The process of importing a target file is exactly the same as importing a source file; the only difference is that you need to work in the Target Designer.

Working with delimited files

Follow these steps to work with delimited files:

In the Designer, go to Tools | Target Designer to open the Target Designer.
Go to Target | Import from File.
Browse for the files you wish to import as source files. The flat file import wizard will come up.
Select the file type: Delimited. Also, select the appropriate options to import the data from the second row and to import field names from the first line, as we did when importing the source. Click on Next.
Select the type of delimiter used in the file. Also, check the quotes option (No Quotes, Single Quotes, or Double Quotes) to handle quotes in text values. Click on Next.
Verify the column names, data type, and precision in the data view option. Click on Next.
Click on Finish to get the target file imported into the Target Designer.

We now move on to fixed-width files.
Working with fixed-width files

Follow these steps to work with fixed-width files:

In the Designer, go to Tools | Target Designer to open the Target Designer.
Go to Target | Import from File.
Browse for the files you wish to use as source files. The flat file import wizard will come up.
Select the file type: fixed width. Click on Next.
Set the width of each column as required by adding line breaks. Click on Next.
Specify the column names, data type, and precision in the data view option. Click on Next.
Click on Finish to get the target imported into the Target Designer.

Just as when working with sources, we move on to the Create option for targets.

Working with Target - the Create option

Apart from importing the file or table structure, we can manually create the target definition. When a sample target file or table structure is not available, we need to create the target structure manually. When we select the Create option, we need to define every detail related to the file or table manually, such as the name of the target, the type of the target, column names, column data types, column data sizes, indexes, constraints, and so on. When you import the structure, the import wizard automatically imports all these details. The steps are:

In the Designer, go to Tools | Target Designer to open the Target Designer.
Go to Target | Create.
Select the type of target you wish to create from the drop-down list. An empty target structure will appear in the Target Designer.
Double-click on the title bar of the target definition for the T_EMPLOYEES table. This opens the T_EMPLOYEES target definition, and a pop-up window displays all its properties. The Table tab shows the name of the table, the name of the owner, and the database type. You can add a comment in the Description section; usually, we keep the Business name empty.
Click on the Columns tab. This displays the column descriptions for the target. You can add, delete, or edit the columns.
Click on the Metadata Extensions tab (usually, this tab is kept blank). Here you can store metadata related to the target you created, such as personal details and reference details.
Click on Apply and then on OK.
Go to Repository | Save to save the changes to the repository.

Let's move on to something interesting now!

Working with Target - the Copy or Drag-Drop option

PowerCenter provides a very convenient way of reusing the existing components in the repository: the drag-drop feature. Using it, you can copy an existing source definition created earlier into the Target Designer in order to create a target definition with the same structure. Follow these steps:

Step 1: In the Designer, go to Tools | Target Designer to open the Target Designer.
Step 2: Drag the SRC_STUDENT source definition from the Navigator into the Target Designer workspace, as shown in the following screenshot:
Step 3: The Designer creates a target definition, SRC_STUDENT, with the same column definitions as the SRC_STUDENT source definition and the same database type:
Step 4: Double-click on the title bar of the SRC_STUDENT target definition to open it and edit properties if you wish to change them.
Step 5: Click on Rename:
Step 6: A new pop-up window allows you to enter the new name. Change the target definition name to TGT_STUDENT:
Step 7: Click on OK.
Step 8: Click on the Columns tab.
The target column definitions are the same as those of the SRC_STUDENT source definition. You can add new columns, delete existing columns, or edit the columns as per your requirements.
Step 9: Click on OK to save the changes and close the dialog box.
Step 10: Go to Repository | Save.

Creating a source definition from a target structure

With Informatica PowerCenter 10.1.0, you can now drag and drop a target definition from the Target Designer into the Source Analyzer. In the previous topic, we learned to drag and drop a source definition from the Source Analyzer and reuse it in the Target Designer. This feature was not available in earlier versions of Informatica, but it is available in the latest version. Follow the steps shown in the preceding section to drag and drop the target definition into the Source Analyzer.

In this article, we explained how to work with the Target Designer, one of the basic components of the PowerCenter Designer screen in Informatica 10.x, and additionally how to use a target definition from the Target Designer to create a source definition. If you liked the above article, check out our book, Learning Informatica PowerCenter 10.x. The book will let you explore how to implement various data warehouse and ETL concepts, and use PowerCenter 10.x components to build mappings, tasks, workflows, and so on.

Implementing a simple Generative Adversarial Network (GANs)

Amey Varangaonkar
11 Dec 2017
6 min read
[box type="note" align="" class="" width=""]The following excerpt is taken from Chapter 2 - Learning Features with Unsupervised Generative Networks of the book Deep Learning with Theano, written by Christopher Bourez. The book covers modeling and training effective deep learning models with Theano, a popular Python-based deep learning library.[/box]

In this article, we introduce you to the concept of Generative Adversarial Networks, a popular class of artificial intelligence algorithms used in unsupervised machine learning. Code files for this chapter are available for download towards the end of the post.

Generative adversarial networks are composed of two models that are trained alternately to compete with each other. The generator network G is optimized to reproduce the true data distribution by generating data that is difficult for the discriminator D to differentiate from real data. Meanwhile, the second network D is optimized to distinguish real data from synthetic data generated by G. Overall, the training procedure is similar to a two-player min-max game with the following objective function, where x is real data sampled from the real data distribution and z is the noise vector of the generative model:

In some ways, the discriminator and the generator can be seen as the police and the thief: to make sure the training works correctly, the police is trained twice as much as the thief.

Let's illustrate GANs with the case of images as data. In particular, let's again take our example from Chapter 2, Classifying Handwritten Digits with a Feedforward Network, about MNIST digits, and consider training a generative adversarial network to generate images, conditioned on the digit we want. The GAN method consists of training the generative model using a second model, the discriminative network, to discriminate input data between real and fake. In this case, we can simply reuse our MNIST image classification model as the discriminator, with two classes, real or fake, for the prediction output, and also condition it on the label of the digit that is supposed to be generated. To condition the net on the label, the digit label is concatenated with the inputs:

def conv_cond_concat(x, y):
    return T.concatenate([x, y*T.ones((x.shape[0], y.shape[1], x.shape[2], x.shape[3]))], axis=1)

def discrim(X, Y, w, w2, w3, wy):
    yb = Y.dimshuffle(0, 1, 'x', 'x')
    X = conv_cond_concat(X, yb)
    h = T.nnet.relu(dnn_conv(X, w, subsample=(2, 2), border_mode=(2, 2)), alpha=0.2)
    h = conv_cond_concat(h, yb)
    h2 = T.nnet.relu(batchnorm(dnn_conv(h, w2, subsample=(2, 2), border_mode=(2, 2))), alpha=0.2)
    h2 = T.flatten(h2, 2)
    h2 = T.concatenate([h2, Y], axis=1)
    h3 = T.nnet.relu(batchnorm(T.dot(h2, w3)))
    h3 = T.concatenate([h3, Y], axis=1)
    y = T.nnet.sigmoid(T.dot(h3, wy))
    return y

Note the use of two leaky rectified linear units, with a leak of 0.2, as the activation for the first two convolutions.
To generate an image given noise and a label, the generator network consists of a stack of deconvolutions, using an input noise vector z of 100 real numbers ranging from 0 to 1. To create a deconvolution in Theano, a dummy convolutional forward pass is created, whose gradient is used as the deconvolution:

def deconv(X, w, subsample=(1, 1), border_mode=(0, 0), conv_mode='conv'):
    img = gpu_contiguous(T.cast(X, 'float32'))
    kerns = gpu_contiguous(T.cast(w, 'float32'))
    desc = GpuDnnConvDesc(border_mode=border_mode, subsample=subsample,
        conv_mode=conv_mode)(gpu_alloc_empty(img.shape[0], kerns.shape[1],
        img.shape[2]*subsample[0], img.shape[3]*subsample[1]).shape, kerns.shape)
    out = gpu_alloc_empty(img.shape[0], kerns.shape[1],
        img.shape[2]*subsample[0], img.shape[3]*subsample[1])
    d_img = GpuDnnConvGradI()(kerns, img, out, desc)
    return d_img

def gen(Z, Y, w, w2, w3, wx):
    yb = Y.dimshuffle(0, 1, 'x', 'x')
    Z = T.concatenate([Z, Y], axis=1)
    h = T.nnet.relu(batchnorm(T.dot(Z, w)))
    h = T.concatenate([h, Y], axis=1)
    h2 = T.nnet.relu(batchnorm(T.dot(h, w2)))
    h2 = h2.reshape((h2.shape[0], ngf*2, 7, 7))
    h2 = conv_cond_concat(h2, yb)
    h3 = T.nnet.relu(batchnorm(deconv(h2, w3, subsample=(2, 2), border_mode=(2, 2))))
    h3 = conv_cond_concat(h3, yb)
    x = T.nnet.sigmoid(deconv(h3, wx, subsample=(2, 2), border_mode=(2, 2)))
    return x

Real data is given by the tuple (X, Y), while generated data is built from noise and the label (Z, Y):

X = T.tensor4()
Z = T.matrix()
Y = T.matrix()
gX = gen(Z, Y, *gen_params)
p_real = discrim(X, Y, *discrim_params)
p_gen = discrim(gX, Y, *discrim_params)

The generator and discriminator models compete during adversarial learning. The discriminator is trained to label real data as real (1) and generated data as generated (0), hence minimizing the following cost function:

d_cost = T.nnet.binary_crossentropy(p_real, T.ones(p_real.shape)).mean() + T.nnet.binary_crossentropy(p_gen, T.zeros(p_gen.shape)).mean()

The generator is trained to deceive the discriminator as much as possible. The training signal for the generator is provided by the discriminator network (p_gen):

g_cost = T.nnet.binary_crossentropy(p_gen, T.ones(p_gen.shape)).mean()

The same as usual follows: the cost with respect to the parameters of each model is computed, and training optimizes the weights of the two models alternately, with the discriminator updated twice as often. In the case of GANs, competition between the discriminator and the generator does not lead to decreases in each loss. Comparing samples from the first epoch with those from the 45th epoch, the generated examples look progressively closer to real ones.

Generative models, and especially generative adversarial networks, are currently among the trending areas of deep learning, and they have found their way into a few practical applications as well. For example, a generative model can be trained to generate the next most likely video frames by learning the features of the previous frames. Another popular example is search engines that predict the next likely word before it is even entered by the user, by studying the sequence of previously entered words.

If you found this excerpt useful, do check out the more comprehensive coverage of popular deep learning topics in our book Deep Learning with Theano.

[box type="download" align="" class="" width=""] Download files [/box]

3 great ways to leverage Structures for Machine Learning problems by Lise Getoor at NIPS 2017

Sugandha Lahoti
08 Dec 2017
11 min read
Lise Getoor is a professor in the Computer Science Department at the University of California, Santa Cruz, with a PhD in Computer Science from Stanford University. She has spent a lot of time studying machine learning, reasoning under uncertainty, databases, data science for social good, and artificial intelligence. This article brings our readers the highlights of Lise's keynote speech at NIPS 2017. It shows how structure can be unreasonably effective and describes ways to leverage structure in machine learning problems. After reading this article, head over to the NIPS Facebook page for the complete keynote. All images in this article come from Lise's presentation slides and do not belong to us.

Our ability to collect, manipulate, analyze, and act on vast amounts of data is having a profound impact on all aspects of society. Much of this data is heterogeneous in nature and interlinked in a myriad of complex ways. This data is multimodal (it has different kinds of entities), multi-relational (it has different links between things), and spatio-temporal (it involves space and time parameters). The keynote explores how we can exploit the structure that is in the input as well as the output of machine learning algorithms. A large number of structured problems exist in the fields of NLP and computer vision, computational biology, computational social science, knowledge graph extraction, and so on. According to Dan Roth, all interesting decisions are structured, i.e. there are dependencies between the predictions.

Most ML algorithms take this nicely structured data, flatten it, and put it in a matrix form, which is convenient for our algorithms. However, this creates several issues. The most fundamental one is that the matrix form assumes incorrect independence. Further, in the context of structured outputs, we are unable to apply collective reasoning about the predictions we made for different entries in the matrix. Therefore, we need ways to talk declaratively about how to transform the structure into features. This talk provides patterns, tools, and templates for dealing with structure in both inputs and outputs.

Lise covered three topics for solving structured problems: patterns, tools, and templates. Patterns are used for simple structured problems. Tools help in getting patterns to work and in creating tractable structured problems. Templates build on patterns and tools to solve bigger computational problems.

[dropcap]1[/dropcap] Patterns

Patterns are used for naively simple structured problems, but encoding them repeatedly can increase performance by 5 or 10%. We use logical rules to capture structure in patterns. These logical structures give an easy way of talking about entities and the links between entities, and they also tend to be interpretable. There are three basic patterns for structured prediction problems: collective classification, link prediction, and entity resolution.

[toggle title="To learn more about Patterns, open this section" state="close"]

Collective Classification

Collective classification is used for inferring the labels of nodes in a graph. The pattern for expressing this in logical rules is:

[box type="success" align="" class="" width=""]local - predictor (x, l) → label (x, l)
label (x, l) & link(x,y) → label (y,l)[/box]

It is called collective classification because the thing to predict, i.e. the label, occurs on both sides of the rule.
Let us consider a toy problem: we have to predict the unknown labels (marked in grey), i.e. which political party each unknown person will vote for. We apply logical rules to the problem.

Local rules:

"If X donates to party P, X votes for P"
"If X tweets party P slogans, X votes for P"

Relational rules:

"If X is linked to Y, and X votes for P, Y votes for P"
Votes(X, P) & Friends(X, Y) → Votes(Y, P)
Votes(X, P) & Spouse(X, Y) → Votes(Y, P)

The above example shows the local and relational rules applied to the problem based on collective classification. Adding a collective classifier like this to other problems yields significant improvement.

Link Prediction

Link prediction is used for predicting links or edges in a graph. The pattern for expressing this in logical rules is:

link(x, y) & similar(y, z) → link(x, z)

For example, consider a basic recommendation system. We apply the logical rules of link prediction to express likes and similarities, so that inferring one link gives us information about another link. The rules express:

"If user U likes item1, and item2 is similar to item1, user U likes item2"
Likes(U, I1) & SimilarItem(I1, I2) → Likes(U, I2)
"If user1 likes item I, and user2 is similar to user1, user2 likes item I"
Likes(U1, I) & SimilarUser(U1, U2) → Likes(U2, I)

Entity Resolution

Entity resolution is used for determining which nodes refer to the same underlying entity. Here we use local rules based on how similar things are, for instance, how similar their names or links are:

similar-name(x, y) → same(x, y)
similar-links(x, y) → same(x, y)

There are two collective rules. One is based on transitivity:

similar-name(x, y) → same(x, y)
similar-links(x, y) → same(x, y)
same(x, y) & same(y, z) → same(x, z)

The other is based on matching, i.e. dependence on both sides of the rule:

similar-name(x, y) → same(x, y)
similar-links(x, y) → same(x, y)
same(x, y) & !same(y, z) → !same(x, z)

The logical rules described above, though quite helpful, have certain disadvantages: they are intractable, can't handle inconsistencies, and can't represent degrees of similarity.

2. Tools

Tools help in making these structured problems tractable and in getting patterns to work. The tools come from the Statistical Relational Learning community. Lise adds another one to this mix of languages: PSL (Probabilistic Soft Logic), a probabilistic logical programming language, i.e. a declarative language for expressing collective inference problems. To know more, visit psl.linqs.org.

Predicate = relationship or property
Ground atom = (continuous) random variable
Weighted rules = capture dependency or constraint
PSL program = rules + input DB

PSL makes reasoning scalable by mapping logical inference to convex optimization. The language takes logical rules, assigns weights to them, and then uses them to define a distribution over the unknown variables. One of the striking features here is that the random variables have continuous values. The work done on the PSL language turns the disadvantages of logical rules into advantages: they become tractable, can handle inconsistencies, and can represent similarity.
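To make the idea of weighted rules over continuous truth values concrete, here is a minimal, illustrative Python sketch of the Łukasiewicz-style relaxation that PSL builds on. It is not code from the keynote or from the PSL library, and the rule, weight, and truth values below are made up for illustration.

# Illustrative sketch of the soft-logic relaxation behind PSL-style inference (not PSL library code).
# Ground atoms take continuous truth values in [0, 1]; a rule "body -> head" contributes a hinge
# penalty equal to its distance to satisfaction, max(0, I(body) - I(head)), which is convex.

def lukasiewicz_and(a, b):
    # Lukasiewicz conjunction of two truth values
    return max(0.0, a + b - 1.0)

def distance_to_satisfaction(body, head):
    return max(0.0, body - head)

# Rule: Votes(X, P) & Friends(X, Y) -> Votes(Y, P), with an assumed weight of 0.8
votes_x_p = 0.9     # inferred truth value for Votes(X, P)
friends_x_y = 1.0   # observed: X and Y are friends
votes_y_p = 0.4     # current guess for Votes(Y, P)

weight = 0.8
penalty = weight * distance_to_satisfaction(lukasiewicz_and(votes_x_p, friends_x_y), votes_y_p)
print(penalty)  # 0.8 * max(0, 0.9 - 0.4) = 0.4

# MAP inference minimizes the weighted sum of such penalties over all ground rules,
# which is the convex optimization discussed next.

The next paragraphs describe how PSL arrives at this same convex problem from three different directions.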
The key idea is to convert the clauses to concave functions; to be tractable, we relax the problem to a concave maximization. PSL has semantics from three different worlds: randomized algorithms from the computer science community, probabilistic graphical models from the machine learning community, and soft logic from the AI community.

Randomized algorithms

In this setting, we have weighted rules: nonnegative weights and a set of weighted logical rules in clausal form. Weighted Max-SAT is the classical problem where we attempt to find the assignment to the random variables that maximizes the weights of the satisfied rules. However, this problem is NP-hard. To overcome this, the randomized algorithms community converts the combinatorial optimization into a continuous optimization by introducing random variables that denote rounding probabilities.

Probabilistic graphical models

Graphical models represent the problem as a factor graph where we have random variables and rules that are essentially the potential functions. However, this problem is also NP-hard. We use the variational inference approximation technique to solve it. Here we introduce marginal distributions (μ) for the variables. We can then express a solution if we can find a set of globally consistent assignments for these marginal distributions. The problem is that, although we can express it as a linear program, there is an exponential number of constraints. We use techniques from the graphical models community, particularly local consistency relaxation (LCR), to convert this to a simpler problem. The idea is to relax the search over consistent marginals to a simpler set: we introduce local pseudo-marginals over joint potential states, and using the KKT conditions we can optimize out θ to derive a simplified projected LCR over μ. This approach shows a 16% improvement over canonical dual decomposition (MPLP).

Soft logic

In the soft logic interpretation of the convex optimization, we have random variables that denote a degree of truth or similarity, and we are essentially trying to minimize the amount of dissatisfaction in the rules.

Hence, with three different interpretations, i.e. randomized algorithms, graphical models, and soft logic, we get the same convex optimization. PSL essentially takes a PSL program and some input data and defines a convex optimization problem. PSL is open source; the code, data, and tutorials are available online at psl.linqs.org.

MAP inference in PSL translates into a convex optimization problem
Inference is further enhanced with state-of-the-art optimization and distributed graph processing paradigms
There are learning methods for rule weights and latent variables
Using PSL gives fast as well as accurate results in comparison with other approaches

3. Templates

Templates build on patterns to solve problems in bigger areas such as computational social sciences, knowledge discovery, and responsible data science and machine learning.

Computational Social Sciences

To explore this area we will apply a PSL model to debate stance classification. Let us consider a scenario of an online debate where the topic is climate change. We can use information in the text to figure out whether the people participating in the debate are pro or anti the topic.
We can also use information about the dialogue in the discourse, and we can build this into a PSL model based on the collective classification problem we saw earlier in the post. We get a significant rise in accuracy by using a PSL program (the results are shown in Lise's slides).

Knowledge Discovery

Using structure and making use of patterns in knowledge discovery really pays off. Although we have information extractors that can extract information, such as facts about entities and relationships, from the web and other sources, they are usually noisy, so it gets difficult to reason about them collectively to figure out which facts we actually want to add to our knowledge base. We can add structure to knowledge graph construction by:

Performing collective classification, link prediction, and entity resolution
Enforcing ontological constraints
Integrating knowledge source confidences
Using PSL to make it scalable

Lise presented the PSL program for knowledge graph identification, evaluated on three real-world knowledge graphs: NELL, MusicBrainz, and Freebase. Both statistical features and semantic constraints help, but combining them always wins.

Responsible Machine Learning

Understanding structure can be key to mitigating negative effects and can lead to responsible machine learning. The perils of ignoring structure in the machine learning space include overlooking privacy: for instance, many approaches consider only an individual's attribute data, and some don't take into account what can be inferred from the relational context. The other area is fairness. The structure here is often outside the data; it can be in the organization or in the socio-economic structure. To enable fairness we need to implement impartial decision making without bias and take structural patterns into account. Algorithmic discrimination is another area that can make use of structure. The fundamental structural pattern here is a feedback loop; having a way of encoding this feedback loop is important to eliminate algorithmic discrimination.

Conclusion

In this article, we saw ways of exploiting structures that can be tractable, along with some tools and templates for exploiting structure. The keynote also pointed to opportunities for machine learning methods that can mix:

Structured and unstructured approaches
Probabilistic and logical inference
Data-driven and knowledge-driven modeling

AI and machine learning developers need to build on the approaches described above to discover and exploit new structure and create compelling commercial, scientific, and societal applications.

How to implement discrete convolution on a 2D dataset

Pravin Dhandre
08 Dec 2017
7 min read
[box type="note" align="" class="" width=""]This article is an excerpt from the book by Rodolfo Bonnin titled Machine Learning for Developers. Surprisingly the question frequently asked by developers across the globe is, “How do I get started in Machine Learning?”. One reason could be attributed to the vastness of the subject area. This book is a systematic guide teaching you how to implement various Machine Learning techniques and their day-to-day application and development. [/box] In the tutorial given below,  we have implemented convolution in a practical example to see it applied to a real image and get intuitive ideas of its effect.We will use different kernels to detect high-detail features and execute subsampling operation to get optimized and brighter image. This is a simple intuitive implementation of discrete convolution concept by applying it to a sample image with different types of kernel. Let's import the required libraries. As we will implement the algorithms in the clearest possible way, we will just use the minimum necessary ones, such as NumPy: import matplotlib.pyplot as plt import imageio import numpy as np Using the imread method of the imageio package, let's read the image (imported as three equal channels, as it is grayscale). We then slice the first channel, convert it to a floating point, and show it using matplotlib: arr = imageio.imread("b.bmp") [:,:,0].astype(np.float) plt.imshow(arr, cmap=plt.get_cmap('binary_r')) plt.show() Now it's time to define the kernel convolution operation. As we did previously, we will simplify the operation on a 3 x 3 kernel in order to better understand the border conditions. apply3x3kernel will apply the kernel over all the elements of the image, returning a new equivalent image. Note that we are restricting the kernels to 3 x 3 for simplicity, and so the 1 pixel border of the image won't have a new value because we are not taking padding into consideration: class ConvolutionalOperation: def apply3x3kernel(self, image, kernel): # Simple 3x3 kernel operation newimage=np.array(image) for m in range(1,image.shape[0]-2): for n in range(1,image.shape[1]-2): newelement = 0 for i in range(0, 3): for j in range(0, 3): newelement = newelement + image[m - 1 + i][n - 1+ j]*kernel[i][j] newimage[m][n] = newelement return (newimage) As we saw in the previous sections, the different kernel configurations highlight different elements and properties of the original image, building filters that in conjunction can specialize in very high-level features after many epochs of training, such as eyes, ears, and doors. Here, we will generate a dictionary of kernels with a name as the key, and the coefficients of the kernel arranged in a 3 x 3 array. 
The Blur filter is equivalent to calculating the average of the 3 x 3 point neighborhood, Identity simply returns the pixel value as is, Laplacian is a classic derivative filter that highlights borders, and then the two Sobel filters will mark horizontal edges in the first case, and vertical ones in the second case:

kernels = {"Blur": [[1./16., 1./8., 1./16.], [1./8., 1./4., 1./8.], [1./16., 1./8., 1./16.]],
           "Identity": [[0, 0, 0], [0., 1., 0.], [0., 0., 0.]],
           "Laplacian": [[1., 2., 1.], [0., 0., 0.], [-1., -2., -1.]],
           "Left Sobel": [[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]],
           "Upper Sobel": [[1., 2., 1.], [0., 0., 0.], [-1., -2., -1.]]}

Let's generate a ConvolutionalOperation object and generate a comparative kernel graphical chart to see how they compare:

conv = ConvolutionalOperation()
plt.figure(figsize=(30,30))
fig, axs = plt.subplots(figsize=(30,30))
j = 1
for key, value in kernels.items():
    axs = fig.add_subplot(3, 2, j)
    out = conv.apply3x3kernel(arr, value)
    plt.imshow(out, cmap=plt.get_cmap('binary_r'))
    j = j + 1
plt.show()

In the final image you can clearly see how our kernels have detected several high-detail features of the image: in the first one, you see the unchanged image because we used the unit kernel, then the Laplacian edge finder, the left border detector, the upper border detector, and then the blur operator.

Having reviewed the main characteristics of the convolution operation for the continuous and discrete fields, we can conclude by saying that, basically, convolution kernels highlight or hide patterns. Depending on the trained or (in our example) manually set parameters, we can begin to discover many elements in the image, such as orientation and edges in different dimensions. We may also cover some unwanted details or outliers with blurring kernels, for example. Additionally, by piling layers of convolutions, we can even highlight higher-order composite elements, such as eyes or ears. This characteristic of convolutional neural networks is their main advantage over previous data-processing techniques: we can determine with great flexibility the primary components of a certain dataset, and represent further samples as a combination of these basic building blocks. Now it's time to look at another type of layer that is commonly used in combination with the former: the pooling layer.

Subsampling operation (pooling)

The subsampling operation consists of applying a kernel (of varying dimensions) and reducing the extension of the input dimensions by dividing the image into m x n blocks and taking one element to represent each block, thus reducing the image resolution by some determinate factor. In the case of a 2 x 2 kernel, the image size will be reduced by half. The most well-known operations are maximum (max pool), average (avg pool), and minimum (min pool). A 2 x 2 maxpool kernel applied to a one-channel 16 x 16 matrix just keeps the maximum value of the internal zone it covers. Now that we have seen this simple mechanism, let's ask ourselves, what's the main purpose of it? The main purpose of subsampling layers is related to the convolutional layers: to reduce the quantity and complexity of information while retaining the most important information elements. In other words, they build a compact representation of the underlying information. Now it's time to write a simple pooling operator.
It's much easier and more direct to write than a convolutional operator, and in this case we will only be implementing max pooling, which chooses the brightest pixel in each 2 x 2 vicinity and projects it to the final image:

class PoolingOperation:
    def apply2x2pooling(self, image, stride):  # Simple 2x2 kernel operation
        newimage = np.zeros((int(image.shape[0]/2), int(image.shape[1]/2)), np.float32)
        for m in range(1, image.shape[0]-2, 2):
            for n in range(1, image.shape[1]-2, 2):
                newimage[int(m/2), int(n/2)] = np.max(image[m:m+2, n:n+2])
        return newimage

Let's apply the newly created pooling operation; as you can see, the final image resolution is much more blocky, and the details, in general, are brighter:

plt.figure(figsize=(30,30))
pool = PoolingOperation()
fig, axs = plt.subplots(figsize=(20,10))
axs = fig.add_subplot(1, 2, 1)
plt.imshow(arr, cmap=plt.get_cmap('binary_r'))
out = pool.apply2x2pooling(arr, 1)
axs = fig.add_subplot(1, 2, 2)
plt.imshow(out, cmap=plt.get_cmap('binary_r'))
plt.show()

Here you can see the differences, even though they are subtle. The final image is of lower precision, and the chosen pixels, being the maximum of their neighborhood, produce a brighter image.

This simple implementation with various kernels illustrates the working mechanism of the discrete convolution operation on a 2D dataset. Using various kernels and the subsampling operation, the hidden patterns of the dataset are unveiled and the image is sharpened or brightened, producing a compact representation of the dataset. If you found this article interesting, do check out Machine Learning for Developers and get to know about the advancements in deep learning, adversarial networks, and popular programming frameworks to prepare yourself for the ubiquitous field of machine learning.
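One optional aside before moving on: if SciPy happens to be available, the hand-written apply3x3kernel loop above can be cross-checked against a library routine. This snippet is not part of the book excerpt; note that apply3x3kernel computes a cross-correlation (it does not flip the kernel), so scipy.ndimage.correlate is the matching call, and the untouched one-pixel border is excluded from the comparison.

# Optional cross-check of the manual loop against SciPy (assumes SciPy is installed).
import numpy as np
from scipy import ndimage

def cross_check(image, kernel):
    # ndimage.correlate applies the kernel without flipping it, matching apply3x3kernel.
    ref = ndimage.correlate(image, np.array(kernel), mode='constant')
    ours = ConvolutionalOperation().apply3x3kernel(image, kernel)
    # The manual loop leaves the 1-pixel border untouched, so compare interior pixels only.
    return np.allclose(ref[2:-2, 2:-2], ours[2:-2, 2:-2])

# Example: cross_check(arr, kernels["Blur"]) should return True.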

20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017

Aarthi Kumaraswamy
08 Dec 2017
9 min read
Kate Crawford is a Principal Researcher at Microsoft Research and a Distinguished Research Professor at New York University. She has spent the last decade studying the social implications of data systems, machine learning, and artificial intelligence. Her recent publications address data bias and fairness, and the social impacts of artificial intelligence, among others. This article attempts to bring our readers to Kate's brilliant keynote speech at NIPS 2017. It talks about different forms of bias in machine learning systems and the ways to tackle such problems. By the end of this article, we are sure you would want to listen to her complete talk on the NIPS Facebook page. All images in this article come from Kate's presentation slides and do not belong to us.

The rise of machine learning is every bit as far-reaching as the rise of computing itself. A vast new ecosystem of techniques and infrastructure is emerging in the field of machine learning, and we are just beginning to learn its full capabilities. But alongside the exciting things that people can do, some really concerning problems are arising. Forms of bias, stereotyping, and unfair determination are being found in machine vision systems, object recognition models, and in natural language processing and word embeddings. High-profile news stories about bias have been on the rise, from women being less likely to be shown high-paying jobs, to gender bias in object recognition datasets like MS COCO, to racial disparities in education AI systems.

20 lessons on bias in machine learning systems

Interest in the study of bias in ML systems has grown exponentially in just the last 3 years. It has more than doubled in the last year alone.
We are speaking different languages when we talk about bias, i.e. it means different things to different people and groups, for example in law, in machine learning, and in geometry. Read more on this in the 'What is bias?' section below.
In the simplest terms, for the purpose of understanding fairness in machine learning systems, we can consider 'bias' as a skew that produces a type of harm.
Bias in MLaaS is harder to identify and also to correct, as we do not build these systems from scratch and are not always privy to how they work under the hood.
Data is not neutral. Data cannot always be neutralized.
There is no silver bullet for solving bias in ML and AI systems.
There are two main kinds of harms caused by bias: harms of allocation and harms of representation. The former takes an economically oriented view, while the latter is more cultural.
Allocative harm is when a system allocates or withholds an opportunity or resource from certain groups. To know more, jump to the 'Harms of allocation' section.
When systems reinforce the subordination of certain groups along the lines of identity like race, class, and gender, they cause representational harm. This is further elaborated in the 'Harms of representation' section.
Harm can further be classified into five types: stereotyping, recognition, denigration, under-representation, and ex-nomination.
There are many technical approaches to dealing with the problem of bias in a training dataset, such as scrubbing to neutral and demographic sampling, among others, but they all still suffer from bias (for example, who decides what is 'neutral'?).
When we consider bias purely as a technical problem, which is hard enough, we are already missing part of the picture.
Bias in systems is commonly caused by bias in training data.
We can only gather data about the world we have, which has a long history of discrimination, so the default tendency of these systems would be to reflect our darkest biases.
Structural bias is a social issue first and a technical issue second. If we are unable to consider both and see it as inherently socio-technical, these problems of bias are going to continue to plague the ML field.
Instead of just thinking about ML contributing to decision making in, say, hiring or criminal justice, we also need to think of the role of ML in the harmful representation of human identity.
While technical responses to bias are very important and we need more of them, they won't get us all the way to addressing representational harms to group identity.
Representational harms often exceed the scope of individual technical interventions.
Developing theoretical fixes that come from the tech world for allocational harms is necessary but not sufficient.
The ability to move outside our disciplinary boundaries is paramount to cracking the problem of bias in ML systems.
Every design decision has consequences and powerful social implications.
Datasets reflect not only the culture but also the hierarchy of the world that they were made in. Our current datasets stand on the shoulders of older datasets, building on earlier corpora.
Classifications can be sticky, and sometimes they stick around longer than we intend them to, even when they are harmful.
ML can be deployed easily in contentious forms of categorization that could have serious repercussions, e.g. a supposedly free-of-bias criminality detector that has physiognomy at the heart of how it predicts the likelihood of a person being a criminal based on their appearance.

What is bias?

14th century: an oblique or diagonal line
16th century: undue prejudice
20th century: systematic differences between the sample and a population
In ML: underfitting (low variance and high bias) vs overfitting (high variance and low bias)
In law: judgments based on preconceived notions or prejudices, as opposed to the impartial evaluation of facts. Impartiality underpins jury selection, due process, limitations placed on judges, etc.

Bias is hard to fix with model validation techniques alone, so you can have an unbiased system in an ML sense producing a biased result in a legal sense. Bias is a skew that produces a type of harm.

Where does bias come from?

Commonly from training data. It can be incomplete, biased, or otherwise skewed. It can draw from non-representative samples that are wholly defined before use. Sometimes it is not obvious because it was constructed in a non-transparent way. Beyond human labeling, there are other ways that human biases and cultural assumptions can creep in, ending up in the exclusion or overrepresentation of subpopulations. Case in point: stop-and-frisk program data used as training data by an ML system; this dataset was biased due to systemic racial discrimination in policing.

Harms of allocation

The majority of the literature understands bias as harms of allocation. Allocative harm is when a system allocates or withholds an opportunity or resource from certain groups. It is primarily an economically oriented view, e.g. who gets a mortgage, a loan, and so on. Allocation is immediate; it is a time-bound moment of decision making, and it is readily quantifiable. In other words, it raises questions of fairness and justice in discrete and specific transactions.

Harms of representation

It gets tricky when it comes to systems that represent society but don't allocate resources.
These are representational harms: when systems reinforce the subordination of certain groups along the lines of identity like race, class, and gender. It is a long-term process that affects attitudes and beliefs. It is harder to formalize and track, and it is a diffused depiction of humans and society. It is at the root of all of the other forms of harm, including allocative harm.

Five types of representational harm

Source: Kate Crawford's NIPS 2017 keynote presentation, The Trouble with Bias

Stereotyping
A 2016 paper on word embeddings looked at gender-stereotypical associations and the distances between gender pronouns and occupations. Google Translate swaps the genders of pronouns even when translating from a gender-neutral language like Turkish.

Recognition
This is when a group is erased or made invisible by a system. In a narrow sense, it is purely a technical problem, i.e. does a system recognize a face inside an image or video? It is a failure to recognize someone's humanity. In the broader sense, it is about respect, dignity, and personhood; the broader harm is whether the system works for you. For example, a system could not process darker skin tones, Nikon's camera software mischaracterized Asian faces as blinking, and HP's algorithms had difficulty recognizing anyone with a darker shade of pale.

Denigration
This is when people use culturally offensive or inappropriate labels, e.g. the autosuggestions that appeared when people typed 'jews should'.

Under-representation
An image search for 'CEOs' yielded only one woman CEO, at the bottom-most part of the page; the majority were white men.

Ex-nomination

Technical responses to the problem of bias

Improve accuracy
Blacklists
Scrub to neutral
Demographics or equal representation
Awareness

Politics of classification

Where did identity categories come from? What if bias is a deeper and more consistent issue with classification?

Source: Kate Crawford's NIPS 2017 keynote presentation, The Trouble with Bias

The fact that bias issues keep creeping into our systems and manifesting in new ways suggests that we must understand that classification is not simply a technical issue but a social issue as well, one that has real consequences for the people being classified. There are two themes:

Classification is always a product of its time
We are currently in the biggest experiment of classification in human history

For example, the Labeled Faces in the Wild dataset is 77.5% male and 83.5% white; an ML system trained on this dataset will work best for that group.

What can we do to tackle these problems?

Start working on fairness forensics: test our systems, e.g. build pre-release trials to see how a system is working across different populations, and track the life cycle of a training dataset to know who built it and what the demographic skews in that dataset might be.
Start taking interdisciplinarity seriously: work with people who are not in our field but have deep expertise in other areas, e.g. the FATE (Fairness, Accountability, Transparency, and Ethics) group at Microsoft Research, and build spaces for collaboration like the AI Now Institute.
Think harder about the ethics of classification.

The ultimate question for fairness in machine learning is this: who is going to benefit from the system we are building, and who might be harmed?
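As a practical footnote to the "fairness forensics" point above, testing a system across different populations can begin with something as simple as slicing its decisions by group. The sketch below is a generic illustration of that idea, not code from the talk; the group labels, predictions, and data are made up.

# Illustrative "fairness forensics" starter: per-group selection and error rates.
import numpy as np

def per_group_rates(y_true, y_pred, groups):
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        report[g] = {
            "n": int(mask.sum()),
            "selection_rate": float(y_pred[mask].mean()),            # how often the model says "yes"
            "error_rate": float((y_pred[mask] != y_true[mask]).mean()),
        }
    return report

# Made-up binary decisions for two hypothetical groups.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
groups = np.array(["group_a"] * 4 + ["group_b"] * 4)
print(per_group_rates(y_true, y_pred, groups))
# Large gaps between groups are a signal to go back and examine the training data and the model.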

Top Research papers showcased at NIPS 2017 - Part 2

Sugandha Lahoti
07 Dec 2017
8 min read
Continuing from where we left off in our previous post, we are back with a quick roundup of top research papers on machine translation, predictive modelling, image-to-image translation, and recommendation systems from NIPS 2017.

Machine Translation

In layman's terms, machine translation (MT) is the process by which computer software translates a text from one natural language to another. This year at NIPS, a large number of presentations focused on innovative ways of improving translations. Here are our top picks.

Value Networks: Improving beam search for better translation

Microsoft has ventured into translation tasks with the introduction of value networks in their paper "Decoding with Value Networks for Neural Machine Translation". Their prediction network improves beam search, a known weak spot of Neural Machine Translation (NMT). This new methodology, inspired by the success of AlphaGo, takes the source sentence x, the currently available decoding output y1, ..., y(t-1), and a candidate word w at step t as inputs, and predicts the long-term value (e.g., BLEU score) of the partial target sentence if it is completed by the NMT model. Experiments show that this approach significantly improves the translation accuracy of several translation tasks.

CoVe: Contextualizing word vectors for machine translation

Salesforce researchers have used a new approach to contextualize word vectors in their paper "Learned in Translation: Contextualized Word Vectors". A wide variety of common NLP tasks, namely sentiment analysis, question classification, entailment, and question answering, typically rely only on unsupervised word and character vectors. The paper instead uses a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation to contextualize word vectors. Their research shows that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors. For fine-grained sentiment analysis and entailment too, CoVe improves the performance of the baseline models to the state of the art.

Predictive Modelling

A lot of research showcased at NIPS was focused on improving the predictive capabilities of neural networks. Here is a quick look at the top presentations.

Deep ensembles for predictive uncertainty estimation

Bayesian solutions are most frequently used for quantifying predictive uncertainty in neural networks. However, these solutions can at times be computationally intensive, and they also require significant modifications to the training pipeline. DeepMind researchers have proposed an alternative to Bayesian NNs in their paper "Simple and scalable predictive uncertainty estimation using deep ensembles". Their proposed method is easy to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high-quality predictive uncertainty estimates.

VAIN: Scaling multi-agent predictive modelling

Multi-agent predictive modeling predicts the behavior of large physical or social systems through the interactions between various agents. However, most approaches come at a prohibitive cost; for instance, Interaction Networks (INs) were not able to scale with the number of interactions in the system (typically quadratic or higher order in the number of agents). Facebook researchers have introduced VAIN, a simple attentional mechanism for multi-agent predictive modeling that scales linearly with the number of agents. It can achieve similar accuracy but at a much lower cost.
You can read more about the mechanism in their paper "VAIN: Attentional Multi-agent Predictive Modeling".

PredRNN: RNNs for predictive learning with ST-LSTM

Another paper, titled "PredRNN: Recurrent Neural Networks for Predictive Learning using Spatiotemporal LSTMs", showcased a new predictive recurrent neural network. This architecture is based on the idea that spatiotemporal predictive learning should memorize both spatial appearances and temporal variations in a unified memory pool. The core of this RNN is a new Spatiotemporal LSTM (ST-LSTM) unit that extracts and memorizes spatial and temporal representations simultaneously. Memory states are allowed to zigzag in two directions: across stacked RNN layers vertically and through all RNN states horizontally. PredRNN is a more general framework that can be easily extended to other predictive learning tasks by integrating with other architectures. It achieved state-of-the-art prediction performance on three video prediction datasets.

Recommendation Systems

New research was presented by Google and Microsoft to address the cold-start problem and to build robust and powerful recommendation systems.

Off-policy evaluation for slate recommendation

Microsoft researchers have studied and evaluated policies that recommend an ordered set of items in their paper "Off-Policy Evaluation For Slate Recommendation". General recommendation approaches require large amounts of logged data to evaluate whole-page metrics that depend on multiple recommended items, which happens when showing ranked lists; such ranked lists are known as slates. Microsoft researchers have developed a technique for evaluating page-level metrics of such policies offline using logged past data, reducing the need for online A/B tests. Their method models the observed quality of the recommended set as an additive decomposition across items. It fits many realistic measures of quality and shows exponential savings in the amount of required data compared with other off-policy evaluation approaches.

Meta-learning for cold-start recommendations

Matrix factorization techniques for product recommendations, although efficient, suffer from serious cold-start problems. The cold-start problem concerns recommendations for users or items with no or little past history, i.e. new users or new items. Providing recommendations in such cases is difficult for recommendation models because their learning and predictive ability are limited. Google researchers have come up with a meta-learning strategy to address item cold-start when new items arrive continuously. Their paper "A Meta-Learning Perspective on Cold-Start Recommendations for Items" presents two deep neural network architectures that implement this meta-learning strategy. The first architecture learns a linear classifier whose weights are determined by the item history, while the second architecture learns a neural network whose biases are instead adjusted. On evaluating this technique on the real-world problem of tweet recommendation, the proposed techniques significantly beat the MF baseline.

Image-to-Image Translation

NIPS 2017 exhibited a new image-to-image translation system, a model to hide images within images, and the use of feature transforms to improve universal style transfer.

Unsupervised image-to-image translation

Researchers at Nvidia have proposed an unsupervised image-to-image translation framework based on Coupled GANs.
Unsupervised image-to-image translation learns a joint distribution of images in different domains using images from the marginal distributions in the individual domains. However, there exists an infinite set of joint distributions that can arise from the given marginal distributions, so one can infer nothing about the joint distribution from the marginal distributions without additional assumptions. Their paper "Unsupervised Image-to-Image Translation Networks" uses a shared latent space assumption to address this issue. Their method presents high-quality image translation results on various challenging unsupervised image translation tasks, such as street scene image translation, animal image translation, and face image translation.

Deep Steganography

Steganography is commonly used to unobtrusively hide a small message within the noisy regions of a larger image. Google researchers, in their paper "Hiding Images in Plain Sight: Deep Steganography", have demonstrated the successful application of deep learning to hiding images: they place a full-size color image within another image of the same size. They train deep neural networks to create the hiding and revealing processes, which are designed to work specifically as a pair. Their approach compresses and distributes the secret image's representation across all of the available bits, instead of encoding the secret message within the least significant bits of the carrier image. The system is trained on images drawn randomly from the ImageNet database and works well on natural images.

Improving universal style transfer on images

NIPS 2017 witnessed another paper aimed at improving universal style transfer, which is used for transferring arbitrary visual styles to content images. The paper "Universal Style Transfer via Feature Transforms" by Nvidia researchers highlights feature transforms as a simple yet effective method to tackle the limitations of existing feed-forward methods for universal style transfer, without training on any pre-defined styles. Existing feed-forward methods are mainly limited by an inability to generalize to unseen styles or by compromised visual quality. The paper embeds a pair of feature transforms, whitening and coloring, into an image reconstruction network. The whitening and coloring transforms reflect a direct matching of the feature covariance of the content image to a given style image. The algorithm can generate high-quality stylized images, with comparisons to a number of recent methods.

Key takeaways from NIPS 2017

The research papers covered in this and the previous post highlight that most organizations are at the forefront of machine learning and are actively exploring virtually all aspects of the field. Deep learning practices were also in trend: the conference focused on the current state and recent advances in deep learning, and a lot of talks and presentations were about industry-ready neural networks, suggesting a fast transition from research to industry. Researchers are also focusing on areas of language understanding, speech recognition, translation, visual processing, and prediction. Most of these techniques rely on using GANs as the backend. For live content coverage, you can visit NIPS' Facebook page.

Top Research papers showcased at NIPS 2017 - Part 1

Sugandha Lahoti
07 Dec 2017
6 min read
The ongoing 31st annual Conference on Neural Information Processing Systems (NIPS 2017) in Long Beach, California is scheduled from December 4-9, 2017. The 6-day conference is hosting a number of invited talks, demonstrations, tutorials, and paper presentations pertaining to the latest in machine learning, deep learning, and AI research. This year the conference has grown larger than life, with a record-high 3,240 submitted papers, 678 selected ones, and a completely sold-out event. Top researchers from Google, Microsoft, IBM, DeepMind, Facebook, and Amazon are among the prominent players who enthusiastically participated this year. Here is a quick roundup of some of the top research papers to date.

Generative Adversarial Networks

Generative Adversarial Networks are a hot topic of research at the ongoing NIPS conference. GANs offer an easy way to train DL algorithms by slashing the amount of labelled data required for training, making use of unlabelled data instead. Here are a few research papers on GANs.

Regularization can stabilize training of GANs

Microsoft researchers have proposed a new regularization approach to yield a stable GAN training procedure at low computational cost. Their new model overcomes a fundamental limitation of GANs that occurs due to a dimensional mismatch between the model distribution and the true distribution, which causes their density ratio and the associated f-divergence to be undefined. Their paper "Stabilizing Training of Generative Adversarial Networks through Regularization" turns GAN models into reliable building blocks for deep learning. They have also applied this to several datasets, including image generation tasks.

AdaGAN: Boosting GAN performance

Training GANs can at times be a hard task, and they can also suffer from the problem of missing modes, where the model is not able to produce examples in certain regions of the space. Google researchers have developed an iterative procedure called AdaGAN in their paper "AdaGAN: Boosting Generative Models", an approach inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. It adds a new component to a mixture model at every step by running a GAN algorithm on a re-weighted sample. The paper also addresses the problem of missing modes.

Houdini: Generating adversarial examples

The generation of adversarial examples is considered a critical milestone for evaluating and upgrading the robustness of learning in machines. However, current methods are confined to classification tasks and are unable to target the actual performance measure of the problem at hand. To tackle this issue, Facebook researchers have come up with a research paper titled "Houdini: Fooling Deep Structured Prediction Models", presenting a novel and flexible approach for generating adversarial examples distinctly tailor-made for the final performance measure of the task at hand, including combinatorial and non-decomposable tasks.

Stochastic hard attention for memory addressing in GANs

DeepMind researchers showcased a new method that uses stochastic hard attention to retrieve memory content in generative models. Their paper, titled "Variational memory addressing in generative models", was presented on the second day of the conference and is an advancement over the popular differentiable soft-attention mechanism. Their new technique allows developers to apply variational inference to memory addressing.
This leads to more precise memory lookups using target information, especially in models with large memory buffers and with many confounding entries in the memory.

Image and Video Processing

A lot of hype was also around developing sophisticated models and techniques for image and video processing. Here is a quick glance at the top presentations.

Fader Networks: Image manipulation through disentanglement

Facebook researchers have introduced Fader Networks in their paper titled "Fader Networks: Manipulating Images by Sliding Attributes". These fader networks use an encoder-decoder architecture to reconstruct images by disentangling their salient information and the values of particular attributes directly in a latent space. Disentanglement helps in manipulating these attributes to generate variations of pictures of faces while preserving their naturalness. This innovative approach results in much simpler training schemes and scales to manipulating multiple attributes jointly.

Visual interaction networks for video simulation

Another paper, titled "Visual Interaction Networks: Learning a Physics Simulator from Video", proposes a new neural-network model to learn the physics of objects without prior knowledge. DeepMind's Visual Interaction Network is used for video analysis and is able to infer the states of multiple physical objects from just a few frames of video. It then uses these to predict object positions many steps into the future, and it can also deduce the locations of invisible objects.

Transfer, Reinforcement, and Continual Learning

A lot of research is going on in the fields of transfer, reinforcement, and continual learning to make stable and powerful deep learning models. Here are a few research papers presented in this domain.

Two new techniques for transfer learning

Currently, a large set of input/output (I/O) examples is required for learning any underlying input-output mapping. By leveraging information from related tasks, researchers at Microsoft have addressed the problem of data and computation efficiency in program induction. Their paper "Neural Program Meta-Induction" uses two approaches for cross-task knowledge transfer. The first is portfolio adaptation, where a set of induction models is pretrained on a set of related tasks, and the best model is adapted towards the new task using transfer learning. The second is meta program induction, a k-shot learning approach that makes a model generalize to new tasks without requiring any additional training.

Hybrid Reward Architecture to solve the problem of generalization in reinforcement learning

A new paper from Microsoft, "Hybrid Reward Architecture for Reinforcement Learning", highlights a new method to address the generalization problem faced by a typical deep RL method. Hybrid Reward Architecture (HRA) takes a decomposed reward function as input and learns a separate value function for each component reward function. This is especially useful in domains where the optimal value function cannot easily be reduced to a low-dimensional representation. In the new approach, the overall value function is much smoother and can be more easily approximated by a low-dimensional representation, enabling more effective learning.

Gradient Episodic Memory to counter catastrophic forgetting in continual learning models

Continual learning is all about improving the ability of models to solve sequential tasks without forgetting previously acquired knowledge.
In the paper “Gradient Episodic Memory for Continual Learning”, Facebook researchers have proposed a set of metrics to evaluate models over a continuous series of data. These metrics characterize models by their test accuracy and the ability to transfer knowledge across tasks. They have also proposed a model for continual learning, called Gradient Episodic Memory (GEM) that reduces the problem of catastrophic forgetting. They have also experimented with variants of the MNIST and CIFAR-100 datasets to demonstrate the performance of GEM when compared to other methods. In our next post, we will cover a selection of papers presented so far at NIPS 2017 in the areas of Predictive Modelling, Machine Translation, and more. For live content coverage, you can visit NIPS’ Facebook page.

What are Slowly changing Dimensions (SCD) and why you need them in your Data Warehouse?

Savia Lobo
07 Dec 2017
8 min read
[box type="note" align="" class="" width=""]Below given post is an excerpt from a book by Rahul Malewar titled Learning Informatica PowerCenter 10.x. The book is a quick guide to explore Informatica PowerCenter and its features such as working on sources, targets, transformations, performance optimization, and managing your data at speed. [/box] Our article explores what Slowly Changing Dimensions (SCD) are and how to implement them in Informatica PowerCenter. As the name suggests, SCD allows maintaining changes in the Dimension table in the data warehouse. These are dimensions that gradually change with time, rather than changing on a regular basis. When you implement SCDs, you actually decide how you wish to maintain historical data with the current data. Dimensions present within data warehousing and in data management include static data about certain entities such as customers, geographical locations, products, and so on. Here we talk about general SCDs: SCD1, SCD2, and SCD3. Apart from these, there are also Hybrid SCDs that you might come across. A Hybrid SCD is nothing but a combination of multiple SCDs to serve your complex business requirements. Types of SCD The various types of SCD are described as follows: Type 1 dimension mapping (SCD1): This keeps only current data and does not maintain historical data. Note : Use SCD1 mapping when you do not want history of previous data. Type 2 dimension/version number mapping (SCD2): This keeps current as well as historical data in the table. It allows you to insert new records and changed records using a new column (PM_VERSION_NUMBER) by maintaining the version number in the table to track the changes. We use a new column PM_PRIMARYKEY to maintain the history. Note : Use SCD2 mapping when you want to keep a full history of dimension data, and track the progression of changes using a version number. Consider there is a column LOCATION in the EMPLOYEE table and you wish to track the changes in the location on employees. Consider a record for Employee ID 1001 present in your EMPLOYEE dimension table. Steve was initially working in India and then shifted to USA. We are willing to maintain history on the LOCATION field. Type 2 dimension/flag mapping: This keeps current as well as historical data in the table. It allows you to insert new records and changed records using a new column (PM_CURRENT_FLAG) by maintaining the flag in the table to track the changes. We use a new column PRIMARY_KEY to maintain the history. Note : Use SCD2 mapping when you want to keep a full history of dimension data, and track the progression of changes using a flag. Let's take an example to understand different SCDs. Type 2 dimension/effective date range mapping: This keeps current as well as historical data in the table. SCD2 allows you to insert new records and changed records using two new columns (PM_BEGIN_DATE and PM_END_DATE) by maintaining the date range in the table to track the changes. We use a new column PRIMARY_KEY to maintain the history. Note : Use SCD2 mapping when you want to keep a full history of dimension data, and track the progression of changes using start date and end date. Type 3 Dimension mapping: This keeps current as well as historical data in the table. We maintain only partial history by adding a new column PM_PREV_COLUMN_NAME, that is, we do not maintain full history. Note: Use SCD3 mapping when you wish to maintain only partial history. 
EMPLOYEE_ID  NAME   LOCATION
1001         STEVE  INDIA

Your data warehouse table should reflect the current status of Steve. To implement this, we have the different types of SCDs.

SCD1

As you can see in the following table, INDIA will be replaced with USA, so we end up having only current data, and we lose historical data:

PM_PRIMARY_KEY  EMPLOYEE_ID  NAME   LOCATION
100             1001         STEVE  USA

Now if Steve is again shifted to JAPAN, the LOCATION data will be replaced from USA to JAPAN:

PM_PRIMARY_KEY  EMPLOYEE_ID  NAME   LOCATION
100             1001         STEVE  JAPAN

The advantage of SCD1 is that we do not consume a lot of space in maintaining the data. The disadvantage is that we don't have historical data.

SCD2 - Version number

As you can see in the following table, we maintain the full history by adding a new record to preserve the history of the previous records:

PM_PRIMARYKEY  EMPLOYEE_ID  NAME   LOCATION  PM_VERSION_NUMBER
100            1001         STEVE  INDIA     0
101            1001         STEVE  USA       1
102            1001         STEVE  JAPAN     2
200            1002         MIKE   UK        0

We add two new columns to the table: PM_PRIMARYKEY to handle the issue of duplicate records in the EMPLOYEE_ID column (which is supposed to be the primary key), and PM_VERSION_NUMBER to distinguish current and historical records.

SCD2 - Flag

As you can see in the following table, we maintain the full history by adding new records to preserve the history of the previous records:

PM_PRIMARYKEY  EMPLOYEE_ID  NAME   LOCATION  PM_CURRENT_FLAG
100            1001         STEVE  INDIA     0
101            1001         STEVE  USA       1

We add two new columns to the table: PM_PRIMARYKEY to handle the issue of duplicate records in the EMPLOYEE_ID column, and PM_CURRENT_FLAG to distinguish current and historical records. Again, if Steve is shifted, the data looks like this:

PM_PRIMARYKEY  EMPLOYEE_ID  NAME   LOCATION  PM_CURRENT_FLAG
100            1001         STEVE  INDIA     0
101            1001         STEVE  USA       0
102            1001         STEVE  JAPAN     1

SCD2 - Date range

As you can see in the following table, we maintain the full history by adding new records to preserve the history of the previous records:

PM_PRIMARYKEY  EMPLOYEE_ID  NAME   LOCATION  PM_BEGIN_DATE  PM_END_DATE
100            1001         STEVE  INDIA     01-01-14       31-05-14
101            1001         STEVE  USA       01-06-14       99-99-9999

We add three new columns to the table: PM_PRIMARYKEY to handle the issue of duplicate records in the EMPLOYEE_ID column, and PM_BEGIN_DATE and PM_END_DATE to understand the versions in the data. The advantage of SCD2 is that you have the complete history of the data, which is a must for a data warehouse. The disadvantage of SCD2 is that it consumes a lot of space.

SCD3

As you can see in the following table, we maintain the history by adding new columns:

PM_PRIMARYKEY  EMPLOYEE_ID  NAME   LOCATION  PM_PREV_LOCATION
100            1001         STEVE  USA       INDIA

An optional column PM_PRIMARYKEY can be added to maintain the primary key constraints. We add a new column, PM_PREV_LOCATION, to the table to store the changes in the data. As you can see, we added a new column to store data, as against SCD2, where we added rows to maintain history. If Steve is now shifted to JAPAN, the data changes to this:

PM_PRIMARYKEY  EMPLOYEE_ID  NAME   LOCATION  PM_PREV_LOCATION
100            1001         STEVE  JAPAN     USA

As you can notice, we lost INDIA from the data warehouse; that is why we say we are maintaining partial history. Note: To implement SCD3, decide how many versions of a particular column you wish to maintain; based on this, the columns will be added to the table. SCD3 is best when you are not interested in maintaining the complete history but only partial history.
The drawback of SCD3 is that it doesn't store the full history.

At this point, you should be very clear about the different types of SCDs. We need to implement these concepts practically in Informatica PowerCenter, which provides a utility called the wizard to implement SCDs. Using the wizard, you can easily implement any SCD. In the next topics, you will learn how to use the wizard to implement SCD1, SCD2, and SCD3. Before you proceed to the next section, please make sure you have a proper understanding of the transformations in Informatica PowerCenter. You should be clear about the source qualifier, expression, filter, router, lookup, update strategy, and sequence generator transformations. The wizard creates a mapping using all these transformations to implement the SCD functionality.

When we implement SCD, there will be some new records that need to be loaded into the target table, and there will be some existing records for which we need to maintain the history. Note: The record that comes for the first time in the table will be referred to as the NEW record, and the record for which we need to maintain history will be referred to as the CHANGED record. Based on the comparison of the source data with the target data, we will decide which one is the NEW record and which is the CHANGED record.

To start with, we will use a sample file as our source and an Oracle table as the target to implement SCDs. Before we implement SCDs, let's talk about the logic that will serve our purpose, and then we will fine-tune the logic for each type of SCD (a small illustrative sketch of this flow follows at the end of this excerpt):

Extract all records from the source.
Look up on the target table, and cache all the data.
Compare the source data with the target data to flag the NEW and CHANGED records.
Filter the data based on the NEW and CHANGED flags.
Generate the primary key for every new row inserted into the table.
Load the NEW records into the table, and update the existing records if needed.

In this article we concentrated on a very important data warehouse concept called slowly changing dimensions. We also discussed the different types of SCDs, i.e. SCD1, SCD2, and SCD3. If you are looking to explore more in Informatica PowerCenter, go ahead and check out the book Learning Informatica PowerCenter 10.x.
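As promised above, here is a small, illustrative sketch of the NEW/CHANGED flagging flow in plain Python. It is a conceptual outline only, not something the PowerCenter wizard generates; in PowerCenter these steps map to the lookup, expression, router, update strategy, and sequence generator transformations. The column names follow the EMPLOYEE example, and the helper next_key stands in for a sequence generator.

# Conceptual sketch of SCD2 (flag-based) loading for the EMPLOYEE example; not wizard-generated code.
import itertools

def scd2_flag_load(source_rows, target_current, next_key):
    inserts, expiries = [], []
    for row in source_rows:
        current = target_current.get(row["EMPLOYEE_ID"])
        if current is None:
            # NEW record: the employee arrives for the first time.
            inserts.append({"PM_PRIMARYKEY": next_key(), **row, "PM_CURRENT_FLAG": 1})
        elif current["LOCATION"] != row["LOCATION"]:
            # CHANGED record: expire the old version and insert a new current one.
            expiries.append({"PM_PRIMARYKEY": current["PM_PRIMARYKEY"], "PM_CURRENT_FLAG": 0})
            inserts.append({"PM_PRIMARYKEY": next_key(), **row, "PM_CURRENT_FLAG": 1})
        # Unchanged records need no action.
    return inserts, expiries

# Steve (1001) moves from INDIA to USA; Mike (1002) is brand new.
key_seq = itertools.count(103)
target = {1001: {"PM_PRIMARYKEY": 100, "EMPLOYEE_ID": 1001, "NAME": "STEVE",
                 "LOCATION": "INDIA", "PM_CURRENT_FLAG": 1}}
source = [{"EMPLOYEE_ID": 1001, "NAME": "STEVE", "LOCATION": "USA"},
          {"EMPLOYEE_ID": 1002, "NAME": "MIKE", "LOCATION": "UK"}]
print(scd2_flag_load(source, target, lambda: next(key_seq)))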

Implementing Linear Regression Analysis with R

Amarabha Banerjee
06 Dec 2017
7 min read
[box type="note" align="" class="" width=""]This article is from the book Advanced Analytics with R and Tableau, written by Jen Stirrup & Ruben Oliva Ramos. The book offers a wide range of machine learning algorithms to help you learn descriptive, prescriptive, predictive, and visually appealing analytical solutions designed with R and Tableau. [/box] One of the most popular analytical methods for statistical analysis is regression analysis. In this article we explore the basics of regression analysis and how R can be used to effectively perform it. Getting started with regression Regression means the unbiased prediction of the conditional expected value, using independent variables, and the dependent variable. A dependent variable is the variable that we want to predict. Examples of a dependent variable could be a number such as price, sales, or weight. An independent variable is a characteristic, or feature, that helps to determine the dependent variable. So, for example, the independent variable of weight could help to determine the dependent variable of weight. Regression analysis can be used in forecasting, time series modeling, and cause and effect relationships. Simple linear regression R can help us to build prediction stories with Tableau. Linear regression is a great starting place when you want to predict a number, such as profit, cost, or sales. In simple linear regression, there is only one independent variable x, which predicts a dependent value, y. Simple linear regression is usually expressed with a line that identifies the slope that helps us to make predictions. So, if sales = x and profit = y, what is the slope that allows us to make the prediction? We will do this in R to create the calculation, and then we will repeat it in R. We can also color-code it so that we can see what is above and what is below the slope. Using lm() function What is linear regression? Linear regression has the objective of finding a model that fits a regression line through the data well, whilst reducing the discrepancy, or error, between the data and the regression line. We are trying here to predict the line of best fit between one or many variables from a scatter plot of points of data. To find the line of best fit, we need to calculate a couple of things about the line. We can use the lm() function to obtain the line, which we can call m: We need to calculate the slope of the line m We also need to calculate the intercept with the y axis c So we begin with the equation of the line: y = mx + c To get the line, we use the concept of Ordinary Least Squares (OLS). This means that we sum the square of the y-distances between the points and the line. Furthermore, we can rearrange the formula to give us beta (or m) in terms of the number of points n, x, and y. This would assume that we can minimize the mean error with the line and the points. It will be the best predictor for all of the points in the training set and future feature vectors. Example in R Let's start with a simple example in R, where we predict women's weight from their height. If we were articulating this question per Microsoft's Team Data Science Process, we would be stating this as a business question during the business understanding phase. How can we come up with a model that helps us to predict what the women's weight is going to be, dependent on their height? Using this business question as a basis for further investigation, how do we come up with a model from the data, which we could then use for further analysis? 
Example in R

Let's start with a simple example in R, where we predict women's weight from their height. If we were articulating this question per Microsoft's Team Data Science Process, we would be stating this as a business question during the business understanding phase. How can we come up with a model that helps us to predict what a woman's weight is going to be, dependent on her height? Using this business question as a basis for further investigation, how do we come up with a model from the data, which we could then use for further analysis?

Simple linear regression is about two variables: an independent variable, also known as the predictor variable, and a dependent variable, also known as the response. With only one variable, and no other information, the best prediction is the mean of the sample itself. In other words, when all we have is one variable, the mean is the best predictor of any one amount.

The first step is to collect a random sample of data. In R, we are lucky to have sample data that we can use. To explore linear regression, we will use the women dataset, which is installed by default with R. The variability of the weight amounts can only be explained by the weights themselves, because that is all we have.

To conduct the regression, we will use the lm() function, which has the following form:

model <- lm(y ~ x, data=mydata)

To see the women dataset, open up RStudio and type in the variable name. In this example, typing the variable name women prints the women's heights and weights to the console:

> women

We can visualize the data quite simply in R, using the plot(women) command. The plot command provides a quick and easy way of visualizing the data. Our objective here is simply to see the relationship between the two variables.

Now that we can see the relationship in the data, we can use the summary command to explore it further:

summary(women)

Next, we can create a model that will use the lm() function to build a linear regression model of the data. We will assign the results to a model called linearregressionmodel, as follows:

linearregressionmodel <- lm(weight ~ height, data=women)

What does the model produce? We can use the summary command again, and this will provide some descriptive statistics about the lm model that we have generated. One of the nice, understated features of R is its ability to use variables: here our variable linearregressionmodel is storing a whole model!

summary(linearregressionmodel)

What do these numbers mean? Let's take a closer look at some of the key numbers.

Residual standard error

In the output, the residual standard error, which measures how far the observed weights typically fall from the fitted regression line, is 1.525.

Comparing actual values with predicted results

Now we will look at the real weights of the 15 women first, and then at the predicted values. The actual weights are printed with the following command:

women$weight

The predicted (fitted) values are stored inside the model, and we can attach them to the women data with a very simple merge:

women$pred <- linearregressionmodel$fitted.values

When we look inside the women variable again, it now holds the actual and the predicted weights side by side.

If you liked this article, please be sure to check out Advanced Analytics with R and Tableau which consists of more useful analytics techniques with R and Tableau. It will enable you to make quick, cogent, and data-driven decisions for your business using advanced analytical techniques such as forecasting, predictions, association rules, clustering, classification, and other advanced Tableau/R calculated field functions.
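As a short closing sketch, not taken from the book, here is one way to line up the actual and predicted weights, summarise the size of the errors, and use the fitted model to predict weights for new heights. The two new height values are invented purely for illustration.

# Actual versus predicted weights, and the overall error of the fitted model
linearregressionmodel <- lm(weight ~ height, data = women)

comparison <- data.frame(
  height    = women$height,
  actual    = women$weight,
  predicted = linearregressionmodel$fitted.values,
  residual  = residuals(linearregressionmodel)
)
head(comparison)

sqrt(mean(residuals(linearregressionmodel)^2))  # root mean squared error of the fit

# Predicting the weight for heights that are not in the sample
predict(linearregressionmodel, newdata = data.frame(height = c(61.5, 66.5)))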

Implementing Deep Learning with Keras

Amey Varangaonkar
05 Dec 2017
4 min read
[box type="note" align="" class="" width=""]The following excerpt is from the title Deep Learning with Theano, Chapter 5 written by Christopher Bourez. The book offers a complete overview of Deep Learning with Theano, a Python-based library that makes optimizing numerical expressions and deep learning models easy on CPU or GPU. [/box] In this article, we introduce you to the highly popular deep learning library - Keras, which sits on top of the both, Theano and Tensorflow. It is a flexible platform for training deep learning models with ease. Keras is a high-level neural network API, written in Python and capable of running on top of either TensorFlow or Theano. It was developed to make implementing deep learning models as fast and easy as possible for research and development. You can install Keras easily using conda, as follows: conda install keras When writing your Python code, importing Keras will tell you which backend is used: >>> import keras Using Theano backend. Using cuDNN version 5110 on context None Preallocating 10867/11439 Mb (0.950000) on cuda0 Mapped name None to device cuda0: Tesla K80 (0000:83:00.0) Mapped name dev0 to device cuda0: Tesla K80 (0000:83:00.0) Using cuDNN version 5110 on context dev1 Preallocating 10867/11439 Mb (0.950000) on cuda1 Mapped name dev1 to device cuda1: Tesla K80 (0000:84:00.0) If you have installed Tensorflow, it might not use Theano. To specify which backend to use, write a Keras configuration file, ~/.keras/keras.json: { "epsilon": 1e-07, "floatx": "float32", "image_data_format": "channels_last", "backend": "theano" } It is also possible to specify the Theano backend directly with the environment Variable: KERAS_BACKEND=theano python Note that the device used is the device we specified for Theano in the ~/.theanorc file. It is also possible to modify these variables with Theano environment variables: KERAS_BACKEND=theano THEANO_FLAGS=device=cuda,floatX=float32,mode=FAST_ RUN python Programming with Keras Keras provides a set of methods for data pre-processing and for building models. Layers and models are callable functions on tensors and return tensors. In Keras, there is no difference between a layer/module and a model: a model can be part of a bigger model and composed of multiple layers. Such a sub-model behaves as a module, with inputs/outputs. Let's create a network with two linear layers, a ReLU non-linearity in between, and a softmax output: from keras.layers import Input, Dense from keras.models import Model inputs = Input(shape=(784,)) x = Dense(64, activation='relu')(inputs) predictions = Dense(10, activation='softmax')(x) model = Model(inputs=inputs, outputs=predictions) The model module contains methods to get input and output shape for either one or multiple inputs/outputs, and list the submodules of our module: >>> model.input_shape (None, 784) >>> model.get_input_shape_at(0) (None, 784) >>> model.output_shape (None, 10) >>> model.get_output_shape_at(0) (None, 10) >>> model.name 'Sequential_1' >>> model.input /dense_3_input >>> model.output Softmax.0 >>> model.get_output_at(0) Softmax.0 >>> model.layers [<keras.layers.core.Dense object at 0x7f0abf7d6a90>, <keras.layers.core.Dense object at 0x7f0abf74af90>] In order to avoid specify inputs to every layer, Keras proposes a functional way of writing models with the Sequential module, to build a new module or model composed. 
In order to avoid specifying the input to every layer, Keras also offers the Sequential module, which builds a model by simply stacking layers; the resulting model can again be composed into a bigger module or model. The following definition builds exactly the same model as shown previously, with input_dim specifying the input dimension of the first layer, which would otherwise be unknown and generate an error:

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(units=64, input_dim=784, activation='relu'))
model.add(Dense(units=10, activation='softmax'))

The model is considered a module or layer that can be part of a bigger model:

model2 = Sequential()
model2.add(model)
model2.add(Dense(units=10, activation='softmax'))

Each module/model/layer can then be compiled and trained with data:

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(data, labels)

Thus, we see that it is fairly easy to train a model in Keras. The simplicity and ease of use that Keras offers makes it a very popular choice of tool for deep learning.

If you think the article is useful, check out the book Deep Learning with Theano for interesting deep learning concepts and their implementation using Theano. For more information on the Keras library and how to train efficient deep learning models, make sure to check our highly popular title Deep Learning with Keras.
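To make the final model.fit(data, labels) call concrete, here is a short closing sketch that is not part of the book excerpt. It trains the same Sequential model end to end; the random arrays simply stand in for a real dataset such as MNIST, and Keras 2-style argument names (units, epochs) are assumed:

# End-to-end sketch: prepare arrays, build, compile, fit, evaluate
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

# 1,000 fake samples with 784 features each, and integer class labels in [0, 10)
data = np.random.rand(1000, 784).astype('float32')
labels = to_categorical(np.random.randint(10, size=(1000,)), num_classes=10)

model = Sequential()
model.add(Dense(units=64, input_dim=784, activation='relu'))
model.add(Dense(units=10, activation='softmax'))

model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(data, labels, epochs=2, batch_size=32)
loss, accuracy = model.evaluate(data, labels, batch_size=32)
print(loss, accuracy)

On random labels the accuracy will of course hover around chance; the point is only the shape of the workflow, which stays the same once real data is substituted in.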