
How-To Tutorials


Why Motion and Interaction Matter in UX Design [Video]

Sugandha Lahoti
16 Oct 2018
2 min read
Designing prototypes is a great way to extend your sketching skills and to test the products you've been building. In particular, user experience prototyping solves problems for users and infuses user needs into the conversations that eventually build better products and services. Motion and interaction are part of user experience prototyping: motion helps enforce and explore what the interaction design feels like, while prototyping interactions helps us define how a product works.

This clip is taken from the video Advanced UX Techniques by Chris R. Becker. In this course, you will explore UX techniques such as sketching, wireframes, and high-fidelity prototypes.

https://www.youtube.com/watch?v=TTpxvuIBFwE

Interaction design (IxD) is the design of interactive products and services, in which the designer's focus includes the way users will interact with them. In this video, we'll explore the following five aspects of interaction design:

Words: Do users understand, read, and use the words they are shown?
Objects: Do users recognize and use the shapes, whether on a phone or with a keyboard?
Time: How long do users take to accomplish a task (are they reading a long article)?
Behavior: How do users respond or react to anything that app designers make them do?
Visuals: Do users like what they see?

As you iterate on the prototypes of your app, you should evaluate them against these aspects of your interaction design. The role of interaction design is to define the ways a user can interact. Interactions can be complex, so a strong focus should be placed on thinking about how the systems are interconnected. Interactions are learned and can be improved through animations.

Watch the clip above to learn more about why motion and interaction design are key aspects of UX design.

About the author

Chris R. Becker is an imaginative and creative senior UX designer, interaction designer, design thinker, and educator. He designs across media platforms, from the web to iOS and Android, as well as SaaS and service design. He leads design thinking workshops and UX deliverables, using his communication skills both in the classroom and in client presentations.

What UX designers can teach Machine Learning Engineers? To start with: Model Interpretability
Trends in UX Design
Grafana 5.3 is now stable, comes with Google Stackdriver built-in support, a new Postgres query builder


npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more

Bhagyashree R
16 Oct 2018
7 min read
Last week, Laurie Voss, the co-founder and COO of npm, spoke at the Node+JS Interactive 2018 event about npm and the future of JavaScript. He discussed several development tools used within the npm community, best practices, frameworks that are on the rise, and frameworks that are dying. He drew these answers from 1.5 billion log events per day and from the JavaScript Ecosystem Survey 2017, which covered 16,000 JavaScript developers and gives data about what JavaScript users are doing and where the community is going. Let's look at some of the key highlights of the talk.

npm is secure, popular, and fast

With more than 10 million users and 6 billion package downloads every week, npm has become enormously popular. According to GitHub Octoverse 2017, JavaScript is the top language on GitHub by opened pull requests. 85% of the developers who write JavaScript use npm, and that share is rising rapidly towards 100%. These developers write JavaScript applications that run on 73% of browsers, 70% of servers, 44% of mobile devices, and 6% of IoT/robotics devices. The stats highlight that npm is mainly the package manager of web developers, and 97% of the code in a modern web app is downloaded from npm.

The current version of npm, npm 6, was released in April this year. It comes with improved performance and addresses the major concern highlighted by the JavaScript Ecosystem Survey: security. Here are the major improvements in npm 6.

npm 6 is super fast

npm is now 20% faster than npm 4, so it is time to upgrade. You can do that with this command:

npm install npm -g

According to Laurie, npm is fast, and so is Yarn. In fact, all of the package managers are at nearly the same speed now. This is the result of the maintainers of all the package managers coming together in a community called package.community. This community of package manager maintainers and users is focused on making package managers better, working on compatibility, and supporting each other.

npm 6 locks by default

This was one of the biggest changes in npm 6: it makes sure that what you have in the development environment is exactly what you put in production. The functionality is provided by a new file called package-lock.json. This so-called "lock file" saves information about your node_modules/ tree since the time you last edited your dependencies. The feature brings a number of benefits, including increased reproducibility across teams, reduced network overhead when installing, and easier debugging of dependency issues.

npm ci

This command is similar to npm install, but is meant to be used in automated environments such as test platforms, continuous integration, and deployment. It can be about 2-3x faster than a regular npm install by skipping certain user-oriented features. It is also stricter than a regular install, which helps catch errors or inconsistencies caused by the incrementally installed local environments of most npm users.

Advances in npm security

Two-factor authentication: To provide stronger digital security, npm now supports two-factor authentication (2FA). It confirms your identity using two methods: something you know, such as your username and password, and something you have, such as a phone or tablet.

Quick audits: Quick audits tell you whether the packages you are installing are secure or not. These security warnings are more detailed and useful in npm 6 than in previous versions. The talk highlighted that quick audits are already happening a lot (about 3.5 million scans per week), and these audits show that 11% of the packages installed by developers have a critical vulnerability. To find out which vulnerabilities exist in your app and how critical they are, run the following commands:

npm audit: This runs automatically when you install a package with npm install. It submits a description of the dependencies configured in your package to your default registry and asks for a report of known vulnerabilities.
npm audit fix: Run this subcommand to automatically install compatible updates to vulnerable dependencies.

The rise and fall of frameworks

After covering the current status of npm, Laurie moved on to what npm users are doing, which frameworks they are using, and which frameworks they are no longer interested in.

Not all npm users develop JavaScript applications

An interesting point: although most JavaScript users use npm, they are not the only npm users. Developers writing applications in other languages also use npm, including Java, PHP, Python, and C#, among others.

Comparing various frameworks

Next, he discussed the tools developers are currently choosing. The comparison was based on a metric called share of registry, which shows the relative popularity of a package compared with all other packages in the registry.

Frontend frameworks

"No framework dies, they only fade away with time."

The best example of this is Backbone. Compared to 2013, Backbone's downloads have fallen rapidly and not many users are adopting it now. Most developers are maintaining old applications written in Backbone rather than writing new ones with it. So which frameworks are they using instead? 60% of the survey respondents are using React, although despite a huge fraction gravitating towards it, its growth is slowing down a little. Angular is also an extremely popular framework. Ember is making a comeback after a rough patch during 2016-2017. The next framework is Vue, which seems to be just taking off and is probably the reason behind React's slowing growth.

Backend frameworks

Comparing backend frameworks was fairly easy, as Express was the clear winner. Once Express is taken out of the picture, Koa appears to be the second most popular framework. With the growing use of server-side JavaScript, there is a rapid decline in the use of Sails. Hapi, another backend framework, is doing very well in absolute terms but is not growing much in relative terms. Next.js is growing, but at a very slow pace.

Some predictions based on the survey

It would be unwise to bet against React, as it has tons of users and tons of modules.
Angular is a safer but less interesting choice.
Keep an eye on Next.js.
If you are looking for something new to learn, go for GraphQL.
With 46% of npm users using TypeScript for transpiling, it is surely worth your attention.
WASM seems promising.
No matter what happens to JavaScript, npm is here to stay.

To conclude, Laurie rightly said that no framework is here forever: "Nothing lasts forever! ... Any framework that we see today will have its heyday and then it will have an afterlife where it will slowly, slowly degrade."

To watch the full talk, check out the YouTube video: npm and the future of JavaScript.

npm v6 is out!
React 16.5.0 is now out with a new package for scheduling, support for DevTools, and more!
Node.js and JS Foundation announce intent to merge; developers have mixed feelings


Testing WebAssembly modules with Jest [Tutorial]

Sugandha Lahoti
15 Oct 2018
7 min read
WebAssembly (Wasm) represents an important stepping stone for the web platform. Enabling a developer to run compiled code on the web without a plugin or browser lock-in presents many new opportunities. This article is taken from the book Learn WebAssembly by Mike Rourke. The book introduces you to powerful WebAssembly concepts that will help you write lean and powerful web applications with native performance.

Well-tested code prevents regression bugs, simplifies refactoring, and alleviates some of the frustrations that go along with adding new features. Once you've compiled a WebAssembly module, you should write tests to ensure it's functioning as expected, even if you've already written tests for the C, C++, or Rust code you compiled it from. In this tutorial, we'll use Jest, a JavaScript testing framework, to test the functions in a compiled Wasm module.

The code being tested

All of the code used in this example is located on GitHub. The code and corresponding tests are very simple and are not representative of real-world applications, but they're intended to demonstrate how to use Jest for testing. The following represents the file structure of the /testing-example folder:

├── /src
|    ├── /__tests__
|    │    └── main.test.js
|    └── main.c
├── package.json
└── package-lock.json

The contents of the C file that we'll test, /src/main.c, are shown as follows:

int addTwoNumbers(int leftValue, int rightValue) {
    return leftValue + rightValue;
}

float divideTwoNumbers(float leftValue, float rightValue) {
    return leftValue / rightValue;
}

double findFactorial(float value) {
    int i;
    double factorial = 1;

    for (i = 1; i <= value; i++) {
        factorial = factorial * i;
    }
    return factorial;
}

All three functions in the file perform simple mathematical operations. The package.json file includes a script to compile the C file to a Wasm file for testing. Run the following command to compile the C file:

npm run build

There should now be a file named main.wasm in the /src directory. Let's move on to the testing configuration.

Testing configuration

The only dependency we'll use for this example is Jest, a JavaScript testing framework built by Facebook. Jest is an excellent choice for testing because it includes most of the features you'll need out of the box, such as coverage, assertions, and mocking. In most cases, you can use it with zero configuration, depending on the complexity of your application. If you're interested in learning more, check out Jest's website at https://jestjs.io.

Open a terminal instance in the /chapter-09-node/testing-example folder and run the following command to install Jest:

npm install

In the package.json file, there are three entries in the scripts section: build, pretest, and test. The build script executes the emcc command with the required flags to compile /src/main.c to /src/main.wasm. The test script executes the jest command with the --verbose flag, which provides additional details for each of the test suites. The pretest script simply runs the build script to ensure /src/main.wasm exists prior to running any tests.

Tests file review

Let's walk through the test file, located at /src/__tests__/main.test.js, and review the purpose of each section of code.
The first section of the test file instantiates the main.wasm file and assigns the result to the local wasmInstance variable:

const fs = require('fs');
const path = require('path');

describe('main.wasm Tests', () => {
  let wasmInstance;

  beforeAll(async () => {
    const wasmPath = path.resolve(__dirname, '..', 'main.wasm');
    const buffer = fs.readFileSync(wasmPath);
    const results = await WebAssembly.instantiate(buffer, {
      env: {
        memoryBase: 0,
        tableBase: 0,
        memory: new WebAssembly.Memory({ initial: 1024 }),
        table: new WebAssembly.Table({ initial: 16, element: 'anyfunc' }),
        abort: console.log
      }
    });
    wasmInstance = results.instance.exports;
  });
  ...

Jest provides life-cycle methods to perform any setup or teardown actions prior to running tests. You can specify functions to run before or after all of the tests (beforeAll()/afterAll()), or before or after each test (beforeEach()/afterEach()). We need a compiled instance of the Wasm module from which we can call exported functions, so we put the instantiation code in the beforeAll() function.

We're wrapping the entire test suite in a describe() block for the file. Jest uses a describe() function to encapsulate suites of related tests, and test() or it() to represent a single test. Here's a simple example of this concept:

const add = (a, b) => a + b;

describe('the add function', () => {
  test('returns 6 when 4 and 2 are passed in', () => {
    const result = add(4, 2);
    expect(result).toEqual(6);
  });

  test('returns 20 when 12 and 8 are passed in', () => {
    const result = add(12, 8);
    expect(result).toEqual(20);
  });
});

The next section of code contains all the test suites and tests for each exported function:

...
  describe('the _addTwoNumbers function', () => {
    test('returns 300 when 100 and 200 are passed in', () => {
      const result = wasmInstance._addTwoNumbers(100, 200);
      expect(result).toEqual(300);
    });

    test('returns -20 when -10 and -10 are passed in', () => {
      const result = wasmInstance._addTwoNumbers(-10, -10);
      expect(result).toEqual(-20);
    });
  });

  describe('the _divideTwoNumbers function', () => {
    test.each([
      [10, 100, 10],
      [-2, -10, 5],
    ])('returns %f when %f and %f are passed in', (expected, a, b) => {
      const result = wasmInstance._divideTwoNumbers(a, b);
      expect(result).toEqual(expected);
    });

    test('returns ~3.77 when 20.75 and 5.5 are passed in', () => {
      const result = wasmInstance._divideTwoNumbers(20.75, 5.5);
      expect(result).toBeCloseTo(3.77, 2);
    });
  });

  describe('the _findFactorial function', () => {
    test.each([
      [120, 5],
      [362880, 9.2],
    ])('returns %p when %p is passed in', (expected, input) => {
      const result = wasmInstance._findFactorial(input);
      expect(result).toEqual(expected);
    });
  });
});

The first describe() block, for the _addTwoNumbers() function, has two test() instances to ensure that the function returns the sum of the two numbers passed in as arguments. The next two describe() blocks, for the _divideTwoNumbers() and _findFactorial() functions, use Jest's .each feature, which allows you to run the same test with different data. The expect() function allows you to make assertions on the value passed in as an argument. The .toBeCloseTo() assertion in the last _divideTwoNumbers() test checks whether the result is within two decimal places of 3.77. The rest use the .toEqual() assertion to check for equality.

Writing tests with Jest is relatively simple, and running them is even easier! Let's try running our tests and reviewing some of the CLI flags that Jest provides.
Running the Wasm tests

To run the tests, open a terminal instance in the /chapter-09-node/testing-example folder and run the following command:

npm test

You should see the following output in your terminal:

main.wasm Tests
  the _addTwoNumbers function
    ✓ returns 300 when 100 and 200 are passed in (4ms)
    ✓ returns -20 when -10 and -10 are passed in
  the _divideTwoNumbers function
    ✓ returns 10 when 100 and 10 are passed in
    ✓ returns -2 when -10 and 5 are passed in (1ms)
    ✓ returns ~3.77 when 20.75 and 5.5 are passed in
  the _findFactorial function
    ✓ returns 120 when 5 is passed in (1ms)
    ✓ returns 362880 when 9.2 is passed in

Test Suites: 1 passed, 1 total
Tests:       7 passed, 7 total
Snapshots:   0 total
Time:        1.008s
Ran all test suites.

If you have a large number of tests, you could remove the --verbose flag from the test script in package.json and only pass the flag to the npm test command when needed. There are several other CLI flags you can pass to the jest command. The following list contains some of the more commonly used flags:

--bail: Exits the test suite immediately upon the first failing test suite
--coverage: Collects test coverage and displays it in the terminal after the tests have run
--watch: Watches files for changes and reruns tests related to changed files

You can pass these flags to the npm test command by adding them after a --. For example, if you wanted to use the --bail flag, you'd run this command:

npm test -- --bail

You can view the entire list of CLI options on the official site at https://jestjs.io/docs/en/cli.

In this article, we saw how the Jest testing framework can be leveraged to test a compiled WebAssembly module and ensure it's functioning correctly. To learn more about WebAssembly and its functionality, read the book Learn WebAssembly.

Blazor 0.6 release and what it means for WebAssembly
Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux
Why is everyone going crazy over WebAssembly?


How to do data storytelling well with Tableau [Video]

Sugandha Lahoti
13 Oct 2018
2 min read
Data tells you what is happening, but stories tell you why it matters. Tableau story points and dashboards are useful features that are helping to drive an evolution in data storytelling. This clip is taken from the video Tableau Data Stories for Everyone by Fabio Fierro.

https://www.youtube.com/watch?v=oqV0juvO5og

The challenge of data storytelling

The challenge for a 'data storyteller' is to act as a bridge between raw data and insights on one side, and the world outside them, where those insights actually matter and can influence decision making, on the other. By intelligently plotting data in a way that's both clear and engaging, you can help stakeholders, whether that's senior management or customers, better understand the context of your data and appreciate the connections you are trying to draw in your analytical work.

There are a number of data storytelling techniques you can use when working with Tableau to communicate more effectively with your audience. These include:

Change over time: look at how the data evolves.
Drill down: zoom into interesting details of your analysis.
Zoom out: show a broad view of what your analysis might reveal.
Contrast: highlight interesting points of difference that could be useful for your audience.
Intersections: highlight important shifts when one category overtakes another.
Factors: explain a subject by dividing it into types or categories and discussing whether a subcategory needs more focus.
Outliers: display anomalies that are exceptionally different.

Watch the clip above to learn more about how you can use Tableau for incredible data storytelling.

About the author

Fabio Fierro is chief consultant of a group of Tableau experts and storytellers. He has several years' experience delivering end-to-end business intelligence solutions within the corporate world. As a business analyst, he enjoys creating innovative solutions to analyze any kind of data.

Announcing Tableau Prep 2018.2.1!
A tale of two tools: Tableau and Power BI
Visualizing BigQuery Data with Tableau


Chaos Conf 2018 Recap: Chaos engineering hits maturity as community moves towards controlled experimentation

Richard Gall
12 Oct 2018
11 min read
Conferences can sometimes be confusing. Even at the most professional and well-planned conferences, you sometimes just take a minute and think: what's actually the point of this? Am I learning anything? Am I meant to be networking? Will anyone notice if I steal extra food for the journey home?

Chaos Conf 2018 was different, however. It had a clear purpose: to take the first step in properly forging a chaos engineering community. After almost a decade somewhat hidden in the corners of particularly innovative teams at Netflix and Amazon, chaos engineering might feel that its time has come. As software infrastructure becomes more complex and less monolithic, and as business and consumer demands expect more of the software systems that have become integral to the very functioning of life, resiliency has never been more important, or more challenging to achieve.

But while it feels like the right time for chaos engineering, it hasn't quite established itself in the mainstream. This is something the conference host, Gremlin, a platform that offers chaos engineering as a service, is acutely aware of. On the one hand, it's actively helping push chaos engineering into the hands of businesses; on the other, its growth and success, backed by millions in VC cash (and faith), depend upon chaos engineering becoming a mainstream discipline in the DevOps and SRE worlds.

It's perhaps for this reason that the conference felt so important. It was, according to Gremlin, the first ever public chaos engineering conference. And while it was relatively small in the grand scheme of many of today's festival-esque conferences attended by thousands of delegates (Dreamforce, the Salesforce conference, was also running in San Francisco the same week), the fact that the conference had quickly sold out all 350 of its tickets, with more people on waiting lists, indicates that this was an event that had been eagerly awaited. And with some big names from the industry, notably Adrian Cockcroft from AWS and Jessie Frazelle from Microsoft, Chaos Conf had the air of an event that had outgrown its insider status before it had even begun.

The renovated cinema and bar in San Francisco's Mission District, complete with pinball machines upstairs, was the perfect container for a passionate community that had grown out of the clean corporate environs of Silicon Valley to embrace the chaotic mess that resembles modern software engineering.

Kolton Andrus sets out a vision for the future of Gremlin and chaos engineering

Chaos Conf was quick to deliver big news. The keynote speech, by Gremlin co-founder Kolton Andrus, launched Gremlin's brand new Application Level Fault Injection (ALFI) feature, which makes it possible to run chaos experiments at an application level.

Andrus broke the news by building towards it with a story of the progression of chaos engineering. Starting with Chaos Monkey, the tool first developed by Netflix, and moving from infrastructure to network, he showed how, as chaos engineering has evolved, it requires and facilitates different levels of control and insight into how your software works. "As we're maturing, the host level failures and the network level failures are necessary to building a robust and resilient system, but not sufficient. We need more - we need a finer granularity," Andrus explained. This is where ALFI comes in. By allowing Gremlin users to inject failure at an application level, it gives them more control over the 'blast radius' of their chaos experiments.

The narrative Andrus was setting out was clear, and would ultimately inform the ethos of the day: chaos engineering isn't just about chaos, it's about controlled experimentation to ensure resiliency. Doing that requires a level of intelligence, technical and organizational, about how the various components of your software work, and how humans interact with them.

Adrian Cockcroft on the importance of historical context and domain knowledge

Adrian Cockcroft (@adrianco), VP at AWS, followed Andrus' talk. He took the opportunity to set the broader context of chaos engineering, highlighting how tackling system failures is often a question of culture: how we approach system failure and think about our software. "Developers love to learn things from first principles," he said. "But some historical context and domain knowledge can help illuminate the path and obstacles."

If this sounds like Cockcroft was about to stray into theoretical territory, he certainly didn't. He offered a taxonomy of failure that provides a practical framework for thinking through potential failure at every level. He also touched on how he sees the future of resiliency evolving, focusing on:

Observability of systems
Epidemic failure modes
Automation and continuous chaos

The crucial point Cockcroft made is that cloud is the big driver for chaos engineering. "As datacenters migrate to the cloud, fragile and manual disaster recovery will be replaced by chaos engineering," read one of his slides. But more than that, the cloud also paves the way for the future of the discipline, one where 'chaos' is simply an automated part of the test and deployment pipeline.

Selling chaos engineering to your boss

Kriss Rochefolle, DevOps engineer and author of one of the best-selling DevOps books in French, delivered a short talk on how engineers can sell chaos to their boss. He challenged the assumption that a rational proposal, informed by ROI, is the best way to sell chaos engineering. He suggested instead that engineers need to appeal to emotions, presenting chaos engineering as a method for tackling and minimizing the fear of (inevitable) failure. Follow Kriss on Twitter: @crochefolle

Walmart and chaos engineering

Vilas Veraraghavan, Walmart's Director of Engineering, was keen to clarify that Walmart doesn't practice chaos. Rather, it practices resiliency; chaos engineering is simply a method the organization uses to achieve that. It was particularly useful to see the entire process that Vilas' team adopts when it comes to resiliency, which has largely developed out of Vilas' own work building his team from scratch. You can learn more about how Walmart is using chaos engineering for software resiliency in this post.

Twitter's Ronnie Chen on diving and planning for failure

Ronnie Chen (@rondoftw) is an engineering manager at Twitter. But she didn't talk about Twitter. In fact, she didn't even talk about engineering. Instead, she spoke about her experience as a technical diver. By talking about her experiences, Ronnie was able to make a number of vital points about how to manage and tackle failure as a team. With mortality rates so high in diving, it's a good example of the relationship between complexity and risk. Chen made the point that things don't fail because of a single catalyst. Instead, failures, particularly fatal ones, happen because of a 'failure cascade'.

Chen never made the link explicit, but the comparison is clear: the ultimate outcome (success or failure) is affected by a whole range of situational and behavioral factors that we can't afford to ignore. Chen also made the point that, in diving, inexperienced people should be at the front of an expedition: "If your inexperienced people are leading, they're learning and growing, and being able to operate with a safety net... when you do this, all kinds of hidden dependencies reveal themselves... every undocumented assumption, every piece of ancient team lore that you didn't even know you were relying on, comes to light."

Charity Majors on the importance of observability

Charity Majors (@mipsytipsy), CEO of Honeycomb, talked in detail about the key differences between monitoring and observability. As with other talks, context was important: a world where architectural complexity has grown rapidly in the space of a decade. Majors made the point that this increase in complexity has taken us from having known unknowns in our architectures to many more unknown unknowns in a distributed system. This means that monitoring is dead; it simply isn't sophisticated enough to deal with the complexities and dependencies within a distributed system. Observability, meanwhile, allows you to understand "what's happening in your systems just by observing it from the outside." Put simply, it lets you understand how your software is functioning from your perspective, almost turning it inside out.

Majors then linked the concept of observability to the broader philosophy of chaos engineering, echoing some of the points raised by Adrian Cockcroft in his keynote. But this was her key takeaway: "Software engineers spend too much time looking at code in elaborately falsified environments, and not enough time observing it in the real world." This leads to one conclusion: the importance of testing in production. "Accept no substitute."

Tammy Butow and Ana Medina on making an impact

Tammy Butow (@tammybutow) and Ana Medina (@Ana_M_Medina) from Gremlin took us through how to put chaos engineering into practice, from integrating it into your organizational culture to some practical tests you can run. One of the best examples of putting chaos into practice is Gremlin's concept of 'Failure Fridays', in which chaos testing becomes a valuable step in the product development process: a way to dogfood a product and test how a customer experiences it. Another way Tammy and Ana suggested chaos engineering can be used is to test new versions of technologies before you upgrade in production. To end their talk, they demoed a chaos battle between EKS (Kubernetes on AWS) and AKS (Kubernetes on Azure), running an app container attack, a packet loss attack, and a region failover attack.

Jessie Frazelle on how containers can empower experimentation

Jessie Frazelle (@jessfraz) didn't actually talk that much about chaos engineering. However, like Ronnie Chen's talk, chaos engineering seeped through what she said about bugs and containers. Bugs, for Frazelle, are a way of exploring how things work, and how different parts of a software infrastructure interact with each other: "Bugs are like my favorite thing... some people really hate when they get one of those bugs that turns out to be a rabbit hole and you're kind of debugging it until the end of time... while debugging those bugs I hate them, but afterwards I'm like, that was crazy!"

This was essentially an endorsement of the core concept of chaos engineering: injecting bugs into your software to understand how it reacts. Jessie then went on to talk about containers, joking that they're NOT REAL. This is because they're made up of numerous different component parts, like cgroups, namespaces, and LSMs. She contrasted containers with virtual machines, zones, and jails, which are 'first class concepts', in other words, real things (Jessie wrote about this in detail last year in this blog post). In practice, this means that whereas containers are like Lego pieces, VMs, zones, and jails are like a pre-assembled Lego set that you don't need to play around with in the same way. From this perspective, it's easy to see how containers are relevant to chaos engineering: they empower a level of experimentation that you simply don't have with other virtualization technologies. "The box says to build the Death Star. But you can build whatever you want."

The chaos ends...

Chaos Conf was undoubtedly a huge success, and a lot of credit has to go to Gremlin for organizing the conference. It's clear that the team cares a lot about the chaos engineering community and wants it to expand in a way that transcends the success of the Gremlin platform. While chaos engineering might not feel relevant to a lot of people at the moment, it's only a matter of time before its impact is felt. That doesn't mean that everyone will suddenly become a chaos engineer by July 2019, but the cultural ripples will likely be felt across the software engineering landscape.

Without Chaos Conf, though, it would be difficult to see chaos engineering growing as a discipline or set of practices. By sharing ideas and learning how other people work, a more coherent picture of chaos engineering started to emerge, one that can quickly make an impact in ways people wouldn't have expected six months ago. You can watch videos of all the talks from Chaos Conf 2018 on YouTube.


5 nation joint Activity Alert Report finds most threat actors use publicly available tools for cyber attacks

Melisha Dsouza
12 Oct 2018
4 min read
NCCIC, in collaboration with the cybersecurity authorities of Australia, Canada, New Zealand, the United Kingdom, and the United States, has released a joint Activity Alert Report. The report highlights five publicly available tools frequently observed in cyber attacks worldwide.

Today, malicious tools are freely available and can be misused by cybercriminals to endanger public security and privacy. Numerous cyber incidents encountered on a daily basis challenge even the most secure networks and expose confidential information across the finance, government, and health sectors. What's surprising is that a majority of these exploits rely on freely available tools that find loopholes in security systems to achieve an attacker's objectives. The report highlights the five tools most frequently used by cybercriminals around the globe, which fall into five categories.

#1 Remote Access Trojan: JBiFrost

Once a RAT program is installed on a victim's machine, it allows remote administrative control of the system. It can then be used to exploit the system according to the hacker's objectives, for example by installing malicious backdoors to obtain confidential data. RATs are often difficult to detect because they are designed not to appear in lists of running programs and to mimic the behavior of legitimate applications. RATs can also disable network analysis tools (such as Wireshark) on the victim's system. Windows, Linux, Mac OS X, and Android are all susceptible to this threat. Hackers have spammed companies with emails to infiltrate their systems with the Adwind RAT; the full story can be found on Symantec's blog.

#2 Webshell: China Chopper

China Chopper has been in widespread use since 2012. Webshells are malicious scripts that are uploaded to a target system to grant the hacker remote access to administrative capabilities on that system. The hackers can then pivot to additional hosts within a network. China Chopper consists of a client side, which is run by the attacker, and a server side, which is installed on the victim server and is also attacker-controlled. The client can issue terminal commands and manage files on the victim server; it can upload and download files to and from the victim using wget, and then modify or delete the existing files.

#3 Credential Stealer: Mimikatz

Mimikatz is mainly used by attackers to access the memory of a targeted Windows system and collect the credentials of logged-in users. These credentials can then be used to access other machines on a network. Besides credentials, the tool can obtain Local Area Network Manager and NT LAN Manager hashes, certificates, and long-term keys on Windows XP (2003) through Windows 8.1 (2012r2). When the "Invoke-Mimikatz" PowerShell script is used to operate Mimikatz, its activity is difficult to isolate and identify. In 2017, this tool was used in combination with NotPetya to infect hundreds of computers in Russia and Ukraine; the attack paralyzed systems and disabled subway payment systems. The good news is that Mimikatz can be detected by most up-to-date antivirus tools. That being said, hackers can modify Mimikatz code to go undetected by antivirus.

#4 Lateral Movement Framework: PowerShell Empire

PowerShell Empire is a post-exploitation or lateral movement tool. It allows an attacker to move around a network after gaining initial access. The tool can be used to generate executables for social engineering access to networks. Using it, a threat actor can escalate privileges, harvest credentials, exfiltrate information, and move laterally across a network. Traditional antivirus tools fail to detect PowerShell Empire. In 2018, the tool was used by hackers sending out Winter Olympics-themed, socially engineered emails and malicious attachments in a spear-phishing campaign targeting several South Korean organizations.

#5 C2 Obfuscation and Exfiltration: HUC Packet Transmitter

HUC Packet Transmitter (HTran) is a proxy tool used by attackers to obfuscate their location. The tool intercepts and redirects Transmission Control Protocol (TCP) connections from the local host to a remote host, obscuring the attacker's communications with victim networks. Threat actors use HTran to facilitate TCP connections between the victim and a hop point, and can redirect their packets through multiple compromised hosts running HTran to gain greater access to hosts in a network.

The report encourages everyone to use it to stay informed about the potential network threats posed by these malicious tools, and it provides a detailed list of detection and prevention measures for each tool. Head over to the official US-CERT site for more information on this research.

6 artificial intelligence cybersecurity tools you need to know
How will AI impact job roles in Cybersecurity
New cybersecurity threats posed by artificial intelligence

Implementing Proximal Policy Optimization (PPO) algorithm in Unity [Tutorial]

Natasha Mathur
12 Oct 2018
10 min read
ML-Agents uses a reinforcement learning technique called PPO, or Proximal Policy Optimization. This is the preferred training method that Unity has developed, and it uses a neural network. The PPO algorithm is implemented in TensorFlow and runs in a separate Python process, communicating with the running Unity application over a socket. In this tutorial, we look at how to implement PPO, a reinforcement learning algorithm used for training ML agents in Unity. We also explore training statistics with TensorBoard. This tutorial is an excerpt taken from the book Learn Unity ML-Agents – Fundamentals of Unity Machine Learning by Micheal Lanham.

Before implementing PPO, let's look at how to set up the special Unity environment needed for controlling the Unity training environment. Go through the following steps to configure the 3D Ball environment for external training.

How to set up a 3D environment in Unity for external training

1. Open the Unity editor and load the ML-Agents demo unityenvironment project. If you still have it open from the last chapter, that will work as well.
2. Open the 3DBall.scene in the ML-Agents/Examples/3DBall folder.
3. Locate the Brain3DBrain object in the Hierarchy window and select it.
4. In the Inspector window, set the Brain Type to External.
5. From the menu, select Edit | Project Settings | Player and, in the Inspector window, set the Player resolution properties.
6. From the menu, select File | Build Settings.
7. Click on the Add Open Scene button and make sure that only the 3DBall scene is active in the Build Settings dialog.
8. Set the Target Platform to your chosen desktop OS (Windows, in this example) and click the Build button at the bottom of the dialog. You will be prompted to choose a folder to build into; select the python folder at the base of the ml-agents folder.
9. If you are prompted to enter a name for the file, enter 3DBall. On newer versions of Unity, from 2018 onward, the name of the executable will be set by the name of the Unity environment build folder, which will be python. Be sure that you know where Unity is placing the build, and be sure that the file ends up in the python folder. At the time of writing, on Windows, Unity names the executable python.exe rather than 3DBall.exe. This is important to remember when we set up the Python notebook.

With the environment built, we can move on to running the Basics notebook against the app. Let's see how to run the Jupyter notebook to control the environment.

Running the environment

Open up the Basics Jupyter notebook again; remember that we wanted to leave it open after testing the Python install. Go through the following steps to run the environment:

1. Ensure that you update the first code block with your environment name, like so:

env_name = "python"  # Name of the Unity environment binary to launch
train_mode = True  # Whether to run the environment in training or inference mode

We have the environment name set to "python" here because that is the name of the executable that gets built into the python folder. You can include the file extension, but you don't have to. If you are not sure what the filename is, check the folder; it really will save you some frustration.
2. Place your cursor inside the first code block and then click the Run button on the toolbar. Clicking Run will run the block of code your cursor is currently in.
This is a really powerful feature of a notebook; being able to move back and forth between code blocks and execute what you need is very useful when building complex algorithms.
3. Click inside the second code block and click Run. The second code block is responsible for loading code dependencies. Note the following line in the second code block:

from unityagents import UnityEnvironment

This line imports the unityagents UnityEnvironment class, which is our controller for running the environment.
4. Run the third code block. Note how a Unity window launches, showing the environment. You should also see output reporting a successful startup and the brain stats. If you encounter an error at this point, go back and ensure you have the env_name variable set with the correct filename.
5. Run the fourth code block. You should again see some more output, but unfortunately, with this control method, you don't see interactive activity. We will try to resolve this issue in a later chapter.
6. Run the fifth code block. This runs through some random actions in order to generate some random output.
7. Finally, run the sixth code block. This closes the Unity environment.

Feel free to review the Basics notebook and play with the code; a condensed sketch of what these code blocks do appears just before the TensorBoard section below. Take advantage of the ability to modify the code or make minor changes and quickly rerun code blocks. Now that we know how to set up a 3D environment in Unity, we can move on to the implementation of PPO.

How to implement PPO in Unity

The implementation of PPO provided by Unity for training has been set up in a single script that we can put together quite quickly. Open up Unity to the unityenvironment sample projects and go through the following steps:

1. Locate the GridWorld scene in the Assets/ML-Agents/Examples/GridWorld folder and double-click it to open it.
2. Locate the GridWorldBrain and set it to External.
3. Set up the project using the steps mentioned in the previous section.
4. From the menu, select File | Build Settings....
5. Uncheck any earlier scenes and be sure to click Add Open Scenes to add the GridWorld scene to the build.
6. Click Build to build the project, and again make sure that you put the output in the python folder. If you get lost, refer to the ML-Agents external brains section.
7. Open a Python shell or Anaconda prompt window. Be sure to navigate to the root source folder, ml-agents.
8. Activate the ml-agents environment with the following:

activate ml-agents

9. From the ml-agents folder, run the following command (you may have to use python3 instead, depending on your Python engine):

python python/learn.py python/python.exe --run-id=grid1 --train

This executes the learn.py script against the python/python.exe environment; be sure to substitute your executable name if you are not on Windows. We also set a useful run-id that we can use to identify runs later, and the --train switch so that the agent/brain is also trained.

As the script runs, you should see the Unity environment launch, and the shell window or prompt will start to show training statistics generated by learn.py. Let the training run for as long as it needs. Depending on your machine and the number of iterations, you could be looking at a few hours of training (yes, you read that right). As the environment is trained, you will see the agent moving around and getting reset over and over again. In the next section, we will take a closer look at what the statistics are telling us.
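Before moving on to the statistics, here is a rough, condensed sketch of what those Basics notebook code blocks boil down to when collected into a single Python script. It is only an illustration: it assumes the old unityagents API bundled with this generation of ML-Agents and a continuous-control environment such as 3D Ball, and attribute names (for example, action_space_size versus vector_action_space_size) changed between releases, so check the notebook that ships with your version before relying on it.

import numpy as np
from unityagents import UnityEnvironment  # controller class used by the Basics notebook

env_name = "python"   # name of the Unity environment binary built earlier
train_mode = True     # whether to run the environment in training or inference mode

# Launch the built Unity environment and look up the default brain (third code block).
env = UnityEnvironment(file_name=env_name)
default_brain = env.brain_names[0]
brain = env.brains[default_brain]

# Reset the environment and confirm it responds (fourth code block).
env_info = env.reset(train_mode=train_mode)[default_brain]
print("Number of agents:", len(env_info.agents))

# Step through a few episodes with random actions (fifth code block).
for episode in range(3):
    env_info = env.reset(train_mode=train_mode)[default_brain]
    episode_reward, done = 0.0, False
    while not done:
        # vector_action_space_size is the v0.4-era name; earlier releases used action_space_size.
        action = np.random.randn(len(env_info.agents), brain.vector_action_space_size)
        env_info = env.step(action)[default_brain]
        episode_reward += env_info.rewards[0]
        done = env_info.local_done[0]
    print("Episode", episode, "reward:", episode_reward)

# Shut the environment down so the Unity window closes cleanly (sixth code block).
env.close()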
Understanding training statistics with TensorBoard

Inherently, ML has its roots in statistics, statistical analysis, and probability theory. While we won't strictly use statistical methods to train our models the way some ML algorithms do, we will use statistics to evaluate training performance. Hopefully, you have some memory of high-school statistics, but if not, a quick refresher will certainly be helpful.

The Unity PPO trainer and other RL algorithms use a tool called TensorBoard, which allows us to evaluate statistics as an agent/environment is running. Go through the following steps to run another Grid environment while watching the training with TensorBoard:

1. Open the trainer_config.yaml file in Visual Studio Code or another text editor. This file contains the various training parameters we use to train our models.
2. Locate the configuration for the GridWorldBrain, as shown in the following code:

GridWorldBrain:
  batch_size: 32
  normalize: false
  num_layers: 3
  hidden_units: 256
  beta: 5.0e-3
  gamma: 0.9
  buffer_size: 256
  max_steps: 5.0e5
  summary_freq: 2000
  time_horizon: 5

3. Change the num_layers parameter from 1 to 3, as shown in the preceding configuration. This parameter sets the number of layers the neural network will have. Adding more layers allows our model to generalize better, which is a good thing. However, it will decrease our training performance, that is, increase the time it takes our agent to learn. Sometimes this isn't a bad thing if you have the CPU/GPU to throw at training, but not all of us do, so evaluating training performance is essential.
4. Open a command prompt or shell in the ml-agents folder and run the following command:

python python/learn.py python/python.exe --run-id=grid2 --train

Note how we updated the --run-id parameter from grid1 to grid2. This will allow us to add another run of data and compare it to the last run in real time. This starts a new training session. If you have problems starting a session, make sure you are only running one environment at a time.
5. Open a new command prompt or shell window in the same ml-agents folder. Keep your other training window running.
6. Run the following command:

tensorboard --logdir=summaries

This starts the TensorBoard web server, which serves up a web UI to view our training results.
7. Copy the hosting endpoint, typically http://localhost:6006, or perhaps the machine name, and paste it into a web browser. After a while, you should see the TensorBoard UI showing the results of training on GridWorld.

You will need to wait a while to see progress from the second training session. When you do, though, you will notice that the new model (grid2) lags behind in training: the blue line on each of the plots takes several thousand iterations to catch up. This is a result of the more general multi-layer network. It isn't a big deal in this example, but on more complex problems, that lag could make a huge difference. While some of the plots, such as the entropy plot, show the potential for improvement, overall we don't see a significant gain, so using a single-layer network for this example is probably sufficient.

We learned about PPO and its implementation in Unity. To learn more PPO concepts in Unity, be sure to check out the book Learn Unity ML-Agents – Fundamentals of Unity Machine Learning.
Implementing Unity game engine and assets for 2D game development [Tutorial] Creating interactive Unity character animations and avatars [Tutorial] Unity 2D & 3D game kits simplify Unity game development for beginners


Boston Dynamics adds military-grade parkour skills to its popular humanoid Atlas robot

Natasha Mathur
12 Oct 2018
2 min read
Boston Dynamics, a robotics design company, has now added parkour skills to its popular and advanced humanoid robot, Atlas. Parkour is a training discipline built on movement techniques that grew out of military obstacle course training. The company posted a video on YouTube yesterday that shows Atlas jumping over a log, then climbing and leaping up staggered tall boxes, mimicking a military parkour runner.

"The control software (in Atlas) uses the whole body including legs, arms, and torso, to marshal the energy and strength for jumping over the log and leaping up the steps without breaking its pace. (Step height 40 cm.) Atlas uses computer vision to locate itself with respect to visible markers on the approach to hit the terrain accurately", Boston Dynamics said in yesterday's video.

Atlas parkour

The original version of Atlas was made public back in 2013 and was created for the United States Defense Advanced Research Projects Agency (DARPA). It quickly became famous for its advanced control system, which coordinates the motion of the robot's arms, torso, and legs to achieve whole-body mobile manipulation.

Boston Dynamics then unveiled the next generation of its Atlas robot back in 2016. This next-gen, electrically powered and hydraulically actuated Atlas was capable of walking on snow, picking up boxes, and getting up by itself after a fall. It was designed mainly to operate outdoors and inside buildings. The next-generation Atlas uses sensors embedded in its body and legs to balance, and carries LIDAR and stereo sensors in its head, which help it avoid obstacles, assess terrain, and navigate.

Boston Dynamics has a variety of other robots, such as Handle, SpotMini, Spot, LS3, WildCat, BigDog, SandFlea, and RHex. These robots are capable of actions that range from doing backflips and opening (and holding) doors to washing dishes, trail running, and lifting boxes, among others. For more information, check out the official Boston Dynamics website.

Boston Dynamics' 'Android of robots' vision starts with launching 1000 robot dogs in 2019
Meet CIMON, the first AI robot to join the astronauts aboard ISS
What we learned at the ICRA 2018 conference for robotics & automation


Privacy experts urge the Senate Commerce Committee for a strong federal privacy bill “that sets a floor, not a ceiling”

Sugandha Lahoti
11 Oct 2018
9 min read
The Senate Commerce Committee held a hearing yesterday on consumer data privacy. The hearing focused on the perspective of privacy advocates and other experts. These advocates encouraged federal lawmakers to create strict data protection regulation rules, giving consumers more control over their personal data. The major focus was on implementing a strong common federal consumer privacy bill “that sets a floor, not a ceiling.” Representatives included Andrea Jelinek, the chair of the European Data Protection Board; Alastair Mactaggart, the advocate behind California's Consumer Privacy Act; Laura Moy, executive director of the Georgetown Law Center on Privacy and Technology; and Nuala O'Connor, president of the Center for Democracy and Technology. The Goal: Protect user privacy, allow innovation John Thune, the Committee Chairman said in his opening statement, “Over the last few decades, Congress has tried and failed to enact comprehensive privacy legislation. Also in light of recent security incidents including Facebook’s Cambridge Analytica and another security breach, and of the recent data breach in Google+, it is increasingly clear that industry self-regulation in this area is not sufficient. A national standard for privacy rules of the road is needed to protect consumers.” Senator Edward Markey, in his opening statement, spoke on data protection and privacy saying that “Data is the oil of the 21st century”. He further adds, “Though it has come with an unexpected cost to the users, any data-driven website that uses their customer’s personal information as a commodity, collecting, and selling user information without their permission.” He said that the goal of this hearing was to give users meaningful control over their personal information while maintaining a thriving competitive data ecosystem in which entrepreneurs can continue to develop. What did the industry tell the Senate Commerce Committee in the last hearing on the topic of consumer privacy? A few weeks ago, the Commerce committee held a discussion with Google, Facebook, Amazon, AT&T, and other industry players to understand their perspective on the same topic. The industry unanimously agreed that privacy regulations need to be put in place However, these companies pushed for the committee to make online privacy policy at the federal level rather than at the state level to avoid a nightmarish patchwork of policies for businesses to comply by. They also shared that complying by GDPR has been quite resource intensive. While they acknowledged that it was too soon to assess the impact of GDPR, they cautioned the Senate Commerce Committee that policies like the GDPR and CCPA could be detrimental to growth and innovation and thereby eventually cost the consumer more. As such, they expressed interest in being part of the team that formulates the new federal privacy policy. Also, they believed that the FTC was the right body to oversee the implementation of the new privacy laws. Overall, the last hearing’s meta-conversation between the committee and the industry was heavy with defensive stances and scripted almost colluded recommendations. The Telcos wanted tech companies to do better. The message was that user privacy and tech innovation are too interlinked and there is a need to strike a delicate balance to make privacy work practically. 
The key message from yesterday’s Senate Commerce Committee hearing with privacy advocates and an EU regulator

This time, the hearing focused solely on establishing strict privacy laws and drafting clear guidelines regarding definitions of ‘sensitive’ data, prohibited uses of data, and limits on how long corporations can hold on to consumer data for various uses. A focal point of the hearing was to give users the key elements of Knowledge, Notice, and No: consumers need knowledge that their data is being shared and how it is used, notice when their data is compromised, and the ability to say no to the entities that want their personal information.

The bill should also include limits on how companies can use consumers’ information. It should prohibit companies from giving financial incentives to users in exchange for their personal information; privacy must not become a luxury good that only the fortunate can afford. The bill should also ban “take it or leave it” offerings, in which a company requires a consumer to forfeit their privacy in order to consume a product. Companies should not be able to coerce users into providing their personal information by threatening to deprive them of a service. The law should include individual rights like the ability to access, correct, delete, and remove information. Companies should only collect user data that is absolutely necessary to carry out the service, and keep that private information safe and secure. The legislation should also include special protections for children and teenagers. The federal government should be given strong enforcement powers and robust rule-making authority in order to ensure rules keep pace with changing technologies. Some of the witnesses believed that the FTC may not be the right body to do this and that a new entity focused on this aspect may do a better and more agile job.

“We can’t be shy about data regulation”, Laura Moy

Laura Moy, Deputy Director of the Privacy and Technology Center at Georgetown University Law Center, talked at length about data regulation. “This is not a time to be shy about data regulation,” Moy said. “Now is the time to intervene.” She emphasized that information should not in any way be used for discrimination, nor should it be used to amplify hate speech, be sold to data brokers, or be used to target misinformation or disinformation. She also talked about robust enforcement, saying she plans to call for legislation to “enable robust enforcement both by a federal agency and state attorneys general and foster regulatory agility.” She addressed the question of whether companies should be able to tell consumers that if they don’t agree to share non-essential data, they cannot receive products or services; she disagreed, saying that if companies do so, they have violated the idea of “free choice”. She also addressed whether companies should be allowed to offer financial incentives in exchange for users’ personal information.

“GDPR was not a revolution, but just an evolution of a law [that existed for 20 years]”, Andrea Jelinek

Andrea Jelinek, Chairperson of the European Data Protection Board, highlighted the key concepts of GDPR and how it can be an inspiration for developing a policy in the U.S. at the federal level. In her opening statement, she said, “The volume of digital information doubles every two years and deeply modifies our way of life.
If we do not modify the roots of data processing gains with legislative initiatives, it will turn into a losing game for our economy, society, and each individual.” She addressed the issue of how GDPR is going to be enforced in the investigation of Facebook by Ireland’s data protection authority. She also gave statistics on the number of GDPR cases opened in the EU so far: as of October 1st, there were 272 cases regarding identification of the lead supervisory authority and supervisory authorities concerned, 243 issues on mutual assistance under Article 61 of the GDPR, and 223 opinions regarding data protection impact assessments. The company practices that have generated the most complaints and concerns from consumers revolved around user consent. She explained why GDPR went the “regulation route”, choosing one data privacy policy for the entire continent instead of each member country having its own. Jelinek countered Google’s point about compliance taking too much time and effort from the team by saying that, given Google’s size, it would have taken around 3.5 hours per employee to get the compliance implemented. She also observed that this could have been reduced a lot had they followed good data practices to begin with. She clarified that GDPR was not a really new or disruptive regulatory framework: in addition to the two years provided to companies to comply with the new rules, there was a 20-year-old data protection directive already in place in Europe in various forms. In that sense, she said, GDPR was not a revolution, but just an evolution of a law that existed for 20 years.

Californians for Consumer Privacy Act

Alastair Mactaggart, Chairman of Californians for Consumer Privacy, talked about CCPA’s two main elements: first, the Right to know, which allows Californians to know what information corporations have collected concerning them; second, the Right to say no, which lets them tell businesses to stop selling their personal information. He said, “CCPA puts the focus on giving choice back to the consumer and enforced data security, a choice which is sorely needed." He also addressed questions such as whether federal law should extend similar protections to 13-, 14-, and 15-year-olds.

What should the new Federal Privacy law look like according to CDT’s O’Connor

Center for Democracy and Technology (CDT) President and CEO Nuala O'Connor said, "As with many new technological advancements and emerging business models, we have seen exuberance and abundance, and we have seen missteps and unintended consequences. International bodies and US states have responded by enacting new laws, and it is time for the US federal government to pass omnibus federal privacy legislation to protect individual digital rights and human dignity, and to provide certainty, stability, and clarity to consumers and companies in the digital world."

She also highlighted five important pointers that should be kept in mind while designing the new federal privacy law:

1. A comprehensive federal privacy law should apply broadly to all personal data and unregulated commercial entities, not just to tech companies.
2. The law should include individual rights like the ability to access, correct, delete, and remove information.
3. Congress should prohibit the collection, use, and sharing of certain types of data when not necessary for the immediate provision of the service.
4. The FTC should be expressly empowered to investigate data abuses that result in discriminatory advertising and other practices.
5. A federal privacy law should be clear on its face and provide specific guidance to companies and markets about legitimate data practices.

It is promising to see the Senate Commerce Committee sincerely taking input from both industry and privacy advocates to enable building strict privacy standards. The hope is that this new legislation will be more focused on protecting consumer data than the businesses that profit from it. Only time will tell if a bipartisan consensus on this important initiative will be reached. For a detailed version of this story, it is recommended to listen to the full Senate Commerce Committee hearing.

Consumer protection organizations submit a new data protection framework to the Senate Commerce Committee.
Google, Amazon, AT&T met the U.S Senate Committee to discuss consumer data privacy.
Facebook, Twitter open up at Senate Intelligence hearing, the committee does ‘homework’ this time.


Mozilla announces $3.5 million award for ‘Responsible Computer Science Challenge’ to encourage teaching ethical coding to CS graduates

Melisha Dsouza
11 Oct 2018
3 min read
Mozilla, along with Omidyar Network, Schmidt Futures, and Craig Newmark Philanthropies, has launched an initiative for professors, graduate students, and teaching assistants at U.S. colleges and universities to integrate ethics into undergraduate computer science education and demonstrate its relevance. The competition, titled the 'Responsible Computer Science Challenge', has been launched to foster the idea of 'ethical coding'. Code written by computer scientists is widely used in fields ranging from data collection to analysis, and poorly designed code can have a negative impact on a user's privacy and security. This challenge seeks creative approaches to integrating ethics and societal considerations into undergraduate computer science education. Ideas pitched by contestants will be judged by an independent panel of experts from academia, for-profit and non-profit organizations, and tech companies. The best proposals will share awards of up to $3.5 million over the next two years.

"We are looking to encourage ways of teaching ethics that make sense in a computer science program, that make sense today, and that make sense in understanding questions of data."
-Mitchell Baker, founder and chairwoman of the Mozilla Foundation

What is this challenge all about?

Professors are encouraged to tweak class material, for example, integrating a reading assignment on ethics to go with each project, or having computer science lessons co-taught with teaching assistants from the ethics department. The coursework introduced should encourage students to use their logical skills and come up with ideas to incorporate humanistic principles.

The challenge consists of two stages: a Concept Development and Pilot Stage and a Spread and Scale Stage. The first stage will award winning proposals up to $150,000 to try out their ideas firsthand, for instance at the university where the educator teaches. The second stage will select the best of the pilots and grant them $200,000 to help them scale to other universities. Baker asserts that the competition and its prize money will yield substantial and relevant practical ideas. Ideas will be judged on the potential of their approach, the feasibility of success, the difference from existing solutions, impact on society, bringing new perspectives to ethics, and the scalability of the solution.

Mozilla's competition comes as a welcome venture at a time when many top universities, like Harvard and MIT, are taking initiatives to integrate ethics into their computer science departments. To know all about the competition, head over to Mozilla's official blog. You can also check out the entire coverage of this story at Fast Company.

Mozilla drops "meritocracy" from its revised governance statement and leadership structure to actively promote diversity and inclusion
Mozilla optimizes calls between JavaScript and WebAssembly in Firefox, making it almost as fast as JS to JS calls
Mozilla's new Firefox DNS security updates spark privacy hue and cry

Building your own Basic Behavior tree in Unity [Tutorial]

Natasha Mathur
11 Oct 2018
12 min read
Behavior trees (BTs) have been gaining popularity among game developers very steadily. Games such as Halo and Gears of War are among the more famous franchises to make extensive use of BTs. An abundance of computing power in PCs, gaming consoles, and mobile devices has made them a good option for implementing AI in games of all types and scopes. In this tutorial, we will look at the basics of a behavior tree and its implementation. Over the last decade, BTs have become the pattern of choice for many developers when it comes to implementing behavioral rules for their AI agents.

This tutorial is an excerpt taken from the book 'Unity 2017 Game AI programming - Third Edition' written by Raymundo Barrera, Aung Sithu Kyaw, and Thet Naing Swe.

Note: You need to have Unity 2017 installed on a system that has either Windows 7 SP1+, 8, 10, 64-bit versions or Mac OS X 10.9+.

Let's first have a look at the basics of behavior trees.

Learning the basics of behavior trees

Behavior trees got their name from their hierarchical, branching system of nodes with a common parent, known as the root. Behavior trees mimic the real thing they are named after—in this case, trees, and their branching structure. If we were to visualize a behavior tree, it would look something like the following figure (a basic tree structure).

Of course, behavior trees can be made up of any number of nodes and child nodes. The nodes at the very end of the hierarchy are referred to as leaf nodes, just like a tree. Nodes can represent behaviors or tests. Unlike state machines, which rely on transition rules to traverse through them, a BT's flow is defined strictly by each node's order within the larger hierarchy. A BT begins evaluating from the top of the tree (based on the preceding visualization), then continues through each child, which, in turn, runs through each of its children until a condition is met or the leaf node is reached. BTs always begin evaluating from the root node.

Evaluating the existing solutions - Unity Asset store and others

The Unity asset store is an excellent resource for developers. Not only are you able to purchase art, audio, and other kinds of assets, but it is also populated with a large number of plugins and frameworks. Most relevant to our purposes, there are a number of behavior tree plugins available on the asset store, ranging from free to a few hundred dollars. Most, if not all, provide some sort of GUI to make visualizing and arranging a fairly painless experience.

There are many advantages of going with an off-the-shelf solution from the asset store. Many of the frameworks include advanced functionality such as runtime (and often visual) debugging, robust APIs, serialization, and data-oriented tree support. Many even include sample leaf logic nodes to use in your game, minimizing the amount of coding you have to do to get up and running. Some other alternatives are Behavior Machine and Behavior Designer, which offer different pricing tiers (Behavior Machine even offers a free edition) and a wide array of useful features. Many other options can be found for free around the web as both generic C# and Unity-specific implementations. Ultimately, as with any other system, the choice of rolling your own or using an existing solution will depend on your time, budget, and project.

Implementing a basic behavior tree framework

Our example focuses on simple logic to highlight the functionality of the tree, rather than muddy up the example with complex game logic.
The goal of our example is to make you feel comfortable with what can seem like an intimidating concept in game AI, and give you the necessary tools to build your own tree and expand upon the provided code if you do so.

Implementing a base Node class

There is a base functionality that needs to go into every node. Our simple framework will have all the nodes derived from a base abstract Node.cs class. This class will provide said base functionality or at least the signature to expand upon that functionality:

using UnityEngine;
using System.Collections;

[System.Serializable]
public abstract class Node {
    /* Delegate that returns the state of the node. */
    public delegate NodeStates NodeReturn();

    /* The current state of the node */
    protected NodeStates m_nodeState;

    public NodeStates nodeState {
        get { return m_nodeState; }
    }

    /* The constructor for the node */
    public Node() {}

    /* Implementing classes use this method to evaluate the desired set of conditions */
    public abstract NodeStates Evaluate();
}

The class is fairly simple. Think of Node.cs as a blueprint for all the other node types to be built upon. We begin with the NodeReturn delegate, which is not implemented in our example, but the next two fields are. m_nodeState is the state of a node at any given point. As we learned earlier, it will be either FAILURE, SUCCESS, or RUNNING. The nodeState value is simply a getter for m_nodeState since it is protected and we don't want any other area of the code directly setting m_nodeState inadvertently. Next, we have an empty constructor, for the sake of being explicit, even though it is not being used. Lastly, we have the meat and potatoes of our Node.cs class—the Evaluate() method. As we'll see in the classes that implement Node.cs, Evaluate() is where the magic happens. It runs the code that determines the state of the node.

Extending nodes to selectors

To create a selector, we simply expand upon the functionality that we described in the Node.cs class:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class Selector : Node {
    /** The child nodes for this selector */
    protected List<Node> m_nodes = new List<Node>();

    /** The constructor requires a list of child nodes to be passed in */
    public Selector(List<Node> nodes) {
        m_nodes = nodes;
    }

    /* If any of the children reports a success, the selector will
     * immediately report a success upwards. If all children fail,
     * it will report a failure instead. */
    public override NodeStates Evaluate() {
        foreach (Node node in m_nodes) {
            switch (node.Evaluate()) {
                case NodeStates.FAILURE:
                    continue;
                case NodeStates.SUCCESS:
                    m_nodeState = NodeStates.SUCCESS;
                    return m_nodeState;
                case NodeStates.RUNNING:
                    m_nodeState = NodeStates.RUNNING;
                    return m_nodeState;
                default:
                    continue;
            }
        }
        m_nodeState = NodeStates.FAILURE;
        return m_nodeState;
    }
}

As we learned earlier, selectors are composite nodes: this means that they have one or more child nodes. These child nodes are stored in the m_nodes List<Node> variable. Although it's conceivable that one could extend the functionality of this class to allow adding more child nodes after the class has been instantiated, we initially provide this list via the constructor. The next portion of the code is a bit more interesting as it shows us a real implementation of the concepts we learned earlier. The Evaluate() method runs through all of its child nodes and evaluates each one individually.
As a failure doesn't necessarily mean a failure for the entire selector, if one of the children returns FAILURE, we simply continue on to the next one. Inversely, if any child returns SUCCESS, then we're all set; we can set this node's state accordingly and return that value. If we make it through the entire list of child nodes and none of them have returned SUCCESS, then we can essentially determine that the entire selector has failed and we assign and return a FAILURE state.

Moving on to sequences

Sequences are very similar in their implementation, but as you might have guessed by now, the Evaluate() method behaves differently:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class Sequence : Node {
    /** Children nodes that belong to this sequence */
    private List<Node> m_nodes = new List<Node>();

    /** Must provide an initial set of children nodes to work */
    public Sequence(List<Node> nodes) {
        m_nodes = nodes;
    }

    /* If any child node returns a failure, the entire node fails. Once all
     * nodes return a success, the node reports a success. */
    public override NodeStates Evaluate() {
        bool anyChildRunning = false;

        foreach (Node node in m_nodes) {
            switch (node.Evaluate()) {
                case NodeStates.FAILURE:
                    m_nodeState = NodeStates.FAILURE;
                    return m_nodeState;
                case NodeStates.SUCCESS:
                    continue;
                case NodeStates.RUNNING:
                    anyChildRunning = true;
                    continue;
                default:
                    m_nodeState = NodeStates.SUCCESS;
                    return m_nodeState;
            }
        }

        m_nodeState = anyChildRunning ? NodeStates.RUNNING : NodeStates.SUCCESS;
        return m_nodeState;
    }
}

The Evaluate() method in a sequence will need to return true for all the child nodes, and if any one of them fails during the process, the entire sequence fails, which is why we check for FAILURE first and set and report it accordingly. A SUCCESS state simply means we get to live to fight another day, and we continue on to the next child node. If any of the child nodes are determined to be in the RUNNING state, we report that as the state for the node, and then the parent node or the logic driving the entire tree can evaluate it again.

Implementing a decorator as an inverter

The structure of Inverter.cs is a bit different, but it derives from Node, just like the rest of the nodes. Let's take a look at the code and spot the differences:

using UnityEngine;
using System.Collections;

public class Inverter : Node {
    /* Child node to evaluate */
    private Node m_node;

    public Node node {
        get { return m_node; }
    }

    /* The constructor requires the child node that this inverter decorator wraps */
    public Inverter(Node node) {
        m_node = node;
    }

    /* Reports a success if the child fails and
     * a failure if the child succeeds. Running will report
     * as running */
    public override NodeStates Evaluate() {
        switch (m_node.Evaluate()) {
            case NodeStates.FAILURE:
                m_nodeState = NodeStates.SUCCESS;
                return m_nodeState;
            case NodeStates.SUCCESS:
                m_nodeState = NodeStates.FAILURE;
                return m_nodeState;
            case NodeStates.RUNNING:
                m_nodeState = NodeStates.RUNNING;
                return m_nodeState;
        }
        m_nodeState = NodeStates.SUCCESS;
        return m_nodeState;
    }
}

As you can see, since a decorator only has one child, we don't have List<Node>, but rather a single node variable, m_node. We pass this node in via the constructor (essentially requiring it), but there is no reason you couldn't modify this code to provide an empty constructor and a method to assign the child node after instantiation. The Evaluate() implementation implements the behavior of an inverter.
When the child evaluates as SUCCESS, the inverter reports a FAILURE, and when the child evaluates as FAILURE, the inverter reports a SUCCESS. The RUNNING state is reported normally.

Creating a generic action node

Now we arrive at ActionNode.cs, which is a generic leaf node to pass in some logic via a delegate. You are free to implement leaf nodes in any way that fits your logic, as long as it derives from Node. This particular example is equal parts flexible and restrictive. It's flexible in the sense that it allows you to pass in any method matching the delegate signature, but is restrictive for this very reason—it only provides one delegate signature that doesn't take in any arguments:

using System;
using UnityEngine;
using System.Collections;

public class ActionNode : Node {
    /* Method signature for the action. */
    public delegate NodeStates ActionNodeDelegate();

    /* The delegate that is called to evaluate this node */
    private ActionNodeDelegate m_action;

    /* Because this node contains no logic itself,
     * the logic must be passed in in the form of
     * a delegate. As the signature states, the action
     * needs to return a NodeStates enum */
    public ActionNode(ActionNodeDelegate action) {
        m_action = action;
    }

    /* Evaluates the node using the passed in delegate and
     * reports the resulting state as appropriate */
    public override NodeStates Evaluate() {
        switch (m_action()) {
            case NodeStates.SUCCESS:
                m_nodeState = NodeStates.SUCCESS;
                return m_nodeState;
            case NodeStates.FAILURE:
                m_nodeState = NodeStates.FAILURE;
                return m_nodeState;
            case NodeStates.RUNNING:
                m_nodeState = NodeStates.RUNNING;
                return m_nodeState;
            default:
                m_nodeState = NodeStates.FAILURE;
                return m_nodeState;
        }
    }
}

The key to making this node work is the m_action delegate. For those familiar with C++, a delegate in C# can be thought of as a function pointer of sorts. You can also think of a delegate as a variable containing (or more accurately, pointing to) a function. This allows you to set the function to be called at runtime. The constructor requires you to pass in a method matching its signature and is expecting that method to return a NodeStates enum. That method can implement any logic you want, as long as these conditions are met. Unlike other nodes we've implemented, this one doesn't fall through to any state outside of the switch itself, so it defaults to a FAILURE state. You may choose to default to a SUCCESS or RUNNING state, if you so wish, by modifying the default return. You can easily expand on this class by deriving from it or simply making the changes to it that you need. You can also skip this generic action node altogether and implement one-off versions of specific leaf nodes, but it's good practice to reuse as much code as possible. Just remember to derive from Node and implement the required code!

We learned the basics of how a behavior tree works, then created a sample behavior tree using our framework. If you found this post useful and want to learn other concepts in behavior trees, be sure to check out the book 'Unity 2017 Game AI programming - Third Edition'.

AI for game developers: 7 ways AI can take your game to the next level
Techniques and Practices of Game AI
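For reference, here is a minimal sketch (not part of the book excerpt) of how the classes above could be wired together in a scene. The NodeStates enum definition is an assumption, since the excerpt uses it without showing it, and BehaviorTreeRunner, its node names, and the attack/patrol delegates are purely hypothetical placeholders:

using System.Collections.Generic;
using UnityEngine;

/* Assumed definition of the enum the framework relies on;
 * verify it against the book's source before reusing. */
public enum NodeStates { SUCCESS, FAILURE, RUNNING }

/* Hypothetical driver: a root selector that tries an attack sequence first
 * and falls back to patrolling when that sequence fails. */
public class BehaviorTreeRunner : MonoBehaviour {
    private Node root;

    void Start() {
        // Leaf nodes wrap game logic in delegates that report a NodeStates value.
        ActionNode hasTarget = new ActionNode(() => NodeStates.FAILURE); // stand-in for real targeting logic
        ActionNode attack = new ActionNode(() => NodeStates.RUNNING);    // stand-in for real attack logic
        Sequence attackSequence = new Sequence(new List<Node> { hasTarget, attack });

        ActionNode patrol = new ActionNode(() => NodeStates.SUCCESS);    // stand-in for real patrol logic
        root = new Selector(new List<Node> { attackSequence, patrol });
    }

    void Update() {
        // Re-evaluate the tree every frame; RUNNING lets a behavior span multiple frames.
        root.Evaluate();
    }
}

Because the Selector above returns as soon as a child reports SUCCESS or RUNNING, the patrol node only runs while the attack sequence keeps failing, which is exactly the fallback behavior described in the walkthrough.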


Creating and deploying a chatbot using Dialogflow [Tutorial]

Bhagyashree R
10 Oct 2018
8 min read
Dialogflow (previously called API.AI) is a conversational agent building platform from Google. It is a web-based platform that can be accessed from any web browser. The tool has evolved over time from what was built as an answer to Apple Siri for the Android platform. It was called SpeakToIt, an Android app that created Siri-like conversational experiences on any Android smartphone. The AI and natural language technology that powered the SpeakToIt app was opened up to developers as API.AI in 2015.

This tutorial is an excerpt from a book written by Srini Janarthanam titled Hands-On Chatbots and Conversational UI Development. In this article, we will create a basic chatbot using Dialogflow, add user intents, and finally, we will see how to integrate the chatbot with a website and Facebook.

Setting up Dialogflow

First, let us create a developer account on API.AI (now called Dialogflow). Go to Dialogflow:

1. Click GO TO CONSOLE on the top-right corner.
2. Sign in. You may need to use your Google account to sign in.

Creating a basic agent

Let us create our first agent on Dialogflow:

1. To create a new agent, click the drop-down menu on the left of the home page and click Create new agent.
2. Fill in the form on the right. Give it a name and description. Choose a time zone and click CREATE.
3. This will take you to the page with the intents listing. You will notice that there are two intents already: Default Fallback Intent and Default Welcome Intent.
4. Let's add your first intent. An intent is what the user or bot wants to convey using utterances or button presses. An intent is a symbolic representation of an utterance. We need intents because there are many ways to ask for the same thing. The process of identifying intents is to map the many ways unambiguously to an intent. For instance, the user could ask to know the weather in their city using the following utterances:

"hows the weather in london"
"whats the weather like in london"
"weather in london"
"is it sunny outside just now"

In the preceding utterances, the user is asking for a weather report in the city of London. In some of these utterances, they also mention time (that is, now). In others, it is implicit. The first step of our algorithm is to map these many utterances into a single intent: request_weather_report. The intent name corresponds to users' intents, so name them from the user's perspective. Let's add a user_greet intent that corresponds to the act of greeting the chatbot by the user.

5. To add an intent, click the CREATE INTENT button. You will see the page where you can create a new intent.
6. Give the intent a name (for example, user_greet).
7. Add sample user utterances in the User says text field. These are sample utterances that will help the agent identify the user's intent. Let's add a few greeting utterances that the user might say to our chatbot:

hello
hello there
Hi there Albert
hello doctor
good day doctor

8. Ignore the Events tab for the moment and move on to the Action tab. Add a name to identify the system intent here (for example, bot_greet to represent the chatbot's greeting to the user).
9. In the Response tab, add the bot's response to the user. This is the actual utterance that the bot will send to the user. Let's add the following utterance in the Text response field. You can add more responses so that the agent can randomly pick one to make it less repetitive and boring:

Hi there. I am Albert. Nice to meet you!

You can also add up to 10 additional responses by clicking ADD MESSAGE CONTENT.
10. Click the SAVE button in the top-right corner to save the intent.

You have created your very first intent for the agent. Test it by using the simulator on the right side of the page. In the Try it now box, type hello and press Enter. You will see the chatbot recognizing your typed utterance and responding appropriately.

Now go on and add a few more intents by repeating steps 5 through 10. To create a new intent, click the + sign beside the Intents option in the menu on the left. Think about what kind of information users will ask the chatbot and make a list. These will become user intents. The following is a sample list to get you started:

request_name
request_birth_info
request_parents_names
request_first_job_experience
request_info_on_hobbies
request_info_patent_job
request_info_lecturer_job_bern

Of course, this list can be endless. So go on and have fun. Once you have put in a sufficient number of facts in the mentioned format, you can test the chatbot on the simulator as explained in step 10.

Deploying the chatbot

Now that we have a chatbot, let us get it published on a platform where users can actually use it. Dialogflow enables you to integrate the chatbot (that is, agent) with many platforms. Click Integrations to see all the platforms that are available. In this section, we will explore two platform integrations: website and Facebook.

Website integration

Website integration allows you to put the chatbot on a website. The user can interact with the chatbot on the website just as they would with a live chat agent.

1. On the Integrations page, find the Web Demo platform and slide the switch from off to on.
2. Click Web Demo to open the settings dialog box.
3. Click the bot.dialogflow.com URL to open the sample webpage where you can find the bot on a chat widget embedded on the page. Try having a chat with it.
4. You can share the bot privately by email or on social media by clicking the Email and Share option.

The chat widget can also be embedded in any website by using the iframe embed code found in the settings dialog box. Copy and paste the code into an HTML page and try it out in a web browser:

<iframe width="350" height="430"
  src="https://console.api.ai/api-client/demo/embedded/2d55ca53-1a4c-4241-8852-a7ed4f48d266">
</iframe>

Facebook integration

In order to publish the API.AI chatbot on Facebook Messenger, we need a Facebook page to start with. We also need a Facebook Messenger app that subscribes to the page. To perform the following steps you need to first create a Facebook page and a Facebook Messenger app. Let's discuss the further steps here:

1. Having created a Facebook Messenger app, get its Page Access Token. You can get this on the app's Messenger Settings tab.
2. In the same tab, click Set up Webhooks. A dialog box called New Page Subscription will open. Keep it open in one browser tab.
3. In another browser tab, from the Integrations page of API.AI, click Facebook Messenger.
4. Copy the URL in the Callback URL text field. This is the URL of the API.AI agent to call from the Messenger app. Paste this in the Callback URL text field of the New Page Subscription dialog box on the Facebook Messenger app.
5. Type in a verification token. It can be anything as long as it matches the one on the other side. Let's type in iam-einstein-bot.
6. Subscribe to messages and messaging_postbacks in the Subscription Fields section. And wait! Don't click Verify and Save just yet.
7. In the API.AI browser tab, you will have the integration settings open.
Slide the switch to on from the off position on the top-right corner. This will allow you to edit the settings.
8. Type the Verify Token. This has to be the same as the one used in the Facebook Messenger app settings in step 5.
9. Paste the Page Access Token and click START.
10. Now go back to the Facebook Messenger app and click Verify and Save. This will connect the app to the agent (chatbot).
11. Now on the Facebook Messenger settings page, under Webhooks, select the correct Facebook page that the app needs to subscribe to and hit Subscribe.

You should now be able to open the Facebook page, click Send Message, and have a chat with the chatbot.

Brilliant! Now you have successfully created a chatbot in API.AI and deployed it on two platforms: web and Facebook Messenger. In addition to these platforms, API.AI enables integration of your agent with several popular messaging platforms such as Slack, Skype, Cisco Spark, Viber, Kik, Telegram, and even Twitter.

If you found this post useful, do check out the book, Hands-On Chatbots and Conversational UI Development, which will help you explore the world of conversational user interfaces.

Build and train an RNN chatbot using TensorFlow [Tutorial]
Facebook's Wit.ai: Why we need yet another chatbot development framework?
Voice, natural language, and conversations: Are they the next web UI?


Building your first chatbot using Chatfuel with no code [Tutorial]

Bhagyashree R
09 Oct 2018
12 min read
Building chatbots is fun. Although chatbots are just another kind of software, they are very different in terms of the expectations that they create in users. Chatbots are conversational. This ability to process language makes them project a kind of human personality and intelligence, whether we as developers intend it to be or not. To develop software with a personality and intelligence is quite challenging and therefore interesting.

This tutorial is an excerpt from a book written by Srini Janarthanam titled Hands-On Chatbots and Conversational UI Development. In this article, we will explore a popular tool called Chatfuel, and learn how to build a chatbot from scratch. Chatfuel is a tool that enables you to build a chatbot without having to code at all. It is a web-based tool with a GUI editor that allows the user to build a chatbot in a modular fashion.

Getting started with Chatfuel

Let's get started. Go to Chatfuel's website to create an account:

1. Click GET STARTED FOR FREE. Remember, the Chatfuel toolkit is currently free to use.
2. This will lead you to one of the following two options: if you are logged into Facebook, it will ask for permission to link your Chatfuel account to your Facebook account; if you are not logged in, it will ask you to log into Facebook first before asking for permission. Chatfuel links to Facebook to deploy bots, so it requires permission to use your Facebook account.
3. Authorize Chatfuel to receive information about you and to be your Pages manager.

That's it! You are all set to build your very first bot.

Building your first bot

Chatfuel bots can be published on two deployment platforms: Facebook Messenger and Telegram. Let us build a chatbot for Facebook Messenger first. In order to do that, we need to create a Facebook Page. Every chatbot on Facebook Messenger needs to be attached to a page. Here is how we can build a Facebook Page:

1. Go to https://www.facebook.com/pages/create/.
2. Click the category appropriate to the page content. In our case, we will use Brand or Product and choose App Page.
3. Give the page a name. In our case, let's use Get_Around_Edinburgh. Note that Facebook does not make it easy to change page names, so choose wisely.

Once the page is created, you will see Chatfuel asking for permission to connect to the page. Click CONNECT TO PAGE. You will be taken to the bot editor. The name of the bot is set to My First Bot. It has a Messenger URL, which you can see by the side of the name. Messenger URLs start with m.me. You might notice that the bot also comes with a Welcome message that is built in. On the left, you see the main menu with a number of options, with the Build option selected by default.

Click the Messenger URL to start your first conversation with the bot. This will open Facebook Messenger in your browser tab. To start the conversation, click the Get Started button at the bottom of the chat window. There you go! Your conversation has just started. The bot has sent you a welcome message. Notice how it greets you with your name. It is because you have given the bot access to your info on Facebook. Now that you have built your first bot and had a conversation with it, give yourself a pat on your back. Welcome to the world of chatbots!

Adding conversational flow

Let's start building our bot:

1. On the welcome block, click the default text and edit it. Hovering the mouse around the block can reveal options such as deleting the card, rearranging the order of the cards, and adding new cards between existing cards.
2. Delete the Main menu button.
3. Add a Text card. Let's add a follow-up text card and ask the user a question.
4. Add buttons for user responses. Click ADD BUTTON and type in the name of the button. Ignore block names for now; since they are incomplete, they will appear in red. Remember, you can add up to three buttons to a text card.
5. Button responses need to be tied to blocks so that when users hit a button, the chatbot knows what to do or say. Let's add a few blocks. To add a new block, click ADD BLOCK in the Bot Structure tab. This creates a new untitled block. On the right side, fill in the name of the block. Repeat the same for each block you want to build.
6. Now, go back to the buttons and specify block names to connect to. Click the button, choose Blocks, and provide the name of the block.
7. For each block you created, add content by adding appropriate cards. Remember, each block can have more than one card. Each card will appear as a response, one after another.
8. Repeat the preceding steps to add more blocks and connect them to buttons of other blocks.
9. When you are done, you can test it by clicking the TEST THIS CHATBOT button in the top-right corner of the editor.

You should now see the new welcome message with buttons for responses. Go on and click one of them to have a conversation. Great! You now have a bot with a conversational flow.

Handling navigation

How can the user and the chatbot navigate through the conversation? How do they respond to each other and move the conversation forward? In this section, we will examine the devices that facilitate conversation flow.

Buttons

Buttons are a way to let users respond to the chatbot in an unambiguous manner. You can add buttons to text, gallery, and list cards. Buttons have a label and a payload. The label is what the user gets to see; the payload is what happens in the backend when the user clicks the button. A button can take one of four types of payloads: next block, URL, phone number, or share.

- The next block is identified by the name of the block. This will tell the chatbot which block to execute when the button is pressed.
- The URL can be specified if the chatbot is to open a web page in the embedded web browser. Since the browser is embedded, the size of the window can also be specified.
- The phone number can be specified if the chatbot is to make a voice call to someone.
- The share option can be used in cards such as lists and galleries to share the card with other contacts of the user.

Go to block cards

Buttons can be used to navigate the user from one block to another; however, the user has to push the button to enable navigation. There may be circumstances where the navigation needs to happen automatically. For instance, if the chatbot is giving the user step-by-step instructions on how to do something, it can be built by putting all the cards (one step of information per card) in one block. However, it might be a good idea to put them in different blocks for the sake of modularity. In such a case, we need to provide the user a next step button to move on to the next step. In Chatfuel, we can use the Go to Block card to address this problem. A Go to Block card can be placed at the end of any block to take the chatbot to another block. Once the chatbot executes all the cards in a block, it moves to another block automatically without any user intervention. Using Go to Block cards, we can build the chatbot in a modular fashion.
To add a Go to Block card at the end of a block, choose ADD A CARD, click the + icon, and choose the Go to Block card. Fill in the block name for redirection.

Redirections can also be made random and conditional. By choosing the random option, we can make the chatbot choose one of the mentioned blocks randomly. This adds a bit of uncertainty to the conversation. However, this needs to be used very carefully because the context of the conversation may get tricky to maintain. Conditional redirections can be done if there is a need to check the context before the redirection is done. Let's revisit this option after we discuss context.

Managing context

In any conversation, the context of conversation needs to be managed. Context can be maintained by creating a local cache where the information transferred between the two dialogue partners can be stored. For instance, the user may tell the chatbot their food preferences, and this information can be stored in context for future reference if not used immediately. Another instance is in a conversation where the user is asking questions about a story. These questions may be incomplete and may need to be interpreted in terms of the information available in the context. In this section, we will explore how context can be recorded and utilized during the conversation in Chatfuel.

Let's take the task of finding a restaurant as part of your tour guide chatbot. The conversation between the chatbot and the user might go as follows:

User: Find a restaurant
Bot: Ok. Where?
User: City center.
Bot: Ok. Any cuisine that you fancy?
User: Indian
Bot: Ok. Let me see... I found a few Indian restaurants in the city center. Here they are.

In the preceding conversation, up until the last bot utterance, the bot needs to save the information locally. When it has gathered all the information it needs, it can go off and search the database with appropriate parameters. Notice that it also needs to use that information in generating utterances dynamically. Let's explore how to do these two—dynamically generating utterances and searching the database. First, we need to build the conversational flow to take the user through the conversation just as we discussed in the Next steps section. Let's assume that the user clicks the Find_a_restaurant button on the welcome block. Let's build the basic blocks with text messages and buttons to navigate through the conversation.

User input cards

As you can imagine, building the blocks for every cuisine and location combination can become a laborious task. Let's try to build the same functionality in another way—forms. In order to use forms, the user input card needs to be used. Let's create a new block called Restaurant_search and add a User Input card to it. To add a User Input card, click ADD A CARD, click the + icon, and select the User Input card. Add all the questions you want to ask the user under MESSAGE TO USER. The answers to each of these questions can be saved to variables. Name the variables against every question. These variables are always denoted with double curly brackets (for example, {{restaurant_location}}).

Information provided by the user can also be validated before acceptance. In case the required information is a phone number, email address, or a number, these can be validated by choosing the appropriate format of the input information. After the user input card, let's add a Go to Block card to redirect the flow to the results page, and add a block where we present the results.
As you can see here, the variables holding information can be used in chatbot utterances. These will be dynamically replaced from the context when the conversation is happening. The following screenshot shows the conversation so far on Messenger.

Setting user attributes

In addition to the user input cards, there is also another way to save information in context. This can be done by using the Set Up User Attribute card. Using this card, you can set context-specific information at any point during the conversation. Let's take a look at how to do it. To add this card, choose ADD A CARD, click the + icon, and choose the Set Up User Attribute card. The preceding screenshot shows the user-likes-history variable being set to true when the user asked for historical attractions. This information can later be used to drive the conversation (as used in the Go to Block card) or to provide recommendations.

Variables that are already in the context can be reset to new values or to no value at all. To clear the value of a variable, use the special NOT SET value from the drop-down menu that appears as soon as you try to fill in a value for the variable. Also, you can set/reset more than one variable in a card.

Default contextual variables

Besides defining your own contextual variables, you can also use a list of predefined variables. The information contained in these variables includes the following:

- Information that is obtained from the deployment platform (that is, Facebook), including the user's name, gender, time zone, and locale
- Contextual information, such as the last pushed button, last visited block name, and so on

To get a full list of variables, create a new text card and type {{. This will open the drop-down menu with a list of variables you can choose from. This list will also include the variables created by you. As with the developer-defined variables, these built-in variables can also be used in text messages and in conditional redirections using the Go to Block cards.

Congratulations! In this tutorial, you have started a journey toward building awesome chatbots. Using tour guiding as the use case, we explored a variety of chatbot design and development topics along the way.

If you found this post useful, do check out the book, Hands-On Chatbots and Conversational UI Development, which will help you explore the world of conversational user interfaces.

Facebook's Wit.ai: Why we need yet another chatbot development framework?
Building a two-way interactive chatbot with Twilio: A step-by-step guide
How to create a conversational assistant or chatbot using Python

Microsoft open sources Infer.NET, its popular model-based machine learning framework

Melisha Dsouza
08 Oct 2018
3 min read
Last week, Microsoft open sourced Infer.NET, the cross-platform framework used for model-based machine learning. This popular machine learning engine, used in Office, Xbox, and Azure, will be available on GitHub under the permissive MIT license for free use in commercial applications.

Features of Infer.NET

The team at Microsoft Research in Cambridge initially envisioned Infer.NET as a research tool and released it for academic use in 2008. The framework has served as a base to publish hundreds of papers across a variety of fields, including information retrieval and healthcare. The team then started using the framework as a machine learning engine within a wide range of Microsoft products.

A model-based approach to machine learning

Infer.NET allows users to incorporate domain knowledge into their model. The framework can be used to build bespoke machine learning algorithms directly from that model. To sum it up, this framework actually constructs a learning algorithm for users based on the model they have provided.

Facilitates interpretability

Infer.NET also facilitates interpretability. If users have designed the model themselves and the learning algorithm follows that model, they can understand why the system behaves in a particular way or makes certain predictions.

Probabilistic Approach

In Infer.NET, models are described using a probabilistic program. This is used to describe real-world processes in a language that machines understand. Infer.NET compiles the probabilistic program into high-performance code for implementing something cryptically called deterministic approximate Bayesian inference. This approach allows a notable amount of scalability. For instance, it can be used in a system that automatically extracts knowledge from billions of web pages, comprising petabytes of data.

Additional Features

The framework also supports the ability of the system to learn as new data arrives. The team is also working towards developing and growing it further. Infer.NET will become a part of ML.NET (the machine learning framework for .NET developers). They have already set up the repository under the .NET Foundation and moved the package and namespaces to Microsoft.ML.Probabilistic. Being cross-platform, Infer.NET supports .NET Framework 4.6.1, .NET Core 2.0, and Mono 5.0. Windows users get to use Visual Studio 2017, while macOS and Linux folks have command-line options, which could be incorporated into the code wrangler of their choice.

Download the framework to learn more about Infer.NET. You can also check the documentation for a detailed User Guide. To know more about this news, head over to Microsoft's official blog.

Microsoft announces new Surface devices to enhance user productivity, with style and elegance
Neural Network Intelligence: Microsoft's open source automated machine learning toolkit
Microsoft's new neural text-to-speech service lets machines speak like people
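To make the idea of a probabilistic program more concrete, here is a minimal sketch along the lines of the classic two-coins example from the Infer.NET documentation. It assumes the post-move Microsoft.ML.Probabilistic.Models namespace mentioned above; treat the exact API surface as something to verify against the official user guide rather than as a definitive usage:

using System;
using Microsoft.ML.Probabilistic.Models;

class TwoCoins
{
    static void Main()
    {
        // Model: two fair coins and a derived variable for "both heads".
        Variable<bool> firstCoin = Variable.Bernoulli(0.5);
        Variable<bool> secondCoin = Variable.Bernoulli(0.5);
        Variable<bool> bothHeads = firstCoin & secondCoin;

        // The inference engine compiles the model and runs approximate Bayesian inference.
        InferenceEngine engine = new InferenceEngine();
        Console.WriteLine("P(both heads) = " + engine.Infer(bothHeads));

        // Observing evidence updates the posterior: once we know the coins
        // did not both come up heads, the belief about the first coin changes.
        bothHeads.ObservedValue = false;
        Console.WriteLine("P(first coin heads | not both heads) = " + engine.Infer(firstCoin));
    }
}

The point of the sketch is the workflow the article describes: you write down the model as a probabilistic program, and the engine derives the inference algorithm for you instead of you hand-coding one.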


Minecraft Java team are open sourcing some of Minecraft's code as libraries

Sugandha Lahoti
08 Oct 2018
2 min read
Stockholm's Minecraft Java team are open sourcing some of Minecraft's code as libraries for game developers. Developers can now use them to improve their Minecraft mods, use them for their own projects, or help improve pieces of the Minecraft Java engine. The team will open up different libraries gradually. These libraries are open source and MIT licensed. For now, they have open sourced two libraries: Brigadier and DataFixerUpper.

Brigadier

The first library, Brigadier, takes random strings of text entered into Minecraft and turns them into an actual function that the game will perform. Basically, if you enter in the game something like /give Dinnerbone sticks, it goes internally into Brigadier and gets broken down into pieces. Brigadier then tries to figure out what the player is trying to do with this random piece of text. Nathan Adams, a Java developer, hopes that giving the Minecraft community access to Brigadier can make it "extremely user-friendly one day." Brigadier has been available for a week now. It has already seen improvements in the code and the readme doc.

DataFixerUpper

Another important library of the Minecraft game engine, DataFixerUpper, is also being open sourced. When a developer adds a new feature into Minecraft, they have to change the way level data and save files are stored. DataFixerUpper turns these data formats into the format the game should currently be using. Also in consideration for open sourcing is the Blaze3D library, which is a complete rewrite of the render engine for Minecraft 1.14.

You can check out the announcement on the Minecraft website. You can also download Brigadier and DataFixerUpper.

Minecraft is serious about global warming, adds a new (spigot) plugin to allow changes in climate mechanics.
Learning with Minecraft Mods
A Brief History of Minecraft Modding