Tech Guides

Hyperledger: The Enterprise-ready Blockchain

Savia Lobo
26 Oct 2017
6 min read
As one of the most widely discussed phenomena across the global media, Blockchain has grown from mere hype into a mainstream reality. Leading industry experts from finance, supply chain, and IoT are collaborating to make Blockchain available for commercial adoption. But while Blockchain is being projected as the future of digital transactions, it still suffers from two major limitations: carrying out private transactions and scalability. As such, there was a widely felt need for a Blockchain-based distributed ledger that overcomes these problems.

Enter Hyperledger

Founded by the Linux Foundation in 2015, Hyperledger aims to provide enterprises with a platform to build robust blockchain applications for their businesses and to create open-source, enterprise-grade frameworks for carrying out secure business transactions. It acts as a hub where leading industry players and software developers work collaboratively on blockchain frameworks that can then be used to deploy blockchain applications for industries. With leading industry experts such as IBM, Intel, Accenture, and SAP, among others, collaborating with the Hyperledger community, and with the recent addition of BTS, Oracle, and the Patientory Foundation, the community is gaining a lot of traction. No wonder Brian Behlendorf, Executive Director at Hyperledger, says, "Growth and interest in Hyperledger remain high in 2017".

There are a total of 8 projects: five are frameworks (Sawtooth, Fabric, Burrow, Iroha, and Indy), and the other three are tools (Composer, Cello, and Explorer) supporting those frameworks. Each framework takes a different approach to building blockchain applications.

Hyperledger Fabric, the community's first framework, is contributed by IBM. It hosts smart contracts using Chaincode, written in Go or Java, which contains the business logic of the ledger.

Hyperledger Sawtooth, developed by Intel, offers a modular blockchain architecture. It uses Proof of Elapsed Time (PoET), a consensus algorithm developed by Intel for high efficiency among distributed ledgers.

Hyperledger Burrow, a joint proposal by Intel and Monax, is a permissioned smart contract machine. It executes smart contract code following the Ethereum specification with an engine, a strong audit trail, and a consensus mechanism.

Apart from these already launched frameworks, two more, namely Indy and Iroha, are still in the incubation phase. The Hyperledger community is also building supporting tools: Composer, which has already been launched, and Cello and Explorer, which are awaiting unveiling.

Note: Although a plethora of Hyperledger tools and frameworks are available, in the rest of the article we take Hyperledger Fabric, one of the most popular and trending frameworks, to demonstrate how Hyperledger is being used by businesses.

Why should businesses use Hyperledger?

To settle on a framework upon which Blockchain apps can be built, several key aspects are worth considering. Some of the most important ones are portability, security, reliability, interoperability, and user-friendliness. Hyperledger as a platform offers all of the above for building cross-platform, production-ready applications for businesses. Let's take a simple example to see how Hyperledger works for businesses. Consider a restaurant business.
A restaurant owner buys vegetables from a wholesale shop at a much lower cost than in the market. The shopkeeper creates a network wherein other buyers cannot see the price at which vegetables are sold to a particular buyer. Similarly, the restaurant owner can view only his own transaction with the shopkeeper. For the vegetables to reach the restaurant, they must pass through numerous stages such as transport, delivery, and so on. The restaurant owner can track the delivery of his vegetables at each stage, and so can the shopkeeper. The transport and delivery organizations, however, cannot see the transaction details. This means the shopkeeper can establish a confidential network within a private network of other stakeholders. This type of network can be set up using Hyperledger Fabric.

Let's break down the above example into some of the reasons to consider incorporating Hyperledger for your business networks:

- With Hyperledger you get performance, scalability, and multiple levels of trust.
- You get data on a need-to-know basis: only the parties in the network that need the data get to know about it.
- Backed by bigshots like Intel and IBM, Hyperledger strives to offer a strong standard for Blockchain code, which in turn provides better functionality at increased speeds.

Furthermore, with the recent release of Fabric v1.0, businesses can create out-of-the-box blockchain solutions on its highly elastic and extensible architecture, made easier still by Hyperledger Composer. Composer helps businesses create smart contracts and blockchain applications without having to know the underlying intricacies of the blockchain network. It is a great fit for real-world enterprise usage, built with collaborative efforts from leading industry experts.

Although Ethereum is used by many businesses, some of the reasons why Hyperledger could be a better enterprise fit are:

- While Ethereum is a public Blockchain, Hyperledger is a private blockchain. This means enterprises within the network know who is present on the peer nodes, unlike Ethereum.
- Hyperledger is a permissioned network, i.e., it can control who participates in the consensus mechanism of the Blockchain network. Ethereum, on the other hand, is permissionless.
- Hyperledger has no built-in cryptocurrency. Ethereum, on the other hand, has a built-in cryptocurrency called Ether. Many applications don't need a cryptocurrency to function, and for them using Ethereum can be a disadvantage.
- Hyperledger gives you the flexibility of choosing a programming language such as Java or Go for writing smart contracts. Ethereum, on the other hand, uses Solidity, which is far less commonly used.
- Hyperledger is highly scalable, unlike traditional Blockchain and Ethereum, with minimal performance losses.

"Since Hyperledger Fabric was designed to meet key requirements for permissioned blockchains with transaction privacy and configurable policies, we've been able to build solutions quickly and flexibly." - Mohan Venkataraman, CTO, IT People Corporation.

Future of Hyperledger

The Hyperledger community is expanding rapidly, with many industries collaborating and offering their capabilities in building cross-industry blockchain applications. Hyperledger has found adoption within business networks in varied industries such as healthcare, finance, and supply chain to build state-of-the-art blockchain applications that assure privacy and decentralized permissioned networks.
It is shaping up to be a technology that can revolutionize the way businesses handle access control within a consortium, with an armor of enhanced security measures. With continued development of these frameworks, smarter, faster, and more secure business transactions will soon be a reality. Besides, we can expect to see Hyperledger on the cloud, with IBM's plans to extend Blockchain technologies onto its cloud. Add to that the exciting prospect of blending aspects of Artificial Intelligence with Hyperledger, and transactions look more advanced, tamper-proof, and secure than ever before.

Why does more than half the IT industry suffer from Burnout?

Aaron Lazar
02 Jul 2018
7 min read
I remember when I was in college a few years ago, this was a question everyone was asking. People who were studying Computer Science were always scared of this happening. It's ironic, because knowing the above, they were still brave enough to get into Computer Science in the first place! Okay, on a serious note, this is a highly debated topic and the IT industry is notorious for employee burnout.

The harsh reality

Honestly speaking, I have developer friends who earn pretty good salary packages, even those working at a junior level. However, just two in five of them are actually satisfied with their jobs. They seem to be heading towards burnout quite quickly, too quickly in fact. I would understand if you told me that a middle-aged person, having certain health conditions et al, working in a tech company, was nearing burnout. Instead, I see people in their early 20s struggling to keep up, wishing for the weekend to come!

Facts and figures

Last month, a workplace app called Blind surveyed over 11K (11,487 to be precise) employees in the tech industry, and the responses weren't surprising, at least not to me. The question posed to them was pretty simple: Are you currently suffering from job burnout? (Source: TeamBlind.) Oh yeah, that's a whopping 6,566 employees! Here are some more shocking stats:

- When narrowed down to 30 companies, 25 of them had an employee burnout rate of 50% or higher. Only 5 companies had an employee burnout rate below 50%.
- Moreover, 16 out of the 30 companies had an employee burnout rate higher than the survey average of 57.16%.
- While Netflix had the fewest employees facing burnout, companies like Credit Karma, Twitch and Nvidia recorded the highest.

I thought I'd analyse a bit and understand what some of the most common reasons causing burnout in the tech industry could be. So here they are:

#1 Unreasonable workload

Now I know this is true for a fact! I've been working closely with developers and architects for close to 5 years now and I'm aware of how unreasonable projects can get. Especially their timelines. Customer expectation is something really hard to meet in the IT sector, mainly because the customer usually doesn't know much about tech. Still, deadlines are set extremely tight, like a noose around developers' necks, not giving them any space to maneuver whatsoever. Naturally, this comes down hard on them and they will surely experience burnout at some point, if they haven't already.

#2 Unreasonable managers

In our recent Skill-Up survey, more than 60% of the respondents felt they knew more about tech than their managers did. More than 40% claimed that the biggest organisational barrier to their organisation's (and their own) goals was their manager's lack of tech knowledge. As with almost everyone, developers expect managers to be like mentors, able to guide them in taking the right decisions and making the right choices. Instead, lacking that knowledge, managers are unable to relate to their team members, ultimately coming across as unreasonable. On the other side of town, IT management has been rated as one of the top 20 most stressful jobs in the world by CareerAddict!

#3 Rapidly changing tech

The tech landscape is one that changes ever so fast, and developers tend to get caught up in the hustle to stay relevant. I honestly feel the quote "Time and tide wait for none" needs to be amended to "Time, tide and tech wait for none"!
The competition is so high that if they don't keep up, they're probably history in a couple of years or so. I remember in the beginning of 2016, there was a huge hype around Data Science and AI; there was a predicted shortage of a million data scientists by 2018. Thousands of engineers all around the world started diving into their pockets to fund Data Science Masters degrees. All of this can put a serious strain on their health, and they ultimately hit burnout.

#4 Disproportionate compensation

Tonnes of software developers feel they're underpaid, which obviously leads them to lose interest in their work. Ever wonder why developers jump companies so many times in their careers? This stagnation is happening while, on the other hand, work responsibilities are rising. There's a huge imbalance that's throwing employees off track. Chris Bolte, CEO of Paysa, says that companies recruit employees at competitive rates, but once they're on board, the companies don't tend to pay much more than the standard yearly increase. This is obviously a bummer and a huge demotivator for employees.

#5 Organisation culture

The culture prevailing in tech organisations has a lot to do with how fast employees reach burnout. No employee wants to feel like a tool or perhaps a cog in a wheel. They want to feel a sense of empowerment, that they're making an impact and that they have a say in the decisions that drive results. Without a culture of continuous learning and opportunities for professional and personal growth, employees are likely to be driven to burnout pretty quickly, either causing them to leave the organisation or, worse still, lose confidence in themselves.

#6 Work-life imbalance

This is a very tricky thing, especially if you're working long hours and you're mostly unhappy at work. Moreover, developers usually tend to take their work home so that they can complete projects on time, and that messes up everything. When there's no proper work-life balance, you're most probably going to run into a health problem, which will lead you to burnout eventually.

#7 Peer pressure

This happens a lot, not just in the IT industry, but it's more common here owing to the immense competition and the fast pace of the industry itself. Developers will want to put in more effort than they can, simply because their team members are already doing it. This can go two ways: either their efforts go unnoticed, or, although they're noticed, they've lost out on their health and other important aspects of life. By the time they think of actually doing something innovative and productive, they've crashed and burned.

If you ask me, burnout is part and parcel of every industry and it depends majorly on mindset, both the mindset of employees and that of the employer. Developers should try to avoid long work hours as far as possible, while taking their minds off work by picking up a nice hobby and exploring more ways to enrich their lives. On the other side of the equation, employers and managers should do better at understanding their team's limitations and problems, while also maintaining an unbiased approach towards the whole team. They should realize that a motivated and balanced team is great for their balance sheet in the long run. They must be serious enough to include employee morale and nurturing a great working environment among management's key performance indicators.
If the IT industry is to rise like a phoenix from the ashes, it will take more than a handful of people or organizations changing their ways. Change begins from within every individual, and at the top for every organization.

Should software be more boring? The "Boring Software" manifesto thinks so
These 2 software skills subscription services will save you time – and cash
Don't call us ninjas or rockstars, say developers

5 Reasons to learn programming

Aaron Lazar
25 Jun 2018
9 min read
The year is 2018 and it's all over the television, the internet, the newspapers; people are talking about it in coffee shops, at office desks across from where we sit, and what not. There's a scramble for people to learn how to program. It's a confusing and scary situation for someone who has never written a line of code to think about all these discussions doing the rounds. In this article, I'm going to give you 5 reasons why I think you should learn to code, even if you are not a programmer by profession.

Okay, first things first: What is Programming?

Programming is the process of writing/creating a set of instructions that tell a computer how to perform a certain task. Just as you would tell someone to do something in a language like English, computers also understand particular languages. These are called programming languages. There are several, like Java, Python, C# (pronounced C sharp), etc. Just as many would find English easier to learn than French or maybe Cantonese, every person finds each language different, although almost all languages can do pretty much the same thing. So now, let's see what our top 5 reasons are to learn a programming language, and ultimately, how to program a computer.

#1 Automate stuff

How many times do we find ourselves doing the same old monotonous work ourselves? For example, a salesperson who has a list of 100-odd leads will normally mail each person manually. How cool would it be if you could automate that and let your computer send each person a mail, separately and appropriately addressed? Or maybe you're a manager who has a load of data you can't really make sense of. You can use a language like Python to sort it and visualise your findings. Yes, that's possible with programming! There's a lot of other stuff that can be automated too, like HR scanning resumes manually. You can program your computer to do it for you, while you spend that time doing something more productive! While there might be software readily available that could do this for you, it's pretty much standard and non-customisable. With programming, you can build something that's tailor-made to your exact requirements.
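To make that first reason concrete, here is a minimal, hypothetical Python sketch of the salesperson example above. The file name, column names, and email template are invented for illustration; a real script would hand each message to an email library or API rather than printing it.

```python
# A tiny, hypothetical automation sketch: personalise an email for every lead in a file.
# Assumes a file "leads.csv" with columns name,email,product (example data, not from the article).
import csv

TEMPLATE = "Hi {name},\n\nThanks for your interest in {product}. Can we set up a quick call?\n"

with open("leads.csv", newline="") as f:
    for lead in csv.DictReader(f):
        message = TEMPLATE.format(name=lead["name"], product=lead["product"])
        # Print the drafted message; a real script would pass it to an email-sending service.
        print(f"To: {lead['email']}\n{message}")
```

Even a dozen lines like these can replace an hour of copy-and-paste work, which is exactly the kind of win the article is describing.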
#2 Start thinking more logically

When you learn to program, you start thinking about outcomes more logically. Programming languages are all about logic and problem-solving. You will soon learn how to break down problems into small parts and tackle them individually. You can apply this learning in your personal and work life.

#3 Earn great moolah

Programming pays really well, and even freelance jobs pay close to $100 an hour. You could keep your day job while taking advantage of your programming skills to build websites and games or create applications for clients, after work or over the weekend, making some good bucks. Here's a list of average salaries earned by programmers, based on the language they used. (Source: TOP 10 ChallengeRocket.com ranking of projected earnings in 2017.)

#4 Another great idea!

In case you're an entrepreneur or are planning to become one, learning a programming language is sure to benefit you a great deal. The most successful startups these days are AI and software based, and even though you might not be the one doing the programming, you will be interacting with those who will. It makes things much easier when you're discussing with such a person, and more importantly, it saves you from being taken for a ride in many ways.

#5 Having fun

Unlike several other things that are boring to learn and will get you frustrated in a matter of hours, programming isn't like that. That's not to say that programming doesn't have a learning curve, but with the right sources you can learn it quickly and effectively. There are few things that compare to the satisfaction of creating something. You can use programming to build your own game or maybe prank somebody! I tried that once: every time a friend clicked on the browser icon on my PC, it would make a loud farting noise! Don't believe me yet? Over 80% of respondents to our most recent Skill-Up survey said that they programmed for fun, outside of work.

#bonusreason! What's to lose?

I mean, seriously, what can you lose? You're going to be learning something completely new and will probably get much better at solving problems at home or at your workplace. If you're thinking you won't find time to learn, think again. I'm sure all of us can make time, at least an hour a day, to do something productive if we commit to it. And you can always consider this your "me time".

Okay, so now you have your 5+1 reasons to learn to program. You've had some quality time to think about it and you're ready to start learning. But you have some questions, like where to start? Do you need to take a course or a college degree? Will it cost much? How long will it take to learn programming? The list is never-ending. Here are some FAQs that most people ask me before they intend to start learning how to code. So here it goes...

FAQs

Where to start?

Honestly speaking, you can start in the confines of your home! You just need a computer, an internet connection and the will to learn if you want to get started with programming. You can begin by understanding what programming is a bit more, selecting a programming language, and then diving right into coding with the help of some material like the book Introduction to Programming.

What language do I pick?

Every language can pretty much do what others can, but there are certain languages that have been built to solve a particular problem. For example, JavaScript, HTML and CSS are mainly used for building websites. Python is quite simple to learn and can be used to do a variety of things, most notably working with data. On the other hand, C# can be used to develop some cool games, while also being a great language for building websites and other applications. Think about what you want to do and then choose a language accordingly. I would suggest you choose between Python and JavaScript to start off.

Do you need to take a course or a college degree?

Not really, unless you plan on making it your full-time career or becoming a software engineer or something like that. I've known some top professionals who haven't earned a degree and are still at the position where they are. Mark Zuckerberg, for example, dropped out of Harvard to start Facebook (he did receive an honorary degree in 2017, though). Programming is about learning to solve problems, and in most cases you don't need a degree to prove that you're great at solving problems. You can take an online course or buy a book to start learning. Sometimes, just reading code often can teach you a lot too. Take HTML and CSS for example. If you like how a website looks, you could just check out its source code to understand why it is the way it is. Do this for a few sites and you'll grasp the basics of what the HTML/CSS code does and how to write or alter simple code snippets.

Will it cost much?

You can learn a lot for free if you have plenty of time and patience at hand for sorting out the good from the bad. There are plenty of resources out there, from Q&A sites like Stack Overflow to YouTube with its vast collection of videos. If you are like most people with a day job, you are better off spending a little to learn. There are several reasonably priced videos and courses from Packt that will help you get started with computer programming. Alternatively, you can purchase a book or two for under $100. Trust me, once you become good at programming, you'll be earning way more than you invested!

How long will it take to learn programming?

I can't really answer that for certain. I took about 4 months to learn Python, while a friend of mine could code small programs within a couple of weeks. It all depends on the language you choose to learn, the amount of time you invest, and how committed you are to learning something new.

What jobs can I get?

You may be quite happy in your current job as a non-programmer who now knows how to code. But in case you're wondering about job prospects in programming, here is the rundown. As a programmer, you have a variety of jobs to choose from, depending on your area of interest. You could be a web developer, or a game developer, or you could be building desktop applications like a notepad or word processor. There are a huge number of jobs available for those who can work with a lot of data, as well as a growing number of jobs for professionals who can manage thousands of computers working together: their maintenance, security, and so on.

Okay, so you have enough information to start your adventures into learning programming! You might hear people talk a lot about professionals losing jobs due to automation. Don't let something like that be the reason you want to learn how to program. Computer Science and programming have become more ingrained in school education, and our little ones are being coached to be industry-ready. Always remember, programming is not everyone's cup of tea and you shouldn't do it just because everyone else is. Do it if you're really passionate about solving problems in a better way. You will never know if programming is really meant for you until you try it. So go forth and get your hands dirty with some code!

What is the difference between functional and object oriented programming?
The Top 7 Python programming books you need to read
Top 5 programming languages for crunching Big Data effectively

5 JavaScript machine learning libraries you need to know

Pravin Dhandre
08 Jun 2018
3 min read
Technologies like machine learning, predictive analytics, natural language processing and artificial intelligence are among the most trending and innovative technologies of the 21st century. Whether it is enterprise software or a simple photo editing application, they are all backed and rooted in machine learning technology, making them smart enough to be a friend to humans. Until now, the tools and frameworks capable of running machine learning were mostly developed in languages like Python, R and Java. Recently, however, the web ecosystem has picked up machine learning into its fold and is achieving a transformation in web applications. In this article, we will look at the most useful and popular libraries for performing machine learning in your browser, without the need for software installations, compilers or GPUs.

TensorFlow.js (GitHub: 7.5k+ stars)

With the growing popularity of TensorFlow among machine learning and deep learning enthusiasts, Google recently released TensorFlow.js, the JavaScript version of TensorFlow. With this library, JavaScript developers can train and deploy their machine learning models faster in the browser without much hassle. The library is speedy, flexible, scalable and a great way to get a practical taste of machine learning. With TensorFlow.js, importing existing models and retraining pretrained models is a piece of cake. To check out examples of TensorFlow.js, visit its GitHub repository.

ConvNetJS (GitHub: 9k+ stars)

ConvNetJS provides a neural network implementation in JavaScript, with numerous demos of neural networks available in its GitHub repository. The framework has a good number of active followers who are programmers and coders. The library provides support for various neural network modules, and popular machine learning techniques like classification and regression. Developers who are interested in getting reinforcement learning into the browser, or in training complex convolutional networks, can visit the ConvNetJS official page.

Brain.js (GitHub: 8k+ stars)

Brain.js is another addition to the web development ecosystem that brings smart features to the browser with just a few lines of code. Using Brain.js, one can easily create simple neural networks and develop smart functionality in browser applications without much complexity. It is already preferred by web developers for client-side applications like in-browser games, placement of ads, or character recognition. You can check out its GitHub repository to see a complete demonstration of approximating the XOR function using Brain.js.

Synaptic (GitHub: 6k+ stars)

Synaptic is a well-liked machine learning library for training recurrent neural networks, as it has a built-in architecture-free generalized algorithm. A few of the built-in architectures include multilayer perceptrons, LSTM networks and Hopfield networks. With Synaptic, you can develop various in-browser applications such as Paint an Image, Learn Image Filters, Self-Organizing Map or Reading from Wikipedia.

Neurojs (GitHub: 4k+ stars)

Another recently developed framework, especially for reinforcement learning tasks in your browser, is neurojs. It mainly focuses on Q-learning, but can be used for any type of neural-network-based task, whether it is building a browser game or an autonomous driving application. Some of the exciting features this library has to offer are a full-stack neural network implementation, extended support for reinforcement learning tasks, import/export of weight configurations and many more.
To see the complete list of features, visit the GitHub page.

How should web developers learn machine learning?
NVIDIA open sources NVVL, library for machine learning training
Build a foodie bot with JavaScript

How are container technologies changing programming languages?

Xavier Bruhiere
11 Apr 2017
7 min read
In March 2013, Solomon Hykes presented Docker, which democratized access to Linux containers. The underlying technology, control groups, had already been incubating for a few years at Google. But Docker abstracts away the complexity of the container lifecycle, and adoption skyrocketed among developers. In June 2016, Datadog published some compelling statistics about Docker adoption: the industry as a whole was increasingly adopting containers for production. Since everybody is talking about how to containerize everything, I would like to take a step back and study how it is influencing the development of our most fundamental medium: programming languages. The rise of Golang, the Java 8 release, Python 3.6 improvements: how do language development and the containerization market play together in 2017?

Scope of Container Technologies

Let's define the scope of what we call container technologies. Way back in 2006, two Google engineers started to work on a new technology for partitioning hierarchical groups of tasks. They called it cgroups and submitted the code to the Linux kernel. This lightweight approach to virtualization (sorry Mike) was an opportunity for infrastructure-heavy companies, and Heroku and Google, among others, took advantage of it to orchestrate so-called containers. Put simply, they were now able to think of application deployment as the dynamic manipulation of these deterministic runtimes. Whatever the code or the business logic, it was encapsulated into a uniform execution format.

Cgroups are very low level, though, and tooling around the original primitives quickly emerged, like LXC backed by Canonical. Then Solomon Hykes came in and made the technology widely accessible with Docker. The possibilities were endless and, indeed, developers and startups alike rushed in all directions. Lately, however, the hype seems to have cooled down. Docker's market share is being questioned while the company sorts out its business strategy. At the end of the day, developers forget about vendors and technology and just want simple tooling for more efficient coding. Docker Compose, Red Hat Container Development Kit, GC Container Builder, or local Kubernetes are very sophisticated pieces of technology that hide the details of the underlying container mechanics. What they give engineers are powerful primitives for advanced development practices: development/production environment parity, transparent service replication, and predictable runtime configuration.

However, this is not just about development correctness or convenience, considering how containers are eating the IaaS landscape. It is also about deployment optimizations and resilience. Tech giants who operate crazily large infrastructures have developed incredible frameworks, often in the open, to push how fast they can deploy auto-scalable, self-healing, zero-downtime fleets. Apache Mesos backed by Microsoft, or Kubernetes by Google, make at least two promises:

- Safe and agile deployments at the (micro-)service level
- Reliable orchestration with elegant service discovery, load-balancing, and failure management (because you have to accept that production always goes wrong at some point)

Containers enabled us to manage complexity with infrastructure design patterns like microservices or serverless. Behind the hype of these buzzwords, engineers try to improve team collaboration, safe and agile deployments, large project maintenance, and monitoring. However, we quickly came to realize it was sold with a DevOps tax.
Fortunately, the software industry has hard-won experience of striking such a balance, and we are starting to see it converge toward the most robust approaches. This container landscape overview hopefully provides the background needed to now study how containers have impacted the development of programming languages. We will look first at their ecosystems, and then dive into language design itself.

Language Ecosystems and Usages

Most developers are now aware of how invasive container technologies can be. They make their way into your development toolbox or into how your company manages its servers. Some will argue that the fundamentals of coding did not evolve much, but the trend is hard to ignore anyway. While we are free, of course, to stay away from Silicon Valley's latest fashions, I think containers tackle a problem most languages struggle with: dependencies and packaging. Go, for example, got packaging right, but it is still trying to figure out how to handle dependency versioning and vendoring. JavaScript, on the other hand, has npm to manage fine-grained third-party code, but build tools are scattered all over GitHub. Containers won't spare you the pain of setting things up (they target runtimes, not build steps), but they can lower the bar of language adoption. Official images can run most standard language projects, and one can both give a language a try and deploy a basic hello world in no time. When you realize that Go 1.5+ needs Go 1.4 to be compiled, it can be a relief to just docker run your five-line main.go.

Growing a language community is a sure way to develop its tooling and libraries, but containers also influence how we design those components. They are the cloud counterparts of the current functional trend. We tend to embrace a world where both functions and servers are immutable and single-purpose. We want predictable, pure primitives (in the mathematical sense). All of that to match increasingly distributed and intensive workloads. I hope those approaches come from a product's needs but, obviously, having the right technology at hand drives innovation. As software engineers in 2017, we also design libraries and tools with containers in mind: high-performance networking, distributed process management, data pipelines, and so on.

Language Design

What about languages themselves? To get things straight, I don't think containers influence how Guido van Rossum designs Python. And that is the point of containers. They abstract the runtime to let you focus on your code™ (it is literally on every Docker-based PaaS landing page). You should be able to design whatever logic implementation you need, and containers will come in handy to help you run it when needed. I do believe, however, that both languages' latest evolutions and the rise of containers serve the same maturation of ideas in the tech community.

- Correctness at compile time: Python 3.6, Elm, and JavaScript ES7 are bringing typing back to their languages (see type hints or TypeScript). An application running locally will launch just the same in production. You can even run tests against multiple runtimes without complex scripts or heavy setup.
- Simplicity: Go won a lot of its market share thanks to its initial simplicity, taking a lot of decisions for you. Containers try their best to offer one unified way to run code, whatever the stack.
- Functional: Scala, JavaScript, and Elixir all encourage immutable state, function composition with support for lambda expressions, and function purity.
It echoes the serverless trend that promotes functions as a service. Most providers leverage some kind of container technology to bring the required agility to their platforms. There is something elegant about having language features, programmatic design patterns, and infrastructure operations going hand in hand. While I don't think one of them influences the others, I certainly believe that their development smooths the others' innovations.

Conclusion

Container technologies and the fame around them are finally starting to converge toward fewer and more robust usages. At the same time, infrastructure designs, new languages, and evolutions of existing ones seem to promote the same underlying patterns: simple, functional, decoupled components. I think this coincidence comes from industry maturity and openness more than, as I said, one technology influencing the other. Containers, however, are shaking up how we collaborate and design tools for the languages we love. They change the way we onboard developers learning a new language. They change how we set up local development environments with micro-replicas of production topology. They change the way we package and deploy code. And, most importantly, they enable architectures like microservices or lambdas that influence how we design our programs.

In my opinion, programming language design should continue to evolve decoupled from containers. They serve different purposes and, given the pace of the tech industry, major languages should never depend on shiny new tools. That being said, the evolution of a language now comes with the activity of its community: what they build, how they use it, and how they spread it in companies. Coping with containers is an opportunity to bring in new developers, improve production robustness, and accelerate both technical and human growth.

About the author

Xavier Bruhiere is a lead developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high-intensity sports.

Exploring .NET Core 3.0 components with Mark J. Price, a Microsoft specialist

Packt Editorial Staff
15 Nov 2019
8 min read
There has been a continuous transformation over the last few years to bring .NET to platforms other than Windows. .NET Core 3.0 was released in September 2019 with a primary focus on adding Windows-specific features. .NET Core 3.0 supports side-by-side and app-local deployments, a fast JSON reader, serial port access and other pin access for Internet of Things (IoT) solutions, and tiered compilation on by default. In this article we will explore the .NET Core components of the new 3.0 release.

This article is an excerpt from the book C# 8.0 and .NET Core 3.0 - Modern Cross-Platform Development - Fourth Edition written by Mark J. Price. Mark follows a step-by-step approach in the book, filled with exciting projects and fascinating theory for the readers of this highly acclaimed franchise.

Pieces of .NET Core components

These are the pieces that play an important role in the development of .NET Core:

- Language compilers: These turn your source code written in languages such as C#, F#, and Visual Basic into intermediate language (IL) code stored in assemblies. With C# 6.0 and later, Microsoft switched to an open source, rewritten compiler known as Roslyn that is also used by Visual Basic.
- Common Language Runtime (CoreCLR): This runtime loads assemblies, compiles the IL code stored in them into native code instructions for your computer's CPU, and executes the code within an environment that manages resources such as threads and memory.
- Base Class Libraries (BCL) of assemblies in NuGet packages (CoreFX): These are prebuilt assemblies of types packaged and distributed using NuGet for performing common tasks when building applications. You can use them to quickly build anything you want, rather like combining LEGO™ pieces.

.NET Core 2.0 implemented .NET Standard 2.0, which is a superset of all previous versions of .NET Standard, and lifted .NET Core up to parity with .NET Framework and Xamarin. .NET Core 3.0 implements .NET Standard 2.1, which adds new capabilities and enables performance improvements beyond those available in .NET Framework.

Understanding assemblies, packages, and namespaces

An assembly is where a type is stored in the filesystem. Assemblies are a mechanism for deploying code. For example, the System.Data.dll assembly contains types for managing data. To use types in other assemblies, they must be referenced. Assemblies are often distributed as NuGet packages, which can contain multiple assemblies and other resources. You will also hear about metapackages and platforms, which are combinations of NuGet packages.

A namespace is the address of a type. Namespaces are a mechanism to uniquely identify a type by requiring a full address rather than just a short name. In the real world, Bob of 34 Sycamore Street is different from Bob of 12 Willow Drive. In .NET, the IActionFilter interface of the System.Web.Mvc namespace is different from the IActionFilter interface of the System.Web.Http.Filters namespace.

Understanding dependent assemblies

If an assembly is compiled as a class library and provides types for other assemblies to use, then it has the file extension .dll (dynamic link library), and it cannot be executed standalone. Likewise, if an assembly is compiled as an application, then it has the file extension .exe (executable) and can be executed standalone. Before .NET Core 3.0, console apps were compiled to .dll files and had to be executed by the dotnet run command or a host executable.
Any assembly can reference one or more class library assemblies as dependencies, but you cannot have circular references. So assembly B cannot reference assembly A if assembly A already references assembly B. The compiler will warn you if you attempt to add a dependency reference that would cause a circular reference.

Understanding the Microsoft .NET Core App platform

By default, console applications have a dependency reference on the Microsoft .NET Core App platform. This platform contains thousands of types in NuGet packages that almost all applications would need, such as the int and string types. When using .NET Core, you reference the dependency assemblies, NuGet packages, and platforms that your application needs in a project file.

Let's explore the relationship between assemblies and namespaces. In Visual Studio Code, create a folder named test01 with a subfolder named AssembliesAndNamespaces, and enter dotnet new console to create a console application. Save the current workspace as test01 in the test01 folder and add the AssembliesAndNamespaces folder to the workspace. Open AssembliesAndNamespaces.csproj, and note that it is a typical project file for a .NET Core application. Check out this code on GitHub.

Although it is possible to include the assemblies that your application uses with its deployment package, by default the project will probe for shared assemblies installed in well-known paths. First, it will look for the specified version of .NET Core in the current user's .dotnet/store and .nuget folders, and then it looks in a fallback folder that depends on your OS, as shown in the following root paths:

- Windows: C:\Program Files\dotnet\sdk
- macOS: /usr/local/share/dotnet/sdk

Most common .NET Core types are in the System.Runtime.dll assembly. There is not always a one-to-one mapping between assemblies and namespaces, as the following list of assemblies, with example namespaces and example types, shows:

- System.Runtime.dll: namespaces System, System.Collections, System.Collections.Generic; types Int32, String, IEnumerable<T>
- System.Console.dll: namespace System; type Console
- System.Threading.dll: namespace System.Threading; types Interlocked, Monitor, Mutex
- System.Xml.XDocument.dll: namespace System.Xml.Linq; types XDocument, XElement, XNode

Understanding NuGet packages

.NET Core is split into a set of packages, distributed using a Microsoft-supported package management technology named NuGet. Each of these packages represents a single assembly of the same name. For example, the System.Collections package contains the System.Collections.dll assembly. The following are the benefits of packages:

- Packages can ship on their own schedule.
- Packages can be tested independently of other packages.
- Packages can support different OSes and CPUs by including multiple versions of the same assembly built for different OSes and CPUs.
- Packages can have dependencies specific to only one library.
- Apps are smaller because unreferenced packages aren't part of the distribution.

The following list shows some of the more important packages and their important types:

- System.Runtime: Object, String, Int32, Array
- System.Collections: List<T>, Dictionary<TKey, TValue>
- System.Net.Http: HttpClient, HttpResponseMessage
- System.IO.FileSystem: File, Directory
- System.Reflection: Assembly, TypeInfo, MethodInfo

Understanding frameworks

There is a two-way relationship between frameworks and packages.
Packages define the APIs, while frameworks group packages. A framework without any packages would not define any APIs. Each .NET package supports a set of frameworks. For example, the System.IO.FileSystem package version 4.3.0 supports the following frameworks:

- .NET Standard, version 1.3 or later
- .NET Framework, version 4.6 or later
- Six Mono and Xamarin platforms (for example, Xamarin.iOS 1.0)

Understanding dotnet commands

When you install the .NET Core SDK, it includes the command-line interface (CLI) named dotnet.

Creating new projects

The dotnet command-line interface has commands that work on the current folder to create a new project using templates. In Visual Studio Code, navigate to Terminal and enter the dotnet new -l command to list your currently installed templates.

Managing projects

The dotnet CLI has the following commands that work on the project in the current folder, to manage the project:

- dotnet restore: This downloads dependencies for the project.
- dotnet build: This compiles the project.
- dotnet test: This runs unit tests on the project.
- dotnet run: This runs the project.
- dotnet pack: This creates a NuGet package for the project.
- dotnet publish: This compiles and then publishes the project, either with dependencies or as a self-contained application.
- add: This adds a reference to a package or class library to the project.
- remove: This removes a reference to a package or class library from the project.
- list: This lists the package or class library references for the project.

To summarize, we explored the .NET Core components of the new 3.0 release. If you want to learn the fundamentals, build practical applications, and explore the latest features of C# 8.0 and .NET Core 3.0, check out our latest book, C# 8.0 and .NET Core 3.0 - Modern Cross-Platform Development - Fourth Edition, written by Mark J. Price.

.NET Framework API Porting Project concludes with .NET Core 3.0
.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3
.NET Core 3.0 Preview 6 is available, packed with updates to compiling assemblies, optimizing applications ASP.NET Core and Blazor
Inspecting APIs in ASP.NET Core [Tutorial]
Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]

Amazon Sagemaker makes machine learning on the cloud easy

Amey Varangaonkar
12 Apr 2018
5 min read
Amazon Sagemaker was launched by Amazon back in November 2017. It was built with the promise of simplifying machine learning on the cloud. The software was a response not only to the increasing importance of machine learning, but also to the demand to perform machine learning in the cloud. Amazon Sagemaker is clearly a smart move by Amazon that will consolidate the dominance of AWS in the cloud market.

What is Amazon Sagemaker?

Amazon Sagemaker is Amazon's premium cloud-based service which serves as a platform for machine learning developers and data scientists to build, train and deploy machine learning models on the cloud. One of the features that makes Sagemaker stand out from the rest is that it is business-ready. This means machine learning models can be optimized for high performance and deployed at scale to work on data of varying sizes and complexity. The basic intention of Sagemaker, as Vogels mentioned in his keynote, is to remove any barriers that slow down the machine learning process for developers. In a standard machine learning process, a developer spends most of the time doing the following standard tasks:

- Collecting, cleaning and preparing the training data set
- Selecting the most appropriate algorithm for the machine learning problem
- Training the model for accurate prediction
- Optimizing the model's performance
- Integrating the model with the application
- Deploying the application to production

Most of these tasks require a lot of expertise and, more importantly, time and effort, not to mention computational resources such as storage space and processing memory. The larger the dataset, the bigger this problem becomes. Amazon Sagemaker removes these complexities by providing a solid platform with built-in modules that can be used together or individually to complete each of the above tasks with relative ease.

How Amazon Sagemaker Works

Amazon Sagemaker offers a lot of options for machine learning developers to train and optimize their machine learning models to work at scale. For starters, Sagemaker comes integrated with hosted Jupyter notebooks that allow developers to visually explore and analyze their dataset. You can also move your data directly from popular Amazon databases such as RDS, DynamoDB and Redshift into S3 and conduct your analysis there. Amazon Sagemaker includes 12 high-performance, production-ready algorithms which can be used to build and deploy models at scale. Some of the popular ones include k-means clustering, Principal Component Analysis (PCA), neural topic modeling, and more. It comes pre-configured with popular machine learning and deep learning frameworks such as TensorFlow, PyTorch, Apache MXNet and more, but you can also use your own framework without any hassle. Once your model is trained, Sagemaker makes use of AWS' auto-scaled clusters to deploy the model, making sure the model doesn't lack in performance and is highly available at all times. Not just that, Sagemaker also includes built-in testing capabilities for you to test and check your model for any issues before it is deployed to production.
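The workflow described above (point training at data in S3, then deploy to a managed endpoint) is typically driven from the SageMaker Python SDK. Below is a minimal, hypothetical sketch of that flow; the bucket names, container image URI, and instance types are placeholders, and parameter names vary slightly between SDK versions, so treat it as an outline rather than a copy-paste recipe.

```python
# A minimal, hypothetical sketch of training and deploying a model with the SageMaker Python SDK.
# All names below (bucket, image, instance types) are example placeholders.
import sagemaker
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = get_execution_role()  # IAM role that SageMaker assumes for training and hosting

estimator = Estimator(
    image_uri="<training-container-image>",   # e.g. one of SageMaker's built-in algorithm images
    role=role,
    instance_count=1,                          # older SDK versions use train_instance_count/_type
    instance_type="ml.m5.xlarge",
    output_path="s3://my-example-bucket/model-artifacts",
    sagemaker_session=session,
)

# fit() provisions training infrastructure on demand against data already staged in S3,
# and deploy() stands up a managed, auto-scaled HTTPS endpoint for real-time predictions.
estimator.fit({"train": "s3://my-example-bucket/train"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
# predictor.predict(payload) now returns inferences; delete the endpoint when it is no longer needed.
```

The point of the sketch is that none of the underlying clusters, containers, or endpoints has to be managed by hand, which is exactly the barrier-removal the article describes.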
Benefits of using Amazon Sagemaker

Businesses are likely to adopt Amazon Sagemaker mainly because it makes the whole machine learning process so effortless. With Sagemaker, it becomes very easy to build and deploy smarter applications that give accurate predictions, and thereby help increase business profitability.

- Significantly reduces time: With built-in modules, Sagemaker significantly reduces the time required to do a variety of machine learning tasks, and models can be deployed to production in very little time. This is important for businesses, as near real-time insights obtained from smart applications help them optimize their processes quickly and effectively get an edge over their competition.
- Effortless and more productive machine learning: By virtue of the one-click training and deployment feature offered by Sagemaker, machine learning engineers and developers can now focus on asking the right questions of the data and on the results rather than the process. They can also devote more time to optimizing the model rather than to collecting and cleaning the data, which takes up most of their time.
- Flexibility in using the algorithms and frameworks: With Sagemaker, developers have the freedom to choose the best possible algorithm and tool for performing machine learning effectively.
- Easy integration, access and optimization: The models trained using Sagemaker can be integrated into an existing business application seamlessly, and are optimized for speed and high performance. Backed by the computational power of AWS, businesses can rest assured their applications will continue to perform optimally without any risk of failure.

Sagemaker - Amazon's answer to Cloud AutoML

In a three-way cloud war between Google, Microsoft and Amazon, it is clear Google and Amazon are trying to go head to head to establish their supremacy in the market, especially in the AI space. Sagemaker is Amazon's answer to Google's Cloud AutoML, which was made publicly available in January and delivers a similar promise: making machine learning easier than ever for developers. With Amazon serving a large customer base, a platform like Sagemaker helps them create a system that runs at scale and handles vast amounts of data quite effortlessly. Amazon is yet to release any technical paper on how Sagemaker's streaming algorithms work, but that will certainly be something to look out for in the near future. Considering Amazon identifies AI as key to its future product development, to think of Sagemaker as a better, more complete cloud service which also has deep learning capabilities is definitely not far-fetched.

The best automation tools for sysadmins

Rick Blaisdell
09 Aug 2017
4 min read
Artificial Intelligence and cognitive computing have made huge strides over the past couple of years. Today, software automation has become an important tool that provides businesses with the necessary assets to keep up with market competition. Just take a look at ATMs, which have replaced bank tellers, or smart apps that have brought airline boarding passes to your fingertips. Moreover, as some statistics reveal, in the next couple of years around 45 percent of work activities will be replaced or affected by robotic process automation. However, this is not the focus of this post. Our focus here is how automation is helping us keep up the pace and streamline our activities.

So let's take a look at system administrators. There are plenty of tasks performed by sysadmins that could easily be automated. To make the job easier, here is a list of automation software that any system administrator would be interested in:

- WPKG: an automated software deployment, upgrade, and removal program that allows you to build dependency trees of applications. The tool runs in the background and doesn't need any user interaction. WPKG can be used to automate Windows 8 deployment tasks, so it's good to have in any toolbox.
- AutoHotkey: an open-source scripting language for Microsoft Windows that allows you to create keyboard and mouse macros. One of its most advantageous features is the ability to create stand-alone, fully executable .exe files from any script, which can run on other PCs.
- Puppet Open Source: I think every IT professional has heard about Puppet and how it has captured the market over the last couple of years. This tool allows you to automate your IT infrastructure from acquisition to provisioning and management. The advantages? Scalability and scope!

As I mentioned, automation has already started to change the way we do business, and it will continue doing so in the upcoming years. It can be a strong lever for improving service to your end users. Let's dive into the benefits of automation:

- Reliability: This might be considered one of the biggest advantages that automation can provide to an organization. Take computer operations as an example: they require a professional with both technical skills and agility in pressing buttons and performing other physical operations. We all know that human error is one of the most common problems in any business; automation removes these errors.
- System performance: Every business wishes to improve performance. Automation, thanks to its flexibility and agility, makes that possible.
- Productivity: Today we rely on computers and, most of the time, we work on complex tasks. One of the perks that automation has to offer is increased productivity through job-scheduling software. It eliminates the lag time between jobs while minimizing operator intervention.
- Availability: We all know how far cloud computing has come and how much hours of unavailability can cost. Automation can help by delivering high availability. Take service interruptions, for example: if your system crashes and automated backups are available, then you have nothing to worry about. Automated recovery will always play a huge role in business continuity.

Automation will always be in our future, and it will continue to grow. The key to using it to its full potential is to understand how it works and how it can help our business, regardless of the industry, as well as finding the best software to maximize efficiency.
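As a small, concrete illustration of the backup point above, here is a minimal, hypothetical Python sketch of the kind of job a sysadmin might schedule with cron or Task Scheduler. The paths and retention count are invented for the example, and it is an outline rather than a production backup tool.

```python
# A minimal, hypothetical backup job: archive a directory with a timestamp and
# keep only the most recent copies. Paths and the retention count are example values.
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path("/var/www/app")   # directory to back up (example path)
DEST = Path("/backups/app")     # where archives are stored (example path)
KEEP = 7                        # number of archives to retain

DEST.mkdir(parents=True, exist_ok=True)
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
archive = shutil.make_archive(str(DEST / f"app-{stamp}"), "gztar", root_dir=str(SOURCE))
print(f"created {archive}")

# Prune old archives so the backup volume never fills up.
for old in sorted(DEST.glob("app-*.tar.gz"))[:-KEEP]:
    old.unlink()
    print(f"removed {old}")
```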
About the Author Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies that reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies, developing innovative technology strategies.

What your organisation needs to know about GDPR

Aaron Lazar
16 Apr 2018
5 min read
GDPR is an acronym that has been doing the rounds for a couple of years now. It has become even more visible in the last few weeks, thanks to the Facebook and Cambridge Analytica data hijacking scandal. And with the deadline date looming - 25 May 2018 - every organization on the planet needs to make sure they're on top of things. But what is GDPR exactly? And how is it going to affect you?

What is GDPR?

Before April 2016, a data protection directive enforced in 1995 was in place. This governed all organisations that dealt with collecting, storing and processing data. This directive became outdated with rapidly evolving technological trends, which meant a revised framework was needed. In April 2016, the European Union drew up the General Data Protection Regulation. It has been specifically created to protect the personal data and privacy of European citizens. It's important to note at this point that the regulation doesn't just apply to EU organizations - it applies to anyone who deals with data on EU citizens. A relatively new genre of crime involving stealing data has cropped up over the past decade. Data is so powerful that its misuse could be devastating, possibly resulting in another world war. GDPR aims to set a new benchmark for the protection of consumer data rights by making organisations more accountable. Governed by GDPR, organisations will now be responsible for guarding every quantum of information that is connected to an individual, including IP addresses and web cookies! Read more: Why GDPR is good for everyone.

Why should organizations bother with GDPR?

In December 2017, RSA, the security company behind one of the first public-key cryptosystems, surveyed 7,500 customers in France, Italy, Germany, the UK and the US, and the results were interesting. When asked what their main concern was, customers responded that lost passwords, banking information, passports and other important documents were their major worry. The more interesting part was that over 60% of the respondents said that in the event of a breach, they would blame the organisation that lost their data rather than the hacker. If you work for or own a company that deals with the data of EU citizens, you'll probably have GDPR on your radar. If you don't comply, you'll face a hefty fine - more on that below.

What kind of data are we talking about?

The GDPR aims to protect data related to identity information like name, physical address, sexual orientation and more. It also covers any ID numbers; IP addresses, cookies and RFID tags; genetic data and any data related to health; biometric data like fingerprints, retina scans, etc.; racial or ethnic data; and political opinions.

Who must comply with GDPR?

You'll be governed by GDPR if:
You're a company located in the EU
You're not located in the EU but you still process data of EU citizens
You have more than 250 employees
You have fewer than 250 employees but process data that could impact the rights and freedom of EU citizens

When does GDPR come into force?

In case you missed it in the first paragraph, GDPR comes into effect on 25 May 2018. If you're not ready yet, now is the time to scramble to get things right and make sure you comply with GDPR regulations.

What if you don't make the date?

Unlike an invitation to a birthday party, if you miss the date to comply with the GDPR, you're likely to be fined to the tune of €20 million or 4% of the worldwide turnover of your company.
A lower tier of fines, €10 million or 2% of the worldwide turnover of your company, applies to misusing data in ways such as failing to report a data breach, failing to incorporate privacy by design, and failing to ensure that data protection is applied at the initial stage of a project. It also includes failure to appoint a Data Protection Officer/Chief Data Officer with professional experience and knowledge of data protection laws proportionate to what the organisation carries out. If it makes you feel any better, you're not the only one: a report from Ovum states that more than 50% of companies feel they're likely to be fined for non-compliance.

How do you prepare for GDPR?

Well, here are a few honest steps that you can take to ensure successful compliance:
Prepare to shell out between $1 million and $10 million to meet GDPR requirements
Hire a DPO or a CDO who's capable of handling all your data policies and migration
Fully understand GDPR and its requirements
Perform a risk assessment; understand what kind of data you store and what implications it might have
Strategize to mitigate that risk
Review or create your data protection plan
Plan for a 72-hour incident response system
Implement internal plans and policies, and ensure employees follow them

For the third time then: time is running out! It's imperative that you ensure your organisation complies with GDPR before the 25th of May, 2018. We'll follow up with some more thoughts to help you make the shift, as well as give you more insight into this game-changing regulation. If you own or are part of an organisation that has migrated to comply with GDPR, please share some tips in the comments section below to help others still in the midst of the transition.

Key trends in software development in 2019: cloud native and the shrinking stack

Richard Gall
18 Dec 2018
8 min read
Bill Gates is quoted as saying that we tend to overestimate the pace of change over a period of 2 years, but underestimate change over a decade. It's an astute observation: much of what will matter in 2019 actually looks a lot like what we said would be important in development this year. But if you look back 10 years, the change in the types of applications and websites we build - as well as how we build them - is astonishing. The web as we understood it in 2008 is almost unrecognisable. Today, we are in the midst of the app and API economy. Notions of surfing the web sound almost as archaic as a dial-up tone. Similarly, the JavaScript framework boom now feels old hat - building for browsers just sounds weird... So, as we move into 2019, progressive web apps, artificial intelligence, and native app development remain at the top of the development agenda. But this doesn't mean these changes are to be dismissed as empty hype. If anything, as adoption increases and new tools emerge, we will begin to see more radical shifts in ways of working. The cutting edge will need to sharpen itself elsewhere.

What will it mean to be a web developer in 2019?

But these changes are also driving wider changes in the industry. Arguably, they're transforming what it means to be a web developer. As applications become increasingly lightweight (thanks to libraries and frameworks like React and Vue), and data becomes more intensive, thanks to the range of services upon which applications and websites depend, developers need to expand across the stack. You can see this in some of the latest Packt titles - in Modern JavaScript Web Development Cookbook, for example, you'll learn microservices and native app development - topics that have typically fallen outside of the strict remit of web development. The simplification of many aspects of development has, ironically, forced developers to look more closely at how these aspects fit together. As you move further into layers of abstraction, the way things interact and work alongside each other becomes vital. For the most part, it's no longer a case of writing the requisite code to make something run on the specific part of the application you're working on; it's rather about understanding how the various pieces - from the backend to the frontend - fit together. This means, in 2019, you need to dive deeper and get to know your software systems inside out. Get comfortable with the backend. Dive into cloud. Start playing with microservices. Rethink and revisit languages you thought you knew.

Get to know your infrastructure: tackling the challenges of API development

It might sound strange, but as the stack shrinks and the responsibilities of developers - web and otherwise - shift, understanding the architectural components within the software they're building is essential. You could blame some of this on DevOps - essentially, it has made developers responsible for how their code runs once it hits production. Because of this important change, the requisite skills and toolchain for the modern developer are also expanding. There are a range of routes into software architecture, but exploring API design is a good place to begin. Hands-On RESTful API Design offers a practical way into the topic. While REST is the standard for API design, the diverse range of tools and approaches is making managing the client a potentially complex but interesting area.
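To ground the API discussion, here is a minimal, illustrative REST-style endpoint; Flask is an assumed choice here, and the resource and data are invented for the example:

from flask import Flask, jsonify

app = Flask(__name__)

# A hypothetical in-memory resource; a real service would sit on a data store
BOOKS = {1: {"id": 1, "title": "Modern JavaScript Web Development Cookbook"}}

@app.route("/api/books/<int:book_id>", methods=["GET"])
def get_book(book_id):
    book = BOOKS.get(book_id)
    if book is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(book)

if __name__ == "__main__":
    app.run()

The interesting design questions sit around this handler rather than inside it: versioning, authentication, and how clients consume the responses, which is where the tools below come in.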
GraphQL, a query language developed by Facebook, is said to have killed off REST (although we wouldn't be so hasty), while Redux and Relay, two libraries for managing data in React applications, have seen a lot of interest over the last 12 months as two key tools for working with APIs. Want to get started with GraphQL? Try Beginning GraphQL. Learn Redux with Learning Redux.

Microservices: take responsibility for your infrastructure

The reason that we're seeing so many tools offering ways of managing APIs is that microservices are becoming the dominant architectural mode. This requires developer attention too. That's not to say that you need to implement microservices now (in fact, there are probably many reasons not to), but if you want to be building software in 5 years' time, it's worth getting to grips with the principles behind microservices and the tools that can help you use them. Perhaps one of the central technologies driving microservices is containers. You could run microservices in a virtual machine, but because they're harder to scale than containers, you probably wouldn't be seeing the benefits you'd be expecting from a microservices architecture. This means getting to grips with core container technologies is vital. Docker is the obvious place to start. There are varying degrees to which developers need to understand it, but even if you don't think you'll be using it immediately, it does give you a nice real-world foundation in containers if you don't already have one. Watch and learn how to put Docker to work with the Hands-On Docker for Microservices video. But beyond Docker, Kubernetes is the go-to tool that allows you to scale and orchestrate containers. This gives you control over how you scale application services in a way that you probably couldn't have imagined a decade ago. Get a grounding in Kubernetes with Getting Started with Kubernetes - Third Edition, or follow a 7-day learning plan with Kubernetes in 7 Days. If you want to learn how Docker and Kubernetes come together as part of a fully integrated approach to development, check out Hands-On Microservices with Node.js.

It's time for developers to embrace cloud

It should come as no surprise that, if the general trend is towards full stack, where everything is everyone's problem, developers simply can't afford to ignore cloud. And why would you want to? The levels of abstraction it offers, and the various services and integrations that come with the leading cloud platforms, can make many elements of the development process much easier. Issues surrounding scale, hardware, setup and maintenance almost disappear when you use cloud. That's not to say that cloud platforms don't bring their own set of challenges, but they do allow you to focus on more interesting problems. But more importantly, they open up new opportunities. Serverless becomes a possibility, allowing you to scale incredibly quickly by running everything on your cloud provider, but there are other advantages too. Want to get started with serverless? Check out some of these titles:
JavaScript Cloud Native Development Cookbook
Hands-On Serverless Architecture with AWS Lambda [Video]
Serverless Computing with Azure [Video]

For example, when you use cloud you can bring advanced features like artificial intelligence into your applications. AWS has a whole suite of machine learning tools - AWS Lex can help you build conversational interfaces, while AWS Polly turns text into speech.
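To give a sense of how little code such a managed service needs, here is a hedged sketch using the boto3 SDK; it assumes AWS credentials are already configured and that Polly is available in your region:

import boto3

polly = boto3.client("polly")

# Ask the service to synthesize a short phrase; the voice choice is illustrative
response = polly.synthesize_speech(
    Text="Your build has finished.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

with open("notification.mp3", "wb") as f:
    f.write(response["AudioStream"].read())

The heavy lifting (the model, the scaling, the updates) stays on the provider's side, which is the whole appeal.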
Similarly, Azure Cognitive Services has a diverse range of features for vision, speech, language, and search. What cloud brings you, as a developer, is a way of increasing the complexity of applications and processes while maintaining agility. Adding in features and optimizations might previously have felt sluggish - maybe even impossible. But by leveraging AWS and Azure (among others), you can do much more than you previously realised.

Back to basics: New languages, and fresh approaches

With all of this ostensible complexity in contemporary software development, you'd be forgiven for thinking that languages simply don't matter. That's obviously nonsense. There's an argument that gaining a deeper understanding of how languages work, what they offer, and where they may be weak can make you a much more accomplished developer. 'Be prepared' is sage advice for a world where everything is unpredictable - both in the real world and inside our software systems too. So, you have two options - and both are smart. Either go back to a language you know and explore a new paradigm, or learn a new language from scratch.

Learn a new language:
Kotlin Quick Start Guide
Hands-On Go Programming
Mastering Go
Learning TypeScript 2.x - Second Edition

Explore a new programming paradigm:
Functional Programming in Go [Video]
Mastering Functional Programming
Hands-On Functional Programming in Rust
Hands-On Object-Oriented Programming with Kotlin

2019: the same, but different, basically...

It's not what you should be saying if you work for a tech publisher, but I'll be honest: software development in 2019 will look a lot like it did in 2018. But that doesn't mean you have time to be complacent. In just a matter of years, much of what feels new or 'emerging' today will be the norm. You don't have to look hard to see the set of skills many full stack developer job postings are asking for - the demands are so diverse that adaptability is clearly immensely valuable, both for your immediate projects and for your future career prospects. So, as 2019 begins, commit to developing yourself and sharpening your skill set.

AI and the Raspberry Pi: Machine Learning and IoT, What's the Impact?

RakaMahesa
11 Apr 2017
5 min read
Ah, Raspberry Pi, the little computer that could. On its initial release back in 2012, it quickly gained popularity among creators and hobbyists as a cheap and portable computer that could be the brain of their hardware projects. Fast forward to 2017, and the Raspberry Pi is on its third generation and has been used in many more projects across various fields of study. Tech giants are noticing this trend and have started to pay closer attention to the miniature computer. Microsoft, for example, released Windows 10 IoT Core, a variant of Windows 10 that can run on a Raspberry Pi. Recently, Google revealed that it has plans to bring artificial intelligence tools to the Pi. And it's not just Google's AI: more and more AI libraries and tools are being ported to the Raspberry Pi every day. But what does it all mean? Does it have any impact on how the Raspberry Pi is used? Does it change anything in the world of the Internet of Things? For starters, let's recap what the Raspberry Pi is and how it has been used so far. The Raspberry Pi, in short, is a super cheap computer (it only costs $35) the size of a credit card. However, despite its ability to be used as a normal, general-purpose computer, most people use the Raspberry Pi as the base of their hardware projects. These projects range from simple toy-like builds to complicated gadgets that actually do important work. They can be as simple as a media center for your TV or as complex as a house automation system. Do keep in mind that these kinds of projects can always be built using desktop computers, but it's not really practical to do so without the low price and the small size of the Raspberry Pi. Before we go on talking about having artificial intelligence on the Raspberry Pi, we need to have the same understanding of AI. Artificial Intelligence has a wide range of complexity. It can range from a complicated digital assistant like Siri, to a news-sorting program, to a simple face detection system of the kind found in many cameras. The more complicated the AI system, the bigger the computing power it requires. So, with the limited processing power we have on the Raspberry Pi, the types of AI that can run on that mini computer will be limited to the simple ones as well. Also, there's another aspect of AI called machine learning. It's the kind of technology that enables an AI to play and win against humans in a match of Go. The core of machine learning is basically to make a computer improve its own algorithm by processing a large amount of data. For example, if we feed a computer thousands of cat pictures, it will be able to define a pattern for 'cat' and use that pattern to find cats in other pictures. There are two parts to machine learning. The first is the training part, where we let a computer find an algorithm that suits the problem. The second is the application part, where we apply the new algorithm to solve the actual problem. While the application part can usually be run on a Raspberry Pi, the training part requires much higher processing power. To make it work, the training is done on a high-performance computer elsewhere, and the Raspberry Pi only executes the training result. So, now we know that the Raspberry Pi can run simple AI. But what's the impact of this? Well, to put it simply, having AI will enable creators to build an entirely new class of gadgets on the Raspberry Pi.
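Before getting to that, here is a minimal, illustrative sketch of the 'train elsewhere, run the result on the Pi' split described above; it assumes the tflite-runtime package is installed and that a pre-trained model file (here a hypothetical cat_detector.tflite) was produced on a more powerful machine:

import numpy as np
from tflite_runtime.interpreter import Interpreter

# Load a model that was trained somewhere else
interpreter = Interpreter(model_path="cat_detector.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Stand-in for a camera frame; a real project would read from the Pi camera
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
score = interpreter.get_tensor(output_details[0]["index"])
print("cat score:", score)

The Pi only runs the forward pass; all of the expensive training happened elsewhere, which is exactly the division of labour described above.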
Having AI on board will allow makers to create an actually smart device based on the small computer. Without AI, a so-called smart device will only act following a limited set of predefined rules. For example, we can develop a device that automatically turns off lights at a specific time every day, but without AI we can't have the device detect whether there's anyone in the room or not. With artificial intelligence, our devices will be able to adapt to unscripted changes in our environment. Imagine connecting a toy car to a Raspberry Pi and a webcam and having the car smartly map its path to a goal, or a device that automatically opens the garage door when it sees our car coming in. Having AI on the Raspberry Pi will enable the development of such smart devices. There's another thing to consider. One of the Raspberry Pi's strong points is its versatility. With its USB ports and GPIO pins, the computer is able to interface with various digital sensors. The addition of AI will enable the Raspberry Pi to work with even more kinds of input, like fingerprint readers or speech recognition with a microphone, further enhancing its flexibility. All in all, artificial intelligence is a perfect addition to the Raspberry Pi. It enables the creation of even smarter devices based on the computer and unlocks the potential of the Internet of Things for every maker and tinkerer in the world. About the author RakaMahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

5 Mistakes Developers Make When Working With HBase

Tess Hsu
19 Oct 2016
3 min read
Having worked with HBase for over six years, I want to share some common mistakes developers make when using HBase:

1. Using a PrefixFilter without setting a start row.
This came up several times on the mailing list over the years. Here is the filter (see the PrefixFilter source on GitHub). The use case is to find rows that have a given prefix. Some people complain that the scan was too slow using PrefixFilter. This was due to them not specifying the proper start row. Suppose there are 10K regions in the table, and the first row satisfying the prefix is in the 3000th region. Without a proper start row, the scan begins with the first region. In HBase 1.x, you can use the following method of Scan, which sets a start row for you:

public Scan setRowPrefixFilter(byte[] rowPrefix) {

2. Incurring low free HDFS space due to HBase snapshots hanging around.
In theory, you can have many HBase snapshots in your cluster. This does place a considerable burden on HDFS, and the large number of hfiles may slow down the Namenode. Suppose you have a five-column-family table with 40K regions. Each column family has 6 hfiles before compaction kicks in. For this table, you may have 1.2 million hfiles. Take a snapshot to reference the 1.2 million hfiles. After routine compactions, another snapshot is taken, so a million more hfiles (roughly) would be referenced. Prior hfiles stay until the snapshot that references them is deleted. This means that a practical schedule for cleaning unneeded snapshots is a recipe for satisfactory cluster performance.

3. Retrieving the last N rows without using a reverse scan.
In some scenarios, you may need to retrieve the last N rows. Assuming salting of keys is not involved, you can use the following API of Scan:

public Scan setReversed(boolean reversed) {

On the client side, you can choose the proper data structure so that sorting is not needed. For example, use a LinkedList.

4. Running multiple region servers on the same host due to heap size considerations.
Some users run several region servers on the same machine to keep as much data in the block cache as possible, while at the same time minimizing GC time. Compared to having one region server with a huge heap, GC tuning is a lot easier. Deployment has some pain points, because a lot of the start/stop scripts don't work out of the box. With the introduction of the bucket cache, GC activity comes down greatly. There is no need to use the above trick. See here.

5. Receiving a NoNode zookeeper exception due to a misconfigured parent znode.
When the zookeeper.znode.parent config value on the client side doesn't match the one for your cluster, you may see the following exception:

Exception in thread "main" org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
at com.ngdata.sep.util.zookeeper.ZooKeeperImpl.getData(ZooKeeperImpl.java:238)

One possible scenario is that hbase-site.xml is not on the classpath of the client application, so the default value for zookeeper.znode.parent doesn't match the actual one for your cluster. When you get hbase-site.xml onto the classpath, the problem should be gone.
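For completeness, here is the same prefix-scan advice seen from a Python client; this is a rough sketch that assumes the happybase library, an HBase Thrift gateway, and a hypothetical table name, and it is not part of the original article:

import happybase

# Connect through a Thrift gateway in front of the cluster (hostname is hypothetical)
connection = happybase.Connection("hbase-thrift-host")
table = connection.table("my_table")

# row_prefix starts the scan at the first matching row instead of
# sweeping the table from the first region
for key, data in table.scan(row_prefix=b"user123|"):
    print(key, data)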
About the author Ted Yu is a staff engineer at HortonWorks. He has also been an HBase committer/PMC for five years. His work on HBase covers various components: security, backup/restore, load balancer, MOB, and so on. He has provided support for customers at eBay, Micron, PayPal, and JPMC. He is also a Spark contributor.

GROVER: A GAN that fights neural fake news, as long as it creates said news

Vincy Davis
11 Jun 2019
7 min read
Last month, a team of researchers from the University of Washington and the Allen Institute for Artificial Intelligence published a paper titled 'Defending Against Neural Fake News'. The goal of this paper is to reliably detect "neural fake news" so that its harm can be minimized. To this end, the researchers have built a model named 'GROVER'. It works as a generator of fake news that can also spot its own generated fake news articles, as well as those generated by other AI models. GROVER (Generating aRticles by Only Viewing mEtadata Records) models can generate an efficient yet controllable news article, with not only the body, but also the title, news source, publication date, and author list. The researchers affirm that the 'best models for generating neural disinformation are also the best models at detecting it'. The framework for GROVER represents fake news generation and detection as an adversarial game:

Adversary: This system will generate fake stories that match specified attributes: generally, being viral or persuasive. The stories must read as realistic to both human users and the verifier.

Verifier: This system will classify news stories as real or fake. A verifier will have access to unlimited real news stories and few fake news stories from a specific adversary.

The dual objective of these two systems suggests an escalating 'arms race' between attackers and defenders. It is expected that as the verification systems get better, the adversaries too will follow.

Modeling Conditional Generation of Neural Fake News using GROVER

GROVER adopts a language modeling framework which allows for flexible decomposition of an article in the order of p(domain, date, authors, headline, body). At inference time, a set of fields 'F' is provided as context, with each field 'f' containing field-specific start and end tokens. During training, inference is simulated by randomly partitioning an article's fields into two disjoint sets F1 and F2. The researchers also randomly drop out individual fields with probability 10%, and drop out all but the body with probability 35%. This allows the model to learn how to perform unconditional generation. For language modeling, two evaluation modes are considered: unconditional, where no context is provided and the model must generate the article body; and conditional, in which the full metadata is provided as context. The researchers evaluate the quality of disinformation generated by their largest model, GROVER-Mega, using p=.96. The articles are classified into four classes: human-written articles from reputable news websites (Human News), GROVER-written articles conditioned on the same metadata (Machine News), human-written articles from known propaganda websites (Human Propaganda), and GROVER-written articles conditioned on the propaganda metadata (Machine Propaganda).

Image Source: Defending Against Neural Fake News

When rated by qualified workers on Amazon Mechanical Turk, it was found that though the quality of GROVER-written news is not as high as human-written news, it is very skilled at rewriting propaganda. The overall trustworthiness score of propaganda increases from 2.19 to 2.42 (out of 3) when rewritten by GROVER.

Neural Fake News Detection using GROVER

The role of the Verifier is to mitigate the harm of neural fake news by classifying articles as Human or Machine written. Neural fake news detection is framed as a semi-supervised problem.
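As a rough, purely illustrative sketch of the metadata-dropout step described above (this is not the authors' code, and it follows just one reading of the description; the field names mirror the p(domain, date, authors, headline, body) decomposition):

import random

FIELDS = ["domain", "date", "authors", "headline", "body"]

def dropout_fields(article):
    # With probability 0.35, keep only the body, which pushes the model
    # towards unconditional generation
    if random.random() < 0.35:
        return {"body": article["body"]}
    # Otherwise drop each field independently with probability 0.10
    return {f: article[f] for f in FIELDS if random.random() >= 0.10}

The real training setup additionally partitions the surviving fields into the context set F1 and the target set F2, which this sketch leaves out.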
The neural verifier (or discriminator) will have access to many human-written news articles from March 2019 and before, i.e., the entire RealNews training set. However, it will have only limited access to generations and more recent news articles. For example, 10k news articles from April 2019 are used for generating article body text, and another 10k articles are used as a set of human-written news articles; the combined set is split in a balanced way, with 10k for training, 2k for validation, and 8k for testing. It is evaluated using two modes:

In the unpaired setting, a verifier is provided with single news articles, which must be classified independently as Human or Machine.
In the paired setting, a model is given two news articles with the same metadata, one real and one machine-generated. The verifier must assign the machine-written article a higher Machine probability than the human-written article.

Both modes are evaluated in terms of accuracy.

Image Source: Defending Against Neural Fake News

It was found that the paired setting appears significantly easier than the unpaired setting across the board, suggesting that it is often difficult for the model to calibrate its predictions. Second, model size is highly important in the arms race between generators and discriminators. Using GROVER to discriminate GROVER's generations results in roughly 90% accuracy across the range of sizes. If a larger generator is used, accuracy slips below 81%; conversely, if the discriminator is larger, accuracy is above 98%. Lastly, other discriminators perform worse than GROVER overall. This suggests that effective discrimination requires having an inductive bias similar to the generator's. Thus it has been found that GROVER can rewrite propaganda articles, with humans rating the rewritten versions as more trustworthy. At the same time, GROVER can also defend against these models. The researchers are of the opinion that an ensemble of deep generative models, such as GROVER, should be used to analyze the content of a text. Obviously, the working of the GROVER model has caught many people's attention.

https://twitter.com/str_t5/status/1137108356588605440
https://twitter.com/currencyat/status/1137420508092391424

While some find this to be an interesting mechanism to combat fake news, others point out that it doesn't matter if GROVER can identify its own texts if it can't identify the texts generated by other models. Releasing a model like GROVER can turn out to be extremely irresponsible rather than defensive. A user on Reddit says that “These techniques for detecting fake news are fundamentally misguided. You cannot just train a statistical model on a bunch of news messages and expect it to be useful in detecting fake news. The reason for this should be obvious: there is no real information about the label ('fake' vs 'real' news) encoded in the data. Whether or not a piece of news is fake or real depends on the state of the external world, which is simply not present in the data. The label is practically independent of the data.” Another user on Hacker News comments that “Generative neural networks these days are both fascinating and depressing - feels like we're finally tapping into how subsets of human thinking & creativity work. But that knocks us off our pedestal, and threatens to make even the creative tasks we thought were strictly a human specialty irrelevant; I know we're a long way off from generalized AI, but we seem to be making rapid progress, and I'm not sure society's mature enough or ready for it.
Especially if the cutting edge tools are in the service of AdTech and such, endlessly optimizing how to absorb everybody's spare attention. Perhaps there's some bright future where we all just relax and computers and robots take care of everything for us, but can't help feeling like some part of the human spirit is dying.” A few users feel that this kind of 'generate and detect your own fake news' model is going to be unnecessary in the future. It's just a matter of time before text written by algorithms is indistinguishable from human-written text, and at that point there will be no way to tell such articles apart. A user suggests that “I think to combat fake news, especially algorithmic one, we'll need to innovate around authentication mechanism that can effectively prove who you are and how much effort you put into writing something. Digital signatures or things like that.” For more details about the GROVER model, head over to the research paper.

Worried about Deepfakes? Check out the new algorithm that manipulates talking-head videos by altering the transcripts
Speech2Face: A neural network that “imagines” faces from hearing voices. Is it too soon to worry about ethnic profiling?
OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence

Unity plugins for augmented reality application development

Sugandha Lahoti
10 Apr 2018
4 min read
Augmented Reality is the powerhouse for the next set of magic tricks headed to our mobile devices. Augmented Reality combines real-world objects with digital information. Heard about Pokemon Go? Niantic later rebuilt its AR mode on Apple's augmented reality framework, ARKit, which was first showcased at WWDC 2017. Following the widespread success of Pokemon Go, a large number of companies are eager to invest in AR technology. Unity is one of the dominant players in the industry when it comes to creating desktop, console and mobile games. Augmented Reality has been exciting game developers for quite some time now, and following this excitement Unity has released prominent tools for developers to experiment with AR apps. Bear in mind that Unity is not designed exclusively for Augmented Reality, so developers access additional functionality by importing extensions. These extensions also provide pre-designed game components such as characters or game props. Let us briefly look at three prominent tools and extensions for Augmented Reality development in Unity:

Unity ARKit plugin

The Unity ARKit plugin exposes the functionality of the ARKit SDK within Unity projects. As of September 2017, this plugin has also been extended for iOS apps as the iOS ARKit plugin. The ARKit plugin provides Unity developers with access to features such as motion tracking, vertical and horizontal plane finding, live video rendering, hit-testing, raw point cloud data, ambient light estimation, and more for their AR projects. The plugin also makes it easy to integrate AR features into existing Unity projects. A newer tool, the Unity ARKit Remote, speeds up iteration by allowing developers to make real-time changes to the scene and debug scripts in the Unity Editor. The latest update, ARKit 1.5, provides developers with more tools to power more immersive AR experiences.

Google ARCore

Google ARCore for Unity provides mobile AR experiences for Android without the need for additional hardware. The latest major version, ARCore 1.0, enables AR applications to track a phone's motion in the real world, detect planes in the environment, and understand lighting in the camera scene. ARCore 1.0 introduces oriented feature points, which help in the placement of anchors on textured surfaces. These feature points enhance the environmental understanding of the scene, so ARCore is not just limited to horizontal and vertical planes like ARKit, but can place AR content on any surface. ARCore 1.0 is supported by the Android Emulator in Android Studio 3.1 Beta and is available for use on multiple supported Android devices.

Vuforia integration with Unity

Vuforia allows developers to build cross-platform AR apps directly from the Unity editor. It provides Augmented Reality support for Android, iOS, and UWP devices through a single API. It attaches digital content to different types of objects and environments using Model Targets and Ground Plane, across a broad range of devices and operating systems. Ground Plane attaches digital content to horizontal surfaces. Model Targets provide object recognition capabilities. Other targets include Image (to put AR content on flat objects) and Cloud (to manage large collections of Image Targets from your own CMS). Vuforia also includes a Device Tracking capability, which provides an inside-out device tracker for rotational head and hand tracking. It also provides APIs to create immersive experiences that transition between AR and VR.
You can browse through various AR projects from the Unity community to help you get started with your next big AR idea as well as to choose the toolkit best suited for you. Leap Motion open sources its $100 augmented reality headset, North Star Unity and Unreal comparison Types of Augmented Reality targets Create Your First Augmented Reality Experience: The Tools and Terms You Need to Understand

Analyzing enterprise application behavior with Wireshark 2

Vijin Boricha
09 Jul 2018
19 min read
One of the important things that you can use Wireshark for is application analysis and troubleshooting. When the application slows down, it can be due to the LAN (quite uncommon in wired LAN), the WAN service (common due to insufficient bandwidth or high delay), or slow servers or clients. It can also be due to slow or problematic applications. The purpose of this article is to get into the details of how applications work, and provide relevant guidelines and recipes for isolating and solving these problems. In the first recipe, we will learn how to find out and categorize applications that work over our network. Then, we will go through various types of applications to see how they work, how networks influence their behavior, and what can go wrong. Further, we will learn how to use Wireshark in order to resolve and troubleshoot common applications that are used in an enterprise network. These are Microsoft Terminal Server and Citrix, databases, and Simple Network Management Protocol (SNMP). This is an excerpt from Network Analysis using Wireshark 2 Cookbook - Second Edition written by Nagendra Kumar Nainar, Yogesh Ramdoss, Yoram Orzach. Find out what is running over your network The first thing to do when monitoring a new network is to find out what is running over it. There are various types of applications and network protocols, and they can influence and interfere with each other when all of them are running over the network. In some cases, you will have different VLANs, different Virtual Routing and Forwardings (VRFs), or servers that are connected to virtual ports in a blade server. Eventually, everything is running on the same infrastructure, and they can influence each other. There is a common confusion between VRFs and VLANs. Even though their purpose is quite the same, they are configured in different places. While VLANs are configured in the LAN in order to provide network separation in the OSI layers 1 and 2, VRFs are multiple instances of routing tables to make them coexist in the same router. This is a layer 3 operation that separates between different customer's networks. VRFs are generally seen in service provider environments using Multi-Protocol Label Switching (MPLS) to provide layer 3 connectivity to different customers over the same router's network, in such a way that no customer can see any other customer's network. In this recipe, we will see how to get to the details of what is running over the network, and the applications that can slow it down. The term blade server refers to a server enclosure, which is a chassis of server shelves on the front and LAN switches on the back. There are several different acronyms for it; for example, IBM calls them blade center and HP calls them blade system. Getting ready When you get into a new network, the first thing to do is connect Wireshark to sniff what is running over the applications and protocols. Make sure you follow these points: When you are required to monitor a server, port-mirror it and see what is running on its connection to the network. When you are required to monitor a remote office, port-mirror the router port that connects you to the WAN connection. Then, check what is running over it. When you are required to monitor a slow connection to the internet, port-mirror it to see what is going on there. In this recipe, we will see how to use the Wireshark tools for analyzing what is running and what can cause problems. How to do it... 
For analyzing, follow these steps:

Connect Wireshark using one of the options mentioned in the previous section. You can use the following tools:
Navigate to Statistics | Protocol Hierarchy to view the protocols that run over the network and the percentage of the total traffic
Navigate to Statistics | Conversations to see who is talking and what protocols are used

In the Protocol Hierarchy feature, you will get a window that will help you analyze who is talking over the network. It is shown in the following screenshot:

In the preceding screenshot, you can see the protocol distribution:
Ethernet: IP, Logical-Link Control (LLC), and configuration test protocol (loopback)
Internet Protocol Version 4: UDP, TCP, Protocol Independent Multicast (PIM), Internet Group Management Protocol (IGMP), and Generic Routing Encapsulation (GRE)

If you click on the + sign, all the underlying protocols will be shown. To see a specific protocol's throughput, click down to the protocol as shown in the following screenshot. You will see the application's average throughput during the capture (HTTP in this example): Clicking on the + sign to the left of HTTP will open a list of protocols that run over HTTP (XML, MIME, JavaScript, and more) and their average throughput during the capture period.

There's more...

In some cases (especially when you need to prepare management reports), you are required to provide a graphical picture of the network statistics. There are various tools available for this, for example:
Etherape (for Linux): http://etherape.sourceforge.net/
Compass (for Windows): http://download.cnet.com/Compass-Free/3000-2085_4-75447541.html?tag=mncol;1

Analyzing Microsoft Terminal Server and Citrix communications problems

Microsoft Terminal Server, which uses the Remote Desktop Protocol (RDP), and Citrix Metaframe, which uses the Independent Computing Architecture (ICA) protocol, are widely used for local and remote connectivity for PCs and thin clients. The important thing to remember about these types of applications is that they transfer screen changes over the network. If there are only a few changes, they will require low bandwidth. If there are many changes, they will require high bandwidth. Another thing is that the traffic in these applications is entirely asymmetric. Downstream traffic takes from tens of Kbps up to several Mbps, while the upstream traffic will be at most several Kbps. When working with these applications, don't forget to design your network according to this. In this recipe, we will see some typical problems of these applications and how to locate them. For the convenience of writing, we will refer to Microsoft Terminal Server, and every time we write Microsoft Terminal Server, we will refer to all applications in this category, for example, Citrix Metaframe.

Getting ready

When suspecting slow performance with Microsoft Terminal Server, first check with the user what the problem is. Then, connect Wireshark to the network with port-mirror to the complaining client or to the server.

How to do it...

For locating a problem when Microsoft Terminal Server is involved, start by going to the users and asking questions. Follow these steps:

When users complain about a slow network, ask them a simple question: do they see the slowness in the data presented on the screen or when they switch between windows? If they say that the switch between windows is very fast, it is not a Microsoft Terminal Server problem.
Microsoft Terminal Server problems will cause slow window changes, picture freezes, slow scrolling of graphical documents, and so on. If they say that they are trying to generate a report (when the software is running over Microsoft Terminal Server), but the report is generated after a long period of time, this is a database problem and not Microsoft Terminal Server or Citrix. When a user works with Microsoft Terminal Server over a high-delay communication line and types very fast, they might experience delays with the characters. This is because Microsoft Terminal Server is transferring window changes, and with high delays, these windows changes will be transferred slowly. When measuring the communication line with Wireshark: Use I/O graphs to monitor the line Use filters to monitor the upstream and the downstream directions Configure bits per second on the y axis You will get the following screenshot: In the preceding screenshot, you can see a typical traffic pattern with high downstream and very low upstream traffic. Notice that the Y-Axis is configured to Bits/Tick. In the time between 485s and 500s, you see that the throughput got to the maximum. This is when applications will slow down and users will start to feel screen freezes, menus that move very slowly, and so on. When a Citrix ICA client connects to a presentation server, it uses TCP ports 2598 or 1494. When monitoring Microsoft Terminal Server servers, don't forget that the clients access the server with Microsoft Terminal Server and the servers access the application with another client that is installed on the server. The performance problem can come from Microsoft Terminal Server or the application. If the problem is an Microsoft Terminal Server problem, it is necessary to figure out whether it is a network problem or a system problem: Check the network with Wireshark to see if there are any loads. Loads such as the one shown in the previous screenshot can be solved by simply increasing the communication lines. Check the server's performance. Applications like Microsoft Terminal Server are mostly memory consuming, so check mostly for memory (RAM) issues. How it works... Microsoft Terminal Server, Citrix Metaframe, and applications simply transfer window changes over the network. From your client (PC with software client or thin client), you connect to the terminal server; and the terminal server, runs various clients that are used to connect from it to other servers. In the following screenshot, you can see the principle of terminal server operation: There's more... From the terminal server vendors, you will hear that their applications improve two things. They will say that it improves manageability of clients because you don't have to manage PCs and software for every user; you simply install everything on the server, and if something fails, you fix it on the server. They will also say that traffic over the network will be reduced. Well, I will not get into the first argument. This is not our subject, but I strongly reject the second one. When working with a terminal client, your traffic entirely depends on what you are doing: When working with text/character-based applications, for example, some Enterprise Resource Planning (ERP) screens, you type in and read data. When working with the terminal client, you will connect to the terminal server that will connect to the database server. 
Depending on the database application you are working with, the terminal server can improve performance significantly or does not improve it at all. We will discuss this in the database section. Here, you can expect a load of tens to hundreds of Kbps. If you are working with regular office documents such as Word, PowerPoint, and so on, it entirely depends on what you are doing. Working with a simple Word document will require tens to hundreds of Kbps. Working with PowerPoint will require hundreds of Kbps to several Mbps, and when you present the PowerPoint file with full screen (the F5 function), the throughput can jump up to 8 to 10 Mbps. Browsing the internet will take between hundreds of Kbps and several Mbps, depending on what you will do over it. High resolution movies over terminal server to the internet-well, just don't do it. Before you implement any terminal environment, test it. I once had a software house that wanted their logo (at the top-right corner of the user window) to be very clear and striking. They refreshed it 10 times a second, which caused the 2 Mbps communication line to be blocked. You never know what you don't test! Analyzing the database traffic and common problems Some of you may wonder why we have this section here. After all, databases are considered to be a completely different branch in the IT environment. There are databases and applications on one side and the network and infrastructure on the other side. It is correct since we are not supposed to debug databases; there are DBAs for this. But through the information that runs over the network, we can see some issues that can help the DBAs with solving the relevant problems. In most of the cases, the IT staff will come to us first because people blame the network for everything. We will have to make sure that the problems are not coming from the network and that's it. In a minority of the cases, we will see some details on the capture file that can help the DBAs with what they are doing. Getting ready When the IT team comes to us complaining about the slow network, there are some things to do just to verify that it is not the case. Follow the instructions in the following section to make sure you avoid the slow network issue. How to do it... In the case of database problems, follow these steps: When you get complaints about the slow network responses, start asking these questions: Is the problem local or global? Does it occur only in the remote offices or also in the center? When the problem occurs in the entire network, it is not a WAN bandwidth issue. Does it happen the same for all clients? If not, there might be a specific problem that happens only with some users because only those users are running a specific application that causes the problem. Is the communication line between the clients and the server loaded? What is the application that loads them? Do all applications work slowly, or is it only the application that works with the specific database? Maybe some PCs are old and tired, or is it a server that runs out of resources? When we are done with the questionnaire, let's start our work: Open Wireshark and start capturing packets. You can configure port-mirror to a specific PC, the server, a VLAN, or a router that connects to a remote office in which you have the clients. Look at the TCP events (expert info). Do they happen on the entire communication link, on specific IP address/addresses, or on specific TCP port number/numbers? 
This will help you isolate the problem and check whether it is on a specific link, server, or application. When measuring traffic on a connection to the internet, you will get many retransmissions and duplicate ACKs to websites, mail servers, and so on. This is the internet. In an organization, you should expect 0.1 to 0.5 percent retransmissions. When connecting to the internet, you can expect much higher numbers. But there are some network issues that can influence database behavior. In the following example, we see the behavior of a client that works with the server over a communication line with a round-trip delay of 35 to 40 ms. We are looking at TCP stream number 8 (1), and the connection started with TCP SYN/SYN-ACK/ACK. I've set this as a reference (2). We can see that the entire connection took 371 packets (3): The connection continues, and we can see time intervals of around 35 ms between DB requests and responses: Since we have 371 packets travelling back and forth, 371 x 35 ms gives us around 13 seconds. Add to this some retransmissions that might happen and some inefficiencies, and this leads to a user waiting 10 to 15 seconds or more for a database query. In this case, you should consult the DBA on how to significantly reduce the number of packets that run over the network, or you can move to another way of access, for example, terminal server or web access. Another problem that can happen is a software issue that is reflected in the capture file. If you have a look at the following screenshot, you will see that there are five retransmissions, and then a new connection is opened from the client side. It looks like a TCP problem, but it occurs only in a specific window in the software. It is simply a software procedure that stopped processing, and this stopped the TCP from responding to the client:

How it works...

Well, how databases work has always been a bit of a miracle to me. Our task is to find out how they influence the network, and this is what we've learned in this section.

There's more...

When you right-click on one of the packets in the client-to-server database session, a window with the conversation will open. It can be helpful to the DBA to see what is running over the network. When you are facing delay problems, for example, when working over cellular lines over the internet or over international connections, client-to-server database access will not always be efficient enough. You might need to move to web or terminal access to the database. An important issue is how the database works. If the client is accessing the database server, and the database server is using files shared from another server, it can be that the client-server part works great but the problems come from the database server's access to the shared files on the file server. Make sure that you know all these dependencies before starting with your tests. And most importantly, make sure you have very professional DBAs among your friends. One day, you will need them!

Analyzing SNMP

SNMP is a well-known protocol that is used to monitor and manage different types of devices in a network by collecting data and statistics at regular intervals. Beyond just monitoring, it can also be used to configure and modify settings with appropriate authorization given to SNMP servers. Devices that typically support SNMP are switches, routers, servers, workstations, hosts, VoIP phones, and many more. It is important to know that there are three versions of SNMP: SNMPv1, SNMPv2c, and SNMPv3.
Versions v2c and v3, which came later, offer better performance and security. SNMP consists of three components:

The device being managed (referred to as the managed device).
The SNMP agent. This is a piece of software running on the managed device that collects data from the device and stores it in a database, referred to as the Management Information Base (MIB). As configured, the SNMP agent answers the server's queries (on UDP port 161) at regular polling intervals, and also sends events and traps (typically to UDP port 162 on the server).
The SNMP server, also called the Network Management Server (NMS). This is a server that communicates with all the agents in the network to collect the exported data and build a central repository. The SNMP server gives the IT staff managing the network remote access to monitor, manage, and configure devices.

It is very important to be aware that some of the MIBs implemented in a device could be vendor-specific. Almost all vendors publish the MIBs implemented in their devices.

Getting ready

Generally, the complaints we get from the network management team are about not getting any statistics or traps from a device(s) for a specific interval, or having completely no visibility of a device(s). Follow the instructions in the following section to analyze and troubleshoot these issues.

How to do it...

In the case of SNMP problems, follow these steps. When you get complaints about SNMP, start asking these questions:

Is this a new managed device that has been brought into the network recently? In other words, did SNMP on the device ever work properly? If this is a new device, talk to the relevant device administrator and/or check the SNMP-related configurations, such as community strings. If the SNMP configuration looks correct, make sure that the NMS's configured IP address is correct and also check the relevant password credentials. If SNMPv3 is in use, which supports encryption, make sure to check encryption-related settings like transport methods. If the settings and configuration look valid and correct, make sure the managed devices have connectivity with the NMS, which can be verified with simple ICMP pings.

If it is a managed device that has been working properly and didn't report any statistics or alerts for a specific duration: Did the device in discussion have any issues in the control plane or management plane that stopped it from exporting SNMP statistics? Please be aware that for most devices in the network, SNMP is a least-priority protocol, which means that if a device has a higher-priority process to work on, it will hold the SNMP requests and responses in the queue. Is the issue experienced only for a specific device or for multiple devices in the network? Did the network (between the managed device and the NMS) experience any issue? For example, during a layer 2 spanning-tree convergence, traffic loss could occur between the managed device and the SNMP server, through which the NMS would lose visibility of the managed devices.

As you can see in the following picture, an SNMP server with IP address 172.18.254.139 is performing an SNMP walk with a sequence of GET-NEXT-REQUEST messages to a workstation with IP address 10.81.64.22, which in turn responds with GET-RESPONSE. For simplicity, the Wireshark filter used for these captures is snmp. The workstation is enabled with SNMP v2c, with the community string public.
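If you want to generate this kind of walk yourself and capture it for comparison, the Net-SNMP command-line tools can do it; the command below is a sketch that assumes those tools are installed and reuses the addresses and community string from the example above:

snmpwalk -v2c -c public 10.81.64.22 1.3.6.1.2.1.2

Running it while Wireshark captures with the snmp display filter should show the same GET-NEXT-REQUEST/GET-RESPONSE exchange described here.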
Let's discuss some of the commonly seen failure scenarios.

Polling a managed device with the wrong SNMP version

As I mentioned earlier, the workstation is enabled with v2c, but when the NMS polls the device with the wrong SNMP version, it doesn't get any response. So, it is very important to make sure that the managed devices are polled with the correct SNMP version.

Polling a managed device with a wrong MIB object ID (OID)

In the following example, the NMS is polling the managed device to get the number of bytes sent out on its interfaces. The MIB OID for the byte count is .1.3.6.1.2.1.2.2.1.16, which is ifOutOctets. The managed device in discussion has two interfaces, mapped to OIDs .1.3.6.1.2.1.2.2.1.16.1 and .1.3.6.1.2.1.2.2.1.16.2. When the NMS polls the device to check the statistics for a third interface (which is not present), it returns a noSuchInstance error.

How it works...

As you have learned in the earlier sections, SNMP is a very simple and straightforward protocol, and all the related information on standards and MIB OIDs is readily available on the internet.

There's more...

Here are some websites with good information about SNMP and MIB OIDs:
Microsoft TechNet SNMP: https://technet.microsoft.com/en-us/library/cc776379(v=ws.10).aspx
Cisco IOS MIB locator: http://mibs.cloudapps.cisco.com/ITDIT/MIBS/servlet/index

We have learned to perform enterprise-level network analysis with real-world examples like analyzing Microsoft Terminal Server and Citrix communications problems. Get to know more about security and network forensics from our book Network Analysis using Wireshark 2 Cookbook - Second Edition.

What's new in Wireshark 2.6?
Top 5 penetration testing tools for ethical hackers
5 pen testing rules of engagement: What to consider while performing Penetration testing