Tech Guides

5 common misconceptions about DevOps

Hari Vignesh
08 Aug 2017
4 min read
DevOps is a transformative operational concept designed to help development and production teams coordinate operations more effectively. In theory, DevOps is focused on cultural changes that stimulate collaboration and efficiency, but the focus often ends up being placed on everyday tasks, distracting organizations from the core principles - and values - that DevOps is built around. This has led many technology professionals to develop misconceptions about DevOps, because they have been part of deployments, or know people involved in DevOps plans, that strayed from the core principles of the movement. Let's discuss a few of these misconceptions.

We need to employ 'DevOps'

DevOps is not a job title or a specific role. Your organization probably already has senior systems people and senior developers who have many of the traits needed to work in the way that DevOps promotes. With a bit of effort and help from outside consultants, mailing lists, or conferences, you might easily be able to restructure your business around the principles you propose without employing new people - or losing old ones. Again, there is no such thing as a DevOps person; it is not a job title. Feel free to advertise for people who work with a DevOps mentality, but there are no DevOps job titles. Often, good people to consider as a bridge between teams are generalists, architects, and senior systems administrators and developers. Many companies in the past decade have employed a number of specialists - a DNS administrator is not unheard of. You can still have these roles, but you'll need some generalists who have a good background in multiple technologies. They should be able to champion the values of simple systems over complex ones, and begin establishing automation and cooperation between teams.

Adopting tools makes you DevOps

Some who have recently caught wind of the DevOps movement believe they can instantly achieve this nirvana of software delivery simply by following a checklist of tools to implement within their team. Their assumption is that if they purchase and implement a configuration management tool like Chef, a monitoring service like Librato, or an incident management platform like VictorOps, then they've achieved DevOps. But that's not quite true. DevOps requires a cultural shift beyond simply implementing a new lineup of tools. Each department, technical or not, needs to understand the cultural shift behind DevOps. It's one that emphasizes empathy and better collaboration. It's more about people.

DevOps emphasizes continuous change

There's no way around it - you will need to deal with more change and release tasks when integrating DevOps principles into your operations; the focus is placed heavily on accelerating deployment through development and operations integration, after all. This perception comes out of DevOps' initial popularity among web app developers. In practice, though, most businesses will not face change that frequent, and they do not need to worry about continuous change deployment just because they are adopting DevOps.

DevOps does not equal "developers managing production"

DevOps means development and operations teams working together collaboratively to put the operations requirements for stability, reliability, and performance into the development practices, while at the same time bringing development into the management of the production environment (e.g. by putting developers on call, or by leveraging their development skills to help automate key processes). It doesn't mean a return to the laissez-faire "anything goes" model, where developers have unfettered access to the production environment 24/7 and can change things as and when they like.

DevOps eliminates traditional IT roles

If, in your DevOps environment, your developers suddenly need to be good system admins, change managers, and database analysts, something went wrong. Treating DevOps as a movement that eliminates traditional IT roles puts too much strain on workers. The goal is to break down collaboration barriers, not to ask your developers to do everything. Specialized skills play a key role in supporting effective operations, and traditional roles remain valuable in DevOps.

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

Why use MobX for State Management?

Amarabha Banerjee
07 Aug 2018
3 min read
The most downloaded front-end framework in 2017 was React.js, the reason being its component-driven architecture and the ease of implementing reactive programming principles in applications. However, one of the key challenges developers face is state management for large-scale applications. The setState facility in React can be a workaround for small applications, but as the complexity of an application grows, so does the importance of managing its state. That's where MobX solves a lot of problems faced by React developers. It's easy to use and lightweight.

MobX features a robust, spreadsheet-like architecture (diagram source: MobX). MobX treats any change to the state as a derivation, as in a spreadsheet, where you can apply a formula to a particular column and the column values change accordingly. The same thing happens in MobX: a change in state is reflected as a derivation, reactions are generated based on that derivation, and these trickle down the component tree. Each change is reflected automatically across all the components. This is intuitive and removes a lot of the overhead processes that plague Redux.

Reactive programming is at the core of MobX. It uses concepts similar to RxJS. Values can be made observable so that when a value changes, anything that uses that value updates automatically. This is a pretty simple concept to grasp, and it can make working with data much more intuitive. MobX has the potential to become not just a state management tool but, according to its creator, a 'data flow tool'. Code can be expressed more concisely with the new JavaScript decorator syntax, although create-react-app doesn't support this syntax out of the box. MobX also works with other front-end frameworks like Angular, which improves its interoperability. An additional advantage is not having to go through the laborious setup, installation, and multi-component update process of Redux.

MobX is not, however, without its limitations. Testing is still a matter of concern: it's not immediately obvious how components should be broken up for testing. It is a bit easier without the decorator syntax to separate the logic from the view; however, the documentation doesn't touch on how this works when using the decorator syntax.

The advantages and ease of use of MobX outweigh the negatives presently. As long as the React component structure and the reactive programming paradigm remain the foundation of modern web development, MobX usage will grow, and we might even see other similar libraries cropping up to tackle the issue of state management.

Is Future-Fetcher/Context API replacing Redux?
Why do React developers love Redux for state management?
Creating Reusable Generic Modals in React and Redux

5 reasons to choose AWS IoT Core for your next IoT project

Savia Lobo
19 Apr 2018
5 min read
Many cloud service providers have been marching towards adopting IoT (Internet of Things) services to attract more customers. This league includes top cloud merchants such as AWS, Microsoft Azure, IBM, and, more recently, Google. Among these, Amazon Web Services has been the most popular. Its AWS IoT Core service is a fully managed cloud platform that provides IoT devices with an easy and secure connection to interact with cloud applications and other IoT devices.

AWS IoT Core can keep track of billions of IoT devices and the messages travelling to and from them. It processes and routes those messages to AWS endpoints and to other devices reliably and securely. This means that, with the help of AWS IoT Core, you can keep track of all your devices and have real-time communication with them. Undoubtedly, there is a lot of competition among cloud platforms to host IoT services. Users are bound to a specific cloud platform for a varied set of reasons, such as a yearly subscription, by choice, or otherwise. Here are 5 reasons to choose AWS IoT Core for your IoT projects.

Build applications on the platform of your choice with the AWS IoT Core Device SDK

The AWS IoT Core Device SDK is the primary mode of connection between your application and AWS IoT Core. It uses the MQTT, HTTP, or WebSockets protocols to connect and exchange messages with the service. The SDK supports C, Arduino, and JavaScript, and also provides mobile SDKs for Android and iOS along with SDKs for Embedded C, Python, and more. It includes open-source libraries, developer guides with samples, and porting guides. With these features, developers can build novel IoT products and solutions on the hardware platform of their choice. The AWS IoT Summit 2018, held recently in Sydney, shed light on cloud technologies and how they can help businesses lower costs, improve efficiency, and innovate at scale. It had sessions dedicated to IoT (Intelligence of Things: IoT, AWS DeepLens, and Amazon SageMaker).

Handle the underlying infrastructure and protocol support with the Device Gateway

The Device Gateway acts as the entry point for IoT devices connecting to Amazon Web Services (AWS). It handles multiple protocols, which ensures a secure and effective connection between the IoT devices and IoT Core. The list of protocols includes MQTT, WebSockets, and HTTP 1.1. Also, with the Device Gateway, one does not have to worry about the infrastructure, as it automatically manages and scales a huge number of devices with ease.

Authentication and authorization is now easy with AWS methods of authentication

AWS IoT Core supports SigV4 (an AWS method of authentication), X.509 certificate-based authentication, and customer-created token-based authentication. Users can create, deploy, and manage certificates and policies for their devices from the console or using the API. AWS IoT Core also supports connections from users' mobile apps using Amazon Cognito, which creates a unique ID for app users and can be used to retrieve temporary, limited-privilege AWS credentials. AWS IoT Core can also issue temporary AWS credentials after a device has authenticated with an X.509 certificate, so that the device can more easily access other AWS services such as DynamoDB or S3.

Determine a device's current state automatically with the Device Shadow

A device shadow is a JSON document that stores and retrieves the current state of a device. It provides persistent representations such as the last reported state and the desired future state of a device, even when the device is offline. With the Device Shadow, one can easily build applications that interact with devices through the REST APIs it provides. It lets applications set a device's desired future state without having to request the device's starting state. AWS IoT Core keeps track of the difference between the desired state and the last reported state, and can command the device to make up the difference.
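To make the shadow workflow a little more concrete, here is a minimal sketch in Python using boto3's IoT data-plane client. It is an illustration under assumed details (the region, thing name, and desired state are hypothetical, and AWS credentials are assumed to be configured), not code taken from this article.

```python
# A minimal sketch: reading and updating a device shadow with boto3's
# IoT data-plane client. The region, thing name, and desired state are
# hypothetical; the thing is assumed to be registered with AWS IoT Core.
import json

import boto3

iot_data = boto3.client("iot-data", region_name="us-east-1")

# Publish a new desired state to the shadow document.
iot_data.update_thing_shadow(
    thingName="greenhouse-sensor-1",  # hypothetical thing name
    payload=json.dumps({"state": {"desired": {"fan": "on"}}}),
)

# Read the shadow back. AWS IoT Core tracks desired vs. reported state
# and exposes the delta the device still has to make up.
response = iot_data.get_thing_shadow(thingName="greenhouse-sensor-1")
shadow = json.loads(response["payload"].read())
print(shadow["state"])
```

The device itself would typically connect over MQTT through one of the Device SDKs and report its state back, shrinking the delta that the shadow exposes.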
Route messages both internally and externally using the AWS Rules Engine

The Rules Engine helps you build IoT applications without having to manage any infrastructure. Based on the rules you define, the Rules Engine evaluates all the incoming messages within AWS IoT Core, transforms them, and delivers them to other devices or cloud services. Rules are authored in the management console using a SQL-like syntax. The Rules Engine can route messages to AWS endpoints such as AWS Lambda, Amazon Kinesis, Amazon S3, Amazon Machine Learning, Amazon DynamoDB, Amazon CloudWatch, and Amazon Elasticsearch Service with built-in Kibana integration. It can also reach external endpoints using AWS Lambda, Amazon Kinesis, and Amazon Simple Notification Service (SNS).

There are many other reasons to choose AWS IoT Core for your projects. However, it is purely a matter of choice, as many teams are already using or bound to other cloud services. Those who haven't yet started may choose AWS for the plethora of other cloud services it offers, which includes AWS IoT Core.

How can Artificial Intelligence support your Big Data architecture?

Natasha Mathur
26 Sep 2018
6 min read
Getting a big data project in place is a tough challenge, but making it deliver results is even harder. That's where artificial intelligence comes in. By integrating artificial intelligence into your big data architecture, you'll be able to better manage and analyze data in a way that provides a substantial impact on your organization. With big data getting even bigger over the next couple of years, AI won't simply be an optional extra; it will be essential.

According to IDC, the accumulated volume of big data will increase from 4.4 zettabytes to roughly 44 zettabytes, or 44 trillion GB, by 2020. Only by using artificial intelligence will you really be able to properly leverage such huge quantities of data. The International Data Corporation (IDC) also predicted a need for 181,000 people with deep analytical, data management, and interpretation skills this year. AI comes to the rescue again: it can ultimately compensate for the lack of analytical resources today with the power of machine learning, which enables automation.

Now that we know why big data needs AI, let's have a look at how AI helps big data. For that, you first need to understand the big data architecture. While it's clear that artificial intelligence is an important development in the context of big data, what are the specific ways it can support and augment your big data architecture? It can, in fact, help you across every component in the architecture. That's good news for anyone working with big data, and good for organizations that depend on it for growth as well.

Artificial Intelligence in Big Data architecture

In a big data architecture, data is collected from different data sources and then moves forward to the other layers.

Artificial Intelligence in data sources

Using machine learning, the process of structuring data becomes easier, thereby making it easier for organizations to store and analyze their data. Keep in mind that large amounts of data from various sources can sometimes make data analysis even harder. This is because we now have access to heterogeneous sources of data that add different dimensions and attributes to the data, which further slows down the entire process of collecting data. To make things quicker and more accurate, it's important to consider only the most important dimensions. This process is called data dimensionality reduction (DDR). With DDR, it is important to note that the model should always convey the same information without any loss of insight or intelligence.

Principal Component Analysis, or PCA, is a useful machine learning method for dimensionality reduction. PCA performs feature extraction, meaning it combines all the input variables from the data, then drops the "least important" variables while retaining the most valuable parts of all of the variables. Each of the "new" variables after PCA is independent of the others.
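To illustrate the PCA step, here is a minimal, hypothetical sketch with scikit-learn; the random matrix simply stands in for whatever high-dimensional records your data sources actually produce, and nothing in it is taken from the article itself.

```python
# A minimal, hypothetical sketch of dimensionality reduction with PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))   # 500 records, 20 raw attributes

# Standardize first so no single attribute dominates the components.
X_scaled = StandardScaler().fit_transform(X)

# Keep just enough components to retain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                # fewer columns, same number of rows
print(pca.explained_variance_ratio_)  # variance captured by each component
```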
Artificial Intelligence in data storage

Once data is collected from the data source, it then needs to be stored. AI can allow you to automate storage with machine learning, which also makes structuring the data easier. Machine learning models automatically learn to recognize patterns, regularities, and interdependencies in unstructured data and then adapt, dynamically and independently, to new situations. K-means clustering is one of the most popular unsupervised algorithms for data clustering, used when there is large-scale data without any defined categories or groups. The K-means clustering algorithm performs pre-clustering or classification of data into larger categories. Unstructured data gets stored as binary objects, annotations are stored in NoSQL databases, and raw data is ingested into data lakes. All this data acts as input to machine learning models. This approach is great as it automates the refining of large-scale data. As the data keeps coming, the machine learning model keeps storing it depending on what category it fits.

Artificial Intelligence in data analysis

After the data storage layer comes the data analysis part. There are numerous machine learning algorithms that help with effective and quick data analysis in a big data architecture. One algorithm that can really step up the game when it comes to data analysis is Bayes' theorem. Bayes' theorem determines the probability of an event based on prior knowledge of conditions that might be related to the event; it uses stored data to 'predict' the future, which makes it a wonderful fit for big data. The more data you feed to a Bayes algorithm, the more accurate its predictive results become. Another machine learning technique that is great for data analysis is the decision tree. Decision trees help you reach a particular decision by presenting all possible options and their probability of occurrence, and they are extremely easy to understand and interpret. LASSO (least absolute shrinkage and selection operator) is another algorithm that helps with data analysis. LASSO is a regression analysis method capable of performing both variable selection and regularization, which enhances the prediction accuracy and interpretability of the resulting model. LASSO regression analysis can be used to determine which of your predictors are most important. Once the analysis is done, the results are presented to other users or stakeholders. This is where the data utilization part comes into play: data helps to inform decision making at various levels and in different departments within an organization.

Artificial intelligence takes big data to the next level

Heaps of data get generated every day by organizations all across the globe. Given such huge amounts of data, getting the right insights and results can go beyond the reach of current technologies. Artificial intelligence takes the big data process to another level, making it easier to manage and analyze a complex array of data sources. This doesn't mean that humans will instantly lose their jobs; it simply means we can put machines to work on things that even the smartest and most hardworking humans would be incapable of. There's a saying that goes "big data is for machines; small data is for people", and it couldn't be any truer.

7 AI tools mobile developers need to know
How AI is going to transform the Data Center
How Serverless computing is making AI development easier

6 reasons to choose MySQL 8 for designing database solutions

Amey Varangaonkar
08 May 2018
4 min read
Whether you are a standalone developer or an enterprise consultant, you would obviously choose a database that provides good benefits and results when compared to other related products. MySQL 8 provides numerous advantages as a first choice in this competitive market. It has various powerful features available that make it a comprehensive database. Today we will go through the benefits of using MySQL as your preferred database solution.

Note: The following excerpt is taken from the book MySQL 8 Administrator's Guide, co-authored by Chintan Mehta, Ankit Bhavsar, Hetal Oza, and Subhash Shah. This book presents step-by-step techniques on managing, monitoring, and securing the MySQL database without any hassle.

Security

The first thing that comes to mind is securing data, because nowadays data has become precious and can impact business continuity if legal obligations are not met; in fact, it can be so bad that it can close down your business in no time. MySQL is a highly secure and reliable database management system used by well-known enterprises such as Facebook, Twitter, and Wikipedia. It provides a good security layer that protects sensitive information from intruders. MySQL gives you access control management, so granting and revoking required access for a user is easy. Roles can also be defined with a list of permissions that can be granted to or revoked from users. All user passwords are stored in an encrypted format using plugin-specific algorithms.
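As a rough illustration of the access control management mentioned above, here is a hedged sketch that drives MySQL 8 roles from Python using the mysql-connector-python package; the connection details, role, schema, and user names are all invented for the example, and the same SQL statements can be run directly in the mysql client.

```python
# A hedged sketch (not from the book excerpt) of MySQL 8 role-based access
# control. All names and credentials below are hypothetical.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="admin_pw")
cursor = conn.cursor()

# Create a read-only role, grant it SELECT on one schema, then create an
# application user and attach the role to it.
cursor.execute("CREATE ROLE IF NOT EXISTS 'app_read'")
cursor.execute("GRANT SELECT ON appdb.* TO 'app_read'")
cursor.execute("CREATE USER IF NOT EXISTS 'report_user'@'%' IDENTIFIED BY 'report_pw'")
cursor.execute("GRANT 'app_read' TO 'report_user'@'%'")

# Revoking access later is a single statement:
# cursor.execute("REVOKE 'app_read' FROM 'report_user'@'%'")

cursor.close()
conn.close()
```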
Scalability

Day by day, the mountain of data is growing because of the extensive use of technology in numerous ways. Because of this, load averages are going through the roof. In many cases, it is impossible to predict that data will stay within some limit or that the number of users will not go out of bounds. A scalable database is the preferable solution, so that we can meet unexpected demands to scale at any point. MySQL is a rewarding database system for its scalability: it can scale horizontally and vertically, and spreading the database and the load of application queries across multiple MySQL servers is quite feasible. It is pretty easy to add horsepower to a MySQL cluster to handle the load.

An open source relational database management system

MySQL is an open source database management system, which makes debugging, upgrading, and enhancing its functionality fast and easy. You can view the source, make changes accordingly, and use it in your own way. You can also distribute an extended version of MySQL, but you will need a license for this.

High performance

MySQL gives high-speed transaction processing with optimal speed. It can cache results, which boosts read performance. Replication and clustering make the system scalable for more concurrency and manage heavy workloads. Database indexes also accelerate the performance of SELECT query statements for substantial amounts of data. To enhance performance further, MySQL 8 has included indexes in the performance schema to speed up data retrieval.

High availability

Today, in the world of competitive marketing, an organization's key goal is to keep its systems up and running. Any failure or downtime directly impacts business and revenue; hence, high availability is a factor that cannot be overlooked. MySQL is quite reliable and offers constant availability using cluster and replication configurations. Cluster servers instantly handle failures and manage the failover to keep your system available almost all the time. If one server goes down, requests are redirected to another node, which performs the requested operation.

Cross-platform capabilities

MySQL provides cross-platform flexibility and can run on various platforms such as Windows, Linux, Solaris, OS/2, and so on. It has great API support for all major languages, which makes it very easy to integrate with languages such as PHP, C++, Perl, Python, Java, and so on. It is also part of the Linux, Apache, MySQL, PHP (LAMP) stack that is used worldwide for web applications.

That's it then! We discussed a few important reasons why MySQL is one of the most popular relational databases in the world, widely adopted across many enterprises. If you want to learn more about MySQL's administrative features, make sure to check out the book MySQL 8 Administrator's Guide today!

12 most common MySQL errors you should be aware of
Top 10 MySQL 8 performance benchmarking aspects to know

Artificial General Intelligence, did it gain traction in research in 2018?

Prasad Ramesh
21 Feb 2019
4 min read
In 2017, we predicted that artificial general intelligence would gain traction in research and that certain areas would aid progress towards AGI systems. The prediction was made as part of a set of AI predictions in an article titled 18 striking AI Trends to watch in 2018. Let's see how 2018 went for AGI research.

Artificial general intelligence, or AGI, is an area of AI in which efforts are made to make machines exhibit intelligence closer to the complex nature of human intelligence. Such a system could possibly, in theory, perform any task that a human can, with the ability to learn as it progresses through tasks and collects data/sensory input. Human intelligence also involves learning a skill and applying it to other areas. For example, if a human learns Dota 2, they can apply the same learned experience to other similar strategy games; only the UI and the characters in the game will be different. A machine cannot do this. AI systems are trained for a specific area, and the skills cannot really be transferred to another task with complete efficiency or without the risk of incurring technical debt. That is, a machine cannot generalize skills as a human can.

Come 2018, we saw DeepMind's AlphaZero, something that is at least beginning to show what an idea of AGI could look like. But even this is not really AGI: an AlphaZero-like system may excel at playing a variety of games, or even understand the rules of novel games, but it cannot deal with the real world and its challenges.

Some groundwork and basic ideas for AGI were set out in a paper by the US Air Force. In the paper, Dr. Paul Yaworsky says that artificial general intelligence is an effort to cover the gap between lower- and higher-level work in AI; in other words, to try to make sense of the abstract nature of intelligence. The paper also presents an organized hierarchical model for intelligence that takes the external world into account.

One of Packt's authors, Sudharsan Ravichandiran, thinks that: "Great things are happening around RL research each and every day. Deep Meta reinforcement learning will be the future of AI where we will be so close to achieving artificial general intelligence (AGI). Instead of creating different models to perform different tasks, with AGI, a single model can master a wide variety of tasks and mimics the human intelligence."

Honda came up with a program called Curious Minded Machine in association with MIT, the University of Pennsylvania, and the University of Washington. The idea sounds simple at first: build a model of how children 'learn to learn'. But something like this, which children do instinctively, is a very complex task for a machine with artificial intelligence. The teams will showcase their work in the fields they are working on at the end of the three years since the program's inception.

There was another effort by SingularityNET and Mindfire to explore AI and "cracking the brain code", aiming to better understand the functioning of the human brain. Together these two companies will focus on three key areas: talent, AI services, and AI education. Mindfire Mission 2 will take place in early 2019 in Switzerland.

These were the areas of work we saw on AGI in 2018. There were only small steps taken in this research direction, and nothing noteworthy gained mainstream traction. On average, experts think AGI is at least 100 more years away from being a reality, as per Martin Ford's interviews with machine learning experts for his best-selling book, 'Architects of Intelligence'.
OpenAI released a new language model called GPT-2 in February 2019. With just a one-line prompt, the model can generate whole articles, and the results are good enough to pass as something written by a human. This does not mean that the machine actually understands human language; it is merely generating sentences by associating words. This development has triggered passionate discussions within the community, not just on the technical merits of the findings, but also on the dangers and implications of applying such research to the larger society.

Get ready to see more tangible research in AGI in the next few decades.

The US Air Force lays groundwork towards artificial general intelligence based on hierarchical model of intelligence
Facebook's artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?
Unity and Deepmind partner to develop Virtual worlds for advancing Artificial Intelligence

The Atomic Game Engine: How to Become a Contributor

RaheelHassim
05 Dec 2016
6 min read
What is the Atomic Game Engine?

The Atomic Game Engine is a powerful multiplatform game development tool that can be used for both 2D and 3D content. It is layered on top of Urho3D, an open source game development tool, and also makes use of an extensive list of third-party libraries including Duktape, Node.js, Poco, libcurl, and many others.

What makes it great?

It supports many platforms, such as Windows, OS X, Linux, Android, iOS, and WebGL. It also has a flexible scripting approach, and users can choose to code in C#, JavaScript, TypeScript, or C++. There is an extensive library of example games available to all users, which show off different aspects and qualities of the engine. (Image taken from: http://atomicgameengine.com/blog/announcement-2/)

What makes it even greater for developers?

Atomic has recently announced that it is now under the permissive MIT license.

Errr great… What exactly does that mean?

This means that Atomic is now completely open source and anyone can use it, modify it, publish it, and even sell it, as long as the copyright notice remains on all substantial portions of the software. Basically, just don't remove the text in the picture below from any of the scripts and it should be fine. Here's what the MIT license in the Atomic Game Engine looks like: (Image: Atomic Game Engine MIT License)

Why should I spend time and effort contributing to the Atomic Game Engine?

The non-restrictive MIT license makes it easy for developers to freely contribute to the engine and get creative without the fear of breaking any laws. The Atomic Game Engine acknowledges all of its contributors by publishing their names in the list of developers working on the engine, and contributors have access to a very active community where almost all questions are answered and developers are supported. As a junior software developer, I feel I've gained invaluable experience by contributing to open source software, and it's also a really nice addition to my portfolio. There is a list of issues available on the GitHub page, where each issue has a difficulty level, priority, and issue type labeled.

This is wonderful! How do I get started?

Contributors can download the MIT open source code here: https://github.com/AtomicGameEngine/AtomicGameEngine

Disclaimer: This tutorial is based on using the Windows platform, SmartGit, and Visual Studio Community Version 2015. Another disclaimer: I wrote this tutorial with someone like myself in mind, i.e. amazingly average in many ways, but also relatively new to the industry and a first-time contributor to open source software.

Step 1: Install Visual Studio Community 2015 here. (Visual Studio download page)

Step 2: Install CMake, making sure cmake is on your path. (CMake install options)

Step 3: Fork the Atomic Game Engine's repository to create your own version of it.
a) Go to the AtomicGameEngine GitHub page and click on the Fork button. This will allow you to experiment and make changes to your own copy of the engine without affecting the original version. (Fork the repository)
b) Navigate to your GitHub profile and click on your forked version of the engine. (GitHub profile page with repositories)

Step 4: Clone the repository and include all of the submodules.
a) Click the green Clone or download button on the right and copy the web URL of your repository. (Your AGE GitHub page)
b) Open up SmartGit (or any other Git client) to clone the repository onto your machine. (Clone repository in SmartGit)
c) Paste the URL you copied earlier into the Repository URL field. (Copy remote URL)
d) Include all Submodules and Fetch all Heads and Tags. (Include all submodules)
e) Select a local directory to save the engine. (Add a local directory to save the engine on your machine)
h) Your engine should start cloning...

We've set everything up for our local repository. Next, we'd like to sync the original AtomicGameEngine with our local version of the engine so that we can always stay up to date with any changes made to the original engine.

Step 5: Create an upstream branch.
a) Click Remote → Add →
   i) Add the AtomicGameEngine remote URL.
   ii) Name it upstream. (Adding an upstream to the original engine)

We are ready to start building a Visual Studio solution of the engine.

Step 6: Run the CMake_VS2015.bat batch file in the AtomicGameEngine directory. This will generate a new folder in the root directory, which will contain the Atomic.sln for Visual Studio. (AGE directory)

At this point, we can make some changes to the engine (click here for a list of issues). Create a feature branch off the master for pull requests. Remember to stick to the code conventions already being used. Once you're happy with the changes you've made to the engine:
- Update your branch by merging in upstream. Resolve all conflicts and test it again.
- Commit your changes and push them up to your branch.

It's now time to send a pull request.

Step 7: Send a Pull Request.
a) Go to your fork of the AtomicGameEngine repository on GitHub. Select the branch you want to send through, and click New Pull Request.
b) Always remember to reference the issue number in your message to make it easier for the creators to manage the issues list. (Personal version of the AGE)

Your Pull Request will get reviewed by the creators and, if the content is acceptable, it will get landed into the engine and you'll become an official contributor to the Atomic Game Engine!

Resources for the Blog:
[1] The Atomic Game Engine Website
[2] Building the Atomic Editor from Source
[3] GitHub Help: Fork a Repo
[4] What I mean when I use the MIT license

About the Author: RaheelHassim is a Software Developer who recently graduated from Wits University in Johannesburg, South Africa. She was awarded the IGDA Women in Games Ambassadors scholarship in 2016 and attended the Games Developers Conference. Her games career started at Luma Interactive, where she became a contributor to the Atomic Game Engine. In her free time she binge-watches Friends and plays music covers on her guitar.

What the EU Copyright Directive means for developers - and what you can do

Richard Gall
11 Sep 2018
6 min read
Tomorrow, on Wednesday 12 September, the European Parliament will vote on amendments to the EU Copyright Bill, first proposed back in September 2016. This bill could have a huge impact on open source, software engineering, and even the future of the internet. Back in July, MEPs voted down a digital copyright bill that was incredibly restrictive. It asserted the rights of large media organizations to tightly control links to their stories and required copyright filters on user-generated content.

https://twitter.com/EFF/status/1014815462155153408

The vote tomorrow is an opportunity to amend aspects of the directive - that means many of the elements that were rejected in July could still find their way through.

What parts of the EU Copyright Directive are most important for software developers?

There are some positive aspects of the directive. To a certain extent, it could be seen as evidence of the European Union continuing a broader project to protect citizens by updating digital legislation - a move that GDPR began back in May 2018. However, there are many unintended consequences of the legislation. It's unclear whether the negative impact is down to any level of malicious intent from lawmakers, or is simply reflective of a significant level of ignorance about how the web and software work. There are 3 articles within the directive that developers need to pay particular attention to.

Article 13 of the EU Copyright Directive: copyright filters

Article 13 of the directive has perhaps had the most attention. Essentially, it will require "information society service providers" - user-generated information and content platforms - to use "recognition technologies" to protect against copyright infringement. This could have a severe impact on sites like GitHub, and by extension, the very philosophy of open collaboration and sharing on which they're built. It's for this reason that GitHub has played a big part in educating Brussels lawmakers about the possible consequences of the legislation. Last week, the platform hosted an event to discuss what can be done about tomorrow's vote. In it, Marten Mickos, CEO of cybersecurity company HackerOne, gave a keynote speech, saying that "Article 13 is just crap. It will benefit nobody but the richest, the wealthiest, the biggest - those that can spend tens of millions or hundreds of millions on building some amazing filters that will somehow know whether something is copyrighted or not."

https://youtu.be/Sm_p3sf9kq4

A number of MEPs in Brussels have, fortunately, proposed changes that would exclude software development platforms and instead focus the legislation on sites where users upload music and video. However, for those that believe strongly in an open internet, even these amendments would be a compromise that not only places an unnecessary burden on small sites that simply couldn't build functional copyright filters, but also opens a door to censorship online. A better alternative could be to ditch copyright filters and opt for licensing agreements instead. This is something put forward by German politician Julia Reda - if you're interested in policy amendments you can read them in detail here.

(Image via commons.wikimedia.org)

Julia Reda is a member of the Pirate Party in Germany. She's a vocal advocate of internet freedoms and an important voice in the fight against much of the directive (she wants the directive to be dropped in its entirety).
She's put together a complete list of amendments and alternatives here. Article 11 of the EU Copyright Directive: the "link tax" Article 11 follows the same spirit of article 13 of the bill. It gives large press organizations more control over how their content is shared and linked to online. It has been called the "link tax" - it could mean that you would need a license to link to content. According to news sites, this law would allow them to charge internet giants like Facebook and Google that link to their content. As Cory Doctorow points out in an article written for Motherboard in June, only smaller platforms would lose out - the likes of Facebook and Google could easily manage the cost. But there are other problems with article 11. It could, not only, as Doctorow also writes, "crush scholarly and encyclopedic projects like Wikipedia that only publish material that can be freely shared," but it could also "inhibit political discussions". This is because the 'link tax' will essentially allow large media organizations to fully control how and where their content is shared. "Links are facts" Doctorow argues, meaning that links are a vital component within public discourse, which allows the public to know who thinks what, and who said what. Article 3 of the EU Copyright Directive: restrictions on data mining Article 3 of the directive hasn't received as much attention as the two above, but it does nevertheless have important implications for the data mining and analytics landscape. Essentially, this proportion of the directive was originally aimed at posing restrictions on the data that can be mined for insights except in specific cases of scientific research. This was rejected by MEPs. However, it is still an area of fierce debate. Those that oppose it argue that restrictions on text and data mining could seriously hamper innovation and hold back many startups for whom data is central to the way they operate. However, given the relative success of GDPR in restoring some level of integrity to data (from a citizen's perspective), there are aspects of this article that might be worth building on as a basis for a compromise. With trust in a tech world at an all time low, this could be a stepping stone to a more transparent and harmonious digital domain. An open internet is worth fighting for - we all depend on it The difficulty unpicking the directive is that it's not immediately clear who its defending. On the one hand, EU legislators will see this as something that defends citizens from everything that they think is wrong with the digital world (and, let's be honest, there are things that are wrong with it). Equally, those organizations lobbying for the change will, as already mentioned, want to present this as a chance to knock back tech corporations that have had it easy for too long. Ultimately, though, the intention doesn't really matter. What really matters are the consequences of this legislation, which could well be catastrophic. The important thing is that the conversation isn't owned by well-intentioned law makers that don't really understand what's at stake, or media conglomerates with their own interests in protecting their content from the perceived 'excesses' of a digital world whose creativity is mistaken for hostility. If you're an EU citizen, get in touch with your MEP today. Visit saveyourinternet.eu to help the campaign. 
Read next
German OpenStreetMap protest against "Article 13" EU copyright reform making their map unusable
YouTube's CBO speaks out against Article 13 of EU's controversial copyright law

NewSQL: What the hype is all about

Amey Varangaonkar
06 Nov 2017
6 min read
First, there was data. Data became the database. Then came SQL. Next came NoSQL. And now comes NewSQL.

NewSQL origins

For decades, the relational database, or SQL, was the reigning data management standard in enterprises all over the world. With the advent of Big Data and cloud-based storage rose the need for a faster, more flexible, and more scalable data management system, one which didn't necessarily comply with SQL's ACID guarantees. This was popularly dubbed NoSQL, and databases like MongoDB, Neo4j, and others gained prominence in no time. We can attribute the emergence and eventual adoption of NoSQL databases to a couple of very important factors. The high costs and lack of flexibility of traditional relational databases drove many SQL users away. Also, NoSQL databases are mostly open source, and their enterprise versions are comparatively cheaper too. They are schema-less, meaning they can be used to manage unstructured data effectively. In addition, they scale well horizontally, i.e. you can add more machines to increase computing power and use it to handle high volumes of data. All these features of NoSQL come with an important tradeoff, however: these systems can't simultaneously ensure total consistency. Of late, there has been a rise in another type of database system that aims to combine the best of both worlds. Popularly dubbed 'NewSQL', this system promises to combine the relational data model of SQL with the scalability and speed of NoSQL.

NewSQL - the dark horse in the databases race

NewSQL is 'SQL on steroids', say many. This is mainly because all NewSQL systems start with the relational data model and the SQL query language, but also incorporate the features that have led to the rise of NoSQL, addressing the issues of scalability, flexibility, and high performance. They offer the assurance of ACID transactions, as in the relational model. However, what makes them really unique is that they allow the horizontal scaling of NoSQL and can process large volumes of data with high performance and reliability. This is why businesses really like the concept of NewSQL: the performance of NoSQL and the reliability and consistency of the SQL model, all packed in one. To understand what the hype surrounding NewSQL is all about, it's worth comparing NewSQL database systems with traditional SQL and NoSQL database systems and seeing where they stand out:

Characteristic                 | Relational (SQL) | NoSQL    | NewSQL
ACID compliance                | Yes              | No       | Yes
OLTP/OLAP support              | Yes              | No       | Yes
Rigid schema structure         | Yes              | No       | In some cases
Support for unstructured data  | No               | Yes      | In some cases
Performance with large data    | Moderate         | Fast     | Very fast
Performance overhead           | Huge             | Moderate | Minimal
Support from community         | Very high        | High     | Low

As we can see from the table above, NewSQL really comes through as the best when you're dealing with larger datasets and want to lower performance overheads. To give you a practical example, consider an organization that has to work with a large number of short transactions, accesses a limited amount of data, but executes those queries repeatedly. For such an organization, a NewSQL database system would be a perfect fit. These features are leading to the gradual growth of NewSQL systems. However, it will take some time for more industries to adopt them.

Not all NewSQL databases are created equal

Today, one has a host of NewSQL solutions to choose from. Some popular solutions are Clustrix, MemSQL, VoltDB, and CockroachDB.
Cloud Spanner, the latest NewSQL offering by Google, became generally available in February 2017, indicating Google's interest in the NewSQL domain and the value a NewSQL database can add to its existing cloud offerings. It is important to understand that there are significant differences among these various NewSQL solutions, so you should choose a NewSQL solution carefully after evaluating your organization's data requirements and problems. As this article on Dataconomy points out, while some databases handle transactional workloads well, they do not offer the benefit of native clustering; SAP HANA is one such example. NuoDB focuses on cloud deployments, but its overall throughput is found to be rather sub-par. MemSQL is a suitable choice when it comes to clustered analytics but falls short when it comes to consistency. Thus, the choice of database purely depends on the task you want to do and what trade-offs you are ready to accept without letting them affect your workflow too much.

DBAs and programmers in the NewSQL world

Regardless of which database system an enterprise adopts, the role of DBAs will continue to be important going forward. Core database administration and maintenance tasks such as backup, recovery, replication, and so on will still need to be taken care of. The major challenge for NewSQL DBAs will be choosing and then customizing the right database solution to fit the organizational requirements. Some degree of capacity planning and overall database administration skills might also have to be recalibrated. Likewise, NewSQL database programmers may find themselves dealing with data manipulation and querying tasks similar to those faced while working with traditional database systems, but NewSQL programmers will be doing these tasks at a much larger, or shall we say, more 'distributed' scale.

In conclusion

When it comes to solving a particular problem related to data management, it's often said that 80% of the solution comes down to selecting the right tool, and 20% is about understanding the problem at hand! In order to choose the right database system for your organization, you must ask yourself two questions: What is the nature of the data you will work with? And what are you willing to trade off? In other words, how important are factors such as the scalability and performance of the database system? For example, if you primarily work with transactional data and prioritize high performance and high scalability, then NewSQL databases might fit your bill just perfectly. If you're going to work with volatile data, NewSQL might help you there as well; however, there are better NoSQL solutions to tackle that data problem. As we have seen earlier, NewSQL databases have been designed to combine the advantages and power of both relational and NoSQL systems. It is important to know that NewSQL databases are not designed to replace either NoSQL or SQL relational models. They are, rather, intentionally built alternatives for data processing that mask the flaws and shortcomings of both relational and non-relational database systems. The ultimate goal of NewSQL is to deliver a high-performance, highly available solution to handle modern data, without compromising on data consistency and high-speed transaction capabilities.

ERP tool in focus: Odoo 11

Sugandha Lahoti
22 May 2018
3 min read
What is Odoo?

Odoo is an all-in-one management software suite that offers a range of business applications. It forms a complete suite of enterprise management applications targeting companies of all sizes. It is versatile in the sense that it can be used across multiple categories, including CRM, website, e-commerce, billing, accounting, manufacturing, warehouse management, project management, and inventory. The community version is free of charge and can be installed with ease. Odoo is one of the fastest growing open source business application development products available. With the announcement of version 11 of Odoo, there are many new features, and the face of business application development with Odoo has changed.

In Odoo 11, the online installation documentation continues to improve, and there are now options for Docker installations. In addition, Odoo 11 uses Python 3 instead of Python 2.7. This will not change the steps you take in installing Odoo, but it will change the specific libraries that get installed. While much of the process is the same as in previous versions of Odoo, there have been some pricing changes in Odoo 11. There are only two free users now, and you pay for additional users. There is one free application that you can install for an unlimited number of users, but as soon as you have more than one application, you must pay $25 for each user, including the first user. If you have thought about developing in Odoo, now is the best time to start. Before I convince you why Odoo is great, let's take a step back and revisit our fundamentals.

What is an ERP?

ERP is an acronym for Enterprise Resource Planning. An ERP gives a global and real-time view of data that can enable companies to address concerns and drive improvements. It automates core business operations such as the order-to-fulfillment and procure-to-pay processes. It also reduces risk management overhead for companies and enhances customer service by providing a single source for billing and relationship tracking.

Why Odoo?

Odoo is extensible and easy to customize

Odoo's framework was built with extensibility in mind. Extensions and modifications can be implemented as modules, to be applied over the module whose feature is being changed, without actually changing it. This results in clean, easy-to-control, and customizable applications.

You get integrated information

Instead of distributing data throughout several separate databases, Odoo maintains a single location for all the data. Moreover, the data remains consistent and up to date.

Single reporting system

Odoo has a unified, single reporting system to analyze and track status. Users can also run their own reports without any help from IT. Single reporting systems, such as those provided by Odoo ERP software, help make reporting easier and customizable.

Built around Python

Odoo is built using the Python programming language, which is one of the most popular languages used by developers.

Large community

The capability to combine several modules into feature-rich applications, along with the open source nature of Odoo, is probably among the most important factors explaining the community that has grown around Odoo. In fact, there are thousands of community modules available for Odoo, covering virtually every topic, and the number of people getting involved has been steadily growing every year.
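Because Odoo is built around Python, extending it typically starts with a model definition like the hypothetical sketch below, the sort of file an Odoo 11 add-on module would keep in its models/ directory; the model name and fields are invented for illustration and are not taken from the video mentioned next.

```python
# A minimal sketch of a custom Odoo 11 model. The model name and fields
# are hypothetical; Odoo's ORM maps this class to a database table.
from odoo import fields, models


class LibraryBook(models.Model):
    _name = 'library.book'          # technical name of the new model
    _description = 'Library Book'

    name = fields.Char(string='Title', required=True)
    isbn = fields.Char(string='ISBN')
    date_published = fields.Date()
    active = fields.Boolean(default=True)
```

Once the module declaring this class is installed, Odoo's ORM creates the matching database table and exposes the model to views, access rules, and reports.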
Go through our video, Odoo 11 Development Essentials, to learn to scaffold a new module, create new models, and use the proper functions that make Odoo 11 the best ERP out there.

Top 5 free Business Intelligence tools
How to build a live interactive visual dashboard in Power BI with Azure Stream
Tableau 2018.1 brings new features to help organizations easily scale analytics

5 web development tools that will matter in 2018

Richard Gall
12 Dec 2017
4 min read
It's been a year of change and innovation in web development. We've seen Angular shifting quickly, React rising to dominate, and the surprising success of Vue.js. We've discussed the trends and issues we think will matter in web development in 2018 elsewhere, but here let's get down to the key tools you might be using or learning.

1. Vue.js

If you remember back to 2016, the JavaScript framework debate centred on React and Angular. Which one was better? You didn't have to look hard to find Quora and Reddit threads, or Medium posts, comparing and contrasting the virtues of one or the other. But in 2017, Vue picked up pace to enter the running as a real competitor to the two hyped tools. What's most notable about Vue.js is simply how much people enjoy using it. The State of Vue.js report found that 96% of users would use it for their next project. While it's clearly pointless to say that one tool is 'better' than another, the developer experience offered by Vue says a lot about what's important to developers - it's only likely to become more popular in 2018. Explore Vue eBooks and videos.

2. Webpack

Webpack is a tool that's been around for a number of years but has recently seen its popularity grow. Again, this is likely down to the increased emphasis on improving the development experience - making development easier and more enjoyable. Webpack, quite simply, brings all the assets you need in front-end development - like JavaScript, fonts, and images - into one place. This is particularly useful if you're developing complicated front ends. So, if you're looking for something that's going to make complexity more manageable in 2018, we certainly recommend spending some time with Webpack. Learn Webpack with Deploying Web Applications with Webpack.

3. React

Okay, you were probably expecting to see React. But why not include it? It's gone from strength to strength throughout 2017 and is only going to continue to grow in popularity throughout 2018. It's important, though, that we don't get caught up in the hype; that, after all, is one of the primary reasons we've seen JavaScript fatigue dominate the conversation. Instead, React's success depends on how we integrate it within our wider tech stacks, with tools like Webpack, for example. Ultimately, if React continues to allow developers to build incredible UIs in a way that's relatively stress-free, it won't be disappearing any time soon. Discover React content here.

4. GraphQL

GraphQL might seem a little left field, but this tool built by Facebook has quietly been infiltrating development toolchains since it was made public back in 2015. It's seen by some as software that's going to transform the way we build APIs. This article explains everything you need to know about GraphQL incredibly well, but to put it simply, GraphQL "is about designing your APIs more effectively and getting very specific about how clients access your data". Being built by Facebook, it's a tool that integrates very well with React; if you're interested, this case study by the New York Times explains how GraphQL and React played a part in their website redesign in 2017. Learn GraphQL with React and Relay. Download or stream our video.

5. WebAssembly

While we don't want to get sucked into the depths of the hype cycle, WebAssembly is one of the newest and most exciting things in web development.
WebAssembly is, according to the project site, "a new portable size- and load-time-efficient format suitable for the web". The most important thing you need to know is that it's fast - faster than JavaScript. "Unlike other approaches that require plug-ins to achieve near-native performance in the browser, WebAssembly runs entirely within the Web Platform. This means that developers can integrate WebAssembly libraries for CPU-intensive calculations (e.g. compression, face detection, physics) into existing web apps that use JavaScript for less intensive work," explains Mozilla fellow David Bryant in this Medium post. We think 2018 will be the year WebAssembly finally breaks through and makes it big - perhaps offering a way to move past conversations around JavaScript fatigue.
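To make the GraphQL point above a little more concrete, here is a minimal sketch of a client query sent from TypeScript. The endpoint URL and the article/author fields are purely hypothetical - every GraphQL server defines its own schema - but the shape of the request (a POST carrying a query that asks for exactly the fields you need) is the general pattern.

```typescript
// Hypothetical schema and endpoint, for illustration only.
const query = `
  query {
    article(id: "42") {
      title
      author { name }
    }
  }
`;

async function fetchArticle(): Promise<void> {
  // A GraphQL request is typically a POST whose JSON body carries the query string.
  const response = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data } = await response.json();
  // The response mirrors the query: only the requested fields come back.
  console.log(data.article.title, data.article.author.name);
}

fetchArticle();
```

And here is a rough sketch of what calling into a WebAssembly module from TypeScript can look like. It assumes a hypothetical sum.wasm module that exports an add function and is served with the application/wasm MIME type; the point is simply that the heavy lifting happens inside WebAssembly while JavaScript or TypeScript orchestrates.

```typescript
// Hypothetical sum.wasm module exporting add(a, b); illustration only.
interface SumExports {
  add(a: number, b: number): number;
}

async function loadSum(): Promise<SumExports> {
  // instantiateStreaming compiles and instantiates straight from the fetch response.
  const { instance } = await WebAssembly.instantiateStreaming(fetch("sum.wasm"), {});
  return instance.exports as unknown as SumExports;
}

loadSum().then((wasm) => {
  console.log(wasm.add(2, 3)); // -> 5, computed inside the WebAssembly module
});
```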

Visit a 3D printing filament factory - 3dk.berlin

Michael Ang
02 Sep 2015
5 min read
Have you ever wondered where the filament for your 3D printer comes from and how it's made? I recently had the chance to visit 3dk.berlin, a local filament manufacturer in Berlin. 3dk.berlin distinguishes itself by offering a huge variety of colors for its filament. As a designer it's great to have a large palette of colors to choose from, and I chose 3dk filament for my Polygon Construction Kit workshop at Thingscon 2015 (they're sponsoring the workshop). Today we'll be looking at how one filament producer takes raw plastic and forms it into the colored filament you can use in your 3D printer.

[Image: Some of the many colors offered by 3dk.berlin]

3dk.berlin is located at the very edge of Berlin, in the area of Heiligensee, which is basically its own small town. 3dk is a family-owned business run by Volker Bernhardt as part of BERNHARDT Kunststoffverarbeitungs GmbH (that's German for "plastics processing company"). 3dk is focused on bringing BERNHARDT's experience with injection-moulded and extruded plastics to the new field of 3D printing.

Inside the factory, neutral-colored plastic pellets are mixed with colored "master batch" pellets and then extruded into filament. The extruding machine melts and mixes the pellets, then squeezes them through a nozzle, which determines the diameter of the extruded filament. The hot filament is run through a cool water bath and coiled onto large spools. Conceptually it's quite simple, but getting extremely consistent filament diameter, color, and printing properties is demanding. Small details like air and moisture trapped inside the filament can lead to inconsistent prints. Bigger problems like material contamination can lead to a jammed nozzle in your printer. 3dk spent 1.5 years developing and fine-tuning their machine before they were satisfied with the results, to a German level of precision. They didn't let me take pictures of their extrusion machines, since some of their techniques are proprietary, but you can get a good view of a similar machine in this filament extrusion machine video.

[Image: Florian (no small guy himself) with a mega-spool from the extrusion machine]

The filament from the extrusion machine is wound onto 10kg spools - these are big! The filament from these large spools is then rewound onto smaller spools for sale to customers. 3dk tests their filament on a variety of printers in-house to ensure ongoing quality. Where we might do a small print of 20 grams to test a new filament, 3dk might do a "small" test of 2kg!

[Image: Test print with a full-size plant (about 4 feet tall)]

Why produce filament in Germany when cheaper filament is available from abroad? Florian Deurer from 3dk explained some of the benefits to me. 3dk gets their PLA base material directly from a supplier that doesn't use additives - the same PLA is used by other manufacturers for items like food wrapping. The filament colorants come from a German supplier and are also "harmless for food". For the colorants in particular, there might be the temptation for less scrupulous or regulated manufacturers to use toxic substances like heavy metals or other chemicals. Beyond safety and practical considerations like printing quality, using locally produced filament also provides local jobs.

What really sets 3dk apart from other filament makers in an increasingly competitive field is the range of colors they produce. I asked Florian for some orange filament and he asked "which one?"
The colors on offer range from subtle (there's a whole selection of whites, for example) to more extreme bright colors and metallic effects. Designers will be happy to hear that they can order custom colors using the Pantone color standard (for orders of 5kg / 11lbs and up).

[Image: Which white would you like? Standard, milky, or pearl?]

Looking to the future of 3D printing, it will be great to see more environmentally friendly materials become available. The most popular material for home 3D printing right now is probably PLA plastic (the same material 3dk uses for most of their filament). PLA is usually derived from corn, which is an annually renewable crop. PLA is technically compostable, but this has to take place in industrial composting conditions at high temperature and humidity. People are making progress on recycling PLA and ABS plastic prints back into filament at home, but the machines to make this easy and more common are still being developed.

[Image: 100% recycled PLA print of Origamix_Rabbit by Mirice, printed on an i3 Berlin]

3dk offers a filament made from industrially recycled PLA. The color and texture of this material vary a little along the spool, but I found it to print very well in my first tests, and your object ends up a nice, slightly transparent olive green. I recently got a "sneak peek" at a filament 3dk is working on that is compostable under natural conditions. This filament is pre-production, so the specifications haven't been finalized, but Florian told me that the prints are stable under normal conditions but can break down when exposed to soil bacteria. The pigments also contain "nothing bad" and break down into minerals. The sample print I saw was flexible, with a nice surface finish and color. A future where we can manufacture objects at home and throw them onto our compost heap after giving them some good use sounds pretty bright to me!

[Image: A friendlier future for 3D printing? This print can naturally biodegrade]

About the Author

Michael Ang is a Berlin-based artist / engineer working at the intersection of technology and human experience. He is the creator of the Polygon Construction Kit, a toolkit for creating large physical polygons using small 3D-printed connectors. His Light Catchers project collects crowdsourced light recordings into a public light sculpture.

Why choose Ansible for your automation and configuration management needs?

Savia Lobo
03 Jul 2018
4 min read
Of late, organizations have been moving towards getting their systems automated. The benefits are many. Firstly, it saves a huge chunk of time, and secondly, it saves human resources from being spent on simple tasks such as updates. A few years back, Chef and Puppet were the two popular names when it came to tools for software automation. Over the years, they have gained a strong rival which has surpassed them and now sits as one of the most popular tools for software automation. Ansible is the one!

Ansible is an open source tool for IT configuration management, deployment, and orchestration. It is perhaps the definitive configuration management tool. Chef and Puppet may have got there first, but its rise over the last couple of years is largely down to its impressive automation capabilities. And with operations engineers and sysadmins facing constant time pressures, the need to automate isn't a "nice to have" but a necessity. Its tagline is "allowing smart people to do smart things." It's hard to argue that any software should aim to do much more than that.

Ansible's rise in popularity

Ansible originated in 2013 and is a leader in IT automation and DevOps. It was bought by Red Hat in 2015 to further Red Hat's goal of creating frictionless IT. The reasons Red Hat acquired Ansible were its simplicity and versatility. It had the second-mover advantage of entering the DevOps world after Puppet, which means it can orchestrate multi-tier applications in the cloud. This improves server uptime by implementing an "immutable server architecture" for deploying, creating, deleting, or migrating servers across different clouds. For those starting afresh, it is easy to write and maintain automation workflows, and it offers a plethora of modules that make it easy for newbies to get started.

Benefits for Red Hat and its community

Ansible complements Red Hat's popular cloud products, OpenStack and OpenShift. Red Hat had proved to be complex yet safe open source software for enterprises. However, it was not easy to use, and because of this many developers started migrating to other cloud services for easier and simpler deployment options. By adopting Ansible, Red Hat finally provided an easy option to automate and modernize its customers' IT solutions. Customers can now focus on automating various baseline tasks. It also helps Red Hat refresh its traditional playbooks, and it allows enterprises to manage IT services and infrastructure together with the help of Ansible's YAML.

The most prominent benefit of using Ansible, for both enterprises and individuals, is that it is agentless. It achieves this by leveraging SSH and Windows Remote Management. Both approaches reuse connections and use minimal network traffic, which brings added security benefits and improves resource utilization on both the clients and the central management server. Thus, the user does not have to worry about network or server management and can focus on other priority tasks.

What can you use it for?

Easy configurations: Ansible provides developers with configurations that are easy to understand, by both humans and machines. It also includes many modules and user-built roles, so one need not start building from scratch.

Application lifecycle management: One can rest assured about the application development lifecycle with Ansible. Here, Ansible is used for defining the application, and Red Hat Ansible Tower is used for managing the entire deployment process.
Continuous delivery: Manage your business with the help of Ansible's push-based architecture, which allows sturdier control over all the required operations. Orchestrating server configuration in batches makes it easy to roll out changes across the environment.

Security and compliance: While security policies are defined in Ansible, one can choose to integrate the process of scanning and solving issues across the site into other automated processes. Scanning of jobs and system tracking ensures that systems do not deviate from the parameters assigned. Additionally, Ansible Tower provides secure storage for machine credentials and RBAC (role-based access control).

Orchestration: Ansible brings a high degree of discipline and order to the environment. This ensures that all application pieces work in unison and are easily manageable, despite the complexity of the applications involved.

Though it is popular as an IT automation tool, many organizations use it in combination with Chef and Puppet, because it may have scaling issues and lack performance for larger deployments. Don't let that stop you from trying Ansible; it is much loved by DevOps practitioners as it is written in Python and thus easy to learn. Moreover, it offers credible support and an agentless architecture, which makes it easy to control servers and much more within an application development environment.

An In-depth Look at Ansible Plugins
Mastering Ansible – Protecting Your Secrets with Ansible
Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0

AI chip wars: Is Brainwave Microsoft's Answer to Google's TPU?

Amarabha Banerjee
18 Oct 2017
5 min read
When Google decided to design its own chip with the TPU, it generated a lot of buzz around faster and smarter computation with its ASIC-based architecture. Google claimed its move would significantly enable intelligent apps to take over, and industry experts somehow believed a reply from Microsoft was always coming (remember Bing?). Well, Microsoft has announced its arrival into the game - with its own real-time AI-enabled chip called Brainwave. Interestingly, as the two tech giants compete in chip manufacturing, developers are certainly going to have more options now, while facing the complex computational processes of modern-day systems.

What is Brainwave?

Until recently, Nvidia was the dominant market player in the microchip segment, creating GPUs (Graphics Processing Units) for faster processing and computation. But after Google disrupted the trend with its TPU (Tensor Processing Unit), the surprise package in the market has come from Microsoft. More so because its "real-time data processing" Brainwave chip claims to be faster than the Google chip (the TPU 2.0, or Cloud TPU). The one thing common to both the Google and Microsoft chips is that they can both train and simulate deep neural networks much faster than any of the existing chips. The fact that Microsoft claims Brainwave supports real-time AI systems with minimal lag by itself raises an interesting question: are we looking at a new revolution in the microchip industry? The answer perhaps lies in the inherent methodology and architecture of both these chips (TPU and Brainwave) and the way they function. What are the practical challenges of implementing them in real-world applications?

The Brainwave Architecture: Move over GPU, DPU is here

In case you are wondering what the hype around Microsoft's Brainwave chip is about, the answer lies directly in its architecture and design. Present-day complex computational standards are defined by high-end games, for which GPUs (Graphical Processing Units) were originally designed. Brainwave differs completely from the GPU architecture: the core components of a Brainwave chip are Field Programmable Gate Arrays, or FPGAs. Microsoft has developed a huge number of FPGA modules on top of which DNN (Deep Neural Network) layers are synthesized. Together, this setup can be compared to something like hardware microservices, where each task is assigned by software to different FPGA and DNN modules. These software-controlled modules are called DNN Processing Units, or DPUs. This eliminates CPU latency and the need to transfer data to and from the backend.

There are two seemingly different methodologies here, in both architecture and application: the hard DPU and the soft DPU. While Microsoft has used the soft DPU approach, where the allocation of memory modules is determined by software and by the volume of data at the time of processing, the hard DPU has a predefined memory allocation that doesn't allow for the flexibility so vital to real-time processing. The software-controlled feature is exclusive to Microsoft, and unlike other AI processing chips, Microsoft has developed its own easy-to-process data types that are faster to process. This enables the Brainwave chip to perform near real-time AI computations easily. Thus, in a way, Microsoft Brainwave holds an edge over the Google TPU when it comes to real-time decision-making and computation capabilities.

Brainwave's edge over TPU 2 - Is it real time?
The reason Google ventured into designing its own chips was the need to increase the number of data centers as user queries grew. Google realized that instead of running data queries via data centers, it would be far more practical if the computation were performed on the native system. That's where it needed more computational capability than modern-day market leaders like Intel's x86 Xeon processors and Nvidia's Tesla K80 GPUs offered. But Google opted for Application Specific Integrated Circuits (ASICs) instead of FPGAs, the reason being that they are completely customizable - not specific to one particular neural network but applicable to multiple networks. The trade-off for this ability to run multiple neural networks was, of course, real-time computation, which Brainwave can achieve because it uses the DPU architecture. The initial data released by Microsoft shows that Brainwave has a data transfer bandwidth of 20TB/sec, 20 times faster than the latest Nvidia GPU chip. Also, the energy efficiency of Brainwave is claimed to be 4.5 times better than that of current chips. Whether Google will up its ante and improve on the existing TPU architecture to make it suitable for real-time computation is something only time can tell.

[Image: Source - Brainwave_HOTCHIPS2017 PPT on Microsoft Research Blog]

Future outlook and challenges

Microsoft is yet to declare benchmarking results for the Brainwave chip, but Microsoft Azure customers will most definitely look forward to its availability for faster and better computation. What is even more promising is that Brainwave works seamlessly with Google's TensorFlow and Microsoft's own CNTK framework. Tech startups like Rigetti, Mythic, and Waves are trying to create mainstream applications that employ AI and quantum computation techniques, bringing AI to the masses by building practical AI-driven applications for everyday consumers - and these companies have shown a keen interest in both the Microsoft and Google AI chips. In fact, Brainwave will be best suited to companies like these, which are looking to use AI capabilities for everyday tasks but remain few in number because of the limited computational capabilities of current chips. The challenges for all AI chips, including Brainwave, will still revolve around their data handling capabilities, the reliability of their performance, and improving the memory capabilities of our current hardware systems.

4 myths about Git and GitHub you should know about

Savia Lobo
07 Oct 2018
3 min read
With an aim to replace BitKeeper, Linus Torvalds created Git in 2005 to support the development of the Linux kernel. However, Git isn't limited to code: any product or project that has multiple contributors or requires release management and versioning stands to gain an improved workflow through Git. Just as every solution or tool has its own positives and negatives, Git is also surrounded by myths. Alex Magana and Joseph Mul, the authors of the Introduction to Git and GitHub course, discuss in this post some of the myths around Git and GitHub.

Git is GitHub

Because Git and GitHub are usually adopted together as the version control toolkit, adopters of the two tools often misconceive Git and GitHub as interchangeable. Git is a tool that offers the ability to track changes to the files that constitute a project; it provides the utility used to monitor changes and persist them. GitHub, on the other hand, is akin to a website hosting service - the difference being that with GitHub, the hosted content is a repository. The repository can then be accessed from this central point and the codebase shared.

Backups are equivalent to version control

This emanates from a misunderstanding of what version control is, and by extension what Git achieves when it's incorporated into the development workflow. Contrary to archives created based on a team's backup policy, Git tracks changes made to files and maintains snapshots of a repository at a given point in time.

Git is only suitable for teams

With the use of hosting services such as GitHub, the element of sharing and collaboration may be perceived as the preserve of teams. But Git offers gains beyond source control. It lends itself to the delivery of a feature or product from the point of development to deployment; in other words, Git is a tool for delivery. It can therefore be used to roll out functionality and manage changes to source code for teams and individuals alike.

To effectively use Git, you need to learn every command

Whether you're working as an individual or on a team, the common commands required to contribute to a repository boil down to: initiating tracking of specific files, persisting changes made to tracked files, reverting changes made to files, and incorporating changes introduced by other developers working on the same project.

The four myths discussed by the authors clarify both Git and GitHub and their uses. If you found this post useful, do check out the course titled Introduction to Git and GitHub by Alex and Joseph.

GitHub addresses technical debt, now runs on Rails 5.2.1
GitLab 11.3 released with support for Maven repositories, protected environments and more
GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub