Tech Guides

7 AI tools mobile developers need to know

Bhagyashree R
20 Sep 2018
11 min read
Advancements in artificial intelligence (AI) and machine learning have enabled the evolution of the mobile applications we see today. With AI, apps can now recognize speech, images, and gestures, and translate voices with extraordinary success rates. With so many apps hitting the app stores, it is crucial that they stand apart from competitors by meeting the rising standards of consumers. To stay relevant, mobile developers need to keep up with these advancements in artificial intelligence. As AI and machine learning become increasingly popular, there is a growing selection of tools and software available for developers to build their apps with. These cloud-based and device-based artificial intelligence tools give developers a way to power their apps with unique features. In this article, we will look at some of these tools and how app developers are using them in their apps.

Caffe2 - A flexible deep learning framework

[Image source: Qualcomm]

Caffe2 is a lightweight, modular, and scalable deep learning framework developed by Facebook. It is the successor to Caffe, a project started at the University of California, Berkeley. It is primarily built for production use cases and mobile development, and offers developers greater flexibility for building high-performance products. Caffe2 aims to provide an easy way to experiment with deep learning and to leverage community contributions of new models and algorithms. It is cross-platform and integrates with Visual Studio, Android Studio, and Xcode for mobile development. Its core C++ libraries provide speed and portability, while its Python and C++ APIs make it easy to prototype, train, and deploy your models. It utilizes GPUs when they are available and is fine-tuned to take full advantage of the NVIDIA GPU deep learning platform, using NVIDIA deep learning SDK libraries such as cuDNN, cuBLAS, and NCCL to deliver high performance.

Functionalities
- Enables automation
- Image processing
- Object detection
- Statistical and mathematical operations
- Supports distributed training, enabling quick scaling up or down

Applications
Facebook uses Caffe2 to help its developers and researchers train large machine learning models and deliver AI on mobile devices. Using Caffe2, the company significantly improved the efficiency and quality of its machine translation systems. As a result, all machine translation models at Facebook have been transitioned from phrase-based systems to neural models for all languages.
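To give a feel for the Python API, here is a minimal sketch, loosely modeled on Caffe2's introductory tutorials, that builds and runs a tiny fully connected network; the blob and net names ("data", "fc1", "toy_net") are our own placeholders, not anything fixed by the framework:

```python
import numpy as np
from caffe2.python import brew, model_helper, workspace

# Feed a dummy input blob into the default workspace
workspace.FeedBlob("data", np.random.rand(16, 100).astype(np.float32))

# A tiny network: one fully connected layer followed by softmax
m = model_helper.ModelHelper(name="toy_net")
fc1 = brew.fc(m, "data", "fc1", dim_in=100, dim_out=10)
brew.softmax(m, fc1, "softmax")

workspace.RunNetOnce(m.param_init_net)   # initialize weights and biases
workspace.CreateNet(m.net)
workspace.RunNet(m.net.Proto().name)     # run one forward pass
print(workspace.FetchBlob("softmax").shape)  # (16, 10)
```

In a mobile workflow you would train a model like this on a workstation and ship the serialized init and predict nets to the device.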
OpenCV - Give the power of vision to your apps

[Image source: AndroidPub]

OpenCV, short for Open Source Computer Vision Library, is a collection of programming functions for real-time computer vision and machine learning. It has C++, Python, and Java interfaces and supports Windows, Linux, macOS, iOS, and Android. It also supports the deep learning frameworks TensorFlow and PyTorch. Written natively in C/C++, the library can take advantage of multi-core processing. OpenCV aims to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. The library consists of more than 2,500 optimized algorithms, including both classic and state-of-the-art computer vision algorithms.

Functionalities
These algorithms can be used for the following:
- Detect and recognize faces
- Identify objects
- Classify human actions in videos
- Track camera movements and moving objects
- Extract 3D models of objects
- Produce 3D point clouds from stereo cameras
- Stitch images together to produce a high-resolution image of an entire scene
- Find similar images from an image database

Applications
Plickers is an assessment tool that lets you poll your class for free, without the need for student devices. It uses OpenCV as its graphics and video SDK. You just have to give each student a card called a paper clicker, and use your iPhone or iPad to scan them to do instant checks-for-understanding, exit tickets, and impromptu polls.

Also check out
- FastCV
- BoofCV
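As a quick taste of the Python interface, here is a minimal sketch of face detection using one of the Haar cascade models that ships with the opencv-python package; the image paths are placeholders:

```python
import cv2

# Haar cascade face detector bundled with the opencv-python package
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")  # placeholder input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, w, h) rectangle per detected face
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", img)
```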
TensorFlow Lite and Mobile - An Open Source Machine Learning Framework for Everyone

[Image source: YouTube]

TensorFlow is an open source software library for building machine learning models. Its flexible architecture allows easy model deployment across a variety of platforms, ranging from desktops to mobile and edge devices. Currently, TensorFlow provides two solutions for deploying machine learning models on mobile devices: TensorFlow Mobile and TensorFlow Lite. TensorFlow Lite is an improved version of TensorFlow Mobile, offering better performance and a smaller app size. Additionally, it has very few dependencies compared to TensorFlow Mobile, so it can be built and hosted in simpler, more constrained device scenarios. TensorFlow Lite also supports hardware acceleration with the Android Neural Networks API. The catch is that TensorFlow Lite is currently in developer preview and only covers a limited set of operators, so to develop production-ready mobile TensorFlow apps it is recommended to use TensorFlow Mobile for now. TensorFlow Mobile also supports customization, letting you add operators it does not ship with by default, which is a requirement for the models behind many AI apps. Although TensorFlow Lite is in developer preview, its future releases "will greatly simplify the developer experience of targeting a model for small devices". It is also likely to replace TensorFlow Mobile, or at least overcome its current limitations.

Functionalities
- Speech recognition
- Image recognition
- Object localization
- Gesture recognition
- Optical character recognition
- Translation
- Text classification
- Voice synthesis

Applications
The Alibaba tech team is using TensorFlow Lite to implement and optimize speaker recognition on the client side. This addresses many of the common issues of the server-side model, such as poor network connectivity, extended latency, and poor user experience. Google uses TensorFlow for advanced machine learning models, including Google Translate and RankBrain.
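Once a model has been converted to the .tflite format, invoking it from Python looks roughly like the sketch below. In the 2018 developer preview the interpreter lived under tf.contrib.lite; later releases expose it as tf.lite, which is what we assume here, and the model path is a placeholder:

```python
import numpy as np
import tensorflow as tf

# Load a converted model and run a single inference
interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder path
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed dummy data shaped to whatever the model expects
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```

The same interpreter API exists on Android and iOS, which is what makes the converted model portable across devices.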
Core ML - Integrate machine learning in your iOS apps

[Image source: AppleToolBox]

Core ML is a machine learning framework that can be used to integrate machine learning models into your iOS apps. It supports Vision for image analysis, Natural Language for natural language processing, and GameplayKit for evaluating learned decision trees. Core ML is built on top of the following low-level APIs, providing a simple, higher-level abstraction over them:
- Accelerate optimizes large-scale mathematical computations and image calculations for high performance.
- Basic neural network subroutines (BNNS) provides a collection of functions with which you can implement and run neural networks trained with previously obtained data.
- Metal Performance Shaders is a collection of highly optimized compute and graphics shaders designed to integrate easily and efficiently into your Metal app.

To train and deploy custom models, you can also use the Create ML framework. It is a machine learning framework in Swift that can be used to train models using native Apple technologies like Swift, Xcode, and other Apple frameworks.

Functionalities
- Face and face landmark detection
- Text detection
- Barcode recognition
- Image registration
- Language and script identification
- Design games with functional and reusable architecture

Applications
Lumina is a camera framework written in Swift for easily integrating Core ML models, as well as image streaming, QR/barcode detection, and many other features.

ML Kit by Google - Seamlessly build machine learning into your apps

[Image source: Google]

ML Kit is a cross-platform suite of machine learning tools for Google's Firebase mobile development platform. It brings together Google's ML technologies, such as the Google Cloud Vision API, TensorFlow Lite, and the Android Neural Networks API, in a single SDK, enabling you to apply ML techniques to your apps easily. You can leverage its ready-to-use APIs for common mobile use cases such as recognizing text, detecting faces, identifying landmarks, scanning barcodes, and labeling images. If these APIs don't cover your machine learning problem, you can use your own existing TensorFlow Lite models: you just have to upload your model to Firebase, and ML Kit will take care of the hosting and serving. These APIs can run on-device or in the cloud. The on-device APIs process your data quickly and work even when there's no network connection, while the cloud-based APIs leverage the power of Google Cloud Platform's machine learning technology for an even higher level of accuracy.

Functionalities
- Automate tedious data entry for credit cards, receipts, and business cards, or help organize photos.
- Extract text from documents, which you can use to increase accessibility or translate documents.
- Use real-time face detection in applications like video chat or games that respond to the player's expressions.
- Use image labeling to add capabilities such as content moderation and automatic metadata generation.

Applications
Lose It!, a popular calorie counter app, uses the ML Kit Text Recognition API to quickly capture nutrition information, making records easy to create and extremely accurate. PicsArt uses ML Kit custom model APIs to provide over 1,000 TensorFlow-powered effects, enabling millions of users to create amazing images with their mobile phones.

Dialogflow - Give users new ways to interact with your product

[Image source: Medium]

Dialogflow is a Natural Language Understanding (NLU) platform that makes it easy for developers to design and integrate conversational user interfaces into mobile apps, web applications, devices, and bots. You can integrate it with Alexa, Cortana, Facebook Messenger, and the other platforms your users are on. With Dialogflow you can build interfaces, such as chatbots and conversational IVR, that enable natural and rich interactions between your users and your business. It provides this human-like interaction with the help of agents. Agents can understand the vast and varied nuances of human language and translate them into the standard, structured meaning that your apps and services can understand. It comes in two editions: Dialogflow Standard Edition and Dialogflow Enterprise Edition. Dialogflow Enterprise Edition users have access to Google Cloud Support and a service level agreement (SLA) for production deployments.

Functionalities
- Provide customer support
- One-click integration on 14+ platforms
- Supports multilingual responses
- Improve NLU quality by training with negative examples
- Debug using more insights and diagnostics

Applications
Domino's simplified the process of ordering pizza using Dialogflow's conversational technology. Domino's leveraged its large customer service knowledge base and Dialogflow's NLU capabilities to build both simple customer interactions and increasingly complex ordering scenarios.

Also check out
- Wit.ai
- Rasa NLU
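From a backend or script, detecting an intent looks roughly like this sketch, which follows the shape of Google's Python client samples for the Dialogflow V2 API; the project ID, session ID, and utterance are placeholders, and the client library's packaging has changed over time, so treat the import as an assumption:

```python
import dialogflow  # Dialogflow V2 Python client (packaging has changed over time)

def detect_intent_text(project_id, session_id, text, language_code="en"):
    """Send one user utterance to an agent and return its text response."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.types.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.types.QueryInput(text=text_input)

    response = session_client.detect_intent(session=session, query_input=query_input)
    return response.query_result.fulfillment_text

# Placeholder IDs, for illustration only
print(detect_intent_text("my-gcp-project", "session-123", "I want to order a pizza"))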
Microsoft Cognitive Services - Make your apps see, hear, speak, understand, and interpret your users' needs

[Image source: Neel Bhatt]

Cognitive Services is a collection of APIs, SDKs, and services that enables developers to easily add cognitive features to their applications, such as emotion and video detection, and facial, speech, and vision recognition, among others. You need not be an expert in data science to make your systems more intelligent and engaging. The pre-built services come with high-quality RESTful intelligent APIs for the following:
- Vision: Make your apps identify and analyze content within images and videos. Provides capabilities such as image classification, optical character recognition in images, face detection, person identification, and emotion identification.
- Speech: Integrate speech processing capabilities into your app or services, such as text-to-speech, speech-to-text, speaker recognition, and speech translation.
- Language: Help your application or service understand the meaning of unstructured text, or the intent behind a speaker's utterances. It comes with capabilities such as text sentiment analysis, key phrase extraction, and automated and customizable text translation.
- Knowledge: Create knowledge-rich resources that can be integrated into apps and services. It provides features such as Q&A extraction from unstructured text, knowledge base creation from collections of Q&As, and semantic matching for knowledge bases.
- Search: Using the Search APIs, you can find exactly what you are looking for across billions of web pages. They provide features like ad-free, safe, location-aware web search, Bing visual search, custom search engine creation, and many more.

Applications
To safeguard against fraud, Uber uses the Face API, part of Microsoft Cognitive Services, to help ensure the driver using the app matches the account on file. Cardinal Blue developed PicCollage, a popular mobile app that allows users to combine photos, videos, captions, stickers, and special effects to create unique collages.

Also check out
- AWS machine learning services
- IBM Watson

These were some of the tools that will help you integrate intelligence into your apps. These libraries make it easier to add capabilities like speech recognition, natural language processing, and computer vision, giving users the wow moment of accomplishing something that wasn't quite possible before. Along with choosing the right AI tool, you must also consider other factors that can affect your app's performance. These include the accuracy of your machine learning model, which can be affected by bias and variance, using the correct datasets for training, seamless user interaction, and resource optimization, among others. While building any intelligent app, it is also important to keep in mind that the AI in your app should solve a real problem, not exist just because it is cool. Thinking from the user's perspective will allow you to assess the importance of a particular problem. A great AI app will not just help users do something faster, but will enable them to do something they couldn't do before. With the growing popularity of intelligent apps and the need to speed up their development, many companies, from huge tech giants to startups, are providing AI solutions. In the future we will definitely see more developer tools coming to market, making AI in apps the norm.

6 most commonly used Java Machine learning libraries
5 ways artificial intelligence is upgrading software engineering
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence


How to earn $1m per year? Hint: Learn machine learning

Neil Aitken
01 Aug 2018
10 min read
Internet job portal Indeed.com links potential employers with people who are looking to take the next step in their careers. The proportion of job posts on their site relating to 'Data Science', a specific job in the AI category, is growing fast (see chart below). More broadly, artificial intelligence and machine learning skills, of which 'Data Scientist' is just one example, are in demand. No wonder it has been termed the sexiest job of the 21st century. Interest comes from an explosion of jobs in the field from big companies and startups, all of which are competing to come up with the best AI business and to earn the money that comes with software that automates tasks. The skills shortage associated with artificial intelligence represents an opportunity for any developer. There has never been a better time to consider whether reskilling or upskilling in AI could be a lucrative path for you.

[Chart: Indeed.com, proportion of job postings containing "Data Scientist" or "Data Science". Artificial Intelligence skills are increasingly in demand and create a real opportunity for those prepared to reskill or upskill. Source: Indeed]

The AI skills gap the market is experiencing comes from the difficulty of finding individuals with a competent mixture of the very disparate faculties that AI roles require. Artificial intelligence and its near equivalents, such as machine learning and neural networks, operate at the intersection of what have mostly been two very different disciplines: statistics and software development. In simple terms, they are half coding, half maths. Hamish Ogilvy, CEO of the AI-based internal search company Sajari, is all too familiar with the problem. He's on the front line, hiring AI developers. "The hardest part", says Ogilvy, "is that AI is pretty complex and the average developer/engineer does not have the background in maths/stats/science to actually understand what is happening. On the flip side the trouble with the stats/maths/science people is that they typically can't code, so finding people in that sweet spot that have both is pretty tough." He's right. The New York Times suggests that the pool of qualified talent is only 10,000 people worldwide. Those who do have jobs are typically happily ensconced, paid well, treated appropriately, and given no reason whatsoever to want to leave.

[Chart: Judged by dollar investments in the area alone, AI skills are worth developing for those wishing to stay current on technology skills.]

In fact, an instinct to develop AI skills will serve any technology employee well. No one can have escaped the many estimates, from reputable consultancies, suggesting that automation will replace up to 30% of jobs in the next 10 years. No job is safe. Every industry is touched by AI in some form, so any individual taking responsibility for managing their own skills could learn ML and AI to stay relevant. Even if you don't want to move out of your current job, learning ML will probably help you adapt better within your industry.

What is a typical AI job and what will it pay?

OpenAI, a world-class artificial intelligence research laboratory, recently revealed the salaries of some of its key data science employees. Those working in the AI field with a specialization can earn $300k to $500k in their first year out of university.
Experts in artificial intelligence now command salaries of up to $1m.

[Chart: The New York Times observes AI salaries. Source: The New York Times]

Indraneil Roy, an expert in AI and talent acquisition who works for Edge Networks, puts it this way when outlining the difficulties of hiring the right skills and explaining why wages in the field are so high: "The challenge is the quality of resources. As demand is high for this skill, we are already seeing candidates with fake experience and work pedigree not up to standards." The phenomenon is also causing a brain drain in universities. About a third of jobs in the AI field will go to someone with a Ph.D., and all of those are drawn from universities working on the discipline, often lured by the significant pay packages on offer. So, with huge demand and the universities drained, where will future AI employees come from?

3 ways to skill up to become an AI expert (and earn all that money)

There is still not a lot of agreed terminology, or even agreed job roles and responsibilities, in the sector. However, some things are clear. Those wishing to move into the field of AI must understand the conceptual thinking involved as a starting point, whether that understanding is gained on the job or through an informal or formal educational course. Specifically, most jobs in the specialty require a working knowledge of neural networks, data and analytics, and predictive analytics, with some basic programming and database skills. There are some great resources available online to train you up. Most, as you'd expect, are available on your smartphone, so there really is no excuse for not having a look.

1. Free online courses: machine learning, statistics, and probability

Hamish Ogilvy summed up the online education available in the area well. There are "so many free courses now on AI from Stanford," he said, "that people are able to educate themselves and make up for the failings of antiquated university courses. AI is just maths really," he says, "complex models and stats. So that's what people need grounding in to be successful." Microsoft offers free AI courses for technical professionals, and Microsoft's training materials are second to none. They're provided free and offer a shortcut to a credible understanding of the area, simply because they come from a technical behemoth. Importantly, Microsoft also has a list of AI services you can play with, again for free. For example, a natural language engine offers a facility for you to submit text from instant messaging conversations and establish the sentiment being felt by the writer. Practical experience of the tools, processes, and concepts involved will set you apart.

[Image: Check out Microsoft's free AI training program for developers.]

Google is taking a proactive stance on machine learning too. It sees ML's potential to improve efficiency in every industry and also offers free ML training courses on its site.
2. Take courses on AI/ML

Packt's machine learning courses, books, and videos: Packt is working towards a mission to help the world put software to work in new ways, through the delivery of effective learning and information services to IT professionals. It has published over 6,000 books and videos so far, providing IT professionals with the actionable knowledge they need to get the job done, whether that's specific learning on an emerging technology or optimizing key skills in more established tools. You can choose from a variety of Packt's books, videos, and courses for AI/ML. Here's a list of top ones:
- Artificial Intelligence by Example [Book]
- Artificial Intelligence for Big Data [Book]
- Learn Artificial Intelligence with TensorFlow [Video]
- Introduction to Artificial Intelligence with Java [Video]
- Advanced Artificial Intelligence Projects with Python [Video]
- Python Machine Learning - Second Edition [Book]
- Machine Learning with R - Second Edition [Book]

Coursera's machine learning courses: Coursera is a company that makes training courses for a variety of subjects available online. Taken from actual university course content and delivered with tests, videos, and training notes, all accessed online, each course is roughly equivalent to a university module, and students pick up an up-to-undergraduate level of understanding of the content involved. Coursera's courses are often cited as meritworthy and are recognized in the industry. Costs vary but are typically between $2k and $5k per course.

3. Learn by doing

Familiarize yourself with relevant frameworks and tools, including TensorFlow, Python, and Keras. TensorFlow from Google is the most used open source AI software library. You can use existing code in your experiments and experiment with neural networks in much the same way as you can with Microsoft's offerings. Python is a programming language written for a big data world. Its proponents will tell you that Python saves developers hundreds of lines of code, allowing you to tie together information and systems faster than ever before. Python is used extensively in ML and AI applications and should be at the top of your study list. Keras, a deep learning library, is similarly ubiquitous: it's a high-level neural network API designed to let you prototype your software as fast as possible. Finally, a lesser known but still valuable resource is Accord.NET, one final example of the many software elements you can engage with to train yourself up. The Accord.NET Framework will expose you to image libraries, natural learning, and real-time facial recognition.
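To make "learn by doing" concrete, here is a minimal Keras sketch that trains a tiny binary classifier end to end; the random arrays are made-up stand-ins for a real dataset:

```python
import numpy as np
from tensorflow import keras

# Made-up data: 256 samples with 20 features each, binary labels
x = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 2, size=(256,))

# A tiny fully connected classifier
model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(20,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(x, y, epochs=5, batch_size=32, verbose=0)
loss, accuracy = model.evaluate(x, y, verbose=0)
print(f"loss={loss:.3f} accuracy={accuracy:.3f}")
```

Swapping in a real dataset and reading the training curves is exactly the kind of hands-on grounding the free courses above aim to build.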
Indraneil says "Individual candidates need to spend more time doing relevant projects while in employment. Start ups involved in building products and platforms on AI seem to have better talent." The fact that there are not many AI specialists around is a bad sign There is a demand for employees with AI skills and an investment in relevant training may pay you well. Unfortunately, the underlying problem this situation reveals could be far worse than the problems experienced so far. Together, once found, all these AI scientists are going to automate millions of jobs, in every industry, in every country around the world. If Industry, Governments and Universities cannot train enough people to fill the roles being created by an evolving skills market, we may rightly be concerned to worry about how they will deal with retraining all those displaced by AI, for whom there may be no obvious replacement role available. 18 striking AI Trends to watch in 2018 – Part 1 DeepMind, Elon Musk, and others pledge not to build lethal AI Attention designers, Artificial Intelligence can now create realistic virtual textures


DevOps engineering and full-stack development – 2 sides of the same agile coin

Richard Gall
28 Jul 2015
5 min read
Two of the most talked-about and on-trend roles in tech dominated our Skill Up survey: DevOps engineers and full-stack developers. Even before we started exploring our data, we knew that both would feature heavily. Given the amount of time spent online arguing about DevOps and the merits and drawbacks of full-stack development, it's interesting to see exactly what it means to be a DevOps engineer or full-stack developer. From salary to tool use, both our Web Development and our SysAdmin and Security Salary and Skills Reports offer an insight into the professional lives of the people actually performing these roles every day.

The similarities between DevOps engineering and full-stack development

The similarities between the two roles are striking. Both DevOps engineering and full-stack development are having a considerable impact on the way technology is used and understood within organizations and businesses, which makes them particularly valuable. In SMEs, for example, DevOps engineers command almost the same amount of money as in enterprise. Considering the current economic climate, it's a clear signal of the value of DevOps practices in environments where flexibility and the ability to adapt to changing demands and expectations are crucial to growth. Full-stack developers also command the highest salaries in many industries. In consultancy, for example, full-stack developers earn significantly more than any other web development role. While this could suggest that organizations aren't yet willing to invest in (or simply don't need) in-house full-stack developers, it highlights that they are nevertheless willing to spend money on individuals with full-stack knowledge who are capable of delivering cutting-edge insight. However, just as we saw cloud consultancies dominate the tech consultancy market a few years ago, over time it's likely that full-stack development will become more and more established as a standard. DevOps engineers and full-stack developers share the same philosophical germ. They are symptoms of a growing business demand for greater agility and flexibility, and hint at a trend towards greater generalization in the skillset of technical professionals.

"part of the thrill of #devops to me is how there's no true agreement about what it is. it's like watching LOST all over again" — jon devops hendren (@devops) May 18, 2015

Full-stack developers are using DevOps tools

I've always seen the two roles as manifestations of similar ideas in different technical areas. However, when you look at the data we've collected in our survey, alongside some wider research, the relationship between the DevOps engineer and the full-stack developer might be more than purely conceptual. 'Full-stack' and 'DevOps' are both terms that blur the lines between developer and engineer, and our survey reveals an intriguing form of cross-pollination: full-stack developers are picking up technologies more commonly used for deployment and automation. Docker and Vagrant were the most notable, highlighting the impact of containerization and virtualization on web development, but we also found a number of references to the Microsoft automation tool PowerShell, a distinctly DevOps-esque tool if ever there was one. Perhaps there's a danger of overstating my point. Surely we shouldn't be surprised if web developers are using these tools; it's not that strange, right?
Maybe, but the fact that tools such as these are being used by web developers in their day-to-day work suggests that they are no longer simply expected to develop: they also need to deploy and configure their projects. Indeed, it's worth noting that across all our web development respondents, a large number plan on learning Docker over the next 12 months.

DevOps engineers use a huge range of tools

DevOps engineers were even more eclectic in their tool usage than full-stack developers. Python is the language of choice and Puppet the go-to configuration management tool, but web tools such as JavaScript and PHP are also being used. References to Flask, for example, the Python microframework, emphasize the way DevOps engineers keep an eye on web development while they're automating your infrastructure. Taken alone, these responses might not fully evidence the relationship between DevOps engineers and full-stack developers. However, there are jobs out there asking for a combination of both skillsets. One, posted by a recruiter working for a nameless 'creative media house' in London, was looking for someone to become 'a key member of multi-party cloud research projects, helping to bring a microservices-based video automation system to life, integrate development and developed systems into onside and global infrastructure'. The tools being asked for were very varied indeed: from a high-level language such as JavaScript, to scripting languages such as Bash, Python, and Perl, to continuous integration tools, configuration management tools, and containerization technologies. Whoever eventually gets the job certainly deserves to be called a polyglot.

Blurring the line between full-stack and DevOps

A further indication of the blurred line between engineers and developers can be found in this article from computing.co.uk. It's an interesting tale of how working practices develop according to necessity, and how methodologies and ideas interact with the practical details of a given situation. It tells the story of how the Washington Post went about building its submission platform, and how the way the project was resourced and managed changed according to certain pressures, internal and external. The title might actually be misleading: if you read it, it's not so much that DevOps necessitates full-stack development, more that each thing grows out of the next. It might even be said that the reverse is true, that full-stack development necessitates DevOps thinking. The relationship between DevOps and full-stack development gives a real indication of the state of the tech world in 2015. Within a tech landscape of increasing complexity and cross-pollination, there are going to be opportunities for developers and engineers to significantly increase their value as technical professionals. It's simply a question of learning more, and of being open to new challenges and ideas about how to work effectively. It probably won't be easy, but it might just be a fun journey.


Top 5 programming languages for crunching Big Data effectively

Amey Varangaonkar
04 Apr 2018
8 min read
One of the most important decisions Big Data professionals have to make, especially those who are new to the scene or just starting out, is choosing the best programming language for big data manipulation and analysis. Understanding the Big Data problem and framing the architecture to solve it is not quite enough these days; the execution needs to be perfect as well, and choosing the right language goes a long way.

The best languages for big data

In this article, we look at 5 of the most popular, and highly effective, programming languages for developing Big Data solutions.

Scala

A beautiful crossover of the object-oriented and functional programming paradigms, Scala is fast and robust, and a popular choice of language for many Big Data professionals. The fact that two of the most popular Big Data processing frameworks, Apache Spark and Apache Kafka, have been built on top of Scala tells you everything you need to know about its power. Scala runs on the JVM, which means code written in Scala can be easily used within a Java-based Big Data ecosystem. One significant factor that differentiates Scala from Java, though, is that Scala is a lot less verbose in comparison: you can write hundreds of lines of confusing-looking Java code in fewer than 15 lines of Scala. One negative aspect of Scala, though, is its steep learning curve compared to languages like Go and Python, and this may put off beginners looking to use it.

Why use Scala for big data?
- Fast and robust
- Suitable for working with Big Data tools like Apache Spark for distributed processing
- JVM compliant, so it can be used within a Java-based ecosystem

Python

Python was declared one of the fastest growing programming languages in 2018, as per the recently held Stack Overflow Developer Survey. Its general-purpose nature means it can be used across a broad spectrum of use cases, and Big Data programming is one major area of application. Many libraries for data analysis and manipulation which are increasingly being used within Big Data frameworks to clean and manipulate large chunks of data, such as pandas, NumPy, and SciPy, are Python-based. Not just that, most popular machine learning and deep learning frameworks, such as scikit-learn and TensorFlow, are also written in Python and are finding increasing application within the Big Data ecosystem. One drawback of using Python, and a reason why it is not yet a first-class citizen in Big Data programming, is that it's slow. Although Python is very easy to use, Big Data professionals have found systems built with languages such as Java or Scala faster and more robust than systems built with Python. However, Python makes up for this limitation with other qualities. As Python is primarily a scripting language, interactive coding and development of analytical solutions for Big Data becomes very easy. Python can integrate effortlessly with existing Big Data frameworks such as Apache Hadoop and Apache Spark, allowing you to perform predictive analytics at scale without any problem.

Why use Python for big data?
- General-purpose
- Rich libraries for data analysis and machine learning
- Easy to use
- Supports iterative development
- Rich integration with Big Data tools
- Interactive computing through Jupyter notebooks
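As a small illustration of that Spark integration, here is a PySpark sketch of the classic word count; it assumes a working Spark installation, and the input path is a placeholder:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("word_count").getOrCreate()

# Classic word count over a text file; the path is a placeholder
lines = spark.read.text("hdfs:///data/logs.txt")
counts = (lines.rdd
          .flatMap(lambda row: row.value.split())
          .map(lambda word: (word, 1))
          .reduceByKey(lambda a, b: a + b))

for word, n in counts.take(10):
    print(word, n)
spark.stop()
```

The Python code stays short and readable while Spark handles the distribution of the work across the cluster.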
R

It won't come as a surprise to many that those who love statistics love R. The 'language of statistics', as it is popularly called, is used to build data models for effective and accurate data analysis. Powered by a large repository of packages (CRAN, the Comprehensive R Archive Network), R gives you just about every tool you need for Big Data processing, from analysis to data visualization. R can be integrated seamlessly with Apache Hadoop and Apache Spark, among other popular frameworks, for Big Data processing and analytics. One issue with using R as a programming language for Big Data is that it is not very general-purpose: code written in R is not production-deployable and generally has to be translated into some other programming language such as Python or Java. That said, if your goal is only to build statistical models for Big Data analytics, R is an option you should definitely consider.

Why use R for big data?
- Built for data science
- Support for Hadoop and Spark
- Strong statistical modeling and visualization capabilities
- Support for Jupyter notebooks

Java

Then there's always good old Java. Some of the traditional Big Data frameworks, such as Apache Hadoop and all the tools within its ecosystem, are Java-based and still in use today in many enterprises. Not to mention the fact that Java is the most stable and production-ready language among all the languages we have discussed so far. Using Java to develop your Big Data applications gives you the ability to use a large ecosystem of tools and libraries for interoperability, monitoring, and much more, most of which have already been tried and tested. One major drawback of Java is its verbosity: having to write hundreds of lines of code for a task that takes barely 15-20 lines in Python or Scala can put off many budding programmers. However, the introduction of lambda functions in Java 8 does make life quite a bit easier. Java also does not support iterative development, unlike newer languages like Python, and this is an area of focus for future Java releases. Despite the flaws, Java remains a strong contender for the preferred Big Data programming language because of its history and the continued reliance on traditional Big Data tools and frameworks.

Why use Java for big data?
- Traditional Big Data tools and frameworks are written in Java
- Stable and production-ready
- Large ecosystem of tried and tested tools and libraries

Go

Last but not least, there's Go, one of the fastest rising programming languages in recent times. Designed by a group of Google engineers who were frustrated with C++, we think Go deserves a shout in this list simply because it powers so many tools used in Big Data infrastructure, including Kubernetes and Docker. Go is fast, easy to learn, and fairly easy to develop and deploy applications with. More importantly, as businesses look at building data analysis systems that can operate at scale, Go-based systems are being used to integrate machine learning and parallel processing of data. It is also possible to interface other languages with Go-based systems with relative ease.

Why use Go for big data?
- Fast, easy to use
- Many tools used in Big Data infrastructure are Go-based
- Efficient distributed computing

There are a few other languages you might want to consider, Julia, SAS, and MATLAB being some major ones, which are useful in their own right.
However, when compared to the languages we talked about above, we thought they fell a bit short in some aspects, be it speed, efficiency, ease of use, documentation, or community support, among other things. Let's take a quick look at a comparison of all the languages we discussed above. Note that the ✓ symbol marks the best possible language or languages on each criterion, to help you make an informed decision. This is just our view, and that's not to say that the other languages are any worse!

Criterion                          | Scala | Python |  R  | Java | Go
Speed                              |   ✓   |        |     |  ✓   |  ✓
Ease of use                        |       |   ✓    |  ✓  |      |  ✓
Quick learning curve               |       |   ✓    |     |      |  ✓
Data analysis capability           |   ✓   |   ✓    |  ✓  |      |
General-purpose                    |   ✓   |   ✓    |     |  ✓   |  ✓
Big Data support                   |   ✓   |   ✓    |  ✓  |  ✓   |  ✓
Interfacing with other languages   |   ✓   |   ✓    |     |      |  ✓
Production-ready                   |   ✓   |        |     |  ✓   |  ✓

So... which language should you choose? To answer the question in short: it all depends on the use case you want to develop. If your focus is hardcore data analysis involving a lot of statistical computing, R would be your go-to language. On the other hand, if you want to develop streaming applications for your Big Data, Scala is a preferable choice. If you wish to use machine learning to leverage your Big Data and build predictive models, Python will come to your rescue. Lastly, if you plan to build Big Data solutions using just the traditionally available tools, Java is the language for you. You also have the option of combining the power of two languages to get a more efficient and powerful solution. For example, you can train your machine learning model in Python and deploy it on Spark in distributed mode. Ultimately, it all depends on how efficiently your solution can function, and more importantly, how fast and accurate it is. Which language do you prefer for crunching your Big Data? Do let us know!


Python web development: Django vs Flask in 2018

Aaron Lazar
28 May 2018
7 min read
A colleague of mine wrote an article over two years ago comparing the two top Python web frameworks, Django and Flask. It's 2018 now, and a lot has changed in the IT world. A couple of frameworks have emerged or gained popularity in the last 3 years, like Bottle or CherryPy, for example. However, Django and Flask have stood their ground and remain the top two Python frameworks. Moreover, there have been some major breakthroughs in web application architecture, like the rise of microservices, which has in turn pushed the growth of newer architectures like serverless and cloud-native. I thought it would be a good idea to present a more modern comparison of these two frameworks, to help you make an informed decision on which one to choose for your application development. Before we dive into ripping these frameworks apart, let's briefly go over the factors we'll be considering while evaluating them. Here's what I have in mind, in no particular order:
- Ease of use
- Popularity
- Community support
- Job market
- Performance
- Modern architecture support

Ease of use

I like to cover this first, because I know it's really important for developers who are just starting out to assess the learning curve before they attempt to scale it. When I talk about ease of use, I mean how easy it is to get started with the tool in your day-to-day projects. Flask, like its webpage, is a very simple tool to learn, simply because it's built to be simple. Moreover, the framework is unopinionated, meaning it will let you implement things the way you choose, without throwing a fuss. This is really important when you're starting out: you don't want to run into too many issues that will break your confidence as a developer. Django is a great framework to learn too, but, while several Python developers will disagree with me, I would say Django is a pretty complex framework, especially for a newbie. That is not all bad: when you're building a large project, you want to be the one holding the reins. If you're starting out with some basic projects, though, it may be wise not to choose Django. The way I see it, learning Flask first will allow you to learn Django much faster.
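To make the simplicity claim concrete, this is the canonical Flask hello-world, essentially the quickstart example from Flask's own documentation; a comparable Django app needs a generated project scaffold before you write your first view:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

if __name__ == "__main__":
    app.run(debug=True)  # development server only
```

One file, one route, and you have a running web application.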
Popularity

Both frameworks are quite popular, with Django at around 34k stars on GitHub and Flask having a slight edge at 36k. If you take a look at Google Trends, both tools follow a pretty similar trend, with Django's search volume much higher, owing to its longer existence.

[Chart: Google Trends comparison of Django and Flask. Source: SEMrush]

As mentioned before, Flask is more popular among beginners and those who want to build basic websites easily, while Django is more popular among professionals with years of experience building robust websites.

Community support and documentation

In terms of community support, we're looking at how involved the community is in developing the tool and providing support to those who need it. This is quite important for someone who's starting out with a tool, or, for that matter, when a new version releases and you need to keep yourself up to date. Django features 170k tags on Stack Overflow, over 7 times that of Flask, which stands at 21k. Although Django is the clear winner in terms of numbers, both mailing lists are quite active and you can get all the help you need quite easily. When it comes to documentation, Django has some solid documentation that can help you get up and running in no time. Flask has good documentation too, but you usually have to do some digging to find what you're looking for.

Job scene

Jobs are really important, especially if you're looking for a corporate one. It's quite natural that the organization hiring you will already be working with a particular stack and will expect you to have those skills before you step in. Django records around 2k jobs on Indeed in the USA, while Flask records exactly half that amount. A couple of years ago the situation was pretty much the same: Django was a prime requirement, while Flask had just started gaining popularity. You'll find a comment stating that "Picking up Flask might be a tad easier than Django, but for Django you will have more job openings." Itjobswatch.uk lists Django as the 2nd most needed skill for a Python developer, whereas Flask is way down at 20.

[Chart: Source: itjobswatch.uk]

Clearly Django is in more demand than Flask. However, if you are an independent developer, you're still free to choose the framework you wish to work with.

Performance

Honestly speaking, Flask is a microframework, which means it delivers much better performance in terms of speed. This is also because in Flask you could write 10k lines of code for something that would take 24k lines in Django.

[Chart: Response time comparison for data from a remote server: Django vs Flask]

When loading a response from the server and returning it, both tools perform much the same, with Flask holding a slight edge over Django.

[Chart: Load time comparison from database with ORM: Django vs Flask]

When loading data from the database, however, the gap between the tools is quite large, with Flask being much more efficient. When we talk about performance, we also need to consider the power each framework gives you to build large apps. Django is the clear winner here, as it allows you to build massive, enterprise-grade applications; it serves as a full-stack framework which can easily be integrated with JavaScript to build great applications. Flask, on the other hand, is not suitable for very large applications. The JetBrains Python Developer Survey revealed that Django was the more preferred option among respondents.

[Chart: JetBrains Python Developer Survey 2017]

Modern architecture support

The monolith has been broken and microservices have risen. What's interesting is that although applications are huge, they're now composed of smaller services working together to make up the actual application. While you might expect Django to be a great framework for building microservices, it turns out that Flask serves much better, thanks to its lightweight architecture and simplicity. While working on a huge enterprise application, you might find Flask interwoven wherever a light framework works best. Here's the story of one company that ditched Django for microservices. I'm not going to score these tools, because they're both awesome in their own right. The difference arises when you need to choose one for your project, and it's quite evident that Flask should be your choice when you're working on a small project, or a smaller application built into a larger one: maybe a blog, a small website, or a web service.
But if you're on the A team, making a super awesome website for, say, Facebook or a billion-dollar enterprise, then instead of going the Django Unchained route, choose Django with a hint of Flask added in, for all the right reasons. :) Django hit version 2.0 last year, while Flask hit version 1.0 last month, and there are some great resources available to get you up and running with both. So what are you waiting for? Go build that website!

Why functional programming in Python matters
Should you move to Python 3.7
Why is Python so good for AI and Machine Learning?


Best game engines for Artificial Intelligence game development

Natasha Mathur
24 Aug 2018
8 min read
"A computer would deserve to be called intelligent if it could deceive a human into believing that it was human" — Alan Turing It is quite common to find games which are initially exciting but take a boring turn eventually, making you want to quit the game. Then, there are games which are too difficult to hold your interest and you end up quitting in the beginning phase itself.  These are also two of the most common problems that game developers face when building games. This is where AI comes to your rescue, to spice things up. Why use Artificial Intelligence in games? The major reason for using AI in games is to provide a challenging opponent to make the game more fun to play. But, AI in the gaming industry is not a recent news. The gaming world has been leveraging the wonders of AI for a long time now. One of the first examples of AI is the computerized game, Nim, was created back in 1951. Other games such as Façade, Black & White, The Sims, Versu, and F.E.A.R. are all great AI games, that hit the market long time back. Even modern-day games like Need for Speed, Civilization, or Counter-Strike use AI. AI controls a lot of elements in games and is usually behind characters such as enemy creeps, neutral merchants, or even animals. AI in games is used to enable the non-human characters (NPCs) with responsive, adaptive, and intelligent behaviors similar to human-like intelligence. AI helps make NPCs seem intelligent as they are able to actively change their level of skills based on the person playing the game. This makes the game seem more personalized to the gamer. Playing video games is fun, and developing these games is equally fun. There are different game engines on the market to help with the development of games. A game engine is a software that provides game creators with the necessary set of features to build games quickly and efficiently. Let’s have a look at the top game engines for Artificial Intelligence game development. Unity3D Developer:  Unity Technologies Release Date: June 8, 2005 Unity is a cross-platform game engine which provides users with the ability to create games in both 2D and 3D. It is extremely popular and loved by game designers from large and small studios alike. Apart from 3D, and 2D games, it also helps with simulations for desktops, laptops, home consoles, smart TVs, and mobile devices. Key AI features: Unity offers a machine learning agents toolkit to the game developers, which help them include AI agents within games. As per the Unity team, “machine Learning Agents Toolkit (ML-Agents) is an open-source Unity plugin that enables games and simulations to serve as environments for training intelligent agents”. Unity AI - Unity 3D Artificial Intelligence  The ML-Agents SDK transforms games and simulations created using the Unity Editor into environments for training intelligent agents. These ML agents are trained using deep Reinforcement Learning, imitation learning, neuroevolution, or other machine learning methods via Python APIs. There’s also a TensorFlow based algorithm provided by Unity to allow game developers to easily train intelligent agents for 2D, 3D, and VR/AR games. These trained agents are then used for controlling the NPC behavior within games. The ML-Agents toolkit is beneficial for both game developers and AI researchers. Apart from this, Unity3D is easy to use and learn, compatible with every game platform and provides great community support. 
Learning resources:
- Unity AI Programming Essentials
- Unity 2017 Game AI Programming - Third Edition
- Unity 5.x Game AI Programming Cookbook

Unreal Engine 4

Developer: Epic Games
Release date: May 1998

Unreal Engine is widely used among developers all around the world. It is a collection of integrated tools for game developers which helps them build games, simulations, and visualizations. It is also among the top game engines used to develop high-end AAA titles; Gears of War, Batman: Arkham Asylum, and Mass Effect are some of the popular games developed using Unreal Engine.

Key AI features: Unreal Engine uses a set of tools to add AI capabilities to a game, including the Behavior Tree, Navigation Component, Blackboard Asset, Enumeration, Target Point, AI Controller, and Navigation Volumes:
- The Behavior Tree creates different states and the logic behind the AI.
- The Navigation Component handles movement for the AI.
- The Blackboard Asset stores information and acts as the local variable store for the AI.
- Enumeration creates states and allows alternating between them.
- The Target Point creates a basic path-node form.
- The AI Controller and Character tools handle communication between the world and the controlled pawn for the AI.
- Navigation Volumes create a navigation mesh in the environment to allow easy pathfinding for the AI.

There are also features such as Blueprint Visual Scripting, which can be converted into performant C++ code, AIComponents, and the Environment Query System (EQS), which gives agents the ability to perceive their environment. Apart from its AI capabilities, Unreal Engine offers the largest community support, with lifetime hours of video tutorials and assets. It is also compatible with a variety of operating platforms such as iOS, Android, Linux, Mac, Windows, and most game consoles. That said, certain built-in tools in Unreal Engine can be hard for beginners to learn.

Learning resources:
- Unreal Engine 4 AI Programming Essentials

CryEngine 3

Developer: Crytek
Release date: May 2, 2002

CryEngine is a powerful game development platform that comes packed with a set of tools and features to create world-class gaming experiences. It is the game engine behind games such as Sniper: Ghost Warrior 2 and SNOW.

Key AI features: CryEngine comes with an AI system designed for the easy creation of custom AI actors, flexible enough to handle a large set of complex and varied worlds. The core of CryEngine's AI system is based on a lot of scripting, and different elements within this system add AI capabilities to the NPCs in a game:
- AI Actions allow developers to script AI behaviors without writing new code.
- The AI Actors Logger can log AI events and signals to files.
- AI Control Objects use AI objects to control AI entities/actors.
- AI Debug Draw is the primary tool offered by CryEngine for inspecting the current state of the AI system and AI actors.
- The AI Debugger registers the inputs that AI agents receive and the decisions they make in real time during a game session.
- The AI Sequence system works in parallel to the flow graph (FG) and AI systems to simplify and group AI control.

CryEngine offers the easiest AI coding of any tech currently on the market. However, since CryEngine is relatively new compared to other game engines, it does not have a very flourishing community yet, and despite the easy AI coding, its overall learning curve is high.
Panda3D

Developer: Disney Interactive (until 2010), Walt Disney Imagineering, Carnegie Mellon University
Release Date: 2002

Panda3D is a game engine: a framework for 3D rendering and game development for Python and C++ programs. It includes graphics, audio, I/O, collision detection, and other abilities for the creation of 3D games.

Key AI features: Panda3D comes packed with an AI library named PandAI v1.0. PandAI is an AI library which provides 'artificially intelligent' behavior for NPCs (non-player characters) in games. The PandAI library offers functionality for steering behaviors (Seek, Flee, Pursue, Evade, Wander, Flock, Obstacle Avoidance, and Path Following) and pathfinding (which helps NPCs intelligently avoid obstacles via the shortest path). This AI library is composed of several different entities. For instance, there's a main AIWorld class to update any AICharacters added to it. Each AICharacter has its own AIBehavior object for tracking all the position and rotation updates, and each AIBehavior object has the functionality to implement all the steering and pathfinding behaviors. These features within Panda3D give you the ability to call the respective functions.

Panda3D is a relatively simple game engine which lets you add AI capabilities to your games. Its community is not as robust as those of the other engines, but it has a low learning curve.

AI is a fantastic tool which makes the entities in games seem more organic, alive, and real. The main goal here is not to copy the entire human thought process but simply to sell the illusion of life. These game engines provide developers with the entire framework needed to add AI capabilities to their games, and the whole game development process becomes more fun, as there is no need to create all the systems, including physics, graphics, and AI, from scratch. Now, if you're wondering which of the four engines mentioned in this article is the best for AI game development, there is no single answer: selecting the best AI game engine depends on the requirements of your project.

Game Engine Wars: Unity vs Unreal Engine
Unity switches to WebAssembly as the output format for the Unity WebGL build target
Developing Games Using AI
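As a closing example for this article, the Seek steering behavior listed in the PandAI section above reduces to a few lines of vector math. This sketch is engine-agnostic and does not use Panda3D's actual API; it just shows the core idea behind steering:

```python
import math

def seek(npc_pos, npc_vel, target_pos, max_speed=5.0):
    """Classic 'seek' steering: compute a force that turns velocity toward the target."""
    # Desired velocity points straight at the target, at full speed.
    dx, dy = target_pos[0] - npc_pos[0], target_pos[1] - npc_pos[1]
    dist = math.hypot(dx, dy) or 1e-9  # guard against division by zero
    desired = (dx / dist * max_speed, dy / dist * max_speed)
    # The steering force is the difference between desired and current velocity.
    return (desired[0] - npc_vel[0], desired[1] - npc_vel[1])

# One update step: an NPC at the origin, drifting right, steers toward (10, 5).
steering = seek((0.0, 0.0), (1.0, 0.0), (10.0, 5.0))
print(steering)
```

Applied every frame (and capped to a maximum force), this single function produces the smooth chasing motion that libraries like PandAI expose as a ready-made behavior.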
5 ways artificial intelligence is upgrading software engineering

Melisha Dsouza
02 Sep 2018
8 min read
47% of digitally mature organizations, or those that have advanced digital practices, said they have a defined AI strategy (Source: Adobe). It is estimated that AI-enabled tools alone will generate $2.9 trillion in business value by 2021. 80% of enterprises are smartly investing in AI. The stats speak for themselves: AI clearly follows the motto "go big or go home".

This explosive growth of AI in different sectors of technology is also beginning to show its colors in software development. Shawn Drost, co-founder and lead instructor of coding boot camp 'Hack Reactor', says that AI still has a long way to go and is only impacting the workflow of a small portion of software engineers on a minority of projects right now. AI promises to change how organizations conduct business and to make applications smarter. It is only logical, then, that software development, i.e., the way we build apps, will be impacted by AI as well. Forrester Research recently surveyed 25 application development and delivery (AD&D) teams, and respondents said AI will improve planning, development, and especially testing. We can expect better software than what traditional environments alone produce.

5 areas of Software Engineering AI will transform

The five major spheres of software development (software design, software testing, GUI testing, strategic decision-making, and automated code generation) are all areas where AI can help. A majority of the interest in applying AI to software development is already seen in automated testing and bug detection tools. Next in line are software design precepts and decision-making strategies, and finally automating software deployment pipelines. Let's take an in-depth look into the areas of high and medium interest of software engineering impacted by AI, according to the Forrester Research report.

Source: Forbes.com

#1 Software design

In software engineering, planning and designing a project from scratch requires designers to apply their specialized learning and experience to come up with alternative solutions before settling on a definite one. A designer begins with a vision of the solution and then iterates back and forth on design changes until reaching the desired solution. Making the right design choices at each stage is a tedious and error-prone activity for designers.

Along this line, a few AI developments have demonstrated the advantages of enhancing traditional methods with intelligent agents. The idea is that the agent behaves like a personal assistant to the user, offering timely guidance on how to carry out design projects. For instance, take the example of AIDA (the Artificial Intelligence Design Assistant), deployed by Bookmark (a website building platform). Using AI, AIDA understands a user's needs and desires and uses this knowledge to create an appropriate website for the user. It makes selections from millions of combinations to create a website style, focus, image, and more that are customized for the user. In about two minutes, AIDA designs the first version of the website, and from that point it becomes a drag-and-drop operation. You can get a detailed overview of this tool on designshack.

#2 Software testing

Applications interact with each other through countless APIs. They leverage legacy systems and grow in complexity every day. This increase in complexity brings its fair share of challenges, many of which can be overcome by machine-based intelligence.
AI tools can be used to generate test data, check data validity, improve and analyze test coverage, and manage testing. Artificial intelligence, trained right, can help ensure the testing performed is error-free. Testers freed from repetitive manual tests thus have more time to create new automated software tests with sophisticated features. Also, if software tests are repeated every time source code is modified, repeating those tests can be not only time-consuming but extremely costly. AI comes to the rescue once again by automating the testing for you! With AI-automated testing, one can increase the overall scope of tests, leading to an overall improvement in software quality.

Take, for instance, the Functionize tool. It enables users to test fast and release faster with AI-enabled cloud testing. Users just have to type a test plan in English, and it will automatically be converted into a functional test case. The tool allows one to elastically scale functional, load, and performance tests across every browser and device in the cloud. It also includes self-healing tests that update autonomously in real time. SapFix is another AI hybrid tool, deployed by Facebook, which can automatically generate fixes for specific bugs identified by 'Sapienz'. It then proposes these fixes to engineers for approval and deployment to production.

#3 GUI testing

Graphical User Interfaces (GUIs) have become important in interacting with today's software. They are increasingly being used in critical systems, and testing them is necessary to avert failures. With very few tools and techniques available to aid the testing process, testing GUIs is difficult. Currently used GUI testing methods are ad hoc: they require the test designer to perform humongous tasks like manually developing test cases, identifying the conditions to check during test execution, determining when to check these conditions, and finally evaluating whether the GUI software is adequately tested. Phew! Now that is a lot of work. And don't forget that if the GUI is modified after being tested, the test designer must change the test suite and perform re-testing. As a result, GUI testing today is resource-intensive, and it is difficult to determine if the testing is adequate.

Applitools is a GUI testing tool empowered by AI. The Applitools Eyes SDK automatically tests whether visual code is functioning properly or not. Applitools enables users to test their visual code just as thoroughly as their functional UI code, to ensure that the visual look of the application is as you expect it to be. Users can test how their application looks across multiple screen layouts to ensure they all fit the design. It allows users to keep track of both the behavior of the web page and its look. Users can test everything they develop, from the functional behavior of their application to its visual appearance.

#4 Using Artificial Intelligence in Strategic Decision-Making

Normally, developers have to go through a long process to decide what features to include in a product. However, a machine learning solution trained on business factors and past development projects can analyze the performance of existing applications and help both teams of engineers and business stakeholders, like project managers, find solutions that maximize impact and cut risk. Normally, the transformation of business requirements into technology specifications requires a significant timeline for planning.
Machine learning can help software development companies speed up the process, deliver the product in less time, and increase revenue within a short span. The AI Canvas is a well-known tool for strategic decision-making. The canvas helps identify the key questions and feasibility challenges associated with building and deploying machine learning models in the enterprise. The AI Canvas is a simple tool that helps enterprises organize what they need to know into seven categories, namely Prediction, Judgement, Action, Outcome, Input, Training, and Feedback. Clarifying these seven factors for each critical decision throughout the organization will help in identifying opportunities for AI to either reduce costs or enhance performance.

#5 Automatic Code generation/Intelligent Programming Assistants

Coding a huge project from scratch is often labour-intensive and time-consuming. An intelligent AI programming assistant can reduce the workload to a great extent. To combat the issues of time and money constraints, researchers have tried to build systems that can write code before, but the problem is that these methods aren't that good with ambiguity. Hence, a lot of detail is needed about what the target program aims to do, and writing down these details can be as much work as just writing the code.

With AI, the story can be flipped. 'Bayou', an AI-based application, is an intelligent programming assistant. It began as an initiative aimed at extracting knowledge from online source code repositories like GitHub. Users can try it out at askbayou.com. Bayou follows a method called neural sketch learning. It trains an artificial neural network to recognize high-level patterns in hundreds of thousands of Java programs. It does this by creating a "sketch" for each program it reads and then associating this sketch with the "intent" that lies behind the program. This DARPA initiative aims at making programming easier and less error-prone. Sounds intriguing? Now that you know how this tool works, why not try it for yourself on i-programmer.info.

Summing it all up

Software engineering has seen massive transformation over the past few years. AI and software intelligence tools aim to make software development easier and more reliable. According to a Forrester Research report on AI's impact on software development, automated testing and bug detection tools use AI the most to improve software development. It will be interesting to see future developments in software engineering empowered by AI. I'm expecting faster, more efficient, more effective, and less costly software development cycles, while engineers and other development personnel focus on bettering their skills to make advanced use of AI in their processes.

Implementing Software Engineering Best Practices and Techniques with Apache Maven
Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018
15 million jobs in Britain at stake with AI robots set to replace humans at workforce
5 types of deep transfer learning

Bhagyashree R
25 Nov 2018
5 min read
Transfer learning is a method of reusing a model or knowledge for another related task. Transfer learning is sometimes also considered an extension of existing ML algorithms. Extensive research and work is being done in the context of transfer learning and on understanding how knowledge can be transferred among tasks. The Neural Information Processing Systems (NIPS) 1995 workshop, Learning to Learn: Knowledge Consolidation and Transfer in Inductive Systems, is believed to have provided the initial motivation for research in this field.

The literature on transfer learning has gone through a lot of iterations, and the terms associated with it have been used loosely and often interchangeably. Hence, it is sometimes confusing to differentiate between transfer learning, domain adaptation, and multitask learning. Rest assured, these are all related and try to solve similar problems. In this article, we will look into the five types of deep transfer learning to get more clarity on how they differ from each other.

This article is an excerpt from a book written by Dipanjan Sarkar, Raghav Bali, and Tamoghna Ghosh titled Hands-On Transfer Learning with Python. The book covers deep learning and transfer learning in detail, and focuses on real-world examples and research problems using TensorFlow, Keras, and the Python ecosystem, with hands-on examples.

#1 Domain adaptation

Domain adaptation is usually referred to in scenarios where the marginal probabilities between the source and target domains are different, that is, P(Xs) ≠ P(Xt). There is an inherent shift or drift in the data distribution of the source and target domains that requires tweaks to transfer the learning. For instance, a corpus of movie reviews labeled as positive or negative would be different from a corpus of product-review sentiments. A classifier trained on movie-review sentiment would see a different distribution if utilized to classify product reviews. Thus, domain adaptation techniques are utilized in transfer learning in these scenarios.

#2 Domain confusion

Different layers in a deep learning network capture different sets of features. We can utilize this fact to learn domain-invariant features and improve their transferability across domains. Instead of allowing the model to learn any representation, we nudge the representations of both domains to be as similar as possible. This can be achieved by applying certain preprocessing steps directly to the representations themselves. Some of these have been discussed by Baochen Sun, Jiashi Feng, and Kate Saenko in their paper Return of Frustratingly Easy Domain Adaptation. This nudge toward similarity of representation has also been presented by Ganin et al. in their paper, Domain-Adversarial Training of Neural Networks. The basic idea behind this technique is to add another objective to the source model to encourage similarity by confusing the domain itself, hence domain confusion.

#3 Multitask learning

Multitask learning is a slightly different flavor of the transfer learning world. In the case of multitask learning, several tasks are learned simultaneously, without distinction between the source and targets. In this case, the learner receives information about multiple tasks at once, as compared to transfer learning, where the learner initially has no idea about the target task.
This is depicted in the diagram captioned "Multitask learning: Learner receives information from all tasks simultaneously".

#4 One-shot learning

Deep learning systems are data-hungry by nature; they need many training examples to learn the weights. This is one of the limiting aspects of deep neural networks, though such is not the case with human learning. For instance, once a child is shown what an apple looks like, they can easily identify a different variety of apple (with one or a few training examples); this is not the case with ML and deep learning algorithms. One-shot learning is a variant of transfer learning where we try to infer the required output based on just one or a few training examples. This is essentially helpful in real-world scenarios where it is not possible to have labeled data for every possible class (if it is a classification task) and in scenarios where new classes can be added often. The landmark paper by Fei-Fei and their co-authors, One Shot Learning of Object Categories, is supposedly what coined the term one-shot learning and started research in this subfield. The paper presented a variation on a Bayesian framework for representation learning for object categorization. This approach has since been improved upon and applied using deep learning systems.

#5 Zero-shot learning

Zero-shot learning is another extreme variant of transfer learning, which relies on no labeled examples to learn a task. This might sound unbelievable, especially when learning using examples is what most supervised learning algorithms are about. Zero-data learning, or zero-shot learning, methods make clever adjustments during the training stage itself to exploit additional information to understand unseen data. In their book on deep learning, Goodfellow and their co-authors present zero-shot learning as a scenario where three variables are learned: the traditional input variable x, the traditional output variable y, and an additional random variable that describes the task, T. The model is thus trained to learn the conditional probability distribution P(y | x, T). Zero-shot learning comes in handy in scenarios such as machine translation, where we may not even have labels in the target language.

In this article we learned about the five types of deep transfer learning: domain adaptation, domain confusion, multitask learning, one-shot learning, and zero-shot learning. If you found this post useful, do check out the book, Hands-On Transfer Learning with Python, which covers deep learning and transfer learning in detail. It also focuses on real-world examples and research problems using TensorFlow, Keras, and the Python ecosystem, with hands-on examples.

CMU students propose a competitive reinforcement learning approach based on A3C using visual transfer between Atari games
What is Meta Learning?
Is the machine learning process similar to how humans learn?
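To round off the article, here is a minimal fine-tuning sketch in Keras (which the excerpted book uses) showing the most common practical flavor of transfer learning: reusing a network pre-trained on a source task (ImageNet) for a new target task. The five-class head and the 160x160 input size are illustrative assumptions, not values from the book:

```python
import tensorflow as tf

# Reuse convolutional features learned on the source domain (ImageNet).
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,      # drop the original 1000-class ImageNet head
    weights="imagenet",
    pooling="avg",
)
base.trainable = False      # freeze the transferred knowledge

# Attach a fresh classification head for the target task (5 classes, assumed).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(5, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
# model.fit(target_images, target_labels, epochs=5)  # train only the new head
```

Freezing the base and training only the new head is what makes this work with small target datasets: the millions of pre-trained weights stay fixed, and only a few thousand new parameters are learned.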
7 Best Practices for Logging in Node.js

Guest Contributor
05 Mar 2019
5 min read
Node.js is one of the easiest platforms for prototyping and agile development. It's used by large companies looking to scale their products quickly. However, using a platform on its own isn't enough for most big projects today. Logging is also a key part of ensuring your web or mobile app runs smoothly for all users.

Application logging is the practice of recording information about your application's runtime. These records are usually saved to a logging platform, which helps identify potential problems. While no app is perfect 100% of the time, logging helps developers cut down on errors and even cyber attacks. The nature of software is complex. We can't always predict how an application will react to data, errors, or system changes. Logging helps us better understand our own programs. So how do you handle logging in Node.js specifically? The following are some of the best practices for logging in Node.js to get the best results.

1. Understand the Regulations

Let's discuss the current legal regulations about what you can and cannot log. You should never log sensitive information or personal data. That means excluding credentials like passwords, credit card numbers, or even email addresses. Recent changes to regulations like Europe's GDPR make this even more essential. You don't want to get tied up in the legal red tape of sensitive data. When in doubt, stick to the three things that are needed for a solid log message: a timestamp, a log level, and a description. Beyond this, you don't need any extensive framework.

2. Take advantage of Winston

Winston is one of the most popular logging frameworks for Node.js. Winston provides configurable transports for your logs, and you can install it directly into your application. Follow this guide to install Winston on your own. Winston is a powerful tool that comes with different logging levels with values, and you can fully customize the console output with colors, messages, and output details. At the time of writing, the most recent version available is 3.0.0, but always make sure you have the latest edition to keep your app running smoothly.

3. Add Morgan

In addition to Winston, Morgan is an HTTP request logger that collects server logs and standardizes them. Think of it as a logging simplification. While you're free to use Morgan on its own, most developers choose to use it with Winston, since they make a powerful team. Morgan also works well with Express.js.

4. Consider the Intel Package

While Winston and Morgan are a great combination, they're not your only option. Intel is another package solution with similar features as well as unique options. While you'll see a lot of overlap in what they offer, Intel also includes a stack trace object. These features will come in handy when it's time to actually debug. Because it gives a stack trace as a JSON object, it's much easier to pass messages up the logger chain. Think of Intel like the breadcrumbs taking your developers to the error.

5. Use Environment Variables

You'll hear a lot of discussion about configuration management in the Node.js world. Decoupling your code from services and databases is no straightforward process. In Node.js, it's best to use environment variables, and you can look up values from process.env within your code. To determine which environment your program is running in, look up the NODE_ENV variable. You can also use the nconf module.

6. Choose a Style Guide

No developer wants to spend time reading through lines of code only to have to change the spaces to tabs, reformat the braces, and so on.
Style guides are a must, especially when logging in Node.js. If you're working with a team of developers, it's time to decide on a team style guide that everyone sticks to across the board. When the code is written in a consistent style, you don't have to worry about opinionated developers fighting for a say. It doesn't matter which style you stick with, just make sure you can actually stick to it. The Google style guide for Java is a great place to start if you can't make a single decision.

7. Deal with Errors

Finally, accept that errors will happen and prepare for them. You don't want an error to bring down your entire software or program. Exception management is key. Use an async structure to cleanly handle any errors. Whether the app simply restarts or moves on to the next stage, make sure something happens. Users need their errors to be handled.

As you can see, there are a few best practices to keep in mind when logging in Node.js. Don't rely on your developers alone to debug the platform. Set a structure in place to handle these problems as they arise. Your users expect a quality experience every time. Make sure you can deliver with the tips above.

Author Bio
Ashley Lipman
Content marketing specialist
Ashley is an award-winning writer who discovered her passion for providing creative solutions for building brands online. Since her first high school award in Creative Writing, she continues to deliver awesome content through various niches.

Introducing Zero Server, a zero-configuration server for React, Node.js, HTML, and Markdown
5 reasons you should learn Node.js
Deploying Node.js apps on Google App Engine is now easy
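To make practices 2, 3, and 5 above concrete, here is a minimal sketch of Winston and Morgan wired into an Express app. The configuration shown (a console transport, JSON format, and a LOG_LEVEL environment variable) is one reasonable setup among many, not the only valid one:

```javascript
const express = require('express');
const morgan = require('morgan');
const winston = require('winston');

// A basic Winston logger: timestamped JSON lines to the console.
// The log level is driven by an environment variable (practice 5).
const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [new winston.transports.Console()]
});

const app = express();

// Morgan standardizes HTTP request logs and hands each line to Winston.
app.use(morgan('combined', {
  stream: { write: (message) => logger.info(message.trim()) }
}));

app.get('/', (req, res) => res.send('ok'));

app.listen(3000, () => logger.info('Server listening on port 3000'));
```

Routing Morgan's output through Winston keeps request logs and application logs in one consistent format, which is exactly why the two are so often paired.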
A non programmer’s guide to learning Machine learning

Natasha Mathur
05 Sep 2018
11 min read
Artificial intelligence might seem intimidating, but it isn't actually as complex as you might think. Many of the tools developed over the last decade or so have helped make artificial intelligence and machine learning more accessible to engineers with varying degrees of experience and knowledge. Today, we've got to a stage where it's accessible even to people who have barely written a line of code in their life! Pretty exciting, right? But if you're completely new to the field, it can be challenging to know how to get started - fortunately, we're about to help you overcome that first hurdle.

If you are an AI denier, then be sure to first read 'why learn Machine Learning as a non-techie' before you move forward. A strong purpose and belief is the first step to learning anything new. Alright, now here's how you can get started with artificial intelligence and machine learning techniques quickly.

0. Use a free MLaaS or a no-code interactive machine learning tool to experience first-hand what is possible with machine learning: Some popular no-code machine-learning-as-a-service options are Microsoft Azure, BigML, Orange, and Amazon ML. Read Q2 under the FAQ section below to know more on this topic.

1. Learn Linear Algebra: Linear algebra is the elementary unit for ML. It helps you effectively comprehend the theory behind machine learning algorithms and how they work. It also builds up related skills, such as statistics and programming, which all help in ML.
Learning Resources: Linear Algebra for Beginners: Open Doors to Great Careers; Linear algebra Basics

2. Learn just enough Python or any programming language: You can get started with any language of your interest, but we suggest Python, as it's great for people who are new to programming. It's easy to learn due to its simple syntax, so you'll be able to quickly implement ML algorithms. It also has a rich development ecosystem that offers a ton of libraries and frameworks for machine learning, such as scikit-learn, Lasagne, NumPy, SciPy, Theano, TensorFlow, etc.
Learning Resources: Python Machine Learning; Learn Python in 7 Days; Python for Beginners 2017 [Video]; Learn Python with codecademy; Python editor for beginner programmers

3. Learn basic Probability Theory and statistics: A lot of fundamental statistical and probability theories form the basis for ML. If you've already learned probability and statistics in school, it's easy to dive into advanced statistics for ML. Machine learning, in its currently widely used form, is a way to predict odds and see patterns. Knowing statistics and probability is important, as it will help you better understand why any machine learning algorithm works. For example, your grounding in this area will help you ask the right questions, choose the right set of algorithms, and know what to expect as answers from your ML model on questions such as:

- What are the odds of this person also liking this movie, given their current movie-watching choices? (collaborative filtering and content-based filtering)
- How similar is this user to that group of users who bought a bunch of stuff on my site? (clustering, collaborative filtering, and classification)
- Could this person be at risk of cancer, given a certain set of traits and health indicator observations? (logistic regression)
- Should you buy that stock? (decision tree)

Also, check out our interview with James D. Miller to know more about why learning stats is important in this field.
Learning resources: Statistics for Data Science [Video]

4. Learn machine learning algorithms: Do not get intimidated! You don't have to be an expert to learn ML algorithms. Knowing the basic ML algorithms that are majorly used in real-world applications, like linear regression, naive Bayes, and decision trees, is enough to get you started. Learn what they do and how they are used in machine learning.

5. Learn NumPy, scikit-learn, Keras, or any other popular machine learning framework: It can be confusing initially to decide which framework to learn. Each one has its own advantages and disadvantages. NumPy is a linear algebra library which is useful for performing mathematical and logical operations; you can easily work with large multidimensional arrays using NumPy. scikit-learn helps with the quick implementation of popular algorithms on datasets, as just one line of code makes different algorithms available to you. Keras is minimalistic and straightforward with a high level of extensibility, so it is easier to approach.
Learning Resources: Hands-on Machine Learning with TensorFlow [Video]; Hands-on Scikit-learn for Machine Learning [Video]

If you have made it this far, it is time to put your learning into practice. Go ahead and create a simple linear regression model using some publicly available dataset in your area of interest. Kaggle, ourworldindata.org, the UC Irvine Machine Learning Repository, and elitedatascience all have rich sets of clean datasets in varied fields. Now, it is necessary to commit and put in daily effort to practise these skills. Quora, Reddit, Medium, and Stack Overflow will be your best friends when it comes to solving doubts regarding any of these skills. Data Helpers is another great resource that provides newcomers with help on queries regarding entering the ML field and related topics.

Additionally, once you start getting the hang of these skills, identify your strengths and interests to realign your career goals. Research the kind of work you want to put your newly gained machine learning skills to use in. It needn't be professional or serious; it just needs to be something that you deeply care about or are passionate about. This will pull you through your learning milestones, should you feel low at some point. Also, don't forget to collaborate with other people and learn from them. You can work with web developers, software programmers, data analysts, data administrators, game developers, etc. Finally, keep yourself updated with all the latest happenings in the ML world: follow top experts and influencers on social media, top blogs on machine learning, and conferences. Once you are done checking these steps off your list, you'll be ready to start off with your ML project.

Now, we'll be looking at the most frequently asked questions by beginners in the field of machine learning.

Frequently asked questions by Beginners in ML

As a beginner, it's natural to have a lot of questions regarding ML. We'll be addressing the top three questions frequently asked by beginners or non-programmers when it comes to machine learning:

Q.1 I am looking to make a career in Machine learning but I have no prior programming experience. Do I need to know programming for Machine learning?

In a nutshell, yes. If you want a career in machine learning, then having some form of programming knowledge really helps. As mentioned earlier in this article, learning a programming language can really help you with implementing ML algorithms.
It also lets you understand the internal mechanisms behind machine learning. So, having programming as a prior skill is great. Again, as mentioned before, you can get started with Python, which is the easiest and most common language for ML. However, programming is just a part of machine learning. For instance, "machine learning engineers" typically write more code than they develop models, while "research scientists" work more on modelling and analyzing different models. Now, ML is based on the principles of statistical inference, and to talk statistically to the computer we need a language: that's where coding comes in. So, even though the nature of your job in ML might not require you to code as much, there's still some amount of coding required.

Read Also: Why is Python so good for AI and ML? 5 Python Experts Explain
Top languages for Artificial Intelligence development

Q.2 Are there any tools that can help me with Machine learning without touching a single line of code?

Yes. With the rise of MLaaS (machine learning as a service), there are certain tools that help you get started with machine learning right away. These are especially useful for business applications of ML, such as predictive modelling and clustering.

Read Also: How MLaaS is transforming cloud

Some of the most popular ones are:

BigML: This cloud-based web service lets you upload your data, prepare it, and run algorithms on it. It's great for people without extensive data science backgrounds. It offers clean and easy-to-use interfaces for configuring algorithms (decision trees) and reviewing the results. Being focused "only" on machine learning, it comes with a wide set of features, all well integrated within a usable web UI. Other than that, it also offers an API, so if you like it, you can build an application around it.

Microsoft Azure: The Microsoft Azure ML Studio is a "GUI-based integrated development environment for constructing and operationalizing Machine Learning workflow on Azure". Via this integrated development environment, people without a data science background, and non-programmers, can build data models with the help of drag-and-drop gestures and simple data flow diagrams. This also saves a lot of time through ML Studio's library of sample experiments.
Learning resources: Microsoft Azure Machine Learning; Machine Learning In The Cloud With Azure ML [Video]

Orange: This is an open source machine learning and data visualization studio for novices and experts alike. It provides a toolbox comprising text mining (topic modelling) and image recognition. It also offers a design tool for visual programming which allows you to connect together data preparation, algorithms, and result evaluation, thereby creating machine learning "programs". Apart from that, it provides over 100 widgets for the environment, and there's also a Python API and library available which you can integrate into your application.

Amazon ML: Amazon ML is a part of Amazon Web Services (AWS) that combines powerful machine learning algorithms with interactive visual tools to guide you toward easily creating, evaluating, and deploying machine learning models. So, whether you are a data scientist or a newbie, it offers ML services and tools tailored to meet your needs and level of expertise. Building ML models using Amazon ML consists of three operations: data analysis, model training, and evaluation.
Learning Resources: Effective Amazon Machine Learning

Q.3 Do I need to know advanced mathematics (college graduate level) to learn Machine learning?

It depends. As mentioned earlier, an understanding of probability, statistics, and linear algebra can really make your machine learning journey easier and also help simplify your code. These topics help you understand the "why" behind the working of machine learning algorithms, which is quite fundamental to understanding ML. However, not knowing advanced mathematics is not an excuse for not learning machine learning. There are a lot of libraries which make the task of applying an ML algorithm easier. One such example is the widely used Python scikit-learn library: with scikit-learn, you just need one line of code and you'll have the most common algorithms there for you, ready to be used. But if you want to go deeper into machine learning, then knowing advanced mathematics is a prerequisite, as it will help you understand the algorithms, the formulas, how the learning is done, and many other machine learning concepts. Also, with so many courses and tutorials online, you can always learn advanced mathematics on the side while exploring machine learning.

So, we looked at the three most asked questions by beginners in the field of machine learning. In the past, machine learning has provided us with self-driving cars, effective web search, speech recognition, and more. Machine learning is extremely pervasive; in fact, many researchers believe that ML is the best way to make progress towards human-level AI. Learning ML is not an easy task, but it's not next to impossible either. In the end, it all depends on the amount of dedication and effort that you're willing to put in to get a grasp of it. We just touched the tip of the iceberg in this article; there's a lot more to know in machine learning, which you will get the hang of as you get your hands dirty in it. That being said, all the best for the road ahead!

Facebook launches a 6-part ML video series
7 of the best ML conferences for the rest of 2018
Google introduces Machine Learning courses for AI beginners
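To show just how little code that "first linear regression" project from the checklist above needs, here is a minimal scikit-learn sketch. The tiny inline dataset is made up purely for illustration; in practice you would load one of the public datasets mentioned earlier:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical toy data: years of experience vs. salary (in $1000s).
X = np.array([[1], [2], [3], [4], [5]])   # feature matrix, one column
y = np.array([45, 50, 60, 65, 75])        # target values

# Fitting the model really is the "one line" the article mentions.
model = LinearRegression().fit(X, y)

# Inspect the learned line and predict for an unseen input.
print(model.coef_, model.intercept_)      # slope and intercept
print(model.predict([[6]]))               # predicted salary for 6 years
```

Everything beyond these few lines (loading real data, splitting train/test sets, evaluating error) builds on the same fit/predict pattern, which is why scikit-learn is such a gentle entry point.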
6 artificial intelligence cybersecurity tools you need to know

Savia Lobo
25 Aug 2018
7 min read
Recently, many organizations were shaken by an undetected malware, DeepLocker, which secretly evaded even stringent cybersecurity mechanisms. DeepLocker leverages an AI model to attack the target host by using indicators such as facial recognition, geolocation, and voice recognition. This incident speaks volumes about the big role AI plays in the cybersecurity domain. In fact, some may even go so far as to say that AI for cybersecurity is no longer a nice-to-have tech but rather a necessity.

Large and small organizations, and even startups, are investing heavily in building AI systems that analyze huge data troves and, in turn, help their cybersecurity professionals identify possible threats and take precautions or immediate actions to resolve them. But if AI can be used to protect systems, it can also be used to harm them. How? Hackers and intruders can use it to launch much smarter attacks, which are difficult to combat. Phishing, one of the most common and simple social engineering cyber attacks, is now easy for attackers to master, and there is a plethora of tools on the dark web that can help anyone get their hands on phishing. In such trying conditions, it is imperative that organizations take the necessary precautions to guard their information castles. What better than AI?

How 6 tools are using artificial intelligence for cybersecurity

Symantec's Targeted Attack Analytics (TAA) tool

This tool was developed by Symantec and is used to uncover stealthy and targeted attacks. It applies AI and machine learning to the processes, knowledge, and capabilities of Symantec's security experts and researchers. The TAA tool was used by Symantec to counter the Dragonfly 2.0 attack last year, which targeted multiple energy companies and tried to gain access to operational networks. Eric Chein, Technical Director of Symantec Security, says, "With TAA, we're taking the intelligence generated from our leading research teams and uniting it with the power of advanced machine learning to help customers automatically identify these dangerous threats and take action."

The TAA tool analyzes incidents within the network against the incidents found in Symantec's threat data lake. TAA unveils suspicious activity in individual endpoints and collates that information to determine whether each action indicates hidden malicious activity. The TAA tools are now available for Symantec Advanced Threat Protection (ATP) customers.

Sophos' Intercept X tool

Sophos is a British security software and hardware company. Its tool, Intercept X, uses a deep learning neural network that works similarly to a human brain. In 2010, the US Defense Advanced Research Projects Agency (DARPA) created their first Cyber Genome Program to uncover the 'DNA' of malware and other cyber threats, which led to the creation of the algorithm present in Intercept X. Before a file executes, Intercept X is able to extract millions of features from the file, conduct a deep analysis, and determine whether the file is benign or malicious, all in 20 milliseconds. The model is trained on real-world feedback and bi-directional sharing of threat intelligence, via access to millions of samples provided by the data scientists. This results in a high accuracy rate for both existing and zero-day malware, and a lower false positive rate. Intercept X utilizes behavioral analysis to restrict new ransomware and boot-record attacks.
Intercept X has been tested by several third parties, such as NSS Labs, and received high scores. It has also been proven on VirusTotal since August of 2016. Maik Morgenstern, CTO of AV-TEST, said, "One of the best performance scores we have ever seen in our tests."

Darktrace Antigena

Darktrace Antigena is Darktrace's active self-defense product. Antigena expands Darktrace's core capabilities to detect and replicate the function of digital antibodies that identify and neutralize threats and viruses. Antigena makes use of Darktrace's Enterprise Immune System to identify suspicious activity and responds to it in real time, depending on the severity of the threat. With the help of the underlying machine learning technology, Darktrace Antigena identifies and protects against unknown threats as they develop. It does this without the need for human intervention, prior knowledge of attacks, rules, or signatures. With such automated response capability, organizations can respond to threats quickly, without disrupting the normal pattern of business activity. Darktrace Antigena modules help regulate user and machine access to the internet, message protocols, and machine and network connectivity, via products such as Antigena Internet, Antigena Communication, and Antigena Network.

IBM QRadar Advisor

IBM's QRadar Advisor uses IBM Watson technology to fight against cyber attacks. It uses AI to auto-investigate indicators of any compromise or exploit. QRadar Advisor uses cognitive reasoning to give critical insights and further accelerates the response cycle. With the help of IBM's QRadar Advisor, security analysts can assess threat incidents and reduce the risk of missing them.

Features of the IBM QRadar Advisor:

Automatic investigation of incidents: QRadar Advisor with Watson investigates threat incidents by mining local data, using observables in the incident to gather broader local context. It then quickly assesses whether the threats have bypassed layered defenses or were blocked.

Intelligent reasoning: QRadar identifies the likely threat by applying cognitive reasoning. It connects threat entities related to the original incident, such as malicious files, suspicious IP addresses, and rogue entities, to draw relationships among them.

Identification of high-priority risks: With this tool, one can get critical insights on an incident, such as whether or not a malware has executed, with supporting evidence to focus your time on the higher-risk threats, and then quickly decide on the best response method for your business.

Key insights on users and critical assets: IBM's QRadar can detect suspicious behavior from insiders through integration with the User Behavior Analytics (UBA) app and understands how certain activities or profiles impact systems.

Vectra's Cognito

Vectra's Cognito platform uses AI to detect attackers in real time. It automates threat detection and hunts for covert attackers. Cognito uses behavioral detection algorithms to collect network metadata, logs, and cloud events. It further analyzes these events and stores them to reveal hidden attackers in workloads and user/IoT devices. The Cognito platform consists of Cognito Detect and Cognito Recall. Cognito Detect reveals hidden attackers in real time using machine learning, data science, and behavioral analytics, and automatically triggers responses from existing security enforcement points by driving dynamic incident response rules. Cognito Recall determines exploits that exist in historical data.
It further speeds up the detection side of incident investigations with actionable context about compromised devices and workloads over time. It's a quick and easy way to find all devices or workloads accessed by compromised accounts and to identify files involved in exfiltration.

Just as diamond cuts diamond, AI cuts AI. By using AI both to attack and to defend, AI systems on either side will learn different and newer patterns and also surface unique deviations to security analysts. This allows organizations to resolve an attack well before it reaches the core. Given the rate at which AI and machine learning are expanding, the days when AI will redefine the entire cybersecurity ecosystem are not far away.

DeepMind AI can spot over 50 sight-threatening eye diseases with expert accuracy
IBM's DeepLocker: The Artificial Intelligence powered sneaky new breed of Malware
7 Black Hat USA 2018 conference cybersecurity training highlights
Top 5 cybersecurity trends you should be aware of in 2018
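None of the vendors above publish their exact models, but the core idea behind the behavioral analytics they all rely on (learn what "normal" traffic looks like, then flag deviations) can be sketched with a generic anomaly detector. The feature choices, numbers, and thresholds below are illustrative assumptions only, not any vendor's actual method:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical "normal" network behavior: [bytes/sec, connections/min],
# sampled around typical workstation activity.
normal_traffic = rng.normal(loc=[500.0, 30.0], scale=[50.0, 5.0], size=(1000, 2))

# Train an unsupervised anomaly detector on the baseline behavior only.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score new observations: 1 = looks normal, -1 = anomalous.
new_events = np.array([
    [510.0, 29.0],     # ordinary traffic
    [9000.0, 400.0],   # possible exfiltration or scanning burst
])
print(detector.predict(new_events))  # expected: [ 1 -1 ]
```

The appeal of this approach, and the reason products like Cognito or Antigena build on it, is that it requires no prior signatures: anything sufficiently far from the learned baseline gets flagged, including attacks never seen before.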
Vulnerabilities in the Application and Transport Layer of the TCP/IP stack

Melisha Dsouza
07 Feb 2019
15 min read
The Transport Layer is responsible for end-to-end data communication and acts as an interface for network applications to access the network. This layer also takes care of error checking, flow control, and verification in the TCP/IP protocol suite. The Application Layer handles the details of a particular application and performs three main tasks: formatting data, presenting data, and transporting data. In this tutorial, we will explore the different types of vulnerabilities in the Application and Transport Layers.

This article is an excerpt from a book written by Glen D. Singh and Rishi Latchmepersad titled CompTIA Network+ Certification Guide. The book covers all CompTIA certification exam topics in an easy-to-understand manner, along with plenty of self-assessment scenarios for better preparation. It will not only prepare you conceptually but will also help you pass the N10-007 exam.

Vulnerabilities in the Application Layer

The following are some of the application layer protocols which we should pay close attention to in our network:

- File Transfer Protocol (FTP)
- Telnet
- Secure Shell (SSH)
- Simple Mail Transfer Protocol (SMTP)
- Domain Name System (DNS)
- Dynamic Host Configuration Protocol (DHCP)
- Hypertext Transfer Protocol (HTTP)

Each of these protocols was designed to provide the function it was built for, with a lesser focus on security. Malicious users and hackers are able to compromise both the applications that utilize these protocols and the network protocols themselves.

Cross-Site Scripting (XSS)

XSS focuses on exploiting weaknesses in websites. In an XSS attack, the malicious user or hacker injects client-side scripts into a web page/site that a potential victim would trust. The scripts can be JavaScript, VBScript, ActiveX, and HTML, or even Flash (ActiveX), and will be executed on the victim's system. These scripts are masked as legitimate requests between the web server and the client's browser.

XSS focuses on the following:

- Redirecting a victim to a malicious website/server
- Using hidden iframes and pop-up messages in the victim's browser
- Data manipulation
- Data theft
- Session hijacking

Let's take a deeper look at what happens in an XSS attack:

1. An attacker injects malicious code into a web page/site that a potential victim trusts. A trusted site can be a favorite shopping website, a social media platform, or a school or university web portal.
2. The potential victim visits the trusted site.
3. The malicious code interacts with the victim's web browser and executes. The web browser is usually unable to determine whether the scripts are malicious and therefore still executes the commands.
4. The malicious scripts can be used to obtain cookie information, tokens, session information, and so on about other websites that the browser has stored information about.
5. The acquired details (cookies, tokens, session IDs, and so on) are sent back to the hacker, who in turn uses them to log in to the sites that the victim's browser has visited.

There are two types of XSS attacks: stored XSS (persistent) and reflected XSS (non-persistent).

Stored XSS (persistent): In this attack, the attacker injects a malicious script directly into the web application or website. The script is stored permanently on the page, so when a potential victim visits the compromised page, the victim's web browser parses all the code of the web page/application without complaint. Afterward, the script is executed in the background without the victim's knowledge.
At this point, the script is able to retrieve session cookies, passwords, and any other sensitive information stored in the user's web browser, and it sends the loot back to the attacker in the background.

Reflected XSS (non-persistent): In this attack, the attacker usually sends an email with a malicious link to the victim. When the victim clicks the link, it is opened in the victim's web browser (reflected), and at this point the malicious script is invoked and begins to retrieve the loot (passwords, credit card numbers, and so on) stored in the victim's web browser.

SQL injection (SQLi)

SQLi attacks focus on injecting SQL commands into an SQL database application that does not validate user input. The attacker attempts to gain unauthorized access to a database by either creating or retrieving information stored in the database application. Nowadays, attackers are not only interested in gaining access, but also in retrieving (stealing) information and selling it to others for financial gain.

SQLi can be used to perform:

- Authentication bypass: Allows the attacker to log in to a system without a valid user credential
- Information disclosure: Retrieves confidential information from the database
- Compromising data integrity: The attacker is able to manipulate information stored in the database

Lightweight Directory Access Protocol (LDAP) injection

LDAP is designed to query and update directory services, such as a database like Microsoft Active Directory. LDAP uses both TCP and UDP port 389, while LDAP over SSL (LDAPS) uses port 636. In an LDAP injection attack, the attacker exploits vulnerabilities within a web application that constructs LDAP messages or statements based on user input. If the receiving application does not validate or sanitize the user input, the possibility of manipulating LDAP messages increases.

Cross-Site Request Forgery (CSRF)

This attack is somewhat similar to the previously mentioned XSS attack. In a CSRF attack, the victim's machine/browser is forced to execute malicious actions against a website with which the victim has been authenticated (a website that trusts the actions of the user).

To better understand how this attack works, let's visualize a potential victim, Bob. On a regular day, Bob visits some of his favorite websites, such as various blogs and social media platforms, where he usually logs in automatically to view the content. Once Bob logs in to a particular website, the website automatically trusts the transactions between itself and the authenticated user, Bob. One day, he receives an email from the attacker, but unfortunately Bob does not realize it is a phishing/spam message and clicks on the link within the body of the message. His web browser opens the malicious URL in a new tab. The attack causes Bob's machine/web browser to invoke malicious actions on the trusted website; the website sees all the requests as originating from Bob, and the return traffic, such as the loot (passwords, credit card details, user account, and so on), is returned to the attacker.

Session hijacking

When a user visits a website, a cookie is stored in the user's web browser. Cookies are used to track the user's preferences and manage the session while the user is on the site. While the user is on the website, a session ID is also set within the cookie, and this information may be persistent, which allows a user to close the web browser and then later revisit the same website and automatically log in.
However, the web developer can set how long the information persists, whether it expires after an hour or a week, depending on the developer's preference. In a session hijacking attack, the attacker attempts to obtain the session ID while it is being exchanged between the potential victim and the website. The attacker can then use this session ID of the victim on the website, which allows the attacker to gain access to the victim's session and, further, to the victim's user account, and so on.

Cookie poisoning

A cookie stores information about a user's preferences while he/she is visiting a website. Cookie poisoning is when an attacker modifies a victim's cookie, which is then used to gain confidential information about the victim, such as his/her identity.

DNS

Distributed Denial-of-Service (DDoS)

A DDoS attack can occur against a DNS server. Attackers sometimes target Internet Service Providers' (ISPs) networks, public and private Domain Name System (DNS) servers, and so on to prevent other legitimate users from accessing the service. If a DNS server is unable to handle the number of requests coming in, its performance will begin to degrade gradually, until it either stops responding or crashes. This would result in a Denial-of-Service (DoS) attack.

Registrar hijacking

Whenever a person wants to purchase a domain, the person has to complete the registration process at a domain registrar. Attackers try to compromise user accounts on various domain registrar websites in the hope of taking control of the victims' domain names. With a domain name, multiple DNS records can be created or modified to direct incoming requests to a specific device. If a hacker modifies the A record on a domain to redirect all traffic to a compromised or malicious server, anyone who visits the compromised domain will be redirected to the malicious website.

Cache poisoning

Whenever a user visits a website, the process of resolving a host name to an IP address occurs in the background, and the resolved data is stored within the local system in a cache area. An attacker can compromise this temporary storage area and manipulate any further resolution done by the local system.

Typosquatting

McAfee describes typosquatting, also known as URL hijacking, as a type of cyber attack in which an attacker creates a domain name very close to a company's legitimate domain name, in the hope of tricking victims into visiting the fake website to either steal their personal information or distribute a malicious payload to their systems.

Let's take a look at a simple example of this type of attack. In this scenario, we have a user, Bob, who frequently uses the Google search engine to find his way around the internet. Since Bob uses the www.google.com website often, he sets it as his home page on the web browser, so each time he opens the application or clicks the Home icon, www.google.com is loaded onto the screen. One day Bob decides to use another computer, and the first thing he does is set his favorite search engine URL as his home page. However, he typed www.gooogle.com and didn't realize it. Whenever Bob visits this website, it looks like the real one; since the mistyped domain still resolved to a website, this is an example of how typosquatting works. It's always recommended to use a trusted search engine to find the URL for the website you want to visit.
Trusted internet search engine companies focus on blacklisting malicious and fake URLs in their search results to help protect internet users like yourself.

Vulnerabilities at the Transport Layer

In this section, we are going to discuss various weaknesses that exist within the underlying protocols of the Transport Layer.

Fingerprinting

In the cybersecurity world, fingerprinting is used to discover the open ports and services that are running on a target system. From a hacker's point of view, fingerprinting is done before the exploitation phase: the more information a hacker can obtain about a target, the more the hacker can narrow the attack scope and use specific tools to increase the chances of successfully compromising the target machine. This technique is also used by system/network administrators, network security engineers, and cybersecurity professionals alike. Imagine you're a network administrator assigned to secure a server; apart from applying system hardening techniques such as patching and configuring access controls, you would also need to check for any open ports that are not being used.

Let's take a more practical look at fingerprinting in the computing world. We have a target machine, 10.10.10.100, on our network. As a hacker or a network security professional, we would like to know which TCP and UDP ports are open, the services that use the open ports, and the service daemons running on the target system. In the following screenshot, we've used Nmap to help us discover the information we are seeking. The Nmap tool delivers specially crafted probes to a target machine.

Enumeration

In a cyber attack, the hacker uses enumeration techniques to extract information about the target system or network. This information will aid the attacker in identifying system attack points. The following network services and ports stand out for a hacker:

- Port 53: DNS zone transfer and DNS enumeration
- Port 135: Microsoft RPC Endpoint Mapper
- Port 25: Simple Mail Transfer Protocol (SMTP)

DNS enumeration

DNS enumeration is where an attacker attempts to determine whether there are other servers or devices that carry the domain name of an organization. Let's take a look at how DNS enumeration works. Imagine we are trying to find all the publicly available servers Google has on the internet. Using the host utility in Linux and specifying a host name, host www.google.com, we can see that the IP address 172.217.6.196 has been resolved successfully. This means there's an active device with the host name www.google.com. Furthermore, if we attempt to resolve the host name gmail.google.com, another IP address is presented, but when we attempt to resolve mx.google.com, no IP address is given. This is an indication that there isn't an active device with the mx.google.com host name.

DNS zone transfer

DNS zone transfer allows the copying of the master file from a DNS server to another DNS server. There are times when administrators do not configure the security settings on their DNS server properly, which allows an attacker to retrieve the master file containing a list of the names and addresses of a corporate network.

Microsoft RPC Endpoint Mapper

Not too long ago, CVE-2015-2370 was recorded in the CVE database. This vulnerability took advantage of the authentication implementation of the Remote Procedure Call (RPC) protocol in various versions of the Microsoft Windows platform, both desktop and server operating systems.
SMTP

SMTP is used in mail servers, along with the Post Office Protocol (POP) and the Internet Message Access Protocol (IMAP). SMTP is used for sending mail, while POP and IMAP are used to retrieve mail from an email server. SMTP supports various commands, such as EXPN and VRFY. The EXPN command can be used to verify whether a particular mailbox exists on a local system, while the VRFY command can be used to validate a username on a mail server. An attacker can establish a connection between the attacker's machine and the mail server on port 25. Once a successful connection has been established, the server sends a banner back to the attacker's machine displaying the server name and the status of the port (open). Once this occurs, the attacker can use the VRFY command followed by a user name to check for a valid user on the mail system, using the VRFY bob syntax.

SYN flooding

One of the protocols that exists at the Transport Layer is TCP. TCP is used to establish a connection-oriented session between two devices that want to communicate or exchange data. Let's recall how TCP works. There are two devices that want to exchange some messages, Bob and Alice. Bob sends a TCP Synchronization (SYN) packet to Alice, and Alice responds to Bob with a TCP Synchronization/Acknowledgment (SYN/ACK) packet. Finally, Bob replies with a TCP Acknowledgment (ACK) packet. The following diagram shows the TCP 3-Way Handshake mechanism:

For every TCP SYN packet received on a device, a TCP SYN/ACK packet must be sent back in response. One type of attack that takes advantage of this design in TCP is known as a SYN flood attack. In a SYN flood attack, the attacker sends a continuous stream of TCP SYN packets to a target system. This causes the target machine to process each individual packet and respond accordingly; eventually, with the high influx of TCP SYN packets, the target system becomes too overwhelmed and stops responding to any requests:

TCP reassembly and sequencing

During a TCP transmission of datagrams between two devices, each packet is tagged with a sequence number by the sender. This sequence number is used to reassemble the packets back into data. During the transmission, each packet may take a different path to the destination. This may cause the packets to be received out of order, rather than in the order they were sent over the wire by the sender. An attacker can attempt to guess the sequence numbers of packets and inject malicious packets into the network destined for the target. When the target receives the packets, the receiver would assume they came from the real sender, as they would contain the appropriate sequence numbers and a spoofed IP address.
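To make the VRFY technique concrete, here is a minimal sketch of the exchange described above, written with Python's standard library. The mail server address and the username bob are placeholders; the probe should only ever be run against a server you are authorized to test, and many modern mail servers disable or fake VRFY responses:

```python
import socket

MAIL_SERVER = "mail.example.com"  # placeholder: a server you are authorized to test
USERNAME = "bob"

with socket.create_connection((MAIL_SERVER, 25), timeout=5) as s:
    print(s.recv(1024).decode(errors="replace"))   # the server's greeting banner
    s.sendall(b"HELO tester.local\r\n")
    print(s.recv(1024).decode(errors="replace"))
    s.sendall(f"VRFY {USERNAME}\r\n".encode())
    reply = s.recv(1024).decode(errors="replace")
    print(reply)  # a 250/252 reply suggests the user exists; 550 means unknown
    s.sendall(b"QUIT\r\n")
```

This is exactly why hardening guides recommend disabling EXPN and VRFY on internet-facing mail servers: the banner plus these two commands hand an attacker a ready-made username oracle.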
Summary

In this article, we have explored the different types of vulnerabilities that exist at the Application and Transport Layers of the TCP/IP protocol suite. To understand other networking concepts like network architecture, security, network monitoring, and troubleshooting, and to ace the CompTIA certification exam, check out our book CompTIA Network+ Certification Guide.

AWS announces more flexibility in its Certification Exams, drops its exam prerequisites
Top 10 IT certifications for cloud and networking professionals in 2018
What matters on an engineering resume? Hacker Rank report says skills, not certifications
Developer's guide to Software architecture patterns

Sugandha Lahoti
06 Aug 2018
11 min read
As we all know, patterns are a kind of simplified and smarter solution for a repetitive concern or recurring challenge in any field of importance. In the field of software engineering, there are primarily design, integration, and architecture patterns. In this article, we will cover the need for software patterns and describe the most prominent and dominant software architecture patterns. This article is an excerpt from Architectural Patterns by Pethuru Raj, Anupama Raman, and Harihara Subramanian.

Why software patterns?

There is a bevy of noteworthy transformations happening in the IT space, especially in software engineering. The complexity of recent software solutions is continuously going up due to the continued evolution of business expectations. With complex software, not only does the software development activity become very difficult, but the software maintenance and enhancement tasks also become tedious and time-consuming. Software patterns come as a soothing factor for software architects, developers, and operators.

Types of software patterns

Several newer types of patterns are emerging in order to cater to different demands. This section throws some light on these. An architecture pattern expresses a fundamental structural organization or schema for complex systems. It provides a set of predefined subsystems, specifies their unique responsibilities, and includes the decision-enabling rules and guidelines for organizing the relationships between them. The architecture pattern for a software system illustrates the macro-level structure of the whole software solution. A design pattern provides a scheme for refining the subsystems or components of a system, or the relationships between them. It describes a commonly recurring structure of communicating components that solves a general design problem within a particular context. The design pattern for a software system prescribes the ways and means of building the software components.

There are other patterns, too. The dawn of the big data era mandates distributed computing, and the monolithic, massive nature of enterprise-scale applications demands microservices-centric applications. Here, application services need to be found and integrated in order to give an integrated result and view. Thus, there are integration-enabled patterns. Similarly, there are patterns for simplifying software deployment and delivery. Other complex actions are being addressed through the smart leverage of simple as well as composite patterns.

Software architecture patterns

Let's look at some of the prominent and dominant software architecture patterns.

Object-oriented architecture (OOA)

Objects are the fundamental and foundational building blocks for all kinds of software applications. Therefore, the object-oriented architectural style has become the dominant one for producing object-oriented software applications. Ultimately, a software system is viewed as a dynamic collection of cooperating objects, instead of a set of routines or procedural instructions. We know that there are proven object-oriented programming methods and enabling languages, such as C++, Java, and so on. The properties of inheritance, polymorphism, encapsulation, and composition provided by OOA come in handy in producing highly modular (highly cohesive and loosely coupled), usable and reusable software applications. The object-oriented style is suitable if we want to encapsulate logic and data together in reusable components, or when complex business logic requires abstraction and dynamic behavior.
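As a minimal illustration of the object-oriented style (our own sketch, not from the book), the following Python classes encapsulate state and behavior together and use polymorphism so that callers can treat different notification channels uniformly:

```python
class Notifier:
    """Base abstraction: callers depend on this interface only."""
    def send(self, message: str) -> None:
        raise NotImplementedError

class EmailNotifier(Notifier):
    def __init__(self, address: str):
        self._address = address   # encapsulated state

    def send(self, message: str) -> None:
        print(f"Emailing {self._address}: {message}")

class SmsNotifier(Notifier):
    def __init__(self, number: str):
        self._number = number

    def send(self, message: str) -> None:
        print(f"Texting {self._number}: {message}")

# Polymorphism: the same loop works for any Notifier subclass.
for notifier in (EmailNotifier("bob@example.com"), SmsNotifier("555-0100")):
    notifier.send("Your order has shipped")
```

The calling code never mentions email or SMS; new channels can be added without touching it, which is the modularity the OOA style promises.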
Component-based assembly (CBA) architecture

Monolithic and massive applications can be partitioned into multiple interactive and smaller components. When components are found, bound, and composed, we get full-fledged software applications. CBA does not focus on issues such as communication protocols and shared states. Components are reusable, replaceable, substitutable, extensible, independent, and so on. Design patterns such as the dependency injection (DI) pattern or the service locator pattern can be used to manage dependencies between components and promote loose coupling and reuse. Such patterns are often used to build composite applications that combine and reuse components across multiple applications.

In aspect-oriented programming (AOP), aspects are another popular application building block. By deft maneuvering of this unit of development, different applications can be built and deployed. The AOP style aims to increase modularity by allowing the separation of cross-cutting concerns. AOP includes programming methods and tools that support the modularization of concerns at the level of the source code.

Agent-oriented software engineering (AOSE) is a programming paradigm where the construction of the software is centered on the concept of software agents. In contrast to the proven object-oriented programming, which has objects (providing methods with variable parameters) at its core, agent-oriented programming has externally specified agents with interfaces and messaging capabilities at its core. They can be thought of as abstractions of objects. Exchanged messages are interpreted by receiving agents in a way specific to their class of agents.

Domain-driven design (DDD) architecture

Domain-driven design is an object-oriented approach to designing software based on the business domain, its elements and behaviors, and the relationships between them. It aims to enable software systems that are a correct realization of the underlying business domain by defining a domain model expressed in the language of business domain experts. The domain model can be viewed as a framework from which solutions can then be readied and rationalized. DDD is good if we have a complex domain and we wish to improve communication and understanding within the development team. DDD can also be an ideal approach if we have large and complex enterprise data scenarios that are difficult to manage using existing techniques.

Client/server architecture

This pattern segregates the system into two main applications, where the client makes requests to the server. In many cases, the server is a database with application logic represented as stored procedures. This pattern helps in designing distributed systems that involve a client system, a server system, and a connecting network. The main benefits of the client/server architecture pattern are:

Higher security: All data gets stored on the server, which generally offers greater control of security than client machines.
Centralized data access: Because data is stored only on the server, access and updates to the data are far easier to administer than in other architectural styles.
Ease of maintenance: The server system can be a single machine or a cluster of multiple machines. The server application and the database can be made to run on a single machine or replicated across multiple machines to ensure easy scalability and high availability.
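To make the client/server pattern concrete, here is a small illustrative sketch (our own, not from the book) of a TCP server and client using Python's standard library; in a real deployment, the server side would front a database or application logic rather than simply echoing the request:

```python
import socket
import threading
import time

def run_server(host="127.0.0.1", port=5050):
    # The server waits for a client request and returns a response.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"server reply: " + data)

# Start the server in the background, then act as the client.
threading.Thread(target=run_server, daemon=True).start()
time.sleep(0.5)  # give the server a moment to start listening

with socket.create_connection(("127.0.0.1", 5050)) as cli:
    cli.sendall(b"hello")
    print(cli.recv(1024).decode())   # -> server reply: hello
```

The essential shape is visible even in this toy: all shared state lives behind the server socket, and clients interact with it only through requests and responses over the network.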
However, the traditional two-tier client/server architecture pattern has numerous disadvantages. Firstly, the tendency to keep both application and data on a server can negatively impact system extensibility and scalability. Secondly, the server can be a single point of failure; reliability is the main worry here. To address these issues, the client/server architecture has evolved into the more general three-tier (or N-tier) architecture. This multi-tier architecture not only surmounts the issues just mentioned but also brings forth a set of new benefits.

Multi-tier distributed computing architecture

The two-tier architecture is neither flexible nor extensible. Hence, multi-tier distributed computing architecture has attracted a lot of attention. The application components can be deployed on multiple machines (these can be co-located or geographically distributed). Application components can be integrated through messages or remote procedure calls (RPCs), remote method invocations (RMIs), the Common Object Request Broker Architecture (CORBA), Enterprise JavaBeans (EJBs), and so on. The distributed deployment of application services ensures high availability, scalability, manageability, and so on. Web, cloud, mobile, and other customer-facing applications are deployed using this architecture. Thus, based on the business requirements and the application complexity, IT teams can choose the simple two-tier client/server architecture or the advanced N-tier distributed architecture to deploy their applications. These patterns are for simplifying the deployment and delivery of software applications to their subscribers and users.

Layered/tiered architecture

This pattern is an improvement over the client/server architecture pattern and is the most commonly used architectural pattern. Typically, an enterprise software application comprises three or more layers: a presentation/user interface layer, a business logic layer, and a data persistence layer. The presentation layer is primarily used by user interface applications (thick clients) or web browsers (thin clients). With the fast proliferation of mobile devices, mobile browsers are also being attached to the presentation layer. Such tiered segregation comes in handy in managing and maintaining each layer accordingly. The power of plug and play gets realized with this approach, and additional layers can be fitted in as needed. There are Model-View-Controller (MVC) pattern-compliant frameworks that hugely simplify the building of enterprise-grade and web-scale applications; MVC is a web application architecture pattern. The main advantage of the layered architecture is the separation of concerns. That is, each layer can focus solely on its role and responsibility. The layered and tiered pattern makes the application:

Maintainable
Testable
Easy to assign specific and separate roles
Easy to update and enhance layers separately

This architecture pattern is good for developing web-scale, production-grade, and cloud-hosted applications quickly and in a risk-free fashion. When there are business and technology changes, this layered architecture comes in handy for embedding newer things in order to meet varying business requirements.
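The separation of concerns behind the layered pattern is easy to see in code. The following is a small illustrative sketch (ours, with an in-memory dict standing in for a real database) in which each layer talks only to the layer directly beneath it:

```python
# Data persistence layer: the only code that knows how data is stored.
class UserRepository:
    def __init__(self):
        self._db = {1: "Alice", 2: "Bob"}   # stand-in for a real database

    def find_name(self, user_id):
        return self._db.get(user_id)

# Business logic layer: rules live here, not in the UI or the storage code.
class GreetingService:
    def __init__(self, repo: UserRepository):
        self._repo = repo

    def greeting_for(self, user_id):
        name = self._repo.find_name(user_id)
        if name is None:
            raise ValueError(f"unknown user {user_id}")
        return f"Welcome back, {name}!"

# Presentation layer: formats output and knows nothing about storage.
def render(service: GreetingService, user_id):
    print(service.greeting_for(user_id))

render(GreetingService(UserRepository()), 1)   # -> Welcome back, Alice!
```

Because each layer depends only on the one below it, the dict-backed repository could be swapped for a SQL-backed one without touching the service or the presentation code, which is exactly the maintainability benefit described above.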
Event-driven architecture (EDA)

The world is eventually becoming event-driven. That is, applications have to be sensitive and responsive proactively, pre-emptively, and precisely. Whenever an event happens, applications have to receive the event information and plunge into the necessary activities immediately. The request-and-reply notion gives way to the fire-and-forget tenet. The communication becomes asynchronous, and there is no need for the participating applications to be available online all the time. EDA is typically based on an asynchronous message-driven communication model to propagate information throughout an enterprise. It supports a more natural alignment with an organization's operational model by describing business activities as a series of events. EDA does not bind functionally disparate systems and teams into the same centralized management model. EDA ultimately leads to highly decoupled systems; the common issues introduced by system dependencies get eliminated through the adoption of the proven and potential EDA.

We have seen various forms of events used in different areas. There are business and technical events. Systems update their status and condition by emitting events, which are captured and subjected to a variety of investigations in order to precisely understand the prevailing situations. The submission of web forms and clicks on hyperlinks generate events to be captured. Incremental database synchronization mechanisms, RFID readings, email messages, Short Message Service (SMS) messages, instant messaging, and so on are events not to be taken lightly. There are event-processing engines and message-oriented middleware (MoM) solutions, such as message queues and brokers, to collect and stock event data and messages. Millions of events can be collected, parsed, and delivered through multiple topics by these MoM solutions. As event sources/producers publish notifications, event receivers can choose to listen to or filter out specific events and make proactive decisions in real time about what to do next.

The EDA style is built on the fundamental aspects of event notifications to facilitate immediate information dissemination and reactive business process execution. In an EDA environment, information can be propagated to all the services and applications in real time. The EDA pattern enables highly reactive enterprise applications, and real-time analytics is the new normal with the surging popularity of the EDA pattern.

Service-oriented architecture (SOA)

With the arrival of service paradigms, software packages and libraries are being developed as collections of services. Services are capable of running independently of the underlying technology. Also, services can be implemented using any programming and scripting languages. Services are self-defined, autonomous, interoperable, publicly discoverable, assessable, accessible, reusable, and composable. Services interact with one another through messaging. There are service providers/developers and consumers/clients. Every service has two parts: the interface and the implementation. The interface is the single point of contact for requesting services. Interfaces give the required separation between services; all kinds of deficiencies and differences of service implementation get hidden by the service interface. Precisely speaking, SOA enables application functionality to be provided as a set of services, and the creation of personal as well as professional applications that make use of software services. In short, SOA is for service enablement and service-based integration of monolithic and massive applications. The complexity of enterprise process/application integration gets moderated through the smart leverage of the service paradigm.
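A toy in-process event bus (our own sketch; real systems would use a broker such as a message queue) shows the decoupling that EDA provides: publishers fire events and forget them, and subscribers react without either side knowing about the other:

```python
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Fire and forget: the publisher does not wait on its subscribers.
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
bus.subscribe("order.placed", lambda order: print("Billing order", order["id"]))
bus.subscribe("order.placed", lambda order: print("Shipping order", order["id"]))

# The producer only knows the topic name, not who is listening.
bus.publish("order.placed", {"id": 42})
```

Adding a third reaction to "order.placed" (say, analytics) means registering another handler; the publisher's code never changes, which is the loose coupling the pattern is prized for.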
To summarize, we detailed the prominent and dominant software architecture patterns and how they are used for producing and running any kind of enterprise-class and production-grade software application. To know more about patterns associated with object-oriented, component-based, client-server, and cloud architectures, grab the book Architectural Patterns.

Why we need Design Patterns?
Implementing 5 Common Design Patterns in JavaScript (ES8)
An Introduction to Node.js Design Patterns
The Difference Between Working in Indie and AAA Game Development

Raka Mahesa
02 Oct 2017
5 min read
Let's say we have two groups of video games. In the first group, we have games like The Witcher 3, Civilization VI, and Overwatch. And in the second group, we have games like Super Meat Boy, Braid, and Stardew Valley. Can you tell the difference between these two groups? Is one group of games better than the other? No, they are all good games that have achieved both critical and financial success. Are the games in the first group sequels, while games in the second group are new? No, Overwatch is a new, original IP. Are the games in the first group more expensive than the second group? Now we're getting closer. The truth is, the first group of games comes from searching Google for "popular AAA games," while the second group comes from searching for "popular indie games." In short, the games in the first group are AAA games, and in the second group are indie games.

Indie vs. AAA game development

Now that we've seen the difference between the two groups, why do people separate these games into two different groups? What makes these two groups of games different from each other? Some would say that they are priced differently, but there are actually AAA games with low pricing as well as indie games with expensive pricing. How about the scale of the games? Again, there are indie games with big, massive worlds, and there are also AAA games set in short, small worlds. From my perspective, the key difference between the two groups of games is the size of the company developing the games. Indie games are usually made by companies with fewer than 30 people, and some are even made by fewer than five people. On the other hand, AAA games are made by much bigger companies, usually with hundreds of employees.

Game development teams: size matters

Earlier, I mentioned that company size is the key difference between indie games and AAA games. So it's not surprising that it's also the main difference between indie and AAA game development. In fact, the difference in team or company size leads to every difference between the two game development processes. Let's start with something personal: your role or position in the development team. Big teams usually have every position they need already filled. If they need someone to work on the game engine, they already have an engine programmer there. If they need someone to design a level, they already have a level designer working on it. In a big team, your role is already determined from the start, and you will rarely work on any task outside of your job description. If AAA game development values specialists, then indie game development values generalists who can fill multiple roles. It's not weird at all in a small development team if a programmer is asked to deal with both networking and enemy AI. Small teams usually aren't able to individually cover all the needed positions, so they turn to people who are able to work on a variety of tasks.

Funding across the games industry

Let's move on to another difference, this time concerning funding. A large team requires a large amount of funding, simply because it has more people that need to be paid. And, if you look at the bigger picture, it also means that video games made by a large team have a large development cost. The opposite rings true as well; indie game development has much smaller development costs because of the smaller teams. Because every project has a chance of failure, the large development cost of AAA games becomes a big problem.
If you're only spending a little money, maybe you're fine with a small chance of failure, but if you're spending a large sum of money, you definitely want to reduce that risk as much as possible. This ends up with AAA game development being much more risk-averse; AAA studios try to avoid risk as much as possible. In AAA game development, when there's a decision that needs to be made, the team will try to make sure that they don't make the wrong choice. They will do extensive market research and they will see what is trending in the market. They want to reach as large an audience as possible, so if there's any design choice that would exclude a significant number of customers, it will be cut out. On the other hand, indie game development doesn't spend that much money. With a smaller development cost, indie games don't need a massive number of sales to recoup their costs. Because of that, indie developers are willing to take risks with experimental and unorthodox design, giving the team creative freedom without needing to do market research. That said, indie game development harbors a different kind of risk. Unlike their bigger counterparts, indie game developers tend to live from one game to the next. That is, they use the revenue from their current game to fund the development of their next game. So if any of their games don't perform well, they could immediately close down. And that's another difference between the two game development processes: AAA game development tends to be more financially stable compared to indie development. There are more differences between indie and AAA game development, but the ones listed above are some of the most prominent. All in all, one development process isn't better than the other, and it falls to you to decide which one is better suited for you. Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

Unity and Unreal comparison

Raka Mahesa
26 Jan 2018
5 min read
If you want to find out how to get into game development, you've probably come across the two key game engines in the industry: Unreal Engine and Unity. But how do Unity and Unreal Engine compare? What are the differences between Unity and Unreal Engine? Is one better than the other? Explore the newest and most popular Unity eBooks and video courses. Discover Unreal eBooks and video courses here.

Unity and Unreal price comparison

Unreal Engine has a simple pricing scheme: you get everything for free, but you have to pay 5 percent of your earnings. Unity also has a free tier that includes the core features of the engine, but if your company has an annual revenue of more than $100,000, you have to use the paid tier, which will cost you $35 per month. The paid tier also gives you additional features, including a custom splash screen, an enhanced analytics feature, and expanded multiplayer hosting. The question here is which pricing scheme fits your business model (and budget). If you have a small, nimble team, Unity might be the better option, but if you have a big team developing a complex game, Unreal Engine might be more cost-effective. The good thing is, without spending a dime, you can get the full capability of both tools, so you can't really go wrong starting with either of them.

How do Unity's and Unreal's capabilities compare?

We'll start with a simple, but important, question: what platforms do Unreal Engine and Unity support? Unreal Engine supports developing games for mobile platforms like iOS and Android, for consoles like the PS4, Xbox One, and Nintendo Switch, and for desktop operating systems like Windows, Mac, and Linux. It also has support for VR platforms such as Oculus, SteamVR, PSVR, Google Daydream, and Samsung Gear VR. Unity, on the other hand, not only supports all of those platforms, it also supports smart TV platforms like Android TV and Samsung Smart TV, as well as augmented reality platforms like Apple ARKit and Google ARCore. And Unity doesn't simply support more platforms than Unreal; it is also usually the first game engine to provide compatibility when a new platform is launched. Unity is the clear winner when it comes to compatibility, and if you're looking to release your game on as many platforms as possible, then Unity is your best choice.

Comparing Unity and Unreal's feature sets

Even though both tools have similar capabilities, Unreal Engine provides more built-in tools that make game development easier. Unreal has an extensive built-in material editor as well as a built-in cinematic editor that allows developers to easily create cinematic sequences in their games. Meanwhile, Unity relies on third-party add-ons from its asset store to provide similar functionality. That said, the 2D development tooling provided by Unity is much more effective than Unreal's. Do keep in mind that features can't be judged by their numbers alone. One of the most important qualities of a tool is how easy it is to use. Ease of use is, of course, relatively subjective; what one person loves using might be a nightmare for another.

Is Unity or Unreal easier to use?

Based on the built-in tools provided by each engine, we can see that Unreal is the more powerful of the two options. But that also means Unity is simpler to use. The same comparison can be seen in their programming aspects. Unity uses C# as its main programming language, which is easier to use and learn.
Unreal, on the other hand, uses C++, which is much more powerful, but is also harder to learn and more prone to mistakes. Fortunately, Unreal makes up for its complexity by providing an alternative, easy-to-use scripting language: Blueprint. Blueprint is a scripting language where developers can simply connect nodes together to program gameplay elements. Using this tool, non-programmers like artists and writers are able to script gameplay events without relying on programmers.

Comparing the Unity and Unreal communities

The last point we're going to address is something not directly related to the engines themselves, but it is nevertheless pretty important: the community. A big community makes it much easier to get help when you run into trouble; it also means more tool and resource development. Unity is the winner on this front, as can be seen from the huge number of tutorials and third-party libraries created for it. It's important to remember one thing: both development tools are fully capable of producing great games with amazing graphics and good performance that can sell millions. One tool may need more work than the other to get the same result, but that result is perfectly achievable with both engines. So you don't need to worry that choosing one tool over the other will negatively affect your end product. So, have you made up your mind about which tool you're going to use? Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work, he enjoys working on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.