Tech Guides


Uber's kepler.gl, an open source toolbox for GeoSpatial Analysis

Pravin Dhandre
28 Jun 2018
4 min read
Geographic visualization, also called geovisualization, plays a pivotal role in areas like cartography, geographic information systems, remote sensing, and global positioning systems. Uber, the peer-to-peer transportation network company headquartered in California, believes in data-driven decision making, and keeps developing smart frameworks like deck.gl for exploring and visualizing advanced geospatial data at scale. Uber strives to make its data web-based and shareable in real time across its teams and customers.

Early this month, Uber surprised the geospatial market with its newly open-sourced toolbox, kepler.gl, a geoanalytics tool for gaining quick insights from geospatial data through intuitive visualizations.

What exactly is kepler.gl?

kepler.gl is a visualization-rich web platform built on top of deck.gl, a WebGL-powered data visualization library that provides real-time visual analytics of millions of geolocation points. The platform supports visual exploration of geographical datasets along with spatial aggregation of all the data points collected. It is data-agnostic, offering a single interface for turning your data into insightful visualizations.

https://www.youtube.com/watch?v=i2fRN4e2s0A

The platform is very user-friendly: you can simply drag CSV or GeoJSON files and drop them into the browser to visualize the dataset. It comes with different map layers, filtering options, and aggregation features, through which you can render the final visualization in an animated, video-like format. The features are so usable that you can apply any of the available metrics to your data points without much hassle.

Performance is strong as well: you can get insights from your spatial data in less than 10 minutes, all in a single window. Another advantage of this framework is that it does not involve any coding, so non-technical users can also reap the benefits by churning out valuable insights from their data points.

kepler.gl is also equipped with more advanced features, such as a 2D cartographic plane, a separate dimension for altitude, and configurable heights for hexagons and grids. Users seem happy with the new height feature, which helps them detect abnormalities and illicit traits in an aggregated map. With the filtering menu, analysts and engineers can compare their data and take a granular look at their data points. This option also helps in reading histograms, making it easy to detect outliers and make a dataset more reliable. There is also a feature for adding playback to time-series data points, which makes extracting useful information from real-time location systems easy.
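To make the drag-and-drop workflow described above concrete, here is a minimal sketch that uses only Python's standard library to produce a small GeoJSON file of the kind kepler.gl accepts (the coordinates and the "pickups" property are invented for illustration):

import json

# Two hypothetical pickup locations, following the GeoJSON specification
feature_collection = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [-122.4194, 37.7749]},
            "properties": {"pickups": 12},
        },
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [-122.4477, 37.7689]},
            "properties": {"pickups": 7},
        },
    ],
}

# Write the file, then drag and drop it onto the kepler.gl page
with open("pickups.geojson", "w") as f:
    json.dump(feature_collection, f)

Any file that follows the GeoJSON specification in this way should be picked up by the browser drop zone, with its properties available as metrics for layers and filters.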
The team at Uber has a long-term vision for the toolbox: they plan to keep adding new features and enhancements to make it a highly functional, single-click visualization dashboard. They have already announced two major enhancements to the current functionality, planned for the next couple of months:

More robust exploration: interlinkage between charts and maps, plus support for custom charts, maps, and widgets in the spirit of the renowned BI tool Tableau, enabling analytics teams to unveil deeper insights.

Addition of newer geo-analytical capabilities: to support massive datasets, new data operations such as polygon aggregation, union of data points, joining, and buffering.

Companies across different verticals, such as Airbnb, Atkins Global, Cityswifter, and Mapbox, have found great value in what kepler.gl offers and are looking to engineer their products to leverage this framework. Visualization specialists at these companies have already praised Uber for building such a simple yet fast platform with remarkable capabilities.

To get started with kepler.gl, read the documentation available on GitHub and start creating visualizations to enhance your geospatial data analysis.

Top 7 libraries for geospatial analysis
Using R to implement Kriging – A Spatial Interpolation technique for Geostatistics data
Data Visualization with ggplot2


How is Artificial Intelligence changing the mobile developer role?

Bhagyashree R
15 Oct 2018
10 min read
Last year at Google I/O, Sundar Pichai, the CEO of Google, said: "We are moving from a mobile-first world to an AI-first world." Is this only applicable to Google? Not really. In the recent past we have seen several advancements in Artificial Intelligence and, in parallel, a plethora of intelligent apps coming into the market. These advancements enable developers to take their apps to the next level by integrating recommendation services, image recognition, speech recognition, voice translation, and many more cool capabilities.

Artificial Intelligence is becoming a potent tool for mobile developers to experiment and innovate with. The AI components that are becoming integral to mobile experiences, such as voice-based assistants and location-based services, increasingly require mobile developers to have a basic understanding of Artificial Intelligence to be effective. Of course, you don't have to be an Artificial Intelligence expert to include intelligent components in your app, but you should definitely understand something about what you're building into your app and why. After all, AI in mobile is not just a matter of calling an API; there's more to it, and in this article we will explore how Artificial Intelligence will shape the mobile developer role in the immediate future.

Read also: AI on mobile: How AI is taking over the mobile devices marketspace

What is changing in the mobile developer role?

Focus is shifting to data. With Artificial Intelligence becoming more and more accessible, intelligent apps are becoming the new norm for businesses. Artificial Intelligence strengthens the relationship between brands and customers, inspiring developers to build smart apps that increase user retention. This also means that developers have to direct their focus to data. They have to understand things like: How will the data be collected? How will it be fed to machines, and how often will new input be needed? When nearly 1 in 4 people abandon an app after its first use, as a mobile app developer you need to rethink how you drive in-app personalization and engagement.

Exploring a "humanized" way of user-app interaction. With assistants such as Siri and Google Assistant and a wave of chatbots coming into the market, "humanizing" the interaction between the user and the app is becoming mainstream. "Humanizing" is the process of making the app relatable to the user; the more effectively it is done, the more the end user will interact with the app. Users now want easy navigation and search, and Artificial Intelligence fits this scenario perfectly. Advances in technologies like text-to-speech, speech-to-text, Natural Language Processing, and cloud services in general have contributed to the mass adoption of these types of interfaces.

Companies increasingly expect mobile developers to be comfortable working with AI functionality. Artificial Intelligence is the future, and companies now expect their mobile developers to know how to handle the huge amount of data generated every day and how to use it.
Here's an example of what Google wants its engineers to do: "We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day." This open-ended requirement list shows that now is the right time to learn and embrace Artificial Intelligence.

What skills do you need to build intelligent apps?

Ideally, data scientists conceptualize the mathematical models, and machine learning engineers translate them into code and train them. But when you are working in a resource-tight environment, for example at a start-up, you may be responsible for the end-to-end job. It is not as scary as it sounds, because there are several resources to get you started!

Taking your first steps with machine learning as a service. Learning anything starts with motivating yourself. Diving straight into the maths and coding of machine learning might exhaust and bore you. That's why it's a good idea to first understand what the end goal of your learning process is and what types of solutions are possible with machine learning. There are many products you can try to get started quickly, such as Google Cloud AutoML (beta), Firebase ML Kit (beta), and the Fritz mobile SDK, among others.

Read also: Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence

Getting your hands dirty. After this warm-up, the next step is creating and training your own model. This is where you'll be introduced to TensorFlow Lite, which is going to be your best friend throughout your journey as a machine learning mobile developer. There are many other machine learning tools coming onto the market that make building AI into mobile apps easier. For instance, you can use Dialogflow, a Natural Language Understanding (NLU) platform that makes it easy for developers to design and integrate conversational user interfaces into mobile apps, web applications, devices, and bots. You can then integrate it with Alexa, Cortana, Facebook Messenger, and the other platforms your users are on.

Read also: 7 Artificial Intelligence tools mobile developers need to know

For practice, you can work through an excellent codelab by Google, TensorFlow For Poets, which guides you through creating and training a custom image classification model. Through this codelab you will learn the basics of data collection, model optimization, and the other key components involved in creating your own model. The codelab is divided into two parts: the first covers creating and training the model, and the second focuses on TensorFlow Lite, the mobile version of TensorFlow that lets you run the same model on a mobile device.
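To give a flavor of that second part, here is a minimal sketch of the conversion step, assuming a recent TensorFlow 2.x install and a Keras model you have already trained (the file names are invented for illustration):

import tensorflow as tf

# Load a model trained earlier (hypothetical path)
model = tf.keras.models.load_model("image_classifier.h5")

# Convert it to the TensorFlow Lite format for on-device inference
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# The resulting flat buffer is what you bundle with your mobile app
with open("image_classifier.tflite", "wb") as f:
    f.write(tflite_model)

The .tflite file can then be shipped inside an Android or iOS app and executed with the TensorFlow Lite interpreter, which is broadly the workflow the codelab walks through.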
Mathematics is the foundation of machine learning. Love it or hate it, machine learning and Artificial Intelligence are built on mathematical principles like calculus, linear algebra, probability, statistics, and optimization, so you need to learn the essential foundational concepts and the notation used to express them. There are many reasons why learning mathematics for machine learning matters: it helps you select the right algorithm, weighing considerations such as accuracy, training time, model complexity, and the number of parameters and features; and it is needed when choosing parameter settings and validation strategies, and when identifying underfitting and overfitting by understanding the bias-variance tradeoff.
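For a taste of the notation involved, the bias-variance tradeoff just mentioned comes from the standard decomposition of a model's expected squared prediction error, where $\hat{f}(x)$ is the trained model's prediction and $\sigma^2$ the irreducible noise in the data:

$$\mathbb{E}\big[(y - \hat{f}(x))^2\big] = \mathrm{Bias}\big[\hat{f}(x)\big]^2 + \mathrm{Var}\big[\hat{f}(x)\big] + \sigma^2$$

A model that underfits is dominated by the bias term, while one that overfits is dominated by the variance term; tuning model complexity is the act of balancing the two.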
Read also: Bias-Variance tradeoff: How to choose between bias and variance for your machine learning model [Tutorial]
Read also: What is Statistical Analysis and why does it matter?

What are the key aspects of Artificial Intelligence for mobile to keep in mind?

Understanding the problem. Your number one priority should be the user problem you are trying to solve. Instead of randomly integrating a machine learning model into an application, developers should understand how the model applies to the particular application or use case. This matters because you might end up building a great machine learning model with an excellent accuracy rate, yet if it does not solve any problem, it ends up being redundant. You must also understand that while many business problems require machine learning approaches, not all of them do; most business problems can be solved through simple analytics or a baseline approach.

Data is your best friend. Machine learning depends on data; the data you use, and how you use it, will define the success of your machine learning model. You can make use of the thousands of open source datasets available online, and Google recently launched Google Dataset Search, a tool that makes it easier to find the right dataset for your problem. Typically there's no shortage of data; however, the abundance of data does not mean that the data is clean, reliable, or usable as intended. Data cleanliness is a huge issue: for example, a typical company will have multiple customer records for a single individual, all of which differ slightly. If the data isn't clean, it isn't reliable. The bottom line is that it is bad practice to just grab the data and use it without considering its origin.

Read also: Best Machine Learning Datasets for beginners

Decide which model to choose. A machine learning algorithm is trained, and the artifact created by the training process is called the machine learning model. An ML model is used to find patterns in data without the developer having to explicitly program those patterns. We cannot look through such huge amounts of data and spot the patterns ourselves; think of the model as a helper that looks through all those terabytes of data and extracts knowledge and insights from them. You have two choices here: create your own model or use a pre-built one. While there are several pre-built models available, your business-specific use cases may require specialized models to yield the desired results, and even off-the-shelf models may need some fine-tuning or modification to deliver the value the app is intended to provide.

Read also: 10 machine learning algorithms every engineer needs to know

Thinking about resource utilization is important. Artificial Intelligence-powered apps, like apps in general, should be developed with resource utilization in mind. Though companies are working towards improving mobile hardware, it is currently no match for what we can accomplish with GPU clusters in the cloud. Developers therefore need to consider how the models they intend to use will affect resources, including battery power and memory usage. In terms of computational resources, inferencing (making predictions) is less costly than training, but on-device inferencing still means loading the model into RAM and spending significant computation time on the GPU or CPU. Even so, in scenarios that involve continuous inferencing, such as streams of audio or image data that would quickly chew up bandwidth if sent to a server, on-device inferencing is a good choice.

Learning never stops. Maintenance is important, and to do it well you need to establish a feedback loop and a process and culture of continuous evaluation and improvement. A change in consumer behavior or a market trend can negatively affect the model, and eventually something will break or no longer work as intended, which is another reason developers need to understand the basics of what they're adding to an app. You need some knowledge of how the Artificial Intelligence component you just put together works, and how it could be made to run faster.

Wrapping up

Before falling for the Artificial Intelligence and machine learning hype, it's important to understand and analyze the problem you are trying to solve. Examine whether applying machine learning can improve the quality of your service, and decide whether that improvement justifies the effort of deploying a machine learning model. If you just want a simple API endpoint and don't want to dedicate much time to deploying a model, cloud-based web services are the best option for you. Tools like ML Kit for Firebase look promising and seem like a good choice for startups or developers just starting out. TensorFlow Lite and Core ML are good options if you have mobile developers on your team or if you're willing to get your hands a little dirty. Artificial Intelligence is influencing the app development process by giving us a data-driven approach to solving user problems. It wouldn't be surprising if, in the near future, Artificial Intelligence becomes a defining factor in app developers' expertise and creativity.

10 useful Google Cloud Artificial Intelligence services for your next machine learning project [Tutorial]
How Artificial Intelligence is going to transform the Data Center
How Serverless computing is making Artificial Intelligence development easier


Is Linux hard to learn?

Jay LaCroix
30 Jan 2018
6 min read
This post is an extract from Linux Mint Essentials by Jay LaCroix.

Quite often, I am asked whether or not Linux is hard to learn. The reputation Linux has for being hard to use and learn most likely stems from the early days, when typical distributions actually were quite difficult to use. I remember a time when simply installing a video card driver required manually recompiling the kernel (which took many hours), and enabling support for media such as MP3s required multiple manual commands. Nowadays, however, how difficult Linux is to learn and use is determined by which distribution you pick.

If, for example, you're a beginner and you choose a distribution tailored for advanced users, you are likely to find yourself frustrated very quickly. In fact, there are distros available that make you do everything manually, such as choosing which version of the kernel to run and installing and configuring the desktop environment. This level of customizability is wonderful for advanced users who wish to build their own Linux system from the ground up, though beginners are more likely to be put off by it. General-purpose distributions such as Mint are actually very easy to learn, and in some cases tasks in Mint are even easier to perform than in other operating systems.

The ease of use we enjoy with a number of Linux distributions is due in part to the advancements that Ubuntu has made in usability. Around the time Windows Vista was released, a renaissance of sorts occurred in the Linux community. Quite a few people were so outraged by Windows Vista that a lot more effort was put into making Ubuntu easier to use, and it can be argued that the Vista period saw the fastest growth in usability Linux has ever seen. Tasks that were once rites of passage (such as installing drivers and media codecs) became trivial, and the exciting changes in Ubuntu during that time inspired other distributions to make similar changes.

Nowadays, usage of Ubuntu is beginning to decline because not everyone is pleased with its new user interface (Unity); however, there is no denying the positive impact it has had on Linux usability. Being based on Ubuntu, Mint inherits many of those benefits while also aiming to improve on Ubuntu's perceived weaknesses. Thanks to its great reception, Mint eventually went on to surpass Ubuntu itself: it currently sits at the very top of the charts on DistroWatch.com, and with good reason; it's an amazing distribution.

Distributions such as Mint are incredibly user-friendly. Even the installation procedure is a cinch, and most people can get through it by simply accepting the defaults. Installing new software is also straightforward, as everything is included in software repositories and managed through a graphical application. In fact, I recently acquired an HP printer that comes with a CD full of required software for Windows, but when connected to my Mint computer, it just worked; no installation of any software was required. Linux has never been easier!

Why use Linux Mint?

When it comes to Linux, there are many distributions available, each vying for your attention. But which Linux distribution should you use? In this post, taken from Linux Mint Essentials, we'll explore why you should choose Linux Mint rather than larger distributions such as Fedora and Ubuntu. In the first instance, the user-friendly nature of Linux Mint is certainly a good reason to use it. However, there's much more to it than just that.

Of course, it's true that Ubuntu is the big player when it comes to Linux distributions, but because Linux Mint is built on Ubuntu, it has the power of those foundations. By choosing Mint, you're not compromising on what has become a standard in Linux. Linux Mint takes the already solid foundation of Ubuntu and improves on it with a different user interface, custom tools, and a number of further tweaks, such as having media formats recognized right from the start.

It's not uncommon for a Linux distribution to be based on another distribution. It is much easier to build on an existing foundation, since building your own base is quite time-consuming (and expensive). By utilizing the existing foundation of Ubuntu, Mint benefits from the massive software repository that Ubuntu has at its disposal, without having to reinvent the wheel and recreate everything from the ground up. The development time saved allows the Linux Mint developers to focus on adding exciting features and tweaks that improve ease of use. Given that Ubuntu is open source, it's perfectly fine to use it as a base for a completely separate distribution. Unlike in the proprietary software market, the developers of Mint aren't at risk of being sued for recycling the package base of another distribution. In fact, Ubuntu itself is built on the foundation of another distribution (Debian), and Mint is not the only distribution to use Ubuntu as a base.

As mentioned before, Mint uses a different user interface than Ubuntu. Ubuntu ships with the Unity interface, which (so far) has not been highly regarded by the majority of the Linux community. Unity split Ubuntu's user community in half: some people loved the new interface, while others were not so enthused and made their distaste well known. Rather than adopt Unity during this transition, Mint opted for two primary environments instead: Cinnamon and MATE. Cinnamon is recommended for more modern computers, while MATE is useful for older computers with less processing power and memory. MATE also suits those who prefer the older style of Linux environments, as it is a fork of GNOME 2.x.

Many people consider Cinnamon to be the default desktop environment in Linux Mint, but that is open to debate; the Mint developers have yet to declare either of them the default. Mint actually ships five different versions (also known as spins) of its distribution. Four of them (Cinnamon, MATE, KDE, and Xfce) differ mainly in their user interfaces, while the fifth is a completely different distribution based on Debian instead of Ubuntu. Due to its popularity, Cinnamon is the closest thing to a default in Mint and, as such, is a recommended starting point.


"Microservices require a high-level vision to shape the direction of the system in the long term," says Jaime Buelta

Bhagyashree R
25 Nov 2019
9 min read
Looking back four or five years, the sentiment around the microservices architecture has changed quite a bit. First came the hype phase: after seeing the success stories of companies like Netflix, Amazon, and Gilt.com, developers assumed that microservices were the de facto way to build applications. By now we have realized that microservices are simply another architectural style which, when applied to the right problem in the right way, works amazingly well, but which comes with its own pros and cons.

To understand what exactly microservices are, when we should use them, and when we should not, we sat down with Jaime Buelta, the author of Hands-On Docker for Microservices with Python. Along with explaining microservices and their benefits, Buelta shared some best practices developers should keep in mind if they decide to migrate their monoliths to microservices.

Further learning: Before jumping into microservices, Buelta recommends building solid foundations in general software architecture and web services. "They'll be very useful when dealing with microservices and afterward," he says. His book, Hands-On Docker for Microservices with Python, aims to guide you through that journey; in it, you'll learn how to structure big systems, encapsulate them using Docker, and deploy them using Kubernetes.

Microservices: The benefits and risks

A traditional monolith application encloses all its capabilities in a single unit. In the microservices architecture, by contrast, the application is divided into smaller standalone services that are independently deployable, upgradeable, and replaceable. Each microservice is built for a single business purpose and communicates with the other microservices through lightweight mechanisms. Buelta explains: "Microservice architecture is a way of structuring a system, where several independent services communicate with each other in a well-defined way (typically through web RESTful services). The key element is that each microservice can be updated and deployed independently."

The microservices architecture dictates not only how you build your application but also how your team is organized. "Though [it] is normally described in terms of the involved technologies, it's also an organizational structure. Each independent team can take full ownership of a microservice. This allows organizations to grow without developers clashing with each other," he adds.

One of the key benefits of microservices is that they enable innovation without much impact on the system as a whole. With microservices you can scale horizontally, keep strong module boundaries, use diverse technologies, and develop in parallel. As for the risks, Buelta said: "The main risk in its adoption, especially when coming from a monolith, is to make a design where the services are not truly independent. This generates an overhead and complexity increase in inter-service communication." He adds: "Microservices require a high-level vision to shape the direction of the system in the long term. My recommendation to organizations moving towards this kind of structure is to put someone in charge of the 'big picture'. You don't want to lose sight of the forest for the trees."
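To ground that definition, here is a minimal sketch of one such independent service. Flask is an assumption here (a natural one given the interview's Python focus, though the article does not prescribe it), and the route and data are invented:

from flask import Flask, jsonify

app = Flask(__name__)

# A single-purpose service: it only knows about orders.
# Other microservices talk to it solely through this RESTful interface.
ORDERS = {1: {"item": "book", "quantity": 2}}

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify(error="not found"), 404
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=5000)

Because the service owns its data and exposes nothing but this interface, it can be updated and redeployed without coordinating with the rest of the system, which is exactly the independence Buelta stresses.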
Migrating from monoliths to microservices

Martin Fowler, the renowned author and software consultant, advises a "monolith-first" approach, because using the microservices architecture from the get-go can be risky; it is mostly suited to large systems and large teams. Buelta shares this perspective: "The main metric for starting to think about this kind of migration is raw team size. For small teams, it is not worth it, as developers understand everything that is going on and can ask the person sitting right across the room any question. A monolith works great in these situations, and that's why virtually every system starts like this." This echoes Amazon's "two-pizza team" rule, which says that if the team responsible for one microservice cannot be fed with two pizzas, it is too big.

"As business and teams grow, they need better coordination. Developers start stepping on each other's toes often. Knowing the intent of a particular piece of code is trickier. Migrating then makes sense to give some separation of function and clarity. Each team can set its own objectives and work mostly on its own, presenting a clear external interface. But for this to make sense, there should be a critical mass of developers," he adds.

Best practices to follow when migrating to microservices

Asked about the best practices developers should follow when migrating to microservices, Buelta said: "The key to a successful microservice architecture is that each service is as independent as possible." A question that arises here is: how can you make the services independent? "The best way to discover the interdependence of a system is to think in terms of new features: if there's a new feature, can it be implemented by changing a single service? What kinds of features will require the coordination of several microservices? Are they common requests, or are they rare? No design will be perfect, but at least it will help you make informed decisions," explains Buelta.

He also advises doing it right instead of doing it twice: "Once the migration is done, making changes on the boundaries of the microservices is difficult. It's worth investing time in the initial phase of the project."

Migrating from one architectural pattern to another is a big change, so we asked what challenges he and his team faced during the process. "The most difficult challenge is actually people. They tend to be underestimated, but moving into microservices actually changes the way people work. Not an easy task!" he said. He adds: "I've faced some of these problems, like having to give enough training and support to developers, and especially explaining the rationale behind some of the changes. This helps developers understand the whys of a change they find so frustrating. For example, a common complaint when moving from a monolith is having to coordinate deployments that used to be a single monolith release. This needs more thought to ensure backward compatibility and minimize risk. That is not always immediately obvious, and needs to be explained."

On choosing Docker, Kubernetes, and Python as his technology stack

We asked Buelta which technologies he prefers for implementing microservices. For the language, his answer was simple: "Python is a very natural choice for me. It's my favorite programming language!" He adds: "It's very well suited for the task. Not only is it readable and easy to use, but it also has ample support for web development.
On top of that, it has a vibrant ecosystem of third-party modules for any conceivable demand, including connecting to other systems like databases and external APIs."

Docker is often touted as one of the most important tools for microservices, and Buelta explained why: "Docker allows you to encapsulate and replicate the application in a convenient standard package. This reduces uncertainty and environment complexity. It greatly simplifies the move from development to production. It also helps reduce hardware utilization: you can fit multiple containers with different environments, even different operating systems, in the same physical box or virtual machine."

For Kubernetes, he said: "Finally, Kubernetes allows us to deploy multiple Docker containers working in a coordinated fashion. It forces you to think in a clustered way, keeping the production environment in mind. It also allows us to define the cluster using code, so new deployments or configuration changes are defined in files. All this enables techniques like GitOps, which I described in the book: storing the full configuration in source control. This makes every change specific and reversible, as changes are regular git merges. It also makes recovering or duplicating infrastructure from scratch easy."

"There is a bit of a learning curve involved in Docker and Kubernetes, but it's totally worth it. Both are very powerful tools. And they encourage you to work in a way that's suited to avoiding pitfalls in production," he shared.
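As a rough illustration of "defining the cluster using code": the kind of deployment that GitOps workflows usually express as a YAML manifest can also be written with the official kubernetes Python client. This is a sketch, not an excerpt from the book, and the image name and labels are invented:

from kubernetes import client, config

config.load_kube_config()  # authenticate using the local kubeconfig

# Describe a two-replica deployment of a hypothetical service image
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="orders"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "orders"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="orders", image="example/orders:1.0"),
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

Kept in source control, a definition like this (or its YAML equivalent) is what makes the reviewable, reversible changes Buelta describes possible.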
On multilingual microservices

Microservices allow you to use diverse technologies, since each microservice is ideally handled by an independent team. Buelta shared his opinion on multilingual microservices: "Multilingual microservices are great! That's one of their greatest advantages. A typical example is migrating legacy code written in one language to another. A microservice can replace another that exposes the same external interface while being completely different internally. I've done migrations from old PHP apps, replacing them with Python apps, for example." He adds: "As an organization, working with two or more frameworks at the same time can help you understand both of them better, and when to use one or the other."

Though multilingual microservices are a great advantage, they can also increase operational overhead. Buelta advises: "A balance needs to be struck, though. It doesn't make sense to use a different tool each time and not be able to share knowledge across teams. The specific numbers may depend on company size, but in general, more than two or three should require a good explanation of why a new tool needs to be introduced into the stack. Keeping tools at a reasonable level also helps to share knowledge and to use them most effectively."

About the author

Jaime Buelta is a professional programmer and full-time Python developer who has been exposed to a lot of different technologies over his career. He has developed software for a variety of fields and industries, including aerospace, networking and communications, industrial SCADA systems, video game online services, and financial services. As part of these companies, he worked closely with various functional areas, such as marketing, management, sales, and game design, helping the companies achieve their goals. He is a strong proponent of automating everything and making computers do most of the heavy lifting so users can focus on the important stuff. He currently lives in Dublin, Ireland, and has been a regular speaker at PyCon Ireland.

Check out Buelta's book, Hands-On Docker for Microservices with Python, on PacktPub, where you will learn how to build production-grade microservices and orchestrate a complex system of services using containers. Follow Jaime Buelta on Twitter: @jaimebuelta.

Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices
Kong CTO Marco Palladino on how the platform is paving the way for microservices adoption [Interview]
Yuri Shkuro on Observability challenges in microservices and cloud-native applications


Reactive Programming in Swift

Milton Moura
16 Mar 2016
6 min read
In this post we will learn how to use some of Swift's functional features to write more concise and expressive code, using RxSwift, a reactive programming framework, to manage application state and concurrent tasks.

Swift and its functional features

Swift can be described as a modern object-oriented language with native support for generic programming. Although it is not a functional language, it has features that allow us to program in a functional style, like closures, functions as first-class types, and immutable value types. Nevertheless, Cocoa Touch is an object-oriented framework and bears the constraints that this paradigm enforces. Typical issues that arise in software development projects include managing shared application state and concurrent asynchronous tasks that compete for the data that resides there. Functional programming addresses these problems by privileging immutable state and defining application logic as expressions that do not change during the application's lifecycle. By defining self-contained functions, computations can be easily parallelized and concurrency issues minimized.

The reactive model

The reactive programming model has its roots in FRP (functional reactive programming), which shifts the paradigm from discrete, imperative, command-driven programming to a series of transformations that can be applied to a stream of inputs continuously over time. While that might sound like a mouthful, there's nothing like a simple example to get a feel for what this means.

Expressing a relationship between variables

Let's say you have two variables (A and B) whose values change over the running time of an application, and a third one (C) that derives its own value from the previous two:

var A = 10
var B = 20
let C = A * 2 + B

// Current values
// A = 10, B = 20, C = 40

A = 0

// Current values
// A = 0, B = 20, C = 40

The definition of C in terms of A and B is evaluated only once, when the assignment operation is executed. The relationship between them is lost immediately after that: changing A or B from then on has no effect on the value of C. To refresh C at any given moment, you must reassign it and evaluate the expression once again, based on the current values of A and B. How would we do this with a reactive programming approach?

In the reactive model, we create two streams that propagate changes to the values of A and B over time. Each value change is represented as a signal in its corresponding stream. We then combine both streams and assign a transformation that we want to perform on each signal emitted, thus creating a new stream that emits only transformed values.
The usual way to demonstrate this is with marble diagrams, where each line represents the continuity of time and each marble an event that occurs at a determined point in time.

Reacting in Cocoa Touch

To address this in Cocoa Touch, you could use Key-Value Observing to add observers to the changing variables and handle changes when the KVO system notifies you:

self.addObserver(self, forKeyPath: "valueA", options: .New, context: nil)
self.addObserver(self, forKeyPath: "valueB", options: .New, context: nil)

override func observeValueForKeyPath(keyPath: String?, ofObject object: AnyObject?, change: [String : AnyObject]?, context: UnsafeMutablePointer<Void>) {
    let C = valueA * 2 + valueB
}

If your variables are tied to the user interface, in UIKit you could define a handler that is invoked when change events are triggered:

sliderA.addTarget(self, action: "update", forControlEvents: UIControlEvents.ValueChanged)
sliderB.addTarget(self, action: "update", forControlEvents: UIControlEvents.ValueChanged)

func update() {
    let C = sliderA.value * 2 + sliderB.value
}

But none of these approaches defines a persistent and explicit relationship between the variables involved, their lifecycle, and the events that change their values. We can overcome this with a reactive programming model. There are a couple of implementations currently available for OS X and iOS development, such as RxSwift and ReactiveCocoa. We will focus on RxSwift, but the basic concepts we address are similar in both frameworks.

RxSwift

RxSwift extends the Observer pattern to simulate asynchronous streams of data flowing out of your Cocoa Touch objects as if they were typical collections. By extending some of Cocoa Touch's classes with observable streams, you can subscribe to their output and use them with composable operations, such as filter(), merge(), map(), reduce(), and others.

Returning to our previous example, let's say we have an iOS application with two sliders (sliderA and sliderB) and we wish to continuously update a label (labelC) with the same expression we used before (A * 2 + B):

1. combineLatest(sliderA.rx_value, sliderB.rx_value) {
2.     $0 * 2 + $1
3. }.map {
4.     "Sum of slider values is \($0)"
5. }.bindTo(labelC.rx_text)

We take advantage of the rx_value extension of the UISlider class, which transforms the slider's value property into an observable type that emits an item whenever the value changes. By applying the combineLatest() operation to both sliders' observable types, we create a new observable type that emits items whenever either of its source streams emits one. The resulting emission is a tuple with both sliders' values, which is transformed in the operation callback (line 2). Then, we map the transformed value into an informative string (line 4) and bind it to our label (line 5).

By composing three independent operations (combineLatest(), map(), and bindTo()) we were able to concisely express a relationship between three objects and continuously update our application's UI, reacting to changes in the application state.

What's next?

We have only scratched the surface of what you can do with RxSwift. In the sample source code, you will find an example of how to download online resources using chainable asynchronous tasks. Be sure to check it out if this article sparked your curiosity. Then take some time to read the documentation and learn about the several other API extensions that will help you develop iOS apps in a more functional and expressive way.
Discover how patterns in Swift can help you deal with a large number of similar objects in our article Using the Flyweight Pattern.

About the author

Milton Moura (@mgcm) is a freelance iOS developer based in Portugal. He has worked professionally in several industries, from aviation to telecommunications and energy, and is now fully dedicated to creating amazing applications using Apple technologies. With a passion for design and user interaction, he is also very interested in new approaches to software development. You can find out more at http://defaultbreak.com


Denys Vuika on building secure and performant Electron apps, and more

Bhagyashree R
02 Dec 2019
7 min read
Building cross-platform desktop applications can be difficult. It requires knowledge of platform-specific tools and technologies for each platform you want to target. Wouldn't it be great if you could write and maintain a single codebase, using your existing web development skills? Electron helps you do exactly that: it is a framework for building cross-platform desktop apps with JavaScript, HTML, and CSS.

Electron did not start out as a separate project; it was built to port the Mac-only Atom text editor to other platforms. The Atom team at GitHub tried solutions like the Chromium Embedded Framework (CEF) and node-webkit (now known as NW.js), but nothing worked quite right. That is when Cheng Zhao, a GitHub engineer, started a new project and rewrote node-webkit from scratch. This project was Atom Shell, which we now know as Electron. It was open-sourced in 2014 and renamed Electron in May 2015.

To get an insight into why so many companies are adopting Electron, we interviewed Denys Vuika, a veteran programmer and the author of the book Electron Projects. He also talked about when you should choose Electron, best practices for building secure Electron apps, and more. Electron Projects is a project-based guide that will help you explore the components of the Electron framework and its integration with other JS libraries as you build 12 real-world desktop apps of increasing complexity.

When Electron is the best bet, and when it is not

Many popular applications are built with Electron, including VS Code, GitHub Desktop, and Slack. It enables developers to deliver new features fast while maintaining consistency across platforms. Vuika says: "The cost and speed of the development, and code reuse, are the main reasons, I believe. Companies can effectively reuse existing code to build desktop applications that look and behave exactly the same across platforms. No need to have separate developer teams for various platforms."

When we asked Vuika why he chose Electron, he said: "Historically, I got into Electron app development to build applications that run on macOS and Linux, alongside the traditional Windows platform. I didn't want to study another stack just to build for macOS, so an Electron shell with web-based content was extremely appealing."

On when you should choose Electron: "Electron is the best bet when you want to have a single codebase and a single developer team working with all major platforms. Web developers should have a very minimal learning curve to get started with Electron development. And the desktop application codebase can also be shared with the website counterpart. That saves a huge amount of time and money. Also, the Node.js integration brings millions of useful packages to cover all possible scenarios." And when it is not a good choice: "if you are just trying to wrap website functionality into a desktop shell. The biggest benefit of Electron applications is access to the local file system and hardware."

Building Electron applications with Angular, React, and Vue

Electron integrates with all three of the most popular JavaScript frameworks: React, Vue, and Angular. Each has its own pros and cons. If you are coming from a JavaScript background, React could be a good option, as it has much less abstraction away from vanilla JS; it is also very flexible, you can extend its core functionality by adding libraries, and it is backed by a great community.
Vue is a lightweight framework that is easier to learn and get productive with. Angular has exceptional TypeScript support and includes dependency injection, HTTP services, internationalization, formatting pipes, server-side rendering, a CLI, animations, and much more. When it comes to Electron, choosing among them depends on which framework you are comfortable with and what fits your needs. Vuika recommends: "There are three pretty big developer camps out there: React, Angular and Vue. All of them focus on web components and client applications, so it's a matter of personal preference, or of historical decisions when speaking about companies. Also, each JavaScript framework has more than one set of mature UI libraries and design systems, so there are always options to choose from." For novice developers he recommends: "Keep in mind it is still a web stack. Pick whatever you are comfortable building a web application with." Vuika's book, Electron Projects, has a dedicated chapter, Integrating Electron applications with Angular, React, and Vue, to help you learn how to integrate these frameworks with your Electron apps.

Tips on building performant and secure apps

Electron's core components are Chromium (more specifically, the libchromiumcontent library), Node.js, and Chromium's V8 JavaScript engine. Each Electron app ships with its own isolated copy of Chromium, which affects both its memory footprint and its bundle size. On memory, Vuika said: "It has some memory footprint but, based on my personal experience, most of the memory issues are usually related to the application implementation and resource management rather than the Electron shell itself."

Some of the best practices the Electron team recommends are examining modules and their dependencies before adding them to your application, and ensuring the main process is not blocked, among others. You can find the full checklist on Electron's official site. Vuika suggests: "Electron developers have the whole toolset they use for web development: Chrome Developer Tools with debuggers, profilers, and many other great features. There are also build tools for each frontend framework that allow minification, code splitting, and tree shaking. Routing libraries allow loading only the content the user needs at a particular point. Many areas to improve memory and resource consumption." More recently, some developers have also started using Rust, and recommend using WebAssembly with Electron, to minimize Electron's pain points while enjoying its benefits.

Coming to security, Vuika says: "With Electron, a web application can have nearly full access to the local file system and operating system resources by means of the Node.js process. Developers should be very careful trusting web content, especially if using remotely served HTML content." He recommends: "The Electron team has recently published a very good article on security that I strongly recommend reading and keeping in your bookmarks. The article dwells on the major security pitfalls, as well as ways to harden your applications."

Meanwhile, Electron keeps improving with every subsequent release. Starting with Electron 6.0, the team has laid "the groundwork for a future requirement that native Node modules loaded in the renderer process be either N-API or Context Aware", an update expected to land in Electron 11.0. "Also, keep in mind that Electron keeps improving and evolving all the time.
It is getting more secure and faster with each release. For developers, it is more important to build up the knowledge of creating and debugging applications, as far as I'm concerned," he adds.

About the author

Denys Vuika is an Applications Platform Developer and Tech Lead at Alfresco Software, Inc. He is a full-stack developer and a constant open source contributor with more than 16 years of programming experience, including ten years of front-end development with AngularJS, Angular, ASP.NET, React.js, and other modern web technologies, and more than three years of Node.js development. Denys works with web technologies on a daily basis and has a good understanding of cloud development and the containerization of web applications. He is a frequent blogger on Medium, the author of Developing with Angular, a book on Angular, JavaScript, and TypeScript development, and the maintainer of a series of Angular-based open source projects.

Check out Vuika's latest book, Electron Projects, on PacktPub: a project-based guide to creating, packaging, and deploying desktop applications on multiple platforms using modern JavaScript frameworks. Follow Denys Vuika on Twitter: @DenysVuika.

Electron 6.0 releases with improved Promise support, native Touch ID authentication support, and more
The Electron team publicly shares the release timeline for Electron 5.0
How to create a desktop application with Electron [Tutorial]

Do you need to be a polyglot to be a great programmer?

Amit Kothari
19 Jan 2018
6 min read
Recently, I was talking to someone who has been working as a developer for over a year. They asked me which programming languages they should learn to improve their employability and grow as a developer. This made me think: do we really need to be polyglots to be good programmers?

A polyglot programmer is someone who can write code in multiple languages, and most of us already use several. Someone working on web apps uses HTML, CSS, and JavaScript. Similarly, backend services might be written in a specific language, but the developer might still use SQL for database queries or YAML for configuration files. As developers, we like to try and learn new programming languages and frameworks. We do this for many reasons: to solve specific problems, to find a better alternative, or simply to keep up to date with what's new and trending.

The benefits of being a polyglot programmer

There are obvious benefits to being a polyglot developer.

It increases your employability. Being proficient in multiple languages looks very good on your resume. It shows your experience as a developer and also indicates that you are flexible and able to work with different tools in different situations.

It provides you with more opportunities and greater variety. When you're looking for a new job, or even in your current role, being able to write code in multiple languages opens up many more opportunities. When you're a polyglot, you are much more in control of your career destiny.

Developer happiness. Many developers simply feel more productive when using a particular language. But to know what you enjoy, you need to be open-minded and willing to explore lots of different languages. Polyglots get to try out different syntaxes and get to know different communities, and this exploration is surely one of the best things about being a developer.

Along with all these benefits, working with different languages gives us a chance to learn about different programming paradigms. We can learn different ways of solving a problem and different ways of thinking, and then bring all that learning together to write better code.

The challenges

While there are many benefits to learning and knowing multiple programming languages, this constant learning comes with its own challenges.

Lack of proficiency. In his book JavaScript: The Good Parts, Douglas Crockford talks about the good and bad parts of JavaScript; other languages likewise have aspects that should be approached with caution. If you frequently change programming languages without spending enough time to learn each one properly, you might run into issues around things like performance and security.

Maintenance becomes a nightmare. Having too many languages in a tech stack will likely become a maintenance nightmare for both the development and the operations side. This will take you somewhere that is the opposite of agile and efficient.

Developer fatigue. Constantly learning and adapting to new languages and technology may result in developer fatigue. It is a fact of tech today that developers feel stressed and under pressure, and this is bound to affect not only their productivity but their health as well.

From an organization's perspective, there are tradeoffs when adding a new language to a tech stack. There may be operational costs and costs to up-skill the team; on the upside, code quality and productivity may improve.
Companies that avoid investing in up-skilling their teams and upgrading their tech stack may end up with systems that are difficult to maintain, where even small changes take weeks to deliver and finding skilled developers becomes challenging. On the other hand, constantly changing programming languages and technology may result in features not getting delivered for months, in some cases years. There are many cases where a project started in one programming language and, after years of development, the team decided to rewrite the whole system in a newer language or framework. While architectures like microservices solve some of these problems by allowing us to write different parts of a system in different languages without rewriting the whole system, it is important to understand the cost of introducing a new language: the benefits we get out of it should always outweigh that cost.

"Any fool can write code that a computer can understand. Good programmers write code that humans can understand." - Martin Fowler

How to become a better developer

Learning different programming languages is one way to grow as a developer, but there are other things we can do to improve.

Write clean code. As developers, we spend more time reading code than writing it. Writing code that is easy to read and understand is one of the key traits of a good developer.

Write easy-to-maintain code. A good programmer puts in extra effort to make sure the code is easy to maintain, using design principles and test-driven development to ensure that the code can be modified with ease and with confidence that changes will not affect existing functionality.

Understand the problem. A good developer will try to understand the problem and then pick the appropriate tool to solve it, instead of starting with a technology just because it's trending.

There are lots of obvious advantages to learning multiple programming languages. Not only does it look good on a resume, it also helps you improve as a developer. However, it is just as important to understand the business problems you are trying to solve. Whether you're a polyglot or not, the most important thing any developer can do is focus on the problems instead of the tools.

I hope you enjoyed this post; please let us know what you think! Are you a polyglot? Do you think trying to become one is important today?

Amit Kothari is a full-stack software developer based in Melbourne, Australia. He has over 10 years of experience in designing and implementing software, mainly in Java/JEE. His recent experience is in building web applications using JavaScript frameworks like React and AngularJS, and backend microservices/REST APIs in Java. He is passionate about lean software development and continuous delivery.

Python, Tensorflow, Excel and more - Data professionals reveal their top tools

Amey Varangaonkar
06 Jun 2018
4 min read
Data professionals are constantly on the lookout for the best tools to simplify their data science tasks, be it data acquisition, machine learning, or visualizing the results of an analysis. With so much on their plate already, having robust, efficient tools in their arsenal helps a lot in reducing procedural complexity, and the time taken to do these tasks shrinks considerably as well. But which tools do data professionals rely on to make their lives easier? Thanks to the Skill-up 2018 survey that we recently conducted, we have some interesting observations to share with you!

Read the Skill Up report in full. Sign up to our weekly newsletter and download the PDF for free.

Key Takeaways

Python is the most widely used programming language by data professionals
Python finds wide adoption across all spectrums of data science, including data analysis, machine learning, deep learning, and data visualization
Excel continues to be favored by data professionals because of its effectiveness and simplicity
R is slowly falling behind Python in the race to data science supremacy

Now, let's look at these observations in more depth.

Python continues its ascension as the top dog

Python's rise in popularity as well as adoption over the last 3 years has been quite staggering, to say the least. Python's ease of use, powerful analytical and machine learning capabilities, as well as its applications outside of data science, make it quite a popular language in the tech community. It thus comes as no surprise that it stood out from the others and was the undisputed choice of language for the data pros. R, on the other hand, seems to be finding it difficult to play catch-up to Python, with less than half the number of votes, despite being the tool of choice for many statisticians and researchers. Is the paradigm shift well and truly on? Is Python edging R out for good?

Source: Packt Skill-Up Survey 2018

It is interesting to see SQL at number 2, but considering the number of people working with databases these days, it doesn't come as a surprise. Also, JavaScript is preferred over Java, indicating the rising need for web-based dashboards for effective Business Intelligence.

Data professionals still love Excel, but Python libraries are taking over

Microsoft Excel has traditionally been a highly popular tool for data analysis, especially when dealing with data with hundreds and thousands of records. Excel's familiar environment for data manipulation and charting continues to be the reason why people still use it for basic-level data analysis, as indicated by our survey. Almost 53% of the respondents prefer having Excel in their analysis toolkit for their day-to-day tasks.

Top libraries, tools and frameworks used by data professionals (Source: Packt Skill-Up Survey 2018)

The survey also indicated Python's rising dominance in the data science domain, with 8 out of the 10 most-used tools for data analysis being Python-based. Python's offerings for data wrangling, scientific computing, machine learning, and deep learning make its libraries the obvious choice for data professionals. Here's a quick look at 15 useful Python libraries to make the above-mentioned data science tasks easier.

Tensorflow and PyTorch are in demand

AI's popularity is soaring with every passing day as it finds applications across all types of industries and business domains. In our survey, we found machine learning and deep learning to be two of the most valuable skills for any data scientist, as can be seen from the word cloud below:

Word cloud for the most valued skills by data professionals (Source: Packt Skill-Up Survey)

Python's two popular deep learning frameworks, Tensorflow and PyTorch, have gained a lot of attention and adoption in recent times. Along with Keras, another Python library, these are the frameworks most used by data scientists and ML developers for building efficient machine learning and deep learning models.
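To make that concrete, here is a minimal sketch of the kind of model-building code these frameworks enable: a tiny Keras feed-forward classifier trained end to end in a dozen lines. The data here is random noise, purely to show the shape of the API rather than a real task.

```python
import numpy as np
from tensorflow.keras import layers, models

# Toy data: 200 samples with 8 features and binary labels (illustrative only)
X = np.random.rand(200, 8).astype("float32")
y = np.random.randint(0, 2, size=200)

# A small two-layer feed-forward network
model = models.Sequential([
    layers.Input(shape=(8,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```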
Which language/libraries do you use for your everyday Data Science tasks? Do you agree with your peers' choice of tools? Feel free to let us know!

Read more

Data cleaning is the worst part of data analysis, say data scientists
30 common data science terms explained
Top 10 deep learning frameworks

Python Web Development Frameworks: Django or Flask?

Owen Roberts
22 Dec 2015
5 min read
I love Python; I've been using it for close to three years now, after a friend gave me a Raspberry Pi they had grown bored with. In the last year I've also started to seriously get into web development for my own personal projects, but juggling all these different languages can sometimes get a bit too much for me; so this New Year I've promised myself I'm going to get into the world of Python web development.

Python web dev has exploded in the last year. Django has been around for a decade now, but with long-term support and the wealth of improvements we've seen to the framework in just the last year, it's really reaching new heights of popularity. And it's not only Django: Flask's rise to fame has meant that writing a web page doesn't have to involve reams and reams of code either! Both these frameworks are about cutting down on time spent coding without sacrificing quality, but which one do you go for? In this blog I'm going to show you the best bundles to get started with taking Python to the world of the web, with titles I've been recommended - and at only $5 per eBook, hopefully this little hamper list inspires you to give something new a try for 2016!

So, which do you start with, Django or Flask? Let's have a look at each and see what they can do for you.

Route #1: Django

The first route into the world of Python web dev is Django, touted as "the web framework for perfectionists with deadlines". Django is all about clean, pragmatic design and getting to your finished app in as little time as possible. Having been around the longest, it also has a great amount of support, meaning it's perfect for larger, more professional projects.

The best way to get started is with our Django By Example or Learning Django Web Development titles. Both have everything you need to take your first steps in the world of web development in Python, taking what you already know and applying it in new ways. The By Example title is great as it works through 4 different applications to see how Django works in different situations, while the Learning title is a great supplement for learning the key features that need to be used in every application.

Now that the groundwork has been laid, we need to build upon it. With Django we've got to catch up with 10 years of experience and community secrets, fast! Django Design Patterns and Best Practices is filled with some of the community's best hacks and cheats to get the most out of developing with Django, so if you're a developer who likes to save time and avoid mistakes (and who doesn't?!) then this book is the perfect desk companion for any Django lover.

Finally, to top everything off and prepare us for the next steps in the world of Django, why not try a new paradigm with Test-Driven Development with Django? I'm honestly one of those developers that hates having to test right at the end, so being able to break a complex, critical task down into layers of tests as I go just makes more sense to me.

Route #2: Flask

Flask has exploded in popularity in the last year, and it's not hard to see why. With its focus on as much minimal code as possible, it is perfect for developers who are looking to get a quick web page up, as well as those who just hate having to write mountains of code when a single line can do. As an added bonus, the creators of the framework looked at Django and took on board feedback from that community as well, so you get the combined force of two different frameworks at your fingertips.
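To show just how minimal that code can be, here is the stock hello-world pattern from Flask's own documentation: a complete, runnable web page in about ten lines.

```python
from flask import Flask

app = Flask(__name__)

# A single route returning a single string: that's the whole page
@app.route("/")
def index():
    return "Hello, Flask!"

if __name__ == "__main__":
    app.run(debug=True)  # development server only, not for production
```

Save it as app.py, run python app.py, and visit http://localhost:5000. Django, by contrast, starts you off with a full project scaffold: more upfront structure in exchange for more batteries included.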
Flask is easy to pick up but difficult to master, so having a good selection of titles to help you along is the best way to get involved in this new world of Python web dev.

Learning Flask Framework is the logical first step for getting into Flask. Released last month, it has come heartily recommended as the all-in-one first stop for getting the most out of Flask. Want to try a different way to learn, though? The Learning Flask video is a great supplement to the Learning title; it shows us everything we need to start building our first Flask sites in just under 2 hours, almost as quick as it takes the average Flask developer to build their own sites.

The Flask Framework Cookbook is the next logical step as a desktop companion for someone just starting their own projects. Having over 80 different recipes for getting the most out of the framework is essential for those dipping their feet into this new world without worrying about losing everything.

Finally, Flask Blueprints is something a little different, and is especially good for getting the most out of Flask. If you're serious about learning Flask you're likely to pick up everything you need quickly, but the great thing about the framework is how you apply it. The different projects inside this title make sure you can make the most of Flask's best features for every project you might come across!

Want to explore more Python? Take a look at our dedicated Python page. You'll find our latest titles, as well as even more free content.

How is Node.js Changing Web Development?

Antonio Cucciniello
05 Jul 2017
5 min read
If you have been remotely paying attention to what is going on in the web development space, you know that Node.js has become extremely popular and is many developers' choice of backend technology. It all started in 2009 with Ryan Dahl. Node.js is a JavaScript runtime built on Google Chrome's V8 JavaScript engine. Over the past couple of years, more and more engineers have moved towards Node.js for many of their web applications. With plenty of people using it now, how has Node.js changed web development?

Scalability

Scalability is the one thing that makes Node.js so popular. Node.js runs everything in a single thread. This single thread is event-driven (due to JavaScript being the language it is written with) and non-blocking. Now, when you spin up a server in your Node web app, every time a new user connects to the server, that launches an event. That event gets handled concurrently with the other events that are occurring or users that are connecting to the server. In web applications built with other technologies, a large number of users would eventually slow the server down. In contrast, the non-blocking, event-driven nature of a Node application allows for highly scalable applications. Companies that are attempting to scale can build their apps with Node and prevent the slowdowns they may otherwise have had. It also means they do not have to purchase as much server space as someone running a web app that was not developed with Node.

Ease of Use

As previously mentioned, Node.js is written with JavaScript. Now, JavaScript was always used to add functionality to the frontend of applications. But with the addition of Node.js, you can write the entire application in JavaScript. This makes it so much easier to be a frontend developer who can edit some backend code, or a backend engineer who can play around with some frontend code. In turn, it is much easier to become a full stack engineer: you do not really need to know anything new except the basic concepts of how things work in the backend. As a result, we have recently seen the rise of the full stack JavaScript developer. This also reduces the complexity of working with multiple languages; it minimizes any confusion that might arise when you have to switch from JavaScript on the frontend to whatever language would have been used on the backend.

Open Source Community

When Node was released, NPM, the Node package manager, was also given to the public. The Node package manager does exactly what it says on the tin: it allows developers to quickly add and use third-party libraries and frameworks in their code. If you have used Node, then you can vouch for me here when I say there is almost always a package that you can use in your application to make it easier to develop or to automate a larger task. There are packages to help create HTTP servers, help with image processing, and help with unit testing. If you need it, it's probably been made. The even more awesome part about this community is that it's growing by the day, and people are extremely active, contributing to the many open source packages that help developers with various needs. This increases the productivity of all developers using Node in their applications, because they can shift their focus from peripheral concerns to the main purpose of their application.
Aid in Frontend Development

The release of Node did not only benefit the backend side of development; it benefited the frontend as well. New frameworks that can be used on the frontend, such as React.js or virtual-dom, are all installed using NPM. With packages like browserify you can also use Node's require to use packages on the frontend that would normally be used on the backend! You can be even more productive and develop things faster on the frontend too.

Conclusion

Node.js is definitely changing web development for the better. It is making engineers more productive with the use of one language across the entire stack. So, my question to you is: if you have not tried out Node in your application, what are you waiting for? Do you not like being more productive? If you enjoyed this post, tweet about your opinion of how Node.js changed web development. If you dislike Node.js, I would love to hear your opinion as well!

About the author

Antonio Cucciniello is a software engineer with a background in C, C++, and JavaScript (Node.js) from New Jersey. His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software, and reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello.
Top 6 Java Machine Learning/Deep Learning frameworks you can’t miss

Kartikey Pandey
08 Dec 2017
4 min read
The data science tech market is buzzing with new and interesting machine learning libraries and tools almost every day. In such a rapidly growing market, it becomes difficult to choose the right tool or set of tools. More importantly, Artificial Intelligence and Deep Learning based projects require a different approach than traditional programming, which makes it tricky to zero in on one library or framework. The choice of a framework is largely based upon the type of problem one is expecting to solve, but there are other considerations too. Speed is one factor that will more or less always play an important role in decision making. Others include how open-ended the framework is, its architecture, functions, complexity of use, support for algorithms, and so on. Here we present six Java libraries for your next deep learning and Artificial Intelligence project that you shouldn't miss, whether you are a Java loyalist or simply a web developer who wants to enter the world of deep learning.

DeepLearning4j (DL4J)

One of the first commercial-grade, and most popular, deep learning frameworks developed in Java. It also supports other JVM languages (Java, Clojure, Scala). What's interesting about DL4J is that it comes with built-in GPU support for the training process. It also supports Hadoop YARN for distributed application management. It is popular for solving problems related to image recognition, fraud detection, and NLP.

MALLET

MALLET (Machine Learning for Language Toolkit) is an open source Java machine learning toolkit. It supports NLP, clustering, modelling, and classification. The most important capability of MALLET is its support for a wide variety of algorithms such as Naive Bayes and Decision Trees. Another useful feature is its topic modelling toolkit; topic models are useful when analyzing large collections of unlabelled text.

Massive Online Analysis (MOA)

MOA is an open source data streaming and mining framework for real-time analytics. It has a strong and growing community, is similar and related to Weka, and has the ability to deal with massive data streams.

Encog

This framework supports a wide array of algorithms and neural network types, such as artificial neural networks, Bayesian networks, and genetic programming.

Neuroph

Neuroph, as the name suggests, offers great simplicity when working on neural networks. The main USP of Neuroph is its incredibly useful GUI (graphical user interface) tool that helps in creating and training neural networks. Neuroph is a good choice of framework when you have a quick project on hand and you don't want to spend hours learning the theory; it helps you get up and running quickly when putting neural networks to work for your project.

Java Machine Learning Library

The Java Machine Learning Library offers a great set of reference implementations of algorithms that you can't miss for your next machine learning project. Some of the key highlights are support vector machines and clustering algorithms.

These are a few key frameworks and tools you might want to consider for your next research project. The Java ML library ecosystem is vast, with many tools and libraries to support it, and we have just touched the tip of that iceberg in this article. One particular tool that deserves an honourable mention is the Environment for Developing KDD-Applications Supported by Index-Structures (ELKI), designed particularly with researchers and research students in mind.
The main focus of ELKI is its broad coverage of data mining algorithms, which makes it a natural fit for research work. What's really important while choosing any of these tools, or tools outside this list, is a good understanding of your requirements and the problems you intend to solve. To reiterate, some of the key considerations to bear in mind before zeroing in on a tool are support for algorithms, implementation of neural networks, dataset size (small, medium, large), and speed.

Why learn machine learning as a non-techie?

Natasha Mathur
11 Sep 2018
9 min read
"..what we want is a machine that can learn from experience.." ~Alan Turing, 1947

Thanks to artificial intelligence, Turing's vision is coming true. Machines are learning, from others' experience (using training datasets) and from their own as well. Machines can now play chess, Go, and other games; they can help predict cancer, manage your day, summarize today's news for you, edit your essays, identify your face, and even mimic dance moves and facial expressions. Come to think of it, every job role and career demands that you learn from experience, improve over time, and explore new ways to do things. Yes, machines are very effective at the former two, but humans still have an edge when it comes to innovative thinking. Imagine what you could achieve if you put together your mind with that of an efficient learning algorithm!

You might think that artificial intelligence and machine learning form a dense and impenetrable field limited to research labs and textbooks. Does that mean only software engineers and researchers can dream of making it into this fascinating field? Not quite. We'll unpick machine learning in the following sections and present our case for why it makes sense for everyone to understand this field better. Machine learning is, potentially, a first-class ticket to an exciting career, whether you are starting off fresh from college or are considering a career switch.

Beyond the artificial intelligence and machine learning hype

Artificial intelligence is simply an area of computing that solves complex real-world problems. Yes, research still happens in universities, and yes, data scientists are still exploring the limits of artificial intelligence in forward-thinking businesses, but it's much more than that. AI is so pervasive, and mysterious, that its applications hide in plain sight. Look around you carefully. From Netflix recommending personalized content to its 130 million viewers, to YouTube's video search and automatic captions in videos, to Amazon's shopping recommendations, to Instagram hashtags, Snapchat filters, spam filters in your Gmail, and virtual assistants like Siri on our smartphones, artificial intelligence and machine learning techniques are in action everywhere. This means that as a user you are already impacted by algorithms every day. The question, then, is whether you will be the person whose career is limited by algorithms or the one whose career is propelled by them.

Why get into artificial intelligence development as a non-programmer?

Artificial intelligence offers a blend of interesting work, high salaries, and some really great opportunities. A non-programming background does not have to hold back your growth in the AI field. In fact, your background can give you an edge over traditional software developers and data scientists in terms of domain awareness: a better understanding of what the system should do, what it should look for, and how it should make users feel. Below are some reasons why you should make the jump into AI.

Machine learning can help you be better at your current job

How, you may ask? Take a news reporter or editor's job, for example. They must possess a blend of research and analysis capabilities, a creative set of skills, and the speed to come up with timely, quality articles on topics of interest to their readers. A data journalist, or a writer with machine learning experience, could quickly find great topics to write on with the help of machine learning based web scraping apps.
They could also let the data lead them to unique, emerging stories before traditional news reporters find their way to them. They could further get a quick summary of multiple perspectives on a given topic using custom-built news feed algorithms, and then find further research resources by tweaking their search parameters, even adding quality filters on top to only allow high-quality citations. This kind of writer cuts down on the time spent finding and understanding topics, which means more time to actually write compelling pieces and to connect with real sources for further insight. Algorithms can also find and correct language issues in writing now, so editors can spend more time improving content quality from a scope perspective. You can quickly start to see how artificial intelligence can complement the work you do and help you grow in your career.

Yes, all this sounds lovely in theory, but is it really happening in practice?

There are others like you who are successfully exploring machine learning

Don't believe me? Mason Fish, a software engineer at Docker, Inc., was earlier a musician. He did his bachelor's and master's at two different music conservatories. After graduating, he worked for five years as a professional musician. But today he helps build and maintain services for Docker, a tool used by software engineers all over the world! This is just one case of a non-programmer diving into the computer science world. If musicians can learn to code and get core developer jobs in cutting-edge tech companies, it is not far-fetched to say they can also learn to build machine learning models. Below are some examples of non-programmers of varied experience levels who are exploring the machine learning world.

Per Harald Borgen, an economics graduate, was able to boost sales at his workplace, Xeneta, using machine learning algorithms, an accomplishment that helped accelerate his career. You can read his blog to see how he transformed from a machine learning newbie to a seasoned practitioner. Another example is 14-year-old Tanmay Bakshi, who started a YouTube channel at just 7 years of age, where he teaches coding, algorithms, AI, and machine learning concepts. Similarly, Sean Le Van created an AI chatbot when he was 14 years old using ML algorithms.

Rosebud Anwuri is another great example, as she switched from chemical engineering to data science. "My first exposure to Data Science was from a book that had nothing to do with Data Science," writes Anwuri on her blog. She created her first data science learning path from an answer on Quora last year. Fast forward to this year: she has been invited to speak at Stanford's Women in Data Science Conference in Nigeria and has facilitated a workshop at Women in Machine Learning and Data Science, among others. She also writes on machine learning and data science on her blog.

Like Anwuri, Sce Pike dreamed of being an artist or singer in college and majored in fine arts and anthropology. Pike went from art to web design to "human factors design," which involves human-machine interactions, for the telecommunications giant Qualcomm. In addition, Pike started her own company, IOTAS, which offers smart-home services to renters and homeowners. "I have had to approach my work with logic, research, and great design. Looking back, I'm amazed where I am now," says Sce Pike.
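Stories like these raise an obvious question: how much code does a first machine learning model actually take? Here is a minimal sketch using scikit-learn and its bundled iris dataset; the classifier choice is illustrative, not a recommendation.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a small, built-in flower dataset: no data gathering required
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a decision tree and check how well it does on unseen samples
clf = DecisionTreeClassifier().fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```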
Read also: Data science for non-techies: How I got started (Part 1)

Adapt or perish in the oncoming job automation wave of the fourth industrial revolution

OK, so maybe you're happy with how your career is growing anyway. Be warned, though: your job may not look the same even in a few years. Automation is expected to replace up to 30% of jobs in the next 10 years, so upskilling into machine learning is a wise choice. Last month, the Bank of England's Chief Economist warned that 15 million jobs in Britain could be at stake because of artificial intelligence. Machine learning as a skill could help you stay relevant in the future and prepare for what's being called "the third machine age".

You can develop machine learning apps with no to minimal coding experience

Thanks to great advancements by big tech companies and open source projects, machine learning today is accessible to people with varying degrees of programming experience, from new developers to those who have never written a line of code in their life. So whether you're a curious web/UX designer, a news reporter, an artist, a school student, a filmmaker, or an NGO worker, you will find good use for machine learning in your field. There are tools for machine learning for users with varying levels of experience, and certain machine learning applications you can build even today. Some examples are image and text classification with neural networks, facial recognition, gaming bots, music generation, and object detection.

Machine learning skills are highly rewarded

Machine learning is a nascent field where demand far outweighs supply. According to research done by Indeed.com, the number one job requirement in AI is that of a machine learning engineer, with data scientist jobs taking the second spot. In fact, AI researchers can earn more than $1 million per year, and the AI geniuses at Elon Musk's OpenAI are living proof of this: OpenAI paid its top AI researcher, Ilya Sutskever, more than $1.9 million back in 2016, and another leading OpenAI researcher, Ian Goodfellow, was paid more than $800,000.

Machine learning is not hard to learn. It might seem intimidating at first, but once you get the basics right, the rest of the ML journey becomes easier. If you're convinced that ML is for you but are confused about how to get started, don't worry, we've got you covered. To help you get started, here is a non-programmer's guide to learning machine learning.

So, yes, it doesn't matter if you're a non-programmer, a musician, a librarian, or a student; the future is AI-driven, so don't be afraid to make that dive into machine learning. As Robert Frost said, "Two roads diverged in a wood, and I took the one less traveled by, And that has made all the difference."

8 Machine learning best practices [Tutorial]
Google introduces Machine Learning courses for AI beginners
Top languages for Artificial Intelligence development

Why Retailers need to prioritize eCommerce Automation in 2019

Guest Contributor
14 Jan 2019
6 min read
The retail giant Amazon plans to reinvent in-store shopping in much the same way it revolutionized online shopping. Amazon's cashierless stores (3,000 of which you can expect by 2021) give a glimpse into what the future of eCommerce could look like with automation. eCommerce automation involves combining the right eCommerce software with the right processes and the right people to streamline and automate the order lifecycle. This reduces the complexity and redundancy of many tasks that an eCommerce retailer typically faces, from the time a customer places an order until the time it is delivered. Let's look at why eCommerce retailers should prioritize automation in 2019.

1. Augmented Customer Experience + Personalization

A PwC study titled "Experience is Everything" suggests that 42% of consumers would pay more for a friendly, welcoming experience. Way back in 2015, Gartner predicted that nearly 50% of companies would be implementing changes in their business model in order to augment customer experience. This is especially true for eCommerce, and automation certainly represents one of these changes. Customization and personalization of services are a huge boost for customer experience: a BCG report revealed that retailers who implemented personalization strategies saw a 6-10% boost in their sales. How can automation help? To start with, you can automate your email marketing campaigns and make them more personalized by adding recommendations, discount codes, and more.

2. Fraud Prevention

The scope for fraud on the Internet is huge. According to the October 2017 Global Fraud Index, account takeover fraud cost online retailers a whopping $3.3 billion in Q2 2017 alone. The average transaction rate for eCommerce fraud has also been on the rise. eCommerce retailers have been using a number of fraud prevention tools, such as address verification services, CVN (card verification number) checks, credit history checks, and more, to verify the buyer's identity. An eCommerce tool equipped with machine learning capabilities, such as Shuup, can detect fraudulent activity and effectively run through thousands of checks in the blink of an eye. Automating order handling can ensure that these preliminary checks are carried out without fail, in addition to specific checks that assess the riskiness of a particular order. Depending on the nature and scale of the enterprise, different retailers will want to set different thresholds for fraud detection, and eCommerce automation makes that possible. If a medium-risk transaction is detected, the system can be automated to notify the finance department for immediate review, whereas high-risk transactions can be canceled immediately. Automating your eCommerce software processes allows you to break the mold of one-size-fits-all coding and make the solution specific to the needs of your organization.
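As a purely hypothetical sketch (not the API of Shuup or any real platform), threshold-based routing of this kind can be as simple as a function sitting between the fraud-scoring model and the order pipeline; the scores, thresholds, and actions below are all assumed for illustration.

```python
def route_order(order_id: str, risk_score: float) -> str:
    """Decide the automated action for an order, given a fraud risk score in [0, 1]."""
    if risk_score >= 0.8:
        return f"order {order_id}: canceled automatically (high risk)"
    if risk_score >= 0.4:
        return f"order {order_id}: held for finance review (medium risk)"
    return f"order {order_id}: approved (low risk)"

# Example scores, as they might come from whatever fraud model the store uses
for order_id, score in [("A1001", 0.12), ("A1002", 0.55), ("A1003", 0.93)]:
    print(route_order(order_id, score))
```

The retailer-specific part is exactly the thresholds: a small boutique and a marketplace at scale will draw the lines differently, which is the flexibility the section describes.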
3. Better Customer Service

Customer service and support is an essential part of the buying process. Automated customer support does not necessarily mean your customers will get canned responses to all their queries; it means that common queries can be dealt with more efficiently. Live chat and chatbots have become incredibly popular with eCommerce retailers because these features offer convenience to both the customer and the retailer: the retailer's support staff are not held up with routine inquiries, and the customer gets queries resolved quickly. Timely responses are a huge part of what constitutes positive customer experience. Live chat can even be used for shopping carts in order to decrease cart abandonment rates. Automating priority tickets, as well as the follow-up on resolved and unresolved tickets, is another way to automate customer service. With new-generation automated CRM systems and help desk software, you can automate the tags that are used to filter tickets. This saves the customer service rep's time and ensures that priority tickets are resolved quickly. Customer support can thus become a strategic asset in your overall strategy. An Oracle study suggests that back in 2011, 50% of customers would give a brand at most one week to respond to a query before they stopped doing business with them. You can imagine what those same numbers for 2018 would be.

4. Order Fulfillment

Physical fulfillment of orders is prone to errors, as it requires humans to oversee the warehouse selection. With an automated solution, you can set up order fulfillment to match warehouse requirements as closely as possible. This ensures that the closest warehouse which has the required item in stock is selected, which helps guarantee timely delivery of the order to the customer. It could also be set up to integrate with your billing software so as to calculate accurate billing/shipping charges, taxes (for out-of-state or overseas shipments), and so on. Automation can also help manage your inventory effectively: product availability is updated automatically with each transaction, be it a return or the addition of a new product. If the stock of a particular in-demand item were nearing danger levels, your eCommerce software would send an automated email to the supplier asking to replenish its stock ASAP. Automation ensures that your prerequisites for order fulfillment are met for successful processing and delivery of an order.

Challenges with eCommerce automation

The aforementioned benefits are not without risks, as tends to be the case with any evolving concept. One of the most important challenges in making automation work smoothly for eCommerce is data accuracy. As eCommerce platforms gear up for greater multichannel/omnichannel retailing strategies, automation is certainly going to help them bolster and enhance their workflows, but only if the right checks are in place. Automation still has a long way to go, so for now it might be best to focus on automating tasks that take up a lot of time, such as updating inventory, reserving products for certain customers, and updating customers on their orders. Advanced applications such as fraud detection might still take a few years to be truly 'automated' and free from any need for human review. For now, eCommerce retailers still have a whole lot to look forward to.

All things considered, automating the key tasks and functions of your eCommerce platform will impart flexibility, agility, and scope for improved customer experience. Invest wisely in automation solutions for your eCommerce software to stay competitive in the dynamic, unpredictable eCommerce retail scenario.

Author Bio

Fretty Francis works as a Content Marketing Specialist at SoftwareSuggest, an online platform that recommends software solutions to businesses. Her areas of expertise include eCommerce platforms, hotel software, and project management software. In her spare time, she likes to travel and catch up on the latest technologies.
Software developer tops the 100 Best Jobs of 2019 list by U.S. News and World Report
IBM Q System One, IBM's standalone quantum computer unveiled at CES 2019
Announcing 'TypeScript Roadmap' for January 2019 - June 2019
Is Python edging R out in the data science wars?

Amey Varangaonkar
28 Aug 2017
7 min read
When it comes to the 'lingua franca' of data science, there seems to be a face-off between R and Python. R has long been established as the language of researchers and statisticians, but Python has come up quickly as a bona fide challenger, helping embed analytics as a necessity for businesses and other organizations in 2017. If a tech war does exist between the two languages, it's a battle fought not so much on technical features as on the wider changes within modern business and technology. R is a language purpose-built for statistics, for performing accurate and intensive analysis. So the fact that R is being challenged by Python, a language that is flexible, fast, and relatively easy to learn, suggests we are seeing a change in who's actually doing data science, where they're doing it, and what they're trying to achieve.

Python versus R: a closer look

Let's make a quick comparison of the two languages on aspects important to those working with data, and see what we can learn about the two worlds where R and Python operate.

Learning curve

Python is the easier language to learn. While R certainly isn't impenetrable, Python's syntax marks it as a great language to learn even if you're completely new to programming. The fact that such an easy language has come to rival R within data science indicates the pace at which the field is expanding. More and more people are taking on data-related roles, possibly without a great deal of programming knowledge; Python makes the barrier to entry much lower than R. That said, once you get to grips with the basics of R, it becomes relatively easier to learn the more advanced stuff. This is why statisticians and experienced programmers find R easier to use.

Packages and libraries

Many R packages are built in, while Python depends upon a range of external packages. This makes R the more self-contained statistical tool; it means that if you're using Python you need to know exactly what you're trying to do and what external support you're going to need.

Data visualization

R is well known for its excellent graphical capabilities, which make it easy to present and communicate data in varied forms. For statisticians and researchers, the importance of that is obvious: you can perform your analysis and present your work in a way that is relatively seamless. The ggplot2 package in R, for example, allows you to create complex and elegant plots with ease, and as a result its popularity in the R community has increased over the years. Python also offers a wide range of libraries which can be used for effective data storytelling, and the breadth of external packages available means the scope of what's possible is always expanding. Matplotlib has been a mainstay of Python data visualization, and it's also worth remarking on newer libraries like Seaborn, a neat little library that sits on top of Matplotlib, wrapping its functionality and giving you a neater API for specific applications. To sum up, you have sufficient options to perform your data visualization tasks effectively, using either R or Python!

Analytics and machine learning

Thanks to libraries like scikit-learn, Python helps you build machine learning systems with relative ease. This takes us back to the point about barrier to entry: if machine learning is upending how we use and understand data, it makes sense that more people want a piece of the action without having to put in too much effort. But Python also has another advantage: it's great for creating web services where data can be uploaded by different people. In a world where accessibility and data empowerment have never been more important (i.e., where everyone takes an interest in data, not just the data team), this could prove crucial. With packages such as caret, MICE, and e1071, R too gives you the power to perform effective machine learning and get crucial insights out of your data. However, R falls short of Python here, thanks to the latter's superior libraries and more diverse use cases.
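As a small illustration of why scikit-learn lowers that barrier, here is a minimal sketch of its fit-and-score workflow on a bundled dataset; the estimator choice is illustrative rather than a recommendation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A built-in binary classification dataset
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Every scikit-learn estimator follows the same fit/predict/score pattern
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```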
Deep learning

Both R and Python have libraries for deep learning, though it's much easier and more efficient in Python, most likely because the Python world changes much more quickly, with new libraries and tools springing up as quickly as the data science world hooks on to a new buzzword. Theano, and more recently Keras and TensorFlow, have all made a huge impact on making it relatively easy to build incredibly complex and sophisticated deep learning systems. If you're clued up and experienced with R, it shouldn't be too hard to do the same using libraries such as MXNetR, deepr, and H2O; that said, if you want to switch models, you may need to switch tools, which could be a bit of a headache.

Big data

With Python, you can write efficient MapReduce applications with ease, and you can also scale an R program on Hadoop to work with petabytes of data. Both R and Python are equally good when it comes to working with big data, as they can be seamlessly integrated with big data tools such as Apache Spark and Apache Hadoop, among many others. It's likely that it's in this field that we're going to see R moving more and more into industry, as businesses look for a concise way to handle large datasets. This is true in industries such as bioinformatics, which have a close connection with the academic world and necessarily depend upon a combination of size and accuracy when it comes to working with data.
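For a flavour of the Python side of this, here is a minimal PySpark sketch; it assumes a local Spark installation and a hypothetical events.csv file with a country column, both stand-ins for whatever cluster and data you actually have.

```python
from pyspark.sql import SparkSession

# Start (or reuse) a local Spark session
spark = SparkSession.builder.appName("big-data-sketch").getOrCreate()

# events.csv is a hypothetical file: any columnar dataset works the same way
df = spark.read.csv("events.csv", header=True, inferSchema=True)
df.groupBy("country").count().show()  # a simple distributed aggregation

spark.stop()
```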
So, where does this comparison leave us? Ultimately, what we see are two different languages offering great solutions to very different problems in data science. In Python, we have a flexible and adaptable language with a vibrant community of developers working on a huge range of problems and tasks, each one trying to find more effective and more intelligent ways of doing things. In R, we have a purely statistical language with a large repository of over 8,000 packages for data analysis and visualization. While Python is production-ready and better suited to organizations looking to harness technical innovation to their advantage, R's analytical and data visualization capabilities can make your life as a statistician or data analyst easier. Recent surveys indicate that Python commands a higher salary than R; that is because it's a language that can be used across domains, a problem-solving language. That's not to say that R isn't a valuable language; rather, Python is the language that just seems to fit the times at the moment.

In the end, it all boils down to your background and the kind of data problems you want to solve. If you come from a statistics or research background and your problems revolve around statistical analysis and visualization, then R will best fit your bill. However, if you're a computer science graduate looking to build a general-purpose, enterprise-wide data model that can integrate seamlessly with other business workflows, you will find Python easier to use. R and Python are two different animals. Instead of comparing the two, maybe it's time we understood where and how each can best be used, and harnessed their power to the fullest to solve our data problems. One thing is for sure, though: neither is going away anytime soon. Both R and Python occupy a large chunk of the data science market share today, and it will take a major disruption to take either one of them out of the equation completely.

Deep Learning in games - Neural Networks set to design virtual worlds

Amey Varangaonkar
28 Mar 2018
4 min read
Games these days are closer to reality than ever. Life-like graphics, smart gameplay, and realistic machine-human interactions have led major game studios to up the ante when it comes to adopting the latest and most up-to-date tech for developing games. In fact, not so long ago, we shared with you a few interesting ways in which Artificial Intelligence is transforming the gaming industry. The inclusion of deep learning in games has emerged as one popular way to make games smarter. Deep learning can be used to enhance the realism and excitement in games by teaching game agents how to behave more accurately and in a more life-like manner.

We recently came across this interesting implementation of deep learning to play the game FIFA 18, and we were quite impressed! Using just 2 layers of neural networks and a limited amount of training, the bot that was developed managed to learn the basic rules of football (soccer). Not just that, it was also able to perform the basic movements and tasks in the game correctly. To achieve this, 2 neural networks were developed: a Convolutional Neural Network to detect objects within the game, and a second layer of LSTM (Long Short Term Memory) network to specify the movements accordingly.
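The write-up does not include the author's code, but the architecture it describes can be sketched in Keras along these lines; the frame size, sequence length, and action set below are all assumptions for illustration, not the experiment's actual values.

```python
import numpy as np
from tensorflow.keras import layers, models

NUM_ACTIONS = 4  # e.g. move, pass, shoot, tackle (assumed action set)

# A small CNN that turns one 128x128 RGB game frame into a feature vector
frame_encoder = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

# An LSTM over a sequence of 10 encoded frames picks the next action
model = models.Sequential([
    layers.Input(shape=(10, 128, 128, 3)),
    layers.TimeDistributed(frame_encoder),
    layers.LSTM(64),
    layers.Dense(NUM_ACTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Dummy batch: 2 clips of 10 frames each, one action label per clip
clips = np.random.rand(2, 10, 128, 128, 3).astype("float32")
actions = np.array([0, 2])
model.fit(clips, actions, epochs=1, verbose=0)
```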
The same user also managed to leverage deep learning to improve the in-game graphics of FIFA 18. Using the deepfakes algorithm, he managed to swap the in-game face of one of the players with the player's real-life face. The reason? The in-game faces, although quite good, could be more realistic. The experiment was a near-perfect success: the resulting face was strikingly convincing. How did he do it? After gathering some training data, basically images of players scraped off Google, the user trained two autoencoders which learnt the distinction between the in-game face and the real-world face. Then, using the deepfakes algorithm, the inputs were reversed, recreating the real-world face in the game itself. The difference is quite astonishing.

Apart from improving the gameplay and the in-game character graphics, deep learning can also be used to enhance the way opponents and adversaries interact with the player in the game. If we take the example of the FIFA game mentioned before, deep learning can be used to enhance the behaviour and appearance of the in-game crowd, who can react or cheer for their team more convincingly as the match unfolds.

How can Deep Learning benefit video games?

The following are some of the clear advantages of implementing deep learning techniques in games:

Highly accurate results can be achieved with more and more training data
Manual intervention is minimal
Game developers can focus more on effective storytelling than on the in-game graphics

An obvious question comes to mind at this stage, however: what are the drawbacks of implementing deep learning in games? A few come to mind immediately:

Complexity of the training models can be quite high
Images in games need to be generated in real time, which is quite a challenge
The computation time can be quite significant
The training dataset for achieving accurate results can be quite humongous

With advancements in technology and better, faster hardware, many of the current limitations in developing smarter games can be overcome. Fast generative models can address the real-time generation of images, while faster graphics cards can take care of the model computation issue. All in all, dabbling with deep learning in games seems worth the punt, and game studios should definitely think about taking it. What do you think? Is incorporating deep learning techniques in games a scalable idea?