Tech Guides

Top 5 cybersecurity myths debunked

Guest Contributor
11 Jul 2018
6 min read
Whether it's for work or pleasure, we are all spending more time online than ever before. Given how advanced and user-friendly modern technology is, it is not surprising that the online world has come to dominate the offline. However, as our lives are increasingly digitized, the need to keep ourselves and our information secure from criminals has become increasingly obvious.

Recently, Exactis, a virtually unknown marketing and data-aggregation company, fell victim to a major data breach. According to reports, the company may have exposed up to 340 million individual records on a publicly accessible server. In this day and age, data breaches are not a rare occurrence: major corporations face cybersecurity problems on a daily basis, and there is clearly a thriving criminal market for hackers. But how can the average internet user keep safe? Knowing these 5 myths will definitely help you get started!

Myth 1: A firewall keeps me safe

As you would expect, hackers know a great deal about computers, and the purpose of what they do is to gain access to systems they should not have access to. According to research in the Breach Investigations Report, cybersecurity professionals regard only 17% of threats as highly challenging; this implies that they view the vast majority of what they do as very easy. All businesses and organizations should maintain a firewall, but it should not lull you into a false sense of security. A determined hacker will use a variety of online and offline techniques to get into your systems. Just last month, Cisco, a well-known tech company, discovered 24 security vulnerabilities in its firewalls, switches, and security devices; on June 20, the company released the updates needed to counteract those vulnerabilities. While firewalls are a useful security measure, it is essential to understand that they are susceptible to zero-day attacks: unknown or newly designed intrusions that target vulnerabilities before a security patch is released.

Myth 2: HTTPS means I'm secure

Sending information over an HTTPS connection means that the information will be encrypted, preventing snooping by outside parties. HTTPS ensures that data is safe as it is transferred between a web server and a web browser. But while HTTPS will keep your information from being read by a third party in transit, it is not invulnerable. Though the HTTPS protocol was developed to ensure secure communication, the infamous DROWN attack proved everyone wrong: as a result of DROWN, more than 11 million HTTPS websites had their virtual security compromised. Remember, from the perspective of a hacker looking for a way to exploit your website, the notion of unbreakable or unhackable does not exist.
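If you want to see what HTTPS actually guarantees, you can inspect the TLS certificate a server presents. Below is a minimal sketch using only Python's standard library; the hostname is just a placeholder. Keep in mind that a valid certificate only tells you the connection is encrypted and the server is who it claims to be, not that the site behind it is secure.

```python
import socket
import ssl

def get_certificate_info(hostname, port=443):
    """Fetch and summarize the TLS certificate a server presents."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return {
        "subject": dict(item[0] for item in cert["subject"]),
        "issuer": dict(item[0] for item in cert["issuer"]),
        "expires": cert["notAfter"],
    }

# "example.com" is a placeholder; point this at any HTTPS site.
print(get_certificate_info("example.com"))
```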
Myth 3: My host ensures security

This is a statement that's never true. Hosting service providers are responsible for thousands of websites, so it is absurd to think that they can manage security on each one individually. They might have some excellent general security policies in place, yet they can't ensure total security, for quite a few reasons. Just like any other company that collects and maintains data, hosting providers are themselves susceptible to cyber attacks. Just last year, Deep Hosting, a Dark Web hosting provider, suffered a security breach that led to some sites being exported. It's best not to assume that your host has your security covered. If you haven't set the protections up yourself, consider them non-existent until you've seen and configured them.

Myth 4: No internet connection means no virtual security threats

This is a pervasive myth, but a myth nonetheless. Unless you are dealing with a machine that is literally never allowed to connect to a network, at some point it will communicate with other computers, and whenever that happens, there is the potential for malware and viruses to spread. In some instances, malware can infect your operating system via physical data-sharing devices like USB drives or CDs. Infecting your computer with malware could have detrimental outcomes; for instance, a ransomware application can encrypt vast quantities of data in just a few moments. Your best bet for maintaining a secure system at all times is to run a reliable antimalware tool on your computer. Don't assume that just because a computer has remained offline, it can't be infected. In 2013, the first reports came in that scientists had developed a prototype malware that might be able to use inaudible audio signals to communicate. As a result, a malicious piece of software could communicate with, and potentially spread to, computers that are not connected to a network.

Myth 5: A VPN ensures security

VPNs can be an excellent way of improving your overall online security by hiding your identity and making you much more difficult to trace. However, you should always be very careful about the VPN services you use, especially if they are free. Many free VPNs exist for nefarious purposes: they might be hiding your IP address (many are not), but their primary function is to siphon away your personal data, which they will then sell. The simplest way to avoid these types of theft is, first of all, to thoroughly research and vet any service before using it. Check this list to be sure that the VPN service of your choice does not log data. A VPN's selling point is often security and privacy; however, that's not always the case. Not too long ago, PureVPN, a service that stated in its policies that it maintained a strict no-log approach at all times, was exposed as lying. As it turns out, the company handed over information to the FBI regarding the activity of a cyberbully, Ryan Lin, who had used a number of security tools, including PureVPN, to conceal his identity.

Many users have fallen prey to virtual security myths and suffered detrimental consequences. Cybersecurity is something we should all take more seriously, especially as we are putting more of our lives online than ever before. Knowing the above 5 cybersecurity myths is a useful first step toward implementing better practices yourself.

About the author

Harold Kilpatrick is a cybersecurity consultant and a freelance blogger. He's currently working on a cybersecurity campaign to raise awareness around the threats that businesses can face online.

Cryptojacking is a growing cybersecurity threat, report warns
Top 5 cybersecurity assessment tools for networking professionals
How can cybersecurity keep up with the rapid pace of technological change?

Python Data Visualization myths you should know about

Savia Lobo
02 Nov 2018
4 min read
In recent years, we have experienced exponential growth in data. As the amount of data grows, the need for developers with knowledge of data analytics, and especially data visualization, spikes. Data visualizations help in getting a clear and concise view of the data, making it more tangible for (non-technical) audiences.

MATLAB and R are the two languages that have traditionally been used for data science and data visualization. However, Python is the most requested and used language in the industry. Its ease of use, the speed at which you can manipulate and visualize data, and the number of available libraries make Python the best choice.

So data visualization seems easy, doesn't it? However, there are a lot of myths surrounding it. Let us have a look at some of them.

Myth 1: Data visualizations are just for data scientists

Today's data visualization libraries are very convenient, so any person can create meaningful visualizations in just a few minutes.

Myth 2: Data visualization technologies are difficult to learn

Of course, building and designing sophisticated data visualizations will take some work and learning, but with very little knowledge of the libraries and what they are capable of, you can create simple visualizations that will help you get valuable insights into your data. Python is a comparatively easy language, and the "pythonic" approach is also used when building visualization libraries for Python, which makes them easy to understand and use.

Myth 3: Data visualization isn't needed for data insights

Imagine having a table of data with 20 columns and several thousand rows. What do you think will give you better insight into this data: just looking at the table and trying to make sense of all the columns and values in them, or creating some simple plots that visualize its content? Of course, you could force yourself to get insights without visualizations, but the key is to work smarter, not harder.

Myth 4: Data visualization takes a lot of time

If you have a basic understanding of your data, you can create some basic visualizations in no time. There are a lot of libraries, which are covered in the course, that allow you to simply import some data and build visualizations in a few lines of code. The more difficult part is creating visualizations that are descriptive and display the concepts you want to show, but don't worry, this is discussed in detail in the course as well.
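To see just how little code a basic visualization needs, here is a minimal, hedged sketch using pandas and Matplotlib. The file name and column names are placeholders for your own data, not an example taken from the course.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical CSV with 'date' and 'sales' columns; swap in your own data.
df = pd.read_csv("sales.csv", parse_dates=["date"])

# One line turns the raw table into something an audience can actually read.
df.plot(x="date", y="sales", kind="line", title="Daily sales")
plt.tight_layout()
plt.show()
```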
Amidst all the myths, data visualization in combination with Python is an essential skill when working with data. When properly utilized, it is a powerful combination that not only enables you to get better insights into your data but also gives you the tools to communicate results better. Head over to our course titled 'Data Visualization with Python' to use Python with NumPy, pandas, Matplotlib, and Seaborn to create impactful data visualizations with real-world, public data.

About Tim and Mario

Tim Großmann is a CS student with interest in diverse topics ranging from AI to IoT. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of big data engineering. He's highly involved in different open source projects and actively speaks at meetups and conferences about his projects and experiences.

Mario Döbler is a graduate student with a focus on deep learning and AI. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of deep learning. Currently, he dedicates himself to applying deep learning to medical data to make health care accessible to everyone.

4 tips for learning Data Visualization with Python
Setting up Apache Druid in Hadoop for Data visualizations [Tutorial]
8 ways to improve your data visualizations

Docker has turned us all into sysadmins

Richard Gall
29 Dec 2015
5 min read
Docker has been one of my favorite software stories of the last couple of years. On the face of it, it should be pretty boring. Containerization isn't, after all, as revolutionary as most of the hype around Docker would have you believe. What's actually happened is that Docker has refined the concept and found a really clear way of communicating the idea.

Deploying applications and managing your infrastructure doesn't sound immediately 'sexy'. After all, it was data scientist that was proclaimed the sexiest job of the twenty-first century; sysadmins hardly got an honorable mention. But Docker has, amazingly, changed all that. It's started to make sysadmins sexy. And why should we be surprised? If a sysadmin's role is all about delivering software, managing infrastructure, maintaining it, and making sure it performs for the people using it, it's vital (if not obviously sexy). A decade ago, when software architectures were apparently immutable and much more rigid, the idea of administration wasn't quite so crucial. But now, in a world of mobile and cloud, where technology is about mobility as much as it is about stability (in the past, tech glued us to desktops; now it encourages us to work in the park), sysadmins are crucial.

Tools like Docker are central to this. By letting us isolate and package applications in their component pieces, we can start using software in a way that's infinitely more agile and efficient. Where once the focus was on making sure software was simply 'there', waiting for us to use it, it's now something that actively invites invention, reconfiguration, and exploration. Docker's importance to the 'API economy' (which you're going to be hearing a lot more about in 2016) only serves to underline its significance to modern software. Not only does it provide 'a convenient way to package API-provisioning applications', but it also 'makes the composition of API-providing applications more programmatic', as this article on InfoWorld has it. Essentially, it's a tool that unlocks and spreads value.

Can we, then, say the same about the humble sysadmin? Well, yes: it's clear that administering systems is no longer a matter of simple organization, a question of robust management, but a business-critical role that can be the difference between success and failure. What this paradigm shift really means, however, is that we've all become sysadmins. Whatever role we're working in, we're deeply conscious of the importance of delivery and collaboration. It's not something we expect other people to do; it's something that we know is crucial. And it's for that reason that I love Docker. It's being used across the tech world, a gravitational pull bringing together disparate job roles in a way that's going to become more and more prominent over the next 12 months. Let's take a look at just two of the areas in which Docker is going to have a huge impact.

Docker in web development

Web development is one field where Docker has already taken hold. It's changing the typical web development workflow, arguably making web developers more productive. If you build in a single container on your PC, that container can then be deployed and managed anywhere. It also gives you options: you can build different services in different containers, or you can build a full-stack application in a single container (although Docker purists might say you shouldn't).
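To make that workflow concrete, here is a hedged sketch of a Dockerfile for a hypothetical Python web app; app.py, requirements.txt, and the port are placeholders rather than anything the article prescribes.

```dockerfile
# A minimal image for a hypothetical Python web app.
FROM python:3

WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code and declare how to run it.
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

Built once with `docker build -t myapp .`, the resulting image runs the same way on your laptop as on a production server via `docker run -p 8000:8000 myapp`.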
In a nutshell, it's this ability to separate an application into its component parts that underlines why microservices are fundamental to the API economy: different 'bits', the services, can be used and shared between different organizations. Fundamentally, though, Docker bridges the difficult gap between development and deployment. Instead of having to worry about what happens once it has been deployed, when you build inside a container you can be confident that it's going to work wherever you deploy it. With Docker, delivering your product is easier (essentially, it helps developers manage the 'ops' bit of DevOps in a simpler way than tackling the methodology in full), which means you can focus on the specific process of development and on optimizing your products.

Docker in data science

Docker's place within data science isn't quite as clearly defined or fully realised as it is in web development. But it's easy to see why it would be so useful to anyone working with data. What I like is that with Docker, you really get back to the 'science' of data science: it's the software version of working in a sterile and controlled environment. This post provides a great insight into just how useful Docker is for data; admittedly, it wasn't something I had thought that much about, but once you do, it's clear just how simple it is. As the author puts it: 'You can package up a model in a Docker container, go have that run on some data and return some results - quickly. If you change the model, you can know that other people will be able to replicate the results because of the containerization of the model.'

Wherever Docker rears its head, it's clearly a tool that can be used by everyone. However you identify (web developer, data scientist, or anything else for that matter), it's worth exploring and learning how to apply Docker to your problems and projects. Indeed, the huge range of Docker use cases is possibly one of the main reasons that Docker is such an impressive story: the fact that there are thousands of other stories all circulating around it. Maybe it's time to try it and find out what it can do for you?

Mark Reinhold on the evolution of the Java platform and OpenJDK

Sugandha Lahoti
02 Aug 2018
5 min read
Yesterday, Mark Reinhold, Chief Architect of the Java Platform Group and technical lead of OpenJDK, talked about both the short-term and long-term technical roadmap of Java and the JDK. He was speaking at the ongoing OpenJDK Committers' Workshop, which meets twice a year to discuss the state of the OpenJDK Community and the JDK technical roadmap.

After decades as one of the world's most popular programming languages, you'd be forgiven for thinking Java might be slowing down, especially with younger languages like Kotlin jostling for position in the popularity stakes. However, there's plenty of life in it yet. Mark explained what Java's future might look like and how developers can influence its growth for the better.

Who is in charge of the future of Java and OpenJDK?

Mark believes that the success of the Java platform depends on contributors focusing on the big picture. The leaders who guide the development of the platform are not merely developers who are only interested in writing code or developing new features; the true leaders are what Mark likes to call 'stewards'. Stewards are people who assume responsibility for overseeing and protecting something considered worth caring for and preserving. They try to preserve the past while evolving toward the future.

A developer is considered a steward if they demonstrate 3 key qualities:

- Deep knowledge in at least one key area.
- Breadth of care across the platform: they think from time to time about the entire platform and how the whole thing fits together.
- Empathy: they have the ability to put themselves in the minds of ordinary developers who use the platform rather than work on the platform.

In the case of OpenJDK, stewards are effectively in charge of the development of the platform. These stewards are led by Mark Reinhold, supported by John Rose for the Java Virtual Machine and Brian Goetz for the language and libraries. Apart from them, many other developers who demonstrate the 3 key qualities above contribute to stewardship as part of their day-to-day work. Every single one of them has a deep, long-term track record of expertise in at least one area, combined with a breadth of care for the entire platform and the ability to empathize with ordinary developers.

Stewards ensure reliability and compatibility

The stewardship of the Java platform is guided by two key values. First, it's about thinking in terms of long-term goals and working to balance conservation with innovation. Second, it's about preserving the values of readability and compatibility.

Readability is essential to maintainability. This means you don't think about the code from a short-term perspective. Thinking about the long-term reliability of the code you're writing is vital, not least because it makes life easier for other people using the software in the future.

Compatibility is similar. It's all about recognizing that software doesn't exist in a vacuum; it exists in an ecosystem of tools and developers. There are a number of different types of compatibility that highlight what it means in practice:

- Source: existing code continues to compile.
- Binary: existing code continues to link at run time.
- Behavior: existing APIs continue to behave within the bounds of their specifications.
- Migration: a new feature can be adopted incrementally.
- Intellectual: new features are built on existing knowledge. Add selective features, but make them look like they have been there all along.

The Java platform ensures that stewards strive to balance conservation and innovation.
It's only through balance that the project can maintain its core values of readability and compatibility.

How stewards guide the Java platform

As Mark pointed out, it takes considerable solitary thinking, maybe months or even years, before an idea takes off. Even then, it needs to be discussed intensively with other stewards. The fruits of these discussions surface in two ways that ensure visibility and transparency: as new JEPs in the JEP process, and as new OpenJDK projects that explore a problem area in depth, eventually generating more JEPs, which later wind up as features.

Transparency is essential. Anyone is free to make an appeal if they don't like a decision; in fact, if you don't agree with a decision the JDK lead makes, you are also free to appeal to the OpenJDK Governing Board.

How you can influence the evolution of Java

All developers, external contributors, and organizations have the opportunity to influence the direction of the Java platform. The degree of that influence is determined by the degree of the contributions made in the JDK community on a meaningful and ongoing basis. This includes detailed bug reports, constructive critiques, bug fixes, small enhancements, and entire non-trivial JEPs. If you only participate in order to serve your own or your employer's narrow technical interests, then you are unlikely to gain much influence. However, if you deliver a strong track record of consistent, serious contributions over a long period of time, then your influence will grow quite large, and you might even become a steward yourself.

The OpenJDK community has been going strong over the past years under the leadership of the Java stewards. You can watch the entire conference on YouTube for a review of life in the OpenJDK Community and a quick look at what's ahead for the Java platform in general.

Oracle announces a new pricing structure for Java
Oracle reveals issues in Object Serialization. Plans to drop it from core Java.
5 Things you need to know about Java 10

Why you should analyze user-behavior data before developing a mobile app

Guest Contributor
16 Jan 2019
6 min read
What is the first thing that comes to mind when we say "mobile app"? If you are a user, you are probably thinking of something convenient that eases your life. In a business context, however, it is an idea that can be converted into an app model and help boost your profitability. When successful entrepreneurs launch their original idea, they do not just design and develop it for the market; they research, understand the market minutely, and, more importantly, study the users in depth. One part of what leads you to success is a complete understanding of the user. Here, we will try to understand why user behavior analysis is important and how you can best go about it.

Why analyze user behavior?

Instead of asking this question, let's ask the most important question: who are you designing the app for? The users of the app will be members of the target audience, and technically it is for them that you are planning the app layout and coding your all-new idea. That being the case, you need to ensure the app is usable for them and that they find it convenient. You need to understand every aspect of user behavior, ranging from how they use the app to what engages them. Analysis of user behavior will help you design the UX accordingly and allow you to deliver effective app solutions. For this, we need to identify the different ways in which you can identify user behavior and what you need to consider in order to deliver a perfect app solution.

4 effective ways to analyze user behavior data

Here are four effective ways that will help you analyze user behavior data so you can design and develop a mobile app accordingly.

The app goal: Whenever users open an app, they do it with a specific goal in mind. For instance, when you use Uber, you are choosing travel convenience and avoiding haggling with the driver over the fare: the Uber app allows its users to book a ride with ease and know the fare beforehand. When you are designing for the user, you need to understand the goal they are attempting to achieve with the app and how best you can help them achieve it, in the simplest way possible.

The mobile usage: While designing an app, you need to understand how your users physically handle their phones. What is most convenient for them? For instance, 79% of people use their left hand instead of their right to cradle their phone or use apps. Have you considered them while designing the app? Most people prefer portrait mode for certain apps; however, when they are viewing videos, they prefer to hold the phone in landscape mode. If your app does not change the view according to these preferences, then you are likely to lose customers. Do users use their thumb to tap the buttons on the screen, or do they use a finger? How do they navigate through the screens? Do they hold the phone in one hand or cradle it? When you are able to answer these questions, you have nailed the design strategy: you will know just where to place the buttons and how to design the interactions. There are areas of the mobile screen that are effectively inaccessible; if you place buttons or other clickable elements in those parts of the screen, you are blocking access to the mobile app.

Acknowledge feedback: What do users like most about the mobile app in general, and what aspects frustrate them? For instance, there are mobile app designs that don't connect well with the user.
An app that takes more than 3 seconds to load can be frustrating, and if images don't load quickly, the app can be discarded immediately. This is especially true for e-commerce apps, as they contain lots of images and people tend to expect an immediate response from them. When users give you their feedback, make sure you incorporate it into the app.

The motivations: Finally, you need to take into account users' motivations for using the app and completing an action. What makes them want to click on the 'buy now' or action button in your app? Study your users. For some users, safety is the predominant motivator, while for others, it is the value for money the app delivers. Along with the motivators, there are barriers too, which you need to consider in order to design the best user-centric app for the business idea.

Having identified different ways to understand user behavior, let's now talk about two simple methods that can be used to collect and analyze user behavior data.

Questionnaire: Prepare a questionnaire including questions like: What do you like best about our app? Which other apps would you use as an alternative to ours? What do you want us to improve? The possible questions are endless, but make sure they give you insights into your users. Spread this questionnaire among a group of people, and based on their answers, you can derive user-behavior data and develop the mobile app accordingly.

Mobile app analytics platforms: Another method is to use mobile app analytics platforms. Prepare a navigation flow (a flowchart of all the app screens), submit it to a mobile app analytics platform, and identify how users move from one screen to another. Through the navigation flow of your app, you can identify how users interact with each screen and how they move through your app. This data will help you understand user behavior and make data-driven changes.

Conclusion

Analyzing user behavior must always be a high priority for businesses that want to build a successful app and grow over time. When it comes to analyzing user behavior, top companies and brands like Uber, Airbnb, Pinterest, and Starbucks are using AI (artificial intelligence) to provide a personalized experience to their users. Through AI and machine learning, businesses can learn about customer or user behavior on a deeper level and get help in delivering a better application. The possibilities are endless. The point is: are you utilizing already existing data to optimize the overall process?

Author Bio

Yuvrajsinh is a Marketing Manager at Space-O Technologies, a firm with expertise in developing Uber-like apps. He spends most of his time researching mobile app and startup trends. He is a regular contributor to popular publications like Entrepreneur, YourStory, and Upwork. If you have any confusion or questions, or need any consultation regarding the mobile app development process, feel free to contact him.

5 UX design tips for building a great e-commerce mobile app
4 key benefits of using Firebase for mobile app development
9 reasons to choose Agile Methodology for Mobile App Development

6 use cases of Machine Learning in Healthcare

Sugandha Lahoti
10 Nov 2017
7 min read
While hospitals have sophisticated processes and highly skilled administrators, management can still be an administrative nightmare for already time-starved healthcare professionals. A sprinkle of automation can do wonders here: it could free up a practitioner's invaluable time and thereby allow them to focus on tending to critically ill patients and complex medical procedures. At the most basic level, machine learning can mechanize routine tasks such as documentation, billing, and regulatory processes. It can also provide ways and tools to diagnose and treat patients more efficiently. However, these tasks only scratch the surface. Machine learning is here to revolutionize healthcare and allied industries such as pharma and medicine. Below are some ways it is being put to use in these domains.

Helping with disease identification and drug discovery

Healthcare systems generate copious amounts of data and use them for disease prediction. However, the software necessary to generate meaningful insights from this unstructured data is often not in place, so drug and disease discovery end up taking time. Machine learning algorithms can discover signatures of diseases at rapid rates by allowing systems to learn and make predictions based on previously processed data. They can also be used to determine which chemical compounds could work together to aid drug discovery, eliminating the time-consuming process of experimenting on and testing millions of compounds. With faster discovery of diseases, the chances of detecting symptoms earlier and the probability of survival increase, and the range of available treatment options is boosted as well. IBM has collaborated with Teva Pharmaceutical to discover new treatment options for respiratory and central nervous system diseases using machine learning techniques such as predictive and visual analytics running on the IBM Watson Health Cloud. To gain more insights on how IBM Watson is changing the face of healthcare, check this article.
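To make the idea of learning from previously processed records concrete, here is a hedged sketch of a disease classifier; scikit-learn's built-in breast cancer dataset stands in for real patient data, and the model choice is purely illustrative, not what any company mentioned here uses.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for historical patient records: measurements plus a diagnosis label.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn disease "signatures" from the processed records...
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# ...and predict diagnoses for patients the model has never seen.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```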
Enabling precision medicine

Precision medicine revolves around healthcare practices specific to a particular patient. This includes analyzing a person's genetic information, health history, environmental exposure, and needs and preferences to guide diagnosis and subsequent treatment. Here, machine learning algorithms are used to sift through vast databases of patient data to identify factors, such as genetic history and predisposition to disease, that could strongly determine treatment success or failure. ML techniques in precision medicine exploit molecular and genomic data to assist doctors in directing therapies to patients and to shed light on disease mechanisms and heterogeneity. They can also predict which diseases are likely to occur in the future and suggest methods to avoid them. Cellworks, a life sciences technology company, offers a SaaS-based platform for generating precision medicine products. Their platform analyses the genomic profile of the patient and then provides patient-specific reports for improved diagnosis and treatment.

Assisting radiology and radiotherapy

CT and MRI scans for radiological diagnosis and interpretation are burdensome and laborious (not to mention time-consuming). They involve segmentation (differentiating between healthy and infected tissue), which, when done manually, has a good probability of resulting in errors and misdiagnosis. Machine learning algorithms can speed up the segmentation process while also increasing accuracy in radiotherapy planning. ML can give physicians information for better diagnostics, which helps in obtaining accurate tumor locations, and it can predict radiotherapy response to help create a personalized treatment plan. Apart from this, ML algorithms find use in medical image analysis because they learn from examples. This involves classification techniques that analyze images and available clinical information to generate the most likely diagnosis. Deep learning can also be used to detect lung cancer nodules in early screening CT scans and to display the results in ways that are useful in clinical practice. Google's machine learning division, DeepMind, is automating radiotherapy treatment for head and neck cancers using scans from almost 700 diagnosed patients: an ML algorithm compares the scans of symptomatic patients against these previous scans to help physicians develop a suitable treatment process. Arterys, a cloud-based platform, automates cardiac analysis using deep learning.

Providing neurocritical care

A large number of neurological diseases develop gradually or in stages, so the decay of the brain happens over time. Traditional approaches to neurological care, such as peak activation, EEG epileptic spikes, and pronator drift, are not accurate enough to diagnose and classify neurological and psychiatric disorders. This is because they are typically used to assess end results rather than for progressive analysis of how a brain disease develops. Moreover, timely, personalized neurological treatment and diagnosis rely heavily on the constant availability of an expert. Machine learning algorithms can advance the science of detection and prediction by learning how the brain progressively develops these conditions. Deep learning techniques are applied in the area of neuroimaging to detect abstract and complex patterns from single-subject data in order to detect and diagnose brain disorders. Machine learning techniques such as SVMs, RBFNs, and random forests are combined with PDTs (pronator drift tests) to detect stroke symptoms, quantifying proximal arm weakness using inertial sensors and signal processing. Machine learning algorithms can also be used to detect signs of dementia before its onset. The Douglas Mental Health University Institute uses PET scans to train ML algorithms to spot signs of dementia by analyzing them against scans of patients who have mild cognitive impairment; they then run the scans of symptomatic patients through the trained algorithm to predict the likelihood of dementia.

Predicting epidemic outbreaks

Epidemic prediction traditionally relies on manual accounting. This includes self-reports or the aggregation of information from healthcare services, such as reports by health protection agencies like the CDC, the NHIS, the National Immunization Survey, and so on. However, these methods are time-consuming and error-prone, which makes predicting and prioritizing outbreaks challenging. ML algorithms can automatically perform analysis, improve calculations, and verify information with minimal human intervention. Machine learning techniques like support vector machines and artificial neural networks can predict the epidemic potential of a disease and provide alerts for disease outbreaks. They do this using data collected from satellites, real-time social media updates, historical information on the web, and other sources.
They also use geospatial data, such as temperature, weather conditions, wind speed, and other data points, to predict the magnitude of the impact an epidemic could have on a particular area and to recommend measures for preventing and containing outbreaks early on. AIME, a medical startup, has come up with an algorithm to predict the outbreak, and even the epicenter, of epidemics such as dengue fever before they occur.

Better hospital management

Machine learning can bring about a change in traditional hospital management systems by reimagining hospitals as digital, patient-centric care centers. This includes automating routine tasks such as billing, admission and discharge, and monitoring patients' vitals. With administrative tasks out of the way, hospital authorities can fully focus on the care and treatment of patients. ML techniques such as computer vision can be used to feed all of a patient's vital signs directly into the EHR from the monitoring devices. Smart tracking devices are also used on patients to provide their real-time whereabouts. Predictive analysis techniques work on a continuous stream of real-time images and data, sensing risk and prioritizing activities for the benefit of all patients. ML can also automate non-clinical functions, including pharmacy, laundry, and food delivery. The Johns Hopkins Hospital has its own command center that uses predictive analytics to keep operations flowing efficiently.

Conclusion

The digital health era focuses on health and wellness rather than disease. The incorporation of machine learning in healthcare provides an improved patient experience and better public health management, and it reduces costs by automating manual labor. The next step in this amalgamation is a successful collaboration between clinicians, doctors, and machines. This would bring about a futuristic health revolution with improved, precise, and more efficient care and treatment.

How AI is transforming the manufacturing industry

Kunal Parikh
15 Dec 2017
7 min read
After more than five decades of incremental innovation, the manufacturing sector is finally ready to pivot, thanks to Industry 4.0. Self-aware technology plays a key role in ushering in Industry 4.0, and AI in the manufacturing sector aims to deliver exactly that: systems that can perceive their environment and act accordingly. One of the prominent minds in AI, Andrew Ng, believes factories to be AI's next frontier! Andrew is on a mission to AI-ify manufacturing with his new start-up, Landing.AI. For this initiative, he has partnered with Foxconn, the world's largest contract manufacturer and maker of Apple iPhones. Together, they aim to develop a wide range of AI transformation programs, from the introduction of new technologies and operational processes to automated quality control and much more.

In this AI-powered industrial revolution, machines are becoming smarter and interconnected. Manufacturers are using the embedded intelligence of machines to collect and analyze data and generate meaningful insights, which are then used to run equipment efficiently and optimize the workflows of operations and supply chains, among other things. Thus, AI is leaving an indelible mark across the manufacturing cycle. Further, a new wave of automation is transforming the role of the human workforce, with AI-driven robots enabling production 24 hours a day. This is helping industrial environments gear up for the shift toward the smart factory. Below are some ways AI is revolutionizing manufacturing.

Predictive analytics for increased production output

Smart manufacturing systems are leveraging the power of predictive analytics and machine learning algorithms to enhance production capacity. Predictive analytics derives its power from the data collected by the devices or sensors embedded in a manufacturer's industrial equipment. These sensors become part of the IoT (Internet of Things), which collects data and shares it with data scientists in the cloud. This setup is helping manufacturing industries move from a repair-and-replace to a predict-and-fix maintenance model, by enabling these businesses to retrieve the right information at the right time to make the right decisions. For instance, in a pump manufacturing company, data scientists could collect, store, and analyze sensor data on machine attributes like heat, vibration, and noise. This data can be stored in the cloud, allowing an array of analyses to be performed remotely, from understanding machine performance to predicting and monitoring disruptions in processes and equipment. Further, syncing up production schedules with parts availability can ensure enhanced production output.
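As a hedged sketch of what such a predict-and-fix model might look like for the pump example above, the following uses scikit-learn's IsolationForest to flag anomalous sensor readings; the values are synthetic and the model choice is illustrative, not what any particular manufacturer deploys.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical sensor log: one row per reading (heat, vibration, noise).
rng = np.random.default_rng(seed=42)
normal_readings = rng.normal(
    loc=[60.0, 0.2, 70.0], scale=[2.0, 0.05, 3.0], size=(500, 3)
)

# Fit on readings gathered while the pump ran normally...
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_readings)

# ...then flag new readings that look anomalous (-1) before a failure occurs.
new_readings = np.array([[61.0, 0.21, 71.0], [85.0, 0.90, 95.0]])
print(model.predict(new_readings))  # e.g. [ 1 -1]
```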
Enhancing product and service quality with machine and deep learning algorithms

Manufacturers can deploy supervised and unsupervised ML, DL, and reinforcement learning algorithms to monitor quality issues in the manufacturing process. For instance, researchers at Lappeenranta University of Technology in Finland have developed an innovative welding system for high-strength steel. They used unsupervised learning to allow the system to mimic a human's ability to self-explore and self-correct: the welding system detects imperfections and self-corrects during the welding process using a new kind of sensor system controlled by a neural network program, and it also anticipates other faults that may arise during the process. Visual inspection technology in an industrial environment identifies both functional and cosmetic defects. IBM has developed a new offering for manufacturing clients to automate visual quality inspections. Rooted in deep learning, a centralized 'learning service' collects images of all products, normal and abnormal, and then builds analytical models to recognize and classify the characteristics of machine parts and components as OK or NG: characteristics that meet quality specifications are considered OK, while those that don't are classified as NG.

Predictive maintenance for enhanced MRO (Maintenance, Repair, and Overhaul) performance

Manufacturing industries strive for excellence throughout the production process. To ensure this, machinery embedded with sensors generates real-time performance and workload data, which helps in diagnosing faults and predicting the need for equipment maintenance. For instance, a machine may break down due to lack of maintenance in the long run, incurring losses for the business. With predictive maintenance, businesses can be better equipped to handle equipment malfunction by identifying significant causal factors like weather and temperature. Targeted predictive maintenance generates critical information, such as which machine parts will need replacing and when. This helps in reducing equipment downtime, lowering maintenance costs, and pre-emptively addressing aging equipment.

Reinforcement learning for managing warehouses

Large warehouses face challenges in streamlining space, managing inventories, and reducing transit time. Manufacturing industries are employing reinforcement learning (RL) for efficient warehouse management. The RL approach uses trial-and-error iterations within an environment to achieve a particular goal. Imagine what a breeze warehousing could be, and the associated cost savings, if robots could pick the right products from various lots and move them to the right destinations with great precision. Here, reinforcement learning-based algorithms can improve the efficiency of such intelligent warehouses with multi-robot systems by addressing task scheduling and path planning issues. Fanuc, a Tokyo-based company, employs robots with reinforcement learning capabilities to perform such tasks with great agility and precision.

AI in supply chain management

AI is helping manufacturers gain an in-depth understanding of the complex variables at play in the supply chain and predict future scenarios. To enable seamless insight generation, businesses are opting for more flexible and efficient cyber-physical systems: intelligent, self-configuring, and self-optimizing structures that can predict problems and minimize losses. These systems help businesses innovate rapidly by reducing time to market, and they make it possible to foresee uncertainties and deal with them promptly. Siemens, for example, is creating a self-organizing factory that aims to automate the entire supply chain by generating work orders from demand and order information.

Implications of AI in Industry 4.0

Industry 4.0 is the new way of manufacturing using automation, devices connected on the IoT, cloud, and cognitive computing. It propagates the concept of the 'smart factory', in which cyber-physical systems observe the physical processes of the factory and make discrete decisions accordingly. As AI finds its application in Industry 4.0, computers will merge with robotics to automate and maximize the efficiency of industrial processes.
Powered by machine learning algorithms, computer systems can control robots with minimal human intervention. For instance, in a manufacturing setup, AI can work alongside systems like SCADA to control industrial processes efficiently. These systems can monitor, collect, and process real-time data by interacting directly with devices such as sensors, pumps, and motors through human-machine interface (HMI) software. Such machine-to-machine communication systems give new direction to the potential of human-machine collaboration, changing the way we see workforce management. Industry 4.0 will favor those who can build software, hardware, and firmware, those who can adapt and maintain new equipment, and those who can design automation and robotics.

Within Industry 4.0, augmented reality and virtual reality are other cutting-edge, production-ready technologies that are making the idea of a smart factory a reality. The recent relaunch of Google Glass, designed especially for the factory floor, is worth a mention here. The Wi-Fi-enabled glasses allow factory workers, mechanics, and other technicians to view instructional videos, manuals, training videos, and so on, all in their line of sight. This helps in maintaining higher standards of work while ensuring safety and agility.

In conclusion

Manufacturing industries are gearing up to harness AI along with IoT and AR/VR to create an agile manufacturing environment and to make smarter, real-time decisions. AI is helping realize the full potential of the Industrial Internet of Things (IIoT) by applying machine learning, deep learning, and other evolutionary algorithms to sensor data. Human-machine collaboration is transforming the scene at fulfillment centers, creating a win-win situation for both humans and robots: robots equipped with motion sensors move across fields of QR codes with precision and agility without running into each other, a fascinating sight. Imagine a real-life JARVIS from the movie Iron Man managing entire supply chains or factory spaces. The day is not far off when we will see a JARVIS-like advanced virtual assistant that uses sensors to collect real-time data, AI to process it, and blockchain to transmit the information securely, all while using AR to interact with us visually. It could take care of system and mechanical failures remotely while taking control of the factory for efficient energy management. Manufacturers could then go save the world or unveil new products, Iron Man style!

Deep Learning with Microsoft CNTK

Sarvex Jatasra
05 Aug 2016
7 min read
"Deep learning (deep structured learning, hierarchical learning, or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers, with complex structures or otherwise, composed of multiple nonlinear transformations." - Wikipedia

High-performance computing is not a new concept, but only recently have technical advances, along with economies of scale, ensured that HPC is accessible to the masses with affordable yet powerful configurations. Anyone interested can buy commodity hardware and start working on deep learning, bringing this machine learning subset of artificial intelligence out of research labs and into garages. DeepLearning.net is a good starting point for more information about deep learning, and Nvidia's Parallel Forall blog is a nice resource for learning GPU-based deep learning (core concepts, history and training, and sequence learning).

What is CNTK?

Microsoft Research released its Computational Network Toolkit (CNTK) in January this year. CNTK is a unified deep-learning toolkit that describes neural networks as a series of computational steps via a directed graph. CNTK supports the following models:

- Feed-forward Deep Neural Networks (DNN)
- Convolutional Neural Networks (CNN)
- Recurrent Neural Networks (RNN) / Long Short-Term Memory units (LSTM)
- Stochastic Gradient Descent (SGD)

Why CNTK?

Better scaling

When Microsoft CNTK was released, its stunning feature was distributed computing: a developer is not limited by the number of GPUs installed on a single machine. This was a significant breakthrough, because even the best of machines was limited by 4-way SLI, capping the total number of cores at 4 x 3072 = 12288. Such a configuration also put an extra load on the hardware of the developer machine, because it left very little room for upgrades: there is only one motherboard available that supports 4-way PCI-E Gen3 x16, and there are very few manufacturers who provide a good-quality 1600W power supply that can support four Titans. This meant that developers were forced to pay a hefty premium for upgradability in terms of the motherboard and processor, settling for an older-generation processor.

Distributed computing is essential in high-performance computing, since it allows scaling out as opposed to scaling up. Developers can build grids with cheaper nodes and the latest processors, lowering the hardware cost of entry. Microsoft Research demonstrated in December 2015 that distributed GPU computing is most efficient in CNTK. In comparison, Google TensorFlow, FAIR Torch, and Caffe did not allow scaling beyond a single machine, and Theano was the worst, as it did not even scale to multiple GPUs on the same machine. Google Research, on April 13, released support for distributed computing, claiming a speed-up of 56x for 100 GPUs and 40x for 50 GPUs; the performance deceleration is sharp for any sizable distributed machine learning setup. I do not have comparative performance figures for CNTK, but scaling with GPUs on a single machine had very good numbers for CNTK.

GPU performance

One of the shocking finds with my custom-built commodity hardware (2x Titan X) was the TFLOPS achieved under Ubuntu 14.04 LTS versus Windows 10. With a fully updated OS and the latest drivers from Nvidia, I got double the TFLOPS under Windows compared with Ubuntu.
I would like to rerun the samples with Ubuntu 16.04 LTS, but until then, I have a clear winner in performance with Windows. CNTK works perfectly on Windows, but TensorFlow has a dependency on Bazel, which as of now does not build on Windows (Bug #947). Either Google can look into this and make TensorFlow work on Windows, or Ubuntu and Nvidia can achieve the same TFLOPS as Windows. Until that time, architects have two options: settle for lower TFLOPS under Ubuntu with TensorFlow, or migrate to CNTK for increased performance.

Getting started with CNTK

Let's see how to get started with CNTK.

Binary installation

Currently, the CNTK binary installation is the easiest way to get started with CNTK; just follow the instructions. The only downside is that the currently available binaries are compiled with CUDA 7.0 rather than the latest CUDA 7.5 (released almost a year ago).

Codebase compilation

If you want to learn CNTK in detail, and if you are feeling adventurous, you should try compiling CNTK from source. Compile the codebase even if you do not expect to use the generated binary, because the whole compilation process is a good peek under the hood and will enhance your understanding of deep learning. The instructions for the Windows installation are available here, and the Linux installation instructions are available here. If you want to enable 1-bit Stochastic Gradient Descent (1bit-SGD), you should follow these instructions. 1bit-SGD is licensed more restrictively, and you have to understand the differences if you are looking at commercial deployments.

Windows compilation is characterized by older versions of libraries. Nvidia CUDA and cuDNN were recently updated to 7.5, whereas other dependencies, such as Nvidia CUB, Boost, and OpenCV, are still pinned to older versions. Kindly pay extra attention to the versions listed in the documentation to ensure a smooth compilation. Nvidia has updated its Nsight support to Visual Studio 2015; however, Microsoft CNTK still supports only Visual Studio 2013.

Samples

To test the CNTK installation, here are some really great samples:

- Simple2d (feed-forward)
- Speech / AN4 (feed-forward and LSTM)
- Image / MNIST (CNN)
- Text / PennTreebank (RNN)

Alternative deep learning toolkits

Theano

Theano is possibly the oldest deep learning framework available. The latest release, 0.8, which came out on March 16, enables the much-awaited multi-GPU support (there are no indications of distributed computing support, though). cuDNN v5 and CNMeM are also supported. A detailed report is available here. Python bindings are available.

Caffe

Caffe is a deep learning framework primarily oriented toward image processing. Python bindings are available.

Google TensorFlow

TensorFlow is a deep learning framework written in C++ with Python API bindings. The computation graph is pure Python, making it slower than other frameworks, as demonstrated by benchmarks. Google has been pushing Go for a long time now, and has even open-sourced the language, yet when it came to TensorFlow, Python was chosen over Go. There are concerns about Google supporting commercial implementations.

FAIR Torch

Facebook AI Research (FAIR) has released its extensions to Torch7. Torch is a scientific computing framework with Lua as its primary language. Lua has certain advantages over Python (lower interpreter overhead, simpler integration with C code), which lend themselves to Torch. Moreover, multi-core support using OpenMP directives points to better performance.
Leaf

Leaf is the latest addition to the machine learning frameworks. It is based on the Rust programming language (intended as a replacement for C/C++). Leaf is a framework created by hackers for hackers rather than scientists, and it shows some nice performance improvements.

Conclusion

Deep learning with GPUs is an emerging field, and much remains to be done to make good products out of machine learning. Every product team needs to evaluate all of the available alternatives (programming language, operating system, drivers, libraries, and frameworks) for its specific use cases. Currently, there is no one-size-fits-all approach.

About the author

Sarvex Jatasra is a technology aficionado, exploring ways to apply technology to make lives easier. He is currently working as the Chief Technology Officer at 8Minutes. When not in touch with technology, he is involved in physical activities such as swimming and cycling.

4 tips for learning Data Visualization with Python

Sugandha Lahoti
01 Nov 2018
4 min read
Data today is the world's most important resource. However, without properly visualizing your data to discover meaningful insights, it's useless. Creating visualizations helps in getting a clearer, more concise view of the data, making it more tangible for (non-technical) audiences. Python is the programming language of choice for many developers these days. However, sometimes developers face issues performing data visualization with Python. In this post, Tim Großmann and Mario Döbler, the authors of the Data Visualization with Python course, discuss some of the best practices you should keep in mind while visualizing data with Python.

#1 Start looking and experimenting with examples

One of the most important ways to deeply understand and learn to use Python for data visualizations is to download example projects and play around with them. You should read their documentation and comments, and change values, observing what influence each change has. In many cases, example projects can even serve as a starting point to insert your own data. Think about how you could modify the given examples to visualize your own data.

#2 Start from scratch and build on it

Sometimes starting with an empty canvas is the best approach. Start with only the necessary components, like your data and the import of your library of choice. This builds a nice flow and process that will enable you to debug problems with precision. Once you have gone through the whole process of building a simple visualization, you will have a good understanding of where an error might occur and how to fix it. Starting from scratch sometimes shows you that simpler solutions will save you a lot of time while still communicating the essence of your idea. A minimal sketch of this from-scratch approach is shown after the tips below.

#3 Make full use of documentation

There are libraries with plenty of documentation to answer every single question you have. Make sure to make the best use of it: research their APIs, look at the given examples, and search for open issues on their GitHub pages when encountering a problem. The libraries covered in the course "Data Visualization with Python" in particular not only have extensive documentation but also active communities that are constantly creating new questions on Stack Overflow, which will help you find solutions to your problems in no time.

#4 Use every opportunity you have with data to visualize it

Every time you encounter new data, take a few minutes and think about what information might be interesting, and visualize it. Think back to the last time you had to give a presentation about your findings and all you had was a table with numerical values in it. For you it was understandable, but your colleagues sat there and scratched their heads. Try to create some simple visualizations that would have impressed the entire team with your results. Only practice makes perfect.

We hope that these tips will not only enable you to get better insights into your data but also give you the tools to communicate your results better. Don't forget to check out our course Data Visualization with Python to understand, explore, and effectively present data using the powerful data visualization techniques of Python.
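As promised in tip #2, here is a minimal from-scratch sketch using matplotlib; the dataset, labels, and values are made up purely for illustration:

```python
import matplotlib.pyplot as plt

# A tiny, made-up dataset -- replace with your own data
months = ["Jan", "Feb", "Mar", "Apr"]
signups = [120, 175, 140, 210]

# Start with only the essentials: data, one plot call, labels
fig, ax = plt.subplots()
ax.bar(months, signups)
ax.set_xlabel("Month")
ax.set_ylabel("Sign-ups")
ax.set_title("New sign-ups per month")
plt.show()
```

Because each element was added deliberately, any error, whether a typo in a label or a wrong data shape, is easy to trace back to a single line, which is exactly the debugging benefit the tip describes.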
About the authors

Tim Großmann is a CS student with an interest in diverse topics ranging from AI to IoT. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of big data engineering. He's highly involved in different open source projects and actively speaks at meetups and conferences about his projects and experiences.

Mario Döbler is a graduate student with a focus on deep learning and AI. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of deep learning. Currently, he dedicates himself to applying deep learning to medical data to make healthcare accessible to everyone.

8 ways to improve your data visualizations
Seaborn v0.9.0 brings better data visualization with new relational plots, theme updates, and more
Getting started with Data Visualization in Tableau

Active Learning: An approach to training machine learning models efficiently

Savia Lobo
27 Apr 2018
4 min read
Training a machine learning model to give accurate results requires feeding it huge amounts of labelled data. Since data is naturally unlabelled, 'experts' are needed who can scan through the data and tag it with correct labels. Topic-specific data labelling, for example classifying diseases based on their type, would definitely require a doctor or someone with a medical background. Getting such topic-specific experts to label data can be difficult and quite expensive, and doing this for many machine learning projects is impractical. Active learning can help here.

What is Active Learning

Active learning is a type of semi-supervised machine learning which aids in reducing the amount of labelled data required to train a model. In active learning, the model focuses only on the data it is confused about and requests the experts to label those samples. The model then trains a bit more on the newly labelled data and repeats the process for the next batch of confusing samples. Active learning, in short, prioritizes the confusing samples that need labelling. This enables models to learn faster, allows experts to skip labelling data that is not a priority, and provides the model with the most useful information about the confusing samples. This in turn can produce great machine learning models, since active learning reduces the number of labels that need to be collected from experts.

Types of Active learning

An active learning environment includes a learner (the model being trained), a huge amount of raw, unlabelled data, and the expert (the person or system labelling the data). The role of the learner is to choose which instances or examples should be labelled; the learner's goal is to reduce the number of labelled examples needed for an ML model to learn. The expert, on receiving the data to be labelled, analyzes it to determine the appropriate labels. There are three types of active learning scenarios:

Query Synthesis - In this scenario, the learner constructs examples, which are then sent to the expert for labelling.

Stream-based active learning - Here, from a stream of unlabelled data, the learner decides per instance whether it should be labelled or discarded.

Pool-based active learning - This is the most common scenario in active learning. Here, the learner chooses only the most informative instances from a pool of unlabelled data and forwards them to the expert for labelling. A minimal sketch of this scenario appears below.
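To make the pool-based scenario concrete, here is a minimal sketch of least-confidence uncertainty sampling using scikit-learn; the synthetic dataset, model choice, and query budget are illustrative assumptions, and the 'expert' is simulated by reading back the known labels:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a real, mostly unlabelled dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Seed set: five known labels per class, playing the role of the expert's
# initial labelling effort; everything else sits in the unlabelled pool
labelled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in set(labelled)]

model = LogisticRegression(max_iter=1000)
for _ in range(30):  # query budget: 30 labels from the "expert"
    model.fit(X[labelled], y[labelled])
    proba = model.predict_proba(X[pool])
    # Least-confidence strategy: 1 minus the highest class probability
    uncertainty = 1.0 - proba.max(axis=1)
    query = int(np.argmax(uncertainty))   # most confusing pool sample
    labelled.append(pool.pop(query))      # "expert" reveals its label

print("Labels used:", len(labelled))
print("Accuracy on the remaining pool:", model.score(X[pool], y[pool]))
```

The key design choice is the query strategy: here the learner asks about the sample whose top class probability is lowest, i.e. the one it is most confused about, which is exactly the prioritization described above.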
Some real-life applications of Active learning

Natural Language Processing (NLP): Most NLP applications require a lot of labelled data for tasks such as POS (part-of-speech) tagging, NER (Named Entity Recognition), and so on, and there is a huge cost incurred in labelling this data. Using active learning can reduce the amount of data that has to be labelled.

Scene understanding in self-driving cars: Active learning can also be used in detecting objects, such as pedestrians, from a video camera mounted on a moving car, a key area to ensure safety in autonomous vehicles. This can result in high levels of detection accuracy in complex and variable backgrounds.

Drug designing: Drugs are biological or chemical compounds that interact with specific 'targets' in the body (usually proteins, RNA, or DNA) with an aim to modify their activity. The goal of drug designing is to find which compounds bind to a particular target. The data comes from large collections of compounds, vendor catalogs, corporate collections, and combinatorial chemistry. With active learning, the learner can find out which compounds are active (bind to the target) and which are inactive.

Active learning is still being researched using different deep learning algorithms such as CNNs and LSTMs, which act as learners in order to improve their efficiency. GANs (Generative Adversarial Networks) are also being implemented in the active learning framework, and there are research papers that try to learn active learning strategies themselves using meta-learning.

Why is Python so good for AI and Machine Learning? 5 Python Experts Explain
AWS Greengrass brings machine learning to the edge
Unity Machine Learning Agents: Transforming Games with Artificial Intelligence

How Amazon is reinventing Speech Recognition and Machine Translation with AI

Amey Varangaonkar
04 Jul 2018
4 min read
At the recently held AWS Summit in San Francisco, Amazon announced the general availability of two of its premium offerings - Amazon Transcribe and Amazon Translate. What's special about the two products is that customers will now be able to see the power of Artificial Intelligence in action and use it to solve their day-to-day problems. These offerings from AWS will make it easier for startups and companies looking to adopt and integrate AI into their existing processes, simplifying their core tasks - especially those pertaining to speech and language processing.

Effective speech-to-text conversion with Amazon Transcribe

In the AWS Summit keynote, Amazon Solutions Architect Niranjan Hira expressed his excitement while talking about the features of Amazon Transcribe, the automatic speech recognition service by AWS. This API can be integrated with the other tools and services offered by Amazon, such as Amazon S3 and QuickSight.

Amazon Transcribe boasts features like:

Simple API: It is very easy to use the Transcribe API to perform speech-to-text conversion, with minimal need for programming.

Timestamp generation: The speech, when converted to text, also includes timestamps for every word, so that tracking each word becomes easy and hassle-free.

Variety of use cases: The Transcribe API can be used to generate accurate transcripts of any audio or video file of varied quality. Subtitle generation becomes easier using this API, especially for low-quality audio recordings - customer service calls are a very good example.

Easy-to-read text: Transcribe uses cutting-edge deep learning technology to parse text from speech without errors. With appropriate punctuation and grammar in place, the transcripts are very easy to read and understand.

Machine translation simplified with Amazon Translate

Amazon Translate is a machine translation service offered by Amazon. It makes use of neural networks and advanced deep learning techniques to deliver accurate, high-quality translations. Key features of Amazon Translate include:

Continuous training: The architecture of this service is built in such a way that the neural networks keep learning and improving.

High accuracy: The continuous learning by the translation engines from new and varied datasets results in higher accuracy of machine translations. Amazon claims the machine translation capability offered by this service is almost 30% more efficient than human translation.

Easy to integrate with other AWS services: With a simple API call, Translate allows you to integrate the service into third-party applications to enable real-time translation capabilities.

Highly scalable: Regardless of the volume, Translate does not compromise on the speed and accuracy of the machine translation.

Know more about Amazon Translate from Yoni Friedman's keynote at the AWS Summit. A short sketch of what calling these services looks like follows below.
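To give a feel for how simple the APIs are, here is a minimal sketch using boto3, the AWS SDK for Python; the region, bucket, file, and job names are hypothetical placeholders:

```python
import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")
translate = boto3.client("translate", region_name="us-east-1")

# Kick off an asynchronous speech-to-text job on a file stored in S3
# (the bucket, file, and job name below are made up for illustration)
transcribe.start_transcription_job(
    TranscriptionJobName="demo-call-transcript",
    Media={"MediaFileUri": "s3://my-example-bucket/support-call.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
)

# Translate a piece of text in a single synchronous call
result = translate.translate_text(
    Text="Hello, how can I help you today?",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(result["TranslatedText"])
```

Note that start_transcription_job is asynchronous; you would poll get_transcription_job until the job completes, whereas translate_text returns its result immediately.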
With all businesses slowly migrating to the cloud, it is clear that all the major cloud vendors - mainly Amazon, Google, and Microsoft - are doing everything they can to establish their dominance. Google recently launched Cloud ML for GCP, which offers machine learning and predictive analytics services for improving businesses. Microsoft's Azure Cognitive Services offer effective machine translation services as well and are slowly gaining a lot of momentum. With these releases, the onus was on Amazon to respond, and they have done so in style.

With the Transcribe and Translate APIs, Amazon's goal of making it easier for startups and small-scale businesses to adopt AWS and incorporate AI seems to be on track. These services will also help AWS distinguish its cloud offerings, given that computing and storage resources are provided by rivals as well.

Read more

Verizon chooses Amazon Web Services (AWS) as its preferred cloud provider
Tensor Processing Unit (TPU) 3.0: Google's answer to cloud-ready Artificial Intelligence
Amazon is selling facial recognition technology to police

"My Favorite Tools to Build a Blockchain App" - Ed, The Engineer

Aaron Lazar
23 Oct 2017
7 min read
Hey! It's great seeing you here. I am Ed, the Engineer, and today I'm going to open up my secret toolbox and share some great tools I use to build blockchains. If you're a Blockchain developer or a developer-to-be, you've come to the right place! If you are not one, maybe you should consider becoming one.

"There are only 5,000 developers dedicated to writing software for cryptocurrencies, Bitcoin, and blockchain in general. And perhaps another 20,000 had dabbled with the technology, or have written front end applications that connect with the blockchain." - William Mougayar, The Business Blockchain

Decentralized apps, or dapps as they are fondly called, are serverless applications that can be run on the client side, within a blockchain-based distributed network. We're going to learn what the best tools are to build dapps, and over the next few minutes, we'll take these tools apart one by one. For a better understanding of where they fit into our development cycle, we'll group them into stages - just like the buildings we build. So, shall we begin? Yes, we can!! ;)

The Foundation: Platforms

The first and foremost element for any structure to stand tall and strong is its foundation. The same goes for blockchain apps. Here, in place of all the mortar and other things, we've got decentralized and public blockchains. There are several existing networks, the likes of Bitcoin, Ethereum, or Hyperledger, that can be used to build dapps. Ethereum and Bitcoin are both decentralized, public chains that are open source, while Hyperledger is private and also open source. Bitcoin may not be a good choice to build dapps on, as it was originally designed for peer-to-peer transactions and not for building smart contracts.

The Pillars of Concrete: Languages

Now, once you've got your foundation in place, you need to start raising pillars that will act as the skeleton for your applications. How do we do this? Well, we've got two great languages specifically for building dapps.

Solidity: This is an object-oriented language that you can use for writing smart contracts. The best part of Solidity is that you can use it across all platforms, making it the number one choice for many developers. It's a lot like JavaScript and way more robust than other languages. Along with Solidity, you might want to use Solc, the compiler for Solidity. At the moment, Solidity is the language that's getting the most support and has the best documentation.

Serpent: Before the dawn of Solidity, Serpent was the reigning language for building dapps - something like how bricks replaced stone to build massive structures. Serpent is still being used in many places to build dapps, and it has great real-time garbage collection.

The Transit Mixers: Frameworks

After you choose your language to build dapps, you need a framework to simplify the mixing of concrete to build your pillars. I find these frameworks interesting:

Embark: This is a framework for Ethereum you can use to quicken development and to streamline the process by using tools and functionalities. It allows you to develop and deploy dapps easily, or even build a serverless HTML5 application that uses decentralized technology. It equips you with tools to create new smart contracts, which can be made available in JavaScript code.

Truffle: Here is another great framework for Ethereum, which boasts of taking on the task of managing your contract artifacts for you. It includes support for the library that links complex Ethereum apps and provides custom deployments.
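Whichever framework you pick, your dapp's client code ultimately just talks to an Ethereum node over JSON-RPC. As a taste of what that looks like, here is a minimal sketch using web3.py, a Python client library not covered above; the node URL is the conventional local default, and the API names assume a recent web3.py release (v5 or later):

```python
from web3 import Web3

# Connect to a local development node, for example one started by your
# framework's built-in blockchain emulator (URL is the conventional default)
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

# Read some basic chain state to confirm the connection works
latest = w3.eth.get_block("latest")
print("Latest block number:", latest["number"])

# A deployed smart contract is just an address plus its ABI; both values
# below are hypothetical placeholders for whatever your framework deployed
# contract = w3.eth.contract(address="0x...", abi=contract_abi)
# print(contract.functions.someGetter().call())
```

Frameworks such as Truffle and Embark generate the contract address and ABI for you at deployment time, which is exactly the contract-artifact management described above.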
The Contractors: Integrated Development Environments

Maybe you are not the kind that likes to build things from scratch. You just need a one-stop place where you can say what kind of building you want, and everything else just falls into place. Hire a contractor. If you're looking for the complete package to build dapps, there are two great tools you can use: Ethereum Studio and Remix (Browser-Solidity). These IDEs take care of everything - right from emulating the live network to testing and deploying your dapps.

Ethereum Studio: This is an adapted version of Cloud9, built for Ethereum with some additional tools. It has a blockchain emulator called the sandbox, which is great for writing automated tests. Fair warning: you must pay for this tool, as it's not open source, and you must use Azure Cloud to access it.

Remix: This can do pretty much the same things that Ethereum Studio can. You can run Remix from your local computer and allow it to communicate with an Ethereum node client on your local machine. This will let you execute smart contracts while connected to your local blockchain. Remix was still under development at the time of writing this article.

The Rebound Hammer: Testing tools

Nothing goes live until it's tried and tested. Just like the rebound hammer you may use to check the quality of concrete, we have a great tool that helps you test dapps.

Blockchain Testnet: For testing purposes, use the testnet, an alternative blockchain. Whether you want to create a new dapp using Ethereum or any other chain, I recommend that you use the related testnet, which ideally works as a substitute for the true blockchain that you will be using for the real dapp. Testnet coins are different from actual bitcoins and do not hold any value, allowing you as a developer or tester to experiment without needing to use real bitcoins or having to worry about breaking the primary bitcoin chain.

The Wallpaper: dapp Browsers

Once you've developed your dapp, it needs to look pretty for consumers to use. Dapp browsers are mostly the user interfaces for the decentralized web. Two popular tools that help you bring dapps to your browser are Mist and MetaMask.

Mist: This is a popular browser for decentralized web apps. Just as Firefox or Chrome are for Web 2.0, the Mist browser will be for the decentralized Web 3.0. Ethereum developers are able to use Mist not only to store Ether or send transactions but also to deploy smart contracts.

MetaMask: With MetaMask, you can comfortably run dapps in your browser without having to run a full Ethereum node. It includes a secure identity vault that provides a UI to manage your identities on various sites, as well as sign blockchain contracts.

There! Now you can build a blockchain!

Now you have all the tools you need to make amazing and reliable dapps. I know you're always hungry for more - this GitHub repo created by Christopher Allen has a great listing of tools and resources you can use to begin or improve your blockchain development skills. If you're one of those lazy-but-smart folks who want to get things done at the click of a mouse button, then BaaS, or Blockchain as a Service, is something you might be interested in. There are several big players in this market at the moment, the likes of IBM, Azure, SAP, and AWS. BaaS is basically for organizations and enterprises that need blockchain networks that are open, trusted, and ready for business.
If you go the BaaS way, let me warn you - you're probably going to miss out on all the fun of building your very own blockchain from scratch. With so many banks and financial entities beginning to set up blockchains for recording transactions and transfers of assets, and investors betting billions on distributed-ledger startups, there are hardly a handful of developers out there who have the required skills. This leaves you with a strong enough reason to develop great blockchains and sharpen your skills in the area. Our Building Blockchain Projects book should help you put some of these tools to use in building reliable and robust dapps. So what are you waiting for? Go grab it now and have fun building blockchains!

Open Source Software: Are maintainers the only ones responsible for software sustainability?

Savia Lobo
01 Dec 2018
6 min read
Last week, a Californian computer scientist disclosed a malicious package, 'flatmap-stream', inside the popular npm package 'event-stream'. The reason for this breach is that ownership of the event-stream package was transferred by Dominic Tarr (the original author) to a malicious user, right9ctrl. Following this, many Twitter and GitHub users have supported him, whereas others think he should have been more careful while transferring package ownership.

Andre Staltz, an open source hacker, mentions in support of Dominic: "The fact that he gave ownership meant that he *cared* at least to do a tiny action that seemed ok. Not caring would be doing absolutely nothing at all, and that's the case quite often, and OSS maintainers get criticized also for *that*"

Who's responsible for maintaining open source software?

At the NDC Sydney 2018 conference held in September, two open source maintainers, Nick Randolph, Technical Lead at Built To Roam, and Geoffrey Huntley, an open source software engineer, talked about why companies and people should contribute back to open source and how they can do it. However, if something goes wrong with a project, who is responsible for it? Most users blame the maintainers of the project, but the license does not say so. In fact, users, contributors, and maintainers together are equally responsible.

Open source is a fantastic avenue for personal development, as it does not require the supply, material, planning, and approval that other software does. Some reasons to contribute to open source software:

Other people will help you for free
You will save a lot on training and documentation
You will not be criticized by open source advocates
Ability to hire the best engineers
You will be able to influence the direction of the projects to which you contribute

Companies have embraced open source software as it allows them to get solutions to the market faster for their customers. It has allowed companies to focus on delivering business value instead of low-level technical tasks.

The problem with Open Source

The majority of the open source software that the world depends on is built by volunteers. When a business chooses to use open source software, this volunteer labor is essentially an unpaid vendor with no contractual obligations. However, the speakers say: "Historically, we have defined open-source software in terms of freedom for the consumer; in the future, now that open-source has 'won', this dialogue needs to change. Did we get it right? Did we ever stop to think about how software is maintained, the rights of maintainers and the cost of maintenance?"

The maintainers said that, as per the open source software license, once the software is released to the world, their responsibility ends. They need not respond to GitHub issues, create documentation, or answer questions on Stack Overflow. A popular example of the damage this can cause is the Heartbleed bug, a security issue found in the OpenSSL cryptographic software library that led to a huge loss of revenue. However, when OSS breaks or users need new features, they log an issue on GitHub and then sit back awaiting a response. If the comments are not addressed by the maintainer, users start complaining about how badly the project is run. The thing about OSS that's too often forgotten: it's AS-IS, no exceptions.

How should businesses secure their supply chain?
Different projects may operate differently, with more or fewer people, with work prioritized differently, and on differing release schedules, but in all cases the software delivered is as-is, meaning that there is absolutely no SLA. The speakers say that businesses should analyze the level of contribution they need to make towards the open source community. They highlighted that in order to secure their supply chain, users should contribute either money or time. The truth is that free software is not really free: how much is this going to cost in man-hours?

If not with money, businesses can contribute with time. For instance, there is an initiative called opensourcefriday.com, where as an engineering leader you or your employees can make pull requests and learn how the open source you depend upon works. This means you have a positive influence on the community and are also contributing back to open source. And if your company faces any critical issue, the maintainer is more likely to help you, as you have actively contributed to the community.

How do you know how much to contribute? In order to shift the goals of the software, you have to be the maintainer or a core contributor to influence its direction. If you just want to protect the supply chain, you can simply fix what's broken. If you wish to contribute at a consistent velocity, contribute at a rate that you can maintain for as long as you want.

According to Nick and Geoffrey, what users and businesses should do is:

Protect their software supply chain: see, from a business perspective, which components you are making use of and make sure that these components are going to exist going forward.
Think about the sustainability of the project and not let it wither away soon. If the project is good for the community, make it sustainable by getting more and more people to join the project.
Keep track of what they are contributing back to these projects.
Share their experiences and best practices; this will help analyze the risk factors. Share so that the industry matures beyond simple security concerns.

Watch the complete talk by Nick and Geoffrey on YouTube: https://www.youtube.com/watch?v=Mm_RuObpeGo&app=desktop

The Linux and RISC-V foundations team up to drive open source development and adoption of RISC-V instruction set architecture (ISA)
OpenStack Foundation to tackle open source infrastructure problems, will conduct conferences under the name 'Open Infrastructure Summit'
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project

Facelifting NLP with Deep Learning

Savia Lobo
10 Nov 2017
7 min read
Over recent years, the world has witnessed a global move towards digitization. Massive improvements in computational capabilities have been made, thanks to the boom in the AI chip market as well as computation farms. These have resulted in data abundance and fast data-processing ecosystems that are accessible to everyone - important pillars for the growth of AI and allied fields. Terms such as 'machine learning' and 'deep learning' in particular have gained a lot of traction in the data science community, mainly because of the multitude of domains they lend themselves to. Along with image processing, computer vision, and games, one key area transformed by machine learning, and more recently by deep learning, is Natural Language Processing, simply known as NLP.

Human language is a heady concoction of otherwise incoherent words and phrases, with more exceptions than rules, full of jargon and words with different meanings. Making machines comprehend a human language in all its glory, not to mention its users' idiosyncrasies, can be quite a challenge. Then there is the matter of there being thousands of languages, dialects, accents, and slangs. Yet, it is a challenge worth taking up - mainly because language finds its application in almost everything humans do, from web search to e-mails to content curation, and more. According to Tractica, a market intelligence firm, "Natural Language Processing market will reach $22.3 Billion by 2025."

NLP Evolution - From Machine Learning to Deep Learning

Before deep learning reshaped NLP into a smarter version of a conversational machine, machine learning based NLP systems were utilized to process natural language. These systems were trained on shallow models, often based on incomplete and time-consuming custom-made features, using algorithms such as support vector machines (SVMs) and logistic regression. They found their applications in tasks such as spam detection in emails, grouping together similar words in a document, spinning articles, and much more.

ML-based NLP systems relied heavily on the quality of the training data. Because of the limited capabilities machine learning offered for understanding high-level text and speech output from humans, the classical NLP model fell short. This led to the conclusion that machine learning algorithms can handle only narrow features and as such cannot perform the high-level reasoning that human conversations often comprise. Also, as the scale of the data grew, machine learning could not effectively tackle the NLP problems of efficiently training models and optimizing them.

Here is where deep learning proves to be a stepping stone. Deep learning uses artificial neural networks (ANNs) that function similarly to neurons in a human brain, a reason why they are considered to emulate human thinking remarkably well. Deep learning models perform significantly better as the quantity of data fed to them increases. For instance, Google's Smart Reply can generate relevant responses to the emails received by the user; this system uses a pair of RNNs, one to encode the incoming mail and the other to predict relevant responses. With the incorporation of DL in NLP, the need for feature engineering is greatly reduced, saving time - a major asset.
This means machines can be trained to understand languages other than English, without complex and custom feature engineering, by applying deep neural network models. In spite of the constant changes happening to language, the quest to make machines more and more friendly to humans is made possible using deep learning.

Key Deep Learning techniques used for NLP

NLP-based deep learning models make use of word embeddings, pre-trained using a large corpus or collection of unlabelled data. With advancements in word-embedding techniques, the ability of machines to derive deeper insights from languages has increased. To do so, NLP uses a technique called word2vec, which converts a given word into a vector for the better understanding of the machines. The continuous-bag-of-words (CBOW) and skip-gram models, both used for learning word vectors, help in capturing the sequential patterns within sentences. The latter predicts the surrounding words using the center word as input and suits large datasets, whereas the former does the reverse. Similarly, GloVe also computes vector representations, but using a technique called matrix factorization.

A disadvantage of the word-embedding approach is that it cannot understand phrases and sentences. As mentioned earlier, the bag-of-words model converts each word into a corresponding vector. This can simplify many problems, but it can also change the context of the text. For instance, it may not collectively understand idioms or sub-phrases such as "break a leg". Also, recognizing indicative or negative words such as 'not' or 'but', which attach a semantic meaning to a word, is difficult for the model. A partial solution is 'negative sampling', a frequency-based sampling of negative terms while training the word2vec model. A minimal sketch of these ideas in code follows.
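To ground the word2vec discussion, here is a minimal sketch using the gensim library - an assumption on our part, since the article names no specific implementation; parameter names follow gensim 4.x, and the toy corpus is purely illustrative:

```python
from gensim.models import Word2Vec

# A toy corpus: each sentence is a list of tokens
sentences = [
    ["deep", "learning", "transforms", "nlp"],
    ["word", "embeddings", "capture", "meaning"],
    ["nlp", "uses", "word", "embeddings"],
]

# sg=1 selects the skip-gram model; sg=0 would select CBOW.
# negative=5 enables negative sampling with 5 noise words per example.
model = Word2Vec(
    sentences,
    vector_size=50,   # dimensionality of each word vector
    window=2,         # context window size
    min_count=1,      # keep even rare words in this tiny corpus
    sg=1,
    negative=5,
)

print(model.wv["nlp"].shape)            # the learned 50-d vector for "nlp"
print(model.wv.most_similar("word"))    # nearest neighbours in vector space
```

Flipping sg between 1 and 0 switches between the skip-gram and CBOW models described above, and the negative parameter controls how many noise words negative sampling draws per training example.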
This is where neural networks come into play. CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks) are the two most widely used neural network models in NLP. CNNs are good performers for text classification; however, the downside is that they are poor at learning sequential information from text. Expresso, built on Caffe, is one of the many tools used to develop CNNs. RNNs are preferred over CNNs for NLP as they allow sequential processing. For example, an RNN can differentiate between the words 'fan' and 'fan-following'. This means RNNs are better equipped to handle complex dependencies and unbounded texts. Also, unlike CNNs, RNNs can handle input contexts of arbitrary length because of their flexible computational steps. All of the above highlights why RNNs have better modeling potential than CNNs as far as NLP is concerned.

Although RNNs are the preferred choice, they have a limitation: the vanishing gradient problem. This problem can be solved using LSTM (Long Short-Term Memory), which helps in understanding the association of words within a text and back-propagates an error through unlimited steps. An LSTM includes a forget gate, which forgets the learned weights when carrying them forward is negligible; thus, long-term dependencies are reduced. Other than LSTM, GRUs (Gated Recurrent Units) are also widely chosen to solve the vanishing gradient problem.

Current Implementations

Deep learning is good at identifying patterns within unstructured data, and social media is a major dump of unstructured content - a goldmine for human sentiment analysis. Facebook uses DeepText, a deep learning based text understanding engine, which can understand the textual content of thousands of posts with near-human accuracy. CRM systems strive to maximize customer lifetime value by understanding what customers want and then taking appropriate measures; TalkIQ uses neural-network-based text analysis and deep learning models to extract meaning from the conversations that organizations have with their customers, in order to gain deeper insights in real time. Google's Cloud Speech API helps convert audio to text and can recognize audio in 110 languages. Other implementations include automated text summarization for summarizing the concepts within a huge document, speech processing for converting voice requests into search recommendations, and much more. Many other areas that make use of speech and text analytics, such as fraud detection tools, UI/UX, and IoT devices, can perform explicitly well by imbibing deep learning neural network models.

The future of NLP with Deep Learning

With the advancements in deep learning, machines will be able to understand human communication in a much more comprehensive way. They will be able to extract complex patterns and relationships and decipher the variations and ambiguities in various languages. This will find some interesting use cases, smarter chatbots being a very important one: understanding complex and longer customer queries and giving out accurate answers are what we can expect from these chatbots in the near future. The advancements in NLP and deep learning could also lead to the development of expert systems that perform smarter searches, allowing applications to search for content using informal, conversational language. Understanding and interpreting unindexed, unstructured information, which is currently a challenge for NLP, is something that may become possible as well. The possibilities are definitely there; how NLP evolves by blending itself with the innovations in artificial intelligence is all that remains to be seen.

What you should know about Unity 2018 Interface

Amarabha Banerjee
23 Jul 2018
8 min read
In this article, we will show Unity 2018's primary views and windows; we will also cover layouts and the toolbar. The interface components covered in this post are the most used ones. This article is taken from the book Getting Started with Unity 2018, written by Dr. Edward Lavieri.

Unity 2018 User Interface Components at a glance

When we first launch Unity, we might be intimidated by all the areas, tabs, menus, and buttons on the interface. Unity is a complex game engine with a lot of functionality, so we should expect more components to interact with. If we break the interface down into separate components, we can examine each one independently to gain a thorough understanding of the entire interface. We have identified six primary areas of the default interface configuration, and we will examine each of these in subsequent sections. As you will quickly learn, this interface is customizable.

Menu

The Unity editor's main menu bar consists of eight pull-down options. We will briefly review each menu option in this section; additional details will be provided in subsequent chapters, as we start developing our Cucumber Beetle game. Unity's menus are contextual. This means that only menu items pertinent to the currently selected object will be enabled; other non-applicable menu items will appear gray instead of black and will not be selectable.

Unity

The Unity menu item gives us access to information about Unity, our software license, display options, module information, and preferences. Accessing the Unity | About Unity... menu option gives you access to the version of the engine you are running. There is additional information as well, but you would probably only use this menu option to check your Unity version. The Unity | Preferences... option brings up the Unity Preferences dialog window. That interface has seven side tabs: General, External Tools, Colors, Keys, GI Cache, 2D, and Cache Server. You are encouraged to become familiar with them as you gain experience in Unity. The Unity | Modules option provides you with a list of playback engines that are running, as well as any Unity extensions. You can quit the Unity game engine by selecting the Unity | Quit menu option.

File

Unity's File menu includes access to your game's scenes and projects. We will use these features throughout our game development process. We also have access to the Build Settings from this menu.

Edit

The Edit menu has similar functionality to standard editors, not just game engines. For example, the standard Cut, Copy, Paste, Delete, Undo, and Redo options are there, and the shortcut keys are aligned with the software industry standard. There is additional functionality accessible here: there are play, pause, and step commands, and we can also sign in and out of our Unity account. The Edit | Project Settings option gives us access to Input, Tags and Layers, Audio, Time, Player, Physics, Physics 2D, Quality, Graphics, Network, Editor, and Script Execution Order. In most cases, selecting one of these options opens, or gives keyboard focus to, the specific functionality.

Assets

Assets are representations of things that we can use in our game. Examples include audio files, art files, and 3D models. There are several types of assets that can be used in Unity.
From the Assets menu, we are able to create, import, and export assets. You will become increasingly familiar with this collection of functionality as you progress through the book and start developing your game.

GameObject

The GameObject menu provides us with the ability to create and manipulate GameObjects. In Unity, GameObjects are things we use in our game, such as lights, cameras, 3D objects, trees, characters, cars, and so much more. From this menu, we can create an empty GameObject as well as an empty child GameObject. We will have extensive hands-on dealings with the GameObject menu items throughout this book. At this point, it is important that you know this is where you go to create GameObjects, as well as to perform some manipulations on them.

Component

We know that GameObjects are just things. They actually only become meaningful when we add components to them. Components are an important concept in Unity, and we will be working with them a lot as we progress with our game's development. It is the components that implement functionality for our GameObjects. The Component menu lists the various categories of components, and this is one method for creating components in Unity.

Window

The Window menu option provides access to a lot of extra features. There is a Minimize option that will minimize the main Unity editor window, and a Zoom option that toggles between full-screen and zoomed views. The Layouts option provides access to various editor layouts, and lets you save or delete a layout. The following list provides a brief description of the remaining options available via the Window menu item. You will gain hands-on experience with these windows as you progress through this book:

Services: Access to integrated services: Ads, Analytics, Cloud Build, Collaborate, Performance Reporting, In-App Purchasing, and Multiplayer.
Scene: Brings focus to the Scene view; opens the window if not already open. Additional details are provided later in this chapter.
Game: Brings focus to the Game view; opens the window if not already open. Additional details are provided later in this chapter.
Inspector: Brings focus to the Inspector window; opens the window if not already open. Additional details are provided later in this chapter.
Hierarchy: Brings focus to the Hierarchy window; opens the window if not already open. Additional details are provided later in this chapter.
Project: Brings focus to the Project window; opens the window if not already open. Additional details are provided later in this chapter.
Animation: Brings focus to the Animation window; opens the window if not already open.
Profiler: Brings focus to the Profiler window; opens the window if not already open.
Audio Mixer: Brings focus to the Audio Mixer window; opens the window if not already open.
Asset Store: Brings focus to the Asset Store window; opens the window if not already open.
Version Control: Unity provides functionality for most popular version control systems.
Collab History: If you are using an integrated collaboration tool, you can access the history of changes to your project here.
Animator: Brings focus to the Animator window; opens the window if not already open.
Animator Parameter: Brings focus to the Animator Parameter window; opens the window if not already open.
Sprite Packer: Brings focus to the Sprite Packer window; opens the window if not already open. In order to use this feature, you will need to enable Legacy Sprite Packing in Project Settings.
Experimental: Brings focus to the Experimental window; opens the window if not already open. By default, the Look Dev experimental feature is available; additional experimental features can be found in the Unity Asset Store.
Test Runner: Brings focus to the Test Runner window; opens the window if not already open. This is a tool that runs tests on your code in both edit and play modes; builds can also be tested.
Timeline Editor: Brings focus to the Timeline Editor window; opens the window if not already open. This is a contextual menu item.
Lighting: Access to the Lighting window and the Light Explorer window.
Occlusion Culling: This feature allows you to select and edit how objects are drawn. With occlusion culling, only the objects within the current camera's visual range, and not obscured by other objects, are rendered.
Frame Debugger: This feature allows you to step through a game one frame at a time, so you can see the draw calls on a given frame.
Navigation: Unity's navigation system allows us to implement artificial intelligence with regard to non-player character movement.
Physics Debugger: Brings focus to the Physics Debugger window; opens the window if not already open. Here we can toggle several physics-related components to help debug physics in our games.
Console: Brings focus to the Console window; opens the window if not already open. The Console window shows warnings and errors. You can also output data here during gameplay, which is a common internal testing approach.

To summarize, we have discussed the Unity 2018 interface. If you are interested in knowing more about using Unity 2018 and want to leverage its powerful features, you may refer to the book Getting Started with Unity 2018.

What's got game developers excited about Unity 2018.2?
Put your game face on! Unity 2018.1 is now available
Implementing lighting & camera effects in Unity 2018