
Tech Guides

851 Articles

NeurIPS Invited Talk: Reproducible, Reusable, and Robust Reinforcement Learning

Prasad Ramesh
25 Feb 2019
6 min read
On the second day of the NeurIPS conference, held in Montreal, Canada last year, Dr. Joelle Pineau presented a talk on reproducibility in reinforcement learning. Pineau is an Associate Professor at McGill University and a Research Scientist at Facebook, Montreal; her talk was titled "Reproducible, Reusable, and Robust Reinforcement Learning".

Reproducibility and the crisis

Dr. Pineau starts by quoting Bollen et al. in a National Science Foundation report: "Reproducibility refers to the ability of a researcher to duplicate the results of a prior study, using the same materials as were used by the original investigator. Reproducibility is a minimum necessary condition for a finding to be believable and informative."

Reproducibility is not a new concept and has appeared across various fields. In a 2016 Nature survey of 1,576 scientists, 52% said there is a significant reproducibility crisis and 38% acknowledged a slight crisis.

Reinforcement learning is a very general framework for decision making. About 20,000 papers were published in this area in 2018 alone, with the year not even over, compared to roughly 2,000 papers in the year 2000. The focus of the talk is the class of reinforcement learning that has received the most attention and shown the most promise for practical applications: policy gradients. In this method, the policy/strategy is learned as a function, and this function can be represented by a neural network.

Pineau picks four research papers on policy gradients that come up most often in the literature. Her team used the MuJoCo simulator to compare the four algorithms. Which algorithm is which is not important; the point is the approach of comparing them empirically. The results differed across environments (Hopper, Swimmer), and the variance was also drastically different for a given algorithm. Even when using different code and policies, the results for a given algorithm varied widely across environments.

It was observed that people writing papers are not always motivated to find the best possible hyperparameters and very often use the defaults. When the best possible hyperparameters were used and two algorithms were compared fairly, the results were quite clean and distinguishable. Here n=5, meaning five different random seeds. Picking n influences the size of the confidence interval (CI); n=5 was used because most papers use at most five trials. Some papers ran n trials, where n was unspecified, and reported only the top five results. That is a good way to show strong results, but it carries a strong positive bias: the variance appears smaller than it is. [Image source: NeurIPS website]

Some people argue that the field of reinforcement learning is broken. Pineau stresses that this is not her message, and notes that fair comparisons do not always give the cleanest results. Different methods may have very distinct sets of hyperparameters in number, value, and sensitivity, and most importantly, the best method to choose depends heavily on the data and the computation budget you can spare. This is an important point for achieving reproducibility when applying these algorithms to your own problem.

Pineau and her team surveyed 50 RL papers from 2018 and found that significance testing was applied in only 5% of them. Shaded graphs appear in many papers, but without a statement of what the shaded area represents, the reader cannot tell whether it shows a confidence interval or a standard deviation. Pineau says: "Shading is good but shading is not knowledge unless you define it properly."
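The per-seed aggregation Pineau describes is straightforward to implement. Below is a minimal sketch (ours, not from the talk) of reporting a mean return with a confidence interval over n=5 random seeds; run_experiment is a hypothetical stand-in for training and evaluating one agent with a given seed:

```python
# Minimal sketch: aggregate RL returns over random seeds and report a CI.
# run_experiment is a hypothetical placeholder for one full training run.
import numpy as np

def run_experiment(seed):
    rng = np.random.default_rng(seed)
    return 100 + 10 * rng.standard_normal()  # placeholder for the real return

returns = np.array([run_experiment(seed) for seed in range(5)])  # n = 5 seeds
mean = returns.mean()
sem = returns.std(ddof=1) / np.sqrt(len(returns))
print("mean return: %.1f +/- %.1f (95%% normal-approx CI, n=%d)"
      % (mean, 1.96 * sem, len(returns)))
```

Note that with n as small as 5, the normal approximation (1.96) understates the interval; a Student's t critical value (about 2.78 for n=5) would be the more honest choice, which is exactly the kind of detail the talk argues papers should state explicitly.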
A reproducibility checklist

For people publishing papers, Pineau presents a checklist created in consultation with her colleagues. For algorithms, it asks for a clear description, an analysis of complexity, and a link to the source code and dependencies. For theoretical claims, it asks for a statement of the result, a clear explanation of any assumptions, and a complete proof of the claim. The checklist also includes items for figures and tables. The complete checklist is available on the NeurIPS website.

Role of infrastructure on reproducibility

People may assume that because the experiments run on computers, the results will be more predictable than in other sciences. But even in hardware there is room for variability, so specifying it, for example the properties of CUDA operations, can be useful.

On some myths

"Reinforcement Learning is the only case of ML where it is acceptable to test on your training set." Do you have to train and test on the same task? Pineau says that you really don't, and presents three examples. In the first, an agent moves around an image in four directions and then identifies what the image is; with higher n, the variance is greatly reduced. The second is an Atari game in which the black background is replaced with videos; the videos are a source of noise and a better representation of the real world than a limited simulated environment where external real-world factors are absent. She then talks about multi-task RL in photorealistic simulators as a way to incorporate noise. The simulator is an emulator built from images and videos taken from real homes. The environments created are completely photorealistic and retain properties of the real world, for example mirror reflections. Working in the real world is very different from a limited simulation; for one, far more data is required to represent the real world.

The talk ends with the message that science is not a competitive sport but a collective institution that aims to understand and explain. There is also an ICLR reproducibility challenge you can join. The goal is for community members to try to reproduce the empirical results presented in a paper, on an open-review basis. Last year, 80% of authors changed their paper based on feedback from contributors who tested it. Head over to the NeurIPS Facebook page for the entire lecture and other sessions from the conference.

Read Next:
- How NeurIPS 2018 is taking on its diversity and inclusion challenges
- NeurIPS 2018: Rethinking transparency and accountability in machine learning
- Researchers unveil a new algorithm that allows analyzing high-dimensional data sets more effectively, at NeurIPS conference


Go Phish! What do thieves get from stealing our data?

Guest Contributor
24 Dec 2018
7 min read
If black hats were sharks, then our emails would be a school of innocent, unsuspecting guppies nonchalantly drifting along. For black hats, or malicious hackers, getting into the average person's email is as challenging as overeating at a buffet. After all, e-mail is the most successful federated communication system ever built, with over 281 billion emails sent per day and growing.

We're helpless without email. Most people cannot imagine an hour going by without checking and answering emails, let alone a day. Over email, you send updates on your address and banking information to your service providers or clients, health information to your university or insurance agent, and more. Despite this, email traffic generally does not have end-to-end encryption, leaving it highly vulnerable. And 91% of cyber attacks are carried out through e-mail. Fish, meet barrel.

As for whatever e-mail scanners or antivirus you have running, know that black hats are developing their own predatory tools at a much faster rate. Social engineering, baiting, and placing malicious links in places as seemingly harmless as unsubscribe buttons are just a few items from their arsenal of tricks. Cybersecurity companies are getting better at detecting threats and identifying suspicious emails or links, but most people are just not tech savvy enough to avoid these pitfalls. Many think they don't even need to bother, which is like walking blindfolded through the Temple of Doom and expecting to get out unscathed. Don't be that person. Don't be in that school of fish just waiting to become a shark snack. It's time to understand why protecting your email is so important and how black hats are plotting your demise.

Data exploitation and ransom

With the amount of conversation happening lately about the importance of having control over your data, it should be clear how valuable data can be. Data can be used for consumer and marketing purposes or misused to fraudulently conduct purchases on e-commerce sites. It can be sold to other parties who will use it for illicit or illegal purposes, or even just to steal more data from your friends and family. Equifax was one of the more famous recent data breaches; it compromised the credit card information of over 200,000 people, along with Social Security numbers, credit scores, and other very sensitive information.

Now, if you're not in the 1%, you probably think you're not the type to be subject to a ransom attack, but you'd be wrong. You don't need to be famous or powerful for people to try to bleed you dry this way. Ransomware attacks, attacks that hold your data hostage in return for ransom money, rose by 250% in 2017. WannaCry is an example of an infamous ransomware attack, which caused an estimated $1 billion or more in damage.

Identity theft

The dangers of identity theft may be obvious, but many people don't understand to what extent it can really affect their future. Identity theft may actually be the worst thing a hacker can do with your information. In 2017, the direct and indirect cost of identity theft in the US was estimated at $16.8 billion. Identity theft harmed 16.7 million people, which is about 7% of American adults! And one weakness leads to another: back in 2014, the Department of Justice estimated that about one-third of Americans who suffered a data breach subsequently became victims of financial fraud. Now, in 2018, this is only likely to have increased.
Here are just a few things thieves can do with your identifying information:

Open credit cards or take out loans
Aside from your name, if black hats also obtain your Social Security number, birthdate, and address, they can open credit cards and apply for loans in your name.

Intercept your tax refund
The tax refund you are excited about may never arrive if you get hacked. People who wait until the last moment to file are more vulnerable; thieves may file a fake tax return under your identity first.

Use it to receive medical treatment
By obtaining your SSN and health insurance account numbers, black hats can use or sell your information in order to receive medical treatment. According to a study from Michigan State University, there were nearly 1,800 incidents of medical data breaches involving patients' information from October 2009 to December 2016. These breaches can be used to receive treatments and prescriptions, and can even put your own health at risk if the thief's medical information gets mixed up with yours.

Travel with your airline miles
Airline miles can be exchanged for cash, gift cards, and products or upgrades. Millions of miles have been stolen easily through phishing emails and other simple email scams.

Open utility accounts
13% of 2016's fraud incidents were related to phone and utility accounts. Thieves can open an account with a gas, phone, or electric company using your stolen SSN and then run up huge bills in your name, right under your nose.

Outsmarting the sharks

The first and simplest step you can take to defend against email fraud is to learn to avoid phishing schemes. A phishing scheme is when someone emails you pretending to be someone they're not. (Think Nigerian princes, or friends who suddenly find themselves abroad without a wallet when you could have sworn they were at the bar Friday night.) They could also pretend to be from your email or healthcare provider, asking you to log in. These e-mails often include links to phishing sites that will collect your passwords and personal information.

You may have heard that using passphrases instead of passwords can help protect you, and it's true that they are more secure. They're even stronger when you include special characters like quotation marks and use languages other than English. This is the best known practice for generating strong passwords. But passphrases can still be stolen through phishing, just like any password, so don't let a clever passphrase lull you into a false sense of security. Phishing is extremely prevalent: about 1.4 million fake sites are created each month, and around 135 million phishing attempts are made via email every single day.

Here are some rules of thumb to avoid phishing, and all they take is common sense:

- Don't follow any links that don't have https in the URL. Avoid links that lack the S.
- Don't enter your password after following any link from any e-mail, even if it really looks legitimate. If it's from your bank, for example, just open your banking app normally to complete whatever the e-mail is asking you to do. Do not follow the e-mailed link. Chances are, you'll discover your account is normal and requires no attention at all. Bullet dodged.
- Keep your accounts secure with two-factor authentication; that means adding an extra step to your login process, like receiving a security code on your phone. This is annoying for sure, but it does help keep predators out until a better solution is offered to the masses. We're looking at you, e-mail security industry!
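If you want to automate the first rule for yourself, even a few lines of code can triage links before you click them. Here is a minimal, illustrative sketch; the URLs and the toy blocklist are hypothetical, and a real mail filter would of course need far more signals than this:

```python
# Naive link triage implementing the "no https, no click" rule of thumb.
# This is a toy sketch, not a real phishing detector.
from urllib.parse import urlparse

SUSPICIOUS_TLDS = {".tk", ".xyz"}  # hypothetical blocklist, for illustration

def looks_risky(url):
    parsed = urlparse(url)
    if parsed.scheme != "https":  # rule 1: avoid links that lack the S
        return True
    # flag hosts on the toy blocklist
    return any(parsed.hostname and parsed.hostname.endswith(tld)
               for tld in SUSPICIOUS_TLDS)

for link in ["http://examplebank-login.com/verify", "https://example.com"]:
    print(link, "->", "risky" if looks_risky(link) else "looks ok")
```

Passing such a check is no guarantee of safety; plenty of phishing sites serve valid HTTPS, which is why the second rule (never log in via an e-mailed link) still applies.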
We're in dangerous waters these days, and the hacker sharks are circling, but you're not helpless if you pay attention. Treat your e-mail with the same careful consideration with which you'd (hopefully) treat your wallet or other tangible assets, and you'll go a long way towards avoiding the worst. Good luck out there!

Author Bio

Georg Greve is the Co-founding Chairman and Head of Product Development at Vereign, an intuitive software platform on a mission to bring authenticity and privacy to day-to-day online communication. Georg is also a software developer, physicist, and entrepreneur with two decades of experience working closely with Red Hat, IBM, and Google, as well as the United Nations, the European Commission, and various countries. His interest in information security dates back even further. He previously worked on the secure messaging platform Kolab and served as Founding President of the Free Software Foundation Europe (FSFE), where he received the German Federal Cross of Merit on Ribbon for his groundbreaking work on Open Standards and Free Software.

Read Next:
- Dark Web Phishing Kits: Cheap, plentiful and ready to trick you
- Using machine learning for phishing domain detection [Tutorial]
- Meet 'Gophish', the open source Phishing Toolkit that simulates real-world phishing attacks


Max Fatouretchi explains the 3 main pillars of effective Customer Relationship Management

Packt Editorial Staff
03 Jun 2019
6 min read
Customer Relationship Management (CRM) is about process efficiency, reducing operational costs, and improving customer interactions and experience. The never-ending CRM journey can be beautiful and exciting, and it matters to all the stakeholders in a company: everyone needs to feel a sense of ownership right from the beginning of the journey. In this article we will look at the three main pillars of effective customer relationship management.

This article is an excerpt from the book The Art of CRM, written by Max Fatouretchi. Max, founder of the Academy4CRM institute, draws on his experience over 20 years and 200 CRM implementations worldwide. The book covers modern CRM opportunities and challenges based on the author's years of experience, including AI, machine learning, cloud hosting, and GDPR compliance.

Three key pillars of CRM

The main role of the architect is to design a solution that can not only satisfy the needs and requirements of all the different business users, but at the same time have the agility and structure to provide a good foundation for future applications and extensions. Having understood the drivers and the requirements, you are ready to establish the critical quality properties the system will have to exhibit, and to identify scenarios that characterize each one of them. The output of this process is a tree of quality attributes, including usability, availability, performance, and evolution. You always need to consider that the CRM rollout will affect everyone in the company and, above all, that it needs to support the business strategies while improving operational efficiency, enabling business orchestration, and improving customer experience across all channels.

Technically speaking, there are three main pillars for any CRM implementation; these deliver value to the business:

Operational CRM
The operational CRM covers the marketing, sales, and services functionalities. We will cover some case studies later in this book from different projects I've personally engaged with, across a wide area of applications.

Analytical CRM
The analytical CRM uses the data collected by the operational CRM to provide users and business leaders with individual KPIs, dashboards, and analytical tools, enabling them to slice and dice the data about their business performance as they need. This is the foundation for business orchestration.

Collaboration CRM
The collaboration CRM provides the technology to integrate all kinds of communication channels and front-ends with the core CRM, for internal and external users alike: employees, partners, and customers, including so-called bring-your-own-device scenarios. It supports different types of devices that can integrate with the CRM core platform, be administered with the same tools, and leverage the same infrastructure, including security and maintenance. It uses the same platform, the same authentication procedures, and the same workflow engine, while fully leveraging the core entities and data.

With these three pillars in place, you'll be able to create a comprehensive view of your business and manage client communication over all your channels. Through this, you'll have the ingredients for predictive client insights, business intelligence, and marketing, sales, and services automation.
But before we move on, Figure 1.1 illustrates the three pillars of a CRM solution and their related modules, and should help you visualize what we've just talked about. [Figure 1.1: The three pillars of CRM]

It's also important to remember that any CRM journey always begins with a business strategy and/or a business pain point. All the stakeholders must have a clear understanding of where the company is heading and what the business drivers for the CRM investment are. It's equally important for all CRM team members to remember that the potential success or failure of a CRM project rests primarily with the business stakeholders, not the IT staff.

Role-based ownership in CRM

Typically, the business decision makers are the ones raising the need for and sponsoring the CRM solution. Often, but not always, the IT department is tasked with selecting the platform and conducting due diligence with a number of vendors. More importantly, while different business users may have different roles and expectations for the system, everyone needs a common understanding of the company's vision, and the team members need to support the same business strategies at the highest level. The team works together towards the success of the project for the company as a whole, while still having individual expectations. You will also notice that the focus and level of engagement of the people involved in the project will vary over the project's lifecycle.

It helps to categorize the characteristics of team members, from visionary to leadership, stakeholders, and owners. Key sponsors are more visionary and are usually the first to actively support and advocate for a CRM strategy; they define the tactics, and end users ultimately take more ownership during the deployment and operation phases. Figure 1.2 shows the engagement level of stakeholders, key users, and end users in a CRM implementation project: the visionaries set the company's vision and strategies for the CRM, the key users (department leads) are the key sponsors who promote the solution, and the end users engage in reviews and provide feedback. [Figure 1.2: CRM role-based ownership]

Before we start development, we must have identified the stakeholders and have a crystal-clear vision of the functional requirements based on the business requirements. Furthermore, we must convert these into a detailed specification. All this is done by business analysts, project managers, solution specialists, and architects, with the level of IT engagement driven by the outcome of this process.

This will also help you define your metrics for business Key Performance Indicators (KPIs) and for the Total Cost of Ownership (TCO) and Return on Investment (ROI) of the project. These metrics are a compass and a measurement tool for the success of your CRM project: they help you justify your investment and allow you to measure the improvements you've made. You will also use these metrics as a design guide for an efficient solution, one that not only provides the functionality supporting the business requirements and the justification of your investment, but also delivers data for your CRM dashboards. This data can then help fine-tune the business processes for higher efficiency going forward.
In this article, we've looked at the important elements of a CRM system: operational CRM, analytical CRM, and collaboration CRM. Bringing CRM up to date, The Art of CRM shows how to add AI and machine learning, ensure compliance with GDPR, and choose between on-premise, cloud, and hybrid hosting solutions.

Read Next:
- What can Artificial Intelligence do for the Aviation industry
- 8 programming languages to learn in 2019
- Packt and Humble Bundle partner for a new set of artificial intelligence eBooks and videos


Generative Adversarial Networks (GANs): The next milestone in Deep Learning

Savia Lobo
09 Nov 2017
7 min read
With the rise in popularity of deep learning as a concept and a paradigm, neural networks are captivating the interest of machine learning enthusiasts and developers alike, by being able to replicate the human brain for efficient predictions, image recognition, text recognition, and much more. But can these neural networks do something more, or are they just limited to predictions? Can they generate new data by learning from a training dataset? Generative Adversarial Networks (GANs) are here to answer these questions.

So, what are GANs all about?

Unlike traditional neural networks, Generative Adversarial Networks follow an unsupervised machine learning approach. When a neural network is taught to identify a bird, it is fed a huge number of images that include birds as training data, and each picture is labeled before being used to train the models. This labeling of data is both costly and time-consuming. So, how can you train your neural networks with less data? GANs are a great help here. They offer a way to train deep learning algorithms that slashes the amount of data required to train the models, with no labeling of data required at all.

The architecture of a GAN includes a generative network model (G), which produces fake images or texts, and an adversarial network model, also known as the discriminator model (D), which distinguishes between real and fake productions by comparing the content sent by the generator with the training data it has. Both are trained by feeding each of them training data and a competitive goal. [Image source: Learning Generative Adversarial Networks]

GANs in action

GANs were introduced by Ian Goodfellow, an AI researcher at Google Brain. He compares the generator and discriminator models to a counterfeiter and a police officer. "You can think of this being like a competition between counterfeiters and the police," Goodfellow said. "Counterfeiters want to make fake money and have it look real, and the police want to look at any particular bill and determine if it's fake." The discriminator and the generator are trained simultaneously to create a powerful GAN architecture.

Let's peek into how a GAN model is trained:

1. Specify the problem statement and the type of manipulation the GAN model is expected to carry out.
2. Collect data based on the problem statement. For instance, for image manipulation, a lot of images need to be collected to feed in.
3. The discriminator is fed images, some from the training set and some produced by the generator.
4. The discriminator can be termed "successfully trained" if it returns 1 for real images and 0 for fake images.
5. The goal of the generator is to fool the discriminator into outputting 1 for each of its generated images.

At the beginning of training, the discriminator loss (its ability to differentiate real and fake images or data) is minimal. As training advances, the generator loss decreases and the discriminator loss increases, meaning the generator has become able to generate realistic images.
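To make the two-player setup concrete, here is a minimal training-loop sketch in PyTorch (our choice of framework; the article itself is framework-agnostic). The "training set" is just samples from a 1D Gaussian, so the whole thing runs in seconds:

```python
# Minimal GAN training loop: G learns to mimic samples from N(4, 1.25),
# D learns to output 1 for real samples and 0 for generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 4 + 1.25 * torch.randn(64, 1)        # the "training set"

    # Train D: push D(real) toward 1 and D(fake) toward 0.
    fake = G(torch.randn(64, 8)).detach()       # detach: don't update G here
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train G: try to make D label fresh fakes as real (1).
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())    # should drift toward 4
```

The same adversarial structure scales up directly: swap the two small networks for convolutional models and the Gaussian for an image dataset, and you have the image-generating GANs discussed below.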
Real-world applications of GANs

The basic application of GANs is generating photo-realistic images, but there is more to what GANs can do. Some of the areas where GANs are majorly put to use include:

Image synthesis
Image synthesis is one of the primary use cases of GANs. Here, multilayer perceptron models are used in both the generator and the discriminator to generate photo-realistic images based on the training dataset of images.

Text-to-image synthesis
Generative Adversarial Networks can also be utilized for text-to-image synthesis, for example generating a photo-realistic image from a caption. To do this, a dataset of images with their associated captions is given as training data. The dataset is first encoded using a hybrid neural network called a character-level convolutional recurrent neural network, which creates a joint representation of both modalities in a multimodal space for the generator and the discriminator. Both the generator and the discriminator are then trained on this encoded data.

Image inpainting
Images that have missing parts or too much noise are given as input to the generator, which produces a near-real image. For instance, DCGANs (Deep Convolutional GANs) implemented with the TensorFlow framework can generate a complete image from a broken one. DCGANs are a class of CNN-based GANs that stabilize training for efficient usage.

Video generation
Static images can be transformed into short scenes with plausible motions using GANs. These GANs use scene dynamics to add motion to static images. The videos generated by these models are not real, but illusions.

Drug discovery
Unlike text and image manipulation, Insilico Medicine uses GANs to build an artificially intelligent drug discovery mechanism. Here, the generator is trained to predict a drug for a disease that was previously incurable, and the discriminator's task is to determine whether the drug would actually cure the disease.

Challenges in training a GAN

Whenever a competition is laid out, there has to be a distinct winner. In GANs, two models compete against each other, so training them can be difficult. Here are some challenges faced while training GANs:

- Fair training: While training both models, care has to be taken that the discriminator does not overpower the generator. If it does, the generator fails to train effectively; on the other hand, if the discriminator is too lenient, it allows any illegitimate content to be generated.
- Failure to understand the number and dimensions of objects present in a particular image. This usually occurs during the initial learning phase. For instance, GANs at times output an image with more than two eyes, which is not normal in the real world. Sometimes they may present a 3D image like a 2D one, because they cannot differentiate between the two.
- Failure to understand the holistic structure: GANs struggle to identify universally correct images. They may generate an image that is quite unlike how the subject looks in reality, for instance a cat with an elongated body shape, or a cow standing on its hind legs.
- Mode collapse, which occurs when a GAN processes a low-variation dataset. The real world includes complex, multimodal distributions, where data may have several concentrated sub-groups. The problem is that the generator may learn to yield images from just one sub-group, producing inaccurate output; this is a mode collapse.

To tackle these and other challenges that arise while training GANs, researchers have come up with DCGANs (Deep Convolutional GANs), Wasserstein GANs, and CycleGANs to ensure fair training, enhance accuracy, and reduce training time.
AdaGANs have been proposed to eliminate the mode collapse problem.

Conclusion

Although the adoption of GANs is not as widespread as one might imagine, there's no doubt that they could change the way unsupervised machine learning is used today. It is not too far-fetched to think that their implementation could find practical applications not just in image or text processing, but also in domains such as cryptography and cybersecurity. Innovation in newer GAN models with improved accuracy and shorter training times is the key here, and it is surely worth keeping an eye on.


Shoshana Zuboff on 21st century solutions for tackling the unique complexities of surveillance capitalism

Savia Lobo
05 Jun 2019
4 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing, which took place on May 28, covers Shoshana Zuboff's take on how to tackle the complexities of surveillance capitalism, along with the 21st-century solutions she proposes.

Shoshana Zuboff, author of The Age of Surveillance Capitalism, talks about the economic imperatives within surveillance capitalism. Zuboff describes the unilateral claiming of private human experience and its translation into behavioral data, from which predictions of behavior are manufactured. These predictions are sold in a new kind of marketplace that trades exclusively in human futures. When we deconstruct the competitive dynamics of these markets, we come to understand the new imperatives: first, scale, as a lot of data is needed to make good predictions (economies of scale); and second, scope, as a variety of data is needed to make good predictions.

She shared a brief quote from a data scientist: "We can engineer the context around a particular behavior and force change. That way we are learning how to write the music, and then we let the music make them dance." This behavioral modification is systemically institutionalized on a global scale and mediated by a now ubiquitous digital infrastructure.

She further explains that the kind of law and regulation needed today will be 21st-century solutions aimed at the unique 21st-century complexities of surveillance capitalism. She briefly outlines three arenas in which legislative and regulatory strategies can effectively align with the structure and consequences of surveillance capitalism:

1. We need lawmakers to devise strategies that interrupt and in many cases outlaw surveillance capitalism's foundational mechanisms. This includes the unilateral taking of private human experience as a free source of raw material and its translation into data; the extreme information asymmetries necessary for predicting human behavior; the manufacture of computational prediction products based on the unilateral and secret capture of human experience; and the operation of prediction markets that trade in human futures.

2. From the point of view of supply and demand, surveillance capitalism can be understood as a market failure. Every piece of research over the last decades has shown that when users are informed of the backstage operations of surveillance capitalism, they want no part of it; they want protection, they reject it, they want alternatives. We need laws and regulatory frameworks designed to advantage companies that want to break with the surveillance capitalist paradigm. Forging an alternative trajectory to the digital future will require alliances of new competitors who can summon and institutionalize an alternative ecosystem. True competitors that align themselves with the actual needs of people and the norms of market democracy are likely to attract just about every person on earth as their customers.

3. Lawmakers will need to support new forms of citizen action, collective action, just as nearly a century ago workers won legal protection for their rights to organize, to bargain, and to strike.
New forms of citizen solidarity are already emerging: in municipalities that seek an alternative to the Google-owned smart city future, in communities that want to resist the social costs of so-called disruption imposed for the sake of others' gain, and among workers who seek fair wages and reasonable security amid the precarious conditions of the so-called gig economy.

She says, "Citizens need your help but you need citizens, because ultimately they will be the wind behind your wings, they will be the sea change in public opinion and public awareness that supports your political initiatives."

"If together we aim to shift the trajectory of the digital future back toward its emancipatory promise, we resurrect the possibility that the future can be a place that all of us might call home," she concludes.

To know more, you can listen to the full hearing video, titled "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics", on ParlVU.

Read Next:
- WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and iPad and more
- Experts present most pressing issues facing global lawmakers on citizens' privacy, democracy and rights to freedom of speech
- Apple previews iOS 13: Sign in with Apple, dark mode, advanced photo and camera features, and an all-new Maps experience


There’s another player in the advertising game: augmented reality

Guest Contributor
10 Jul 2018
7 min read
Customer purchase does not necessarily depend on the need for the product; it often depends on how well the product has been advertised. Most advertising companies target customer emotions and experiences to sell their product. However, with increasing online awareness, intrusive ads, and an oversaturated advertising space, customers rely more on online reviews before purchasing any product. Companies have to think out of the box to get customers engaged with their product. Augmented Reality (AR) can help companies win their audience back by creating an interactive buying experience on the customer's device, one that converts casual browsing into a successful purchase.

It is estimated that around 4 billion users in the world are actively engaged on the internet. Over half of the world's population is active online, which means having an online platform is beneficial, but this large audience needs to be engaged in the right way, because being online is now the norm. For now, AR is still fairly new in the advertising world, but it's expected that by 2020, AR revenue will outweigh VR (Virtual Reality) revenue by about $120 billion, and it's no surprise this is the case.

Ways AR can benefit businesses

There are many reasons why AR could be beneficial to a business:

Creates an emotional connection
AR provides a platform for advertising companies to engage with their audiences in a unique way, using an immersive advertisement to create a connection that brings consumers' emotions into play. A memorable experience encourages them to make purchases because, psychologically, it was an experience like no other, and one they're unlikely to get elsewhere. It can also help create exposure: because of the excitement users had, they'll encourage others to try it too.

Saves money
It's amazing to think that such advanced technology can be cheaper than traditional methods of advertising. Print advertising can still be extremely expensive in many cases, given that it is a high-volume game and that placing an ad on the front page of a publication is costly. AR ads vary in cost depending on quality, but even some of the simplest forms of AR advertising can be affordable.

Increases sales
Not only is AR a useful tool for promoting goods and services, it also provides an opportunity to increase conversions. One issue many customers have is whether the goods they are purchasing are right for them. AR removes this barrier and enables them to "try out" the product before they purchase, making it more likely that the customer will buy.

Examples of AR advertising

Early adopters have already taken up the technology for showcasing their services and products. It's not mainstream yet, but as the figures above suggest, it won't be long before AR becomes widespread. Here are a few examples of companies using AR technology in their marketing strategy.

IKEA's virtual furniture
IKEA is the famous Swedish home retailer that adopted the technology back in 2013 for its iOS app. The idea allowed potential purchasers to scan the catalogue with their mobile phone and then browse products through the app. When they selected something they thought might be suitable for their home, they could see the virtual furniture in their living space through their phone or tablet. This way, customers could judge whether it was the right product for them.
Pepsi Max's Unbelievable Campaign
Pepsi didn't use the technology to promote their product directly; instead they used it to create buzz for the brand. They installed screens into an everyday bus shelter in London and used them to layer virtual images over a real-life camera feed. Audiences waiting at the shelter could interact with the video through the installed camera. The video currently has over 8 million views on YouTube and has been widely shared on social networks.

Lacoste's virtual trial room on a marker
Lacoste launched an app using marker-based AR technology: users could stand on a marker in the store and try on different LCST-branded trainers. As mentioned before, this is a great way for users to try on apparel before deciding whether to purchase it.

Challenges businesses face with integrating AR into their advertising plan

Although AR is an exciting prospect for businesses and many positives can be taken from implementing it in advertising plans, it has its fair share of challenges. Let's take a brief look at what these could be.

A mobile application is required
AR requires a specific type of application in order to work. For consumers to immerse themselves in an AR world, they first need to download the specific app to their mobile. This means customers end up downloading a different application for each company's AR experience, which is potentially one of the reasons why some companies have chosen not to invest in AR yet. Solutions like augmented reality digital placement (ARDP) are in the process of resolving this problem. ARDP uses media-rich banners to bring AR to life on a consumer's handheld device without requiring multiple app downloads. ARDP requires both AR and app developers to come together to make AR more accessible to users.

Poor hardware specifications
As with video and console games, the quality of graphics in an AR app greatly impacts the user experience. Just as a console player who knows the console's capabilities will be put off by a game with poor graphics, an AR user will disengage from a poorly rendered experience. For AR to work well, the handheld device needs enough hardware power to produce the ideal graphics. Phone companies such as Apple and Samsung have improved this over time with each new phone release, so in the near future we should expect modern smartphones to produce top-of-the-range AR.

Complexity in the development phase
Creating an AR advertisement requires a high level of expertise. Unless you already have AR developers on your in-house team, the development stage of the process may prove difficult for your business. AR software development toolkits have made the process easier, but it still requires a good level of coding knowledge. If the resources aren't available in-house, you can either seek help from app development companies with AR software engineering experience or outsource the work through websites such as Elance, Upwork, and Guru. In short, the development process in ad creation requires a high level of coding knowledge. The increased awareness of the benefits of AR advertising will alert developers everywhere and should be seen as a rising opportunity.
We can expect an increase in demand for AR developers, as those with expertise in the technology will be high on the agenda for many advertising companies and agencies looking to take advantage of the market and engage with their customers differently. For projects that involve AR development, augmented reality developers should be at the forefront of business creative teams, ensuring that the ideas that are created can be implemented correctly.

About Jamie Costello
Jamie Costello is a student and an aspiring freelance writer based in Manchester. His interests are to write about a variety of topics, but his biggest passion is technology. He says, "When I'm not writing or studying, I enjoy swimming and playing games consoles."

Read Next:
- Adobe glides into Augmented Reality with Adobe Aero
- Understanding the hype behind Magic Leap's New Augmented Reality Headsets
- Apple's new ARKit 2.0 brings persistent AR, shared augmented reality experiences and more

Decoding the Chief Robotics Officer (CRO) Role

Aaron Lazar
20 Dec 2017
7 min read
The world is moving swiftly towards an automated culture, and this means more and more machines will enter the fray. There have been umpteen debates on whether this is a good or bad thing, with talk of how fortune tellers might be overtaken by Artificial Intelligence, and so on. From the progress mankind has made benefiting from these machines, we can safely say, for now, that it's only been a boon. With this explosion of moving and thinking metal, there's a strong need for some governance at the top level. It now looks like we need to shift a bit to make more room at the C-level table, 'cos we've got the Master of the Machines arriving! Well, "Master of the Machines" does sound cool, although not many companies would appreciate the "professionalism" of the term. The point is, the rise of a brand new C-level role, the Chief Robotics Officer, seems just over the horizon. Like we did in the Chief Data Officer article, we're going to tell you more about this role and its accompanying responsibilities, and by the end you'll be able to tell whether your organisation needs a CRO.

Facts and figures

As far as I can remember, one of the first Chief Robotics Officers (CROs) was John Connor (#challengeme). Jokes apart, the role was introduced at the Chief Robotics Officer (CRO) Summit after being talked about quite a lot in 2015. You've probably also heard of this role by another name: Chief Autonomy Officer. Gartner predicts that 10% of large enterprises in supply-chain industries will have created a CRO position by 2020. Cisco states that as many as 60% of industries like logistics, healthcare, manufacturing, and energy will have a CRO by 2025. The next generation of AI and robots will affect the workforce, business models, operations, and competitive position of leading organisations. It's therefore not surprising that the Boston Consulting Group projects that the market for robots will reach $67 billion by 2025.

Why all the fuss?

It's quite evident that robots and smart machines will soon take over or redefine the way a lot of jobs are currently performed by humans. This means robots will be working alongside humans, and as such there's a need for the development of principles, processes, and disciplines that govern or manage this collaboration. According to Myria Research, "The CROs (and their teams) will be at the forefront of technology, to translate technology fluency into clear business advantages, and to maintain Robotics and Intelligent Operational Systems (RIOS) capabilities that are intimately linked to customer-facing activities, and ultimately, to company performance". With companies like Amazon, Adidas, Crowne Plaza Hotels, and Walmart already deploying robots worth millions in research and moving in the direction of automation, there is clearly a need for a CRO.

What might the Chief Robotics Officer's responsibilities be?

If you search for job listings for the role, you probably won't find any, because the role is still in the making and there are no properly defined responsibilities. But if we were to guess at what the CRO's responsibilities might be, here's what we could expect:

Piece the puzzle together: CROs will be responsible for bringing business functions like Engineering, HR, and IT together, implementing and maintaining automation technologies within the technical, social, and financial contexts of a company.

Manage the robotics life cycle: The CRO will be responsible for defining and managing the different aspects of the robotics life cycle.
They will need to identify ways and means to improve how robots function and to boost productivity.

Code of conduct: CROs will need to design and develop the principles, processes, and disciplines that manage robots and smart machines, enabling them to collaborate seamlessly with a human workforce.

Define integration: CROs will define robotic environments and integration touch points with other business functions such as supply chain, manufacturing, and agriculture.

Brand guardians: With the addition of non-humans to the workforce, CROs will be responsible for brand health and for any violations caused by their robots.

Define management techniques: CROs will bridge the gap between machines and humans and will develop techniques that humans can use to manage robotic workers.

On a broader level, these responsibilities look quite similar to those of a Chief Information Officer, a Chief Digital Information Officer, or even a Director of IT.

Key CRO skills

With robots in place, you might think people management skills would be less necessary, but not quite. You might also think a CRO needs only technical skills, given the nature of the job. But CROs will still have to interact with humans and manage their collaboration with machines. This brings in the challenge of managing change: not everyone is comfortable working with machines, and a certain amount of understanding and skill will need to be developed. With brand management and other strategic goals involved, the CRO must stay on their toes, moulding the technological side of the business to achieve short- and long-term goals. IT managers, those in charge of automation, and directors skilled in robotics will be interested in scaling up to the position. On another note, over 35% of robotics jobs might be vacant by 2020, owing to the rapid growth of the field.

Futuristic challenges

Some of the major challenges we expect involve managing change and an environment where humans and bots work together. The European Union has been considering treating robots as "electronic persons" with rights in the near future. This will result in complications about who is right and who is wrong. Moreover, there are ideas about rewarding and penalising bots based on their performance. How do you penalise a bot? Maybe penalising would come in the form of not charging the bot for a few days, or formatting its memory if it's been naughty! Rewards could come in the form of a software update, or debugging it more frequently. These probably sound silly at the moment, but you never know what the future might have in store.

The million-dollar question: do we need a CRO?

So far, no companies have publicly announced hiring a CRO, although many manufacturing companies already have senior roles related to robotics, such as Vice President of Advanced Automation and Robotics, or Vice President of Advanced Engineering. However, these roles are purely technical, not strategic. It's clear that there needs to be someone at the high table calling the shots and setting strategy for a collaborative future, a world where robots and humans will work in harmony. Remy Glaisner of Myria Research predicts that CROs will occupy a strategic role on a par with CIOs within the next five to eight years. CIOs might even be replaced by CROs in the long run. You never know: in the future, the CRO might work with a bot themselves, the bot helping to take decisions at an organisational or strategic level.
The sky's the limit! In the end, small, medium, or even large businesses that are already planning to hire a CRO to drive automation are on the right track. A careful evaluation of the benefits of having one in your organisation to lead your strategy will help you decide whether to take the CRO path or not. With automation bound to increase in importance in the coming years, it looks as though strategic representation will be inevitable for people with skills in the field.


Introduction to Keras

Janu Verma
13 Jan 2017
6 min read
Keras is a high-level library for deep learning, built on top of Theano and TensorFlow. It is written in Python and provides a scikit-learn-style API for building neural networks. It enables developers to quickly build neural networks without worrying about the mathematical details of tensor algebra, optimization methods, and numerical methods. The key idea behind Keras is to facilitate fast prototyping and experimentation. In the words of Francois Chollet, the creator of Keras: "Being able to go from idea to result with the least possible delay is the key to doing good research."

Key features of Keras:

- Either the Theano or the TensorFlow backend can be used.
- Supports both CPU and GPU.
- Keras is modular in the sense that each component of a neural network model is a separate, standalone module, and these modules can be combined to create new models. New modules are easy to add.
- Write only Python code.

Installation

Keras has the following dependencies: numpy, scipy, pyyaml, hdf5 (for saving/loading models), theano (for the Theano backend), and tensorflow (for the TensorFlow backend). The easiest way to install Keras is via the Python Package Index (PyPI):

```
sudo pip install keras
```

Example: MNIST digits classification using Keras

We will learn the basic functionality of Keras through an example: a simple neural network for classifying hand-written digits from the MNIST dataset. Classification of hand-written digits was the first big problem where deep learning outshone all other known methods, and this paved the way for deep learning's successful track record.

Let's start by importing the data; we will use the sample of hand-written digits provided with the scikit-learn base package:

```python
from sklearn import datasets

mnist = datasets.load_digits()
X = mnist.data
Y = mnist.target
```

Let's examine the data:

```python
print X.shape, Y.shape
print X[0]
print Y[0]
```

Since we are working with numpy arrays, let's import numpy:

```python
import numpy as np

# set seed for reproducibility
np.random.seed(1234)
```

Now we'll split the data into training and test sets by randomly picking 70% of the data points for training and keeping the rest for validation:

```python
from sklearn.cross_validation import train_test_split

train_X, test_X, train_y, test_y = train_test_split(X, Y, train_size=0.7, random_state=0)
```

Keras requires the labels to be one-hot encoded, i.e., the labels 1, 2, 3, etc. need to be converted to vectors like [1,0,0,...], [0,1,0,...], [0,0,1,...], respectively:

```python
from keras.utils import np_utils

def one_hot_encode_object_array(arr):
    '''One hot encode a numpy array of objects (e.g. strings)'''
    uniques, ids = np.unique(arr, return_inverse=True)
    return np_utils.to_categorical(ids, len(uniques))

# One hot encode labels for training and test sets.
train_y_ohe = one_hot_encode_object_array(train_y)
test_y_ohe = one_hot_encode_object_array(test_y)
```

We are now ready to build a neural network model. Start by importing the relevant classes from Keras:

```python
from keras.models import Sequential
from keras.layers import Dense, Activation
```

In Keras, we have to specify the structure of the model before we can use it. A Sequential model is a linear stack of layers. There are other alternatives in Keras, but we will stick with Sequential for simplicity:

```python
model = Sequential()
```

This creates an instance of the constructor; we don't have anything in the model as yet. As stated previously, Keras is modular, and we can add different components to the model via modules. Let's add a fully connected layer with 32 units. Each unit receives an input from every unit in the input layer, and since the number of units in the input is equal to the dimension (64) of the input vectors, we need the input shape to be 64. Keras uses a Dense module to create a fully connected layer:

```python
model.add(Dense(32, input_shape=(64,)))
```

Next, we add an activation function after the first layer. We will use sigmoid activation; other choices like relu are also possible:

```python
model.add(Activation('sigmoid'))
```

We can add any number of layers this way, but for simplicity we will restrict ourselves to one hidden layer. Now add the output layer. Since the output is a 10-dimensional vector, we require the output layer to have 10 units:

```python
model.add(Dense(10))
```

Add an activation for the output layer. In classification tasks we use softmax activation, which provides a probabilistic interpretation of the output labels:

```python
model.add(Activation('softmax'))
```

Next, we need to configure the model. There are a few more choices to make before we can run it, e.g., the optimization method, the loss function, and the metric of evaluation:

```python
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
```

The compile method configures the model, which is now ready to be trained on data. Similar to sklearn, Keras has a fit method for training:

```python
model.fit(train_X, train_y_ohe, nb_epoch=10, batch_size=30)
```

Training neural networks often involves the concept of minibatching, which means showing the network a subset of the data, adjusting the weights, and then showing it another subset of the data. When the network has seen all the data once, that's called an "epoch". Tuning the minibatch/epoch strategy is a somewhat problem-specific issue. After the model has trained, we can compute its accuracy on the validation set:

```python
loss, accuracy = model.evaluate(test_X, test_y_ohe)
print accuracy
```
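As a quick sanity check (an addition of ours, not part of the original walkthrough), you can also inspect a few raw predictions. predict returns one 10-dimensional probability vector per sample, and the predicted digit is simply the argmax:

```python
# Inspect a few predictions from the trained model (illustrative addition).
probs = model.predict(test_X[:5])   # shape (5, 10): softmax probabilities
print np.argmax(probs, axis=1)      # predicted digit labels
print test_y[:5]                    # true labels, for comparison
```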
Conclusion

We have seen how a neural network can be built using Keras, and how easy and intuitive the Keras API is. This is just an introduction, a hello-world program, if you will. There is a lot more functionality in Keras, including convolutional neural networks, recurrent neural networks, language modeling, deep dream, and more.

About the author

Janu Verma is a Researcher at the IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology, and healthcare analytics. He has held research positions at Cornell University, Kansas State University, the Tata Institute of Fundamental Research, the Indian Institute of Science, and the Indian Statistical Institute. He has written papers for IEEE Vis, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors Journals, and others. His current focus is on the development of visual analytics systems for prediction and understanding. He advises startups and companies on data science and machine learning in the Delhi-NCR area; email to schedule a meeting.
4 misconceptions about data wrangling
Sugandha Lahoti
17 Oct 2018
4 min read
Around 80% of the time in data analysis is spent on cleaning and preparing data. It is, however, an important task, and a prerequisite to the rest of the data analysis workflow: visualization, analysis, and reporting. Given its importance, there are certain myths associated with data wrangling that developers should be cautious of. In this post, we will discuss four such misconceptions.

Myth #1: Data wrangling is all about writing SQL queries

There was a time when data processing required data to be presented in a relational manner so that SQL queries could be written against it. Today, there are many other types of data sources beyond classic static SQL databases. Often, an engineer has to pull data from diverse sources such as web portals, Twitter feeds, sensor fusion streams, or police or hospital records. Static SQL queries can help only so much in those diverse domains. A programmatic approach, which is flexible enough to interface with myriad sources and able to parse raw data through clever algorithmic techniques and the use of fundamental data structures (trees, graphs, hash tables, heaps), will be the winner.

Myth #2: Knowledge of statistics is not required for data wrangling

Quick statistical tests and visualizations are always invaluable for checking the 'quality' of the data you sourced. These tests can help detect outliers and wrong data entries without running complex scripts. For effective data wrangling, you don't need knowledge of advanced statistics, but you must understand basic descriptive statistics and know how to execute them using built-in Python libraries (a short sketch of such quick checks appears at the end of this post).

Myth #3: You have to be a machine learning expert to do great data wrangling

Deep knowledge of machine learning is certainly not a prerequisite for data wrangling. It is true that the end goal of data wrangling is often to prepare the data for a downstream machine learning task. As a data wrangler, you do not have to know all the nitty-gritty of your project's machine learning pipeline. However, it is always a good idea to talk to the machine learning expert who will use your data and understand the data structure, interface, and format he/she needs to run the model fast and accurately.

Myth #4: Deep knowledge of programming is not required for data wrangling

As explained above, the diversity and complexity of data sources require that you are comfortable with fundamental data structures and how a programming language paradigm handles them. Deepening your knowledge of the programming framework (Python, for example) will surely help you come up with innovative methods for dealing with data-source interfacing and data-cleaning issues. The speed and efficiency of your data processing pipeline can often benefit from advanced knowledge of basic algorithms, e.g., search, sort, graph traversal, and hash table building. Although built-in methods in standard libraries are optimized, having this knowledge gives you an edge in any situation.

You just read a guest post from Tirthajyoti Sarkar and Shubhadeep Roychowdhury, the authors of Data Wrangling with Python. We hope these points help you realize that data wrangling is not as difficult as it seems. Have fun wrangling data!

About the authors

Dr. Tirthajyoti Sarkar works as a Sr. Principal Engineer in the semiconductor technology domain, where he applies cutting-edge data science/machine learning techniques for design automation and predictive analytics. Shubhadeep Roychowdhury works as a Sr. Software Engineer at a Paris-based cybersecurity startup. He holds a Master's degree in Computer Science from West Bengal University of Technology and certifications in Machine Learning from Stanford. Don't forget to check out Data Wrangling with Python to learn the essential basics of data wrangling using Python.

30 common data science terms explained
Python, Tensorflow, Excel and more – Data professionals reveal their top tools
How to create a strong data science project portfolio that lands you a job
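As promised under Myth #2, here is a minimal sketch of the kind of quick statistical quality checks a data wrangler runs before any modeling. It assumes pandas is available; the file name loans.csv and the column amount are hypothetical stand-ins for your own data.

# A quick data-quality pass of the kind Myth #2 refers to.
import pandas as pd

df = pd.read_csv("loans.csv")  # hypothetical dataset

# Basic descriptive statistics: count, mean, std, min/max, quartiles.
print(df["amount"].describe())

# Simple outlier check: flag rows more than 3 standard deviations from the mean.
mean, std = df["amount"].mean(), df["amount"].std()
outliers = df[(df["amount"] - mean).abs() > 3 * std]
print(len(outliers), "potential outliers")

# Wrong-entry checks: missing values and impossible negatives.
print(df["amount"].isna().sum(), "missing values")
print((df["amount"] < 0).sum(), "negative amounts")

None of this requires advanced statistics, but it catches outliers and bad entries early, before they poison the downstream analysis.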
DeepVariant: Using Artificial Intelligence into Human Genome Sequencing
Abhishek Jha
05 Dec 2017
5 min read
In 2003, when The New York Times announced that the human genome project had been successfully completed two years ahead of schedule (leave aside the conspiracy theory that the genome was never 'completely' sequenced), it heralded a new dawn in the history of modern science. The challenge thereafter was to make sense of the staggering data that became available. High-throughput sequencing technology came to revolutionize the processing of genomic data, but had its own limitations (such as the high rate of erroneous base calls produced). Google has now launched an artificial intelligence tool, DeepVariant, to analyze the huge data resulting from sequencing the genome. It took two years of research for Google to build DeepVariant. It is a combined effort from Google's Brain team, a group that focuses on developing and applying AI techniques, and Verily Life Sciences, another Alphabet subsidiary focused on the life sciences.

How does DeepVariant make sense of your genome?

DeepVariant uses the latest deep learning techniques to turn high-throughput sequencing readouts into a picture of a full genome. It automatically identifies small insertion and deletion mutations and single-base-pair mutations in sequencing data. Ever since high-throughput sequencing made genome sequencing more accessible, the data produced has at best offered an error-prone snapshot of a full genome. Researchers have found it challenging to distinguish small mutations from random errors generated during the sequencing process, especially in repetitive portions of a genome. A number of tools and methods have come out to interpret these readouts (both publicly and privately funded), but all of them have used simpler statistical and machine-learning approaches to identify mutations. Google claims DeepVariant offers significantly greater accuracy than all previous classical methods.

DeepVariant transforms the task of variant calling (the process of identifying variants from sequence data) into an image classification problem well suited to Google's existing technology and expertise (a toy sketch of this idea appears at the end of this piece). Google's team collected millions of high-throughput reads and fully sequenced genomes from the Genome in a Bottle (GIAB) project, and fed the data to a deep-learning system that interpreted sequenced data with a high level of accuracy. "Using multiple replicates of GIAB reference genomes, we produced tens of millions of training examples in the form of multi-channel tensors encoding the HTS instrument data, and then trained a TensorFlow-based image classification model to identify the true genome sequence from the experimental data produced by the instruments," Google said. The result has been remarkable. Within a year, DeepVariant went on to win first place in the PrecisionFDA Truth Challenge, outperforming all state-of-the-art methods in accurate genetic sequencing. "Since then, we've further reduced the error rate by more than 50%," the team claims.

Image Source: research.googleblog.com

"The success of DeepVariant is important because it demonstrates that in genomics, deep learning can be used to automatically train systems that perform better than complicated hand-engineered systems," says Brendan Frey, CEO of Deep Genomics, one of several companies using AI on genomics in the search for potential drugs.

DeepVariant is 'open' for all

The best thing about DeepVariant is that it has been launched as open-source software. This will encourage enthusiastic researchers to collaborate and possibly accelerate its adoption in solving real-world problems. "To further this goal, we partnered with Google Cloud Platform (GCP) to deploy DeepVariant workflows on GCP, available today, in configurations optimized for low-cost and fast turnarounds using scalable GCP technologies like the Pipelines API," Google said. This paired set of releases could facilitate a scalable, cloud-based solution to handle even the largest genomics datasets.

The road ahead: What DeepVariant means for the future

According to Google, DeepVariant is the first of "what we hope will be many contributions that leverage Google's computing infrastructure and machine learning expertise" to better understand the genome and provide deep learning-based genomics tools to the community. This is, in fact, all part of a "broader goal" to apply Google technologies to healthcare and other scientific applications. As AI starts to propel different branches of medicine toward big leaps forward in the coming years, there is a whole lot of medical data to mine and derive insights from. But with genomic medicine, the scale is huge. We are talking about an unprecedented set of data that is equally complex. "For the first time in history, our ability to measure our biology, and even to act on it, has far surpassed our ability to understand it," says Frey. "The only technology we have for interpreting and acting on these vast amounts of data is AI. That's going to completely change the future of medicine."

These are exciting times for medical research. In 1990, when the human genome project was initiated, it met with a lot of skepticism from many people, scientists and non-scientists alike. But today, we have completely worked out each A, T, C, and G that makes up the DNA of all 23 pairs of human chromosomes. After high-throughput sequencing made genomic data accessible, Google's DeepVariant could just be the next big thing to take genetic sequencing to a whole new level.
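As a toy sketch of the idea referenced above, casting variant calling as image classification, here is what such a model could look like in tf.keras. This is not DeepVariant's actual architecture: the pileup tensor shape (100, 221, 7) and the three genotype classes are assumptions standing in for however one chooses to encode the multi-channel read data.

# Toy illustration: variant calling as image classification.
# Assume read pileups are pre-encoded as 100x221x7 multi-channel tensors,
# labeled 0 = no variant, 1 = heterozygous, 2 = homozygous variant.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(100, 221, 7)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # three genotype classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# pileup_tensors: float32 array of shape (num_sites, 100, 221, 7)
# genotypes: int array of shape (num_sites,)
# model.fit(pileup_tensors, genotypes, epochs=5, batch_size=64)

The point of the sketch is the framing, not the layers: once candidate sites are rendered as tensors, an off-the-shelf image classifier replaces the hand-engineered statistical rules that earlier variant callers relied on.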
Who are set to be the biggest players in IoT?
Raka Mahesa
07 Aug 2017
5 min read
The Internet of Things, also known as IoT, may sound like a technological buzzword, but in reality it's a phenomenon taking place right now, as more and more devices get connected to the Internet. It's an ecosystem that generated $6.7 billion in revenue in 2015 alone and is projected to grow even more in the future. So, with those kinds of numbers, who are the biggest players in an ecosystem of such high value?

Let's clear up one thing before we go further. How exactly do we define "the biggest players" in a technological ecosystem? After all, there are many, many ways to measure the size of an industry player. The quickest way is probably to simply check their revenues or their market-share numbers. Another way, and the one we'll use in this post, is to see how much influence a company has on the ecosystem.

Whatever action they take, the biggest players in an ecosystem have an impact that can be felt throughout the industry. For example, when Apple unveiled that the latest iPhone had no headphone jack, many smartphone manufacturers followed suit, and a lot of audio hardware vendors introduced new wireless headsets. Or imagine if Samsung, the company with the biggest smartphone market share, suddenly stopped using Android and instead used their own mobile platform; the impact would be massive. The bigger the player, the bigger the impact it has on the ecosystem.

IoT companies

So, with that cleared up, let's talk about IoT companies. Companies that dabble in the IoT ecosystem can be divided into two categories: those that focus on consumer products, like Amazon and Apple, and those that focus on enterprise products, like Cisco, Oracle, and Salesforce. Companies that offer solutions for both segments, like Samsung, tend to fall into the consumer-focused category.

Companies that focus on enterprise products are, with a few exceptions, driven more by their sales performance than by technological innovation. Because of that, they tend not to have as much impact on the ecosystem as their consumer-focused counterparts. That's why we'll focus on consumer-product companies when talking about the biggest players in IoT.

Big players: ARM and Amazon

Well, it's finally time for the big reveal of who the biggest players in the Internet of Things are. The IoT ecosystem is pretty interesting; it has so many components that it's quite difficult for one single company to tackle the whole thing. And it has not matured yet, which means there are still many segments with an empty leading position, ready to be taken by any company that can rise to the challenge.

That said, there is actually one company that drives the whole ecosystem: ARM, the company whose chipset architecture became the basis of the entire smartphone industry. If you have a smart device that can process information and do calculations, there is a high chance it's powered by an ARM-based chipset. With such widespread usage, any technological progress made by the company increases the capability of IoT technology as a whole.

While ARM has the market-share advantage on the hardware side, it's Amazon who has it on the software side with AWS. Similar to how Google has a hand in every aspect of the web, Amazon also seems to have a hand in every part of IoT. They provide the services to connect smart devices to the Internet, as well as the platform for developers to host their cloud apps. And for mainstream consumers, Amazon directly sells smart devices like Amazon Dash and Amazon Echo, the latter of which also serves as a platform for developers to create home applications. In short, wherever you look in the IoT ecosystem, Amazon usually has a part in it.

Wearables

If there is one segment of IoT that Amazon doesn't seem interested in, it is probably wearables. It was predicted that this market segment would be dominated by smartwatches, but instead the fitness trackers from Fitbit won the category. With wearable devices being much more personal than smartphones, if Fitbit can expand beyond fitness tracking, they'll become a dominant force in the IoT ecosystem.

The smart home

Surprisingly, no one seems to have conquered the most obvious space for the Internet of Things: the smart home segment. The leading companies in this segment seem to be Amazon, Apple, and Google, but none of them is the dominant force yet. Apple offers its HomeKit library, which doesn't seem to be attracting much interest, though maybe they'll have better luck with the Apple HomePod. Google is actually the one with the most potential here, with Google Home, the Google Cloud IoT service, and an embedded version of Android. However, other than Google Home, these projects are still in beta and not ready for launch yet.

Those are the biggest players in the still-evolving ecosystem of the Internet of Things. It's still pretty early, however; a lot of things can still change, and what is true right now may not be true in a couple of years. After all, before the iPhone, no one expected Apple to be the biggest player in the mobile phone industry.

About the author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99
5 more 2D Game Engines I didn't consider
Ed Bowkett
09 Jan 2015
4 min read
In my recent blog, I covered 5 game engines that you can use to create 2D games. The response in the comments and on other social media websites was encouraging, but it also pointed out other 2D game engines. Having briefly looked at these, I thought it would be a good idea to list those alternatives here. In this blog we will cover 5 more game engines you can use to create 2D games. 2D games are appealing for a wide range of reasons: they're great for the indie game scene, they're great for learning the fundamentals of game development, and they're a fun place to start coding. I've thrown in some odd ones that you might not have considered before, and remember, this isn't a definitive list! Just my thoughts!

LÖVE2D

LÖVE2D is a 2D game framework that you can use to make 2D games using Lua, a lightweight scripting language. It runs on Windows, Linux, and Mac and costs nothing to use. The code is easy enough to pick up, though it is useful to learn Lua as well. Once you get over that hurdle, games can be created with ease, and clones of Mario and Snake have become ever popular with this engine. It has support for Box2D physics, networking, and user-created plugins. The possible downside to LÖVE is that it's desktop-only; still, it's a good starting point for learning how to program games.

Libgdx

Libgdx is a game development framework written in Java. It's cross-platform, which is a major plus when developing games, and can be deployed across Windows, Linux, and Mac. It's also free, which is a benefit to aspiring game developers. It has third-party support for tools such as Spine and Nextpeer, while also offering Box2D physics and rendering capabilities through OpenGL. Example projects include puzzle games, tower defense games, and platformers. Extremely fun for indie developers and hobbyists. Just learn Java……

GameSalad

Similar to its rivals Construct 2 and GameMaker, GameSalad is a game engine aimed at non-programmers. It uses a drag-and-drop system, similar to its competitors. A further benefit of GameSalad is that it doesn't require any programming knowledge; instead you use Actors that define the rules and behaviors of game objects. It's cross-platform, which is another big plus; however, to unlock its full cross-platform capabilities you need to pay $299 a year, which is excessive for a game engine that, while good for hobbyists and beginner game developers, doesn't offer great cost-value for what it does. Still, you can try it for free, and it has the same qualities as other engines.

Stencyl

Stencyl is a game engine that is free for Flash (other platforms need to be paid for) and is another great alternative among drag-and-drop game engines. It again supports multiple platforms, along with shaders, an actor system, animations, and iOS 8. The cost isn't too bad either: publishing to web and desktop is priced at $99 a year, and the studio tier at $199 a year.

V-Play

V-Play is a game development tool I actually know very little about, but it was pitched by our customers as a good alternative. V-Play appears to be component-based, written in JavaScript and QML (the Qt markup language), and cross-platform; it even appears to have plugins for monetizing your game and analytics for assessing it. It also allows for touch input, includes level design, and uses Box2D as its physics engine. This is brief, as I've not really come across V-Play before and don't know too much about it; possibly one to write a blog on in the future!

This blog was to show off the other frameworks I had not considered in my previous blog, proposed by our readers. It shows that there are always more options out there; it all depends on what you want, how much you want to spend, and the quality you expect. These are all valid choices and have opened me up to game development tools I'd not tinkered with before, so Christmas should be fun for me!
Minecraft: The Programmer's Sandbox
Aaron Mills
30 Jan 2015
6 min read
If you are familiar with gaming or the Java programming language, you've almost certainly heard of Minecraft. This extremely popular game has captured the imagination of a generation. The premise of the game is simple: you are presented with a near-infinitely explorable world built out of textured one-meter cubes. Modifying the landscape around you is simple and powerful. There are things to craft, such as weapons and mechanisms. There are enemies to fight or hide from, animals to tame, and crops to farm. However, the game doesn't actually provide you with any kind of goal or story beyond what you define for yourself. This makes Minecraft the perfect example of a Sandbox Game, if not the gold standard. But more than that, it has also become a sandbox for people who like to write code. So let us take a moment and delve into why this is so and what it means for Minecraft to have become "The Programmer's Sandbox".

Originally the product of one man, Markus "Notch" Persson, Minecraft is written entirely in Java. The choice of Java as the language has helped define Minecraft in many ways. On the surface, we have the innate portability that Java provides. But when you dig deeper, Java opens up a whole new realm of possibilities. This is largely because of the inherent ease with which Java applications can be inspected, decompiled, and modified. This means that any part of the code can be changed in any way, allowing us to rewrite the game as we desire. This has led to a large and vibrant modding community, perhaps even the largest such community ever to exist.

The Minecraft modding community would not be what it is today without the herculean efforts of several groups of people, since the raw code isn't particularly modding-friendly. It's obfuscated and not very extensible in ways that let mods exist side by side. But the efforts of teams such as the Mod Coder Pack (MCP) and Forge have changed that. Today, getting started with Minecraft modding is as simple as downloading Forge, running a one-line setup command (gradlew setupDecompWorkspace eclipse), and pointing your IDE at the resulting folder. From there you can dive straight into the code and create a mod that will be compatible with the vast majority of all other mods. And this opens up realms of possibilities for anyone with an interest in seeing their own creations become part of a vibrant explorable world. It is this desire that has driven the community to innovate and design the tools to let anyone just jump into Minecraft modding and get their feet wet in minutes.

As an example, here is a simple mod that I have created that adds a block in Minecraft. This is simple, but will give you an idea of what an example looks like:

package com.example.examplemod;

import cpw.mods.fml.common.Mod;
import cpw.mods.fml.common.Mod.EventHandler;
import cpw.mods.fml.common.event.FMLInitializationEvent;
import cpw.mods.fml.common.registry.GameRegistry;
import net.minecraft.block.Block;
import net.minecraft.block.BlockStone;

@Mod(modid = ExampleMod.MODID, version = ExampleMod.VERSION)
public class ExampleMod {
    public static final String MODID = "examplemod";
    public static final String VERSION = "1.0";

    @EventHandler
    public void init(FMLInitializationEvent event) {
        // Create a stone-based block, give it a name, and point it at our texture.
        Block simpleBlock = new BlockStone()
                .setBlockName("simpleBlock")
                .setBlockTextureName("examplemod:simpleBlock");
        // Register the block with the game so it shows up in-game.
        GameRegistry.registerBlock(simpleBlock, "simpleBlock");
    }
}

And here is a figure showing the block from the mod in Minecraft.

The Minecraft modding community consists of a wide range of people, from self-taught programmers to industry code experts. The reason such a wide range of modders exists is that the code is both accessible enough for the novice and flexible enough for the expert. Adding a new decorative block can be done with just a few simple lines of code, but mods can also become major projects with line counts in the tens or even hundreds of thousands. So whether this is your first time writing code, or you are a Java guru, you can quickly and easily bring your creations to life in the sandbox world of Minecraft.

People have created all kinds of crazy new things for Minecraft: Massive Toroidal Fusion Reactors, Force-fields, ICBMs, Arcane Magic Runes, Flying Magic Carpets, Pipes for pumping fluids around, Zombie Apocalypse Mini-Games, and even entirely new dimensions with giant Bosses and quests and loot. You can even find a mod that lets you visit the Moon. There really is no limit to what you can add to Minecraft. In many cases, people have taken elements from other game genres and incorporated them into the game: RPG Leveling Systems, Survival Horror Adventures, FPS Shooters, and more. These are just some examples of things that people have actually added to the game. The simplicity and flexibility of the game makes this possible.

There are several factors that make Minecraft a particularly accessible game to mod. For one, the art assets are all fairly simple. You don't need HD textures or high-poly models; the game's art style intentionally avoids these. It instead opts for pixel art and blocky models. So even if you are a genius coder with no real skills in textures and modeling, it's still possible to make something that looks good and fits into the game. But the reverse is also true: if you are a great artist but your coding skills are weak, you can still create awesome decorative blocks. And if you need help with code, there are dozens, if not hundreds, of open source mods to learn from and copy.

So yes, Minecraft may be a fun sandbox game by itself. But if you are the type of person who wants to get your hands a bit dirty, it opens up a whole realm of possibilities, a realm where you are no longer limited by the vision of the game's creators but can make your own vision a reality. This is the true beauty of Minecraft: it really can be whatever you want it to be.

About the Author

Aaron Mills was born in 1983 and lives in the Pacific Northwest, which is a land rich in lore, trees, and rain. He has a Bachelor's Degree in Computer Science and studied at Washington State University Vancouver. He is best known for his work on the Minecraft mod Railcraft, but has also contributed significantly to the Minecraft mods Forestry and Buildcraft, as well as making some contributions to the Minecraft Forge project.
What can happen when artificial intelligence decides on your loan request
Guest Contributor
23 Feb 2019
5 min read
As the number of potential borrowers continues to grow rapidly, loan companies and banks are having a hard time figuring out how likely their customers are to pay back. Getting reliable information on clients' creditworthiness is probably the greatest challenge for most financial companies, and it especially concerns those clients who don't have any credit history yet. There is no denying that the alternative lending business has become one of the most influential financial branches in both the USA and Europe. Debt is a huge business these days, and it requires a lot of resources. In such a challenging situation, any means of improving productivity and reducing the risk of mistakes in financial activities is warmly welcomed. This is how Artificial Intelligence became the redemption for loan providers.

Fortunately for lenders, AI deals with this task by following the borrowers' digital footprint. For example, some applications for digital lending collect and analyze an individual's web browsing history (upon receiving their personal agreement on the use of this information). In some markets, such as China and Africa, they may also look through social network profiles, geolocation data, and the messages sent to friends and family, even counting the number of punctuation mistakes. The collected information helps loan providers make the right decision on their clients' creditworthiness and avoid long loan processes.

When AI Overfits

Unfortunately, there is the other side of the coin. There's a theory which states that people who pay for their gas inside the petrol station, not at the pump, are usually smokers. And that is a group whose creditworthiness is estimated to be low. But what if this poor guy simply wanted to buy a Snickers? This example shows that if a lender takes the information gathered by AI software at face value, without checking it carefully, they may easily end up making bad mistakes and misinterpretations. Artificial Intelligence in the financial sector may significantly reduce costs, effort, and further financial complications, but there are hidden social costs such as the above. A robust analysis, design, implementation, and feedback framework is necessary to meaningfully counter AI bias.

Other Use Cases for AI in Finance

Of course, there are also plenty of examples of how AI helps to improve customer experience in the financial sector. Some startups use AI software to help clients find the company that is best at providing the service they require, juxtaposing the clients' requirements with the companies' services to find perfect matches. Even though this technology is reminiscent of how dating apps work, such applications can drastically save time for both parties and help borrowers pay faster. AI can also be used for streamlining finances: it helps banks and alternative lending companies automate some of their working processes, such as basic customer service, contract management, or transaction monitoring.

A good example is Upstart, the pet project of two former Google employees. The startup was originally aimed at helping young people lacking a credit history to get a loan or another kind of financial support. For this purpose, the company uses the clients' educational background and experience, taking into account things such as their attained degrees and school/university attendance. However, such an approach to lending may end up being a little snobbish: it can simply overlook large groups of the population who can't afford higher education. As a result of an insufficient educational background, these people can be deprived of the opportunity to get a loan. Nonetheless, one of the main goals of the company was automating as many of its operating procedures as possible. By 2018, more than 60% of all their loans had been fully automated, with more to come.

We cannot automate fairness and opportunity, yet

The implementation of machine learning in providing loans based on people's digital footprint may lead to ethical and legal disputes. Even today, some people state that the use of AI in the financial sector has encouraged inequality in the number of loans provided to the black and white populations of the USA. They believe that AI continues the bias against minorities and makes black people "underbanked." Both lending companies and banks should remember that the quality of work done these days with the help of machine learning methods highly depends on people: both the employees who use the software and the AI developers who create and fine-tune it. So we should see AI in loan management as a useful tool, but not as a replacement for humans.

Author Bio

Darya Shmat is a business development representative at Iflexion, where Darya expertly applies 10+ years of practical experience to help banking and financial industry clients find the right development or QA solution.

Blockchain governance and uses beyond finance – Carnegie Mellon university podcast
Why Retailers need to prioritize eCommerce Automation in 2019
Glancing at the Fintech growth story – Powered by ML, AI & APIs
Social engineering attacks – things to watch out for while online
Savia Lobo
16 Jul 2018
4 min read
The rise in the adoption of the internet is directly proportional to the rise in cybersecurity attacks. We assume that just by having layers of firewalls, or by browsing over 'https' (where the 's' stands for secure), we are protected from all the malware out there. We also feel safe having Google hold all our credentials, just because it is Google! All this is a myth. In fact, the biggest loophole behind security breaches is us, humans! It is innate human nature to help out those in need, or to get curious about a sale or a competition that can fetch a huge sum of money. These and many other impulses act as bait that hackers and attackers use to phish for account credentials. They lead to social engineering attacks, which, if unnoticed, can severely compromise one's security online.

Common Social Engineering Attacks

Phishing

This method is analogous to fishing, where bait is laid to attract fish. Here, the bait is emails sent out to customers with a malicious attachment or a clickable link. These emails are sent to millions of users, who are tricked into logging into fake versions of popular websites, for instance IBM, Microsoft, and so on. The main aim of a phishing attack is to gain login information, for instance passwords, bank account information, and so on. However, some attacks might be targeted at specific people or organizations; such targeted phishing is known as spear phishing. In a spear phishing attack, the attackers craft a message for a specific individual. The target, for instance a manager of a renowned firm, is identified by browsing his/her profile on social media sites such as Twitter or LinkedIn. The attacker then creates a spoofed email address that makes the manager believe the mail is from a higher authority. The mail may ask questions about important credentials that should remain confidential between managers and higher authorities.

Ads

Often while browsing the web, users encounter flash advertisements asking them for permission to allow a blocked cookie. These pop-ups can, at times, be malicious. Sometimes these malicious ads attack the user's browser and redirect them to a new domain, where the browser window can't be closed. In other cases, instead of redirecting to a new site, the malicious content appears on the current page, using an HTML iframe. Once either scenario succeeds, the attacker tries to trick the user into downloading a fake Flash update, filling in information on a phishing form, or believing that their system is infected with malware.

Lost USB Drive

What would you do if you found a USB drive stranded next to a photocopy machine or near the water cooler? You would insert it into your system to find out who the owner really is. Most of us fall prey to such an urge to help, and this is exactly what USB baiting exploits: a social engineering attack where hackers load a malicious file onto a USB drive and drop it near a crowded place or a library. USB baiting appeared in the famous American show Mr. Robot in 2016, where the USB key needed only a fraction of a second to start using HID spoofing to gather FBI passwords. A similar flash drive attack actually took place in 2008, when an infected flash drive was plugged into a US military laptop in the Middle East. The drive caused a digital breach that was attributed to a foreign intelligence agency.

How can you protect yourself from these attacks?

To avoid such mistakes, which can lead to huge financial losses, organizations should give their employees a good training program. In such a program, employees can be made aware of the different kinds of social engineering attacks and the channels via which attackers approach their targets. One way could be giving them hands-on experience by putting them in the attacker's shoes and letting them perform an attack. Tools such as Kali Linux can be used to understand the techniques attackers rely on and how to safeguard individual or organizational information. A small programmatic defense against one common phishing trick, lookalike sender domains, is sketched at the end of this piece.

The following video will help you learn how a social engineering attack works. The author has made use of Kali Linux to better explain the attack practically.

YouTube has a $25 million plan to counter fake news and misinformation
10 great tools to stay completely anonymous online
Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news
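As referenced above, here is a minimal sketch of one programmatic defense against spear phishing: flagging sender domains that look deceptively similar to, but are not exactly, a trusted domain. The trusted-domain list and the 0.8 similarity threshold are hypothetical; difflib ships with the Python standard library.

# Toy lookalike-domain check: a spoof like "examp1e.com" nearly matches
# a trusted domain, while an exact match or an unrelated domain does not.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "example.org"}  # hypothetical allow-list

def is_suspicious(sender_domain, threshold=0.8):
    """Flag domains that nearly match a trusted domain but aren't identical."""
    if sender_domain in TRUSTED_DOMAINS:
        return False  # exact match: trusted
    for trusted in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, sender_domain, trusted).ratio()
        if similarity >= threshold:
            return True  # close enough to look like a deliberate spoof
    return False

print(is_suspicious("examp1e.com"))   # True:  likely a spoof
print(is_suspicious("example.com"))   # False: exact trusted match
print(is_suspicious("unrelated.io"))  # False: not similar to anything trusted

A check like this is no substitute for training, but it catches the mechanical half of a spear phishing attempt before the human half gets a chance to be fooled.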