
Tech Guides

851 Articles

The War on Data Science: Python versus R

Akram Hussain
30 Jun 2014
7 min read
Data science

The relatively new field of data science has taken the world of big data by storm. Data science gives valuable meaning to large sets of complex and unstructured data, with a focus on concepts like data analysis and visualization. Meanwhile, machine learning, a valuable concept from the field of artificial intelligence, has been adopted by organizations and is becoming a core area for many data scientists to explore and implement. To carry out these tasks, data scientists need powerful languages. R and Python currently dominate the field, but which is better, and why?

The power of R

R offers a broad, flexible approach to data science. As a programming language, R focuses on allowing users to write algorithms and computational statistics for data analysis, and it can be very rewarding to those who are comfortable using it. One of R's greatest benefits is its ability to integrate with other languages like C++, Java, and C, and with tools such as SPSS, Stata, and Matlab. Its rise to prominence as one of the most powerful languages for data science was supported by a strong community and more than 5,600 available packages. However, R is very different from other languages; it is not as easily applicable to general programming (not to say it can't be done). R's strength, and its focus on communicating with every data analysis platform, also limit its usefulness outside this category: game development, web development, and so on are all achievable, but there is little benefit to using R in those domains. R is also difficult to adopt, with a steep learning curve, even for those who have experience with statistical tools like SPSS and SAS.

The violent Python

Python is a high-level, multi-paradigm programming language. It has emerged as one of the more promising languages of recent times thanks to its easy syntax and interoperability with a wide variety of ecosystems. More interestingly, Python has caught the attention of data scientists over the years; thanks to its object-oriented features and very powerful libraries, it has become a go-to language for data science, with many arguing it has overtaken R. Like R, however, Python has its flaws. One drawback is speed: Python is a slow language, and speed is one of the fundamentals of data science. As mentioned, Python is very good as a programming language, but it is a bit of a jack of all trades and master of none. Unlike R, it doesn't focus purely on data analysis, though it has impressive libraries to carry out such tasks.

The great battle begins

To compare the two languages, we will go over four fundamental areas of data science and discuss which is better: data mining, data analysis, data visualization, and machine learning.

Data mining: As mentioned, data mining is one of the key components of data science. R seems to win this battle; in the 2013 Data Miners Survey, 70% of the roughly 1,200 data miners who participated used R for data mining. However, it could be argued that you wouldn't really use Python to "mine" data, but rather use the language and its libraries for data analysis and for developing data models.

Data analysis: R and Python both boast impressive packages and libraries. Python's NumPy, Pandas, and SciPy libraries are very powerful for data analysis and scientific computing. R, on the other hand, doesn't just offer a few packages; the whole language is formed around analysis and computational statistics. An argument could be made that Python is faster than R for analysis and cleaner for coding over sets of data. However, I have noticed that Python excels at the programming side of analysis, whereas for statistical and mathematical programming R is a lot stronger thanks to its array-oriented syntax. The winner here is debatable: for mathematical analysis, R wins, but for general analysis and writing clean statistical code more related to machine learning, I would say Python wins.

Data visualization: the "cool" part of data science. The phrase "a picture paints a thousand words" has never been truer than in this field. R boasts its ggplot2 package, which lets you write impressively concise code that produces stunning visualizations. However, Python has Matplotlib, a 2D plotting library that is equally impressive, with which you can create anything from bar charts and pie charts to error charts and scatter plots. The overall consensus is that R's ggplot2 offers a more professional look and feel for data models. Another one for R.

Machine learning: it knows the things you like before you do. Machine learning is one of the hottest things to hit the world of data science, and companies such as Netflix, Amazon, and Facebook have all adopted it. Machine learning uses complex algorithms and data patterns to predict user likes and dislikes, making it possible to generate recommendations based on a user's behaviour. Python has a very impressive library, scikit-learn, to support machine learning, covering everything from clustering and classification to building your very own recommendation systems. However, R has a whole ecosystem of packages specifically created to carry out machine learning tasks. Which is better for machine learning? I would say Python's strong libraries and OOP syntax might have the edge here.

One to rule them all

On the surface, the two languages seem evenly matched on the majority of data science tasks; where they really differ depends on an individual's needs and what they want to achieve. There is also nothing stopping data scientists from using both. One of the benefits of R's compatibility with other languages and tools is that R's rich packages can be used within a Python program using RPy (R from Python). For example, you could use the IPython environment to carry out data analysis tasks with NumPy and SciPy, yet visualize the data with R's ggplot2 package: the best of both worlds. An interesting idea that has been floating around for some time is to integrate R into Python as a data science library; such an approach would give data scientists one place that combines R's strong data analysis and statistical packages with all of Python's OOP benefits, but whether this will happen remains to be seen.

The dark horse

We have explored both Python and R and discussed their individual strengths and flaws for data science. As mentioned earlier, they are the two most popular and dominant languages in this field. However, a new, emerging language called Julia might challenge both in the future. Julia is a high-performance language that essentially tries to solve the problem of speed for large-scale scientific computation. Julia is expressive and dynamic, it's as fast as C, it can be used for general programming (though its focus is on scientific computing), and the language is easy and clean to use. Sounds too good to be true, right?
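To make the "best of both worlds" workflow described above a bit more concrete, here is a minimal sketch using the rpy2 package (a modern successor to the RPy bridge mentioned in the article). It assumes R, the ggplot2 package, and rpy2 are all installed; the data and file name are purely illustrative, and the exact calls may differ slightly between rpy2 versions.

```python
# Number crunching in Python/NumPy, plotting handed off to R's ggplot2 via rpy2.
import numpy as np
import rpy2.robjects as ro
from rpy2.robjects.lib import ggplot2
from rpy2.robjects.packages import importr

# Python side: compute something worth plotting.
x = np.linspace(0.0, 10.0, 50)
y = np.sin(x) + np.random.normal(scale=0.1, size=x.size)

# Move the arrays into an R data.frame.
r_df = ro.DataFrame({"x": ro.FloatVector(x), "y": ro.FloatVector(y)})

# R side: render a scatter plot to a PNG file with ggplot2.
grdevices = importr("grDevices")
grdevices.png(file="sine_scatter.png", width=600, height=400)
plot = (ggplot2.ggplot(r_df)
        + ggplot2.aes_string(x="x", y="y")
        + ggplot2.geom_point())
plot.plot()
grdevices.dev_off()
```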

Ian Goodfellow et al on better text generation via filling in the blanks using MaskGANs

Savia Lobo
19 Feb 2018
5 min read
In the paper "MaskGAN: Better Text Generation via Filling in the ______", Ian Goodfellow, along with William Fedus and Andrew M. Dai, proposes a way to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high-quality samples and have shown a lot of success in image generation.

Ian Goodfellow is a research scientist at Google Brain. His research interests lie in the fields of deep learning, machine learning security and privacy, and particularly generative models. He is known as the father of Generative Adversarial Networks, and he runs the Self-Organizing Conference on Machine Learning, which was founded at OpenAI in 2016.

Generative Adversarial Networks (GANs) are an architecture for training generative models in an adversarial setup, with a generator producing images that try to fool a discriminator trained to distinguish between real and synthetic images. GANs have had a lot of success in producing more realistic images than other approaches, but they have seen only limited use for text sequences. They were originally designed to output differentiable values; as such, discrete language generation is challenging for them. The team of researchers introduces an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context, and the paper shows that this GAN produces more realistic text samples than a maximum-likelihood-trained model.

MaskGAN: Better Text Generation via Filling in the _______

What problem is the paper attempting to solve?

The paper highlights how text generation was traditionally done using Recurrent Neural Network models, by sampling from a distribution conditioned on the previous word and a hidden state that represents the words generated so far. These models are typically trained with maximum likelihood in an approach known as teacher forcing. However, this method causes problems when, during sample generation, the model is forced to condition on sequences that were never seen at training time, which leads to unpredictable dynamics in the hidden state of the RNN. Methods such as Professor Forcing and Scheduled Sampling have been proposed to address this issue; they work indirectly, either by making the hidden state dynamics predictable (Professor Forcing) or by randomly conditioning on sampled words at training time, but they do not directly specify a cost function on the output of the RNN that encourages high sample quality. The method proposed in the paper tries to solve the problem of text generation with GANs through a sensible combination of novel approaches.

MaskGAN paper summary

The paper proposes to improve sample quality using GANs, which explicitly train the generator to produce high-quality samples. The model is trained on a text fill-in-the-blank, or in-filling, task: portions of a body of text are deleted or redacted, and the goal of the model is to infill the missing portions so that the result is indistinguishable from the original data. While in-filling text, the model operates autoregressively over the tokens it has filled in so far, as in standard language modeling, while conditioning on the true known context. If the entire body of text is redacted, the task reduces to language modeling.
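To picture the in-filling task, here is a toy, purely illustrative sketch of what a masked input and its targets look like. This is not the authors' code; the mask rate, mask token, and function names are made up for the example.

```python
import random

def mask_contiguous_span(tokens, mask_rate=0.3, mask_token="<m>", seed=0):
    """Redact one contiguous span of tokens, as in the contiguous in-filling task."""
    rng = random.Random(seed)
    span_len = max(1, int(len(tokens) * mask_rate))
    start = rng.randrange(0, len(tokens) - span_len + 1)
    masked = tokens[:start] + [mask_token] * span_len + tokens[start + span_len:]
    targets = tokens[start:start + span_len]
    return masked, targets

sentence = "the movie was surprisingly good and well acted".split()
masked, targets = mask_contiguous_span(sentence)
print(masked)   # the input the generator sees, with a redacted span of "<m>" tokens
print(targets)  # the tokens the generator is trained to fill back in
```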
The paper also shows, qualitatively and quantitatively, that the proposed method produces more realistic text samples than a maximum-likelihood-trained model.

Key takeaways

- The paper gives a clear picture of what MaskGANs are: it introduces a text generation model trained on in-filling (MaskGAN).
- The paper considers the actor-critic architecture in extremely large action spaces, new evaluation metrics, and the generation of synthetic training data.
- The proposed contiguous in-filling task (MaskGAN) is a good approach to reduce mode collapse and help with training stability for textual GANs.
- The paper shows that MaskGAN samples on a larger dataset (IMDB reviews) are significantly better than those of the corresponding tuned MaskMLE model, as judged by human evaluation.
- The model can produce high-quality samples despite MaskGAN having much higher perplexity on the ground-truth test set.

Reviewer feedback summary/takeaways

Overall score: 21/30. Average score: 7/10.

Reviewers liked the overall idea behind the paper. They appreciated the benefit gained from context (left context and right context) by solving a "fill-in-the-blank" task at training time and translating this into text generation at test time. One reviewer stated that the experiments were well carried through and very thorough. Another commented that the importance of the MaskGAN mechanism has been highlighted and that the description of the reinforcement learning training part has been clarified.

Alongside the pros, the paper also received some criticism:

- There is a lot of pre-training required for the proposed architecture.
- Generated texts are generally locally valid but not always valid globally.
- It was not made very clear whether the discriminator also conditions on the unmasked sequence.

A reviewer also raised some unanswered questions:

- Was pre-training done for the baseline as well?
- How was the masking done? How did you decide on the words to mask? Was this at random?
- Is it actually usable in place of ordinary LSTM (or RNN)-based generation?

What is Digital Forensics?

Savia Lobo
02 May 2018
5 min read
Who hasn't watched the American TV show Mr. Robot? For the uninitiated, Mr. Robot is a digital crime thriller whose protagonist, Elliot, is a brilliant cybersecurity engineer and hacktivist who identifies potential suspects and evidence in crimes that are hard to solve. He does this by hacking into people's digital devices such as smartphones, computers, machines, printers, and so on. Digital forensics is the science of identifying, preserving, and analyzing evidence from digital media or storage devices in order to trace a crime. A real-world example of digital forensics helping solve crime is the floppy disk that helped investigators crack the BTK serial killer case in 2005. The killer had eluded police capture since 1974 and had claimed the lives of at least 10 victims before he was caught.

Types of Digital forensics

The digital world is vast; there are countless ways to perform illegal or corrupt activities and go undetected, and digital forensics lends a helping hand in detecting them. However, because there are many kinds of digital media, the forensics carried out for each is also different. Following are some types of forensics conducted over different digital pathways.

Computer forensics is the branch of forensics that obtains evidence from computer systems such as hard drives, mobile phones, personal digital assistants (PDAs), compact discs (CDs), and so on. The digital police can also trace a suspect's email or text communication logs, internet browsing history, system or file transfers, hidden or deleted files, documents and spreadsheets, and so on.

Mobile device forensics recovers or gathers evidence from call logs, text messages, and other data stored on mobile devices. Tracing someone's location via inbuilt GPS systems, cell site logs, or in-app communication from apps such as WhatsApp, Skype, and so on is also possible.

Network forensics monitors and analyzes computer network traffic, LAN/WAN and internet traffic. The aim of network forensics is to gather information, collect evidence, and detect and determine the extent of intrusions and the amount of data that has been compromised.

Database forensics is the forensic study of databases and their metadata. Information from database contents, log files, and in-RAM data can be used to create timelines or recover pertinent information during a forensic investigation.

Challenges faced in Digital Forensics

Data storage and extraction: Storing data has always been tricky and expensive, and an explosion in the volume of data generated has only aggravated the situation. Data now comes from many pathways such as social media, the web, IoT, and more, and the real-time analysis of data from IoT devices and other networks adds to the heap. Because of this, investigators find it difficult to store and process data to extract clues, detect incidents, or track the necessary traffic.

Data gathering over scattered mediums: Investigators face a lot of difficulty because evidence may be scattered across social networks, cloud resources, and personal physical storage. More tools, expertise, and time are therefore required to fully and accurately reconstruct the evidence, and partially automating these tasks may degrade the quality of the investigation.

Investigations that preserve privacy: At times, investigators collect information to reconstruct and locate an attack, and this can violate user privacy. Also, when information has to be collected from the cloud, there are other hurdles, such as accessing the evidence in logs, the presence of volatile data, and so on.

Carrying out legitimate investigations only: Modern infrastructures are complex and virtualized, often shifting their complexity to the border (as in fog computing) or delegating some duties to third parties (as in platform-as-a-service frameworks). An important challenge for modern digital forensics lies in executing investigations legally, for instance without violating laws in borderless scenarios.

Anti-forensics techniques on the rise: Defensive measures against digital forensics comprise encryption, obfuscation, and cloaking techniques, including information hiding. New forensics tools should therefore be engineered to support heterogeneous investigations, preserve privacy, and offer scalability.

The ubiquity of digital media and electronics is the driving force behind digital forensics, and with digital media still on the rise, digital forensics is here to stay. Many investigation firms, including CYFOR and Pyramid CyberSecurity, strive to offer solutions to complex cases in the digital world. You can also seek employment or specialize in this field by building the skills needed for a career in digital forensics. If you are interested in digital forensics, check out our product portfolio on cyber security or subscribe today to a learning path for forensic analysts on Mapt, our digital library.

Further reading:
How cybersecurity can help us secure cyberspace
Top 5 penetration testing tools for ethical hackers
What Blockchain Means for Security

The state of the Cybersecurity skills gap heading into 2020

Guest Contributor
11 Nov 2019
6 min read
Just this year, several high-profile cyber breaches exposed confidential information and resulted in millions of dollars in damages. Cybersecurity is more important than ever, which is a big problem for employers facing millions of unfilled cybersecurity positions and a shortage of talented workers.

As for the exact number of openings, the estimates vary, but none of them look good. There may be as many as 3.5 million unfilled cybersecurity positions by 2021. As a result, cybersecurity professionals currently in the field are facing serious pressure and long working hours. At cybersecurity conferences, it's not uncommon to see entire tracks about managing mental health, addiction, and work stress. A kind of feedback loop may be forming, one where skilled professionals under major pressure burn out and leave the field, putting more strain on the workers who remain. The cycle continues, pushing talent out of cybersecurity and further widening the skills gap. Some experts go further and call the gap a crisis, though it's not clear we've hit that level yet. Employers are looking at different ways to handle this: by broadening the talent pool and by investing in tools that take the pressure off their cybersecurity workers.

Cybersecurity skills gap is on the rise

When asked about the skills their organization is most likely to be missing, respondents nearly always put cybersecurity at the top of the list. In a survey conducted by ESG this year, 53% of organizations reported they were facing a cybersecurity shortage, 10% more than in 2016. In every survey between 2016 and this year, the number has only trended up. There are other ways to look at the gap, by worker hours or by the total number of positions unfilled, but there's only one real conclusion to draw from the data: there aren't enough cybersecurity workers, and every year the skills gap grows worse. Despite pushes for better education and the increasing importance of cybersecurity, there are no signs it's closing or will begin to close in 2020.

The why of the skills gap is unclear. The number of graduates from cybersecurity programs is increasing. At the same time, the cost and frequency of cyberattacks are also rising. It may be that schools can't keep up with the growing levels of cybercrime and the needs of companies, especially in the wake of the past few years of high-profile breaches.

Employers look for ways to broaden the Talent Pool

One possible reason for the skills gap may be that employers are looking for very specific candidates. Cybersecurity can be a difficult field to break into if you don't have the resources to become credentialed. Even prospective candidates with ideal skill sets (experience with security and penetration testing, communication and teamwork skills, and the ability to train nontechnical staff) can be filtered out by automatic resume-screening programs. These may be looking for specific job titles, certificates, and degrees. If a resume doesn't pass the keyword filter, the hiring team may never get a chance to read it at all.

There are two possible solutions to this problem. The first is to build a better talent pipeline, one that starts at the university or high school level. Employers may join with universities to sponsor programs that encourage or incentivize students to pick up technical certificates or switch their major to cybersecurity or a related field. The high worth of cybersecurity professionals and the strong value of cybersecurity degrees may encourage schools to invest in these programs, taking some of the pressure off employers. This solution isn't universally popular; some experts argue that cybersecurity training doesn't reflect the field, and that a classroom may never provide the right kind of experience.

The second solution is to broaden the talent pool by making it easier for talented professionals to break into cybersecurity. Hiring teams may relax requirements for entry-level positions, and companies may develop training programs designed to help other security experts learn about the field. This doesn't mean companies will begin hiring nontechnical staff. Rather, they'll start looking for skilled individuals with unconventional skill sets and a technical background who can be brought up to speed quickly, like veterans with security or technology training. It's not clear whether employers will take the training approach, however. While business leaders find cybersecurity more important every year, companies can be resistant to spending more on employee training: these expenditures increased in 2017 but declined last year.

AI tools may help cybersecurity workers

Many new companies are developing AI antiviruses, anti-phishing tools, and other cybersecurity platforms that may reduce the amount of labor needed from cybersecurity workers. While AI is quite effective at pattern-finding and could be useful for cybersecurity workers, the tech isn't guaranteed to be helpful. Some of these antiviruses are susceptible to adversarial attacks; one popular AI-powered antivirus was defeated with just a few lines of text appended to some of the most dangerous malware out there. Many cybersecurity experts are skeptical of AI tech in general and don't seem fully committed to the idea of a field where cybersecurity workers rely on these tools. Companies may continue to invest in AI cybersecurity technology anyway, because there don't seem to be many other short-term solutions to the widening skills gap. Depending on how effective these technologies are, they may help reduce the number of cybersecurity openings that need to be filled.

Future of the Cybersecurity skills gap

Employers and cybersecurity professionals are facing a major shortage of skilled workers. At the same time, both the public and private sectors are dealing with a new wave of cyberattacks that put confidential information and critical systems at risk. There are no signs yet that the cybersecurity skills gap will begin to close in 2020. Employers and training programs are looking for ways to bring new professionals into the field and expand the talent pipeline. At the same time, companies are investing in AI technology that may take some pressure off current cybersecurity workers. Not all cybersecurity experts place their full faith in this technology, but some solutions will be necessary to reduce the pressure of the growing skills gap.

Author Bio

Kayla Matthews writes about big data, cybersecurity, and technology. You can find her work on The Week, Information Age, KDnuggets, and CloudTweaks, or over at ProductivityBytes.com.

Further reading:
How will AI impact job roles in Cybersecurity
7 Black Hat USA 2018 conference cybersecurity training highlights: Hardware attacks, IO campaigns, Threat Hunting, Fuzzing, and more
UK's NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses

GoMobile: GoLang's Foray into the Mobile World

Erik Kappelman
15 Feb 2017
6 min read
There is no question that the trend today in mobile app design is to get every possible language on board for creating mobile applications, and this is sort of the case with GoMobile. Far from being originally intended for creating mobile apps, Go, or GoLang, was created at Google in 2007. Go has true concurrency capabilities, which lend themselves well to almost any programming task, certainly mobile app creation.

The first thing you need to do to follow along with this blog is get the GoLang binaries on your machine. Although there is a GCC tool to compile Go, I would strongly recommend using the Go tools. I like Go because it is powerful, safe, and it feels new. It may simply be a personal preference, but I think Go is a largely underrated language. This blog assumes a minimum understanding of Go; don't worry so much about the syntax, but you will need to understand how Go handles projects and packages.

So to begin, let's create a new folder and specify it as our $GOPATH bash variable. This tells Go where to look for code and where to place downloaded packages, such as GoMobile. After we specify our $GOPATH, we add the bin subdirectory of the $GOPATH to our global $PATH variable. This allows Go tools to be executed like any other bash command:

```
$ cd ~
$ mkdir GoMobile
$ export GOPATH=~/GoMobile
$ export PATH=$PATH:$GOPATH/bin
```

The next step is somewhat more convoluted. Today, we are getting started with Android development. I chose Android over iOS because GoMobile can build for Android on any platform, but can only build for iOS on OS X. In order for GoMobile to work its magic, you'll need to install the Android NDK. I think the easiest way to do this is through Android Studio.

Once you have the Android NDK installed, it's time to get started. We are going to be using an example app from our friends over at Go today. The app structure required for Go-based mobile apps is fairly complex, so I would suggest using this codebase as you begin developing your own apps; it might save you some time. So, let's first install GoMobile:

```
$ go get golang.org/x/mobile/cmd/gomobile
```

Now, let's get that example app:

```
$ go get -d golang.org/x/mobile/example/basic
```

For the next command, we are going to initialize GoMobile and specify the NDK location. The online help for this example is somewhat vague when it comes to specifying the NDK location, so hopefully my research will save you some time:

```
$ gomobile init -ndk=$HOME/Library/Android/sdk/ndk-bundle/
```

Obviously, this is the path on my machine, so yours may be different; however, if you're on anything Unix-like, it ought to be relatively close. At this point, you are ready to build the example app. All you have to do is use the command below, and you'll be left with a real live Android application:

```
$ gomobile build golang.org/x/mobile/example/basic
```

This will build an APK file and place it in your $GOPATH. The file can be transferred to and installed on an actual Android device, or you can use an emulator. To use the emulator, you'll need to install the APK file using the adb command, which should already be on board with your installation of Android Studio. The following command adds adb to your path (your path might be different, but you'll get the idea):

```
$ export PATH=$PATH:$HOME/Library/Android/sdk/platform-tools/
```

At this point, you ought to be able to run the adb install command and try out the app on your emulator:

```
$ adb install basic.apk
```

As you will see, there isn't much to this particular app, but in this case, it's about the journey and not the destination. There is another way to install the app on your emulator: first, uninstall the app from your Android VM; second, run the following command:

```
$ gomobile install golang.org/x/mobile/example/basic
```

Although the result is the same, the second method is almost identical to the way regular Go builds applications. For consistency's sake, I would recommend using the second method.

If you're new to Go, at this point I would recommend checking out some of the documentation. There is an interactive tutorial called A Tour of Go, which I have found enormously helpful for beginner to intermediate needs. You will need a pretty deep understanding of Go to be an effective mobile app developer. If you are new to mobile app design in general, e.g., you don't already know Java, I would recommend taking the Go route. Although Java is still the most widely used language the world over, I myself have a strong preference for Go.

If you will be using Go in the other elements of your mobile app, say a web server that controls access to data required for the app's operations, using Go and GoMobile can be even more helpful. This allows for code consistency across the various levels of a mobile app, similar to the benefit of using the MEAN stack for web development, where one language controls all the levels of the app. In fact, there are tools now that allow JavaScript to be used to create mobile apps, and then, presumably, a developer could use Node.js for a backend server, ending up with a MEAN-like mobile stack. While this would probably work fine, Go is stronger and perhaps safer than JavaScript. Also, because mobile development is essentially software development, which is fundamentally different from web development, using a language geared toward software development makes more intuitive sense. However, these thoughts are largely opinions and, as I have said before in many blogs, there are so many options; just find the one you like that gets the job done.

About the Author

Erik Kappelman is a transportation modeler for the Montana Department of Transportation. He is also the CEO of Duplovici, a technology consulting and web design company.

Can Cryptocurrency establish a new economic world order?

Amarabha Banerjee
22 Jul 2018
5 min read
Cryptocurrency has already established one thing: there is a viable alternative to dollars and gold as a measure of wealth. Our present economic system is flawed, and cryptocurrencies, if utilized properly, could change the way the world deals with money and wealth. But can they completely overthrow the present system and create a new economic world order? To answer that, we have to understand the concept of cryptocurrencies and the premise for their creation.

Money - The weapon to control the world

Money is a measure of wealth, which translates into power. The power centers have largely remained the same throughout history, be it a monarchy, an autocracy, or a democracy. Power has shifted from one king to one dictator to a few elected or selected individuals, and to remain in power they had to control the source and distribution of money. That's why, to date, only the government can print money and distribute it among citizens. We can earn money in exchange for our time and skills, or borrow money in exchange for our future time. But there's only so much time we can give away, and hence the present-day economy always runs on the philosophy of scarcity and demand. Money distribution follows a trickle-down approach in a pyramid structure.

(Image source: Credit Suisse)

Inception of Cryptocurrency - Delocalization of money

It's abundantly clear from the image above that while the printing of money is under the control of the powerful and the wealth creators, the pyramidal distribution mechanism has also ensured that very little money flows to the bottommost segments of the population. The money creators have ensured their own safety and prosperity throughout history by accumulating chunks of money for themselves, and the global wealth gap has subsequently increased staggeringly. This could well have triggered the rise of cryptocurrencies as a form of alternative economic system, one that, theoretically, doesn't just accumulate wealth at the top but also rewards anyone who is interested in mining these currencies and spending their time and resources. The main concept that made this possible was distributed computing, which has gained tremendous interest in recent times.

Distributed Computing, Blockchain & the possibilities

The foundation of our present economic system is a central power, be it a government, a ruler, or a dictator. The alternative to this central system is a distributed system, where every single node of communication holds decision-making power and is equally important to the system. If one node is cut off, the system does not fall apart; it keeps functioning. That's what makes distributed computing terrifying for centralized economic systems: there is no single creator to attack and no single point whose failure brings the whole system down.

(Image source: Medium.com)

When the white paper on cryptocurrencies was first published by the anonymous Satoshi Nakamoto, there was hope of constituting a parallel economy, where any individual with access to a mobile phone and the internet might be able to mine bitcoins and create wealth, not just for himself or herself, but for the system as well. Satoshi also invented the concept of the blockchain, an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. Blockchain was the technology on top of which the first cryptocurrency, Bitcoin, was created. The concept of Bitcoin mining seemed revolutionary at the time. The more people who joined the system, the more enriched the system would become. The hope was that it would make the mainstream economic system take note and cause a major overhaul of the wealth distribution system. But sadly, none of that seems to have taken place yet.

The phase of Disillusionment

The reality is that Bitcoin mining capability is governed by system resources, and the creators had accumulated plenty of bitcoins for themselves, much as in the traditional wealth creation system. Satoshi's Bitcoin holdings were valued at $19.4 billion during the December 2017 peak, making him the 44th richest person in the world at that time. This basically meant that the wealth distribution system was at fault again; very few could get their hands on bitcoins as their prices in traditional currencies climbed. Governments then duly played their part by declaring that trading in bitcoins was illegal and cracking down on several cryptocurrency top guns, and recently more countries have joined the bandwagon to ban cryptocurrency. Hence the value is much lower now. The major concern is that public skepticism might kill the hype earlier than anticipated.

(Image source: Bitcoin.com)

The Future and Hope for a better Alternative

What we must keep in mind is that Bitcoin is just a derivative of the concept of cryptocurrencies. The primary concept of distributed systems, and the resulting technology, blockchain, is still a very viable and novel one. The problem in the current Bitcoin system is the distribution mechanism. Whether we will be able to tap into the distributed system concept and create a better version of the Bitcoin model, only time will tell. But for the sake of better wealth propagation and wealth balance, we can only hope that this realignment of the economic system happens sooner rather than later.

Further reading:
Blockchain can solve tech's trust issues - Imran Bashir
A brief history of Blockchain
Crypto-ML, a machine learning powered cryptocurrency platform
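To make the ledger idea described above a little more concrete, here is a toy, purely illustrative hash-chained ledger in Python. It is nothing like a production blockchain (there is no network, no consensus, and no mining), but it shows why tampering with an earlier record invalidates everything that follows it.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (including the previous block's hash)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the hash of the block before it."""
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev_hash, "transactions": transactions})

def is_valid(chain):
    """Verify that every block still points at the true hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
add_block(ledger, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(ledger, [{"from": "bob", "to": "carol", "amount": 2}])
print(is_valid(ledger))                        # True

ledger[0]["transactions"][0]["amount"] = 500   # tamper with history
print(is_valid(ledger))                        # False: the chain no longer verifies
```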

What is Quantum Entanglement?

Amarabha Banerjee
05 Aug 2018
3 min read
Einstein described it as "spooky action at a distance". Quantum entanglement is a phenomenon, observed for example in photons, in which particles share information about their state even if separated by a huge distance, and this state sharing happens almost instantaneously. Quantum particles can be in any possible state until their state is measured by an observer; the measured values are called eigenvalues. In the case of quantum entanglement, two particles separated by several miles of distance, when observed, change into the same state.

Quantum entanglement is hugely important for modern computation tasks. The reason is that the state information shared between photons sometimes appears to travel at speeds around 10,000 times the speed of light; if this could be harnessed in physical systems like quantum computers, it would be a huge boost.

(Image source: PicoQuant)

One important concept for understanding this idea is the qubit.

What is a Qubit?

It's the unit of information in quantum computing, like the bit in normal computers. A bit can be represented by two states, '0' or '1'. Qubits are also like bits, but they are governed by the weirder rules of quantum computing. Qubits don't just hold pure states like |0> and |1>; they can also exist in superpositions of these states, and pairs of qubits can occupy joint states written as |00>, |01>, |10>, and |11>. This style of writing particle states is called the Dirac notation. Because of these superpositions of states, quantum particles get entangled and share their state-related information.

A recent experiment by a Chinese group has claimed to pack 18 qubits of information into just 6 entangled photons. This is revolutionary: if one bit can pack in three times the information it carries at present, then our computers would become three times faster and smoother to work with. The reasons this is a great start for the future implementation of faster, practical quantum computers are:

- It's very difficult to entangle so many particles.
- There are instances of more than 18 qubits being packed into a larger number of photons, but the degree of entanglement has been much simpler.
- Entangling each new particle takes increasingly more computer simulation time.
- Introducing each new qubit creates a separate simulation, taking up more processing time.

The likely reason this experiment worked can be credited to the multiple degrees of freedom that photons can have. The experiment was performed using photons in a networking system, and the fact that such a system allows multiple degrees of freedom for the photon means the result is specific to this particular quantum system; it would be difficult to replicate the results in other systems, such as a superconducting network. Still, this result means a great deal for the progress of quantum computing systems and how they can evolve into a practical solution rather than remain theory forever. Quantum computing is poised to take a quantum leap, with industries and governments on board.

Further reading:
PyCon US 2018 Highlights: Quantum computing, blockchains and serverless rule!
Q# 101: Getting to know the basics of Microsoft's new quantum computing language
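As a rough illustration of the qubit ideas above (and unrelated to the 18-qubit experiment itself), here is a small NumPy sketch that builds the two-qubit Bell state (|00> + |11>)/sqrt(2) and samples measurement outcomes. The two qubits always agree, which is exactly the kind of correlation entanglement provides.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])   # |0>
ket1 = np.array([0.0, 1.0])   # |1>

# |00> and |11> as tensor products, then an equal superposition of the two.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Probabilities of the four measurement outcomes 00, 01, 10, 11.
probs = np.abs(bell) ** 2
print(dict(zip(["00", "01", "10", "11"], probs.round(3))))
# -> {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}

# Sampling "measurements": only '00' and '11' ever appear, so measuring one
# qubit immediately tells you the state of the other.
print(np.random.choice(["00", "01", "10", "11"], size=10, p=probs))
```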

Transparency and NW.js

Adam Lynch
07 Jan 2015
3 min read
Yes, NW.js does support transparency, though it is disabled by default. One way to enable transparency is to add the transparent property to your application's manifest, like this:

```json
{
  "name": "my-app",
  "main": "index.html",
  "description": "My app",
  "version": "0.0.1",
  "window": {
    "transparent": true
  }
}
```

Transparency will then be enabled for the main window of your application from the start. Now it's play time. Try giving the page's body a transparent or semi-transparent background color and any children an opaque background color in your CSS, like this:

```css
body {
  background: transparent; /* or background: rgba(255, 255, 255, 0.5); */
}

body > * {
  background: #fff;
}
```

I could spend all day doing this.

Programmatically enabling transparency

The transparent option can also be passed when creating a new window:

```js
var gui = require('nw.gui');
var newWindow = gui.Window.open('other.html', {
  position: 'center',
  width: 600,
  height: 800,
  transparent: true
});
newWindow.show();
```

Whether you're working with the current window or another window you've just spawned, transparency can be toggled programmatically per window, on the fly, thanks to the Window API:

```js
newWindow.setTransparent(true);
console.log(newWindow.isTransparent); // true
```

The window's setTransparent method allows you to enable or disable transparency, and its isTransparent property contains a Boolean indicating whether it's enabled right now.

Support

Unfortunately, there are always exceptions. Transparency isn't supported at all on Windows XP or earlier. In some cases it might not work on later Windows versions either, including when accessing the machine via Microsoft Remote Desktop or with some unusual themes or configurations. On Linux, transparency is supported if the window manager supports compositing. Aside from this, you'll also need to start your application with a couple of arguments. These can be set in your app's manifest under chromium-args:

```json
{
  "name": "my-app",
  "main": "index.html",
  "description": "My app",
  "version": "0.0.1",
  "window": {
    "transparent": true
  },
  "chromium-args": "--enable-transparent-visuals --disable-gpu"
}
```

Tips and noteworthy side-effects

It's best to make your app frameless if it will be semi-transparent; otherwise it will look a bit strange. This depends on your use case, of course. Strangely, enabling transparency for a window on Mac OS X will make its frame and toolbar transparent:

(Screenshot: a transparent window frame on Mac OS X)

Between the community and the developers behind NW.js, there is no consensus on whether windows with transparency enabled should have a shadow, as windows typically do. At the time of writing, if transparency is enabled in your manifest, for example, your window will not have a shadow, even if all of its content is completely opaque.

Click-through

NW.js even supports clicking through your transparent app to whatever is behind it on your desktop. This is enabled by adding a couple of runtime arguments to your chromium-args in your manifest, namely --disable-gpu and --force-cpu-draw:

```json
{
  "name": "my-app",
  "main": "index.html",
  "description": "My app",
  "version": "0.0.1",
  "window": {
    "transparent": true
  },
  "chromium-args": "--disable-gpu --force-cpu-draw"
}
```

As of right now, this is only supported on Mac OS X and Windows, and it only works with non-resizable frameless windows, although there may be exceptions depending on the operating system. One other thing to note is that click-through will only be possible on areas of your app that are completely transparent. If the target element of the click, or an ancestor, has a background color, even if it's 1% opaque in the alpha channel, the click will not go through your application to whatever is behind it.

About the Author

Adam Lynch is a TeamworkChat Product Lead & Senior Software Engineer at Teamwork. He can be found on Twitter @lynchy010.

How machine learning as a service is transforming cloud

Vijin Boricha
18 Apr 2018
4 min read
Machine learning as a service (MLaaS) is an innovation growing out of two of the most important tech trends: cloud and machine learning. It's significant because it enhances both. It makes cloud an even more compelling proposition for businesses, because cloud typically has three major operations: computing, networking, and storage. When you bring machine learning into the picture, the data that the cloud stores and processes can be used in radically different ways, solving a range of business problems.

What is machine learning as a service?

Cloud platforms have always competed to be the first or the best to provide new services. This includes platform as a service (PaaS), infrastructure as a service (IaaS), and software as a service (SaaS) solutions. In essence, cloud providers like AWS and Azure provide sets of software that do different things so their customers don't have to. Machine learning as a service is simply another instance of the services offered by cloud providers. It can include a wide range of features, from data visualization to predictive analytics and natural language processing, and it makes running machine learning models easy, effectively automating some of the work that might typically be done manually by a data engineering team. Here are the biggest cloud providers who offer machine learning as a service:

- Google Cloud Platform
- Amazon Web Services
- Microsoft Azure
- IBM Cloud

Every platform provides a different suite of services and features; which one you choose will ultimately depend on what's most important to you. Let's take a look now at the key differences between these cloud providers' machine learning as a service offerings.

Comparing the leading MLaaS products

Google Cloud AI

Google Cloud Platform has always provided its own services to help businesses grow. It provides modern machine learning services, with pre-trained models and a service to generate your own tailored models. The majority of Google applications, like Photos (image search), the Google app (voice search), and Inbox (Smart Reply), have been built using the same services that Google provides to its users.

Pros:
- Cheaper in comparison to other cloud providers
- Provides IaaS and PaaS solutions

Cons:
- The Google Prediction API is going to be discontinued (May 1st, 2018)
- Lacks a visual interface
- You'll need to know TensorFlow

Amazon Machine Learning

Amazon Machine Learning provides services for building ML models and generating predictions, helping users develop robust, scalable, and cost-effective smart applications. With the help of Amazon Machine Learning you are able to use powerful machine learning technology without any prior experience in machine learning algorithms and techniques.

Pros:
- Provides versatile automated solutions
- It's accessible: users don't need to be machine learning experts

Cons:
- The more you use it, the more expensive it is

Azure Machine Learning Studio

Microsoft Azure provides Machine Learning Studio, a simple browser-based, drag-and-drop environment that functions without any coding. You are provided with fully managed cloud services that enable you to easily build, deploy, and share predictive analytics solutions. You are also provided with a platform (the Gallery) to share solutions with, and contribute to, the community.

Pros:
- The most versatile toolset for MLaaS
- You can contribute to and reuse machine learning solutions from the community

Cons:
- Comparatively expensive
- A lot of manual work is required

Watson Machine Learning

Similar to the above platforms, IBM Watson Machine Learning is a service that helps users create, train, and deploy self-learning models to integrate predictive capabilities into their applications. The platform provides automated and collaborative workflows for growing intelligent business applications.

Pros:
- Automated workflows
- Data science skills are not necessary

Cons:
- Comparatively limited APIs and services
- Lacks streaming analytics

Selecting the machine learning as a service solution that's right for you

There are so many machine learning as a service solutions out there that it's easy to get confused. The crucial step to take before you decide to purchase anything is to map out your business requirements. Think carefully not only about what you want to achieve, but also about what you already do. You want your MLaaS solution to integrate easily into the way you currently work, and you don't want it to replicate any work you're currently doing that you're pretty happy with. It gets repeated so much, but it remains as true as ever: make sure your software decisions are fully aligned with your business needs. It's easy to be seduced by the promise of innovative new tools, but without the right alignment they're not going to help you at all.

Raspberry Pi Zero W: What you need to know and why it's great

Raka Mahesa
25 Apr 2017
6 min read
On February 28th, 2017, the Raspberry Pi Foundation announced the latest product in the Raspberry Pi series: the Raspberry Pi Zero W. The new product adds wireless connectivity to the Raspberry Pi Zero and is being retailed for just $10. This is great news for enthusiasts and hobbyists all around the world. Wait, wait. Raspberry Pi? Raspberry Pi Zero? Wireless? What are we talking about? To understand the idea behind the Raspberry Pi Zero W and the benefits it brings, we need to back up a bit and talk about the Raspberry Pi series of products and its history.

The Raspberry Pi's history

The Raspberry Pi is a computer the size of a credit card that was made available to the public for the low price of $35. And yes, despite the size and the price of the product, it's a full-fledged computer capable of running operating systems like Linux and Android, though Windows is a bit too heavy for it. It comes with 2 USB ports and an HDMI port, so you can plug in your keyboard, mouse, and monitor and treat it just like your everyday computer.

The first generation of the Raspberry Pi was released in February 2012 and was an instant hit among the DIY and hobbyist crowd. The small, low-priced computer proved to be perfect for powering their DIY projects. By the time this post was written, 10 million Raspberry Pi computers had been sold and countless projects using the miniature computer had been made. It has been used in projects including home arcade boxes, automated pet feeders, media centers, security cameras, and many, many others.

The second generation of the Raspberry Pi was launched in February 2015. The computer now offered a higher-clocked, quad-core processor with 1 GB of RAM and was still sold at $35. Then, a year later in February 2016, the Raspberry Pi 3 was launched. While the price remained the same, this latest generation boasted higher performance as well as wireless connectivity via WiFi and Bluetooth.

What's better than a $35 computer?

The Raspberry Pi has come a long way but, with all of that said, do you know what's better than a $35 computer? A $5 computer that's even smaller, which is exactly what was launched in November 2015: the Raspberry Pi Zero. Despite its price, this new computer is actually faster than the original Raspberry Pi and, by using micro USB and mini HDMI instead of normal-sized ports, the Raspberry Pi Zero managed to shrink down to just half the size of a credit card.

Unfortunately, using micro USB and mini HDMI ports leads to another set of problems. Most people need additional dongles or converters to connect to those ports, and those accessories can be as expensive as the computer itself. For example, a micro-USB to Ethernet connector will cost $5, a micro-USB to USB connector will cost $4, and a micro-USB WiFi adapter will cost $10.

Welcome the Raspberry Pi Zero W

Needing additional dongles and accessories that cost as much as the computer itself pretty much undermines the point of a cheap computer. To mitigate that, the Raspberry Pi Zero W, a Raspberry Pi Zero with integrated WiFi and Bluetooth connectivity, was introduced in February 2017 at the price of $10. Here are the hardware specifications of the Raspberry Pi Zero W:

- Broadcom BCM2835 single-core CPU @ 1GHz
- 512MB LPDDR2 SDRAM
- Micro USB data port
- Micro USB power port
- Mini HDMI port with 1080p60 video output
- Micro SD card slot
- HAT-compatible 40-pin header
- Composite video and reset headers
- CSI camera connector
- 802.11n wireless LAN
- Bluetooth 4.0

Its dimensions are 65mm x 30mm x 5mm (for comparison, the Raspberry Pi 3 measures 85mm x 56mm x 17mm).

There are several things to note about the hardware. One is that the 40-pin GPIO header is not soldered out of the box; you have to solder it yourself. Leaving the header unsoldered is what allows the computer to be so slim, and it is convenient for people who don't need a GPIO connection. Another thing to note is that the wireless chip is the same one found in the Raspberry Pi 3, so the two should behave and perform similarly. And because the rest of the hardware is basically the same as in the Raspberry Pi Zero, you can think of the Raspberry Pi Zero W as a fusion of both series.

Is the wireless connectivity worth the added cost?

You may wonder whether the wireless connectivity is worth the additional $5. Well, it really depends on your use case. For example, in my home everything is already wireless and I don't have any LAN cables I can plug in to connect to the Internet, so wireless connectivity is a really big deal for me. And really, there are a lot of projects and places where having wireless connectivity helps a lot. Imagine you want to set up a camera in front of your home that sends you an email every time it spots a particular type of car. Without a WiFi connection, you would have to pull your Ethernet cable all the way out there to have an Internet connection. And it's not just the Internet to consider: having Bluetooth connectivity is a really practical way to connect to other devices, like your phone, for instance.

All in all, the Raspberry Pi Zero W is a great addition to the Raspberry Pi line of computers. It's affordable, it's highly capable, and with the addition of wireless connectivity it has become practical to use too. So get your hands on one and start your own project today.

About the author

Raka Mahesa is a game developer at Chocoarts (chocoarts.com) who is interested in digital technology in general. In his spare time, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

Voice, natural language, and conversations: Are they the next web UI?

Sugandha Lahoti
08 Jun 2018
5 min read
Take any major conference that happened this year: Google I/O, Apple's WWDC, or Microsoft Build. A major focus of all these conferences by top tech leaders is improving the user experience, smoothing out how a user experiences their products. These days, the user experience depends heavily on how a system interacts with a human. It may be through responsive web designs or appealing architectures. It may also be through an interactive module such as a conversational UI, a chatbot, or a voice interface: essentially the same thing, albeit with slight differences in definition. Irrespective of what they are called, these UX models have one fixed goal: to improve the interaction between a human and a system such that it feels real.

In our recently conducted Packt Skill-up survey 2018, we asked developers and tech pros whether conversational user interfaces and chatbots are going to be the future of the web UI. Well, it seems so: over 65% of respondents agreed that chat interactions and conversational user interfaces are the future of the web. After the recent preview of the power of Google Duplex, those numbers might be even higher if asked again today.

Why has this paradigm of interacting with the web shifted from text, and even visual searches on mobile, to voice, natural language, and conversational UI? Why do Apple's Siri, Google Assistant, Microsoft's Cortana, and Amazon Echo release new versions every day?

Computing power and NLP, the two pillars

Any chatbot or voice interface requires two major factors to be successful. One is computational power, which lets a conversational UI process complex calculations. The other is natural language processing, which is what actually enables a chatbot to conduct human-like conversations. Both areas have made tremendous progress in recent times. A large number of computational chips, namely GPUs and TPUs, as well as quantum computers, are being developed that are capable of processing complex calculations in a jiffy. NLP has also gained momentum, both in speech recognition capabilities (understanding language) and artificial intelligence (learning from experience). As technology in these areas matures, it paves the way for companies to adopt conversational UIs as their main user interface.

The last thing we need is more apps

There are already millions of apps available in app stores, and more arrive every day. We are almost at the peak of the hype cycle, and there is only a downfall from here. Why? Well, I'm sure you'll agree that downloading, setting up, and managing an app is a hassle, not to mention that humans have limited attention spans, so switching between multiple apps happens quite often. Conversational UIs are rapidly filling the vacuum left behind by mobile apps. They integrate the functionality of multiple apps into one, so you have a simple messaging app that can also book cabs, search, shop, or order food. Moreover, they can simplify routine tasks. AI-enabled chatbots can remind you of scheduled meetings, bring up the news for you every morning, analyze your refrigerator for food items to be replenished, and update your shopping cart, all with simple commands. Advancements in deep learning have also produced what are known as therapist bots. Users can confide in bots just as they do with human friends when they have a broken heart, have lost a job, or have been feeling down.
(This view does assume that the service provider respects users' privacy and adheres to strict policies on data privacy and protection.)

The end of screen-based interfaces

Another flavor of conversational UI is the voice user interface (VUI). Typically, we interact with a device directly through a touchscreen or indirectly with a remote control. A VUI, in contrast, is the touch-less version of the technology, where you only need to think aloud with your voice. These interfaces can work solo, like Amazon Echo or Google Home, or be combined with text-based chatbots, like Apple's Siri or Google Assistant. You simply say a command, or type it, and the task is done: "Siri, text Robert, I'm running late for the meeting."

And boy, are voice user interfaces growing rapidly. Google Duplex, announced at Google I/O 2018, can even make phone calls for users, imitating natural human conversation almost perfectly. In fact, it also adds pause-fillers and phrases such as "um", "uh-huh", and "erm" to make the conversation sound as natural as possible. Voice interfaces also work amazingly well for people with disabilities, including visual impairments. Users who are unable to use screens and keyboards can use a VUI for their day-to-day tasks. Here's a touching review of Amazon Echo shared by a wheelchair-bound user about how the device changed his life.

The world is being swept by the wave of conversational UI, Google Duplex being the latest example. As AI deepens its roots across the technology ecosystem, intelligent assistant applications like Siri, Duplex, and Cortana will advance. This boom will push us closer to Zero UI, a seamless and interactive UI that eradicates the barrier between user and device.

Top 4 chatbot development frameworks for developers
How to create a conversational assistant or chatbot using Python
Building a two-way interactive chatbot with Twilio: A step-by-step guide
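To ground the voice-interface idea discussed above, here is a minimal sketch of a browser voice-command handler built on the standard Web Speech API (speech recognition plus speech synthesis). It is an illustration only, not code from any of the assistants mentioned in the article: the command phrases and the bookCab() handler are hypothetical, and browser support varies (Chrome exposes recognition as webkitSpeechRecognition).

```typescript
// Minimal voice-command sketch using the browser's Web Speech API.
// Assumptions: the command set and the bookCab() handler are invented for
// illustration; Chrome exposes recognition via the webkit prefix.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

function speak(text: string): void {
  // Read the reply back to the user instead of rendering it on screen.
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

function bookCab(): void {
  // Hypothetical action; a real assistant would call a ride-hailing API here.
  speak("Okay, booking a cab for you.");
}

function handleCommand(transcript: string): void {
  const command = transcript.toLowerCase();
  if (command.includes("book a cab")) {
    bookCab();
  } else if (command.includes("what's the news")) {
    speak("Here are this morning's headlines.");
  } else {
    speak("Sorry, I didn't catch that.");
  }
}

const recognition = new SpeechRecognitionImpl();
recognition.lang = "en-US";
recognition.interimResults = false;

recognition.onresult = (event: any) => {
  // The final transcript of what the user said.
  const transcript = event.results[0][0].transcript;
  handleCommand(transcript);
};

recognition.onerror = (event: any) => console.error("Recognition error:", event.error);

// Browsers require a user gesture before granting microphone access.
document.getElementById("talk-button")?.addEventListener("click", () => recognition.start());
```

Production assistants layer intent detection and dialogue management on top of this raw transcript, but the listen, interpret, act, and speak loop is the core of every VUI described above.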

article-image-7-android-predictions-for-2019
Guest Contributor
13 Jan 2019
8 min read
Save for later

7 Android Predictions for 2019

Guest Contributor
13 Jan 2019
8 min read
Emerging technologies not only change the way users interact with their devices, they also improve the development process. One platform where many of these features emerge is Google's Android. The Android app development platform is adding new features every year at a breakneck pace. These are some of the safest predictions that can be made for Android development in the year 2019.

#1 Voice Command and Virtual Assistants

Voice command dictates the user's voice into an electronic document, lets users operate the system by talking to it, and frees up cognitive working space. It also has some potential drawbacks: it requires a large amount of memory to store voice data files and is difficult to operate in crowded places due to noise interference.

What does it have in store for 2019? In 2019, voice search is going to create a new user interface that will have to be taken into consideration when designing and developing mobile applications. Voice assistants are gaining popularity, and every big player has one: Siri, Google Assistant, Bixby, Alexa, Cortana. This will continue to grow in 2019.

Use case app: Ping Pong Board. One use case for voice assistants is an application similar to a ping pong scoreboard. Inside the application there are two screens: the first shows the available players with the leaderboard and scores, and the second displays the two players currently in a game along with their points.

#2 Chatbots

Chatbots are trending because they enable faster customer service at lower labor costs while increasing customer satisfaction. However, simple chatbots are often limited in the responses they can give, which can frustrate customers, whereas complex chatbots cost more, inhibiting widespread adoption.

What next? Technology experts predict that companies everywhere will introduce themselves to customers through chatbots. Customer support will be provided more efficiently and customer feedback will be responded to more quickly, leading to better results. Chatbots are a staple of this digital world, as every application or website wants to offer them for improved customer support. Chatbots can be seen as small assistants integrated into our applications. We can create our own with the help of Dialogflow, which makes development easy without much coding. Nowadays, Facebook Messenger is used for more than messaging because many chatbots are integrated into it.

Use case: Allstate chatbot. One of the largest P&C insurers in America developed its own 'ABle' chatbot to help its agents learn to sell commercial insurance products. The bot guides agents through the commercial selling process, can extract documents, and understands which product an agent is working on and where they are in the process.

#3 Virtual and Augmented Reality

Augmented reality systems are highly interactive and operate in real time alongside the physical environment, blurring the line between the real and virtual worlds and enhancing our perception of and interaction with the real world. Drawbacks include the expense of developing AR-capable devices, privacy concerns, and low performance on some hardware.

What next? The hardware for VR was initially driven by hardcore gamers and gadget enthusiasts, and mobile hardware has caught up in some instances, leaving aside traditional computing platforms.
With augmented reality and device sensors being used in mobile devices like never before, AR and VR are being combined to give applications much better visibility, and it seems the virtual reality revolution is finally arriving.

Use case app: MarXent + AR. AR is helping professionals visualize their final products during the creative process, from interior design to architecture and construction. Using AR-enabled headsets, architects, engineers, and design professionals can step directly into their buildings and spaces to see how their designs might look, and can even make virtual spot changes.

#4 Android App Architecture

After many years, Google has finally introduced guidelines for developing the best Android apps. Even though you are not forced to use the Android Architecture Components, they are considered a good starting point for building stable applications. The argument about the best pattern for Android (MVC, MVP, MVVM, or anything else) has died down, and we can trust the solutions from Google, which are good enough for the majority of apps.

What next? Developers have always faced confusion implementing multithreading on Android, and tools like AsyncTask and EventBus help solve these problems. We can also choose RxJava, Kotlin Coroutines, or Android LiveData for managing multithreading. This brings more stability and less confusion to the developer community. Loads of applications are installed on our mobile devices but we hardly use some of them; this is why Progressive Web Apps are becoming popular in e-commerce.

#5 Hybrid Solutions

Big companies like Facebook are leading in utilizing cross-platform development for the most part. It is a pragmatic approach: the larger the audience, the bigger the market share for advertising and subscription revenues.

What next? Hybrid mobile applications come with unified development that can save a substantial amount of money and provide fast deployment with offline support. They bridge the gap between the other two approaches, providing all the extra functionality with very little overhead. Hybrid applications can result in a loss of performance and make the developer rely on the framework itself to play nicely with the targeted operating system. So, moving beyond traditional hardware and software solutions, developers have approached the market aiming to offer a total solution, or cross-platform solutions.

#6 Machine Learning

Google switched from a mobile-first to an AI-first strategy some time ago. This is clearly seen in TensorFlow and ML Kit in the Firebase ecosystem, which are gaining popularity for creating simple, basic models that do not require data science expertise to make your application intelligent. People are becoming more aware of the capabilities of machine learning, along with its implementation in MATLAB or R, for mobile development.

What next? Machine learning is used in a variety of applications in the banking and financial sector, healthcare, retail, publishing, social media, and more. It is also used by Google and Facebook to push relevant advertisements based on past user searches. The major challenge is implementing the different techniques and interpreting the results, which is complex but important not only for image and speech recognition but also for user behavior prediction and analysis. In the future, quantum computing may also be applied to machine learning, to manipulate and classify large numbers of vectors in high-dimensional spaces.
We expect better unsupervised algorithms for building smarter applications, leading to faster and more accurate outcomes.

#7 Rooting Android

Rooting Android means getting root access, or administrative rights, to your device. No matter how much you pay for your device, its internals are still locked away. Rooting offers several advantages, such as removing pre-installed OEM applications and system-wide ad blocking, which is a great benefit to the user.

What next? Rooting can let incompatible applications onto your device and can even brick it, so it is advised to always get your apps from reliable sources. A rooted device does not come with a warranty, and a wrong setting can cause huge problems. The risk with rooted devices is that the system might not receive updates properly later, which can create errors. Still, rooting also provides more display options and internal storage, along with greater battery life and speed. It also enables full device backups and access to root files.

Conclusion

The year 2019 is going to be very interesting for Android app development. We will see a lot of new technologies emerge that will change the face of mobile development. Developers need to stay up to date with the emerging trends and learn quickly how to apply them when designing new products. We can definitely see a bright future with more good-quality apps and even more engaging user interactions. We also expect more stable solutions for developing applications, which will result in better products. It is important to observe new trends closely and become a quick learner, mastering the skills that will matter most in the future.

Author Bio

Rooney Reeves is a content strategist and technical blogger associated with eTatvaSoft. An old hand at writing by day and an avid reader by night, she has vast experience writing about new products, software design, and test-driven methodology.

Read Next

8 programming languages to learn in 2019
18 people in tech every programmer and software engineer need to follow in 2019
Cloud computing trends in 2019

article-image-facebook-plans-to-use-bloomsbury-ai-to-fight-fake-news
Pravin Dhandre
30 Jul 2018
3 min read
Save for later

Facebook plans to use Bloomsbury AI to fight fake news

Pravin Dhandre
30 Jul 2018
3 min read
“Our investments in AI mean we can now remove more bad content quickly because we don't have to wait until after it's reported. It frees our reviewers to work on cases where human expertise is needed to understand the context or nuance of a situation. In Q1, for example, almost 90% of graphic violence content that we removed or added a warning label to was identified using AI. This shift from reactive to proactive detection is a big change -- and it will make Facebook safer for everyone.”

Mark Zuckerberg, in Facebook's earnings call on Wednesday this week

To understand the significance of the above statement, we must first look at the past. Last year, social media giant Facebook faced multiple lawsuits across the UK, Germany, and the US for defamation arising from fake news articles and for spreading misleading information. To make amends, Facebook came up with fake news identification tools, but it failed to completely tame the effects of bogus news. In fact, the company's advertising revenue took a bad hit and its social reputation nosedived.

Early this month, Facebook confirmed the acquisition of Bloomsbury AI, a London-based artificial intelligence start-up with over 60 patents acquired to date. Bloomsbury AI focuses on natural language processing, developing machine reading methods that can understand written text across a broad range of domains. The artificial intelligence team at Facebook will be on-boarding the complete Bloomsbury AI team and will build highly robust methods to kill the plague of fake news throughout the Facebook platform.

The rich expertise brought over by the Bloomsbury AI team will strengthen Facebook's natural language processing research and give it a deeper understanding of natural language and its applications. The amalgamation should help Facebook develop advanced machine reading, reasoning, and question answering methods, boosting Facebook's NLP engine to judge the legitimacy of questions across a broad range of topics and make intelligent choices, thereby tackling the challenges of fake news and automated bots. No doubt, Facebook is going to leverage Bloomsbury's Cape service to answer the majority of questions on unstructured text. The duo will play a significant role in parsing content, majorly to tackle fake photos and videos too. In addition, the new team members are expected to contribute actively to ongoing artificial intelligence projects such as AI hardware chips and AI technology mimicking humans, among others.

Facebook is investigating data analytics firm Crimson Hexagon over misuse of data
Google, Microsoft, Twitter, and Facebook team up for Data Transfer Project
Did Facebook just have another security scare?
article-image-how-we-improved-search-algolia
Johti Vashisht
29 Mar 2018
2 min read
Save for later

How we improved search with Algolia

Johti Vashisht
29 Mar 2018
2 min read
Packt prides itself on using the latest technology to deliver products and services to our customers effortlessly. That's why we've updated the search tool across our website to provide a more efficient and intuitive search experience with Algolia. We've loved building it and we're pretty confident you'll love using it too. Explore our content using our new search tool now.

What is Algolia and why do we love it?

Algolia is an incredibly powerful search platform. We love it because it's incredibly reliable and scalable. It's also a great platform for our development team to work with, which means a lot when we're working on multiple projects at the same time with tight deadlines.

Why you will love our new search

Our new search tool is fast and responsive, which means you'll be able to find the content you want quickly and easily. With the range of products on our website we know that can sometimes be a challenge; now you can go straight to the products best suited to your needs.

How we integrated Algolia with the Packt website

Back-end Algolia integration

We built a Node.js function, deployed as an AWS Lambda, that can read from multiple sources and then push data into DynamoDB. A trigger within DynamoDB then pushes the data through a transformation and finally into Algolia. This means we only update Algolia when the data has changed.

Search setup

We have four indices: the main index with relevance sorting, and replicas to sort by title, price, and release date. The results have been tuned to show conceptually matching information items first, such as relevant Tech Pages and Live Courses.

Front-end Algolia integration

To allow rapid development and deployment, we used CircleCI 2.0 to build and deploy our project into an AWS S3 bucket that sits behind a CloudFront CDN. The site is built using HTML, SCSS, and pure JavaScript, together with Webpack for bundling and minification. We are using Algolia's InstantSearch.js library to show different widgets on the screen and Bootstrap for quickly implementing the design, which allowed us to put together the bulk of the site in a single day.
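To make the pipeline above concrete, here is a rough sketch of what the DynamoDB-stream-triggered step pushing records into Algolia could look like. This is not Packt's actual code: the index name, record fields, and environment variable names are assumptions, and the sketch uses the public algoliasearch v4 JavaScript client together with the AWS SDK's unmarshall helper.

```typescript
// Hypothetical sketch of the "DynamoDB trigger -> transform -> Algolia" step.
// Index name, record fields, and env var names are invented for illustration.
import algoliasearch from "algoliasearch";
import { unmarshall } from "@aws-sdk/util-dynamodb";
import type { DynamoDBStreamEvent } from "aws-lambda";

const client = algoliasearch(
  process.env.ALGOLIA_APP_ID!,
  process.env.ALGOLIA_ADMIN_KEY!
);
const index = client.initIndex("products"); // replicas (title, price, date) are configured in Algolia

export const handler = async (event: DynamoDBStreamEvent): Promise<void> => {
  const toSave: Record<string, unknown>[] = [];
  const toDelete: string[] = [];

  for (const record of event.Records) {
    const keys = unmarshall(record.dynamodb!.Keys as any);

    if (record.eventName === "REMOVE") {
      toDelete.push(String(keys.id));
      continue;
    }

    // INSERT or MODIFY: transform the new image into an Algolia record.
    const item = unmarshall(record.dynamodb!.NewImage as any);
    toSave.push({
      objectID: String(item.id), // Algolia requires a unique objectID per record
      title: item.title,
      price: item.price,
      releaseDate: item.releaseDate,
    });
  }

  // Only touch Algolia when the stream actually carried changes.
  if (toSave.length > 0) await index.saveObjects(toSave);
  if (toDelete.length > 0) await index.deleteObjects(toDelete);
};
```

On the front end, an InstantSearch.js setup wiring widgets to the same index might look like the following (container IDs and the search-only API key are again placeholders):

```typescript
// Minimal InstantSearch.js setup; container IDs and keys are placeholders.
import algoliasearch from "algoliasearch/lite";
import instantsearch from "instantsearch.js";
import { searchBox, hits } from "instantsearch.js/es/widgets";

const searchClient = algoliasearch("YOUR_APP_ID", "YOUR_SEARCH_ONLY_KEY");

const search = instantsearch({ indexName: "products", searchClient });

search.addWidgets([
  searchBox({ container: "#searchbox" }),
  hits({ container: "#hits" }),
]);

search.start();
```

Keeping the admin key in the Lambda and a search-only key in the browser is the usual split with Algolia, so the client can query the indices but never modify them.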

article-image-tensorfire-dl-browser
Sugandha Lahoti
04 Sep 2017
7 min read
Save for later

TensorFire: Firing up Deep Neural Nets in your browsers

Sugandha Lahoti
04 Sep 2017
7 min read
Machine learning is a powerful tool with applications in a wide variety of areas including image and object recognition, healthcare, language translation, and more. However, running ML tools traditionally requires complicated backends, complex architecture pipelines, and strict communication protocols. To overcome these obstacles, TensorFire, an in-browser deep learning library, is bringing the capabilities of machine learning to web browsers by running neural nets at blazingly fast speeds using GPU acceleration. It's one more step towards democratizing machine learning using hardware and software already available to most people.

How did in-browser deep learning libraries come to be?

Deep learning neural networks, a type of advanced machine learning, are probably one of the best approaches for predictive tasks. They are modular, can be tested efficiently, and can be trained online. However, since neural nets make use of supervised learning (i.e. learning fixed mappings from input to output), they are useful only when large quantities of labelled training data and a sufficient computational budget are available. They require the installation of a variety of software, packages, and libraries. Also, running a neural net offers a suboptimal user experience, as it opens a console window to show the execution of the net. This called for an environment that could make these models more accessible, transparent, and easy to customize. Browsers were a perfect choice, as they are powerful, efficient, and have interactive UI frameworks. In-browser deep learning neural nets can be coded using JavaScript without any complex backend requirements.

Once browsers came into play, in-browser deep learning libraries (ConvNetJS, CaffeJS, MXNetJS, and others) grew in popularity. Many of these libraries work well. However, they leave a lot to be desired in terms of speed and easy access. TensorFire is the latest contestant in this race, aiming to solve the problem of latency.

What is TensorFire?

TensorFire is a JavaScript library that allows executing neural networks in web browsers without any setup or installation. It's different from other existing in-browser libraries in that it leverages the power of the inbuilt GPUs of most modern devices to perform exhaustive calculations at much faster rates, almost 100x faster. Like TensorFlow, TensorFire is used to swiftly run ML and DL models. However, unlike TensorFlow, which deploys ML models to one or more CPUs in a desktop, server, or mobile device, TensorFire utilizes GPUs irrespective of whether they support CUDA, eliminating the need for any GPU-specific middleware. At its core, TensorFire is a JavaScript runtime and a DSL built on top of the WebGL shader language for accelerating neural networks. Since it runs in browsers, which are used by almost everyone, it brings machine and deep learning capabilities to the masses.

Why should you choose TensorFire?

TensorFire is highly advantageous for running machine learning capabilities in the browser for four main reasons:

1. Speed

TensorFire utilizes the powerful GPUs (both AMD and Nvidia) built into modern devices to speed up the execution of neural networks. The WebGL shader language is used to easily write fast vectorized routines that operate on four-dimensional tensors. Unlike pure JavaScript based libraries such as ConvNetJS, TensorFire uses WebGL shaders to run in parallel the computations needed to generate predictions from TensorFlow models.
2. Ease of use

TensorFire avoids shuffling data between GPUs and CPUs by keeping as much data as possible on the GPU at a time, making it faster and easier to deploy. TensorFire makes use of low-precision quantized tensors; this means that even browsers that don't fully support WebGL API extensions (such as floating-point pixel types for textures) can be used to run deep neural networks. Because of this low-precision approach, smaller models can easily be deployed to the client, resulting in fast prediction capabilities.

3. Privacy

Instead of bringing data to the model, the model is delivered to users directly, thus maintaining their privacy. This is done by the website training a network on the server side and then distributing the weights to the client. It is a great fit for applications where the data is on the client side and the deployed model is small. TensorFire also significantly improves latencies and simplifies the code base on the server side, since most computations happen on the client side.

4. Portability

TensorFire eliminates the need to download, install, or compile anything, as a trained model can be deployed directly into a web browser. It can also serve predictions locally from the browser. TensorFire removes the need to install native apps or use expensive compute farms, which means TensorFire-based apps can reach more users.

Is TensorFire really that good?

TensorFire has its limitations. Using the in-built browser GPU to accelerate execution is both its boon and its bane. Since the GPU is also responsible for handling the computer's GUI, intensive GPU usage may render the browser unresponsive. Another issue is that although TensorFire speeds up execution, it does not improve compile time. Also, the TensorFire library is restricted to inference and as such cannot train models; however, it allows importing models pre-trained with Keras or TensorFlow. TensorFire is suitable for applications where the data is on the client side and the deployed model is small. You can also use it in situations where the user doesn't want to send data to servers. However, when both the trained model and the data already live in the cloud, TensorFire has no additional benefit to offer.

How is TensorFire being used in the real world?

TensorFire's low-level APIs can be used for general-purpose numerical computation, running algorithms like PageRank for calculating relevance or Gaussian elimination for inverting matrices in a fast and efficient way. Having fast neural networks in the browser makes image recognition easy to implement, and TensorFire can be used to perform real-time client-side image recognition. It can also run neural networks that apply the look and feel of one image to another while making sure the details of the original image are preserved, Deep Photo Style Transfer being an example; compared with TensorFlow, which required minutes for the task, TensorFire took only a few seconds. TensorFire also paves the way for tools and applications that can quickly parse and summarize long articles and perform sentiment analysis on their text. It can also run RNNs in browsers to generate text with a character-by-character recurrent model. With TensorFire, neural nets running in browsers can be used for gesture recognition, distinguishing images, detecting objects, and more.
These techniques are generally employed using the SqueezeNet architecture, a small convolutional neural net that is highly accurate in its predictions with considerably fewer parameters. Neural networks in browsers can also be used for web-based games or for user modelling, which involves modelling aspects of user behavior or the content of sites visited to provide a customized user experience. As TensorFire is written in JavaScript, it is readily available for use on the server side (on Node.js) and thus can be used for server-based applications as well.

Since TensorFire is relatively new, its applications are just beginning to catch fire. With a plethora of features and advantages under its belt, TensorFire is poised to become the default choice for running in-browser neural networks. Because TensorFlow natively supports only CUDA, TensorFire may even outperform TensorFlow on computers that have non-Nvidia GPUs.
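The workflow the article describes (train a model in Keras or TensorFlow, ship the weights to the browser, and run GPU-accelerated inference on the client) can be sketched as follows. TensorFire's own API is not reproduced here; as an illustration of the same in-browser pattern, the sketch uses TensorFlow.js, which is likewise WebGL-accelerated. The model URL, 227x227 input size, and element ID are placeholder assumptions.

```typescript
// Sketch of in-browser, GPU-accelerated inference on a Keras-trained model.
// TensorFlow.js stands in for the pattern TensorFire implements; the model
// URL, input size, and element ID are placeholders.
import * as tf from "@tensorflow/tfjs";

async function classifyImage(): Promise<void> {
  // Load a model previously trained in Keras and converted for the browser.
  const model = await tf.loadLayersModel("/models/squeezenet/model.json");

  const img = document.getElementById("photo") as HTMLImageElement;

  // Turn the image element into a normalized 4-D tensor; intermediates are
  // disposed by tf.tidy, and the heavy lifting runs in WebGL on the GPU.
  const input = tf.tidy(() => {
    const pixels = tf.browser.fromPixels(img);
    const resized = tf.image.resizeBilinear(pixels, [227, 227]);
    return resized.toFloat().div(255).expandDims(0);
  });

  const prediction = model.predict(input) as tf.Tensor;
  const classIndex = (await prediction.argMax(-1).data())[0];
  console.log("Predicted class index:", classIndex);

  // Free GPU memory held by the remaining tensors.
  input.dispose();
  prediction.dispose();
}

classifyImage();
```

TensorFire applies the same client-side idea, with the additional use of low-precision quantized tensors, as described in the "Ease of use" section above, to keep the downloaded weights small and to work even on browsers with limited WebGL extension support.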