Tech Guides

The rise of machine learning in the investment industry

Natasha Mathur
15 Feb 2019
13 min read
The investment industry has evolved dramatically over the last several decades and continues to do so amid increased competition, technological advances, and a challenging economic environment. In this article, we will review several key trends that have shaped the investment environment in general, and the context for algorithmic trading more specifically. This article is an excerpt taken from the book 'Hands-On Machine Learning for Algorithmic Trading' written by Stefan Jansen. The book explores the strategic perspective, conceptual understanding, and practical tools to add value from applying ML to the trading and investment process.

The trends that have propelled algorithmic trading and ML to current prominence include:

- Changes in the market microstructure, such as the spread of electronic trading and the integration of markets across asset classes and geographies
- The development of investment strategies framed in terms of risk-factor exposure, as opposed to asset classes
- The revolutions in computing power, data generation and management, and analytic methods
- The outperformance of algorithmic trading pioneers relative to human, discretionary investors

In addition, the financial crises of 2001 and 2008 have affected how investors approach diversification and risk management and have given rise to low-cost passive investment vehicles in the form of exchange-traded funds (ETFs). Amid low yield and low volatility after the 2008 crisis, cost-conscious investors shifted $2 trillion from actively-managed mutual funds into passively managed ETFs. Competitive pressure is also reflected in lower hedge fund fees, which dropped from the traditional 2% annual management fee and 20% take of profits to an average of 1.48% and 17.4%, respectively, in 2017. Let's have a look at how ML has come to play a strategic role in algorithmic trading.

Factor investing and smart beta funds

The return provided by an asset is a function of the uncertainty or risk associated with the financial investment. An equity investment implies, for example, assuming a company's business risk, and a bond investment implies assuming default risk. To the extent that specific risk characteristics predict returns, identifying and forecasting the behavior of these risk factors becomes a primary focus when designing an investment strategy. It yields valuable trading signals and is the key to superior active-management results. The industry's understanding of risk factors has evolved very substantially over time and has impacted how ML is used for algorithmic trading.

Modern Portfolio Theory (MPT) introduced the distinction between idiosyncratic and systematic sources of risk for a given asset. Idiosyncratic risk can be eliminated through diversification, but systematic risk cannot. In the early 1960s, the Capital Asset Pricing Model (CAPM) identified a single factor driving all asset returns: the return on the market portfolio in excess of T-bills. The market portfolio consisted of all tradable securities, weighted by their market value. The systematic exposure of an asset to the market is measured by beta, which captures how the asset's returns co-move with the market portfolio (the covariance of the asset's returns with the market's, scaled by the variance of the market). The recognition that the risk of an asset does not depend on the asset in isolation, but rather on how it moves relative to other assets, and to the market as a whole, was a major conceptual breakthrough. In other words, assets do not earn a risk premium because of their specific, idiosyncratic characteristics, but because of their exposure to underlying factor risks.
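The excerpt stays at the conceptual level here; as a rough illustration (not from the book), beta can be estimated directly from historical return series. The snippet below is a minimal sketch using simulated daily excess returns, and the series and the "true" beta of 0.8 are made up for the example.

    import numpy as np

    # Minimal sketch (not from the book): estimate CAPM beta from simulated
    # daily excess returns for a hypothetical asset and the market portfolio.
    rng = np.random.default_rng(0)
    market = rng.normal(0.0005, 0.01, 1000)             # simulated market excess returns
    asset = 0.8 * market + rng.normal(0, 0.005, 1000)   # asset built with a "true" beta of 0.8

    # Beta is the covariance of the asset with the market, scaled by the market variance.
    beta = np.cov(asset, market)[0, 1] / np.var(market, ddof=1)
    print(round(beta, 2))  # prints a value close to 0.8

In practice the same calculation runs on observed (rather than simulated) return series, and the estimate is often obtained as the slope of a linear regression of asset returns on market returns, which is numerically equivalent.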
However, a large body of academic literature and long investing experience have disproved the CAPM prediction that asset risk premiums depend only on their exposure to a single factor measured by the asset's beta. Instead, numerous additional risk factors have since been discovered. A factor is a quantifiable signal, attribute, or any variable that has historically correlated with future stock returns and is expected to remain correlated in the future.

These risk factors were labeled anomalies since they contradicted the Efficient Market Hypothesis (EMH), which held that market equilibrium would always price securities according to the CAPM, so that no other factors should have predictive power. The economic theory behind factors can be either rational, where factor risk premiums compensate for low returns during bad times, or behavioral, where agents fail to arbitrage away excess returns.

Well-known anomalies include the value, size, and momentum effects that help predict returns while controlling for the CAPM market factor. The size effect rests on small firms systematically outperforming large firms, discovered by Banz (1981) and Reinganum (1981). The value effect (Basu 1982) states that firms with low valuation metrics outperform. It suggests that firms with low price multiples, such as the price-to-earnings or the price-to-book ratios, perform better than their more expensive peers (as suggested by the inventors of value investing, Benjamin Graham and David Dodd, and popularized by Warren Buffett). The momentum effect, discovered in the late 1980s by, among others, Clifford Asness, the founding partner of AQR, states that stocks with good momentum, in terms of recent 6-12 month returns, have higher returns going forward than poor momentum stocks with similar market risk.

Researchers also found that value and momentum factors explain returns for stocks outside the US, as well as for other asset classes, such as bonds, currencies, and commodities, alongside additional risk factors. In fixed income, the value strategy is called riding the yield curve and is a form of the duration premium. In commodities, it is called the roll return, with a positive return for an upward-sloping futures curve and a negative return otherwise. In foreign exchange, the value strategy is called carry. There is also an illiquidity premium: securities that are more illiquid trade at low prices and have high average excess returns, relative to their more liquid counterparts. Bonds with higher default risk tend to have higher returns on average, reflecting a credit risk premium. Since investors are willing to pay for insurance against high volatility when returns tend to crash, sellers of volatility protection in options markets tend to earn high returns.

Multifactor models define risks in broader and more diverse terms than just the market portfolio. In 1976, Stephen Ross proposed arbitrage pricing theory, which asserted that investors are compensated for multiple systematic sources of risk that cannot be diversified away. The three most important macro factors are growth, inflation, and volatility, in addition to productivity, demographic, and political risk. In 1992, Eugene Fama and Kenneth French combined the equity risk factors size and value with a market factor into a single model that better explained cross-sectional stock returns. They later added a model that also included bond risk factors to simultaneously explain returns for both asset classes.
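To make the factor-model idea concrete, such a model is typically estimated as a linear regression of an asset's excess returns on the factor returns. The following is a minimal sketch, not from the book, using randomly simulated monthly data and statsmodels; the factor loadings used to generate the hypothetical stock are made up.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Minimal sketch (assumed data): regress a hypothetical stock's excess
    # returns on the Fama-French market (MKT), size (SMB), and value (HML) factors.
    rng = np.random.default_rng(1)
    factors = pd.DataFrame(
        rng.normal(0, 0.04, (120, 3)), columns=["MKT", "SMB", "HML"]
    )
    # The simulated stock loads on the market and value factors plus noise.
    excess_returns = 1.1 * factors["MKT"] + 0.3 * factors["HML"] + rng.normal(0, 0.02, 120)

    model = sm.OLS(excess_returns, sm.add_constant(factors)).fit()
    print(model.params)  # const ~ alpha; MKT, SMB, HML ~ estimated factor exposures

With real data, the factor return series would come from a data provider (for example, the Fama-French research library) rather than a random number generator.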
A particularly attractive aspect of risk factors is their low or negative correlation. Value and momentum risk factors, for instance, are negatively correlated, reducing the risk and increasing risk-adjusted returns above and beyond the benefit implied by the individual factors. Furthermore, using leverage and long-short strategies, factor strategies can be combined into market-neutral approaches. The combination of long positions in securities exposed to positive risks with underweight or short positions in the securities exposed to negative risks allows for the collection of dynamic risk premiums.

As a result, the factors that explained returns above and beyond the CAPM were incorporated into investment styles that tilt portfolios in favor of one or more factors, and assets began to migrate into factor-based portfolios. The 2008 financial crisis underlined how asset-class labels could be highly misleading and create a false sense of diversification when investors do not look at the underlying factor risks, as asset classes came crashing down together.

Over the past several decades, quantitative factor investing has evolved from a simple approach based on two or three styles to multifactor smart or exotic beta products. Smart beta funds crossed $1 trillion AUM in 2017, testifying to the popularity of the hybrid investment strategy that combines active and passive management. Smart beta funds take a passive strategy but modify it according to one or more factors, such as tilting toward cheaper stocks or screening them according to dividend payouts, to generate better returns. This growth has coincided with increasing criticism of the high fees charged by traditional active managers as well as heightened scrutiny of their performance. The ongoing discovery and successful forecasting of risk factors that, either individually or in combination with other risk factors, significantly impact future asset returns across asset classes is a key driver of the surge in ML in the investment industry.

Algorithmic pioneers outperform humans at scale

The track record and growth of Assets Under Management (AUM) of firms that spearheaded algorithmic trading has played a key role in generating investor interest and subsequent industry efforts to replicate their success. Systematic funds differ from HFT in that trades may be held significantly longer while seeking to exploit arbitrage opportunities as opposed to advantages from sheer speed. Systematic strategies that mostly or exclusively rely on algorithmic decision-making were most famously introduced by mathematician James Simons, who founded Renaissance Technologies in 1982 and built it into the premier quant firm. Its secretive Medallion Fund, which is closed to outsiders, has earned an estimated annualized return of 35% since 1982.

DE Shaw, Citadel, and Two Sigma, three of the most prominent quantitative hedge funds that use systematic strategies based on algorithms, rose to the all-time top-20 performers for the first time in 2017 in terms of total dollars earned for investors, after fees, and since inception. DE Shaw, founded in 1988 and with $47 billion AUM in 2018, joined the list at number 3. Citadel, started in 1990 by Kenneth Griffin, manages $29 billion and ranks 5, and Two Sigma, started only in 2001 by DE Shaw alumni John Overdeck and David Siegel, has grown from $8 billion AUM in 2011 to $52 billion in 2018.
Bridgewater, started in 1975 and with over $150 billion AUM, continues to lead due to its Pure Alpha Fund, which also incorporates systematic strategies. Similarly, on the Institutional Investors 2017 Hedge Fund 100 list, five of the top six firms rely largely or completely on computers and trading algorithms to make investment decisions, and all of them have been growing their assets in an otherwise challenging environment. Several quantitatively-focused firms climbed several ranks and in some cases grew their assets by double-digit percentages. Number 2-ranked Applied Quantitative Research (AQR) grew its hedge fund assets 48% in 2017 to $69.7 billion and managed $187.6 billion firm-wide.

Among all hedge funds, ranked by compounded performance over the last three years, the quant-based funds run by Renaissance Technologies achieved ranks 6 and 24, Two Sigma rank 11, DE Shaw ranks 18 and 32, and Citadel ranks 30 and 37. Beyond the top performers, algorithmic strategies have worked well in the last several years. In the past five years, quant-focused hedge funds gained about 5.1% per year while the average hedge fund rose 4.3% per year in the same period.

ML-driven funds attract $1 trillion AUM

The familiar three revolutions in computing power, data, and ML methods have made the adoption of systematic, data-driven strategies not only more compelling and cost-effective but a key source of competitive advantage. As a result, algorithmic approaches are not only finding wider application in the hedge-fund industry that pioneered these strategies but across a broader range of asset managers and even passively-managed vehicles such as ETFs. In particular, predictive analytics using machine learning and algorithmic automation play an increasingly prominent role in all steps of the investment process across asset classes, from idea generation and research to strategy formulation and portfolio construction, trade execution, and risk management.

Estimates of industry size vary because there is no objective definition of a quantitative or algorithmic fund, and many traditional hedge funds or even mutual funds and ETFs are introducing computer-driven strategies or integrating them into a discretionary environment in a human-plus-machine approach. Morgan Stanley estimated in 2017 that algorithmic strategies have grown at 15% per year over the past six years and control about $1.5 trillion between hedge funds, mutual funds, and smart beta ETFs. Other reports suggest the quantitative hedge fund industry was about to exceed $1 trillion AUM, nearly doubling its size since 2010 amid outflows from traditional hedge funds. In contrast, total hedge fund industry capital hit $3.21 trillion according to the latest global Hedge Fund Research report.

The market research firm Preqin estimates that almost 1,500 hedge funds make a majority of their trades with help from computer models. Quantitative hedge funds are now responsible for 27% of all US stock trades by investors, up from 14% in 2013. But many use data scientists, or quants, who in turn use machines to build large statistical models (WSJ). In recent years, however, funds have moved toward true ML, where artificially-intelligent systems can analyze large amounts of data at speed and improve themselves through such analyses. Recent examples include Rebellion Research, Sentient, and Aidyia, which rely on evolutionary algorithms and deep learning to devise fully-automatic Artificial Intelligence (AI)-driven investment platforms.
From the core hedge fund industry, the adoption of algorithmic strategies has spread to mutual funds and even passively-managed exchange-traded funds in the form of smart beta funds, and to discretionary funds in the form of quantamental approaches.

The emergence of quantamental funds

Two distinct approaches have evolved in active investment management: systematic (or quant) and discretionary investing. Systematic approaches rely on algorithms for a repeatable and data-driven approach to identify investment opportunities across many securities; in contrast, a discretionary approach involves an in-depth analysis of a smaller number of securities. These two approaches are becoming more similar as fundamental managers take more data-science-driven approaches.

Even fundamental traders now arm themselves with quantitative techniques, accounting for $55 billion of systematic assets, according to Barclays. Agnostic to specific companies, quantitative funds trade patterns and dynamics across a wide swath of securities. Quants now account for about 17% of total hedge fund assets, data compiled by Barclays shows. Point72 Asset Management, with $12 billion in assets, has been shifting about half of its portfolio managers to a man-plus-machine approach. Point72 is also investing tens of millions of dollars into a group that analyzes large amounts of alternative data and passes the results on to traders.

Investments in strategic capabilities

Rising investments in related capabilities (technology, data, and, most importantly, skilled humans) highlight how significant algorithmic trading using ML has become for competitive advantage, especially in light of the rising popularity of passive, indexed investment vehicles, such as ETFs, since the 2008 financial crisis. Morgan Stanley noted that only 23% of its quant clients say they are not considering using or not already using ML, down from 44% in 2016.

Guggenheim Partners LLC built what it calls a supercomputing cluster for $1 million at the Lawrence Berkeley National Laboratory in California to help crunch numbers for Guggenheim's quant investment funds. Electricity for the computers costs another $1 million a year. AQR is a quantitative investment group that relies on academic research to identify and systematically trade factors that have, over time, proven to beat the broader market. The firm used to eschew the purely computer-powered strategies of quant peers such as Renaissance Technologies or DE Shaw. More recently, however, AQR has begun to seek profitable patterns in markets using ML to parse through novel datasets, such as satellite pictures of shadows cast by oil wells and tankers.

The leading firm BlackRock, with over $5 trillion AUM, also bets on algorithms to beat discretionary fund managers by heavily investing in SAE, a systematic trading firm it acquired during the financial crisis. Franklin Templeton bought Random Forest Capital, a debt-focused, data-led investment company, for an undisclosed amount, hoping that its technology can support the wider asset manager.

We looked at how ML plays a role in different industry trends around algorithmic trading. If you want to learn more about the design and execution of algorithmic trading strategies, and use cases of ML in algorithmic trading, be sure to check out the book 'Hands-On Machine Learning for Algorithmic Trading'.

Read more:
Using machine learning for phishing domain detection [Tutorial]
Anatomy of an automated machine learning algorithm (AutoML)
10 machine learning algorithms every engineer needs to know


Fab Lab: worldwide labs for digital fabrication

Michael Ang
30 Jan 2015
7 min read
Looking for somewhere to get started using 3D printing, laser cutting, and digital fabrication? The world-wide Fab Lab network provides access to digital fabrication tools based on the ideas of open access, DIY, and knowledge sharing. The Fab Lab initiative was started by MIT’s Center for Bits and Atoms and now boasts affiliated labs in many countries.

I first visited Fab Lab Berlin for their "Build your own 3D printer" workshop. Over the course of one weekend I assembled an i3 Berlin 3D printer from a kit, working alongside the kit’s designers and other people interested in building their own printers. Assembling a 3D printer can be a daunting task, but working as a group made it fast and fun - the lab provides access not just to tools but a community of people excited about digital fabrication. I spoke with Wolf Jeschonnek, founder of Fab Lab Berlin. Wolf spent three months visiting Fab Labs and maker spaces in the United States (he blogged his adventure) before returning to Berlin to start his own Fab Lab.

For someone that’s not familiar, what is a Fab Lab?

The original idea by MIT was an outreach project to show the general public what the Center for Bits and Atoms was doing, which was digital fabrication. And instead of doing an exhibition or a website they put the most important and most commonly used digital fabrication tools that they were using for research into a space and made them accessible to anyone. That’s the whole idea, basically.

[Fab Labs follow a simple charter of rules and generally use a shared core set of machines and software so that projects and experience can be shared between the labs. They also provide free or in-kind access to the lab at least part of the time each week.]

Machines offered at Fab Lab Berlin

You could say the core idea of a Fab Lab is open access to tools for digital fabrication and the sharing of knowledge and experience.

Yes, it’s an open research and development lab, basically. Every university, every art or architecture university has a fab lab. There’s not much difference in the equipment, but it’s not accessible - in most of the universities I experienced it’s not even accessible to the students. They’re very strict about rules and safety and so on. So the innovation is to make those kinds of labs accessible to anyone who’s interested.

It seems like another big difference is that you get hands-on access to the machines. You get to operate the lasercutter - you get to operate the CNC router.

Yes, and also to pass that hands-on knowledge on to the next person who wants to use the lasercutter. In our lab we do a lot of introduction workshops but you can as well just find someone who is willing to show you how the lasercutter works. We have a couple of formal training opportunities for beginners, but after that we don’t do much advanced training, it all happens by people passing their knowledge on to someone else or figuring it out themselves. In the beginning I was the expert for every machine, because I chose them and knew which machines to buy. By now I’m not the expert at all. If you have a very specific question for technology I’m not the person to ask anymore, because the community is much better.

What are some of your favorite projects that you’ve seen come out of Fab Lab Berlin?

I really like the i3 Berlin 3D printer, because we started together with them, and I could see the development of the machine over the last year and a half. I also built one myself. It was the first machine that we built in the Fab Lab.
Bram de Vries with several iterations of his i3 Berlin 3D printer

There’s a steadicam rig for a camera that two people built. [The CAMBAL by SeeYa Film.] It’s a very sophisticated motor driven steadicam rig. It’s very steady because the motors have an active control of the movements. Then there is a project that wasn’t really developed in Fab Lab Berlin but it also started with an i3 Berlin workshop. One guy who came and built an i3 Berlin used that printer to design and make a DIY lasercutter. It’s called Mr. Beam. If you look at the machine, the Mr. Beam, you see lots of details that he took from the printer and transferred to the lasercutter. He used the printer that he made himself to build the lasercutter and it was very successful on Kickstarter.

CAMBAL camera stabilizer developed at Fab Lab Berlin. Dig that carbon fibre and milled aluminum!

Do you use open source tools in the Fab Lab? Is there a preference for that?

We try to use as much open source as possible, because we try to enable people to use their own computers to work. It makes a lot of sense to use free software, because obviously you don’t have to pay for it and everybody can install them on their own computers. We try to use open source as much as possible, but sometimes if you do more advanced stuff it makes sense to use something else.

What kind of people do you find using the lab?

That’s very hard to summarize because it’s very broad. There are doctors and lawyers and computer engineers, but also teachers or just people who are interested in how a 3D printer works. The age range is between 6 and 70, I would probably guess. There are some professionals who use the lab and the infrastructure to work on professional projects, then there are also people who are very professional in other fields and do very high-level projects, but for a hobby. The reasons why people come to the lab are very different. We also get a lot of interest from large companies and corporations who are interested in how innovation works in an environment like this. It’s been a very good mix of people and companies so far.

Rapid Robots workshop by Niklas Roy / School of Machines, Making & Make-Believe

Do you have any advice for people who are coming to Fab Lab for the first time?

One advice is to bring something to make, because otherwise it’s very boring. It can still be interesting to talk to people, but if you have a project and you already prepared something, you in most cases can find someone who can help you. Then you can make something and it’s a much different experience compared to just watching. When I toured the United States [visiting Fab Labs] I brought a project - a vertical axis wind turbine. Before I went I really thought hard of a project that I could do. Most people that hang around in a Fab Lab are people who are interested in making stuff. Also talking and sharing knowledge, but the core activity is really making things. If you bring something that’s interesting it’s also a very good starter for conversation.

Thanks, Wolf!

The Fab Foundation maintains a world-wide list of Fab Labs. Fab Lab Berlin has open lab hours each week as well as a number of introductory workshops to get you started with digital fabrication. Most other labs have similar programs. If there isn’t a lab near you, you could be like Wolf and start your own!

About the author: Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world.
His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical worlds by translating simple 3D models into physical structures.


Serverless computing wars: AWS Lambdas vs Azure Functions

Vijin Boricha
03 May 2018
5 min read
In recent times, local servers and on-premises computers are counted as old school. Users and organisations have shifted their focus to the Cloud to store, manage, and process data. Cloud computing has evolved in ways that let DevOps teams focus on improving code and processes rather than on provisioning, scaling, and maintaining servers. This means we have now entered the Serverless era, and the big players of this era are AWS Lambda and Azure Functions. So if you are a developer, you no longer need to worry about low-level infrastructure decisions. Coming to the bigger question:

What is Serverless Computing / Function-as-a-Service?

Serverless computing, or Function-as-a-Service (FaaS), can be described as a concept where applications depend on third-party services to manage server-side logic. This means application developers can concentrate on building their applications rather than thinking about servers. So if you want to build any type of application or backend service, just go about it, as everything required to run and scale your application is already being handled for you. The following are popular platforms that support FaaS:

- AWS Lambda
- Azure Functions
- Cloud Functions
- Iron.io
- Webtask.io

Benefits of Serverless Computing

Serverless applications and architectures are gaining momentum and are increasingly being used by companies of all sizes. Serverless technology rapidly reduces production time and minimizes your costs, while you still have the freedom to customize your code without hindering functionality. For good reason: serverless-based software takes care of many of the problems developers face when running systems and servers, such as fault tolerance, centralized logging, horizontal scalability, and deployments, to name a few. Additionally, the serverless pay-per-invocation model can result in drastic cost savings. Since AWS Lambda and Azure Functions are the most popular and widely used serverless computing platforms, we will discuss these services further.

AWS Lambda

AWS is recognized as one of the largest market leaders in cloud computing. One of the recent services within the AWS umbrella that has gained a lot of traction is AWS Lambda. It is the part of Amazon Web Services that lets you run your code without provisioning or managing servers. AWS Lambda is a compute service that enables you to deploy applications and back-end services that operate with zero upfront cost and require no system administration. Although seemingly simple and easy to use, Lambda is a highly effective and scalable compute service that provides developers with a powerful platform to design and develop serverless event-driven systems and applications.

Pros:
- Supports automatic scaling
- Supports an unlimited number of functions
- Offers 1 million requests for free, then charges $0.20 per 1 million invocations, plus $0.00001667 per GB-second

Cons:
- Limited concurrent executions (1,000 executions per account)
- Supports fewer languages than Azure (JavaScript, Java, C#, and Python)
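To give a feel for the programming model, the sketch below shows what a minimal Python Lambda handler can look like. The event shape assumes an API Gateway proxy integration, and the handler name and response fields follow the usual conventions rather than anything specific to this comparison.

    import json

    def lambda_handler(event, context):
        # Lambda invokes this handler with the triggering event (here assumed to be
        # an API Gateway request) and a context object holding runtime metadata.
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": "Hello, %s!" % name}),
        }

Scaling, patching, and server management all happen outside this code; you only upload the function and wire it to an event source.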
Azure Functions

Microsoft provides a solution you can use to easily run small segments of code in the Cloud: Azure Functions. It provides solutions for processing data, integrating systems, and building simple APIs and microservices. Azure Functions helps you easily run small pieces of code in the cloud without worrying about a whole application or the infrastructure to run it. With Azure Functions, you can use triggers to execute your code and bindings to simplify the input and output of your code.

Pros:
- Supports unlimited concurrent executions
- Supports C#, JavaScript, F#, Python, Batch, PHP, PowerShell
- Supports an unlimited number of functions
- Offers 1 million requests for free, then charges $0.20 per 1 million invocations, plus $0.000016 per GB-second

Cons:
- Manual scaling (App Service Plan)

Conclusion

When compared with the traditional client-server approach, serverless architecture saves a lot of effort and proves cost-effective for many organisations, no matter their size. The most important aspect of choosing the right platform is understanding which platform benefits your organisation best. AWS Lambda has been around for a while with extensive support for Linux-based platforms, but Azure Functions is not far behind in supporting the Windows-based suite, even though it entered the serverless market more recently. If you adopt AWS, you will be able to make the most of its availability, open source integration, pay-as-you-go model, and high-performance computing environment. Azure, on the other hand, is easier to use as it’s a Windows platform. It also supports a precise pricing model where charges are by the minute, and it has extended support for macOS and Linux. So if you are looking for a clear winner here, you shouldn't be surprised that AWS and Azure are similar in many ways, and it would be a tie if you had to pick which is better or worse than the other. This battle will always be heated and experts will keep placing their bets on who wins the race. In the end, the entire discussion drills down to what your business needs. After all, the mission is always to grow your business at a marginal cost.

Read more:
The Lambda programming model
How to Run Code in the Cloud with AWS Lambda
Download Microsoft Azure serverless computing e-book for free


Why should your e-commerce site opt for Headless Magento 2?

Guest Contributor
09 Apr 2019
5 min read
Over the past few years, headless e-commerce has been much talked about, often being touted as the ‘future of e-commerce’. Last year, in a joint webinar conducted by Magento on ten B2B eCommerce trends for 2018, BORN and Magento predicted that headless CMS commerce will become a popular type of website architecture in the coming year and beyond, for both B2B and B2C businesses. Headless Magento is one such headless CMS which is gaining rapid popularity. Those who have recently jumped on the Magento bandwagon might find it new and hard to grasp. So, in this post we will give a brief overview of what headless Magento is, its benefits, and why you should opt for Headless Magento 2.

What is a Headless Browser?

Headless browsers are basically software-enabled browsers offering a separate user interface. You can automate various actions of your website and monitor the performance under different circumstances. If you’re working under command-line instructions in a headless browser, then there is no GUI. With the help of a headless browser, one can view several things such as the dimensions of a web layout, the font family, and other design elements used in a particular website. Headless browsers are mainly used to test web pages.

Earlier, e-commerce platforms like Magento and Shopify used to have their back end and front end tightly integrated. After the introduction of the headless architecture, the front end was separated from the back end. As a result, both parts work independently. There are various e-commerce platforms that support a headless approach, such as Magento, BigCommerce, Shopify, and many more. With Shopify, if you already have a website you can take the headless approach and use Shopify for your sales with links to it from your main website. Like Shopify, BigCommerce also offers a good range of themes and templates to make sure stores look professional and get up-and-running fast. The platform incorporates a full-featured CMS that allows you to run an entire website, not just your online store.

Going headless with browsers means that you are running the output in a non-graphical environment such as a Linux terminal, without X-Windows or Wayland. For Google search algorithms, headless browsers play quite a crucial role. The search engine strongly recommends using headless architecture because it helps Google render Ajax websites on the web. So, websites which integrate the headless browser on the web server are easier for the search engine to access. This is because Google can access the Ajax website program on the server before making it available for search engine rendering.

What are the benefits of using a Headless browser?

Going headless has a variety of benefits. If you are a web designer, the markup in HTML would be simple to understand. This is because PHP code, complex JavaScript, and widgets are no longer in use - just plain HTML with some kind of additional placeholder syntax is required. This also means that the HTML page could be served statically, bringing the application load time down by a significant amount. From the e-commerce perspective, it acts as a viable channel for sales, where a majority of traffic comes from mobile. With the advent of disruptive technologies such as the headless approach for Magento, the path to purchase has expanded. Today it not only includes mobile traffic but even features a complex matrix of buyer touchpoints.

Why use a headless browser for Magento?

Opting for Magento with a headless approach has benefits of its own, because in Magento, JavaScript-coded parts are loosely coupled, and the scope of flexibility widens when it comes to choosing any framework, such as AngularJS, VueJS, and others. In other words, one does not need to be a Magento developer to build on Magento. All you have to do is focus on a REST API. For instance, if a single page is loading 15 different resources, of course from different URLs, the HTML document would become optimal.
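As a rough illustration of that API-first idea (not from the original article), any front end can pull catalog data from Magento 2 over its REST API. The sketch below assumes a hypothetical store URL and an integration access token created in the Magento admin; endpoint paths and response fields may vary with your Magento version and configuration.

    import requests

    BASE_URL = "https://example-store.com"   # hypothetical store URL
    TOKEN = "<integration-access-token>"     # assumed token created in the Magento admin

    # Fetch the first ten catalog products; any front end (React, Vue, a mobile app)
    # can consume the same JSON without touching Magento's PHP templates.
    response = requests.get(
        BASE_URL + "/rest/V1/products",
        params={"searchCriteria[pageSize]": 10},
        headers={"Authorization": "Bearer " + TOKEN},
    )
    response.raise_for_status()
    for product in response.json().get("items", []):
        print(product["sku"], product["name"])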
Magento is a flexible framework that can be used to create your own logic, such as pricing, logins, checkout, and so on. With a headless approach, Magento gets a clear performance boost. All static parts of the pages are loaded quickly, and the dynamic parts of the pages are loaded lazily through Ajax. In Magento 2, you have the additional support of full page cache. It may even interest you to know that Magento 2 offers Private Content, which is equipped to handle lazy loading more efficiently.

Isn't Magento 2 headless already?

One of the common misconceptions that I have come across is people believing that Magento 2 is already headless. For Magento to be headless, a JavaScript developer needs to know the KnockoutJS ViewModels to make use of the Magento logic. If the ViewModels do not suffice, a Magento developer should add backend logic to this JavaScript layer. However, this also applies to going 100% headless: whenever a REST resource is not available, professionals must make it available for headless Magento 2. Whether or not to be headless will always be debated, because an HTML document not only contains the templating part but also static content that shouldn’t be changed. For me, a headless approach is best suited for businesses who have a CMS website for a B2C and B2B storefront. It is also popular among those who are looking to put their JavaScript teams back to work.

Author Bio

Olivia Diaz is working in eTatvaSoft. Being a tech geek, she keeps a close watch over the industry, focusing on the latest technology news and gadgets.


Learn a Framework; forget the language!

Aaron Lazar
04 Jul 2018
7 min read
If you’re new to programming or have just a bit of experience, you’re probably thoroughly confused, wondering whether what you’ve been told all this while was bogus! If you’re an experienced developer, you’re probably laughing (or scorning) at the title by now, wondering if I was high when I wrote the article. What I’m about to tell you is something that I’ve seen happen, and it could be professionally beneficial to you. Although, I must warn you that it’s not what everyone is going to approve of, so read further but implement at your own risk.

Okay, so I was saying: learn the framework, not the language. I’m going to explain why to take this approach, keeping two sets of audience in mind. The first are total newbies, who’re probably working in some other field and now want to switch roles, but have realised that with all the buzz of automation and the rise of tech, the new role demands a basic understanding of programming. The latter are developers who probably have varied levels of experience with programming, and now want to get into a new job, which requires them to have a particular skill. Later I will clearly list the benefits of taking this approach. Let’s take audience #1 first.

You’re a young Peter Parker just bitten by the programming bug

You’re completely new to programming and haven’t the slightest clue about what it’s all about. You can spend close to a month trying to figure out a programming language like maybe Python or Java and then try to build something with it. Or you could jump straight into learning a framework and building something out of it. Now, in both cases we’re going to assume that you’re learning from a book, a video course, or maybe a simple online tutorial. When you choose to learn the framework and build something, you’re going with the move-fast-and-break-things approach, which according to me is the best way that anyone can learn something new. Once you have something built in front of you, you’re probably going to remember it much more easily than when you learn something theoretical first and then try to apply it in practice at a later stage.

How to do it?

Start by understanding your goals first: where do you want to go from where you currently are? Now if your answer was that you wanted to get into Web Development, to build websites for a living, you have your first answer. What you need to do next is to understand what skills your “dream” company is actually looking for. You’ll understand that from the Job Description and a little help from someone technical. If the skill is web development, look for the accompanying tools/frameworks. Say, for example, you found it was Node. Start learning Node! Pick up a book/video/tutorial that will help you build something as you learn. Spend at least a good week getting used to stuff. Have it reviewed by someone knowledgeable and watch carefully as the person works. You’ll be able to relate quite easily to what is going on, and you will pick up some really useful tips and tricks quickly. Keep practicing another week and you’ll start getting good at it.

Why will it work?

Well, to be honest, several organisations work primarily with frameworks on a number of projects, mainly because frameworks simplify the building of large applications. Very rarely will you find the need to work with the vanilla language. By taking the framework-first approach, you’re gaining the skill, i.e. web development, fast, rather than worrying about the medium or tool that will enable you to build it.
You’re not spending too much time on learning the foundations, which you may never use in your development. Another example: say you’ve been longing to learn how to build games, but don’t know how to program. Plus, C++ is a treacherous language for a newbie to learn. Don’t worry at all! Just start learning how to work with Unreal Engine or any other game engine/framework. Use its in-built features, like Blueprints, which allows you to drag and drop things to build your game, and voila! You have your very own game! ;)

You’re a Ninja wielding a katana in Star Wars

Now you’re the experienced one. You probably have a couple of apps under your belt and are looking to learn a new skill, maybe because that’s what your dream company is looking for. Let’s say you’re a web developer who now wants to move into mobile or enterprise application development. You’re familiar with JavaScript but don’t really want to take the time to learn a new language like C#. Don’t learn it, then. Just learn Xamarin or .NET Core! In this case, you’re already familiar with how programming works, but all that you don’t know is the syntax and workings of the new language, C#. When you jump straight into .NET Core, you’ll be able to pick up the nitty-gritties much faster than if you were to learn C# first and then start with .NET Core. No harm done if you were to take that path, but you’re just slowing down your learning by picking up the language first.

Impossible is what?

I know for a fact that by now, half of you are itching to tear me apart! I encourage you to vent your frustration in the comments section below! :) I honestly think it’s quite possible for someone to learn how to build an application without learning the programming language. You could learn how to drive an automatic car first and not know a damn thing about gears, but you’re still driving, right? You don’t always need to know the alphabet to be able to hold a conversation. At this point, I could cross the line by saying that this is true even in the latest, most cutting-edge tech domain: machine learning. It might be possible even for budding Data Scientists to start using Tensorflow straight away without learning Python, but I’ll hold my horses there.

Benefits of learning a Framework directly

There are 4 main benefits of this approach:

- You’re learning to become relevant quickly, which is very important, considering the level of competition that’s out there
- You’re meeting the industry requirement of knowing how to work with the framework to build large applications
- You’re unconsciously applying a fail-fast approach to your learning, by building an application from scratch
- Most importantly, you’re avoiding all the fluff - the aspects you may never use in languages, or maybe the bad coding practices that you will avoid altogether

As I conclude, it’s my responsibility to advise you that not learning a language entirely can be a huge drawback. For example, suppose your framework doesn’t address the problem you have at hand; you will have to work around the situation by working with the vanilla language. So when I say forget the language, I actually mean for the time being, when you’re trying to acquire the new skill fast. But to become a true expert, you must learn to master both the language and the framework together. So go forth and learn something new today!
Read more:
Should software be more boring? The “Boring Software” manifesto thinks so
These 2 software skills subscription services will save you time – and cash
Minko Gechev: “Developers should learn all major front end frameworks to go to the next level”


Python - OOP! (Python: Object-Oriented Programming)

Liz Tom
25 Apr 2016
5 min read
Or: Currency Conversion using Python

I love to travel and one of my favorite programming languages is Python. Sometimes when I travel I like to make things a bit more difficult, and instead of just asking Google to convert currency for me, I like to have a script on my computer that requires me to know the conversion rate and then calculate my needs. But seriously, let's use a currency converter to help explain some neat reasons why Object-Oriented Programming is awesome.

Money Money Money

First let's build a currency class.

    class Currency(object):
        def __init__(self, country, value):
            self.country = country
            self.value = float(value)

OK, neato. Here's a currency class. Let's break this down. In Python every class has an __init__ method. This is how we build an instance of a class. If I call Currency(), it's going to break because our class also happens to require two arguments. self is not required to be passed. In order to create an instance of currency we just use Currency('Canada', 1.41) and we've now got a Canadian instance of our currency class. Now let's add some helpful methods onto the Currency class.

    # These methods go inside the Currency class:
    def from_usd(self, dollars):
        """If you provide USD it will convert to foreign currency"""
        return self.value * float(dollars)

    def to_usd(self, dollars):
        """If you provide foreign currency it will convert to USD"""
        return float(dollars) / self.value

Again, self isn't needed by us to use the methods but needs to be passed to every method in our class. self is our instance. In some cases self will refer to our Canadian instance, but if we were to create a new instance Currency('Mexico', 18.45), self can now also refer to our Mexican instance. Fun. We've got some awesome methods that help me do math without me having to think.

How does this help us? Well, we don't have to write new methods for each country. This way, as currency rates change, we can update the changes rather quickly and also deal with many countries at once. Conversion between USD and foreign currency is all done the same way. We don't need to change the math based on the country we're planning on visiting, we only need to change the value of the currency relative to USD. I'm an American so I used USD because that's the currency I'd be converting to and from most often. But if I wanted, I could have named them from_home_country and to_home_country.

Now how does this work? Well, if I wanted to run this script I'd just do this:

    again = True
    while again:
        country = raw_input('What country are you going to?\n')
        value = float(raw_input('How many of their dollars equal 1 US dollar?\n'))
        foreign_country = Currency(country, value)
        convert = raw_input('What would you like to convert?\n1. To USD\n2. To %s dollars\n' % country)
        dollars = raw_input('How many dollars would you like to convert?\n')
        if convert == '1':
            print dollars + ' ' + country + ' dollars are worth ' + str(foreign_country.to_usd(dollars)) + ' US dollars\n'
        elif convert == '2':
            print dollars + ' US dollars are worth ' + str(foreign_country.from_usd(dollars)) + ' ' + country + ' dollars'
        again = raw_input('\n\n\nWant to go again? (Y/N)\n')
        if again == 'y' or again == 'Y':
            again = True
        elif again == 'n' or again == 'N':
            again = False

**I'm still using Python 2, so if you're using Python 3 you'll want to change those raw_inputs to just input. This way we can convert as much currency as we want between USD and any country!
I can now travel the world feeling comfortable that if I can't access the Internet, and I happen to have my computer nearby, and I am at a bank or the hotel lobby staring at the exchange rate board, I'll be able to convert currency with ease without having to remember which way converts my money to USD and which way converts USD to Canadian dollars.

Object-oriented programming allows us to create objects that all behave in the same way but store different values, like a blue car, red car, or green car. The cars all behave the same way but they are all described differently. They might all have different MPG but the way we calculate their MPG is the same. They all have four wheels and an engine. While it can be harder to build your program with object-oriented design in mind, it definitely helps with maintainability in the long run.

About the Author

Liz Tom is a Software Developer at Pop Art, Inc in Portland, OR. Liz’s passion for full stack development and digital media makes her a natural fit at Pop Art. When she’s not in the office, you can find Liz attempting parkour and going to check out interactive displays at museums.

What is Core ML?

Savia Lobo
28 Sep 2018
5 min read
Introduced by Apple, Core ML is a machine learning framework that enables iOS app developers to integrate machine learning technology into their apps. It supports natural language processing (NLP), image analysis, and various other conventional models to provide top-notch on-device performance with a minimal memory footprint and power consumption. This article is an extract taken from the book Machine Learning with Core ML written by Joshua Newnham. In this article, you will get to know the basics of what Core ML is and its typical workflow.

With the release of iOS 11 and Core ML, performing inference is just a matter of a few lines of code. Prior to iOS 11, inference was possible, but it required some work to take a pre-trained model and port it across using an existing framework such as Accelerate or Metal Performance Shaders (MPSes). Accelerate and MPSes are still used under the hood by Core ML, but Core ML takes care of deciding which underlying framework your model should use (Accelerate using the CPU for memory-heavy tasks and MPSes using the GPU for compute-heavy tasks). It also takes care of abstracting a lot of the details away; this layer of abstraction is shown in the following diagram.

There are additional layers too; iOS 11 has introduced and extended domain-specific layers that further abstract a lot of the common tasks you may use when working with image and text data, such as face detection, object tracking, language translation, and named entity recognition (NER). These domain-specific layers are encapsulated in the Vision and natural language processing (NLP) frameworks; we won't be going into any details of these frameworks here, but you will get a chance to use them in later chapters. It's worth noting that these layers are not mutually exclusive, and it is common to find yourself using them together, especially the domain-specific frameworks that provide useful preprocessing methods we can use to prepare our data before sending it to a Core ML model.

So what exactly is Core ML?

You can think of Core ML as a suite of tools used to facilitate the process of bringing ML models to iOS and wrapping them in a standard interface so that you can easily access and make use of them in your code. Let's now take a closer look at the typical workflow when working with Core ML.

Core ML workflow

As described previously, the two main tasks of an ML workflow consist of training and inference. Training involves obtaining and preparing the data, defining the model, and then the real training. Once your model has achieved satisfactory results during training and is able to perform adequate predictions (including on data it hasn't seen before), your model can then be deployed and used for inference using data outside of the training set.

Core ML provides a suite of tools to facilitate getting a trained model into iOS, one being the Python package released called Core ML Tools; it is used to take a model (consisting of the architecture and weights) from one of the many popular packages and export a .mlmodel file, which can then be imported into your Xcode project. Once imported, Xcode will generate an interface for the model, making it easily accessible via code you are familiar with. Finally, when you build your app, the model is further optimized and packaged up within your application.
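As a rough sketch of that export step (the model file, feature names, and labels below are hypothetical, and the exact converter entry point depends on your coremltools version), converting a trained Keras model and saving it as a .mlmodel could look like this:

    import coremltools

    # Minimal sketch (hypothetical file and feature names): convert a trained
    # Keras model to Core ML and save the .mlmodel for import into Xcode.
    coreml_model = coremltools.converters.keras.convert(
        "digits_classifier.h5",
        input_names=["image"],
        output_names=["digitProbabilities"],
        class_labels=[str(i) for i in range(10)],
    )
    coreml_model.author = "Example Author"
    coreml_model.short_description = "Handwritten digit classifier"
    coreml_model.save("DigitsClassifier.mlmodel")

Dragging the resulting DigitsClassifier.mlmodel into an Xcode project is what triggers the generated Swift interface described above.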
A summary of the process of generating the model is shown in the following diagram, which illustrates the process of creating the .mlmodel, either using an existing model from one of the supported frameworks, or by training it from scratch. Core ML Tools supports most of the frameworks, either internally or via third-party plugins, including Keras, Turi, Caffe, scikit-learn, LibSVM, and XGBoost. Apple has also made this package open source and modular for easy adaptation to other frameworks, by Apple or by yourself. The process of importing the model is illustrated in this diagram.

In addition, there are frameworks with tighter integration with Core ML that handle generating the Core ML model, such as Turi Create, IBM Watson Services for Core ML, and Create ML. We will be introducing Create ML in chapter 10; for those interested in learning more about Turi Create and IBM Watson Services for Core ML, please refer to the official webpages via the following links:

Turi Create: https://github.com/apple/turicreate
IBM Watson Services for Core ML: https://developer.apple.com/ibm/

Once the model is imported, as mentioned previously, Xcode generates an interface that wraps the model, the model inputs, and the outputs.

Thus, in this post, we learned about the workflow of training and how to import a model. If you've enjoyed this post, head over to the book Machine Learning with Core ML to delve into the details of what this model is and what Core ML currently supports.

Read more:
Emotional AI: Detecting facial expressions and emotions using CoreML [Tutorial]
Build intelligent interfaces with CoreML using a CNN [Tutorial]
Watson-CoreML: IBM and Apple’s new machine learning collaboration project


Best practices for deploying self-service BI with Qlik Sense

Amey Varangaonkar
31 May 2018
7 min read
As part of a successful deployment of Qlik Sense, it is important that IT recognizes self-service Business Intelligence as having its own dynamics and adoption rules. The various use cases and subsequent user groups thus need to be assessed and captured. Governance should always be present, but power users should never get the feeling that they are restricted. Once they are won over, the rest of the traction and the adoption of other user types is very easy. In this article, we will look at the most important points to keep in mind while deploying self-service with Qlik Sense. The following excerpt is taken from the book Mastering Qlik Sense, authored by Martin Mahler and Juan Ignacio Vitantonio. This book demonstrates useful techniques to design useful and highly profitable Business Intelligence solutions using Qlik Sense. Here's the list of points to be kept in mind:

Qlik Sense is not QlikView

Not even nearly. The biggest challenge and fallacy is that the organization was sold, by Qlik or someone else, just the next version of the tool. It did not help at all that Qlik itself was working for years on Qlik Sense under the initial product name Qlik.Next. Whatever you are being told, however it is being sold to you, Qlik Sense is at best the cousin of QlikView. Same family, but no blood relation. Thinking otherwise sets the wrong expectation, so the business gives the wrong message to stakeholders and does not raise awareness with IT that self-service BI cannot be deployed in the same fashion as guided analytics (QlikView, in this case). Disappointment is imminent when stakeholders realize Qlik Sense cannot replicate their QlikView dashboards.

Simply installing Qlik Sense does not create a self-service BI environment

Installing Qlik Sense and giving users access to the tool is a start, but there is more to it than simply installing it. The infrastructure requires design and planning, data quality processing, data collection, and determining who intends to use the platform to consume what type of data. If data is not available and accessible to the user, data analytics serve no purpose. Make sure a data warehouse or similar is in place and the business has a use case for self-service data analytics. A good indicator for this is when the business or project works with a lot of data, and there are business users who have lots of Excel spreadsheets lying around analyzing it in different ways. That’s your best candidate for Qlik Sense.

IT should monitor the Qlik Sense environment rather than control it

IT needs to unlearn in order to learn new things, and the same applies when it comes to deploying self-service. Create a framework with guidelines and principles and monitor that users are following it, rather than limiting them in their capabilities. This framework needs to have the input of the users as well and to be elastic. Also, not many IT professionals agree with giving away too much power to the user in the development process, believing this leads to chaos and anarchy. While the risk is there, this fear needs to be overcome. Users love data analytics, and they are keen to get the help of IT to create the most valuable dashboard possible and ensure it will be well received by a wide audience.

Identifying key users and user groups is crucial

For a strong adoption of the tool, IT needs to prepare the environment and identify the key power users in the organization and win them over to using the technology.
It is important that they are intensively supported, especially in the beginning, and that they are allowed to drive how the technology should be used rather than having principles imposed on them. Governance should always be present, but power users should never get the feeling they are restricted by it. Once they are won over, the rest of the traction and the adoption of other user types is very easy.

Qlik Sense sells well – do a lot of demos

Data analytics, compelling visualizations, and the interactivity of Qlik Sense are something almost everyone is interested in. The business wants to see its own data aggregated and distilled in a cool and glossy dashboard. Utilize the momentum and do as many demos as you can to win advocates of the technology and promote a consciousness of becoming a data-driven culture in the organization. Even the simplest Qlik Sense dashboards amaze people and boost their creativity for use cases where data analytics in their area could apply and create value.

Promote collaboration

Sharing is caring. This applies not only to insights, which are naturally shared with the excitement of having found out something new and valuable, but also to how the new insight has been derived. People tend to keep their approach and methodology to themselves, but this is counterproductive. It is important that applications, visualizations, and dashboards created with Qlik Sense are shared and demonstrated to other Qlik Sense users as frequently as possible. This not only promotes a data-driven culture but also encourages collaboration between users and teams across various business functions, which would not have happened otherwise. They could be sharing knowledge, tips, and tricks, or even realizing that they look at the same slices of data and could create additional value by connecting them together.

Market the success of Qlik Sense within the organization

If Qlik Sense has had a successful achievement in a project, tell others about it. Create a success story and propose doing demos of the dashboard and its analytics. IT has historically been very bad at promoting its work, which is counterproductive. Data analytics creates value, and there is nothing embarrassing about boasting about its success; as Muhammad Ali suggested, it's not bragging if it's true.

Introduce guidelines on design and terminology

Avoid the pitfalls of having multiple different-looking dashboards by promoting a consistent branding look across all Qlik Sense dashboards and applications, including terminology and best practices. Ensure the document is easily accessible to all users. Also, create predesigned templates with some sample sheets so that users can duplicate them, modify and extend them to their liking, and apply the same design.

Protect less experienced users from complexities

Don't overwhelm users if they have never developed in their lives. Approach less technically savvy users in a different way by providing them with sample data and sample templates, including a library of predefined visualizations, dimensions, or measures (so-called Master Key Items). Be aware that what is intuitive to Qlik professionals or power users is not necessarily intuitive to other users. Be patient and appreciative of their feedback, and try to understand how a typical business user might think.
If you found the excerpt useful, make sure you check out the book Mastering Qlik Sense to learn more of these techniques for efficient Business Intelligence using Qlik Sense.

Read more:
How Qlik Sense is driving self-service Business Intelligence
Overview of a Qlik Sense® Application's Life Cycle
What we learned from Qlik Qonnections 2018


Quantum expert Robert Sutor explains the basics of Quantum Computing

Packt Editorial Staff
12 Dec 2019
9 min read
What if we could do chemistry inside a computer instead of in a test tube or beaker in the laboratory? What if running a new experiment was as simple as running an app and having it completed in a few seconds? For this to really work, we would want it to happen with complete fidelity. The atoms and molecules as modeled in the computer should behave exactly like they do in the test tube. The chemical reactions that happen in the physical world would have precise computational analogs. We would need a completely accurate simulation.

If we could do this at scale, we might be able to compute the molecules we want and need. These might be new materials for shampoos or even alloys for cars and airplanes. Perhaps we could more efficiently discover medicines that are customized to your exact physiology. Maybe we could get a better insight into how proteins fold, thereby understanding their function, and possibly create custom enzymes to positively change our body chemistry.

Is this plausible? We have massive supercomputers that can run all kinds of simulations. Can we model molecules in the above ways today?

This article is an excerpt from the book Dancing with Qubits written by Robert Sutor. Robert helps you understand how quantum computing works and delves into the math behind it with this quantum computing textbook.

Can supercomputers model chemical simulations?

Let's start with C8H10N4O2 – 1,3,7-Trimethylxanthine. This is a very fancy name for a molecule that millions of people around the world enjoy every day: caffeine. An 8-ounce cup of coffee contains approximately 95 mg of caffeine, and this translates to roughly 2.95 × 10^20 molecules. Written out, this is 295,000,000,000,000,000,000 molecules. A 12-ounce can of a popular cola drink has 32 mg of caffeine, the diet version has 42 mg, and energy drinks often have about 77 mg.

These numbers are large because we are counting physical objects in our universe, which we know is very big. Scientists estimate, for example, that there are between 10^49 and 10^50 atoms in our planet alone. To put these values in context, one thousand = 10^3, one million = 10^6, one billion = 10^9, and so on. A gigabyte of storage is one billion bytes, and a terabyte is 10^12 bytes.

Getting back to the question I posed at the beginning of this section, can we model caffeine exactly on a computer? We don't have to model the huge number of caffeine molecules in a cup of coffee, but can we fully represent a single molecule at a single instant? Caffeine is a small molecule and contains protons, neutrons, and electrons. In particular, if we just look at the energy configuration that determines the structure of the molecule and the bonds that hold it all together, the amount of information to describe this is staggering. In particular, the number of bits, the 0s and 1s, needed is approximately 10^48: 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000. And this is just one molecule!

Yet somehow nature manages to deal quite effectively with all this information. It handles the single caffeine molecule, all those in your coffee, tea, or soft drink, and every other molecule that makes up you and the world around you. How does it do this? We don't know! Of course, there are theories, and these live at the intersection of physics and philosophy. However, we do not need to understand it fully to try to harness its capabilities.

We have no hope of providing enough traditional storage to hold this much information.
Our dream of exact representation appears to be dashed. This is what Richard Feynman meant in his quote: "Nature isn't classical." However, 160 qubits (quantum bits) could hold 2^160 ≈ 1.46 × 10^48 bits while the qubits were involved in a computation. To be clear, I'm not saying how we would get all the data into those qubits, and I'm also not saying how many more we would need to do something interesting with the information. It does give us hope, however. In the classical case, we will never fully represent the caffeine molecule. In the future, with enough very high-quality qubits in a powerful quantum computing system, we may be able to perform chemistry on a computer.

How quantum computing is different from classical computing

I can write a little app on a classical computer that can simulate a coin flip. This might be for my phone or laptop. Instead of heads or tails, let's use 1 and 0. The routine, which I call R, starts with one of those values and randomly returns one or the other. That is, 50% of the time it returns 1 and 50% of the time it returns 0. We have no knowledge whatsoever of how R does what it does. When you see "R," think "random." This is called a "fair flip." It is not weighted to slightly prefer one result over the other. Whether we can produce a truly random result on a classical computer is another question. Let's assume our app is fair.

If I apply R to 1, half the time I expect 1 and the other half 0. The same is true if I apply R to 0. I'll call these applications R(1) and R(0), respectively. If I look at the result of R(1) or R(0), there is no way to tell if I started with 1 or 0. This is just like a secret coin flip, where I can't tell whether I began with heads or tails just by looking at how the coin has landed. By "secret coin flip," I mean that someone else has flipped it and I can see the result, but I have no knowledge of the mechanics of the flip itself or the starting state of the coin.

If R(1) and R(0) are randomly 1 and 0, what happens when I apply R twice? I write this as R(R(1)) and R(R(0)). It's the same answer: a random result with an equal split. The same thing happens no matter how many times we apply R. The result is random, and we can't reverse things to learn the initial value.

Now for the quantum version. Instead of R, I use H. It too returns 0 or 1 with equal chance, but it has two interesting properties. It is reversible: though it produces a random 1 or 0 starting from either of them, we can always go back and see the value with which we began. It is also its own reverse (or inverse) operation: applying it two times in a row is the same as having done nothing at all.

There is a catch, though. You are not allowed to look at the result of what H does if you want to reverse its effect. If you apply H to 0 or 1, peek at the result, and apply H again to that, it is the same as if you had used R. If you observe what is going on in the quantum case at the wrong time, you are right back at strictly classical behavior.

To summarize using the coin language: if you flip a quantum coin and then don't look at it, flipping it again will yield the heads or tails with which you started. If you do look, you get classical randomness.
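To see these two properties in action, here is a minimal NumPy sketch (my own illustration, not code from the book) that models a single qubit as a two-element state vector and the quantum flip H as the Hadamard matrix:

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # the Hadamard gate: a reversible "coin flip"

zero = np.array([1.0, 0.0])            # the starting state |0> ("tails")

once = H @ zero                         # one flip, without measuring
print(once ** 2)                        # [0.5 0.5] -> measuring now gives 0 or 1 with equal chance

twice = H @ once                        # flip again, still without measuring
print(np.round(twice, 10))              # [1. 0.]  -> back to |0>: H is its own inverse
```

Peeking between the two flips would correspond to replacing the state vector with one of the measurement outcomes; after that, the second application of H behaves just like the classical routine R.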
A second area where quantum is different is in how we can work with simultaneous values. Your phone or laptop uses bytes as individual units of memory or storage. That's where we get phrases like "megabyte," which means one million bytes of information. A byte is further broken down into eight bits, which we've seen before. Each bit can be a 0 or 1. Doing the math, each byte can represent 2^8 = 256 different numbers composed of eight 0s or 1s, but it can only hold one value at a time. Eight qubits can represent all 256 values at the same time. This is through superposition, but also through entanglement, the way we can tightly tie together the behavior of two or more qubits. This is what gives us the (literally) exponential growth in the amount of working memory.

How quantum computing can help artificial intelligence

Artificial intelligence and one of its subsets, machine learning, are extremely broad collections of data-driven techniques and models. They are used to help find patterns in information, learn from the information, and automatically perform more "intelligently." They also give humans help and insight that might have been difficult to get otherwise.

Here is a way to start thinking about how quantum computing might be applicable to large, complicated, computation-intensive systems of processes such as those found in AI and elsewhere. These three cases are, in some sense, the "small, medium, and large" ways quantum computing might complement classical techniques:

There is a single mathematical computation somewhere in the middle of a software component that might be sped up via a quantum algorithm.
There is a well-described component of a classical process that could be replaced with a quantum version.
There is a way to avoid the use of some classical components entirely in the traditional method because of quantum, or the entire classical algorithm can be replaced by a much faster or more effective quantum alternative.

As I write this, quantum computers are not "big data" machines. This means you cannot take millions of records of information and provide them as input to a quantum calculation. Instead, quantum may be able to help where the number of inputs is modest but the computations "blow up" as you start examining relationships or dependencies in the data. In the future, however, quantum computers may be able to input, output, and process much more data. Even if it is just theoretical now, it makes sense to ask if there are quantum algorithms that can be useful in AI someday.

To summarize, we explored how quantum computing works and different applications of artificial intelligence in quantum computing. Get the quantum computing book Dancing with Qubits by Robert Sutor today, in which he explores the inner workings of quantum computing. The book entails some sophisticated mathematical exposition and is therefore best suited for those with a healthy interest in mathematics, physics, engineering, and computer science.

Intel introduces cryogenic control chip, 'Horse Ridge' for commercially viable quantum computing
Microsoft announces Azure Quantum, an open cloud ecosystem to learn and build scalable quantum solutions
Amazon re:Invent 2019 Day One: AWS launches Braket, its new quantum service and releases


3 cybersecurity lessons for e-commerce website administrators

Guest Contributor
25 Jun 2019
8 min read
In large part, the security of an ecommerce company is the responsibility of its technical support team and ecommerce software vendors. In reality, cybercriminals often exploit the security illiteracy of the staff to hit a company. Of the whole ecommerce team, web administrators are most often targeted by hacker attacks, as they control access to the admin panel with lots of sensitive data. Having broken into the admin panel, criminals can take over an online store, disrupt its operation, retrieve customer confidential data, steal credit card information, transfer payments to their own account, and do more harm to business owners and customers.

Online retailers contribute greatly to the security of their company when they educate web administrators about where security threats can come from and what measures they can take to prevent breaches. We have summarized some key lessons below. It's time for a quick cybersecurity class!

Lesson 1. Mind password policy

Starting with the basics of cybersecurity, we will proceed to more sophisticated rules in the lessons that follow. While the importance of a secure password policy may seem obvious, it is still shocking how careless people can be when choosing a password. In ecommerce, web administrators set the credentials for accessing the admin panel, and they can "help" cybercriminals greatly if they neglect basic password rules.

Never use the same or similar passwords to log into different systems. In general, sticking to the same patterns when creating passwords (for example, using a date of birth) is risky. Typically, people have a number of personal profiles in social networks and email services. If they use identical passwords for all of them, cybercriminals can steal the credentials to just one social media profile to crack the others. If employees are that negligent about accessing corporate systems, they endanger the security of the company. Let's outline the worst-case scenario. Criminals take advantage of the leaked database of 167 million LinkedIn accounts to hack a large online store. As soon as they see the password of its web administrator (the employment information is stated in the profile, just for hackers' convenience), they try to apply the password to get access to the admin panel. What luck! The way to break into this web store was all too easy.

Use strong and impersonalized passwords. We need to introduce the notion of doxing to fully explain the importance of this rule. Doxing is the process of collecting pieces of information from social accounts to ultimately create a virtual profile of a person. Cybercriminals engage in doxing to crack a password to an ecommerce platform by using an admin's personal information in it. Therefore, a strong password shouldn't contain personal details (like dates, names, age, and so on) and must consist of eight or more characters featuring a mix of letters, numbers, and unique symbols.
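As a rough illustration of these rules (a sketch of my own, not from the original article), a short script can screen a candidate admin password for length, character mix, and obvious personal details; the personal_details list and the sample passwords below are invented for the demonstration:

```python
import re

def is_strong(password, personal_details):
    """Apply the rules above: 8+ characters, letters + digits + symbols, no personal info."""
    if len(password) < 8:
        return False
    if not re.search(r"[A-Za-z]", password):      # at least one letter
        return False
    if not re.search(r"\d", password):            # at least one digit
        return False
    if not re.search(r"[^A-Za-z0-9]", password):  # at least one special symbol
        return False
    # reject passwords built from names, birth years, etc. (the "doxing" risk)
    lowered = password.lower()
    return not any(detail.lower() in lowered for detail in personal_details)

print(is_strong("Summer1990", ["irshad", "1990"]))   # False: no special symbol, contains a birth year
print(is_strong("t7#Qz!p2Wv", ["irshad", "1990"]))   # True
```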
Lesson 2. Watch out for phishing attacks

With the wealth of employment information people leave in social accounts, hackers hold all the cards for implementing targeted, rather than bulk, phishing attacks. When planning a malicious attack on an ecommerce business, criminals can search for profiles of employees, check their positions and responsibilities, and conclude what company information they have access to. In such an easy way, hackers get to know a web store administrator and follow up with a series of phishing attacks. Here are two possible scenarios of attacks.

When hackers target a personal computer. Having found the LinkedIn profile of a web administrator and obtained a personal email address, hackers can bombard them with disguised messages, for example, from a bank or tax authorities. If the admin lets their guard down and clicks a malicious link, malware installs itself on their personal computer. Should they remotely log in to the admin panel, hackers steal their credentials and immediately set a new password. From this moment, they take over control of the web store.

Hackers can also go a different way. They target the personal email of the web administrator with a phishing attack and succeed in taking it over. Let's say they have already found out the URL of the admin panel by that time. All they have to do now is request a password change for the panel, click the confirmation link from the admin's email, and set a new password. In the described scenario, the web administrator has made three security mistakes: using a personal email for work purposes, not changing the default admin URL, and taking the bait of a phishing email.

When hackers target a work computer. Here is how a cyberattack may unfold if web administrators have been reckless enough to disclose a work email online. This time, hackers create a targeted malicious email related to work activities. Let's say the admin gets a legitimate-looking email from FedEx informing them about delivery problems. Not alarmed, they open the email, click the link to learn the details, and compromise the security of the web store by giving away the credentials to the admin panel.

The main mistake in dealing with phishing attacks is to expect a fraudulent email to look suspicious. However, phishers falsify emails from real companies, so it can be easy to fall into the trap. Here are recommendations for ecommerce web administrators to follow:

Don't use personal emails to log in to the admin panel.
Don't make your work email publicly available.
Don't use your work email for personal purposes (e.g., for registration in social networks).
Watch out for links and downloads in emails. Always hover over a link prior to clicking it – in malicious emails, the destination URL doesn't match the expected destination website (a simple check along these lines is sketched after this list).
Remember that legitimate companies never ask for your credentials, credit card details, or any other sensitive information in emails.
Be wary of emails with urgent notifications and deadlines – hackers often try to allay suspicions by provoking anxiety and panic among their victims.
Engage two-step verification for the ecommerce admin panel.
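As a small illustration of the "hover before you click" advice (my own sketch, not a tool referenced in the article), the snippet below scans the links in an HTML email body and flags anchors whose visible text names one domain while the actual href points somewhere else; the FedEx-style sample link is invented:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

def host_of(text):
    """Crude helper: pull the host part out of a pasted URL or bare domain."""
    text = text.strip()
    if "://" in text:
        text = text.split("://", 1)[1]
    return text.split("/")[0].lower()

class LinkChecker(HTMLParser):
    """Collect (visible text, real destination) pairs that don't point where they claim to."""
    def __init__(self):
        super().__init__()
        self._href = None
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, text):
        if self._href and "." in text:                       # link text that looks like a URL or domain
            shown = host_of(text)
            real = (urlparse(self._href).hostname or "").lower()
            if shown and real and not real.endswith(shown):
                self.suspicious.append((text.strip(), self._href))

    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None

checker = LinkChecker()
checker.feed('<a href="http://fedex-tracking.example.ru/claim">www.fedex.com/track</a>')
print(checker.suspicious)   # [('www.fedex.com/track', 'http://fedex-tracking.example.ru/claim')]
```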
Lesson 3. Stay alert while communicating with a hosting provider

Web administrators of companies that have chosen a hosted ecommerce platform for their e-shop will need to contact the technical support of their hosting provider now and then. Here, a cybersecurity threat comes unexpected. If hackers have compromised the security of the web hosting company, they can target its clients (ecommerce websites) as well. Admins are in serious danger if the hosting company stores their credentials unencrypted; in this case, hackers can get direct access to the admin panel of a web store. Otherwise, more sophisticated attacks are developed: cybercriminals can mislead web administrators by posing as tech support agents. When communicating with their hosting provider, web administrators should mind several rules to protect their confidential data and the web store from hacking.

Use a unique email and password to log in to your web hosting account. The use of similar credentials for different work services or systems leads to a company security breach in case the hosting company has been hacked.

Never reveal any credentials at the request of tech support agents. Once web administrators have shared their password to the admin panel, it can no longer serve as proof of their identity.

Track your company's communication with tech support. Web administrators can set email notifications to track requests from team members to tech support and control what information is shared.

Time for an exam

As a rule, ecommerce software vendors and retailers do their best for the security of ecommerce businesses. Thus, software vendors take the major role in providing for the security of SaaS ecommerce solutions (like Shopify or Salesforce Commerce Cloud), including the security of servers, databases, and the application itself. In IaaS solutions (like Magento), retailers need to put more effort into maintaining the security of the environment and system, staying current on security updates, conducting regular audits, and more (you can see the full list of Magento security measures as an example). Still, cybercriminals often target company employees to hack an online store. Retailers are responsible for educating their team about which security rules are compulsory to follow and how to identify malicious intent.

In this article, we have outlined the fundamental security lessons for web administrators to learn in order to protect a web store against illicit access. In short, they should be careful with the personal information they publish online (in their social media profiles) and use unique credentials for different services and systems. There are no grades in our lessons – rather, an admin's contribution to the security of their company can become the evaluation of the knowledge they have gained.

About the Author

Tanya Yablonskaya is Ecommerce Industry Analyst at ScienceSoft, an IT consulting and software development company headquartered in McKinney, Texas. After 2+ years of exploring the cryptocurrency and blockchain sphere, she has shifted her focus of interest to the ecommerce industry. Delving into this enormous world, Tanya covers key challenges online retailers face and unveils a wealth of tools they can use to outpace competitors.

The US launched a cyber attack on Iran to disable its rocket launch systems; Iran calls it unsuccessful
All Docker versions are now vulnerable to a symlink race attack
12,000+ unsecured MongoDB databases deleted by Unistellar attackers

8 Myths about RPA (Robotic Process Automation)

Savia Lobo
08 Nov 2017
9 min read
Many say we are on the cusp of the fourth industrial revolution, one that promises to blur the lines between the real, virtual, and biological worlds. Among many trends, Robotic Process Automation (RPA) is one of the buzzwords surrounding the hype of the fourth industrial revolution. Although poised to be a $6.7 trillion industry by 2025, RPA is shrouded in just as much fear as it is brimming with potential. We have heard time and again how automation can improve productivity, efficiency, and effectiveness while conducting business in transformative ways. We have also heard how automation, and machine-driven automation in particular, can displace humans and thereby lead to a dystopian world. As humans, we make assumptions based on what we see and understand. But sometimes those assumptions become so ingrained that they evolve into myths, which many start accepting as facts. Here is a closer look at some of the myths surrounding RPA.

1. RPA means robots will automate processes

The term robot evokes in our minds a picture of a metal humanoid with stiff joints that speaks in a monotone. RPA does mean robotic process automation, but the robot doing the automation is nothing like the ones we are used to seeing in the movies. These are software robots that perform routine processes within organizations. They are often referred to as virtual workers, or a digital workforce, complete with their own identity and credentials. They essentially consist of algorithms programmed by RPA developers with the aim of automating mundane business processes. These processes are repetitive, highly structured, fall within a well-defined workflow, consist of a finite set of tasks or steps, and may often be monotonous and labor intensive.

Let us consider a real-world example: automating the invoice generation process. The RPA system will run through all the emails in the system and download the PDF files containing details of the relevant transactions. Then, it will fill a spreadsheet with the details and maintain all the records therein. Later, it will log on to the enterprise system and generate appropriate invoice reports for each entry in the spreadsheet. Once the invoices are created, the system will send a confirmation mail to the relevant stakeholders. Here, the RPA user only specifies the individual tasks that are to be automated, and the system takes care of the rest of the process.

So, yes, while it is true that RPA involves robots automating processes, it is a myth that these robots are physical entities or that they can automate all processes.

2. RPA is useful only in industries that rely heavily on software

"Almost anything that a human can do on a PC, the robot can take over without the need for IT department support." – Richard Bell, former Procurement Director at Averda

RPA is software that can be injected into a business process. Traditional industries such as banking and finance, healthcare, and manufacturing, which have significant tasks that are routine and depend on software for some of their functioning, can benefit from RPA. Loan processing and patient data processing are some examples. RPA, however, cannot help with automating the assembly line in a manufacturing unit or with performing regular tests on patients. Even in industries that maintain daily essential utilities, such as cooking gas, electricity, and telephone services, RPA can be put to use for generating automated bills, invoices, meter readings, and so on.
By adopting RPA, businesses, irrespective of the industry they belong to, can achieve significant cost savings, operational efficiency, and higher productivity. To leverage the benefits of RPA, rather than understanding the SDLC process, it is important that users have a clear understanding of business workflow processes and domain knowledge. Industry professionals can easily be trained on how to put RPA into practice.

The bottom line: RPA is not limited to industries that rely heavily on software to exist. But it is true that RPA can be used only in situations where some form of software is used to perform tasks manually.

3. RPA will replace humans in most frontline jobs

Many organizations employ a large workforce in frontline roles to do routine tasks such as data entry operations, managing processes, customer support, IT support, and so on. But frontline jobs are just as diverse as the people performing them. Take sales reps, for example. They bring in new business through their expert understanding of the company's products and their potential customer base, coupled with the associated soft skills. Currently, they spend significant time on administrative tasks such as developing and finalizing business contracts, updating the CRM database, and making daily status reports. Imagine the spike in productivity if these aspects could be taken off the plates of sales reps and they could just focus on cultivating relationships and converting leads.

By replacing human effort in mundane tasks within frontline roles, RPA can help employees focus on higher value-yielding tasks. In conclusion, RPA will not replace humans in most frontline jobs. It will, however, replace humans in a few roles that are very rule-based and narrow in scope, such as simple data entry operators or basic invoice processing executives. In most frontline roles, like sales or customer support, RPA is quite likely to change, at least in some ways, how one sees their job responsibilities. Also, the adoption of RPA will generate new job opportunities around the development, maintenance, and sale of RPA-based software.

4. Only large enterprises can afford to deploy RPA

The cost of implementing and maintaining the RPA software and training employees to use it can be quite high. This can make it an unfavorable business proposition for SMBs with fairly simple organizational processes and cross-departmental considerations. On the other hand, large organizations with higher revenue generation capacity, complex business processes, and a large army of workers can deploy an RPA system to automate high-volume tasks quite easily and recover that cost within a few months.

It is obvious that large enterprises will benefit from RPA systems due to the economies of scale offered and the faster recovery of investments made. SMBs (small to medium-sized businesses) can also benefit from RPA to automate their business processes. But this is possible only if they look at RPA as a strategic investment whose cost will be recovered over a longer time period of, say, 2-4 years.

5. RPA adoption should be owned and driven by the organization's IT department

The RPA team handling the automation process need not be from the IT department. The main role of the IT department is providing the necessary resources for the software to function smoothly. An RPA reliability team trained in using RPA tools does not consist of IT professionals but rather of business operations professionals.
In simple terms, RPA is not owned by the IT department but by the whole business, and it is driven by the RPA team.

6. RPA is an AI virtual assistant specialized to do a narrow set of tasks

An RPA bot performs a narrow set of tasks based on the given data and instructions. It is a system of rule-based algorithms that can be used to capture, process, and interpret streams of data, trigger appropriate responses, and communicate with other processes. However, it cannot learn on its own – a key trait of an AI system. Advanced AI concepts such as reinforcement learning and deep learning are yet to be incorporated into robotic process automation systems. Thus, an RPA bot is not an AI virtual assistant like Apple's Siri, for example. That said, it is not impractical to think that in the future these systems will be able to think on their own, decide the best possible way to execute a business process, and learn from their own actions to improve the system.

7. To use the RPA software, one needs to have basic programming skills

Surprisingly, this is not true. Associates who use the RPA system need not have any programming knowledge. They only need to understand how the software works on the front end and how they can assign tasks to the RPA worker for automation. On the other hand, RPA system developers do require some programming skills, such as knowledge of scripting languages. Today, there are various platforms for developing RPA tools, such as UiPath, Blue Prism, and more, which empower RPA developers to build these systems without any hassle, reducing their coding responsibilities even more.

8. RPA software is fully automated and does not require human supervision

This is a big myth. RPA is often misunderstood as a completely automated system. Humans are indeed required to program the RPA bots, to feed them tasks for automation, and to manage them. The automation factor here lies in aggregating and performing various tasks that would otherwise require more than one human to complete. There is also the efficiency factor that comes into play – RPA systems are fast and almost completely avoid the faults in a system or process that are otherwise caused by human error. Having a digital workforce in place is far more profitable than recruiting a human workforce.

Conclusion

One of the most talked-about areas of technological innovation, RPA is clearly still in its early days and is surrounded by a lot of myths. However, there is little doubt that its adoption will take off rapidly as RPA systems become more scalable, more accurate, and faster to deploy. AI-, cognitive-, and analytics-driven RPA will take it up a notch or two, and help businesses improve their processes even more by taking away dull, repetitive tasks from people. Hype can get ahead of reality, as we've seen quite a few times, but RPA is an area definitely worth keeping an eye on despite all the hype.


5 UX design tips for building a great e-commerce mobile app

Guest Contributor
11 Jan 2019
6 min read
The vastness and innovativeness of today's e-commerce websites convert visitors into potential buyers and bring them back as returning visitors. Every e-commerce website is designed with high-quality UX (user experience) in mind, and the absence of it can hurt revenue and sales at various stages. A UX that is not simple and innovative may also have an adverse effect on the website's rankings, as all search engines emphasize portals that are easy to navigate. This makes e-commerce mobile app development services necessary for website owners.

Almost 68% of users in the US use smartphone devices, and the figure goes up to 88% in the UK. So, from the advertising standpoint, websites need to be mobile-friendly at any cost, as 62% of users will never recommend a business that has a poorly designed mobile website. If we look at the global population that buys online, the share comes to around 22.9%, and by 2021 the number will climb to 3.15 billion. Hence, keeping an up-to-date UX design for mobile apps is an absolute must.

Some UX tips to keep in mind

These are some UX design tips that an e-commerce website must utilize to make the buying experience worthwhile for consumers.

Horizontal filtering

Most websites employ interfaces that carry left-hand vertical sidebar filtering. However, horizontal filtering has now gained prominence. Its benefits include:

Horizontal filtering is tablet and smartphone friendly, so filters can be viewed while scrolling and the full width of a page can be utilized.
Utilizing sliders and checkboxes, along with other controls, is easier, as horizontal filtering is flexible.
Page width utilization: with this UX design, bigger visuals with better and more useful information can be put on a page.

No overloading of websites with CTAs and product information

Making product descriptions crisp and understandable is necessary, as no user or customer will want to go through massive blocks of text.

Clear CTAs – Having a clear and compelling call to action is important.
Knowing the audience well – The aspects that attract your customers, or the ones causing them inconvenience, must be well addressed.
Easy-to-understand descriptions – Top e-commerce app developers will always provide clear descriptions containing a crisp heading, bullet points, and subheadings.

Fabricating a user-centric search

The search experience that you offer your customers can make or break your online sales. Your e-commerce business must implement these points to build a client-centric search:

Image recognition – Pictures or images are extremely important when it comes to e-commerce. Users will always want to see the items before buying. Image recognition should be implemented on every website so that users can rely on the item they are purchasing.
Voice search – As per mobile app design guidelines, voice recognition should also be implemented, as it helps online retailers improve the user experience. This is about ease, convenience, and speed. It makes consumers more comfortable, which results in elevated customer engagement.

Making the correct choice amid scrolling, load more buttons, and pagination

When you engage in e-commerce shopping app development services, you must choose how the products load on your website:

Load more buttons – Websites that offer a load more button are more favorable, as users can discover more products, which gives them a feeling of control.
Pagination – This is a process by which a small amount of information is offered to the customer so that they can focus on specific parts of a page. With pagination, users can get an overview of the entire set of results, which tells them how long the search is going to take.
Endless scrolling – This technique can lead to seamless experiences for users. With endless scrolling enabled, content loads continuously as a customer scrolls down the page. This works best for websites that contain an even content structure. It is not suitable for websites that need to fulfill goal-oriented tasks.

Simple checkouts and signups required

Among the various mobile app design elements, this point holds considerable importance. Making the checkout and signup procedures simple is paramount, as website users are quick to run out of patience.

Single-column structure – A user's eye will move naturally from top to bottom along a single line.
Making onboarding simple – Asking too many questions at the very beginning is not recommended. Registration simply calls for the user's email and name. Later, personal information such as address and phone number can be asked for.

Colors

Color is an often overlooked part of UI design. It is, in fact, hugely important, and it is beginning to matter more and more to designers. When you think about it, it seems obvious. Color is not only important from a branding perspective, allowing you to create a unified and consistent user experience within a single product (or, indeed, a set of products). Color can also have consequences for hardware: a white interface, for example, will use less battery power, as the device won't require the resources that would otherwise be needed to load color.

Accessibility and color in design

Another important dimension to this is accessibility. With around 4.5% of the population estimated to suffer from color blindness (most of them male), it is worth considering how the decisions you make about colors could impact these users.

In a nutshell

While implementing the entire e-commerce application development process, you have to make sure, as a merchant, that you are delivering experiences and products that fulfill real requirements. A lot of emphasis needs to be put on visual appeal, as it is a vital element in fulfilling customer needs and also establishes credibility. A customer's first impression must be good, no matter what. When talking about mobile user experience, credibility must be taken into account; it is ensured when you capture the attention of your users by informing them about exactly what you have to offer. The about page must be clear, containing information such as physical address, email address, and phone number. Following these UX tips is adequate for your e-commerce business to take flight and do wonders. Get hold of a professional team today and get started.

Author Bio

Manan Ghadawala is the founder of 21Twelve Interactive, which is a mobile app development company in India and the USA. You may follow him on @twitter.

Why Motion and Interaction matter in a UX design? [Video]
What UX designers can teach Machine Learning Engineers? To start with: Model Interpretability
Align your product experience strategy with business needs


What is Android Studio and how does it differ from other IDEs?

Natasha Mathur
30 May 2018
5 min read
Android Studio is a powerful and sophisticated development environment, designed with the specific purpose of developing, testing, and packaging Android applications. It can be downloaded, along with the Android SDK, as a single package. It is a collection of tools and components, many of which are installed and updated independently of each other.

Android Studio is not the only way to develop Android apps; there are other IDEs, such as Eclipse and NetBeans, and it is even possible to develop a complete app using nothing more than Notepad and the command line. This article is an excerpt from the book Mastering Android Studio 3, written by Kyle Mew.

Built for a purpose, Android Studio has attracted a growing number of third-party plugins that provide a large array of valuable functions not available directly via the IDE. These include plugins to speed up build times, debug a project over Wi-Fi, and many more. Despite Android Studio being arguably a superior tool, there are some very good reasons for having stuck with another IDE, such as Eclipse. Many developers develop for multiple platforms, which makes Eclipse a good choice of tool. Every developer has deadlines to meet, and getting to grips with unfamiliar software can slow them down considerably at first. But Android Studio is the official IDE for Android development, and every Android app developer should be aware of the differences between the two so that they can weigh the similarities and the differences and see what works for them.

How Android Studio differs

There are many ways in which Android Studio differs from other IDEs and development tools. Some of these differences are quite subtle, such as the way support libraries are installed, while others, for instance the build process and the UI design, are profoundly different. Before taking a closer look at the IDE itself, it is a good idea to first understand what some of these important differences are. The major ones are listed here:

UI development: The most significant difference between Studio and other IDEs is its layout editor, which is far superior to any of its rivals, offering text, design, and blueprint views; constraint layout tools for every activity or fragment; easy-to-use theme and style editors; and a drag-and-drop design function. The layout editor also provides many tools unavailable elsewhere, such as a comprehensive preview function for viewing layouts on a multitude of devices and simple-to-use theme and translation editors.

Project structure: Although the underlying directory structure remains the same, the way Android Studio organizes each project differs considerably from its predecessors. Rather than using workspaces as in Eclipse, Studio employs modules that can more easily be worked on together without having to switch workspaces. This difference in structure may seem unusual at first, but any Eclipse user will soon see how much time it can save once it becomes familiar.

Code completion and refactoring: The way that Android Studio intelligently completes code as you type makes it a delight to use. It regularly anticipates what you are about to type, and often a whole line of code can be entered with no more than two or three keystrokes. Refactoring, too, is easier and more far-reaching than in alternative IDEs such as Eclipse and NetBeans. Almost anything can be renamed, from local variables to entire packages.
Emulation: Studio comes equipped with a flexible virtual device editor, allowing developers to create device emulators to model any number of real-world devices. These emulators are highly customizable, both in terms of form factor and hardware configuration, and virtual devices can be downloaded from many manufacturers. Users of other IDEs will be familiar with Android AVDs already, although they will certainly appreciate the preview features found in the Design tab.

Build tools: Android Studio employs the Gradle build system, which performs the same functions as the Apache Ant system that many Java developers will be familiar with. It does, however, offer a lot more flexibility and allows for customized builds, enabling developers to create APKs that can be uploaded to TestFlight, or to produce demo versions of an app, with ease. It is also the Gradle system that allows for the modular nature of projects. Rather than each library or third-party SDK being compiled as a JAR file, Studio builds each of these using Gradle.

These are the most far-reaching differences between Android Studio and other IDEs, but there are many other features which are unique. Studio provides the powerful JUnit test facility and allows for cloud platform support and even Wi-Fi debugging. It is also considerably faster than Eclipse, which, to be fair, has to cater for a wider range of development needs, as opposed to just one, and it can run on less powerful machines.

Android Studio also provides an amazing time-saving device in the form of Instant Run. This feature cleverly builds only the part of a project that has been edited, meaning that developers can test small changes to code without having to wait for a complete build to be performed for each test. This feature can bring waiting time down from minutes to almost zero.

To know more about Android Studio and how to build faster, smoother, and error-free Android applications, be sure to check out the book Mastering Android Studio 3.

The art of Android Development using Android Studio
Getting started with Android Things
Unit Testing apps with Android Studio

3 Natural Language Processing Models every ML Engineer should know

Amey Varangaonkar
18 Dec 2017
5 min read
[box type="note" align="" class="" width=""]This interesting excerpt is taken from the book Mastering Text Mining with R, co-authored by Ashish Kumar and Avinash Paul. The book gives an advanced view of different natural language processing techniques and their implementation in R. [/box] Our article given below aims to introduce to the concept of language models and their relevance to natural language processing. In terms of natural language processing, language models generate output strings that help to assess the likelihood of a bunch of strings to be a sentence in a specific language. If we discard the sequence of words in all sentences of a text corpus and basically treat it like a bag of words, then the efficiency of different language models can be estimated by how accurately a model restored the order of strings in sentences. Which sentence is more likely: I am learning text mining or I text mining learning am? Which word is more likely to follow “I am…”? Basically, a language model assigns the probability of a sentence being in a correct order. The probability is assigned over the sequence of terms by using conditional probability. Let us define a simple language modeling problem. Assume a bag of words contains words W1, W2,………………….,Wn. A language model can be defined to compute any of the following: Estimate the probability of a sentence S1: P (S1) = P (W1, W2, W3, W4, W5) Estimate the probability of the next word in a sentence or set of strings: P (W3|W2, W1) How to compute the probability? We will use chain law, by decomposing the sentence probability as a product of smaller string probabilities: P(W1W2W3W4) = P(W1)P(W2|W1)P(W3|W1W2)P(W4|W1W2W3) N-gram models N-grams are important for a wide range of applications. They can be used to build simple language models. Let's consider a text T with W tokens. Let SW be a sliding window. If the sliding window consists of one cell (wi wi wi wi) then the collection of strings is called a unigram; if the sliding window consists of two cells, the output Is (w1 , w2)(w3 , w4)(w5 w5 , w5)(w1 , w2)(w3 , w4)(w5 , w5), this is called a bigram .Using conditional probability, we can define the probability of a word having seen the previous word; this is known as bigram probability. So the conditional probability of an element, given the previous element (wi -1), is: Extending the sliding window, we can generalize that n-gram probability as the conditional probability of an element given previous n-1 element: The most common bigrams in any corpus are not very interesting, such as on the, can be, in it, it is. In order to get more meaningful bigrams, we can run the corpus through a part-of-speech (POS) tagger. This would filter the bigrams to more content-related pairs such as infrastructure development, agricultural subsidies, banking rates; this can be one way of filtering less meaningful bigrams. A better way to approach this problem is to take into account collocations; a collocation is the string created when two or more words co-occur in a language more frequently. One way to do this over a corpus is pointwise mutual information (PMI). The concept behind PMI is for two words, A and B, we would like to know how much one word tells us about the other. For example, given an occurrence of A, a, and an occurrence of B, b, how much does their joint probability differ from the expected value of assuming that they are independent. 
The simple language models themselves can be written as follows:

Unigram model: Punigram(W1 W2 W3 W4) = P(W1) P(W2) P(W3) P(W4)
Bigram model: Pbigram(W1 W2 W3 W4) = P(W1) P(W2 | W1) P(W3 | W2) P(W4 | W3)

In general, P(w1 w2 … wn) is the product of the conditional probabilities P(wi | w1 w2 … wi-1). Applying the chain rule over n contexts can be difficult to estimate; the Markov assumption is applied to handle such situations.

Markov Assumption

If we assume that the current string is independent of some word strings further in the past, we can drop those strings to simplify the probability. Say the history consists of three words, Wi, Wi-1, Wi-2; instead of estimating the probability of Wi+1 using P(Wi, Wi-1, Wi-2), we can directly apply P(Wi+1 | Wi, Wi-1).

Hidden Markov Models

Markov chains are used to study systems that are subject to random influences. Markov chains model systems that move from one state to another in steps governed by probabilities. The same set of outcomes in a sequence of trials is called the states. Knowing the probabilities of the states is called the state distribution, and the state distribution in which the system starts is the initial state distribution. The probability of going from one state to another is called a transition probability. A Markov chain consists of a collection of states along with transition probabilities. The study of Markov chains is useful for understanding the long-term behavior of a system. Each arc is associated with a certain probability value, and the arcs coming out of each node must exhibit a probability distribution. In simple terms, there is a probability associated with every transition between states.

Hidden Markov models are non-deterministic Markov chains. They are an extension of Markov models in which the output symbol is not the same as the state.

Language models are widely used in machine translation, spelling correction, speech recognition, text summarization, questionnaires, and many more real-world use cases. If you would like to learn more about how to implement the above language models, check out the book Mastering Text Mining with R. This book will help you leverage language models using popular packages in R for effective text mining.


Everything you need to know about Ethereum

Packt Editorial Staff
10 Apr 2018
8 min read
Ethereum was first conceived of by Vitalik Buterin in November 2013. The critical idea proposed was the development of a Turing-complete language that allows the development of arbitrary programs (smart contracts) for blockchain and decentralized applications. This concept is in contrast to Bitcoin, where the scripting language is limited in nature and allows only the necessary operations. This is an excerpt from the second edition of Mastering Blockchain by Imran Bashir.

The following are all the releases of Ethereum, from the first release to the planned final release:

Olympic – May 2015
Frontier – July 30, 2015
Homestead – March 14, 2016
Byzantium (first phase of Metropolis) – October 16, 2017
Metropolis – To be released
Serenity (final version of Ethereum) – To be released

The first version of Ethereum, called Olympic, was released in May 2015. Two months later, a second version was released, called Frontier. After about a year, another version named Homestead, with various improvements, was released in March 2016. The latest Ethereum release is called Byzantium. This is the first part of the development phase called Metropolis. This release implemented a planned hard fork at block number 4,370,000 on October 16, 2017. The second part of this release, called Constantinople, is expected in 2018, but there is no exact time frame available yet. The final planned release of Ethereum is called Serenity. Serenity is planned to introduce the final version of a PoS-based blockchain instead of PoW.

The yellow paper

The Yellow Paper, written by Dr. Gavin Wood, serves as a formal definition of the Ethereum protocol. Anyone can implement an Ethereum client by following the protocol specifications defined in the paper. While this paper is a challenging read, especially for those who do not have a background in algebra or mathematics, it contains a complete formal specification of Ethereum. This specification can be used to implement a fully compliant Ethereum client. The list of symbols used in the paper, with their meanings, is provided here with the anticipation that it will make reading the yellow paper more accessible. Once the symbol meanings are known, it is much easier to understand how Ethereum works in practice.

≡ – Is defined as
= – Is equal to
≠ – Is not equal to
║...║ – Length of
∈ – Is an element of
∉ – Is not an element of
∀ – For all
∃ – There exists
∪ – Union
∧ – Logical AND
∨ – Logical OR
: – Such that
{} – Set
() – Function of tuple
[] – Array indexing
> – Is greater than
≤ – Less than or equal to
+ – Addition
- – Subtraction
∑ – Summation
⊕ – Exclusive OR
⌊ ⌋ – Floor, lowest element
⌈ ⌉ – Ceiling, highest element
∅ – Empty set, null
(a, b) – Real numbers >= a and < b
. – Sequence concatenation
{ – Describing various cases of if, otherwise
σ (Sigma) – World state
μ (Mu) – Machine state
Υ (Upsilon) – Ethereum state transition function
Π – Block-level state transition function
Λ – Contract creation function

The paper also defines symbols for increment and for the number of bytes.

Ethereum blockchain

Ethereum, like any other blockchain, can be visualized as a transaction-based state machine. This definition is referred to in the Yellow Paper. The core idea is that in the Ethereum blockchain, a genesis state is transformed into a final state by executing transactions incrementally. The final transformation is then accepted as the absolute, undisputed version of the state.
Consider the Ethereum state transition function, where a transaction execution results in a state transition. For example, a transfer of two Ether from address 4718bf7a to address 741f7a2 is initiated. The initial state represents the state before the transaction execution, and the final state is what the morphed state looks like. Mining plays a central role in state transition, and we will elaborate on the mining process in detail in later sections. The state is stored on the Ethereum network as the world state. This is the global state of the Ethereum blockchain.

How Ethereum works from a user's perspective

For all the conversation around cryptocurrencies, it is very rare for anyone to actually explain how they work from the perspective of a user. Let's take a look at how it works in practice. In this example, one user (Bashir) transfers money to another (Irshad). You may also want to read our post on whether Ethereum will eclipse Bitcoin. For the purposes of this example, we're using the Jaxx wallet; however, you can use any cryptocurrency wallet for this.

1. First, either a user requests money by sending the request to the sender, or the sender decides to send money to the receiver. The request can be sent by sharing the receiver's Ethereum address with the sender. For example, there are two users, Bashir and Irshad. If Irshad requests money from Bashir, she can send the request to Bashir as a QR code encoding her address, shared via email, text, or any other communication method.
2. Once Bashir receives this request, he will either scan the QR code or copy the Ethereum address into his Ethereum wallet software and initiate a transaction. In the Jaxx Ethereum wallet on iOS, for example, the sender enters both the amount and the destination address, and the final step before sending the Ether is to confirm the transaction.
3. Once the request (transaction) to send money is constructed in the wallet software, it is broadcast to the Ethereum network. The transaction is digitally signed by the sender as proof that he is the owner of the Ether.
4. This transaction is then picked up by nodes called miners on the Ethereum network for verification and inclusion in a block. At this stage, the transaction is still unconfirmed.
5. Once it is verified and included in a block, the PoW process begins. Once a miner finds the answer to the PoW problem, by repeatedly hashing the block with a new nonce, the block is immediately broadcast to the rest of the nodes, which then verify the block and the PoW. If all the checks pass, the block is added to the blockchain, and the miners are paid rewards accordingly.
6. Finally, Irshad receives the Ether, and it shows up in her wallet software.

On the blockchain, this transaction is identified by the following transaction hash: 0xc63dce6747e1640abd63ee63027c3352aed8cdb92b6a02ae25225666e171009e. Details regarding this transaction can be viewed in a block explorer. This walkthrough should give you some idea of how it works.
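To make the idea of a transaction-based state machine concrete, here is a toy Python sketch (my own illustration, not Ethereum client code) in which the world state is just a dictionary of account balances, and a transfer moves a genesis state to a final state; it ignores gas, signatures, nonces, and mining entirely:

```python
def apply_transaction(state, tx):
    """Return the new world state after applying one transfer transaction."""
    sender, receiver, value = tx["from"], tx["to"], tx["value"]
    if state.get(sender, 0) < value:
        raise ValueError("insufficient balance")          # invalid transactions are rejected
    new_state = dict(state)                               # the previous state is left untouched
    new_state[sender] -= value
    new_state[receiver] = new_state.get(receiver, 0) + value
    return new_state

genesis = {"4718bf7a": 10.0, "741f7a2": 1.0}              # balances in Ether (made-up starting values)
tx = {"from": "4718bf7a", "to": "741f7a2", "value": 2.0}  # the two-Ether transfer described above

final = apply_transaction(genesis, tx)
print(final)   # {'4718bf7a': 8.0, '741f7a2': 3.0}
```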
Different Ethereum networks

The Ethereum network is a peer-to-peer network where nodes participate in order to maintain the blockchain and contribute to the consensus mechanism. Networks can be divided into three types, based on requirements and usage. These types are described in the following subsections.

Mainnet

Mainnet is the current live network of Ethereum. The current version of mainnet is Byzantium (Metropolis), and its chain ID is 1. A chain ID is used to identify the network. A block explorer, which shows detailed information about blocks and other relevant metrics, is available online and can be used to explore the Ethereum blockchain.

Testnet

Testnet is the widely used test network for the Ethereum blockchain. This test blockchain is used to test smart contracts and DApps before they are deployed to the live production blockchain. Because it is a test network, it allows experimentation and research. The main testnet is called Ropsten, which contains all the features of the other smaller and special-purpose testnets that were created for specific releases. For example, other testnets include Kovan and Rinkeby, which were developed for testing Byzantium releases. The changes that were implemented on these smaller testnets have also been implemented on Ropsten, so the Ropsten test network now contains all the properties of Kovan and Rinkeby.

Private net

As the name suggests, this is a private network that can be created by generating a new genesis block. This is usually the case in private blockchain distributed ledger networks, where a private group of entities start their own blockchain and use it as a permissioned blockchain.

The following list shows the Ethereum networks with their network IDs. These network IDs are used by Ethereum clients to identify the network.

Ethereum mainnet – 1
Morden – 2
Ropsten – 3
Rinkeby – 4
Kovan – 42
Ethereum Classic mainnet – 61

You should now have a good foundation of knowledge to get started with Ethereum. To learn more about Ethereum and other cryptocurrencies, check out the new edition of Mastering Blockchain.

Other posts from this book:
A brief history of Blockchain
Write your first Blockchain: Learning Solidity Programming in 15 minutes
15 ways to make Blockchains scalable, secure and safe!
What is Bitcoin