
Tech Guides


Virtual machines vs Containers

Amit Kothari
17 Oct 2017
5 min read
Virtual machines and containers are pretty similar, but they do have some important differences, and those differences will dictate which one you decide to use. So when you ask a question like 'virtual machines vs containers', there isn't necessarily going to be an outright winner, but there might be a winner for you in a given scenario. Let's take a look at what exactly a virtual machine is, what a container is, and how they compare, as well as the key differences between the two.

What is a virtual machine?

Virtual machines are a product of hardware virtualization. They sit on top of physical machines with the hypervisor, or virtual machine manager, in between, acting as a layer of abstraction between the virtual machine and the underlying hardware. A virtualized physical machine can host multiple virtual machines, enabling better hardware utilization. Since the hypervisor abstracts the physical machine's hardware, virtual machines can use a different operating system on the same host machine. The host operating system and each virtual machine's operating system run their own kernel. All communication between the virtual machines and the host machine occurs through the hypervisor, resulting in a high level of isolation: if one virtual machine crashes, it will not affect the other virtual machines running on the same physical machine. Although the hypervisor's abstraction layer offers a high level of isolation, it also affects performance. This problem can be addressed by using a different virtualization technique.

What is a container?

Containers use lightweight, operating-system-level virtualization. As with virtual machines, multiple containers can run on the same host machine. However, containers do not have their own kernel; they share the host machine's kernel, which makes them much smaller in size than virtual machines. They use process-level isolation, so processes inside a container are isolated from other containers.

The difference between virtual machines and containers

In his post Containers are not VMs, Mike Coleman uses the analogy of houses and apartment buildings to compare virtual machines and containers. Self-contained houses have their own infrastructure, while apartments are built around shared infrastructure. Similarly, virtual machines have their own operating system, with its kernel, binaries, libraries, and so on, while containers share the host operating system kernel with other containers. Because of this, containers are much smaller in size, allowing a physical machine to host more containers than virtual machines. Since containers use lightweight operating-system-level virtualization instead of a hypervisor, they are less resource-intensive than virtual machines and offer better performance. Compared to virtual machines, containers are faster, quicker to provision, and easier to scale.

Since spinning up a new container is quick and easy, when a patch or an update is required it is easier to start a new container and stop the old one than to update a running container. This allows us to build immutable infrastructure, which is reliable, portable, and easy to scale. All of this makes containers a preferred choice for application deployment, especially for teams using microservices or similar architectures, where an application is composed of multiple small services instead of a monolith. In a microservice architecture, an application is built as a suite of independent, self-contained services. This allows teams to work independently of each other and deliver features quicker. However, decomposing an application into multiple parts adds operational complexity and overhead. Containers solve this problem: they can serve as the building block of the microservice world, where each service is packaged and deployed as a container. A container holds everything required to run a service: the service code, its dependencies, configuration files, libraries, and so on. Packaging a service and all its dependencies as a container makes it easy to distribute and deploy the service, and since the container includes everything required to run the service, it can be deployed reliably in different environments. A service packaged as a container will run the same way locally on a developer's machine, in a test environment, and in production.

However, there are things to consider when using containers. Containers share the kernel and other components of the host operating system. This makes them less isolated than virtual machines, and thus less secure. Since each virtual machine has its own kernel, we can run virtual machines with different operating systems on the same physical machine. Because containers share the host operating system kernel, however, a container can only run a guest operating system that is compatible with the host kernel.

Virtual machines vs containers - in conclusion...

Compared to virtual machines, containers are lightweight, performant, and easy to provision. While containers seem to be the obvious choice for building and deploying applications, virtual machines have their own advantages: compared to physical machines, they have better tooling and are easier to automate. Virtual machines and containers can co-exist, and organizations with existing infrastructure built around virtual machines can still reap the benefits of containers by deploying them on virtual machines.


Hackers are our society’s immune system - Keren Elazari on the future of Cybersecurity

Amrata Joshi
15 Dec 2018
9 min read
Keren Elazari, a world-renowned cybersecurity analyst, author, speaker, and senior researcher at the Tel Aviv University Interdisciplinary Cyber Research Center, spoke earlier this year at Six about the future of cybersecurity and a range of real-world attacks from recent years. She also dived into the consequences of, and the possible motivations behind, such attacks. Six hosts various press conferences and hackathons and handles around one billion security events on a daily basis; its cybersecurity events bring together international experts who answer questions and give insights on a range of topics. This article highlights a few insights from this year's Six on Cybersecurity talk by Keren Elazari on the future of cybersecurity from a hacker's perspective.

How hackers used Starbucks' free WiFi to use customer CPU resources for crypto mining

"What if I told you that in 10 seconds I could take over your computer, generate thousands of dollars worth of cryptocurrencies all while you are drinking your morning coffee? You might think it's impossible, but this is exactly what happened in Argentina earlier this year." - Keren Elazari

Earlier this year, Starbucks customers in Argentina experienced a slight delay of 10 seconds after logging into the website for free Wi-Fi. So what exactly happened? A security researcher discovered that, for those ten seconds, the computer was running Coinhive, a type of distributed cryptocurrency mining software. It was running on all the machines in Argentinian Starbucks stores that logged in for free Wi-Fi, and the software generated a significant amount of the cryptocurrency Monero. The hacker didn't even have to write the JavaScript for this attack; the code could simply be bought from Coinhive, whose business model allows anyone to monetize a user's CPU. Cyber criminals can earn a lot of money from technologies like Coinhive, and some news sites in the US are even looking at such coin-mining solutions as an alternative to charging for the news. This is an example of how creative technologies made by cybercriminals can generate completely new business models.

IoT brings a whole new set of vulnerabilities to your ecosystem

"According to the Munich security conference report, they are expecting this year double the amount of devices than there are humans on this planet. This is not going to change. We definitely need an immune system for the new digital universe because it is expanding without a stop."

Devices like cameras, CCTV systems, and webcams can be used by potential hackers to spy on users. And even if measures such as covering the lens with tape are taken, webcams can still be hacked, not with the intention of stealing pictures, but as a stepping stone for hacking other devices.

How the Mirai DDoS attack used webcams to bring down the likes of Airbnb and Amazon

This is what happened two years ago, when the massive Mirai DDoS attack took place. Over the course of a weekend it took down websites all over the world. Websites like Amazon, Airbnb, and large news sites went down, causing these companies losses. The attack was supercharged by the numerous devices in people's homes. These devices were usable for the DDoS attack because they relied on basic internet protocols, such as DNS, which can be easily subverted. Even worse, many of the devices used default username and password combinations. It's important to change the passwords on newly purchased devices.

With Shodan, a search engine for internet-connected devices, one can check which devices in their organization or home are exposed to the internet. This is helpful, as it improves organizations' protection against getting hacked.

How hackers used a smart fish tank to steal data from a casino and an AI caught it

"Hackers have found very creative, very fast automatic ways to identify devices that they can use and they will utilize any resource online. It would just become a part of their digital army. Speaking of which even an aquarium, a fish tank was hacked recently."

Recently, a smart fish tank in a US casino was hacked. It had smart sensors that would check the temperature and salinity of the water and the feeding schedule of the fish. While hacking a fish tank does not appear to offer any monetary incentive to a hacker, its connection to the internet makes it a valuable access point. The hackers, who already had access to the casino network, used the outgoing internet connection of the aquarium to send out 10 gigabytes of data from the casino. As the data was going out over this connection, there was no firewall in the way, and nobody noticed. The suspicious activity was eventually flagged by a self-learning algorithm, which realized that there was something fishy about an outgoing connection that had no relation to the fish tank's setup.

How WannaCry used Ransomware attacks to target organizations

"I don't think we should shame organizations for having to deal with ransomware because it is a little bit like a flu in a sense that these attacks are designed to propagate and infect as many computers as they can." - Keren Elazari

In May 2017, the WannaCry ransomware attack, carried out by the WannaCry ransomware cryptoworm, affected computers running the Microsoft Windows operating system by encrypting data, with the criminals demanding ransom payments in the Bitcoin cryptocurrency. The attack affected the UK National Health Service the most: according to the NHS, 30% of national health services were not functioning, and 80 out of the 236 trusts in England were affected. As per the UK government, North Korea was behind this attack, being in need of money while under sanctions. The Lazarus Group, a cybercrime group from North Korea, attacked the SWIFT infrastructure and also attacked the central bank of Bangladesh last year.

NotPetya - The Wiper attack

"Whoever was hacking the tax company in the Ukraine wanted to create an effective virus that would destroy the evidence of everything they have been doing for two years in a bunch of Ukrainian companies. It might have been an accident that it infected so many other companies in the world."

In June 2017, NotPetya, a wiper attack, hit enterprise networks across Europe, with Ukrainian companies affected the most. The attack looked like a ransomware attack, since it demanded a payment, but it was actually a wiper attack: it wiped out data that had been stored over two years. Maersk, the world's largest container shipping company, was badly hit as collateral damage, facing losses of around $300 million. End-of-life operating systems were the most affected by this virus. The software vulnerability used in both of these attacks, the ransomware and the wiper, was an exploit code-named EternalBlue, a cyber weapon discovered and developed by the National Security Agency (NSA). The NSA lost track of EternalBlue, and criminals took advantage of this and attacked using this cyber weapon.

Earlier this year, a cyber attack was made on the German government's IT network, affecting the defence and interior ministries' private networks.

What might motivate nation state actors to back cyber-attacks?

"The story is never simple when it comes to cyber attackers. Sometimes the motivations of a nation or nation state actors can be hidden behind what seems like a financial or criminal activity."

One reason behind a nation or state backing a cyber-attack could be financial: they might be under sanctions and need money, for example for developing nuclear weapons. Another reason could be that the state is in a condition of chaos or confusion and is trying to create a dynamic from which it could benefit. Lastly, it could be an accident, where a cyber attack turns out to be more effective than the state ever imagined.

What can organizations do to safeguard themselves from such cyberattacks?

We make hundreds of security decisions every day: entering personal details like credit card numbers on a website, downloading software that could cause trouble to the system, and so on. Elazari's recommendations include: use a new password instead of a recycled one; educate employees in the organization about penetration testing; share details of past experiences with hacking, which helps in working against it; develop a cybersecurity culture in the organization; invite a red team to review the organization's systems; and encourage bug bounty programs for reporting bugs in the organization. Security professionals can also work in collaboration with programs like Mayhem, an automated system that helps find bugs in a system. It won a hacking challenge in 2016 but was beaten by humans the next year.

"Just imagine you are in a big ball room and you are looking at the hacking competition between completely automated supercomputers and this (Mayhem) ladies and gentlemen is the winner and I think is also the future."

Just two years ago, Mayhem, a machine, won a hacking competition organized by the United States Defense Advanced Research Projects Agency (DARPA) in Las Vegas, where seven machines (supercomputers) competed against each other. Mayhem is the first non-human to win a hacking competition. In 2017, Mayhem competed against humans, and the humans won; even so, it shows how smart these machines have become.

What does the Future of Cybersecurity look like?

"In the years to come, automation, machine learning, algorithms, AI will be an integral part, not just of every aspect of society, but [also an] integral part of cybersecurity. That's why I believe we need more such technologies and more humans that know how to work alongside and together with these automated creatures. If you, like me, think that friendly hackers, technology, and building an ecosystem will be a good way to create a safer society, I hope you take the red pill and wake up to this reality," concludes Elazari.

As 2018 comes to a close, plagued with security breaches across industries, Keren's insightful talk on cybersecurity is a must-watch for everyone entering 2019.

Read more:
Packt has put together a new cybersecurity bundle for Humble Bundle
5 lessons public wi-fi can teach us about cybersecurity
Blackberry is acquiring AI & cybersecurity startup, Cylance, to expand its next-gen endpoint solutions like its autonomous cars' software


Budget and Demand Forecasting using Markov model in SAS [Tutorial]

Sunith Shetty
10 Aug 2018
8 min read
Budget and demand forecasting are important aspects of any finance team's work. Budget forecasting is the outcome, and demand forecasting is one of its components. In this article, we look at the Markov model for forecasting and budgeting in finance. This article is an excerpt from a book written by Harish Gulati titled SAS for Finance.

Understanding the problem of budget and demand forecasting

While retail banks a few decades ago primarily made profits by leveraging their treasury office, recent years have seen fee income become a major source of profitability. Accepting deposits from customers and lending to other customers is one of the core functions of the treasury. However, charging for current or savings accounts with add-on facilities such as breakdown cover, mobile and other insurances, and so on, has become a lucrative avenue for banks. One retail bank has a plain vanilla classic bank account, a mid-tier premier account, and a top-of-the-range, benefits-included platinum account. The classic account is offered free, while the premier and platinum accounts have fees of $10 and $20 per month respectively. The marketing team has just relaunched the fee-based accounts with added benefits, and the finance team wants a projection of how much revenue could be generated via the premier and platinum accounts.

Solving with the Markovian model approach

Even though we have three types of account (the classic, the premier, and the platinum), it doesn't mean that we are only going to have the nine possible transition types shown in Figure 4.1. There are customers who will upgrade, but also others who may downgrade. There could also be some customers who leave the bank, and at the same time there will be a constant inflow of new customers. Let's evaluate the transition state flows for our business problem.

In Figure 4.2, we haven't jotted down the transition probability between each state. We can try to do this by looking at historical customer movements to arrive at the transition probabilities. Be aware that most business managers would prefer to use their instincts when assigning transition probabilities. There may be some merit in this approach, as the managers may be able to incorporate the various factors that have influenced customer movements between states. A promotion offering 40% off the platinum account (an effective rate of $12/month, down from $20/month) may have ensured that more customers in the promotion period opted for the platinum account than for the premier offering ($10/month). Let's examine the historical data of customer account preferences. The data is compiled for the years 2008-2018. It doesn't account for any new customers joining after January 1, 2008 and also ignores information on churned customers in the period of interest.
Figure 4.3 consists of customers who have been with the bank since 2008, showing active customer counts in millions:

Year     Classic (Cl)  Premium (Pr)  Platinum (Pl)  Total customers
2008 H1  30.68         5.73          1.51           37.92
2008 H2  30.65         5.74          1.53           37.92
2009 H1  30.83         5.43          1.66           37.92
2009 H2  30.9          5.3           1.72           37.92
2010 H1  31.1          4.7           2.12           37.92
2010 H2  31.05         4.73          2.14           37.92
2011 H1  31.01         4.81          2.1            37.92
2011 H2  30.7          5.01          2.21           37.92
2012 H1  30.3          5.3           2.32           37.92
2012 H2  29.3          6.4           2.22           37.92
2013 H1  29.3          6.5           2.12           37.92
2013 H2  28.8          7.3           1.82           37.92
2014 H1  28.8          8.1           1.02           37.92
2014 H2  28.7          8.3           0.92           37.92
2015 H1  28.6          8.34          0.98           37.92
2015 H2  28.4          8.37          1.15           37.92
2016 H1  27.6          9.01          1.31           37.92
2016 H2  26.5          9.5           1.92           37.92
2017 H1  26            9.8           2.12           37.92
2017 H2  25.3          10.3          2.32           37.92

Figure 4.3: Active customers since 2008

Since we are only considering active customers, and no new customers are joining or leaving the bank, we can calculate the number of customers moving from one state to another using the data in Figure 4.3. The following table shows customer movement counts to the next period, in millions:

Year     Cl-Cl  Cl-Pr  Cl-Pl  Pr-Pr  Pr-Cl  Pr-Pl  Pl-Pl  Pl-Cl  Pl-Pr  Total customers
2008 H1  -      -      -      -      -      -      -      -      -      -
2008 H2  30.28  0.2    0.2    5.5    0      0.23   1.1    0.37   0.04   37.92
2009 H1  30.3   0.1    0.25   5.1    0.53   0.11   1.3    0      0.23   37.92
2009 H2  30.5   0.32   0.01   4.8    0.2    0.43   1.28   0.2    0.18   37.92
2010 H1  30.7   0.2    0      4.3    0      1      1.12   0.4    0.2    37.92
2010 H2  30.7   0.2    0.2    4.11   0.35   0.24   1.7    0      0.42   37.92
2011 H1  30.9   0      0.15   4.6    0      0.13   1.82   0.11   0.21   37.92
2011 H2  30.2   0.8    0.01   3.8    0.1    0.91   1.29   0.4    0.41   37.92
2012 H1  30.29  0.4    0.01   4.9    0.01   0.1    2.21   0      0      37.92
2012 H2  29.3   0.9    0.1    5.3    0      0      2.12   0      0.2    37.92
2013 H1  29.2   0.1    0      6.1    0.1    0.2    1.92   0      0.3    37.92
2013 H2  28.6   0.3    0.4    6.5    0      0      1.42   0.2    0.5    37.92
2014 H1  28.7   0.1    0      7.2    0.1    0      1.02   0      0.8    37.92
2014 H2  28.7   0      0.1    8.1    0      0      0.82   0      0.2    37.92
2015 H1  28.6   0      0.1    8.3    0      0      0.88   0      0.04   37.92
2015 H2  28.3   0      0.3    8      0.1    0.24   0.61   0      0.37   37.92
2016 H1  27.6   0.8    0      8.21   0      0.16   1.15   0      0      37.92
2016 H2  26     1      0.6    8.21   0.5    0.3    1.02   0      0.29   37.92
2017 H1  25     0.5    1      8      0.5    1      0.12   0.5    1.3    37.92
2017 H2  25.3   0.1    0.6    9      0      0.8    0.92   0      1.2    37.92

Figure 4.4: Customer transition state counts

In Figure 4.4, we can see the customer movements between the various states. We don't have the movements for the first half of 2008, as this is the start of the series. In the second half of 2008, we see that 30.28 out of 30.68 million customers (30.68 being the figure from the first half of 2008) were still using a classic account. However, 0.4 million customers moved away to premium and platinum accounts. The total number of customers remains constant at 37.92 million, as we have ignored new customers joining and any customers who have left the bank.
From this table, we can calculate the transition probabilities for each state:

Year     Cl-Cl  Cl-Pr  Cl-Pl  Pr-Pr   Pr-Cl  Pr-Pl  Pl-Pl   Pl-Cl  Pl-Pr
2008 H2  98.7%  0.7%   0.7%   96.0%   0.0%   4.0%   72.8%   24.5%  2.6%
2009 H1  98.9%  0.3%   0.8%   88.9%   9.2%   1.9%   85.0%   0.0%   15.0%
2009 H2  98.9%  1.0%   0.0%   88.4%   3.7%   7.9%   77.1%   12.0%  10.8%
2010 H1  99.4%  0.6%   0.0%   81.1%   0.0%   18.9%  65.1%   23.3%  11.6%
2010 H2  98.7%  0.6%   0.6%   87.4%   7.4%   5.1%   80.2%   0.0%   19.8%
2011 H1  99.5%  0.0%   0.5%   97.3%   0.0%   2.7%   85.0%   5.1%   9.8%
2011 H2  97.4%  2.6%   0.0%   79.0%   2.1%   18.9%  61.4%   19.0%  19.5%
2012 H1  98.7%  1.3%   0.0%   97.8%   0.2%   2.0%   100.0%  0.0%   0.0%
2012 H2  96.7%  3.0%   0.3%   100.0%  0.0%   0.0%   91.4%   0.0%   8.6%
2013 H1  99.7%  0.3%   0.0%   95.3%   1.6%   3.1%   86.5%   0.0%   13.5%
2013 H2  97.6%  1.0%   1.4%   100.0%  0.0%   0.0%   67.0%   9.4%   23.6%
2014 H1  99.7%  0.3%   0.0%   98.6%   1.4%   0.0%   56.0%   0.0%   44.0%
2014 H2  99.7%  0.0%   0.3%   100.0%  0.0%   0.0%   80.4%   0.0%   19.6%
2015 H1  99.7%  0.0%   0.3%   100.0%  0.0%   0.0%   95.7%   0.0%   4.3%
2015 H2  99.0%  0.0%   1.0%   95.9%   1.2%   2.9%   62.2%   0.0%   37.8%
2016 H1  97.2%  2.8%   0.0%   98.1%   0.0%   1.9%   100.0%  0.0%   0.0%
2016 H2  94.2%  3.6%   2.2%   91.1%   5.5%   3.3%   77.9%   0.0%   22.1%
2017 H1  94.3%  1.9%   3.8%   84.2%   5.3%   10.5%  6.2%    26.0%  67.7%
2017 H2  97.3%  0.4%   2.3%   91.8%   0.0%   8.2%   43.4%   0.0%   56.6%

Figure 4.5: Transition state probability

In Figure 4.5, we have converted the transition counts into probabilities. If 30.28 million customers in 2008 H2 out of 30.68 million customers in 2008 H1 are retained as classic customers, we can say that the retention rate is 98.7%, or that the probability of customers staying with the same account type in this instance is 0.987. Using these details, we can compute the average transition between states across the time series. These averages can be used as the transition probabilities in the transition matrix for the model:

      Cl     Pr     Pl
Cl    98.2%  1.1%   0.8%
Pr    2.0%   93.2%  4.8%
Pl    6.3%   20.4%  73.3%

Figure 4.6: Transition probabilities aggregated

The probability of classic customers retaining the same account type between semiannual time periods is 98.2%. The lowest retention probability is for platinum customers, who are expected to transition to another account type 26.7% of the time. Let's use the transition matrix in Figure 4.6 to run our Markov model. Use this code for the data setup:

DATA Current;
input date CL PR PL;
datalines;
2017.2 25.3 10.3 2.32
;
Run;

Data Netflow;
input date CL PR PL;
datalines;
2018.1 0.21 0.1 0.05
2018.2 0.22 0.16 0.06
2019.1 0.24 0.18 0.08
2019.2 0.28 0.21 0.1
2020.1 0.31 0.23 0.14
;
Run;

Data TransitionMatrix;
input CL PR PL;
datalines;
0.98 0.01 0.01
0.02 0.93 0.05
0.06 0.21 0.73
;
Run;

In the Current dataset, we have chosen the last available data point, 2017 H2. This is the base position of customer counts across the classic, premium, and platinum accounts. While calculating the transition matrix, we didn't take into account new joiners or leavers; however, to enable forecasting we have taken 2017 H2 as our base position. The transition matrix seen in Figure 4.6 has been input as a separate dataset.
Markov model code:

PROC IML;
use Current;
read all into Current;
use Netflow;
read all into Netflow;
use TransitionMatrix;
read all into TransitionMatrix;
Current = Current[1,2:4];
Netflow = Netflow[,2:4];
Model_2018_1 = Current * TransitionMatrix + Netflow[1,];
Model_2018_2 = Model_2018_1 * TransitionMatrix + Netflow[1,];
Model_2019_1 = Model_2018_2 * TransitionMatrix + Netflow[1,];
Model_2019_2 = Model_2019_1 * TransitionMatrix + Netflow[1,];
Model_2020_1 = Model_2019_2 * TransitionMatrix + Netflow[1,];
Budgetinputs = Model_2018_1//Model_2018_2//Model_2019_1//Model_2019_2//Model_2020_1;
Create Budgetinputs from Budgetinputs;
append from Budgetinputs;
Quit;

Data Output;
Set Budgetinputs (rename=(Col1=Cl Col2=Pr Col3=Pl));
Run;

Proc print data=output;
Run;

Figure 4.7: Model output

The Markov model has been run, and we are able to generate forecasts for all account types for the requested five periods. We can immediately see that an increase is forecast for all account types, driven by the net inflow of customers. We have derived the forecasts by essentially using the following equation:

Forecast = Current Period * Transition Matrix + Net Flow

Once the 2018 H1 forecast is derived, we replace the current period with the 2018 H1 forecasted numbers when forecasting the 2018 H2 numbers. We do this because, based on the 2018 H1 customer counts, the transition probabilities determine how many customers move across states, which generates the forecasted customer count for the next period.

Understanding transition probability

Now that we have our forecasts, let's take a step back and revisit our business goals. The finance team wants to estimate the revenues from the revamped premium and platinum customer accounts for the next few forecasting periods. As we have seen, one of the important drivers of the forecasting process is the transition probability, which is driven by historical customer movements, as shown in Figure 4.4. What if the marketing team doesn't agree with the transition probabilities calculated in Figure 4.6? As we discussed, 26.7% of platinum customers aren't retained in this account type. Since we are not considering customer churn out of the bank, this means that a large proportion of platinum customers downgrade their accounts, which is one of the reasons the marketing team revamped the accounts in the first place. The marketing team feels it will be able to raise the retention rates for platinum customers and wants the finance team to run an alternate forecasting scenario. This is, in fact, one of the pros of the Markov model approach: by tweaking the transition probabilities, we can run various business scenarios. Let's compare the base and the alternate scenario forecasts generated in Figure 4.8. A change in the transition probabilities of how platinum customers move to the various states has brought about a significant change in the forecast for premium and platinum customer accounts. For classic customers, the change in the forecast between the base and the alternate scenario is negligible, as shown in the table in Figure 4.8. The finance team can decide which scenario is best suited for budget forecasting:

      Cl     Pr     Pl
Cl    98.2%  1.1%   0.8%
Pr    2.0%   93.2%  4.8%
Pl    5.0%   15.0%  80.0%

Figure 4.8: Model forecasts and updated transition probabilities

To summarize, we walked through the Markov model methodology and used Markov models for forecasting and imputation.
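If you want to sanity-check the arithmetic without SAS, here is a minimal sketch of the same forecast recursion in Python with NumPy. This is our addition rather than part of the book, whose implementation is the PROC IML code above. It uses the 2017 H2 base position and the Figure 4.6 transition matrix and, like the IML code, applies the first net-flow row in every period:

import numpy as np

# Base position: 2017 H2 customer counts in millions (Classic, Premium, Platinum)
current = np.array([25.3, 10.3, 2.32])

# Aggregated transition matrix from Figure 4.6 (rows: from state, columns: to state)
transition = np.array([
    [0.98, 0.01, 0.01],
    [0.02, 0.93, 0.05],
    [0.06, 0.21, 0.73],
])

# Net inflow of new customers; note the IML code reuses the first row for every period
netflow = np.array([0.21, 0.10, 0.05])

# Forecast = Current Period * Transition Matrix + Net Flow, iterated for five periods
for period in ("2018 H1", "2018 H2", "2019 H1", "2019 H2", "2020 H1"):
    current = current @ transition + netflow
    print(period, current.round(2))

Swapping in the Figure 4.8 matrix for the transition matrix reproduces the alternate scenario in exactly the same way.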
To know more about how to use the other two methodologies, ARIMA and MCMC, for generating forecasts for various business problems, you can check out the book SAS for Finance.

Read more:
How to perform regression analysis using SAS
Performing descriptive analysis with SAS
Akon is planning to create a cryptocurrency city in Senegal


How AI is changing game development

Raka Mahesa
16 Jan 2018
4 min read
Artificial intelligence is transforming many areas, but one of the most interesting and underrated is gaming. That's not surprising; after all, an AI last year beat the world's top Go player, and it's not a giant leap to see how artificial intelligence might be used in modern video gaming. However, things aren't straightforward, because applying artificial intelligence within a video game is different from applying it to the process of developing a video game. Before we talk further about this, let's first discuss artificial intelligence by itself.

What exactly is artificial intelligence?

Artificial intelligence is, to put it simply, the ability of machines to learn and apply knowledge. Learning and applying knowledge is a very broad scope, though. Being able to do simple math calculations is learning and applying arithmetic knowledge, but no one would call that capability artificial intelligence.

Artificial intelligence is a shifting field

Surprisingly, there is no definite scope for the kind of knowledge that artificial intelligence must have. It's a moving goalpost; the scope is updated whenever a new breakthrough in the field occurs. There is a saying in the artificial intelligence community that "AI is whatever hasn't been done yet," meaning that whenever an AI problem is solved and figured out, it no longer counts as artificial intelligence and is branched out into its own field. This has happened to search algorithms, pathfinding, optical character recognition, and so on. So, in short, we have a divide here. When experts say artificial intelligence, they usually mean the most cutting-edge parts of the field, like neural networks or advanced voice recognition. Meanwhile, problems that are seen as solved, like pathfinding, are usually no longer researched and are not counted as artificial intelligence anymore. This is also why all the advancement that has happened in artificial intelligence doesn't really impact video games. Yes, video games have many, many applications of artificial intelligence, but most of those applications are limited to problems that have already been solved, like pathfinding and decision making. Meanwhile, the latest artificial intelligence techniques, like machine learning, don't really have a place in video games themselves.

Things are a bit different for video game development, though. Even though video games themselves don't use the latest and greatest artificial intelligence, their creation is still an engineering process that can be aided by advancements in AI. Do note that most of the artificial intelligence used for game development is still being explored and is still in an experimental phase. There has been research done recently on using an intelligent agent to learn the level layout of a game and then using that trained agent to lay out another level. Having artificial intelligence handle the level-creation aspect of game development will definitely increase development speed, because level designers can then tweak an existing layout instead of starting from scratch.

Automation in game development

Another aspect of video game development that already uses a lot of automation, and can be aided further by artificial intelligence, is quality assurance (QA). An artificial intelligence that can learn how to play the game will greatly assist developers when they need to check for bugs and other issues. But of course, artificial intelligence can only detect things that can be measured, like bugs and crashes; it won't be able to see whether the game is fun or not.

Using AI to improve game design

Having an AI that can automatically play a video game isn't only good for testing purposes; it could also help improve game design. One mobile game studio uses artificial intelligence to mimic human behavior so it can determine the difficulty of the game. By checking how the artificial intelligence performs on a certain level, designers can project how real players would perform on that level and adjust the difficulty accordingly. And that's how AI is changing game development. Despite all the advancement, there is still plenty of work to be done before artificial intelligence can truly assist game developers in their work. That said, game development is one of those jobs that doesn't need to worry about being replaced by robots, because more than anything, game development is a creative endeavor that only humans can do. Well, so far at least.

About the Author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/) who is interested in digital technology in general. Outside of work, he enjoys working on his own projects, with Corridoom VR being his latest released game.


What is Kotlin?

Hari Vignesh
09 Oct 2017
5 min read
Kotlin is a statically typed programming language for the JVM, Android, and the browser. It is a new programming language from JetBrains, the maker of the world's best IDEs, and it's now the official language for Android app development.

Why Kotlin?

Before we begin highlighting the brilliant features of Kotlin, we need to understand how Kotlin originated and evolved. We already have many programming languages; how did Kotlin emerge to capture programmers' hearts? A 2013 study showed that language features matter little compared to ecosystem issues when developers evaluate programming languages.

Kotlin compiles to JVM bytecode or JavaScript. It is not a language you will write a kernel in. It is of greatest interest to people who work with Java today, although it could appeal to all programmers who use a garbage-collected runtime, including people who currently use Scala, Go, Python, Ruby, and JavaScript.

Kotlin comes from industry, not academia. It solves problems faced by working programmers and developers today. As an example, the type system helps you avoid null pointer exceptions. Research languages tend not to have null at all, but this is of no use to people working with the large codebases and APIs that do.

Kotlin costs nothing to adopt! It's open source, but that's not the point. The point is that there's a high-quality, one-click Java-to-Kotlin converter tool (available in Android Studio) and a strong focus on Java binary compatibility. You can convert an existing Java project one file at a time and everything will still compile, even for complex programs that run to millions of lines of code. Kotlin programs can use all existing Java frameworks and libraries, even advanced frameworks that rely on annotation processing. The interop is seamless and does not require wrappers or adapter layers, and it integrates with Maven, Gradle, and other build systems.

Kotlin is approachable and can be learned in a few hours by simply reading the language reference. The syntax is clean and intuitive. Kotlin looks a lot like Scala, but it's simpler. The language balances terseness and readability well, and it enforces no particular philosophy of programming, such as an overly functional or OOP style.

Kotlin Features

Let me summarize why it's the right time to jump from Java to Kotlin:

Concise: Drastically reduce the amount of boilerplate code you need to write.
Safe: Avoid entire classes of errors, such as null pointer exceptions.
Versatile: Build server-side applications, Android apps, or front-end code running in the browser.
Interoperable: Leverage existing frameworks and libraries of the JVM, with 100 percent Java interoperability.

Brief discussion

Let's discuss a few important features in detail.

Functional Programming Support

Functional programming is not easy, at least in the beginning. That is, until it becomes fun. Kotlin supports it with zero-overhead lambdas and the ability to do mapping, folding, and so on over standard Java collections. The Kotlin type system also distinguishes between mutable and immutable views over collections.

1. Function purity

The concept of a pure function (a function that does not have side effects) is the most important functional concept. It allows us to greatly reduce code complexity and get rid of most mutable state.

2. Higher-order functions

Higher-order functions either take functions as parameters, return functions, or both. Higher-order functions are everywhere; you just pass functions to collections to make code easy to read.

titles.map { it.toUpperCase() }

reads like plain English. Isn't it beautiful?

3. Immutability

Immutability makes it easier to write, use, and reason about code (a class invariant is established once and then never changed). The internal state of your app components will be more consistent. Kotlin enforces immutability with the val keyword, as well as with Kotlin collections, which are immutable by default. Once a val or a collection is initialized, you can be sure about its validity.

Null Safety

Kotlin's type system is aimed at eliminating the danger of null references from code, also known as 'The Billion Dollar Mistake.' One of the most common pitfalls in many programming languages, including Java, is accessing a member of a null reference, resulting in a null reference exception. In Java, this is the NullPointerException, or NPE for short. In Kotlin, the type system distinguishes between references that can hold null (nullable references) and those that cannot (non-null references). For example, a regular variable of type String can't hold null.

How to migrate effectively to Kotlin?

Migration is one of the last things any developer or organization wants. There are a lot of advantages when you migrate from Java to Kotlin, but the bottom line is that it will make the developer's job easier, which in turn reduces bugs, improves code quality, and so on. There are many routes to migrating effectively, but my advice would be to first convince management that you need to migrate (if you're a developer). Then start by writing the test cases, to get familiar with the language. Then, since Kotlin is fully interoperable with Java, you can start converting one file or module at a time.

About the Author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.


K-Means Clustering

Janu Verma
15 Jun 2015
6 min read
Clustering is one of the most important data mining and machine learning techniques. Clustering is a procedure for discovering groups of closely related elements in a dataset. Often we want to cluster data into categories, such as grouping similar users, modeling user behavior, identifying species of irises, categorizing news items, classifying textual documents, and more. One of the most common clustering methods is K-Means, a simple iterative method to partition the data into K clusters.

Algorithm

Before we apply K-Means to cluster data, we need to express the data as vectors. In most cases, the data is given as a matrix of type [nSamples, nAttributes], which can be thought of as nSamples vectors, each with a dimension of nAttributes. In certain cases some work has to be done to render the data into linear algebraic language, for example:

A corpus of textual documents: We compute the term frequency of each document in the corpus as a vector whose dimension equals the vocabulary of the corpus, where the coefficient of each dimension is the frequency in the document of the word corresponding to that dimension:

document1 = freq(word_1), freq(word_2), ....., freq(word_n)

There are other choices for creating vectors from text documents, such as TF-IDF vectors, binary vectors, and more.

If I'm trying to cluster my Twitter friends, I can represent each friend as a vector:

number of followers, number of friends, number of tweets, count of favorite tweets

After we have vectors representing the data points, we cluster these data vectors into K clusters using the following algorithm:

1. Initialize the procedure by randomly selecting K vectors as cluster centroids.
2. For each vector, compute its Euclidean distance to each of the centroids and assign the vector to its closest centroid.
3. When all of the objects have been assigned, recalculate the centroids as the mean (average) of all members of the cluster.
4. Repeat the previous two steps until convergence, when the clusters no longer change.

You can also choose other distance measures, such as cosine similarity, Pearson correlation, Manhattan distance, and so on.

Example

We'll do a cluster analysis of the wine dataset. This data contains 13 chemical measurements on 178 Italian wine samples, taken from the UCI Machine Learning Repository. I'll use R to do this analysis, but it can very easily be done in other programming languages such as Python. We'll use the R package rattle, which is a GUI for data mining in R; we use rattle only to access the data from the UCI Machine Learning Repository.

# install rattle
install.packages("rattle")
library(rattle)
# load data
data(wine)
# what does the data look like
head(wine)

  Type Alcohol Malic  Ash Alcalinity Magnesium Phenols Flavanoids Nonflavanoids Proanthocyanins Color  Hue Dilution Proline
1    1   14.23  1.71 2.43       15.6       127    2.80       3.06          0.28            2.29  5.64 1.04     3.92    1065
2    1   13.20  1.78 2.14       11.2       100    2.65       2.76          0.26            1.28  4.38 1.05     3.40    1050
3    1   13.16  2.36 2.67       18.6       101    2.80       3.24          0.30            2.81  5.68 1.03     3.17    1185
4    1   14.37  1.95 2.50       16.8       113    3.85       3.49          0.24            2.18  7.80 0.86     3.45    1480
5    1   13.24  2.59 2.87       21.0       118    2.80       2.69          0.39            1.82  4.32 1.04     2.93     735
6    1   14.20  1.76 2.45       15.2       112    3.27       3.39          0.34            1.97  6.75 1.05     2.85    1450

The first column contains the type of each wine. We will use K-Means as a learning model to predict the types:

# remove the first column
input <- scale(wine[-1])

A drawback of K-Means clustering is that we have to pre-decide on the number of clusters.
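As an aside, the algorithm described above takes only a few lines in any language. Here is a minimal standalone sketch in Python with NumPy, our addition rather than part of the original R walkthrough; K must be supplied up front, and empty-cluster handling is omitted for brevity:

import numpy as np

def kmeans(data, k, iters=100, seed=1234):
    rng = np.random.default_rng(seed)
    # Step 1: randomly select K vectors as the initial centroids
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # Step 2: assign each vector to its closest centroid (Euclidean distance)
        distances = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Step 3: recompute each centroid as the mean of its members
        updated = np.array([data[labels == j].mean(axis=0) for j in range(k)])
        # Step 4: stop at convergence, when the clusters no longer change
        if np.allclose(updated, centroids):
            break
        centroids = updated
    return labels, centroids

Back in R, we still need a principled way to choose that K.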
First, we'll define a function to compute the optimal number of clusters by looking at the within-cluster sum of squares for different numbers of clusters:

wssplot <- function(data, nc=15, seed=1234){
  wss <- (nrow(data)-1)*sum(apply(data,2,var))
  for (i in 2:nc){
    set.seed(seed)
    wss[i] <- sum(kmeans(data, centers=i)$withinss)}
  plot(1:nc, wss, type="b", xlab="Number of Clusters",
       ylab="Within groups sum of squares")}

Now we'll compute the optimal number of clusters using the wssplot function we defined above:

pdf("Number of Clusters.pdf")
wssplot(input)
dev.off()

The plot shows the within-groups sum of squares versus the number of clusters extracted. The sharp decrease from 1 to 3 clusters (with only a small decrease after) suggests a 3-cluster solution, so the optimal number of clusters is 3. Now we will cluster the data into 3 clusters using the kmeans() function in R:

set.seed(1234)
# Clusters
fit <- kmeans(input, 3, nstart=25)

Let's plot the clusters:

# Plot the Clusters
require(graphics)
plot(input, col=fit$cluster)
points(fit$centers, col=1:3, pch = 8, cex = 2)

We can also visualize the clustered data more transparently using the R package ggplot2:

# ggplot visual
df <- data.frame(input)
df$cluster <- factor(fit$cluster)
centers <- as.data.frame(fit$centers)
require(ggplot2)
ggplot(data=df, aes(x=Alcohol, y=Malic, color=cluster)) + geom_point()

The sizes of the 3 clusters can be computed as:

# Size of the Clusters
size <- fit$size
size
>>> [1] 62 65 51

Thus we have three clusters of wines, of sizes 62, 65, and 51. The means of the columns (chemicals) for each of the clusters can be computed using the aggregate function:

# Means of the columns for the Clusters
mean_columns <- aggregate(input, by=list(fit$cluster), FUN=mean)
mean_columns

Let's now measure how good this clustering is. We can use K-Means as a predictive model to assign new data points to one of the 3 clusters, but first we should check how good the assignment is for the training set. A metric for this evaluation is called cross tabulation: a table comparing the type assigned by clustering with the original values.

# Measuring how good the clustering is
# Cross tabulation: a table comparing the type assigned by clustering and the original values
cross <- table(wine$Type, fit$cluster)
cross
>>>
     1  2  3
  1 59  0  0
  2  3 65  3
  3  0  0 48

This shows that the clustering gives a pretty good prediction. Let's fit cluster centers to each observation and compute the residuals:

fit_centers <- fitted(fit)
# Residue
residue <- input - fitted(fit)

Finally, assign to each observation its corresponding cluster:

mydata <- data.frame(input, fit$cluster)
write.table(mydata, file="clustered_observations.csv", sep=",", row.names=F, col.names=T, quote=F)

The full code is available here.

Further Reading

K-Means in Python
Clustering Twitter friends
Vectors from text data

Want more machine learning tutorials and content? Visit our dedicated Machine Learning page here.

About the Author

Janu Verma is a Quantitative Researcher at the Buckler Lab, Cornell University, where he works on problems in bioinformatics and genomics. His background is in mathematics and machine learning and he leverages tools from these areas to answer questions in biology.

The New AI Cold War Between China and the USA

Neil Aitken
28 Jun 2018
6 min read
The Cold War between the United States and Russia ended in 1991. However, considering the behind-the-scenes behavior of the world's two current superpowers, China and the USA, another might just be beginning. This time around, many believe that the real battle doesn't relate to the trade deficit between the two countries, despite news stories detailing the escalation of trade tariffs. In the next decade and a half, the real battle will take place between China and the USA in the technology arena; specifically, in the area of Artificial Intelligence, or AI.

China's not shy about its AI ambitions

China has made its goals clear when it comes to AI: it has publicly announced its plan to be the world leader in Artificial Intelligence by 2030. The country has learned a hard lesson, missing out on previous tech booms, notably in the race for internet supremacy early this century. Now, it is taking a far more proactive stance. The AI market is estimated to be worth $150 billion per year by 2030, slightly over a decade from now, and China has made very clear public statements that the country wants it all. The US, in contrast, has a number of private companies striving to carve out a leadership position in AI but no holistic policy. Quite the contrary, in fact: Trump's government says there is no need for an AI moonshot, and that minimizing government interference is the best way to make sure the technology flourishes.

What makes China so dangerous as an AI threat?

China's background and current circumstances give it a set of valuable strategic advantages when it comes to AI. AI solutions are based primarily on two things: first, and of critical importance, the amount of data available to 'train' an AI algorithm and the relative ease or difficulty of obtaining access to it; and secondly, the algorithms that sort the data, looking for patterns and insights derived from research, which are used to optimize the AI tools that interpret it. China leads the world on both fronts.

China has more data: China's population is four times larger than the US's, giving it a massive data advantage. China has a total of 730 million daily internet users and 704 million smartphone mobile internet users, and each of these connected individuals uses their phone, laptop, or tablet online every day. Those digital interactions leave logs of location, time, action performed, and many other variables. In sum, China's huge population is constantly generating valuable data that can be mined for value.

Chinese regulations give public and private agencies easier access to this data: Few countries have exemplary records when it comes to human rights. Both Australia and the US, for example, have been rebuked by the UN for their treatment of immigration in recent years. Questions have been asked of China too. Some suggest that China's centralized government, and its allegedly somewhat shady history when it comes to human rights, means it can provide internet companies with more data, more easily, than their private equivalents in the US could dream of. Chinese cybersecurity laws require companies doing business in the country to store their data locally. The government has placed one state representative on the board of each of the major tech companies, giving it direct, unfettered central government influence over the strategic direction and intent of those companies, especially when it comes to coordinating the distribution of the data they obtain. In the US, by contrast, data leakage is one of the most prominent news stories of 2018. Given Facebook's presentation to Congress around the Facebook/Cambridge Analytica data sharing scandal, it would be hard to claim that US companies can pool data beyond the walls of each individual company competing to evolve AI solutions fastest.

It's more secretive: China protects its advantage by limiting other countries' access to its findings and information related to AI. At the same time, China takes advantage of the open publication of cutting-edge ideas generated by scientists in other areas of the world.

How China is doubling down on its natural advantage in AI solution development

A number of metrics show China's growing advantage in the area: China is investing more money in AI and leading the world in the number of university-led research papers on AI being published. China overtook the US in AI funds allocation in 2015 and has been increasing investment in the area since (source: Wall Street Journal). China now performs more research into AI than the US, as measured by the number of published, peer-reviewed scientific papers (source: HBR).

Why 'Network Effects' will decide the ultimate winner in the AI Arms Race

You won't see evidence of a Cold War in the behaviors of world leaders. The handshakes are firm and the visits are cordial; everybody smiles when they meet at the G8. However, a look behind the curtain clearly shows a 21st-century arms race underway, led by AI-related investments in both countries. Network effects ensure that there is often only one winner in a fight for technological supremacy. Whoever has the 'best product' for a given application wins the most users, and the data obtained from those users' interactions with the tool is used to hone its performance, creating a virtuous circle. The result is evident in almost every sphere of tech: network effects explain why most people use only Google, why there's only one Facebook, and how Netflix has overtaken cable TV in the US as the primary source of video entertainment. Ultimately, there is likely to be only one winner in the war surrounding AI, too.

From a military perspective, the advantage China has in its starting point for AI solution development could be the deciding factor. As we've seen, China has more people, with more devices, generating more data. That is likely to help the country develop workable AI solutions faster. It ingests the hard-won advances that US data scientists develop and share, but does not share its own. Finally, it simply outspends and out-researches the US, investing more in AI than any other country. China's coordinated approach outpaces the US's market-based solution with every step. The country with the best AI solutions for each application will gain a 'winner takes all' advantage and the winning hand in the $300 billion game of AI market ownership.

Read more:
We must change how we think about AI, urge AI founding fathers
Does AI deserve to be so overhyped?
Alarming ways governments are using surveillance tech to watch you


What RESTful APIs can do for Cloud, IoT, social media and other emerging technologies

Pavan Ramchandani
01 Jun 2018
13 min read
Two decades ago, the IT industry saw tremendous opportunities with the dot-com boom. Similar to the dot-com bubble, the IT industry is transitioning through another period of innovation. The disruption is seen in major lines of business with the introduction of recent technology trends like cloud services, the Internet of Things (IoT), single-page applications, and social media. In this article, we cover the role and implications of RESTful web APIs in these emerging technologies. This article is an excerpt from a book written by Balachandar Bogunuva Mohanram, titled RESTful Java Web Services, Second Edition.

Cloud services

We are in an era where business and IT are married together. For enterprises, a business model without an IT strategy is becoming out of the question. While keeping the focus on the core business, the challenge that often lies ahead of the executive team is optimizing the IT budget. Cloud computing has come to the rescue of the executive team by bringing savings to the IT spending incurred in running a business. Cloud computing is an IT model for enabling anytime, anywhere, convenient, on-demand network access to a shared pool of configurable computing resources. In simple terms, cloud computing refers to the delivery of hosted services over the internet that can be quickly provisioned and decommissioned with minimal management effort and little intervention from the service provider.

Cloud characteristics

Five key characteristics deemed essential for cloud computing are as follows:

On-demand self-service: The ability to automatically provision cloud-based IT resources as and when required by the cloud service consumer.
Broad network access: The ability to support seamless network access for cloud-based IT resources via different network elements such as devices, network protocols, security layers, and so on.
Resource pooling: The ability to share IT resources between cloud service consumers using the multi-tenant model.
Rapid elasticity: The ability to dynamically scale IT resources at runtime, and to release IT resources, based on demand.
Measured service: The ability to meter service usage, ensuring cloud service consumers are charged only for the services they utilize.

Cloud offering models

Cloud offerings can be broadly grouped into three major categories, IaaS, PaaS, and SaaS, based on their position in the technology stack:

Software as a Service (SaaS): Delivers the applications required by an enterprise, saving the costs the enterprise would otherwise incur to procure, install, and maintain them; these applications are offered by a cloud service provider at competitive pricing.
Platform as a Service (PaaS): Delivers the platforms required by an enterprise for building its applications, saving the cost of setting up and maintaining these platforms, which are offered by a cloud service provider at competitive pricing.
Infrastructure as a Service (IaaS): Delivers the infrastructure required by an enterprise for running its platforms or applications, saving the cost of setting up and maintaining the infrastructure components, which are offered by a cloud service provider at competitive pricing.

RESTful APIs' role in cloud services

RESTful APIs can be looked on as the glue that connects cloud service providers and cloud service consumers. For example, application developers needing to display a weather forecast can consume the Google Weather API.
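Before diving into the provisioning example, here is a minimal sketch of what consuming such a RESTful API from code can look like, written in Python with the requests library. The endpoint, parameters, and response fields below are hypothetical, purely for illustration:

import requests

# Hypothetical weather API endpoint and fields, for illustration only
response = requests.get(
    "https://api.example-weather.com/v1/forecast",
    params={"city": "London", "days": 3},
    headers={"Accept": "application/json"},
)
response.raise_for_status()  # fail fast on any non-2xx status code

for day in response.json()["forecast"]:
    print(day["date"], day["temperature"])

The same request/response pattern, with authentication added, underpins the provisioning calls that follow.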
In this section, we will look at the applicability of RESTful APIs for provisioning resources in the cloud. For illustration, we will use the Oracle Cloud service platform. Users can set up a free trial account via https://cloud.oracle.com/home and try out the examples discussed in the following sections. As an example, we will set up a test virtual machine instance using the REST APIs. The high-level steps are as follows:

1. Locate the REST API endpoint
2. Generate an authentication cookie
3. Provision the virtual machine instance

Locating the REST API endpoint

Once users have signed up for an Oracle Cloud account, they can locate the REST API endpoint by navigating through the following screens:

1. Login screen: Choose the relevant Cloud Account details and click the My Services button.
2. Home page: Displays the cloud services dashboard for the user. Click the Dashboard icon.
3. Dashboard screen: Lists the various cloud offerings. Click the Compute Classic offering.
4. Compute Classic screen: Displays the details of the infrastructure resources utilized by the user.
5. Site Selector screen: Displays the REST endpoint.

Generating an authentication cookie

Authentication is required for provisioning IT resources. For this purpose, we generate an authentication cookie using the Authenticate User REST API. The details of the API are as follows:

- API function: Authenticates the supplied user credentials and generates an authentication cookie for use in subsequent API calls
- Endpoint: <REST endpoint captured in the previous section>/authenticate/ (for example, https://compute.eucom-north-1.oraclecloud.com/authenticate/)
- HTTP method: POST
- Request header properties: Content-Type: application/oracle-compute-v3+json and Accept: application/oracle-compute-v3+json
- Request body: user, the two-part name of the user in the format /Compute-identity_domain/user, and password, the password for the specified user
- Response header properties: set-cookie, the authentication cookie value

Sample request:

```
{
  "password": "xxxxx",
  "user": "/Compute-586113456/[email protected]"
}
```

The following screenshot shows the authentication cookie generated by invoking the Authenticate User REST API via the Postman tool:

Provisioning a virtual machine instance

Consumers can provision IT resources on the Oracle Compute Cloud infrastructure service using the LaunchPlans or Orchestration REST API. For this demonstration, we will use the LaunchPlans REST API. The details of the API are as follows:

- API function: Launches a plan used to provision infrastructure resources in the Oracle Compute Cloud Service
- Endpoint: <REST endpoint captured in the previous section>/launchplan/ (for example, https://compute.eucom-north-1.oraclecloud.com/launchplan/)
- HTTP method: POST
- Request header properties: Content-Type: application/oracle-compute-v3+json, Accept: application/oracle-compute-v3+json, and Cookie: <authentication cookie>
- Request body: instances, an array of instances to be provisioned (for details of the properties required by each instance, refer to http://docs.oracle.com/en/cloud/iaas/compute-iaas-cloud/stcsa/op-launchplan--post.html), and relationships, any relationships with other instances
- Response body: The provisioned list of instances and their relationships

Sample request:

```
{
  "instances": [
    {
      "shape": "oc3",
      "imagelist": "/oracle/public/oel_6.4_2GB_v1",
      "name": "/Compute-586113742/[email protected]/test-vm-1",
      "label": "test-vm-1",
      "sshkeys": []
    }
  ]
}
```
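Outside Postman, the same two calls can be scripted end to end. Here is a minimal Node.js sketch of the flow, assuming Node 18+ for the built-in fetch; the endpoint, identity domain, and credentials are the placeholder values from the sample requests above.

```javascript
// Authenticate, capture the cookie, then submit a launch plan.
// All identifiers below are placeholders from the sample requests.
const BASE_URL = 'https://compute.eucom-north-1.oraclecloud.com';
const HEADERS = {
  'Content-Type': 'application/oracle-compute-v3+json',
  Accept: 'application/oracle-compute-v3+json'
};

async function provisionTestVm() {
  // Step 1: generate the authentication cookie.
  const authResponse = await fetch(`${BASE_URL}/authenticate/`, {
    method: 'POST',
    headers: HEADERS,
    body: JSON.stringify({
      user: '/Compute-586113456/[email protected]',
      password: 'xxxxx'
    })
  });
  const cookie = authResponse.headers.get('set-cookie');

  // Step 2: provision the instance, passing the cookie along.
  const launchResponse = await fetch(`${BASE_URL}/launchplan/`, {
    method: 'POST',
    headers: { ...HEADERS, Cookie: cookie },
    body: JSON.stringify({
      instances: [{
        shape: 'oc3',
        imagelist: '/oracle/public/oel_6.4_2GB_v1',
        name: '/Compute-586113742/[email protected]/test-vm-1',
        label: 'test-vm-1',
        sshkeys: []
      }]
    })
  });
  console.log('HTTP status:', launchResponse.status); // 201 on success
  console.log(await launchResponse.json());
}

provisionTestVm().catch(console.error);
```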
The following screenshot shows the creation of a test virtual machine instance by invoking the LaunchPlans REST API via the Postman tool. An HTTP response status of 201 confirms that the provisioning request was successful. You can then check the status of the provisioned instance via the cloud service instances page.

Internet of Things

The Internet of Things (IoT), as the name suggests, can be considered a technology enabler for things (including people) to connect to or disconnect from the internet. The term IoT was first coined by Kevin Ashton in 1999. With broadband Wi-Fi becoming widely available, it is becoming a lot easier to connect things to the internet. This has a lot of potential to enable a smart way of living, and there are already many projects around smart homes, smart cities, and so on. A simple use case is predicting the arrival time of a bus so that commuters can plan accordingly if there are any delays. In many developing countries, the transport system is already equipped with smart devices that help commuters precisely predict the arrival or departure time of a bus or train. The analyst firm Gartner has predicted that more than 26 billion devices will be connected to the internet by 2020. A technology roadmap on Wikipedia depicts the expected applicability of the IoT by 2020 across different areas.

IoT platform

The IoT platform consists of four functional layers: device, data, integration, and service. The capabilities required at each layer are as follows:

- Device: Device management capabilities supporting device registration, provisioning, and access control, along with seamless connectivity for sending and receiving data
- Data: Management of the huge volume of data transmitted between devices, deriving intelligence from the collected data, and triggering actions
- Integration: Collaboration of information between devices
- Service: API gateways exposing the APIs

IoT benefits

The IoT platform is seen as the latest evolution of the internet, and it offers various benefits. The IoT is becoming widely used due to the lowering cost of technologies such as cheap sensors, cheap hardware, and low-cost, high-bandwidth networks. The connected human is the most visible outcome of the IoT revolution. People are connected to the IoT through various means, such as Wearables, Hearables, and Nearables, which can be used to improve the lifestyle, health, and wellbeing of human beings:

- Wearables: Any form of sophisticated, computer-like technology that can be worn or carried by a person, such as smart watches and fitness devices
- Hearables: Wireless computing earpieces, such as headphones
- Nearables: Smart objects with computing devices attached to them, such as door locks and car locks; unlike Wearables or Hearables, Nearables are static

In the healthcare industry, IoT-enabled devices can be used to monitor patients' heart rate or diabetes. Smart pills and nanobots could eventually replace surgery and reduce the risk of complications.

RESTful APIs' role in the IoT

The architectural pattern used to realize the majority of IoT use cases is the event-driven architecture pattern, which deals with the creation, consumption, and identification of events. An event can be generalized as a change in the state of an entity. For example, a printer device connected to the internet may emit an event when the printer cartridge is low on ink, so that the user can order a new cartridge.

The common capability required of devices connected to the internet is the ability to send and receive event data, and this can be easily accomplished with RESTful APIs. The following are some of the IoT APIs available on the market:

- Hayo API: Used by developers to build virtual remote controls for the IoT devices in a home. The API senses and transmits events between virtual remote controls and devices, making it easier for users to achieve desired actions on applications by simply manipulating a virtual remote control.
- Mozilla Battery Status API: Used to monitor the system battery level of mobile devices; it streams notification events for changes in the battery level and charging progress, allowing users to retrieve real-time updates of device battery levels and status.
- Caret API: Allows status sharing across devices, and the status can be customized as well.
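As a quick illustration of that capability, here is a minimal sketch of a device emitting the printer event described above over a RESTful API; the endpoint and payload shape are hypothetical.

```javascript
// Hypothetical device-side event emitter; the endpoint and payload are placeholders.
async function reportInkLow(printerId, inkLevelPercent) {
  const response = await fetch(`https://api.example.com/devices/${printerId}/events`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      type: 'CARTRIDGE_LOW',
      inkLevelPercent,
      reportedAt: new Date().toISOString()
    })
  });
  return response.ok; // true when the platform accepted the event
}

reportInkLow('printer-42', 5).then(ok => console.log('event delivered:', ok));
```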
Modern web applications

Web-based applications have seen a drastic evolution from Web 1.0 to Web 2.0. Web 1.0 sites were designed mostly with static pages, while Web 2.0 added more dynamism to them. Here is a quick snapshot of the evolution of web technologies over the years:

- 1993-1995: Static HTML websites with embedded images and minimal JavaScript
- 1995-2000: Dynamic web pages driven by JSP and ASP, with CSS for styling and JavaScript for client-side validations
- 2000-2008: Content management systems such as WordPress, Joomla, Drupal, and so on
- 2009-2013: Rich internet applications, portals, animations, Ajax, and mobile web applications
- 2014 onwards: Single-page applications, mashups, and the social web

Single-page applications

Single-page applications are web applications designed to load the application in a single HTML page. Unlike traditional web applications, rather than refreshing the whole page to display a content change, they enhance the user experience by dynamically updating the current page, similar to a desktop application. The following are some of the key features and benefits of single-page applications:

- Content loads in a single page
- No page refreshes
- Responsive design
- Better user experience
- The capability to fetch data asynchronously using Ajax
- The capability for dynamic data binding

RESTful APIs' role in single-page applications

In a traditional web application, the client requests a URI and the requested page is displayed in the browser. Subsequently, when the user submits a form, the submitted form data is sent to the server and the response is displayed by reloading the whole page. A single-page application, by contrast, calls RESTful APIs asynchronously and redraws only the affected portion of the page with the returned data, as sketched below.
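Here is a minimal sketch of that pattern: fetching data from a RESTful API asynchronously and updating part of the page without a reload. The endpoint and element ID are hypothetical.

```javascript
// Hypothetical single-page-application snippet: refresh one DOM node, no page reload.
async function refreshOrderStatus(orderId) {
  const response = await fetch(`https://api.example.com/orders/${orderId}`, {
    headers: { Accept: 'application/json' }
  });
  const order = await response.json();
  // Update only the relevant part of the page.
  document.querySelector('#order-status').textContent = order.status;
}

refreshOrderStatus('1001');
```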
Social media

Social media is the future of communication; it not only lets people interact but also enables the transfer of different content formats, such as audio, video, and images, between users. In Web 2.0 terms, social media is a channel that interacts with you as well as providing information. While regular media is a one-way communication, social media is a two-way communication that asks for one's comments and lets one vote. Social media has seen tremendous usage via networking sites such as Facebook, LinkedIn, and so on.

Social media platforms

Social media platforms are based on Web 2.0 technology, which serves as an interactive medium for collaboration, communication, and sharing among users. We can classify social media platforms broadly based on their usage as follows:

- Social networking services: Platforms where people manage their social circles and interact with each other, such as Facebook
- Social bookmarking services: Allow one to save, organize, and manage links to various resources over the internet, such as StumbleUpon
- Social media news: Platforms that allow people to post news or articles, such as Reddit
- Blogging services: Platforms where users exchange comments and views, such as Twitter
- Document sharing services: Platforms that let you share documents, such as SlideShare
- Media sharing services: Platforms that let you share media content, such as YouTube
- Crowdsourcing services: Platforms for obtaining needed services, ideas, or content by soliciting contributions from a large group of people or an online community, such as Ushahidi

Social media benefits

User engagement through social media has seen tremendous growth, and many companies use social media channels for campaigns and branding. Let us look at the various benefits social media offers:

- Customer relationship management: A company can use social media to promote its brand and potentially benefit from positive customer reviews
- Customer retention and expansion: Customer reviews can become a valuable source of information for retention and can also help to add new customers
- Market research: Social media conversations can provide useful insights for market research and planning
- Competitive advantage: The ability to view competitors' messages enables a company to build strategies to handle its peers in the market
- Public relations: Corporate news can be conveyed to the audience in real time
- Cost control: Compared to traditional methods of campaigning, social media offers better advertising at a cheaper cost

RESTful APIs' role in social media

Many social networks provide RESTful APIs to expose their capabilities. Let us look at the RESTful APIs of some popular social media services:

- YouTube: Add YouTube features to your application, including the ability to upload videos, create and manage playlists, and more (https://developers.google.com/youtube/v3/)
- Facebook: The Graph API is the primary way to get data out of, and put data into, Facebook's platform. It's a low-level HTTP-based API that you can use to programmatically query data, post new stories, manage ads, upload photos, and perform a variety of other tasks that an app might implement (https://developers.facebook.com/docs/graph-api/overview)
- Twitter: Twitter provides APIs to search, filter, and create ad campaigns (https://developer.twitter.com/en/docs)

To summarize, we discussed modern technology trends and the role of RESTful APIs in each of these areas, including their implications for the cloud, virtual machines, user experience across various architectures, and building social media applications. To know more about designing and working with RESTful web services, do check out RESTful Java Web Services, Second Edition.
- Getting started with Django and Django REST frameworks to build a RESTful app
- How to develop RESTful web services in Spring

Computer vision is growing quickly. Here's why.

Aaron Lazar
12 Jun 2018
6 min read
Computer Vision is one of those technologies that has grown in leaps and bounds over the past few years. Look back 10 years and it wasn't the case: CV was mostly a topic of academic interest. Now, however, computer vision is clearly both a driver and a beneficiary of the boom in Artificial Intelligence. Through this article, we'll understand the factors that have sparked the rise of Computer Vision.

A billion $ market

You heard it right! Computer Vision is a billion-dollar market, thanks to the likes of Intel, Amazon, and Netflix investing heavily in the technology's development. And from the way events are unfolding, the market is expected to hit a record $17 billion by 2023. That's a cumulative growth rate of over 7% per year from 2018 to 2023, and a joint figure for both the hardware and software components related to Computer Vision.

Under the spotlight

Let's talk a bit about a few companies that are already taking advantage of Computer Vision, and are benefiting from it.

Intel

Several large organisations are investing heavily in Computer Vision. Last year, we saw Intel invest $15 billion in acquiring Mobileye, an Israeli auto startup. Intel published findings stating that the autonomous vehicle market itself would rise to $7 trillion by 2050. The autonomous vehicle industry will be one of the largest implementers of computer vision technology: these vehicles will use Computer Vision to "see" their surroundings and communicate with other vehicles.

Netflix

Netflix, on the other hand, is using Computer Vision for more creative purposes. With the rise of Netflix's original content, the company is investing in Computer Vision to harvest static image frames directly from the source videos, providing a flexible source of raw artwork for digital merchandising. For example, within a single episode of Stranger Things there are nearly 86,000 static video frames that would have had to be analysed by human teams to identify the most appropriate stills to feature. That meant first going through each of those 86,000 images, then understanding what worked for viewers of the previous episode, and then applying that learning when selecting future images. Need I estimate how long that would have taken? Now, Computer Vision performs this task seamlessly, with much higher accuracy than humans.

Pinterest

Pinterest, the popular social networking application, sees millions of images, GIFs, and other visuals shared every day. In 2017, it released a feature called Lens, which allows users to use their phone's camera to search for similar-looking decor, food, and clothing in the real world. Users simply point their cameras at an object and Pinterest shows them similar styles and ideas. Recent reports reveal that Pinterest's revenue has grown by a staggering 58%!

National surveillance via CCTV

The world's biggest AI startup, SenseTime, provides China with the world's largest and most sophisticated CCTV network. With over 170 million CCTV cameras, government authorities and police departments can seamlessly identify people, aided by smart glasses with facial recognition capabilities worn by officers. Bring this technology to Dubai and you've got a supercop in a supercar! The nationwide surveillance project, named Skynet, began as early as 2005, although recent advances in AI have given it a boost. Reading through discussions like these is great fun.
People used to quip that such "fancy" machines were only for the screen. If only they knew that such machines would be a reality just a few years later. Clearly, computer vision is one of the most highly valued commercial applications of machine learning, and when integrated with AI, it's an offer only a few can resist!

Star acquisitions that matter

Several acquisitions have taken place in the field of Computer Vision in the past two years alone, the most notable of them being Intel's acquisition of Movidius, to the tune of $400 million. Here are some of the others that have happened since 2016:

- Twitter acquired Magic Pony Technology for $150 million
- Snap Inc acquired Obvious Engineering for $47 million
- Salesforce acquired MetaMind for $32.8 million
- Google acquired Eyefluence for $21.6 million

This shows the potential of the computer vision market and how big players are racing to dive deep into the technology.

Three little things driving computer vision

I would say there are three clear growth factors contributing to the rise of Computer Vision:

- Deep learning
- Advancements in hardware
- Growth of the datasets

Deep Learning

Advancements in the field of Deep Learning are bound to boost Computer Vision. Deep Learning algorithms are capable of processing tonnes of images, far more accurately than humans. Take feature extraction, for example. The primary pain point with feature extraction is that you have to choose which features to look for in a given image. This becomes cumbersome, and almost impossible, when the number of classes you are trying to define starts to grow: there are so many features that you end up with a plethora of parameters that have to be fine-tuned. Deep Learning simplifies this process for you.

Advancements in Hardware

With new hardware like GPUs capable of processing petabytes of data, algorithms can run faster and more efficiently. This has led to advancements in real-time processing and vision capabilities. Pioneering hardware manufacturers like NVIDIA and Intel are in a race to create more powerful and capable hardware to support deep learning capabilities for Computer Vision.

Growth of the Datasets

Training Deep Learning algorithms isn't a daunting task anymore. There are plenty of open source datasets that you can choose from to train your algorithms; the more the data, the better the training and accuracy. Here are some of the most notable datasets for computer vision:

- ImageNet, with 15 million images, is a massive dataset
- Open Images has 9 million images
- Microsoft Common Objects in Context (COCO) has around 330,000 images
- Caltech-101 has approximately 9,000 images

Where tha money at?

The job market for Computer Vision is on the rise too, with Computer Vision featuring at #3 on Indeed's list of top jobs in 2018. Organisations are looking for Computer Vision engineers who are well versed in writing efficient algorithms for handling large amounts of data. (Source: Indeed.com)

So is it the right time to invest in, or perhaps learn, Computer Vision? You bet it is! It's clear that Computer Vision is a rapidly growing market that will see sustained growth for the next few years. Whether you're just planning to start out or are already competent with Computer Vision tools, here are some resources to help you skill up with popular CV tools and techniques:

- Introducing Intel's OpenVINO computer vision toolkit for edge computing
- Top 10 Tools for Computer Vision
- Computer Vision with Keras, Part 1

Why is everyone talking about JavaScript fatigue?

Erik Kappelman
21 Sep 2017
4 min read
To answer this question, let's start by defining what exactly JavaScript fatigue is. JavaScript fatigue is best described as viewing the onslaught of new JavaScript tools, frameworks, and packages as a relentless stream of shaggy dog stories rather than an endless stream of creativity and enhanced productivity. I must admit, I myself have a serious case of JavaScript fatigue.

Anyone who is plugged into the tech world knows that JavaScript has been having a moment since the release of Node.js in 2009. Obviously, JavaScript was not new in 2009. Its monopoly on web scripting had already made it an old hand in the development world, but with the advent of Node.js, JavaScript began to creep out of web browsers into desktop applications and mobile apps. Pretty soon there was the MEAN stack, a web app architecture that allows a developer to run a web app end-to-end with only JavaScript, and tools like PhoneGap allowing developers to create mobile apps with good old-fashioned HTML, CSS and, you guessed it, JavaScript. I think JavaScript fatigue asks the question: should we really be excited about the emergence of 'new' tech based on, or built for, a scripting language that has been in use for more than two decades?

How did JavaScript fatigue happen?

Before I answer the title question, let's discuss how this happened. Obviously, the emergence of Node.js alone cannot be considered the complete explanation of JavaScript fatigue. But when you consider that JavaScript happens to be a relatively 'easy' language, and the language that many people start their development journeys with, a new platform that extended the functionality of such a language easily became a catalyst for the JavaScript wave that has been rolling for the last few years.

So, the really simple answer is that JavaScript is easy, so a bunch of people are using it. But who cares? Why is a bunch of people using a language that most of us already know a bad thing? To me that sounds a lot like a good thing. The reason this is problematic actually has nothing to do with JavaScript. There is a difference between using a common language because it is productively advantageous and using a common language out of laziness. Many developers are guilty of the latter. And when a developer is lazy about one thing, they're probably lazy about all the other things as well.

Is it fair to blame JavaScript?

So why are there so many lazily created frameworks, APIs, web apps, and desktop applications written in JavaScript? Is it really fair to blame the language? No, it is not fair. People are not really fed up with JavaScript; they're fed up with lazy developers, and that is nothing new. Beyond literal laziness in the writing of JS code, there is a laziness around picking the tools to solve problems. I've heard it said that web development, or any development for that matter, is really not about development tools or process; it's about the results. Regular people don't care what technologies Amazon uses on their website, while everybody cares about using Amazon to buy things or stream videos.

There has been a lot of use of JavaScript for the sake of using JavaScript. This is probably the most specific reason people are talking about JavaScript fatigue. When hammering a nail into a board, a carpenter doesn't choose a screwdriver because the screwdriver is the newest tool in their toolbox; they choose a hammer, because it's the right tool.
Sure, you could use the handle of the screwdriver to bang in that nail, and it would basically work, and then you would get to use your new tool. This is clearly a stupid way to operate. Unfortunately, many of the choices made in the development world today are centered on finding the newest JavaScript tool to solve a problem instead of finding the best tool to solve a problem. If developers eat up new tools like candy, other developers are going to keep creating them. This is the downward spiral we find ourselves in.

Using technology to solve problems

So, why is everyone talking about JavaScript fatigue? Because it is a real problem, and it's getting real annoying. As has been the case before, many developers have become Narcissus, admiring their code in the reflective pool of the internet until they waste away. Let's keep an eye on the prize: using technology to solve problems. If JavaScript is used in this way, nobody will have any qualms with the current JavaScript renaissance. It's when we start developing for the sake of developing that things get a little weird.

5 things to remember when implementing DevOps

Erik Kappelman
05 Dec 2017
5 min read
DevOps is a much more realistic and efficient way to organize the creation and delivery of technology solutions to customers. But like practically everything else in the world of technology, DevOps has become a buzzword that is often thrown around willy-nilly. Let's cut through the fog and highlight concrete steps that will help an organization implement DevOps.

DevOps is about bringing your development and operations teams together

This might seem like a no-brainer, but DevOps is often explained in terms of tools rather than techniques or philosophical paradigms. At its core, DevOps is about uniting developers and operators, getting these groups to communicate effectively with each other, and then using this new communication to streamline various processes. This could even include a physical change to the layout of an organization's workspace; it's incredible what changes can happen just by rearranging the seating in an office. If you have a very large organization, development and operations might be in separate buildings, separate campuses, or even separate cities. While the efficacy of web-based communication has increased dramatically over the last few years, there is still no replacement for face-to-face daily human interactions. Putting developers and operators in the same physical space is going to increase the rate of adoption and the efficacy of various DevOps tools and techniques.

DevOps is all about updates

Updates can be aimed at expanding functionality or simply fixing or streamlining existing processes. Updates present two problems to developers and operators. First, we need to keep everybody working on the same codebase. This can be achieved with a variety of continuous integration tools. The goal of continuous integration is to make sure that changes and updates to the codebase are integrated as close to continuously as possible, which helps avoid the merge problems that can result from multiple developers working on the same codebase at the same time. Second, these updates need to be integrated into the final product. For this task, DevOps applies the concept of continuous deployment: essentially the same idea as continuous integration, but concerned with deploying changes to production rather than integrating changes into the codebase. In terms of importance to the DevOps process, continuous integration and continuous deployment are equally important. Moving updates from a developer's workspace to the codebase to production should be seamless, smooth, and continuous.

Implementing a microservices structure is imperative for an effective DevOps approach

Microservices are an extension of the service-based structure. Basically, a service structure calls for the modularization of a solution's codebase into units based on functionality. Microservices take this a step further by implementing a service-based structure in which each service performs a single task. While a service-based or microservice structure is not strictly required for DevOps, it is hard to see why you would forgo one, because microservices lend themselves so well to DevOps. One way to think of a microservice structure is to imagine an ant hill in which all of the worker ants are microservices. Each ant has a specific set of abilities and is given a task from the queen. The ant then autonomously performs this task, usually gathering food, along with all of its ant friends. Remove a single ant from the pile, and nothing really happens.
Replace an old ant with a new ant, and nothing really happens. The metaphor isn't perfect, but it strikes at the heart of why microservices are valuable in a DevOps framework. If we need to integrate and deploy continuously, shouldn't we try to impact the codebase as directly as we can? When microservices are in use, changes can be made at an extremely granular level, which allows continuous integration and deployment to really shine.

Monitor your DevOps solutions

In order to deploy continuously, applications also need to be continuously monitored. This allows problems to be identified quickly, and when problems are identified quickly, the total effort required to fix them tends to be lower. Your application should obviously be monitored from the perspective of whether or not it is working as it currently should, but users also need to be able to give feedback on the application's functionality. When reasonable, this feedback can then be integrated into the application. Monitoring user feedback tends to fall by the wayside when discussing DevOps. It shouldn't. The whole point of the DevOps process is to improve the user experience, and if you're not getting feedback from users in a timely manner, it's kind of impossible to improve their experience.

Keep it loose and experiment

Part of the beauty of DevOps is that it can allow for more experimentation than other development frameworks. When microservices and continuous integration and deployment are being fully utilized, it's fairly easy to incorporate experimental changes into applications. If an experiment fails, or doesn't do exactly what was expected, it can be removed just as easily. Basically, remember why DevOps is being used and really try to get the most out of it.

DevOps can be complicated, and boiling anything down to five steps can be difficult, but if you act on these five fundamental principles you will be well on your way to putting DevOps into practice. And while it's fun to talk about what DevOps is and isn't, ultimately that's the whole point: to actually uncover a better way to work with others.

What are the challenges of adopting AI-powered tools in Sales? How Salesforce can help

Guest Contributor
24 Aug 2019
8 min read
Artificial intelligence is a hot topic for many industries. When it comes to sales, the situation gets complicated. According to the latest Salesforce State of Sales report, just 21% of organizations use AI in sales today, while its adoption in sales is expected to grow 155% by 2020. Let's explore what keeps sales teams from implementing AI and how to overcome these challenges to unlock new opportunities.

Why do so few teams adopt AI in sales?

There are a few reasons behind such a low rate of AI adoption in sales. First, some teams don't feel they are prepared to integrate AI into their existing strategies. Second, AI technologies are often applied in a haphazard way: many businesses have high expectations of AI and concentrate mostly on its benefits rather than contemplating possible difficulties upfront. Such an approach rarely results in positive business transformation. Here are some common challenges that businesses need to overcome to turn their sales AI projects into success stories.

Businesses don't know how to apply AI in their workflow

Problem: Different industries call for different uses of AI. Still, companies tend to buy AI platforms and use them for the same few popular tasks, like predictions based on historical data or automatic data logging. In reality, the business type and direction should dictate what AI solution will best fit the needs of the organization. For example, in e-commerce, AI can serve dynamic product recommendations on the basis of the customer's previous purchases or views. Teams relying on email marketing can use AI to serve personalized email content as well as optimize send times.

Solution: Let the sales team participate in AI onboarding. Prior to setup, gain insight into your sales reps' daily routine, needs, and pains. Then, get their feedback continuously during the actual AI implementation. Such a strategy will ensure the sales team benefits from a tailored, rather than a generic, AI system.

AI requires data businesses don't have

Problem: AI is most efficient when fed with huge amounts of data. It's true that a company with a few hundred leads per week will train AI for better predictions than a company with the same number of leads per month. Frequently, companies assume they don't have that much data, or that they cannot present it in a format suitable for training an AI algorithm.

Solution: In reality, AI can be trained with incomplete and imperfect data. Instead of trying to integrate the whole set of data prior to implementing AI, it's possible to start with data subsets, like historical purchase data or promotional campaign analytics. Plus, AI can improve the quality of data by predicting missing elements or identifying possible errors.

Businesses lack the skills to manage AI platforms

Problem: AI is a sophisticated technology that requires special skills to implement and use. Sales teams need to be augmented with specialized knowledge in data management, software optimization, and integration; otherwise, AI tools can be used incorrectly and thus provide little value.

Solution: There are two ways of solving this problem. First, it's possible to create a new team of big data, machine learning, and analytics experts to run the AI implementation and coordinate it with the sales team. This option is rather time-consuming. Second, it's possible to buy an AI-driven platform, like Salesforce, that includes both out-of-the-box features and plenty of customization opportunities.
Instead of hiring new specialists to manage the platform, you can reach out to Salesforce consultants who will help you select the best-fit plan, then configure and implement it. If your requirements go beyond the features available by default, it's possible to add custom functionality.

How AI can change the sales of tomorrow

Once you have a clear vision of the AI implementation challenges and understand how to overcome them, it's time to make use of the benefits AI provides. A core benefit of any AI system is its ability to analyze large amounts of data across multiple platforms and then connect the dots, that is, draw actionable conclusions. To illustrate these AI opportunities, let's take Salesforce, one of the most popular solutions in this domain today, and see how its AI technology, Einstein, can enhance a sales workflow.

Time savings and a productivity boost

Administrative work eats up sales reps' time that they could spend selling. That's why many administrative tasks should be automated. Salesforce Einstein can save time usually wasted on manual data entry by:

- Automating contact creation and updates
- Logging activities
- Generating lead status reports
- Syncing emails and calendars
- Scheduling meetings

Efficient lead management

When it comes to leads, sales reps tend to base their lead management strategies on gut feeling. In spite of its importance, intuition cannot be the only means of assessing leads; the approach should be more holistic. AI has unmatched abilities to analyze large amounts of information from different sources to help score and prioritize leads. In combination with sales reps' intuition, such data can bring lead management to a new level. For example, Einstein can help with:

- Scoring leads based on historical data and the performance metrics of the best customers
- Classifying opportunities in terms of their readiness to convert
- Tracking re-engaged opportunities and nurturing them

Predictive forecasting

AI is well known for its predictive capabilities, which help sales teams make smarter decisions without running endless what-if scenarios. AI forecasting builds sales models using historical data. Such models anticipate the possible outcomes of scenarios common in sales reps' work. Salesforce Einstein, for example, can predict:

- Prospects most likely to convert
- Deals most likely to close
- Prospects or deals to target
- New leads
- Opportunities to upsell or cross-sell

The same algorithm can be used to forecast sales team performance during a specified period of time and to take proactive steps based on those predictions. What's more, sales intelligence is shifting from predictive to prescriptive, where prescriptive AI does not merely recommend but prescribes the exact actions to be taken by sales reps to achieve a particular outcome.

Watching out for the pitfalls of AI in sales

While AI promises to fulfil sales reps' advanced requests, there are still some fears and doubts around it. First of all, as a rising technology, AI still carries ethical issues related to its safe and legitimate use in the workplace, such as the integrity of autonomous AI-driven decisions and the legitimate origin of the data fed to algorithms. While a full-fledged legal framework is yet to be worked out, governments have already stepped in. For example, the High-Level Expert Group on AI of the European Commission came up with the Ethics Guidelines for Trustworthy Artificial Intelligence, covering every aspect from human oversight and technical robustness to data privacy and non-discrimination.
In particular, non-discrimination relates to potential bias, such as algorithmic bias that comes from human bias when sourcing data, and bias where correlation does not equal causation. Thus, AI-driven analysis should be incorporated into decision-making cautiously, as just one of many sources of insight. AI won't replace the human mind; the data still needs to be processed critically.

When it comes to sales, another common concern is that AI will take sales reps' jobs. Yes, some tasks deemed monotonous and time-consuming are indeed taken over by AI automation. However, this is actually a blessing, as AI does not replace jobs but augments them. This way, sales reps have more time on their hands to complete more creative and critical tasks. It's true, however, that employers will need people who know how to work with AI technologies. That means either ongoing training or new hires, which can be rather costly. The stakes are high, though: to keep up with the fast-changing world, one has to bargain one's way to success, finding a way around current limitations and challenges.

In a nutshell

AI is key to boosting sales team performance. However, successfully integrating AI into sales and marketing strategies requires teams to overcome the challenges posed by sophisticated AI technologies. Popular AI-driven platforms like Salesforce help sales reps get hold of AI's potential and enjoy vast opportunities for saving time and increasing productivity.

Author Bio

Valerie Nechay is MarTech and CX Observer at Iflexion, a Denver-based custom software development provider. Using her writing powers, she translates complex technologies into fascinating topics and shares them with the world. Currently her focus is on Salesforce implementation how-tos, challenges, insights, and shortcuts, as well as broader applications of enterprise tech for business development.

- IBM halts sales of Watson AI tool for drug discovery amid tepid growth: STAT report
- Salesforce Einstein team open sources TransmogrifAI, their automated machine learning library
- How to create a sales analysis app in Qlik Sense using the DAR method [Tutorial]

CapsNet: Are Capsule networks the antidote for CNNs kryptonite?

Savia Lobo
13 Dec 2017
5 min read
Convolutional Neural Networks (CNNs) are a family of neural networks that have excelled in areas such as image recognition and classification. They are among the most popular neural network models and are present in nearly all state-of-the-art image recognition systems. However, CNNs have drawbacks, which are discussed later in this article. To address these issues, Geoffrey Hinton, popularly known as the Godfather of Deep Learning, recently published a research paper along with two other researchers, Sara Sabour and Nicholas Frosst. In this paper, they introduced CapsNet, or Capsule Network: a neural network based on a multi-layer capsule system. Let's explore the issue with CNNs and how CapsNet advances on them.

What is the issue with CNNs?

Convolutional Neural Networks are known to handle image classification tasks seamlessly. They are experts at learning at a granular level: the lower layers detect the edges and shape of an object, and the higher layers detect the image as a whole. However, CNNs perform poorly when an image has a slightly different orientation (a rotation or a tilt), as they compare every image with the ones learned during training. For instance, if an image of a face is to be detected, a CNN checks for facial features such as a nose, two eyes, a mouth, and eyebrows, irrespective of their placement. This means a CNN may identify an incorrect face in cases where the placement of an eye and the nose is not as conventionally expected, for example in the profile view. The orientation of, and the spatial relationships between, the objects within an image are not considered by a CNN.

To make CNNs understand orientation and spatial relationships, they were trained profusely with images taken from all possible angles. Unfortunately, this resulted in an excessive amount of training time, and the performance of the CNNs did not improve much. Pooling methods were also introduced at each layer within the CNN model for two reasons: first, to reduce the time invested in training, and second, to bring positional invariance to CNNs. The result was false positives: an object would be detected within an image without its orientation being checked, and the image would be incorrectly declared a match. Positional invariance thus made CNNs susceptible to minute changes in viewpoint. Instead of invariance, what CNNs require is equivariance: a property that makes CNNs adapt to changes in rotation or proportion within an image. This equivariance is now possible via the Capsule Network!

The solution: Capsule Network

CapsNet, or Capsule Network, is an encapsulation of nested neural network layers. A traditional neural network contains multiple layers, whereas a capsule network contains multiple layers within a single capsule. CNNs go deeper in terms of height, whereas a capsule network deepens in terms of nesting, or internal structure. Such a model is highly robust to the geometric distortions and transformations that result from non-ideal camera angles, and is thus able to handle orientations, rotations, and so on exceptionally well. (The CapsNet architecture is depicted in the paper: https://arxiv.org/pdf/1710.09829.pdf)

Key features

Layer-based squashing

In a typical Convolutional Neural Network, a squashing function is applied at each layer of the model. A squashing function compresses its input toward one of the ends of a small interval, introducing the nonlinearity that makes a neural network effective. In a capsule network, by contrast, the squashing function is applied to the vector output of each capsule. The squashing function proposed by Hinton and his co-authors in the paper is:

v_j = (||s_j||^2 / (1 + ||s_j||^2)) * (s_j / ||s_j||)

where s_j is the total input to capsule j and v_j is its vector output. Instead of applying nonlinearity to each individual neuron, the squashing function applies it to a group of neurons, that is, the capsule; more precisely, to the vector output of each capsule. The function squashes the output toward zero if the vector is small, and limits the output vector's length to just under 1 if the vector is long.
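Because the formula is compact, here is a quick toy sketch of it in plain JavaScript, purely to illustrate the math; it is not part of any real CapsNet implementation.

```javascript
// Squash a capsule's output vector: short vectors shrink toward zero,
// long vectors are scaled to a length just below 1.
function squash(s) {
  const normSq = s.reduce((sum, x) => sum + x * x, 0);
  if (normSq === 0) return s.map(() => 0); // avoid division by zero
  const scale = normSq / (1 + normSq) / Math.sqrt(normSq);
  return s.map(x => x * scale);
}

console.log(squash([0.1, 0.2])); // short input: output length ~0.048
console.log(squash([30, 40]));   // long input: output length ~0.9996
```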
Dynamic routing

The dynamic routing algorithm in CapsNet replaces the scalar-output feature detectors of the CNN with vector-output capsules. Likewise, the max-pooling step in CNNs, which led to positional invariance, is replaced with 'routing by agreement'. The algorithm ensures that, as data is forward propagated, it goes to the next most relevant capsule in the layer above. Although dynamic routing adds an extra computational cost to the capsule network, it has proven advantageous by making the network more scalable and adaptable.

Training the Capsule Network

The capsule network is trained using MNIST, a dataset of more than 60,000 handwritten digit images that is widely used to test machine learning algorithms. The capsule model is trained for 50 epochs with a batch size of 128, where each epoch is a complete run through the training dataset. A TensorFlow implementation of CapsNet based on Hinton's research paper is available on GitHub, and CapsNet can similarly be implemented in other deep learning frameworks such as Keras, PyTorch, and MXNet.

CapsNet is a recent breakthrough in the field of deep learning and promises to benefit organizations with accurate image recognition tasks. Implementations of CapsNet are slowly catching up and are expected to reach parity with CNNs. So far, though, it has been trained on a very simple dataset, MNIST, and it still has to prove itself on various other datasets. However, as time advances and we see CapsNet being trained in different domains, it will be exciting to discern how it moulds itself into a faster and more efficient training technique for deep learning models.

Why Phaser is a Great Game Development Framework

Alvin Ourrad
17 Jun 2014
5 min read
You may have heard about the Phaser framework, which is fast becoming popular and is considered by many to be the best HTML5 game framework out there at the moment. Follow along in this post, where I will go into some detail about what makes it so unique.

Why Phaser?

Phaser is a free, open source HTML5 game framework that allows you to make fully fledged 2D games in a browser, with little prior knowledge of either game development or JavaScript in general. It was built and is maintained by a UK-based HTML5 game studio called Photon Storm, directed by Richard Davey, a very well-known Flash developer and now full-time HTML5 game developer. His company uses the framework for all of their games, so the framework is updated daily and is thoroughly tested. The fact that the framework is updated daily might sound like a double-edged sword to you, but now that Phaser has reached its 2.0 version, there won't be any changes that break its compatibility, only new features, meaning you can download Phaser and be pretty sure that your code will work in future versions of the framework.

Phaser is beginner friendly!

One of the main strengths of the framework is its ease of use, and this is probably one of the reasons why it has gained such momentum in such a short amount of time (the framework is just over a year old). In fact, Phaser abstracts away all of the complicated math that is usually required to make a game by providing you with more than just game components. It allows you to skip the time you would spend thinking about how to implement a given feature and what level of calculus it requires. With Phaser, everything is simple.

For instance, say you want to shoot something using a sprite or the mouse cursor. Whether it is for a space invader or a tower defense game, here is what you would normally have to do to your bullet object (the following example uses pseudocode and is not tied to any framework):

```
var speed = 50;
var vectorX = mouseX - bullet.x;
var vectorY = mouseY - bullet.y;

// if you were to shoot a target, not the mouse
vectorX = targetSprite.x - bullet.x;
vectorY = targetSprite.y - bullet.y;

var angle = Math.atan2(vectorY, vectorX);

bullet.x += Math.cos(angle) * speed;
bullet.y += Math.sin(angle) * speed;
```

With Phaser, here is what you would have to do:

```
var speed = 50;
game.physics.arcade.moveToPointer(bullet, speed);

// if you were to shoot a target:
game.physics.arcade.moveToObject(bullet, target, speed);
```

The fact that the framework was used in a number of games during the latest Ludum Dare (a popular internet game jam) highly reflects this ease of use; there were about 60 Phaser games at Ludum Dare. To get started with learning Phaser, take a look at the Phaser examples, where you'll find over 350 playable examples. Each example includes a simple demo explaining how to do specific actions with the framework, such as creating particles, using the camera, tweening elements, animating sprites, using the physics engine, and so on. A lot of effort has been put into these examples, and they are all maintained, with new ones constantly added by the creator and the community.
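For context, here is a minimal sketch of the Phaser 2 game skeleton that a snippet like the one above would live inside; the asset path is a placeholder.

```javascript
// Minimal Phaser 2 skeleton: a bullet endlessly chases the mouse pointer.
var game = new Phaser.Game(800, 600, Phaser.AUTO, '', {
  preload: function () {
    game.load.image('bullet', 'assets/bullet.png'); // placeholder asset
  },
  create: function () {
    game.physics.startSystem(Phaser.Physics.ARCADE);
    this.bullet = game.add.sprite(400, 300, 'bullet');
    game.physics.arcade.enable(this.bullet);
  },
  update: function () {
    game.physics.arcade.moveToPointer(this.bullet, 50);
  }
});
```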
Phaser doesn't need any additional dependencies

When using a framework, you usually need an external device library, one for math and physics calculations, a time management engine, and so on. With Phaser, everything is provided, including a very exhaustive device class that you can use to detect the browser's capabilities; it is integrated into the framework and used extensively, both internally and in games, to manage scaling.

Yeah, but I don't like the physics engine…

Physics engines are usually a major feature in a game framework, and that is a fair point, since physics engines often have their own vocabulary and their own ways of dealing with and measuring things, and it's not always easy to switch from one to another. Physics engines were a really important part of the Phaser 2.0 release. As of today, there are three physics engines fully integrated into Phaser's core, with the possibility of creating a custom build of the framework to avoid bloated source code. A physics management module was also created for this release; it dramatically reduces the pain of making your own, or an existing, physics engine work with the framework. That was the main goal of this feature: to make the framework physics-agnostic.

Conclusion

Photon Storm has put a lot of effort into their framework, and as a result it has become widely used by both hobbyists and professional developers. The HTML5 game developers forum is always full of new topics, and the community is very helpful as a whole. I hope to see you there.

Elon Musk's tiny submarine is a lesson in how not to solve problems in tech

Richard Gall
11 Jul 2018
6 min read
Over the last couple of weeks, the world has watched as rescuers attempted to find, and then save, a young football team trapped in the Tham Luang caves in Thailand. Owing to a remarkable coordinated effort, and a lot of bravery from everyone involved (including one diver who died during the operation), all 12 boys were brought back to safety. Tech played a big part in the rescue mission too, from drones to subterranean radios. But it wanted to play a bigger role, or at least Elon Musk wanted it to.

Musk and his submarine have been a somewhat bizarre subplot to this story, and while you can't fault someone for offering to help out in a crisis, you might even say it was unnecessary. Put simply, Elon Musk's involvement in this story is a fable about the worst aspects of tech solutionism. It offers anyone working in tech an important lesson in how not to solve problems. Bringing a tiny submarine to a complex rescue mission that requires coordination between a number of different agencies, often operating from different countries, is a bit like telling someone to use Angular to build their first eCommerce store. It's like building an operating system from scratch because your computer has crashed. Basically, you just don't need it. There are better and more appropriate solutions, like Shopify or WooCommerce, or maybe just rebooting your system.

Lesson 1: Don't insert yourself in problems if you're not needed

Elon Musk first offered his support to the rescue mission in Thailand on July 4, in response to one of his followers:

https://twitter.com/elonmusk/status/1014509856777293825

Musk's first instincts were measured, saying that he suspected 'the Thai government has got this under control', but it didn't take long for his mind to change. Without any specific invitation or coordination with the parties leading the rescue mission, Musk's instinct to innovate and create kicked in. This sort of situation is probably familiar to anyone who works in tech, or, for that matter, anyone who has ever had a job. Perhaps you're the sort of person who hears about a problem and whose immediate instinct is to fix it. Or perhaps you've been working on a project, someone hears about it, and immediately they're trying to solve all the problems you've been working on for weeks or months. From one side it's appealing, but from the other it can be incredibly annoying and disruptive. This is particularly true in software engineering, where you're trying to solve problems at every level, from strategy to code. There's rarely a single solution, and there's always going to be a difference of opinion. At some point we need to respect boundaries and allow the right people to get on with the job.

Lesson 2: Listen to the people involved and think carefully about the problem you're trying to solve

One of the biggest challenges in problem solving is properly understanding the problem. It's easy to think you've got a solution after a short conversation about a problem, but there may be nuances you've missed or complexities that aren't immediately clear. Humility can be a very valuable quality when problem solving. It allows everyone involved to think clearly about the task at hand, and it opens up space for better solutions. As the old adage goes, when every problem looks like a nail, every solution looks like a hammer. For Musk, when a problem looks like kids stuck in an underwater cave, the solution looks like a kid-sized submarine. Never mind that experts in Thailand explained that the submarine would not be 'practical.'
For Musk, a solution is a solution. "Although his technology is good and sophisticated, it's not practical for this mission," said Narongsak Osatanakorn, one of the leaders of the rescue mission, speaking to the BBC and The Guardian.

https://twitter.com/elonmusk/status/1016110809662066688

Okay, so perhaps that's a bit of a facetious example, but it is a problem we can run into, especially if we work in software. Sometimes you don't need to build a shiny new SPA; your multi-page site might be just fine for its purpose. And maybe you don't need to deploy on containers; good old virtual machines might do the job for you. In these sorts of instances it's critical to think about the problem at hand. To do that well, you also need to think about the wider context around it. What infrastructure is already there? If we change something, is that going to have a big impact on how it's maintained in the future? In many ways, the lesson here recalls the argument put forward by the Boring Software Manifesto in June, in which the writer argued in favor of things that are 'simple and proven' over software that is 'hyped and volatile'.

Lesson 3: Don't take it personally if people decline your solutions

Problem solving is a collaborative effort, as we've seen. Offering up solutions is great, but it's not so great to react badly to rejection.

https://twitter.com/elonmusk/status/1016731812159254529

Hopefully, this doesn't happen too much in the workplace, but when your job is to provide solutions, it doesn't help anyone to bring your ego into it. In fact, it suggests selfish motives behind your creative thinking. This link between talent, status, and ego has been developing for some time now in the tech world. Arguably, Elon Musk is part of a trend of engineers (ninjas, gurus, wizards, whatever label you want to place on them) for whom problem-solving is as much an exercise in personal branding as it is actually about solving problems. This trend is damaging for everyone: it not only undermines people's ability to be creative, it transforms everyone's lives into a rat race for status and authority. That's not only sad, it's also going to make it hard to solve real problems.

Lesson 4: Sometimes collaboration can be more inspiring than Elon Musk

Finally, let's think about the key takeaway here: everyone in that cave was saved. And this wasn't down to some miraculous invention. It was down to a combination of tools, some of them pretty old. It wasn't down to one genius piece of engineering, but instead a combination of creative thinking and coordinated problem solving that used the resources available to bring a shocking story to a positive conclusion. Working in tech isn't always going to be a matter of life and death, but it's the collaborative and open world we want to work in, right?