
Tech Guides


Microsoft’s GitHub acquisition is good for the open source community

Pavan Ramchandani
19 Jul 2018
6 min read
"Microsoft buying GitHub is good news for open source." - Jim Zemlin, Executive Director of the Linux Foundation

Unless you have been living under a rock, you will have heard about software giant Microsoft's acquisition of the open source platform GitHub for $7.5 billion. Since the announcement a few weeks ago, discussions in the open source community have heated up over the future of open source. The acquisition has triggered a surge in the number of developers migrating to rival version control platforms such as BitBucket and GitLab - but mostly GitLab. This will affect GitHub's user base and, in turn, contributions to the platform, which are the primary source of funding that keeps any open source service alive. It goes to show how difficult it is to create a great product for developers and still make money. Microsoft has created great products for enterprises and has made money in the process. As such, this acquisition is one worth watching as it transforms both entities.

The common fear among developers is that Microsoft will exploit the limitations inherent to an open source platform and inject its subscription model into GitHub to make it profitable. The steep price Microsoft paid for GitHub does, after all, need to be recovered. However, it may not be that straightforward. Many believe it is not the platform's monetizing potential but its access to the user base that Microsoft is most interested in. Many also believe Microsoft has the potential to resurrect GitHub and revolutionize the open source movement. Let us explore some reasons why this acquisition could be fruitful for the developer community.

GitHub's losses have been significant

GitHub had reportedly been suffering losses and is said to have lost $66 million in 2016. The software industry is a fierce eat-or-get-eaten jungle. Losing out in the market to giant companies or emerging startups is a common fear. There is always an alternative tool for every developer need, as the software market relentlessly works to make things cheaper while offering variety, and startups are reaching the inflection point sooner in their operating cycle. The GitHub community is the platform's greatest strength and the reason the platform has remained operational through difficult times, but there was regular friction at the management level in GitHub. The strife became apparent with reports of developers feeling ignored by GitHub management. The founder, Chris Wanstrath, had to come out and address reports of a toxic work environment last year. With Microsoft buying GitHub, there should be a healthy cash flow for the projects in development, and management will be streamlined under Nat Friedman, who has been announced as the head of GitHub operations. Nat's successful history of leading open source projects such as Xamarin gives many hope that this time around, Microsoft really does mean well for GitHub with its acquisition.

The Azure cloud advantage for GitHub

One of the key challenges GitHub has faced lately is scaling its infrastructure smoothly without adversely impacting its users. Outages have become a common occurrence that most GitHub users are familiar with. Microsoft has a strong suite of cloud platforms and services in the form of Azure. GitHub users can expect a native experience of the Azure stack as part of the integration with GitHub.
This integration will further enhance collaboration on the GitHub platform for developers and advance the GitHub ecosystem.

Microsoft can integrate GitHub into its enterprise offerings

Over the last few years, GitHub has been attempting to extend its reach into the enterprise market with various offerings for business. However, those offerings were largely limited to private repositories for a fee. Microsoft, on the other hand, has been a leader in providing enterprise tools and venturing into the subscription market. This acquisition will excite brand-loyal enterprises that already use Microsoft suites. Imagine the new clientele GitHub now has access to thanks to Microsoft. Just as Microsoft has bundled Skype with its Office 365 suite, it is easy to imagine similar offerings designed for enterprises with GitHub at the center of such plans. Just like Excel, GitHub could end up as the default version control tool that enterprises use to build new projects and prototype ideas, open source or otherwise. In exchange, GitHub could be Microsoft's ace up its sleeve in strengthening its ties with the open source community, putting Microsoft in a position to inject innovative strategies into the community.

Microsoft's push into open source projects

Microsoft has plunged head first into open sourcing projects in recent years. The push is not only for its experimental projects but also for successful enterprise tools like .NET Core and Visual Studio Code. Historically, Microsoft took a lot of heat from the open source community for opposing the Linux model. But the recent paradigm shift at Microsoft, with a change in leadership and vision, is focused on working with the community and doing business with enterprises. At the end of last year, Microsoft joined the Linux Foundation and went platinum with the Open Source Initiative. TypeScript is a fully open source language that sees regular updates from Microsoft; it is now an established language for web development and is managed better than some other open source languages. TypeScript is also fully hosted on GitHub for developers to improve on. All of this indicates that Microsoft has been able to reach out to the community and can run open source projects without necessarily commercializing them.

Conclusion

Microsoft buying GitHub is not necessarily bad. The tech giant has been one of the biggest contributors to GitHub with projects like Visual Studio Code and TypeScript. While the panic is understandable, considering Microsoft's past strategies to counter the open source model in its early days, recent activity at Microsoft, especially under the leadership of Satya Nadella, suggests a paradigm shift in its approach to serving the IT market. You can hate Microsoft for being a profit-driven company, but there is no denying that it was one of the pioneers of the modern software industry and, more importantly, the bitter pill GitHub needs to climb out of an ever-growing loss-making sinkhole. Microsoft understands software businesses and is capable of doing open source the right way, and with more efficiency. This acquisition was arguably inevitable to sustain the platform and scale it to serve the increasing demands of the developer market. What Microsoft must bear in mind while revamping GitHub's policies and business model is that its greatest challenge and its greatest asset lie in the paradox of this alliance itself.
As GitHub becomes more profit conscious, Microsoft must become more community centric, so that an equilibrium is reached where developers can thrive on a platform that provides a great development and community experience.

The Microsoft-GitHub deal has set into motion an exodus of GitHub projects to GitLab
GitHub for Unity 1.0 is here with Git LFS and file locking support
Microsoft releases Open Service Broker for Azure (OSBA) version 1.0


What you need to know about Generative Adversarial Networks

Guest Contributor
19 Jan 2018
7 min read
[Editor's note: We have come to you with another guest post by Indra den Bakker, an experienced deep learning engineer and a mentor on Udacity for many budding data scientists. Indra has also written one of our best-selling titles, Python Deep Learning Cookbook, which covers solutions to various problems in modeling deep neural networks.]

In 2014, we took a significant step in AI with the introduction of Generative Adversarial Networks - better known as GANs - by Ian Goodfellow, amongst others. The real breakthrough of GANs didn't follow until 2016; however, the original paper already included many novel ideas that would be exploited in the years to come. Previously, deep learning had revolutionized many industries by achieving above-human performance, but many critics argued that these deep learning models couldn't compete with human creativity. With the introduction of GANs, Ian showed that these critics could be wrong.

Figure 1: example of style transfer with deep learning

The idea behind GANs is to create new examples based on a training set - for example, to demonstrate the ability to create new paintings or new handwritten digits. In GANs, two competing deep learning models are trained simultaneously. These networks compete against each other: one model tries to generate new realistic examples; this network is called the generator. The other network tries to classify whether an example originates from the training set or from the generator; this one is called the discriminator. In other words, the generator tries to mislead the discriminator by generating new examples. In the figure below we can see the general structure of GANs.

Figure 2: GAN structure with X as training examples and Z as noise input

GANs are fundamentally different from other machine learning applications. The task of a GAN is unsupervised: we try to extract patterns and structure from data without additional information, so we don't have a ground truth label. GANs shouldn't be confused with autoencoder networks. With autoencoders we know what the output should be: the same as the input. But in the case of GANs we try to create new examples that look like the training examples yet are different. It's a new way of teaching an agent to learn complex tasks by imitating an "expert". If the generator is able to fool the discriminator, one could argue that the agent has mastered the task - think of the Turing test.

The best way to explain GANs is to use images as an example, and the resulting output can be fascinating. The most used dataset for GANs is the popular MNIST dataset. This dataset has been used in many deep learning papers, including the original Generative Adversarial Nets paper.

Figure 3: example of MNIST training images

Let's say that as input we have a bunch of handwritten digits. We want our model to take these examples and create new handwritten digits - to learn to write digits in such a way that the results look handwritten. Note that we don't care which digit the model creates as long as it looks like one of the digits from 0 to 9. As you may suspect, there is a thin line between generating examples that are exact copies of the training set and newly created images. We need to make sure that the generator generates new images that follow the distribution of the training examples but are slightly different. This is where the creativity needs to come in. In Figure 2, we showed that the generator uses noise - random values - as input.
This noise is random, to make sure that the generator creates different output each time. Now that we know what we need and what we want to achieve, let's have a closer look at both model architectures, starting with the generator. We will feed the generator with random noise: a vector of 100 values randomly drawn between -1 and 1. Next, we stack multiple fully connected layers with the Leaky ReLU activation function. Our training images are grayscale and sized 28x28, which means that, flattened, we need an output of 784 units for the final layer of our generator - the output of the generator should match the size of the training images. As the activation function for our final layer we use TanH, to make sure the resulting values are squeezed between -1 and 1. The final model architecture of our generator looks as follows:

Figure 4: model architecture of the generator

Next, we define our discriminator model. The most common choice is a mirrored version of the generator, with 784 input values and, as the final layer, a fully connected layer with a single neuron and a sigmoid activation function for binary classification. Keep in mind that the generator and discriminator are trained at the same time. The model looks like this:

Figure 5: model architecture of the discriminator

In general, generating new images is the harder task. Therefore, it can sometimes be beneficial to train the generator twice for each step, whereas the discriminator is only trained once. Another option is to set the learning rate for the discriminator a bit lower than the learning rate for the generator. Tracking the performance of GANs can be tricky: a lower loss doesn't always mean better output. That's why it's a good idea to output the generated images during the training process. In the following figure we can see the digits generated by a GAN after 20 epochs.

Figure 6: example output of generated MNIST images

As stated in the introduction, GANs didn't get much traction until 2016. They were mostly unstable and hard to train: small adjustments to the model or training parameters resulted in unsatisfying results. Advancements in model architecture and other improvements fixed some of these limitations and unlocked the real potential of GANs. An important improvement was introduced by Deep Convolutional GANs (DCGANs). DCGAN is a network architecture in which both the discriminator and the generator are fully convolutional. The output is more stable, especially for datasets with higher translation invariance, like the Fashion MNIST dataset.

Figure 7: example of Fashion MNIST images generated by a Deep Convolutional Generative Adversarial Network (DCGAN)

There is so much more to discover with GANs, and huge potential still to be unlocked. According to Yann LeCun - one of the fathers of deep learning - GANs are the most important advancement in machine learning in the last 20 years. GANs can be used for many different applications, ranging from 3D face generation to upscaling the resolution of images and text-to-image synthesis. GANs might be the stepping stone we have been waiting for to add creativity to machines.
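To make the architecture described above more concrete, here is a minimal Keras-style sketch of the generator, the discriminator, and one adversarial training step. The 100-value noise vector, the 784-unit TanH output, and the single sigmoid output neuron follow the description in this article; the hidden layer sizes, optimizer settings, and batch size are illustrative assumptions rather than the exact configuration behind the figures.

```python
# Minimal sketch of the GAN building blocks described above (TensorFlow/Keras).
# Hidden-layer sizes and optimizer settings are assumptions for illustration.
import numpy as np
from tensorflow.keras import layers, models, optimizers

NOISE_DIM = 100   # random noise vector fed to the generator
IMAGE_DIM = 784   # 28x28 MNIST images, flattened

# Generator: noise in, fake "image" out, squeezed to [-1, 1] with tanh.
generator = models.Sequential([
    layers.Dense(256, input_dim=NOISE_DIM),
    layers.LeakyReLU(0.2),
    layers.Dense(512),
    layers.LeakyReLU(0.2),
    layers.Dense(IMAGE_DIM, activation="tanh"),
])

# Discriminator: mirrored shape, single sigmoid unit for real-vs-fake classification.
discriminator = models.Sequential([
    layers.Dense(512, input_dim=IMAGE_DIM),
    layers.LeakyReLU(0.2),
    layers.Dense(256),
    layers.LeakyReLU(0.2),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer=optimizers.Adam(1e-4), loss="binary_crossentropy")

# Combined model used to train the generator: freeze the discriminator weights
# so only the generator is updated when fakes are pushed towards the "real" label.
discriminator.trainable = False
gan = models.Sequential([generator, discriminator])
gan.compile(optimizer=optimizers.Adam(1e-4), loss="binary_crossentropy")

def train_step(real_images, batch_size=64):
    """One illustrative step on a batch of flattened, [-1, 1]-scaled images."""
    noise = np.random.uniform(-1, 1, size=(batch_size, NOISE_DIM))
    fake_images = generator.predict(noise, verbose=0)
    # Train the discriminator on real (label 1) and generated (label 0) examples.
    d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    # Train the generator (through the frozen discriminator) to make fakes look real.
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))
    return d_loss_real, d_loss_fake, g_loss
```

In a full training run you would loop train_step over shuffled batches of MNIST images and periodically save a grid of generated samples, which, as noted above, is a more reliable progress signal than the raw loss values.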
About the author

Indra den Bakker is an experienced deep learning engineer and mentor on Udacity. He is the founder of 23insights, part of NVIDIA's Inception program - a machine learning start-up building solutions that transform the world's most important industries. For Udacity, he mentors students pursuing a Nanodegree in deep learning and related fields, and he is also responsible for reviewing student projects. Indra has a background in computational intelligence and worked for several years as a data scientist for IPG Mediabrands and Screen6 before founding 23insights.


How has Python remained so popular?

Antonio Cucciniello
21 Sep 2017
4 min read
In 1991, the Python programming language was created. It is a dynamically typed, object oriented language that is often used for scripting and web applications today, usually paired with frameworks such as Django or Flask on the backend. Since its creation it has remained extremely relevant and is one of the most widely used programming languages in the world. But why is this the case? Today we will look at the reasons why Python has remained so popular over the last couple of years.

Used by bigger companies

Python is widely used by bigger technology companies. When bigger tech companies (think companies such as Google) use Python, the engineers who work there use it too. If developers use Python at their jobs, they will take it to their next job and pass the knowledge on. In addition, Python continues to spread organically as these developers use the language in their personal projects as well, further spreading its usage.

Plenty of styles are not used

In Python, whitespace is important, whereas in other languages such as JavaScript and C++ it is not. Whitespace is used to dictate the scope of the statements in an indented block. Making whitespace significant reduces the need for things like braces and semicolons in your code, and that reduction alone can make your code look simpler and cleaner. People are always more willing to try a language that looks and feels cleaner, because it seems psychologically easier to learn.

Variety of libraries and third-party support

Having been around as long as it has, Python has plenty of built-in functionality. It has an extremely large standard library with plenty of things you can use in your code. On top of that, it has plenty of third-party libraries that make things even easier. All of this gained functionality allows programmers to focus on the more important logic that is vital to their application's core functionality. This makes programmers more efficient, and who doesn't like efficiency?

Object oriented

As mentioned earlier, Python is an object oriented programming language. Because it is object oriented, more people are likely to adopt it, since object oriented programming allows developers to model their code closely on real-world behavior.

Built-in testing

Python allows you to import a package called unittest. This package is a full unit testing suite with setup and teardown functions. Having this built in gives developers a stable, readily available way to test their applications (see the short example below).

Readability and learnability

As we mentioned earlier, whitespace is significant, so we do not need brackets and semicolons. Python is also dynamically typed, so it is easier to create and use variables without really having to worry about the type. All of these topics can be difficult for new programmers to learn. Python makes it easier by removing some of the difficult parts and having nicer looking code. This reduction of difficulty leads people to choose Python as their first programming language more often than others. (It was my first programming language.)

Well documented

Building upon the standard library and the vast number of third-party packages, the code in those packages is usually well documented. They tend to have plenty of helpful comments and tons of additional documentation to explain what is happening. From a developer's standpoint this is crucial. Having great documentation can make or break a language's usage for me.
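As a small illustration of the built-in testing support mentioned above, here is a minimal unittest sketch. The slugify helper is a hypothetical function written for this example, not something from the standard library.

```python
import unittest

def slugify(title):
    """Hypothetical helper that turns an article title into a URL slug."""
    return title.lower().strip().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def setUp(self):
        # setUp runs before each test; here it just stores a sample title.
        self.title = "How has Python remained so popular?"

    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Python  "), "python")

if __name__ == "__main__":
    unittest.main()
```

Running the file executes both tests through unittest's built-in test runner, with no third-party dependencies required.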
Multiple applications

To top it off, Python can be used in many kinds of applications. It can be used to develop games for fun, web applications to aid businesses, and data science applications. This wide variety of uses attracts more and more people to Python, because when you learn the language you gain the power of versatility. The scope of applications is vast.

With all of these benefits, who wouldn't consider Python as their next language of choice? There are many options out there, but Python tends to be superior when it comes to readability, support, documentation, and its wide range of applications.

About the author

Antonio Cucciniello is a software engineer from New Jersey with a background in C, C++, and JavaScript (Node.js). His most recent project, Edit Docs, is an Amazon Echo skill that lets users edit Google Drive files using their voice. He loves building cool things with software, and reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and on GitHub here: https://github.com/acucciniello.


AWS Fargate makes Container infrastructure management a piece of cake

Savia Lobo
17 Apr 2018
3 min read
Containers such as Docker and FreeBSD Jails, among others, are a substantial way for developers to develop and deploy their applications. With container orchestration solutions such as Amazon ECS and EKS (Kubernetes), developers can easily manage and scale these containers, freeing them up to get on with other work. However, in spite of these management solutions, one still has to take into account infrastructure maintenance, availability, capacity, and so on, which are added tasks. AWS Fargate eases these tasks and streamlines deployments for you, resulting in faster completion of deliverables.

At re:Invent in November 2017, AWS launched Fargate, a technology that lets you manage containers without having to worry about the container infrastructure underneath. It is an easy way to deploy your containers on AWS. One can start using Fargate on ECS or EKS, try out processes and workloads, and later migrate more workloads to Fargate. It eliminates most of the management that containers otherwise require, such as resource placement, scheduling, and scaling. All you have to do is:

- Build your container image
- Specify the CPU and memory requirements
- Define your networking and IAM policies
- Launch your container application

Some key benefits of AWS Fargate

- It allows developers to focus on the design, development, and deployment of applications, eliminating the need to manage a cluster of Amazon EC2 instances.
- One can easily scale applications. Once application requirements such as CPU and memory are defined, Fargate manages the scaling and infrastructure needed to keep containers highly available. One can launch thousands of containers in no time and scale them to run mission-critical applications.
- AWS Fargate is integrated with Amazon ECS and EKS. Fargate launches and manages containers once the CPU and memory needed and the IAM policies the container requires are defined and uploaded to Amazon ECS.
- With Fargate, one gets flexible configuration options that match one's application's needs, and one pays with per-second granularity.

Adoption of container management is steadily increasing. Kubernetes is currently one of the most popular and widely used platforms for managing containerized applications. However, users and developers are often confused about who the best Kubernetes provider is. Microsoft and Google have their own managed Kubernetes services, but AWS Fargate adds further ease to Amazon's EKS (Elastic Container Service for Kubernetes) by eliminating the hassle of container infrastructure management.

Read more about AWS Fargate on AWS' official website.
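To give a feel for the four steps listed above, here is a minimal, hedged boto3 sketch that registers a Fargate task definition and launches it on ECS. The account ID, image URI, execution role, cluster name, and subnet are placeholders, and the parameters shown are only a subset of what a production deployment would specify.

```python
# Hypothetical example: launch a single web container on AWS Fargate with boto3.
# All identifiers below are placeholders for illustration only.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Step 1-3: describe the container image, CPU/memory, networking mode, and IAM role.
ecs.register_task_definition(
    family="demo-web",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",      # 0.25 vCPU
    memory="512",   # 512 MiB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-web:latest",  # placeholder
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Step 4: launch the container on Fargate - no EC2 instances to provision or manage.
ecs.run_task(
    cluster="demo-cluster",            # placeholder cluster name
    launchType="FARGATE",
    taskDefinition="demo-web",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],   # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
```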


7 Popular Applications of Artificial Intelligence in Healthcare

Guest Contributor
26 Jun 2018
5 min read
With the advent of automation, artificial intelligence (AI), and machine learning, we hear about their applications regularly in news across industries. This has been especially true for healthcare, where hospitals, health insurance companies, healthcare units, and others have been impacted by AI in more substantial and concrete ways than in other industries. In recent years, healthcare startups and life science organizations have ventured into artificial intelligence technology, and the space has become one of the areas most heavily invested in by VCs. Various organizations with ties to healthcare are leveraging advances in artificial intelligence algorithms for remote patient monitoring, medical imaging and diagnostics, and implementing newly developed, sophisticated methods and applications into the system. Let's explore some of the most popular AI applications that have revamped the healthcare industry.

Proper maintenance and management of medical records

Assembling, analyzing, and maintaining medical information and records is one of the most common applications of AI. With the coming of digital automation, robots are being used to collect and trace data for proper data management and analysis. This has brought down manual labor to a considerable extent.

Computerized medical consultation and treatment path

Medical consultation apps like DocsApp allow a user to talk to experienced specialist doctors on chat or call, directly from their phone, in a private and secure manner. Users can report their symptoms in the app, which ensures they are connected to the right specialist physicians as per their medical history. This has been made possible by AI systems. AI also aids in treatment design - analyzing data and making notes and reports from a patient's file - thereby helping to choose the right customized treatment for the patient's medical history.

Eliminates monotonous manual labor

Various medical tasks like analyzing X-ray reports, test reports, and CT scans can be executed by robots and other mechanical devices more accurately. Radiology is one discipline wherein human supervision and control have dropped to a substantial level due to the extensive use of AI.

Aids in drug manufacture and creation

Generally, billions of dollars are spent on developing pharmaceuticals through clinical trials, and it takes almost a decade or two to manufacture a life-saving drug. Now, with the arrival of AI, the entire drug creation procedure has been simplified and has become quite reasonable in cost as well. Even in the recent outbreak of the Ebola virus, AI was used for drug discovery, to redesign solutions, and to scan existing medicines that might help eradicate the disease.

Regular health monitoring

In the current era of digitization, wearable health trackers - like Garmin, Fitbit, and others - can monitor your heart rate and activity levels. These devices help users keep a close check on their health by setting up an exercise plan or reminding them to stay hydrated. All this information can also be shared with your physician, through AI systems, to track your current health status.

Helps in the early and accurate detection of medical disorders

AI helps in spotting carcinogenic and cardiovascular disorders at an early stage, and also aids in predicting health issues that people are likely to contract due to hereditary or genetic reasons.
Enhances medical diagnosis and medication management

Medical diagnosis and medication management are the ultimate data-based problems in the healthcare industry. IBM's Watson, a deep learning system, has simplified medical investigation and is being applied to oncology, specifically for cancer diagnosis. Previously, human doctors had to collect patient data, research it, and conduct clinical trials; with AI, the manual effort has reduced considerably. For medication management, certain apps have been developed to monitor the medicines a patient takes. The cellphone camera is used in conjunction with AI technology to check whether patients are taking their medication as prescribed. This also helps in detecting serious medical problems, tracking patients' adherence to medication, and monitoring participants' behavior in clinical trials.

To conclude, we are gradually embarking on a new era of cognitive technology driven by AI-based systems. In the coming years, we can expect AI to transform every area of the healthcare industry that it touches. Experts are constantly looking for ways to organize the existing structure and power up healthcare on the basis of new AI technology. The ultimate goals are to improve the patient experience, build better public health management, and reduce costs by automating manual labor.

Author Bio

Maria Thomas is the Content Marketing Manager and Product Specialist at GreyCampus, with eight years of rich experience in professional certification courses like PMI-Project Management Professional, PMI-ACP, Prince2, ITIL (Information Technology Infrastructure Library), Big Data, Cloud, Digital Marketing, and Six Sigma.

Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions
How IBM Watson is paving the road for Healthcare 3.0


13 reasons why Exit Polls get it wrong sometimes

Sugandha Lahoti
13 Nov 2017
7 min read
An exit poll, as the name suggests, is a poll taken immediately after voters exit the polling booth. Private companies working for popular newspapers or media organizations conduct these exit polls and are popularly known as pollsters. Once the data is collected, data analysis and estimation are used to predict the winning party and the number of seats captured. Turnout models, built using techniques such as logistic regression or random forests, are used to predict turnout in the exit poll results.

Exit polls depend on sampling, hence a margin of error does exist. This describes how close pollsters expect an election result to be relative to the true population value. Normally, a margin of error of plus or minus 3 percentage points is acceptable. In recent times, however, there have been instances where the poll average was off by a larger percentage. Let us analyze some of the reasons why exit polls can get their predictions wrong.

1. Sampling inaccuracy/quality

Exit polls are dependent on the sample size, i.e. the number of respondents or the number of precincts chosen. Incorrect estimation of this may lead to larger error margins. The quality of the sample data also matters. This includes factors such as whether the selected precincts are representative of the state, whether the polled audience in each precinct represents the whole, and so on.

2. The model did not consider multiple turnout scenarios

Voter turnout refers to the percentage of eligible voters who cast a vote during an election. Pollsters may misestimate the number of people who actually vote based on the total population eligible to vote. They also often base their turnout prediction on past trends. However, voter turnout depends on many factors. For example, some voters might not turn up due to indifference or a perception that their vote might not count - which is not true. In such cases, pollsters adjust the weighting to reflect high or low turnout conditions by keeping the total turnout count in mind. Observations taken during a low turnout are also considered and the weights adjusted accordingly. In short, pollsters try their best to stay faithful to the original data.

3. The model did not consider past patterns

Pollsters may make a mistake by not delving into the past. They can gauge current turnout rates by taking into account the turnout in presidential elections or previous midterm elections. Although one may assume that turnout percentages have been stable over the years, a check on past voter turnout is a must.

4. The model was not recalibrated for the year and time of the election, such as odd-year midterms

Timing is a very crucial factor in getting the right traction for people to vote. At times, certain social issues are much more hyped and talked about than the elections themselves. For instance, news of the Ebola virus outbreak in Texas was more prominent than news about the candidates standing in the 2014 midterm elections. Another example would be an election day set on a Friday versus any other weekday.

5. Number of contestants

Everyone has a personal favorite. In cases where there are just two contestants, it is straightforward to arrive at a clear winner. For pollsters, it is easier to predict votes when the whole world is talking about the race and they know which candidate is most talked about.
With an increase in the number of candidates, carrying out an accurate survey becomes more challenging for the pollsters. They have to reach out to more respondents to carry out the survey effectively.

6. Swing voters/undecided respondents

Another possible explanation for discrepancies between poll predictions and the outcome is a large proportion of undecided voters in the poll samples. Possible solutions could be:

- Asking relative questions instead of absolute ones
- Allotting undecided voters in proportion to party support levels while making estimates

7. Number of down-ballot races

Sometimes a popular party leader helps attract votes to another, less popular candidate of the same party. This is the down-ballot effect. At times, down-ballot candidates may receive more votes than party leader candidates, even when third-party candidates are included. Also, down-ballot outcomes tend to be influenced by the turnout for the polls at the top of the ballot, so the number of down-ballot races needs to be taken into account.

8. The cost incurred to commission a quality poll

A huge capital investment is required to commission a quality poll. The cost of a poll depends on contributing factors such as the sample size (the number of people interviewed), the length of the questionnaire (the longer the interview, the more expensive it becomes), and the time within which interviews must be conducted. Hiring a polling firm or including cell phones in the survey adds further to the expense.

9. Over-relying on historical precedence

Historical precedence is an estimate of the type of people who have shown up previously in a similar type of election. This precedent should be taken into consideration for better estimation of election results. However, care should be taken not to over-rely on it.

10. The effect of statewide ballot measures

Poll estimates also depend on state and local governments. Certain issues are pushed by local ballot measures. However, some voters feel that power over specific issues should belong exclusively to state governments, which causes opposition to local ballot measures in some states. These issues should be taken into account during estimation for better result prediction.

11. Oversampling due to factors such as faulty survey design or respondents' willingness/unwillingness to participate

Exit polls may also sometimes oversample voters for many reasons. One example relates to people in the US with cultural ties to Latin America. Although more than one-fourth of Latino voters prefer speaking Spanish to English, exit polls are almost never offered in Spanish. This can oversample English-speaking Latinos.

12. Social desirability bias in respondents

People may not always tell the truth about who they voted for. In other words, when asked by pollsters, they are likely to place themselves on the safer side, as exit polling is a sensitive topic. Voters may tell pollsters that they voted for a minority candidate when they actually voted against that candidate. Social desirability bias is not limited to issues of race or gender; people simply like to be liked, and like to be seen as doing what everyone else is doing or what the "right" thing to do is - they play safe.
Brexit polling, for instance, showed strong signs of social desirability bias.

13. The spiral of silence theory

People may not reveal their true thoughts to news reporters if they believe the media has an inherent bias. Voters may not come out and declare their stand publicly for fear of reprisal or isolation; they choose to remain silent. This too may hinder pollsters' estimates.

The above is just a short selection from a long list of reasons why exit poll results must be taken with a pinch of salt. However, even with all its shortcomings, the striking feature of an exit poll is that rather than predicting a future action, it records an action that has just happened, so you rely on present indicators rather than ambiguous historical data. Exit polls are also a cost-effective way of obtaining very large samples. If exit polls are conducted properly, keeping in mind the points described above, they can predict election results with greater reliability.
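As a toy illustration of the turnout models mentioned at the start of this article, here is a hedged scikit-learn sketch that fits a logistic regression to synthetic voter data. The features, coefficients, and data are invented for the example; real pollsters would draw on past vote history, demographics, and precinct-level information, and might use random forests instead.

```python
# Toy turnout model: predict whether a registered voter will turn out.
# All data below is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
age = rng.integers(18, 90, n)
voted_last_time = rng.integers(0, 2, n)
contacted_by_campaign = rng.integers(0, 2, n)

# Synthetic ground truth: older voters and past voters are more likely to turn out.
logit = -2.0 + 0.03 * age + 1.5 * voted_last_time + 0.5 * contacted_by_campaign
turned_out = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([age, voted_last_time, contacted_by_campaign])
X_train, X_test, y_train, y_test = train_test_split(X, turned_out, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("turnout probability for a 30-year-old, first-time, contacted voter:",
      model.predict_proba([[30, 0, 1]])[0, 1])
```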

How to develop a tech strategy

Hari Vignesh
26 Sep 2017
5 min read
Technology has never been as fundamental, as strategic, and as important as it is in the digital age. It is being used to create new business models, products, and services, enhance existing offerings, and create deeper, more rewarding customer experiences. As such, businesses need to develop the right technology and IT strategy for success.

What is a tech strategy?

Technology strategy (information technology strategy or IT strategy) is the overall plan consisting of the objectives, principles, and tactics relating to the use of technologies within a particular organization. Such strategies primarily focus on the technologies themselves and, in some cases, on the people who directly manage those technologies. The strategy can be implied from the organization's behavior toward technology decisions, and may be written down in a document. In other words, technology strategy is the task of building, maintaining, and exploiting a company's technological assets.

Why do I need a tech strategy?

To compete in the new world of dynamic and disrupted digital markets, organizations need to be able to operate at the speed of digital; they need to be able to respond quickly and easily to changing market conditions, customer preferences, or competitor activity.

The traditional approach to IT strategy

The traditional approach to developing a new technology strategy involves a fairly structured, sequential process that produces a long-term view of the organization's technology requirements together with a plan for meeting those needs. The main steps of the classic approach are:

1. Identify the business capabilities that will be needed over the next 3-5 years to support the organization's strategy and realize its vision.
2. Assess the gap between the organization's current maturity in each capability and the level required to realize the vision.
3. Identify how technology can be used to address any gaps between the current and required maturity level of each business capability.
4. Design the target technology architecture that will support the required business capabilities.
5. Assess the gap between the organization's current and target technology architecture.
6. Develop a prioritized roadmap for building the target technology architecture.

The Agile approach to tech strategy

The agile approach to technology strategy is based on many of the same activities as the classic approach, but with some key differences that take into account the need for speed and flexibility. Typical steps include:

1. Identify the business capabilities that will be needed over the period covered by the organization's current strategy and vision.
2. Develop a high-level technology vision that describes the key features or characteristics the organization's technology platform will need in order to support the organization's strategy.
3. Agree on the planning horizon to be covered by the technology strategy (organizations faced with fast-changing markets may need to work on a 6-12-month horizon, whereas companies in more stable markets may select a 12-24-month planning period).
4. Determine the business capabilities that will take priority during the agreed planning horizon and assess the gaps between the current and required level of each business capability.
5. Identify and prioritize the technology initiatives required to address any gaps between the current and required level of the priority business capabilities.
6. Develop a roadmap showing the initiatives that will be delivered during the agreed planning period.
7. Repeat steps 3-6 towards the end of the current planning horizon.
8. Repeat steps 1-6 whenever the organization's vision and strategy are updated.

When the business is the tech strategy

In cases where technology is used as the starting point for a new business model, or to create completely new products or services, the business strategy will itself be based on technology. There is an argument that, in such instances, there is no need for a separate technology strategy, as the technology initiatives, investments, and priorities are an integral part of the business strategy - and the CIO and the IT function will be key players in the definition of that strategy. As with the agile approach, this "no separate strategy" case still depends on the IT function developing and maintaining key architectural artifacts to support the business strategy, and to shape and guide technology decisions.

How you can develop an effective IT strategy

For a strategy to be effective, it should answer questions of how to create value, deliver value, and capture value. To create value, one needs to trace the technology back and forecast how it will evolve, how market penetration will change, and how to organize effectively. To capture value, you should know how to compete to gain a competitive advantage and sustain it, and how to compete in cases where technology standards are important. The final step is delivering the value, where firms define how to execute the strategy, make strategic decisions, and take decisive action.

In short, whether it's a pure IT business or an IT-dependent business, tech strategy plays a key role in handcrafting the organization's future. It's high time to craft your firm's strategy if you don't have one, using any of these approaches.

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.


Developers are today's technology decision makers

Richard Gall
01 Aug 2017
3 min read
For many years, technology in large organizations has been defined by established vendors. Oracle. Microsoft. Huge corporations were setting the agenda when it came to the technology being used by businesses. These tech organizations provided solutions - everyday businesses simply signed themselves up.

But this year's Skill Up survey painted an interesting picture of a world in which developers and tech professionals have a significant degree of control over the tools they use. When we asked respondents how much choice they have over the tools they use at work, half said they have at least a significant amount of choice over the software they use. This highlights an important fact of life for tech pros, engineers, and developers across the globe: your job is not just about building things and shipping code, it's also about understanding the tools that are going to help you do that.

To be more specific, what this highlights is that open source is truly mainstream. What evolved as a cultural niche of sorts in the late nineties has become fundamental to the way we understand technology today. Yes, it's true that large tech conglomerates like Apple, Facebook, and Google have a huge hold on consumers across the planet, but they aren't encouraging lock-in in the way that the previous generation of tech giants did. In fact, they are actually pushing open source into the mainstream. Facebook built React; Google are the minds behind Golang and TensorFlow; Apple have done a lot to evolve Swift into a language that may come to dominate the wider programming landscape. We are moving to a world of open systems, where interoperability reigns supreme. Companies like Facebook, Google, and Apple want consumer control, but when it comes to engineering and programming they want to be empowering people - people like you.

If you're not convinced, take the case of Java. Java is interesting because, in many respects, it's a language that was representative of the closed systems of enterprise tech a decade ago. But its function today has changed - it's one of the most widely used programming languages on GitHub, being used in a huge range of open source projects. C# is similar - in it you can see how Microsoft's focus has changed, the organization's stance on open source softening as it becomes more invested in a culture where openness is the engine of innovation.

Part of the reason for this is broader economic change in the very foundations of how software is used today and what organizations need to understand. As trends such as microservices have grown, and as APIs become more important to the development and growth of businesses - those explicitly rooted in software or otherwise - software necessarily must become open and changeable. And, to take us back to where we started, the developers, programmers, and engineers who build and manage those systems must be open and alive to the developing landscape of software they can use in the future.

Decision making, then, is a critical part of what it means to work in software. That may not always have been the case, but today it's essential. Make sure you're making the right decision.

Read this year's Skill Up report for free.


Davos Elite weigh in on Globalization 4.0 and digital economy at the World Economic Forum 2019

Prasad Ramesh
18 Feb 2019
8 min read
At the World Economic Forum 2019, top executives from various industries shared their views on the digital economy in relation to Globalization 4.0. The participants in the discussion were Rajeev Suri, Nokia CEO; Ken Hu, deputy chairman of Huawei; Abidali Neemuchwala, Wipro CEO; Alfred F. Kelly Jr, Visa CEO; and Eileen Donahoe, a UN ambassador.

With the digital economy and economic progress, social outcomes are changing fast. The topic explored in the discussion was the tension between the rate of technological progress and economic development on the one hand, and the social outcomes driven by these factors on the other - whether these things are connected or becoming decoupled, and whether there is tension between them.

Ken Hu, Huawei

The digital economy is driven by digital technology. Hu thinks that 2019 could be a big year for technologies, as many of them - IoT, AI, blockchain, and 5G - are at a tipping point. 5G is ready, and 5G-enabled smartphones will be in the market by June 2019. He explains that 5G will bring benefits to both consumers and manufacturers. For example, consumers can download HD videos in seconds, and manufacturers can use the superior speeds for purposes like smart manufacturing, autonomous driving, and remote surgery. Focusing on skill development can help societies embrace the benefits of the digital economy. This requires joint effort from both government and industry. Leveraging the changing technology itself - training employees on demand, as a service - can help with upskilling.

Social impact

While creating the next version of globalization, Globalization 4.0, social value should be a key consideration. Hu shared an example of a food supply shortage he saw growing up. Farmers in a specific region of China used IoT and big data to recover soil for agriculture. They were able to recover 5% of usable farmland, which can provide food for 80 million people. Hu believes that such success can be replicated in every industry and country.

Abidali Neemuchwala, Wipro

Neemuchwala thinks that three things will be - or rather need to be - different in Globalization 4.0:

- Much more human-centric
- Inclusiveness
- Sustainability

There needs to be growth beyond being "localized while globalized". He thinks that people should be given opportunities in the long term so that the disparity created by Globalization 3.0 is minimized.

What would you do to improve inclusiveness in your organization using the digital economy?

Winning employee trust is a priority, and he found two things that worked well for Wipro:

- The larger purpose of the organization beyond business
- Investment and reskilling

Enabling teachers with technology, by creating networks of teachers where they can learn from one another, leads to growth. He says that his firm has given agriculturists and fishermen the means to achieve price democratization by taking out the middleman. This, he says, enables inclusion and helps create a positive narrative.

How do you make the focus on customer trust a reality?

He says that Wipro is winning customer trust despite being a B2B business. The most difficult thing for a CEO today is deciding how to use their own revenue to prioritize the customer. It starts with the employees of the organization, and with surprising the customer in unexpected ways. This may not be good for the company in the short term, as it requires investment, but in the long term it puts the customer first.

Rajeev Suri, Nokia

Suri believes Globalization 4.0 will address the productivity paradox.
The previous version, Globalization 3.0, didn't really address productivity despite data centers, smartphones, social media, and so on. In the US, the digital economy has seen productivity growth of 2.7% per annum and the physical economy 0.7% per annum. There will eventually be a tipping point where productivity starts to meaningfully increase, and Suri thinks it will be 2028 for the US. From a globally centralized world, we'll see a move towards more decentralized systems. Such decentralized systems will facilitate the global-local concept that Neemuchwala mentions.

What would you do to improve inclusiveness in your organization using the digital economy?

People are joining for the purpose of the company and staying for the culture. Suri wants to use digital technology to battle complexity and simplify employees' daily lives. He thinks that the purpose of technologies like AI and 5G is to simplify the work of factory workers, for example, not to replace them. He doesn't think these new technologies will necessarily reduce jobs, but occupational changes will happen. In such a scenario, purposeful reskilling is important.

Decentralization and 5G

The whole notion of 5G is going to be decentralization, due to the benefit of low latency. There will be more focus on local economies in the next generation of technology, and there is potential to bring power back to local economies with this shift.

Who is going to address the trust deficit - governments or organizations?

People value their data and want to be aware of trustworthy services. Suri thinks the deficit is going to be addressed by governments and businesses together.

Eileen Donahoe, UN

The big trend she sees is a dramatic swing from optimism to pessimism about the effects of digital technology on society and people - the "tech lash". There are two big areas of discontent in the tech lash:

- Economic inclusion. Wealth distribution challenges are "now on steroids", and there are concerns about massive labor displacement.
- Trustworthiness, which relates to politics, civil liberties, and democracy.

The digitization of society has led to an erosion of privacy; people are now understanding that privacy matters to the exercise of liberty. If everything you say is monitored, you become more conscious of what you say. Digitization has also made everything, society-wide, less secure. There is a great sense of vulnerability which neither the private nor the public sector is able to address completely. In the last few years, there has also been a fear of the cross-border weaponization of information. Along with economic growth, citizens' liberty, security, and democratic processes need to be protected. This calls for a new governance model. We need to push beyond national boundaries, just as multinational private organizations have. A governance model that brings citizens, civil society, and other stakeholders into the picture can increase the accountability of corporations.

Could basic needs be financed by an automation tax, so everybody can live without the need to work?

The dignity of work is critically important, so just handing out money won't really solve problems.

Alfred F. Kelly Jr, Visa

Kelly thinks that connecting and improving the world actually shrinks it, meaning that more becomes accessible to people, countries, and so on.
He lists three major factors:

- Innovation, where there are efforts to solve real problems
- Partnership, where companies and governments collaborate to solve bigger issues
- Consumer-centric thinking, considering that e-commerce is growing 4x faster than brick and mortar

Customers want convenience, security, and privacy. Is it possible to have it all, or do customers have to make choices?

He thinks it is possible to have it all; customers deserve a product they can trust all the time. Tech industries are trying to create ubiquity around the world. The most precious asset in the digital economy is trust, and people need to be able to trust. For financial inclusion, financial literacy is important. People need to be educated so that they build up trust, and it is a big focus area.

Are IT industries doing anything to reduce energy consumption?

"We are committed to operating our data centers 100% on renewable electricity by the end of next year."

What to make of all this?

The focus seems to be on 5G and its benefits, for consumers and, of course, for the tech organizations. I think the discussions were skewed towards a bird's-eye view, and the top executives can't really relate to problems on the ground. The truth is that companies will lay off employees if growth slows down. At the end of the day, the CEOs have to answer to their boards. Don't get me wrong, being a CEO is a tough job, as you can imagine. The discussions look good on paper, but I have my doubts about implementing concepts like these at scale.

These were the highlights of the talk on the Strategic Outlook on the Digital Economy at WEF Davos 2019. For more detailed discussions, you can view the YouTube video.

What the US-China tech and AI arms race means for the world - Frederick Kempe at Davos 2019
Is Anti-trust regulation coming to Facebook following fake news inquiry made by a global panel in the House of Commons, UK?
Google and Ellen MacArthur Foundation with support from McKinsey & Company talk about the impact of Artificial Intelligence on circular economy


Are containers the end of virtual machines?

Vijin Boricha
13 Jun 2018
5 min read
For quite some time now, virtual machines (VMs) have enjoyed a lot of traction. The major reason for this trend was that IT organizations became convinced that instead of having a huge room filled with servers, it is better to deploy all of your workloads on a single piece of hardware. There is no doubt that virtual machines have succeeded, as they save a lot of cost and work pretty well, making failovers easier.

In a similar sense, when containers were introduced they received a lot of attention and have recently gained even more popularity amongst IT organizations. There is a set of considerable reasons for this buzz: they are highly scalable, easy to use, portable, faster to start, and cost effective. Containers also reduce management headaches as they share a common operating system. With this kind of flexibility it is much easier to fix bugs, apply update patches, and make other alterations. All in all, containers are lightweight and more portable than virtual machines. If all of this is true, are virtual machines going extinct? To answer that, you will have to dive into the complexities of both worlds.

How do virtual machines work?

A virtual machine is an individual operating system installed on top of your usual operating system. The entire implementation is done through software emulation and hardware virtualization. Usually, multiple virtual machines are used on servers, where the physical machine remains the same but each virtual environment runs a completely separate service. Consider an Ubuntu server as a VM and use it to install any services you need. Now, if your deployment needs a set of software to handle web applications, you provide all the necessary services to your application. Suddenly there is a requirement for an additional service, and your situation gets tighter, as all your resources are already occupied. All you need to do is install the new service on the guest virtual machine and you are all set.

Advantages of using virtual machines:

- Multiple OS environments can run simultaneously on the same physical machine
- Easy maintenance, high availability, convenient recovery, and application provisioning
- Virtual machines tend to be more secure than containers
- Operating system flexibility on VMs is better than that of containers

Disadvantages of using virtual machines:

- Simultaneously running virtual machines may introduce unstable performance, depending on the workload placed on the system by the other running virtual machines
- Hardware access becomes quite difficult with virtual machines
- Virtual machines are heavier in size, taking up several gigabytes

How do containers work?

You can consider containers as lightweight, executable packages that provide everything an application needs to run and function as desired. A container usually sits on top of a physical server and its host OS, allowing applications to run reliably in different environments by abstracting away the operating system and physical infrastructure. So where VMs depend entirely on hardware, we have a new popular kid in town that requires significantly less hardware and does the job with ease and efficiency. Suppose you want to deploy multiple web servers faster: containers make that easier, because when you are deploying single services, containers require less hardware compared to virtual machines. A quick sketch of this single-service scenario follows below.
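Here is a minimal, hedged sketch of that scenario using the Docker SDK for Python (the docker package). It assumes Docker is installed and the daemon is running; the image, port mapping, and container name are illustrative choices, not requirements.

```python
# Run a single web-server container programmatically with the Docker SDK for Python.
# Assumes the Docker daemon is running locally; image, port, and name are examples.
import docker

client = docker.from_env()

# Pull and run an nginx web server in the background, mapping container port 80
# to port 8080 on the host.
container = client.containers.run(
    "nginx:latest",
    detach=True,
    ports={"80/tcp": 8080},
    name="demo-web",
)

print(container.status)          # e.g. "created" or "running"
print(client.containers.list())  # the new container appears alongside anything else running

# Tear it down when finished - there is no guest operating system to shut down.
container.stop()
container.remove()
```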
Docker, a popular container solution, can also join multiple Docker engines into a cluster that is managed as a single virtual system. So if you are looking to deploy apps at scale with fewer failovers, your first preference should be containers.

Advantages of using containers
- You can add more computing workload to the same server, as containers consume fewer resources
- Servers can host more containers than virtual machines, because container images are usually measured in megabytes
- Containers make it easier to allocate resources to processes, which helps you run your applications in different environments
- Containers are a cost-effective solution that helps decrease both operating and development costs
- Bug tracking and testing are easier with containers, as there is no difference between running your application locally, on test servers, or in production
- Development, testing, and deployment times all decrease with containers

Disadvantages of using containers
- Since containers share the kernel and other components of the host operating system, they are more exposed, and a compromise in one container can impact the security of other containers as well
- Lack of operating system flexibility: every time you want to run a container on a different operating system, you need to start a new server

Now, coming to the original question: are containers worth it? Will they eliminate virtualization entirely? Having weighed the advantages and disadvantages of each platform, you have probably already guessed the answer. In virtual machines, the hardware is virtualized to run multiple operating system instances. If you need a complete platform that can provide multiple services, virtual machines are your answer, as virtualization is a mature and secure technology. If you are looking for high scalability, agility, speed, a light footprint, and portability, all of that comes under one hood: containers. With this standardised unit of software, you can stay ahead of the competition. And if you still have concerns about security and how a vulnerable kernel could jeopardize the whole cluster, DevSecOps is your knight in shining armor. The whole idea of DevSecOps is to bring operations and development together with security functions; in a nutshell, everyone involved in the software development life cycle is responsible for security.

Kubernetes Containerd 1.1 Integration is now generally available
Top 7 DevOps tools in 2018
What's new in Docker Enterprise Edition 2.0?

What’s next after Angular? Take the Meteor challenge!

Sarah C
28 Nov 2014
4 min read
This month the Meteor framework hit version 1.0. We've been waiting to see this for a while here at Packt, and have definitely not been disappointed. Meteor celebrated the launch with a bang – Meteor Day saw old hands and n00bs from around the globe gather together to try out the software and build new things. You might have experienced the reverberations across the Web. Was it a carefully crafted and clever bit of marketing? Obviously. But in Meteor's case, we can forgive a little fanfare.

Maybe you're jaded and worn out by the barrage of new tools for web development. You should make an exception for Meteor. Maybe JavaScript isn't your thing, and you don't have any interest in working with Node on the backend. You should make an exception for Meteor. I'm not trying to shill anything here – every resource I'll mention in the course of this post is entirely free. I just think the Meteor web application stack is something special.

Why does Meteor matter for a modern Web?

If you haven't come across it before, Meteor is a full-stack JavaScript framework for the modern Web. It's agnostic about how you want to structure your app – MVC, MVVM, MVW, stick everything in one folder with filenames such as TestTemplate(2).js – hey, man, you do you! As long as you keep your client and server concerns separate (there are special built-in rules for the client, server, and public folders to help it do its synchronisation magic), Meteor won't judge. The framework's clarion cry is that creating application software should be radically simple.

We all know that the Web looks different now than it did even a couple of years ago. The app is queen. Single-page web apps have made the Internet programmatic and reactive. The proliferation of mobile apps redefining the online path between customers and businesses is moving us even further away from treating the Internet as a static point of reference. "Pages" are a less and less accurate metaphor for how we visualize our shared digital realm. Today's Internet is deep, receptive, active, and aware. Given that, it's hard to argue against making JavaScript app development simpler. Simple doesn't mean shoddy or hacky. It comes from thinking about the Web as it exists now and making the right demands of a framework. Meteor.js lives its philosophy – a multi-user, real-time web app can be put together in a couple of hours, with time to spare for pretty UI design and to window-shop for packages. Don't believe me? Try it out for yourself!

Throwing down the gauntlet

Originally, we wanted to run a Meteor challenge for the staff here in our Birmingham offices. The winner would have gotten something sweet – perhaps an extra turn on the water slide, or an exemption from her turn feeding the Packt scorpions. Alas, in the end the obligation to get on with our actual jobs (helping you guys learn software) got in the way of making this happen. So I'm outsourcing the challenge to you, dear reader.

Your mission:
- Download Meteor 1.0
- Prototype an app
- Use the time left over to feel pleased with yourself

You get extra credit if:
- The app has a particular appeal for book lovers (like us!), or
- It contains a good pun

If you're a Linux or Mac user you can get started right away. If you're on Windows, you'll need to use a virtual environment, either in your browser or using something like Vagrant. Don't worry, the Meteor site has tutorials to get you started in a trice. After that, you can check out all kinds of great learning resources made available by the devs and the community.
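To give you a taste of that "radically simple" claim before you dive into those resources, here is a minimal, illustrative sketch of a reactive Meteor 1.0 page. It is not a complete app: the collection name, template name, and sort field are invented for this example, and it assumes a matching <template name="bookList"> block in your app's HTML file.

```javascript
// Shared between client and server: a collection that Meteor keeps in sync.
Books = new Mongo.Collection('books');

if (Meteor.isClient) {
  Template.bookList.helpers({
    books: function () {
      // This helper re-runs reactively whenever the underlying data changes,
      // so the template updates itself with no manual refresh logic.
      return Books.find({}, { sort: { title: 1 } });
    }
  });
}
```

In a freshly created Meteor app (which ships with the autopublish and insecure packages enabled by default), inserting a document into Books from one browser updates every other connected client's list automatically; that is the multi-user, real-time behaviour you would otherwise have to wire up by hand.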
Get started with the official docs and tutorial, then move on to more hardcore tips and tricks at BulletProof Meteor. The more aurally inclined and those of you who like to code while you drive might prefer to check out the Meteor Podcast. (Please do not code while you drive! – The Legal Team.) When you get stuck, hit up the community on the G+ group. Or browse MeteorHelp for a collation of other sources of information. Most importantly, let me know how you get on with it! We’re excited to see what you come up with. Do you see yourself making Meteor part of your workflow in future? Check out our JavaScript Tech Page for more insight into Meteor and full-stack JS development.


Frontend Frameworks: Bootstrapping for Beginners

Ed Gordon
30 Jun 2014
3 min read
I was on the WebKit.org site the other day, and it struck me that it was a fairly ugly site for the home page of such a well-known browser engine. A lime-green-to-white background transition, drop-shadow headers. It isn't even responsive; what? I don't want to take anything away from its functionality – it works perfectly well – but it did bring to mind the argument about frontend frameworks and the beautification of the Internet.

When the Internet started to become a staple of our daily computing, it was an ugly place. Let's not delude ourselves into thinking every site looked awesome. The BBC, my home page since I was about 14, looked like crap until about 2008. As professional design started improving, it left "home-brew" sites looking old, hacky, and unloved. Developers and bedroom hacks, not au fait with the whims of JavaScript or jQuery, were left with an Internet that still looked prehistoric. A gulf formed between the designers who were getting paid to make content look better and those who wanted to but didn't have the time. It was the haves and the have-nots.

Whilst the beautification of websites built by the "common man" is a consequence of dozens of tools emerging in the open source arena, I'm ascribing the flashpoint to Twitter Bootstrap. Yes, you can sniff a Bootstrap site a mile off; yes, it loads a bit slower (except for the people who use Bootstrap – me); and yes, some of the mark-up syntax is woeful. It does remain, however, a genuine enabler of web design that doesn't suck.

The clamor of voices calling out Bootstrap for the reasons mentioned above has, I think, really misunderstood who should be using this tool. I would be angry if I paid a developer to knock me up a hasty site in Bootstrap. Designers should only be using Bootstrap to knock up a proof of concept (Rapid Application Development) before building a bespoke site and living fat off the commission. If, however, someone asked me to make a site in my spare time, I'm only ever going to be using Bootstrap (or, in fairness, Foundation), because it's quick, it's easy, and I'm just not that good with HTML, CSS, or JavaScript (though I'm learning!).

Bootstrap, and tools like it, abstract away a lot of the pain that goes into web development (really, who cares if your button is the same as someone else's?) for people who just want to add their voice to the sphere and be heard. Having a million sites that look similar but nice is, to me, a better scenario than having a million sites that are different and look like the love child of a chalkboard and MS Paint. What's clear is that it has home-brew developers contributing to the conversation about the presentation of content: layout, typography, iconography. Anyone who wants to moan can spend some time on the Wayback Machine.


Deep reinforcement learning - trick or treat?

Bhagyashree R
31 Oct 2018
2 min read
Deep Reinforcement Learning (Deep RL) is the new buzzword in the machine learning world. Deep RL is an approach that combines reinforcement learning and deep learning in order to achieve human-level performance. It brings together reinforcement learning's self-learning approach, which discovers the strategies that lead to the greatest long-term reward, with deep learning's ability to let agents construct their own knowledge directly from raw inputs.

The fusion of these two approaches produced a string of algorithms, starting with DeepMind's Deep Q-Network (DQN), a deep variant of the Q-learning algorithm. This algorithm reached human-level performance in playing Atari games: combining Q-learning with reasonably sized neural networks and some optimization tricks, you can achieve human or superhuman performance in several Atari games. Deep RL was also behind one of the most notable advances in the game of Go: DeepMind's AlphaGo beat the top human players Lee Sedol (4-1) and Fan Hui (5-0). DeepMind later released advanced versions of its agent, AlphaGo Zero and AlphaZero. Many recent works from researchers at UC Berkeley have shown how both reinforcement learning and deep reinforcement learning enable the control of complex robots, for locomotion as well as navigation.

Despite these successes, it is quite difficult to find cases where deep RL has added practical, real-world value; for now it is still primarily a research topic. One of its limitations is that it assumes the existence of a reward function, which is either given or hand-tuned offline. To get the desired results, your reward function must capture exactly what you want, and RL has an annoying tendency to overfit to your reward, producing things you did not expect. This is one reason Atari is such a popular benchmark: not only is it easy to get a lot of samples, but the goal is fairly straightforward, i.e., to maximize the score.

With so many researchers working towards improved Deep RL algorithms, it surely is a treat.

AlphaZero: The genesis of machine intuition
DeepMind open sources TRFL, a new library of reinforcement learning building blocks
Understanding Deep Reinforcement Learning by understanding the Markov Decision Process [Tutorial]

What you missed at last week’s ICML 2018 conference

Sugandha Lahoti
18 Jul 2018
6 min read
The 35th International Conference on Machine Learning (ICML) 2018 took place from July 10 to July 15, 2018 in Stockholm, Sweden. ICML is one of the most anticipated conferences for every data scientist and ML practitioner, and it features some of the best ML researchers, who come to talk about their research and discuss new ideas.

It won't be wrong to say that deep learning and its subsets were the showstoppers of this conference, with a large number of research papers and AI professionals applying them in their methods. These included sessions and paper presentations on Gaussian Processes, Networks and Relational Learning, Time-Series Analysis, Deep Bayesian Non-parametric Tracking, Generative Models, and more. Other subsets such as Representation Learning, Ranking and Preference Learning, Supervised Learning, and Transfer and Multi-Task Learning were also heavily featured. The conference consisted of one day of tutorials (July 10), followed by three days of main conference sessions (July 11-13), followed by two days of workshops (July 14-15).

Best Talks and Seminars of ICML 2018

ICML 2018 featured two informative talks dealing with the applications of Artificial Intelligence in other domains. Day 1 was inaugurated by an invited talk from Prof. Dawn Song on "AI and Security: Lessons, Challenges and Future Directions". She talked about the impact of AI on computer security, differential privacy techniques, and the synergy between AI, computer security, and blockchain. She also gave an overview of challenges and new techniques to enable privacy-preserving machine learning.

Day 3 featured an invited talk by Max Welling on "Intelligence per Kilowatt hour", focusing on the connection between physics and AI. According to Max, in the coming future, companies will find it too expensive to run the energy-hungry ML tools that power their AI engines, or the heat dissipation in edge devices will be too high to be safe. So the next frontier of AI is going to be finding the most energy-efficient combination of hardware and algorithms.

There were also two plenary talks: Language to Action: towards Interactive Task Learning with Physical Agents, by Joyce Chai, and Building Machines that Learn and Think Like People, by Josh Tenenbaum.

Best Research Papers of ICML 2018

Among the many interesting research papers submitted to the ICML 2018 conference, here are the winners.

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples by Anish Athalye, Nicholas Carlini, and David Wagner received a Best Paper award. The paper identifies obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. The authors identify three different types of obfuscated gradients and develop attack techniques to overcome them.

Delayed Impact of Fair Machine Learning by Lydia T. Liu, Sarah Dean, Esther Rolf, and Max Simchowitz also got a Best Paper award. This paper examines the circumstances in which fairness criteria promote the long-term well-being of disadvantaged groups, measured in terms of a temporal variable of interest. The paper also introduces a one-step feedback model of decision-making that exposes how decisions change the underlying population over time.
Bonus: the Test of Time award

Day 4 saw Facebook researchers Ronan Collobert and Jason Weston receive the honorary Test of Time award for their 2008 ICML paper, A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning. The paper proposed a single convolutional neural network that takes a sentence and outputs its language-processing predictions, so the network can identify and distinguish part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words, and the likelihood that the sentence makes sense (grammatically and semantically) using a language model.

At the time the paper was published there was almost no neural network research in natural language processing. The paper's use of word embeddings and how they are trained, its use of auxiliary tasks and multitasking, and its use of convolutional neural nets in NLP really inspired the neural networks of today. For instance, Facebook's recent machine translation and summarization tool Fairseq uses CNNs for language, and AllenNLP's ELMo learns improved word embeddings via a neural net language model and applies them to a large number of NLP tasks.

Featured Tutorials at ICML 2018

ICML 2018 featured a total of nine tutorials, in sets of three, all of which took place on Day 1. These included:

- Imitation Learning by Yisong Yue and Hoang M Le, where they gave a broad overview of imitation learning techniques and its recent applications.
- Learning with Temporal Point Processes by Manuel Gomez Rodriguez and Isabel Valera. They talked about temporal point processes in machine learning, from basics to advanced concepts such as marks and dynamical systems with jumps.
- Machine Learning in Automated Mechanism Design for Pricing and Auctions by Nina Balcan, Tuomas Sandholm, and Ellen Vitercik. This tutorial covered automated mechanism design for revenue maximization.
- Toward Theoretical Understanding of Deep Learning by Sanjeev Arora, where he explained, with examples, what kind of theory may ultimately arise for deep learning.
- Defining and Designing Fair Algorithms by Sam Corbett-Davies and Sharad Goel. They illustrated the problems that lie at the foundation of algorithmic fairness, drawing on ideas from machine learning, economics, and legal theory.
- Understanding your Neighbors: Practical Perspectives From Modern Analysis by Sanjoy Dasgupta and Samory Kpotufe. This tutorial aimed to cover new perspectives on k-NN and translate new theoretical insights to a broader audience.
- Variational Bayes and Beyond: Bayesian Inference for Big Data by Tamara Broderick, where she covered modern tools for fast, approximate Bayesian inference at scale.
- Machine Learning for Personalised Health by Danielle Belgrave and Konstantina Palla. This tutorial evaluated the current drivers of machine learning in healthcare and presented machine learning strategies for personalised health.
- Optimization Perspectives on Learning to Control by Benjamin Recht, where he showed how to learn models of dynamical systems, how to use data to achieve objectives in a timely fashion, how to balance model specification, and so on.

Workshops at ICML 2018

Days 5 and 6 of the ICML 2018 conference were dedicated entirely to workshops, on topics ranging from AI in health and AI in computational psychology to humanizing AI and AI for wildlife conservation.
Some other workshops included:

- Bridging the Gap between Human and Automated Reasoning
- Data Science meets Optimization
- Domain Adaptation for Visual Understanding
- Eighth International Workshop on Statistical Relational AI
- Enabling Reproducibility in Machine Learning (MLTrain@RML)
- Engineering Multi-Agent Systems
- Exploration in Reinforcement Learning
- Federated AI for Robotics Workshop (F-Rob-2018)

This is just a brief overview of the ICML conference, where we have handpicked a select few paper presentations and invited talks. You can see the full schedule along with the list of selected research papers at the ICML website.

7 of the best machine learning conferences for the rest of 2018
Microsoft start AI School to teach Machine Learning and Artificial Intelligence
Google introduces Machine Learning courses for AI beginners


The 5 hurdles to overcome in JavaScript

Antonio Cucciniello
26 Jul 2017
5 min read
If you are new to JavaScript, you may find it a little confusing, depending on which programming language you were using before. Although JavaScript is my favorite language to use today, I cannot say that it was always this way. There were some things I truly disliked and was genuinely confused by in JavaScript; at this point I have come to accept them. Today we will discuss five hurdles you may come across in the JavaScript programming language.

Global variables

No matter what programming language you are using, it is never a good idea to have variables, functions, or objects polluting your global scope, and it is good practice to limit the number of global variables as much as possible. As programs get larger, there is a greater chance of naming collisions, and making something global gives access to code that does not necessarily need it. When implementing things, you want a variable's scope to be only as large as you need it to be. In JavaScript, you can access some global variables and objects through window; you can add things to it if you would like, but you should not do this.

Use of bitwise operators

As you probably know, JavaScript is a high-level language that does not communicate with the hardware much. There are these things called bitwise operators that let you compare the bits of two variables; for instance, x & y does an AND operation on x and y. The problem is that JavaScript has no integer type, only double-precision floating-point numbers. So in order to do the bitwise operation, it must convert x and y to integers, compare the bits, and then convert them back to floating-point numbers. This is much slower to perform and really should not be done, but then again it is somehow allowed.

Coding style variations

Looking at many different open source repositories, there does not seem to be one coding style standard that everyone adheres to. Some people love semicolons, others hate them. Some people adore ES6, other people despise it. Personally, I am a fan of using Standard for coding style, and I use ES5; that is solely my opinion, though. When comparing code with other people who have completely different styles, it can be difficult to use their code or write something similar. It would be nice to have a more generally accepted style used by all JavaScript developers; it would make us more productive overall.

Objects

Coming from a class-based language, I found the topic of prototypical inheritance difficult to understand and use. In prototypical inheritance, all objects inherit from Object.prototype. That means that if you refer to a property of an object that you have not defined yourself, but it exists as part of Object.prototype, the code will execute using that property or function. There is a chain of objects where each object inherits all of the properties of its parent and of that parent's parents, meaning your object might have access to plenty of functions it does not need. Luckily, you can override any of the parent's functions by defining a function of the same name on the object itself.

A large number of falsy values

Here is a table of the falsy values used in JavaScript:

Falsy value – Type
0 – Numbers
NaN – Numbers
'' – String
false – Boolean
null – Object
undefined – undefined

Each of these represents a different falsy value, but they are not interchangeable; they only work for their own type in JavaScript. As a beginner, trying to figure out how to check for errors at certain points in your code can be tricky.
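To make that concrete, here is a small illustrative sketch (the function and variable names are mine, not from the article) showing how every falsy value falls through the same bare if check, and why explicit comparisons are usually the safer way to check for errors:

```javascript
// Illustrative only: the names below are invented for this example.
function describe(value) {
  // A bare `if (value)` treats 0, NaN, '', false, null and undefined
  // exactly the same way: the branch is skipped for every one of them.
  if (value) {
    return 'truthy: ' + String(value);
  }
  return 'falsy: ' + String(value);
}

console.log(describe(0));          // "falsy: 0"
console.log(describe(NaN));        // "falsy: NaN"
console.log(describe(''));         // "falsy: "
console.log(describe(null));       // "falsy: null"
console.log(describe(undefined));  // "falsy: undefined"

// Because the falsy values are not interchangeable, error checks usually
// need to say exactly which one they mean:
var count = 0;
if (count === undefined) { /* true only when count still holds undefined */ }
if (count === 0)         { /* true here: 0 is a real, valid number */ }
if (count !== count)     { /* only NaN is not equal to itself */ }
```

Sticking to strict comparisons (===, !==) ties each check to the specific falsy value you actually care about, rather than to truthiness in general.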
Not to harp on about the problem with global variables again, but undefined and NaN are both properties of the global scope. In older engines you could actually reassign their values (modern engines make these properties read-only, although you can still shadow them with local variables of the same name). This arguably should never have been allowed, because one such change can affect your entire product or system.

Conclusion

As mentioned in the beginning, this post is simply an opinion. I come from a background in C/C++ and moved from there to JavaScript, and these were the top five problems that really made me scratch my head. Coming from a different technical background, you might have a completely different opinion reading this, and I hope you share it! If you enjoyed this post, tweet and tell me your least favorite part of using JavaScript, or, if you have no such problems, please share your favorite JavaScript feature!

About the author

Antonio Cucciniello is a Software Engineer from New Jersey with a background in C, C++, and JavaScript (Node.js). His most recent project, Edit Docs, is an Amazon Echo skill that lets users edit Google Drive files with their voice. He loves building cool things with software, and reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello