Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley

Natasha Mathur
03 Dec 2018
9 min read
In the latest podcast episode of Delete Your Account, Roqayah Chamseddine and Kumars Salehi talked to Ares and Kristen, volunteers with the Tech Workers Coalition (TWC), about how the coalition functions and organizes to bring social justice and solidarity to the tech industry.

What is the Tech Workers Coalition?

The Tech Workers Coalition is a democratically structured, all-volunteer, worker-led organization of tech and tech-adjacent workers across the US who organize and offer support for activist, civic engagement, and education projects. They primarily work in the Bay Area and Seattle, but they also support initiatives across the United States. While they work largely to defend the rights of tech workers, the organization argues for wider solidarity with existing social and economic justice movements.

Key takeaways

The podcast discusses the evolution of TWC (from facilitating Google employees in their protest against Google's Pentagon contract to helping Google employees organize the "walkout for real change"), the pushback it has received, TWC's unionizing goal, and its journey going forward.

A brief history of the Tech Workers Coalition

The Tech Workers Coalition started with a friendship between Rachel Melendes, a former cafeteria worker, and Matt Schaefer, an engineer. The first meetings, in 2014 and 2015, comprised a few full-time employees at tech companies. These meetings were occasions for discussing and sharing experiences of working in the tech industry in Silicon Valley. It's worth noting that those involved didn't just include engineers: subcontracted workers, cafeteria workers, security guards, and janitors were all involved too.

So, TWC began life as a forum for discussing workplace issues such as pay disparity, harassment, and discrimination. However, this forum evolved, with those attending becoming more and more aware that formal worker organization could be a way of achieving a more tangible defense of worker rights in the tech industry.

Kristen points out in the podcast how the 2016 presidential election in the US was "mobilizing" and laid a foundation for TWC in terms of determining where its interests lay. She also described how the ideological optimism of Silicon Valley companies, evidenced in brand values like "connecting people" and "don't be evil", encourages many people to join the tech industry for "naive but well-intentioned reasons."

One example presented by Kristen is the Trump Tower meeting of 14 December 2016, where Donald Trump invited top tech leaders including Tim Cook (CEO, Apple), Jeff Bezos (CEO, Amazon), Larry Page (CEO, Alphabet), and Sheryl Sandberg (COO, Facebook) for a "technology roundup". Kristen highlights that the meeting, seen by some as an opportunity to put forward the Silicon Valley ethos of openness and freedom, didn't fulfill what it might have done. The acquiescence of these tech leaders to a President widely viewed negatively by many tech workers forced employees to look critically at their treatment in the workplace. For many workers, it was the moment when it became clear that those at the top of the tech industry weren't on their side.

From this point, the TWC has gone from strength to strength. There are now more than 500 people in the Tech Workers Coalition group on Slack who discuss and organize activities to bring more solidarity to the tech industry.

Ideological splits within the tech left

Ares also talks about ideological splits within the community of left-wing activists in the tech industry. For example, when Kristen joined TWC in 2016, many of the conversations focused on questions like "are tech workers actually workers?" and "aren't they at fault for gentrification?" The fact that the debate has largely moved on from these issues says much about how thinking has changed in activist communities. While in the past activists may have taken a fairly self-flagellating view of, say, gentrification (a view that is arguably unproductive and offers little opportunity for practical action), today activists focus on what tech workers have in common with those doing traditional working-class jobs. Kristen explains: "tech workers aren't the ones benefiting from spending 3 grand a month on a 1 bedroom apartment, even if that's possible for them in a way that is not for many other working people. You can really easily see the people that are really profiting from that are landlords and real estate developers". As Salehi also points out in the episode, solidarity should ultimately move beyond distinctions and qualifiers like income.

TWC's recent efforts in unionizing tech

Google's Walkout for Real Change

A recent example of TWC's efforts to encourage solidarity across the tech industry is its support of Google's Walkout for Real Change. Earlier this month, 20,000 Google employees, along with vendors and contractors, walked out of their respective Google offices to protest discrimination and sexual harassment in the workplace. As part of the walkout, Google employees laid out five demands urging Google to bring about structural changes within the workplace.

To facilitate the walkout, TWC organized a retaliation hotline that allowed employees to call in if they faced any retribution for participating. If an employee contacted the hotline, TWC would then support them in taking their complaints to the labor bureau. TWC also provided resources based on their existing networks and contacts with the National Labor Relations Board (NLRB).

Read also: Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company

Ares called the walkout "an escalation in tactic" that would force tech execs to concede to employee demands. He also described how the walkout caused a "ripple effect": after Google ended its forced arbitration policy, Facebook soon followed.

Protest against AI drones

It was back in October when Google announced that it would not compete for the Pentagon's cloud-computing contract worth $10 billion, saying the project may conflict with its principles for the ethical use of AI. Earlier this year, Google employees had learned about Google's decision to provide and develop artificial intelligence for a controversial military pilot program known as Project Maven. Project Maven aimed to speed up analysis of drone footage by automatically labeling images of objects and people. Many employees protested against this move by resigning from the company. TWC supported Google employees by launching a petition in April, in addition to the one that was already in circulation, demanding that Google abandon its work on Maven. The petition also demanded that other major tech companies, such as IBM and Amazon, refuse to work with the U.S. Defense Department.

TWC's unionizing goal and major obstacles faced in the tech industry

On the podcast, Kristen highlights that union density across the tech industry is quite low. While unionization across the industry is one of TWC's goals, it's not their immediate goal. "It depends on the workplace, and what the workers there want to do. We're starting at a place that is comparable to a lot of industries in the 19th century in terms of what shape it could take, it's very nascent. It will take a lot of experimentation", she says.

The larger goal of TWC is to challenge established tech power structures and practices in order to better serve the communities that have been negatively impacted by them. "We are stronger when we act together, and there's more power when we come together," says Kristen. "We're the people who keep the system going. Without us, companies won't be able to function." TWC encourages people to think about their role within a workplace, and how they can develop themselves as leaders within it. She adds that unionizing is about working together to change things within the workplace, and if it's done on a large enough scale, "we can see some amount of change".

Issues within the tech industry

Kristen also discusses how issues such as meritocracy, racism, and sexism are still major obstacles for the tech industry. Meritocracy is particularly damaging as it prevents change: while in principle it might make sense, it has become an insidious way of maintaining exclusivity for those with access and experience. Kristen argues that people have been told all their lives that if you try hard you'll succeed, and if you don't, then that's because you didn't try hard enough. "People are taught to be okay with their alienation in society," she says.

If meritocracy is the system through which exclusivity is maintained, sexism, sexual harassment, misogyny, and racism are all symptoms of an industry that, for all its optimism and language of change, is actually deeply conservative. Depressingly, there are too many examples to list in full, but one particularly shocking report by The New York Times highlighted sexual misconduct perpetrated by those in senior management. While racism may, at the moment, be slightly less visible in the tech industry, not least because of an astonishing lack of diversity, the internal memo by Mark Luckie, formerly of Facebook, highlighted the ways in which Facebook was "failing its black employees and its black users". What's important from a TWC perspective is that none of these issues can be treated in isolation or as individual problems. By organizing workers and providing people with a space in which to share their experiences, the organization can encourage forms of solidarity that break down the barriers that exist across the industry.

What's next for TWC?

Kristen mentions that the future for TWC depends on what happens next, as there are lots of things that could change rather quickly. Looking at the immediate scope of TWC's future work, there are projects that they're already working on. Ares also mentions how he is blown away by how things have panned out in the past couple of years and is optimistic about pushing the tendency of rebellion within the tech industry with TWC. "I've been very positively surprised with how things are going but it hasn't been without lots of hard work with lots of folks within the coalition and beyond. In that sense it is rewarding, to see the coalition grow where it is now", says Kristen.

What are the limits of self-service BI?

Graham Annett
09 Aug 2017
4 min read
While many of the newer self-service BI offerings are a progressive step forward, they still have some limitations that are not so easily overcome.

One of the biggest advantages of self-service business intelligence is that it lets smaller companies, with limited revenue and headcount, make use of their data and of various software-as-a-service offerings that would previously have been restricted to enterprises with in-house developers and data architects. This alone is an incredible barrier to overcome, and it helps lessen the gap and burden that traditional small and medium businesses previously faced when hoping to use their collection of data in a real and impactful way.

What is self-service BI?

Self-service BI is the idea that traditional business intelligence tasks can be automated, and pipelines created, in such a way that real and actionable business insights no longer require both enterprise-sized datasets and enterprise-quality engineering. That old constraint has largely become outdated with the influx of easily integrable third-party services (such as Azure BI tools and IBM Cognos Analytics) and the push toward using a company's collected data, at any scale, in supervised or unsupervised learning (with automated model tuning and feature engineering).

Limitations

One limitation of self-service business intelligence services is that their capabilities are often so broad that they cannot provide the level of insight that would be helpful or necessary to increase revenue. While these self-service BI services might be useful for initial or exploratory visualization purposes, the current implementations cannot provide the expertise and thoroughness that an established data scientist would provide, and cannot delve into the minutiae that a boss may ask of someone with fine-tuned statistical knowledge of their models.

Often these insights are limited in areas such as feature engineering, data warehousing abilities, data pipeline integration, and the machine learning algorithms available. While these aspects are slowly being incorporated into self-service BI platforms (and, as an early Azure ML user, I have found them to be incredibly adept and useful), they will always lag slightly behind the latest and greatest, simply because the platform depends on someone else to implement the newest ideas.

Another limitation of self-service platforms is that you are somewhat locked into them once you create a solution that suits your needs. You are subject to the platform's rising costs; the platform can at any moment change its API format or the way that data is integrated, which would break your system; or, worse, the self-service BI platform could simply cease to exist if the provider decides it is no longer something it wishes to pursue. While these issues are somewhat avoidable if engineering is done with them in mind (that is, know your data pipeline, how to translate it to another service, and what happens if the third-party service goes down), they are still reasonable concerns that could have wide implications for a business, depending on how integral the platform is to the engineering stack. This is probably the most poignant limitation.
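If portability is a concern, one pragmatic mitigation, sketched below in Python with entirely hypothetical provider names and methods, is to keep the pipeline's contract in your own code and hide each BI platform behind a thin adapter, so a price hike, an API change, or a shutdown only forces you to rewrite one class:

```python
from abc import ABC, abstractmethod
from typing import Iterable, Mapping

# Hypothetical abstraction layer: the provider names below are illustrative,
# not real SDK calls. The pipeline itself never talks to a vendor directly.

class BIDestination(ABC):
    """Anything we can push curated records into for dashboarding."""

    @abstractmethod
    def push(self, table: str, rows: Iterable[Mapping]) -> None: ...


class HostedVendorDestination(BIDestination):
    def push(self, table: str, rows: Iterable[Mapping]) -> None:
        # a real adapter would call the vendor's SDK or REST API here
        print(f"[hosted vendor] upserting {len(list(rows))} rows into {table}")


class CSVDestination(BIDestination):
    """Fallback if the hosted platform disappears or its API changes."""

    def push(self, table: str, rows: Iterable[Mapping]) -> None:
        import csv
        rows = list(rows)
        with open(f"{table}.csv", "w", newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)


def run_pipeline(destination: BIDestination) -> None:
    # extraction and transformation stay provider-agnostic
    rows = [{"region": "EU", "revenue": 1200}, {"region": "US", "revenue": 3400}]
    destination.push("monthly_revenue", rows)


# Swap HostedVendorDestination() for CSVDestination() without touching the pipeline.
run_pipeline(CSVDestination())
```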
All that said, the latest and greatest in BI may be unnecessary for most use cases: a broad, simple approach may cover the applicable scenarios for most companies, and the big cloud providers of self-service BI platforms are unlikely to go down or disappear suddenly without notice. The cost of hyper-specific pipelines that take months to engineer, even when their optimizations would have real impact, may far outweigh that of a simple, highly adaptable approach, and creating real insights is one of the best ways a business can start on the path of incorporating machine learning and data science services into its core business.

About the author

Graham Annett is an NLP Engineer at Kip (Kipthis.com). He has been interested in deep learning for a bit over a year and has worked with and contributed to Keras (https://github.com/fchollet/keras). He can be found on GitHub at http://github.com/grahamannett or via http://grahamannett.me

Twilio WhatsApp API: A great tool to reach new businesses

Amarabha Banerjee
15 Aug 2018
3 min read
The trend in the last few years has indicated that businesses want to talk to their customers the same way they communicate with their friends and family. This enables them to cater to specific needs and to create customer-centric products. Twilio, a cloud communications platform, has been at the forefront of creating messaging solutions for businesses. Twilio enables developers to integrate SMS and calling facilities into their applications using the Twilio Web Services API. Over the last decade, Twilio customers have used Programmable SMS to build innovative messaging experiences for their users, whether that is sending instant transaction notifications for money transfers, food delivery alerts, or helping millions of people with their parking tickets. The latest feature added to the Twilio API integrates WhatsApp messaging into the application and manages messages and WhatsApp contacts through a business account.

Why is the Twilio WhatsApp integration so significant?

WhatsApp is presently one of the most popular instant messaging apps in the world. Every day, 30 million messages are exchanged using WhatsApp.

Image: the popularity of WhatsApp across different countries (source: Twilio)

Integrating WhatsApp communications into business applications means greater flexibility and the ability to reach a larger segment of the audience.

How is it done?

Integrating directly with the WhatsApp messaging network requires hosting, managing, and scaling containers in your own cloud infrastructure. This can be a tough task for any developer or business with a different end objective and a limited budget. The Twilio API makes it easier. WhatsApp delivers end-to-end message encryption through containers; these containers manage encryption keys and messages between the business and its users, and they need to be hosted in multiple regions for high availability and to scale efficiently as messaging volume grows. Twilio solves this problem with a simple and reliable REST API. Other failsafe messaging features, such as user opt-out from WhatsApp messages, automatic switching to SMS in the absence of a data network, and shifting to another messaging service in regions where WhatsApp is absent, can be implemented easily using the Twilio API. You also do not have to use separate APIs to connect with different messaging services such as Facebook Messenger, MMS, RCS, and LINE, as all of them are available within the same API.

WhatsApp is taking things at a slower pace currently. It initially allows you to develop a test application using the Twilio Sandbox for WhatsApp. This lets you test your application first and send messages to a limited number of users only. Once your app is production-ready, you can create a WhatsApp business profile and get a dedicated Twilio number to work with WhatsApp.

With this added feature, Twilio lets you set aside the maintenance burden of building a separate WhatsApp integration service. Twilio takes care of the cloud containers and the security aspects of the application, giving developers the opportunity to focus on creating customer-centric products that communicate with customers easily and efficiently.
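To give a feel for how small the integration surface is, here is a minimal Python sketch using Twilio's official helper library. The credentials and phone numbers are placeholders (the from_ number shown is the one Twilio commonly documents for its WhatsApp sandbox), so treat this as an illustrative sketch rather than production code:

```python
# pip install twilio
import os
from twilio.rest import Client

# Placeholder credentials: in practice these come from your Twilio console.
account_sid = os.environ["TWILIO_ACCOUNT_SID"]
auth_token = os.environ["TWILIO_AUTH_TOKEN"]

client = Client(account_sid, auth_token)

# WhatsApp uses the same Messages API as SMS; prefixing numbers with
# "whatsapp:" is what routes the message over WhatsApp.
message = client.messages.create(
    from_="whatsapp:+14155238886",   # sandbox/business number (placeholder)
    to="whatsapp:+15551234567",      # recipient who has joined the sandbox (placeholder)
    body="Your order has shipped and should arrive tomorrow.",
)

print(message.sid)  # unique identifier for the queued message
```

Because the channel is just a prefix on the phone number, falling back to plain SMS is a matter of sending the same request without the "whatsapp:" prefix.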

How 5G Mobile Data will propel Artificial Intelligence (AI) progress

Neil Aitken
02 Aug 2018
7 min read
Like its predecessors, 3G and 4G, 5G refers to the latest 'G', or generation, of mobile technology. 5G will give us very fast, effectively infinitely fast, mobile data download bandwidth. Downloading a TV show to your phone over 5G, in its entirety, in HD, will take less than a second, for example. A podcast will be downloaded within a fraction of a second of you requesting it. Scratch the surface of 5G, however, and there is a great deal more to see than just fast mobile data speeds. 5G is the backbone on which many emerging technologies, such as AI, blockchains, and IoT, will reach mainstream adoption. Today, we look at how 5G will accelerate AI growth and adoption.

5G will create the data AI needs to thrive

One feature of 5G with ramifications beyond data speed is latency. 5G offers virtually 'zero latency' as a service. Latency is the time needed to transmit a packet of data from one device to another: the period between when the request is made and when the response is completed.

Image: 5G will be superfast, but will also benefit from near zero 'latency' (source: Economist)

At the moment, we keep files (music, pictures, or films) in our phones' memory permanently. We have plenty of processing power on our devices; in fact, the main upgrade between phone generations these days is a faster processor. In a 5G world, we will be able to use cheap parts, processors and memory, in our new phones. Data downloads will be so fast that we can fetch files the moment we need them, and we won't need to store information on the phone unless we want to. Even if the files are downloaded from the cloud, because the network has near zero latency, the user feels as if the files are on the phone. In other words, you are guaranteed a seamless user experience in a 5G world.

The upshot of all this is that the majority of new data generated by mobile products will move to the cloud for storage. At their most fundamental level, AI algorithms are pattern-matching tools: the bigger the data trove, the faster and better the results of AI analysis. These new structured data sets, created by 5G, will be available from the place where it is easiest to extract and manipulate (analyze) them: the cloud. There will be 100 billion 5G devices connected to cellular networks by 2025, according to Huawei. 5G is going to generate data from those devices, and from all the smartphones in the world, and send it all back to the cloud. That data is the source of the incredible power AI gives businesses.
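As a rough back-of-the-envelope check on the download-time claim above, using assumed figures (a 1 GB HD episode, a 10 Gbps 5G peak rate, and illustrative latency numbers) rather than anything quoted here:

```python
# Illustrative numbers only: the file size and link speeds are assumptions.
FILE_SIZE_GB = 1.0                              # a typical HD TV episode
LINK_SPEEDS_GBPS = {"4G": 0.1, "5G (peak)": 10.0}
LATENCY_S = {"4G": 0.050, "5G (peak)": 0.001}

for network, gbps in LINK_SPEEDS_GBPS.items():
    transfer_s = FILE_SIZE_GB * 8 / gbps        # gigabytes -> gigabits
    total_s = transfer_s + LATENCY_S[network]
    print(f"{network}: ~{total_s:.2f} s to fetch a {FILE_SIZE_GB} GB file")

# Prints roughly 80 s for 4G and under a second for 5G at peak rate, which is
# why on-demand fetching from the cloud can feel like reading local storage.
```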
5G driving AI in autonomous vehicles

5G's features, and this cloud and connected-device future, will manifest themselves in many ways. One very visible example is how 5G will supercharge the contribution that AI can make to self-driving cars, especially to reliability and safety. A great deal of the AI processing required to keep a self-driving car operating safely will be done by computers on board the vehicle. However, 5G's ability to communicate large amounts of data quickly means that any unusual inputs (for example, the car is entering or in a crash situation) can be sent to bigger computing equipment in the cloud for more serious processing. Zero latency is important in these situations for commands that might come from a centralized accident computer designed to increase safety, for example issuing the command 'brake'. In fact, according to manufacturers, it's likely that groups of cars will ultimately be coordinated by AI using 5G to control the vehicles, in a model known as swarm computing.

5G will make AI much more useful with 'context' - Intel

5G will power AI by providing location information that can be considered when establishing the context of questions asked of the tool, according to Intel's Data Center Group. For example, asking your digital assistant where the tablets are means something different depending on whether you're in a pharmacy or an electronics store. The nature of 5G is that it's a mobile service, and location information is both key to context and an inherent element of information sent over a 5G connection. By communicating where they are, 5G sensors will help AI-based digital assistants solve our everyday problems.

5G phones will enable AI calculations on 'edge' network devices - Arm

5G will push some processing to the 'edge' of the network, for manipulation by a growing range of AI chips on the processors of phones. In this regard, smartphones, like any Internet of Things processor 'in the field', are simply an 'AI platform'. Handset manufacturers are including new software features in their phones that customers love to use, including AI-based search interfaces that let them search for images containing 'heads' and see an accurate list.

Image: Arm is designing new types of chips targeted at AI calculations on 'edge' network devices (source: Arm's Project Trillium)

Arm, one of the world's largest CPU designers, is creating specific, dedicated AI chipsets, often derived from the technology behind its graphics processing units. These chips already process AI-based calculations up to 50 times faster than standard microprocessors, and their performance is set to improve 50x over the next three years, according to the company.

AI is part of 5G networks - Huawei

Huawei describes itself as an AI company (as well as a number of other things, including handset manufacturer). It is one of the biggest electronics manufacturers in China and is currently selling networking products to the world's telecommunications companies as they prepare to roll out their 5G networks. Based on the insight that 70% of network system downtime comes from human error, Huawei is now eliminating humans from the network management component of its work, to the degree that it can. Instead, it is implementing automated, AI-based predictive maintenance systems to increase data throughput across the network and reduce downtime.

The way we use cellular networks is changing. Different applications require different backend traffic to be routed across the network, depending on the customer need. Someone watching video, for example, has a far lower tolerance for a disruption to data throughput (the 'stuttering Netflix' effect) than a connected IoT sensor trying to communicate the temperature of a thermometer. Huawei's network maintenance AI software optimizes these different package needs, maintaining the near-zero latency the standard demands at a lower cost. AI-based network maintenance completes a virtuous loop: 5G devices on new cellular networks give AI the raw data it needs, including valuable context information, and AI helps the data flow across the 5G network better.

Bringing it all together

5G and artificial intelligence (AI) are revolutionary technologies that will evolve alongside each other. 5G isn't just fast data; it's one of the most important technologies ever devised. Just as the smartphone did, it will fundamentally change how we relate to information, partly because it will link us to thousands of newly connected devices on the Internet of Things. Ultimately, it could be the secondary effects of 5G, the network's almost zero latency, that provide the largest benefit: creating structured data sets from billions of connected devices in an easily accessible place, the cloud, which can then fuel the AI algorithms that run on them. Networking equipment makers, chip manufacturers, and governments have all connected the importance of AI with the potential of 5G. Commercial sales of 5G start in the US, UK, and Australia in 2019.

Glancing at the Fintech growth story - Powered by ML, AI & APIs

Kartikey Pandey
14 Dec 2017
4 min read
When MyBucks, a Luxembourg-based fintech firm, started scaling up its business in other countries, it faced the daunting challenge of reducing the timeline for processing credit requests from over a week to just a few minutes. Any financial institution dealing with lending can relate to the challenges associated with giving credit: checking credit history, tracking past fraudulent activities, and so on. This automatically makes the lending process tedious and time-consuming. To add to this, MyBucks also aimed to make its entire lending process extremely simple and attractive to customers. MyBucks' promise to its customers: no more visiting branches and seeking approvals. Simply log in from your mobile phone and apply for a loan, and we will handle the rest in a matter of minutes.

Machine Learning has triggered a whole new segment in the fintech industry: automated lending platforms. MyBucks is one such player; others include OnDeck, Kabbage, and LendUp. What might appear transformational in MyBucks' case is just one of many examples of how Machine Learning is empowering a large number of finance companies to deliver disruptive products and services. So what makes Machine Learning so attractive to fintech, and how has it fueled this industry's phenomenal growth? Read on.

Quicker and more efficient credit approvals

Long before Machine Learning was established across large industries as it is today, it was quite commonly used to solve fraud detection problems. This primarily involved building a self-learning model that started from a training dataset and expanded its learning from incoming data. This way, the system could distinguish a fraudulent activity from a non-fraudulent one. Modern Machine Learning systems are no different: they use the very same predictive models that rely on segmentation algorithms and methods. Fintech companies are investing in big data analytics and machine learning algorithms to make credit approvals quicker and more efficient. These systems are designed to pull data from several sources online, develop a good understanding of transactional behavior, purchasing patterns, and social media behavior, and accordingly decide creditworthiness.

Robust fraud prevention and error detection methods

Machine Learning is empowering banking institutions and finance service providers to embrace artificial intelligence and combat what they fear the most: fraudulent activities. Faster and more accurate processing of transactions has always been a fundamental requirement in the finance industry. An increasing number of startups are now developing Machine Learning and Artificial Intelligence systems to combat the challenges around fraudulent transactions, or even instances of incorrectly reported transactions. BillGuard is one such company: it uses big data analytics to make sense of billing complaints reported by millions of consumers. The AI system then builds its intelligence from this crowd-sourced data and reports incorrect charges back to consumers, thereby helping them get their money back.
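To make the self-learning fraud model described above slightly more concrete, here is a minimal scikit-learn sketch. The features, thresholds, and data are synthetic placeholders; a real lender would train on far richer transactional and behavioral signals:

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic training data: [amount, hour_of_day, merchant_risk_score]
X = rng.random((5000, 3)) * [5000, 24, 1]
# Toy labelling rule: large, late-night, high-risk-merchant transactions are "fraud"
y = ((X[:, 0] > 3000) & (X[:, 1] > 22) & (X[:, 2] > 0.7)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))

# Score a new incoming transaction before approving it
new_txn = [[4200.0, 23.5, 0.9]]  # amount, hour, merchant risk
print("fraud probability:", model.predict_proba(new_txn)[0][1])
```

The same pattern, train on labelled history and score each new event, underlies both the fraud-detection and the creditworthiness systems described in this article; only the features and labels change.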
Reinventing banking solutions with the powerful combination of APIs and Machine Learning

Innovation is key to survival in the finance industry. The 2017 PwC global fintech report suggests that incumbent finance players are worried about advances in the fintech industry that pose direct competition to banks. But the way ahead for banks definitely goes through fintech, which is evolving every day. In addition to Machine Learning, the API is the other strong pillar driving innovation in fintech. Developments in Machine Learning and AI are reinventing the traditional lending industry, and APIs are acting as the bridge between classic banking problems and future possibilities. Established banks are now taking the API (Application Programming Interface) route to tie up with innovative fintech players in their endeavor to deliver modern solutions to customers. Fintech players are also able to reap the benefits of working with the old guard, the banks, in a world where APIs have suddenly become the new common language.

So what is this new equation all about? API solutions are helping bridge the gap between the old and the new by enabling collaboration in newer ways to solve traditional banking problems. This impact can be seen far and wide within the industry, and fintech isn't limited to lending tech and everyday banking alone. Several verticals within the industry now see an increased impact of Machine Learning: payments, wealth management, capital markets, insurance, blockchain, and even chatbots for customer service, to name a few. So where do you think this partnership is headed? Please leave your comments below and let us know.

Effective Product Development needs developers and product managers collaborating on success metrics

Packt Editorial Staff
04 Aug 2018
16 min read
Modern product development is witnessing a drastic shift. Disruptive ideas and ambiguous business conditions have changed the way products are developed. Product development is no longer guided by existing processes or predefined frameworks. Delivering on time is a baseline metric, as is software quality. Today, businesses are competing to innovate. They are willing to invest in groundbreaking products with cutting-edge technology. Cost is no longer the constraint; execution is. Can product managers then continue to rely upon processes and practices aimed at traditional ways of product building? How do we ensure that software product builders look at the bigger picture and do not tie themselves to engineering practices and technology viability alone? Understanding the business and customer context is essential for creating valuable products. In this article, we are going to identify what success means to us in terms of product development. This article is an excerpt from the book Lean Product Management written by Mangalam Nandakumar.

For the kind of impact that we predict our feature idea to have on the Key Business Outcomes, how do we ensure that every aspect of our business is aligned to enable that success? We may also need to make technical trade-offs to ensure that all effort on building the product is geared toward creating a satisfying end-to-end product experience. When individual business functions take trade-off decisions in silos, we could end up creating a broken product experience, or improving the product experience where no improvement is required. For a business to be able to align on trade-offs that may need to be made on technology, it is important to communicate what is possible within business constraints and also what is not achievable. It is not necessary for the business to know or understand the specific best practices, coding practices, design patterns, and so on that product engineering may apply. However, the business needs to know the value, or the lack of value realization, of any investment that is made in terms of costs, effort, resources, and so on.

This section addresses the following topics:
The need to have a shared view of what success means for a feature idea
Defining the right kind of success criteria
Creating a shared understanding of technical success criteria

"If you want to go quickly, go alone. If you want to go far, go together. We have to go far - quickly." - Al Gore

Planning for success doesn't come naturally to many of us. Come to think of it, our heroes are always the people who averted failure or pulled us out of a crisis. We perceive success as 'not failing', but when we set clear goals, failures don't seem that important. We can learn a thing or two about planning for success by observing how babies learn to walk. The trigger for walking starts with babies getting attracted to, say, some object or person that catches their fancy. They decide to act on the trigger, focusing their full attention on the goal of reaching what caught their fancy. They stumble, fall, and hurt themselves, but they keep going after the goal. Their goal is not about walking. Walking is a means to reaching the shiny object or the person calling to them. So, they don't really see walking without falling as a measure of success. Of course, the really smart babies know to wail their way to getting the said shiny thing without lifting a toe.

Somewhere along the way, software development seems to have forgotten about shiny objects, and instead focused on how to walk without falling. In a way, this has led to an obsession with following processes without applying them to the context, and with writing perfect code, while disdaining and undervaluing supporting business practices. Although technology is a great enabler, it is not the end in itself. When applied in the context of running a business or creating social impact, technology cannot afford to operate as an isolated function. This is not to say that technologists don't care about impact. Of course, we do. Technologists show a real passion for solving customer problems. They want their code to change lives, create impact, and add value. However, many technologists underestimate the importance of supporting business functions in delivering value. I have come across many developers who don't appreciate the value of marketing, sales, or support. In many cases, like the developer who spent a year perfecting his code without acquiring a single customer, they believe that beautiful code that solves the right problem is enough to make a business succeed. Nothing can be further from the truth.

Most of this type of thinking is the result of treating technology as an isolated function. There is a significant gap between nontechnical folks and software engineers. On the one hand, nontechnical folks don't understand the possibilities, costs, and limitations of software technology. On the other hand, technologists don't value the need for supporting functions and communicate very little about the possibilities and limitations of technology. This expectation mismatch often leads to unrealistic goals and a widening gap between technology teams and the supporting functions. The result of this widening gap is often cracks opening in the end-to-end product experience for the customer, thereby resulting in a loss of business. Bridging this gap requires that technical teams and business functions communicate in the same language, but first they must communicate.

Setting SMART goals for the team

In order to set the right expectations for outcomes, we need the collective wisdom of the entire team. We need to define and agree upon what success means for each feature and for each business function. This will enable teams to set up the entire product experience for success. Setting specific, measurable, achievable, realistic, and time-bound (SMART) metrics can resolve this. We cannot decouple our success criteria from the impact scores we arrived at earlier, so let's refer to the following table for the ArtGalore digital art gallery:

The estimated impact rating was an indication of how much impact the business expected a feature idea to have on the Key Business Outcomes. If you recall, we rated this on a scale of 0 to 10. When the estimated impact on a Key Business Outcome is less than five, the success criteria for that feature are likely to be less ambitious. For example, the estimated impact of "existing buyers can enter a lucky draw to meet an artist of the month" toward generating revenue is zero. What this means is that we don't expect this feature idea to bring in any revenue for us, or, put another way, revenue is not the measure of success for this feature idea. If any success criteria for generating revenue do come up for this feature idea, then there is a clear mismatch in terms of how we have prioritized the feature itself.

For any feature idea with an estimated impact of five or above, we need to get very specific about how to define and measure success. For instance, the feature idea "existing buyers can enter a lucky draw to meet an artist of the month" has an estimated impact rating of six toward engagement. This means that we expect an increase in engagement as a measure of success for this feature idea. Then, we need to define what "increase in engagement" means. My idea of "increase in engagement" can be very different from your idea of "increase in engagement". This is where being SMART about our definition of success can be useful.

Success metrics are akin to user story acceptance criteria. Acceptance criteria define what conditions must be fulfilled by the software in order for us to sign off on the success of the user story. Acceptance criteria usually revolve around use cases and acceptable functional flows. Similarly, success criteria for feature ideas must define what indicators can tell us that the feature is delivering the expected impact on the KBO. Acceptance criteria also sometimes deal with NFRs (nonfunctional requirements). NFRs include performance, security, and reliability. In many instances, nonfunctional requirements are treated as independent user stories. I have also seen many teams struggle with expressing the need for nonfunctional requirements from a customer's perspective. In the early days of writing user stories, the tendency for myself and most of my colleagues was to write NFRs from a system/application point of view. We would say, "this report must load in 20 seconds," or "in the event of a network failure, partial data must not be saved." These functional specifications didn't tell us how or why they were important for an end user. Writing user stories forces us to think about the user's perspective. For example, in my team we used to have interesting conversations about why a report needed to load within 20 seconds. This compelled us to think about how the user interacted with our software.

It is not uncommon for visionary founders to throw out very ambitious goals for success. Having ambitious goals can have a positive impact in motivating teams to outperform. However, throwing lofty targets around without having a plan for success can be counterproductive. For instance, it's rather ambitious to say, "Our newsletter must be the first to publish artworks by all the popular artists in the country," or that "Our newsletter must become the benchmark for art curation." These are really inspiring words, but they can mean nothing if we don't have a plan to get there. The general rule of thumb for this part of product experience planning is that when we aim for an ambitious goal, we also sign up to making it happen. Defining success must be a collaborative exercise carried out by all stakeholders. This is the playing field for deciding where we can stretch our goals, and for everyone to agree on what we're signing up to, in order to set the product experience up for success.

Defining key success metrics

For every feature idea we came up with, we can create feature cards that look like the following sample. This card indicates three aspects of what success means for this feature. We are asking these questions: what are we validating? When do we validate this? Which Key Business Outcomes does it help us to validate? The criteria for success demonstrate what the business anticipates as a tangible outcome from a feature. They also demonstrate which business functions will support, own, and drive the execution of the feature.

That's it! We've nailed it, right? Wrong. Success metrics must be SMART, but how specific is specific? The preceding success metric indicates that 80% of those who sign up for the monthly art catalog will enquire about at least one artwork. Now, 80% could mean 80 people, 800 people, or 8,000 people, depending on whether we get 100 sign-ups, 1,000, or 10,000, respectively! We have defined what external (customer/market) metrics to look for, but we have not defined whether we can realistically achieve this goal, given our resources and capabilities. The question we need to ask is: are we (as a business) equipped to handle 8,000 enquiries? Do we have the expertise, resources, and people to manage this? If we don't plan in advance and assign ownership, our goals can lead to a gap in the product experience. When we don't clarify this explicitly, each business function could make its own assumptions. When we say 80% of folks will enquire about one artwork, the sales team is thinking that around 50 people will enquire. This is what the sales team at ArtGalore is probably equipped to handle. However, marketing is aiming for 750 people and the developers are planning for 1,000 people. So, even if we can attract 1,000 enquiries, sales can handle only 50 enquiries a month! If this is what we're equipped for today, then building anything more could be wasteful. We need to think about how we can ramp up the sales team to handle more requests. The idea of drilling into success metrics is to gauge whether we're equipped to handle our success. So, maybe our success metric should be that we expect to get about 100 sign-ups in the first three months and between 40 and 70 folks enquiring about artworks after they sign up. Alternatively, we can find a smart way to enable sales to handle higher volumes.

Before we write up success metrics, we should be asking a whole truckload of questions that determine the before-and-after of the feature. We need to ask the following questions:
What will the monthly catalog showcase? How many curated art items will be showcased each month?
What is the nature of the content that we should showcase? Just good high-quality images and text, or is there something more?
Who will put together the catalog? How long must this person or team spend to create it?
Where will we source the art for curation?
Is there a specific date each month when the newsletter needs to go out?
Why do we think 80% of those who sign up will enquire? Is it because of the exclusive nature of the art? Is it because of the quality of presentation? Is it because of the timing? What's so special about our catalog?
Who handles the incoming enquiries? Is there a number to call, or is it via email?
How long would we take to respond to enquiries?
If we get 10,000 sign-ups and receive 8,000 enquiries, are we equipped to handle these? Are these numbers too high? Can we still meet our response time if we hit those numbers?
Would we still be happy if only 50% of folks who sign up enquired? What if it's 30%? When would we throw away the idea of the catalog?

This is where the meat of feature success starts taking shape. We need a plan to uncover underlying assumptions and set ourselves up for success. It's very easy for folks to put out ambitious metrics without understanding the before-and-after of the work involved in meeting that metric.

The intent of a strategy should be to set teams up for success, not for failure. Often, ambitious goals are set without considering whether they are realistic and achievable. This is so detrimental that teams eventually resort to manipulating the metrics or misrepresenting them, playing the blame game, or hiding information. Sometimes teams try to meet these metrics by deprioritizing other work. Eventually, team morale, productivity, and delivery take a hit. Ambitious goals, without the required capacity, capability, and resources to deliver, are useless.

Technology must be in line with business outcomes

Every business function needs to align toward the Key Business Outcomes and conform to the constraints under which the business operates. In our example here, the deadline is for the business to launch this feature idea before the Big Art show, so meeting the timeline is already a necessary measure of success. Other product technology measures could be quality, usability, response times, latency, reliability, data privacy, security, and so on. These are traditionally clubbed under NFRs (nonfunctional requirements). They are indicators of how the system has been designed or how the system operates, and are not really about user behavior. There is no aspect of a product that is nonfunctional or without a bearing on business outcomes; in that sense, "nonfunctional requirements" is a misnomer. NFRs are really technical success criteria. They are also a business stakeholder's decision, based on what outcomes the business wants to pursue. In many time- and budget-bound software projects, technical success criteria trade-offs happen without understanding the business context or thinking about the end-to-end product experience.

Let's take an example: our app's performance may be okay when handling 100 users, but it could take a hit when we get to 10,000 users. By then, the business has moved on to other priorities and the product isn't ready to make the leap. This depends on how each team can communicate the impact of doing or not doing something today in terms of a cost tomorrow. What that means is that engineering may be able to create software that can scale to 5,000 users with minimal effort, but scaling to 500,000 users requires a different order of magnitude of work. A different approach is needed when building solutions for short-term benefits compared to how we might build systems for long-term benefits. It is not possible to generalize and claim that just because we build an application quickly, it is likely to be full of defects or insecure. By contrast, just because we build a lot of robustness into an application, this does not mean that the product will sell better. There is a cost to building something, a cost to not building something, and a cost to rework. The cost will be justified based on the benefits we can reap, but it is important for product technology and business stakeholders to align on the loss or gain, in terms of the end-to-end product experience, caused by the technical approach we are taking today. In order to arrive at these decisions, the business does not really need to understand design patterns, coding practices, or nuanced technology details. It needs to know the viability of meeting business outcomes. This viability is based on technology possibilities, constraints, effort, skills needed, resources (hardware and software), time, and other prerequisites.

What we can expect and what we cannot expect must both be agreed upon. In every scope-related discussion, I have seen that there are better insights and conversations when we highlight what the business or customer does not get from a product release. When we only highlight what value they will get, the discussions tend to go toward improvising on that value. When the business realizes what it doesn't get, the discussions lean toward improving the end-to-end product experience. Should a business care that we wrote unit tests? Does the business care what design patterns we used, or what language or software we used? We can have general guidelines for healthy and effective ways to follow best practices within our lines of work, but best practices don't define us; outcomes do.

To summarize: before commencing the development of any feature idea, there must be a consensus on what outcomes we are seeking to achieve. The success metrics should be our guideline for finding the smartest way to implement a feature.

FOSDEM 2019: Designing better cryptographic mechanisms to avoid pitfalls - Talk by Maximilian Blochberger

Prasad Ramesh
13 Feb 2019
3 min read
At FOSDEM 2019 in Belgium, Maximilian Blochberger talked about preventing cryptographic pitfalls by integrating cryptographic mechanisms correctly and avoiding common mistakes. Blochberger is a research associate at the University of Hamburg. FOSDEM is a free and open event for software developers with thousands of attendees; this year's event took place on the second and third of February.

The goal of the talk is to raise awareness of cryptographic misuse. Preventing pitfalls in cryptography is not about cryptographic protocols but about designing better APIs. Consider a scenario where a developer who values privacy intends to add encryption, that is, to integrate cryptographic mechanisms into their application. Blochberger uses a mobile application as an example, but the principles are not specific to mobile applications.

A simple task is presented: encrypting a string, which is actually difficult. A software developer without any cryptographic or even security background would search for it online and then copy-paste a common answer snippet from StackOverflow, one that carried warnings about not being secure but had upvotes and probably worked for some people. Readily available code like that contains words like "AES" or "DES", and the software developer may not know much about those encryption algorithms. Using the default algorithms listed in such template code, and reusing the same keys, is not secure. The encryption itself is not CPA (chosen-plaintext attack) secure, and the key derivation can be unauthenticated, among other problems. According to many papers, 98% of security-related snippets are insecure. It's hard to get encryption right, and the risk is especially high if the code is copied from the internet.

Implementing cryptographic mechanisms should be done by cryptographic engineers who have expertise in the field; the software developer does not need to develop, or even know about, the details of the implementation. Doing compile-time checks instead of runtime checks is also better, since you don't have to wait for something to go wrong before identifying the problem. Cryptography is harder than it looks, and many things can and do go wrong, exposing encrypted data due to incorrect choices or inadequate measures. Blochberger demonstrates an iOS and macOS example using Tafelsalz. For more details, along with a demonstration of the code, you can watch the video.
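The talk's own demos use Tafelsalz on iOS and macOS. As a language-agnostic illustration of the same principle, reaching for a high-level, misuse-resistant API instead of wiring up AES primitives by hand, here is a sketch using Python's cryptography package; this is an analogy, not the speaker's example:

```python
# pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

# Fernet bundles key generation, a random IV, AES-CBC encryption and an HMAC
# into one authenticated-encryption recipe, so there are no algorithm, mode,
# or key-derivation choices for the application developer to get wrong.
key = Fernet.generate_key()      # store in a secrets manager, never hard-code it
f = Fernet(key)

token = f.encrypt(b"card ending 4242 approved")
print(token)                     # opaque, URL-safe ciphertext

try:
    print(f.decrypt(token))      # b'card ending 4242 approved'
except InvalidToken:
    # raised if the ciphertext was tampered with or the key is wrong
    print("ciphertext rejected")
```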

6 Ways to blow up your Microservices!

Aaron Lazar
14 Jul 2018
6 min read
Microservices are great! They've solved several problems created by large monoliths, like scalability, fault tolerance, and testability, among others. However, let me assure you that everything's not rosy yet, and there are tonnes of ways you can blow your microservices to smithereens! Here are six sure-shot ways to meet failure with microservices, and to spice it up, I've included the Batman sound effects too!

Disclaimer: Unless you're Sheldon Cooper, what is and what isn't sarcasm should be pretty evident in this one!

#1 The Polyglot Conspiracy

One of the most talked-about benefits of the microservices pattern is that you can use a variety of tools and languages to build your application. Great! Let's say you're building an e-commerce website with a chat option, maybe VR/AR thrown in too, and then the necessities like a payment page. Obviously you'll want to build it with microservices. Now, you also thought you might have different teams work on the app using different languages and tools: maybe Java for the main app, Golang for some services, and JavaScript for something else. Moreover, you also decided to use Angular as well as React on various components of your UI. Then one day the React team needs to fix bugs in production on Angular, because the Angular team called in sick. Your Ops team is probably pulling out their hair right now! You need to understand that different tech stacks behave differently in production. Going the microservices route doesn't give you a free ticket to go to town on polyglot services.

#2 Sharing isn't always Caring

Let's assume you've built an app where various microservices connect to a single, shared database. It's quite a good design decision, right? Simple, effective, and what not. Now a business requirement calls for a change in the character length on one of the microservices. The team goes ahead and changes the length on one of the tables, and... That's not all. What if you decide to use connection pools so you can reuse requests to the database when required? Awesome choice! Now imagine your microservices decide to run amok, submitting query after query to the database. It would knock out every other service for weeks!

#3 WET is in; DRY is out?

Well, everybody's been saying Don't Repeat Yourself these days: architects, developers, my mom. Okay, so you've built an application that's based on event sourcing. There's a list or store of events, and a microservice in your application publishes a new event to the store when something happens. For the sake of an example, let's say it's a customer microservice that publishes an event "in-cart" whenever the customer selects a product. Another microservice, say "account", subscribes to that aggregate type and gets informed about the event. Now here comes the best part! Suppose your business asks for a field type to be changed. The easiest way out is to go WET (We Enjoy Typing), making the change in one microservice and copying the code to all the others. Imagine you've copied it to a scale of hundreds of microservices! Better still, you decided to avoid using Git and just use your event history to identify what's wrong! You'll be fixing bugs till you find a new job!
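One way to stay DRY in the scenario above, sketched here in Python with hypothetical event and service names, is to define the event contract once in a small shared library that every service depends on, so a field change happens in exactly one place:

```python
# shared_events.py -- a tiny contract library that every service imports,
# instead of each service carrying its own copy-pasted event definition.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

SCHEMA_VERSION = 2  # bumped when a field changes, so consumers can react


@dataclass(frozen=True)
class InCartEvent:
    customer_id: str
    product_id: str
    quantity: int          # the field whose type the business just changed
    occurred_at: str = ""

    def to_message(self) -> str:
        payload = asdict(self)
        payload["occurred_at"] = payload["occurred_at"] or datetime.now(timezone.utc).isoformat()
        return json.dumps({"type": "in-cart", "version": SCHEMA_VERSION, "data": payload})


# In the customer service (event_store is a stand-in for your append API):
#   event_store.append(InCartEvent("c-42", "p-7", quantity=2).to_message())
# The account service deserializes with the same class: one definition, many consumers.
```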
So naturally, every microservice that’s connected will use event sourcing to communicate with the others. One fine day, your business asked for a major change in a part of the application, which you did, and the new version of one of the microservices sends the new event to the other microservices and… When you make a change in one microservice, you can’t be sure that all others will work fine, unless versions are changed in them too. You can make things worse by following a monolithic release plan for your microservices. You could keep your customers waiting for months to make their systems compatible, while you have your services ready but are waiting to release a new framework on a monolithic schedule. An awesome recipe for customer retention! #5 SPA Treatment! Oh yeah, Single Page Apps are a great way to build front end applications! So your application is built on the REST architecture and your microservices are connected to a single, massive UI. One day, your business requests for a new field to be added to the UI. Now, each microservice has it’s individual domain model and the UI has its own domain model. You’re probably clueless about where to add the new field. So you identify some free space on the front end and slap it on! Side effects add to the fun! Imagine you’ve changed a field on one service, side effects work like a ripple - passing it on to the next microservice, and then to next and they all will blow up in series like dominoes. This could keep your testers busy for weeks and no one will know where to look for the fault! #6 Bye Bye Bye, N Sync Let’s consider you’ve used synchronous communication for your e-commerce application. What you didn’t consider was that not all your services are going to be online at the same time. An offline service or a slow one can potentially lock or slow thread communication, ultimately blowing up your entire system, one service at a time! The best part is that it’s not always possible to build an asynchronous communication channel within your services. So you’ll have to use workarounds like local caches, circuit breakers, etc. So there you have it, six sure shot ways to blow up your microservices and make your Testing and Ops teams go crazy! For those of you who think that microservices have killed the monolith, think again! For the brave, who still wish to go ahead and build microservices, the above are examples of what you should beware of, when you’re building away those microservices! How to publish Microservice as a service onto a Docker How to build Microservices using REST framework Why microservices and DevOps are a match made in heaven    
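To make #6 a little more concrete, here is a deliberately small circuit-breaker sketch in plain JavaScript. It is not taken from the article; the callPaymentService function, its URL and the thresholds are hypothetical stand-ins for whichever downstream service keeps timing out on you, so treat this as one possible shape rather than a prescribed implementation.

// A tiny circuit breaker: after `maxFailures` consecutive failures the circuit
// "opens" and calls fail fast for `resetTimeout` ms, instead of letting a slow
// downstream service tie up every thread in the caller.
class CircuitBreaker {
  constructor(request, { maxFailures = 3, resetTimeout = 10000 } = {}) {
    this.request = request;        // async function that calls the downstream service
    this.maxFailures = maxFailures;
    this.resetTimeout = resetTimeout;
    this.failures = 0;
    this.openedAt = null;          // timestamp of when the circuit opened
  }

  async call(...args) {
    if (this.openedAt && Date.now() - this.openedAt < this.resetTimeout) {
      throw new Error('Circuit open: failing fast instead of waiting on a dead service');
    }
    try {
      const result = await this.request(...args);
      this.failures = 0;           // a success closes the circuit again
      this.openedAt = null;
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw err;
    }
  }
}

// Hypothetical downstream call; assumes a global fetch (Node 18+ or a browser).
const callPaymentService = (orderId) =>
  fetch(`https://payments.example.com/charge/${orderId}`).then((res) => res.json());

const paymentBreaker = new CircuitBreaker(callPaymentService, { maxFailures: 5 });

// Usage: paymentBreaker.call('order-42').catch((err) => { /* serve a fallback */ });

After the reset timeout elapses, the next call acts as a probe: if it succeeds the circuit closes, and if it fails the circuit stays open for another timeout window.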

What’s the difference between cross platform and native mobile development?

Amarabha Banerjee
27 Jun 2018
4 min read
Mobile has become an increasingly important part of many modern businesses' tech strategy. In everything from eCommerce to financial services, mobile applications aren't simply a 'nice to have', they're essential. Customers expect them. The most difficult question today isn't 'do we need a mobile app?' Instead, it's 'which type of mobile app should we build: native or cross platform?' There are arguments to be made for both cross platform mobile development and native app development. Developers who have worked on either kind of project will probably have an opinion on the right way to go. Like many things in tech, however, the cross platform vs native debate is really a question of which one is right for you. From both a business and capability perspective, you need to understand what you want to achieve and when. Let's take a look at the difference between a cross platform framework and a native development platform. You should then feel comfortable enough to make the right decision about which mobile platform is right for you.

Cross platform development
A cross platform application runs across all mobile operating systems without any extra coding. By all mobile operating systems, I mean iOS and Android (Windows phones are probably on their way out). A cross platform framework provides all the tools to help you create cross-platform apps easily. Some of the most popular cross platform frameworks include:
Xamarin
Corona SDK
Appcelerator Titanium
PhoneGap

Hybrid mobile apps
One specific form of cross-platform mobile application is hybrid. With hybrid mobile apps, the graphical user interface (GUI) is developed using HTML5. These are then wrapped in native WebView containers and deployed on iOS and Android devices.

A native app, by contrast, is specifically designed for one particular operating system. This means it will work better in that specific environment than one created for multiple platforms. One of the latest frameworks for native Android development is Google's Flutter. For iOS, it's Xcode.

Native mobile development vs cross platform development
If you're a mobile developer, which is better? Let's compare cross platform development with native development:

Cross-platform development is more cost effective. This is simply because you can reuse 80% of your code, because you're essentially building one application. The cost of native development is roughly double that of cross-platform development, although the cost of Android development is roughly 30% more than iOS development.

Cross-platform development takes less time. Although some coding has to be done natively, the time taken to develop one app is, obviously, less than to develop two.

Native apps can use all system resources; no other type of app gets the same level of access to device features. They are able to use the maximum computing power provided by the GPU and CPU; this means that load times are often pretty fast.

Cross-platform apps have restricted access to system resources. Their access is dependent on framework plugins and permissions.

Hybrid apps usually take more time to load because smartphone GPUs are generally less powerful than other machines. Consequently, unpacking a HTML5 UI takes more time on a mobile device. The same reason forced Facebook to shift their mobile apps from hybrid to native, which, according to Facebook, improved their app load time and the loading of the newsfeed and images in the app.

The most common challenge with cross-platform mobile development is balancing the requirements of iOS and Android UX design.
iOS is quite strict about its UX and UI design formats. That increases the chances of rejection from the App Store and causes more recurring cost. A critical aspect of native mobile apps is that, if they are designed properly and properly synchronized with the OS, they get regular software updates. That can be quite a difficult task for cross-platform apps. Finally, the most important consideration that should determine your choice is your (or the customer's) requirements. If you want to build a brand around your app, like a business or an institution, or your app is going to need a lot of GPU support like a game, then native is the way to go. But if your requirement is simply to create awareness and spread information about an existing brand or business on a limited budget, then cross-platform is probably the best route to go down.

How to integrate Firebase with NativeScript for cross-platform app development
Xamarin Forms 3, the popular cross-platform UI Toolkit, is here!
A cross-platform solution with Xamarin.Forms and MVVM architecture

Angular 2 in the new world of web dev

Owen Roberts
04 Feb 2016
5 min read
This week at Packt we're all about Angular, and with the release of Angular 2 just on the horizon there's no better time to be an Angular user. Our first book on Angular was Mastering Web Application Development with AngularJS back in 2013, and it's amazing to see how the JS landscape has become a completely different place than what it was just 3 or 4 years ago. How so? Well, Backbone was expected to lord over other frameworks as The Top Dog, while others like Ember and Knockout were carving their own respectable niches and fans. When Angular started to pick up steam it was seen as a breath of fresh air thanks to its simplicity and host of features. Compared to the more niche-driven frameworks at the time, the appeal of the Google-led powerhouse drove developers all over to give it a go, and managed to keep them hooked. Of course web dev is a different world than it was in 2013. We've seen the growth of full-stack JS development, JS promises are becoming more widely used, components are the latest step in building web apps, and a host of new frameworks and libraries have burst onto the scene as older ones begin to fade into the background. Libraries like React and Polymer are fantastic alternatives to frameworks for developers who want to pick and choose the best stuff for their apps, while Ember has gone from strength to strength in the last few years with a diehard fanbase. A different world means that rewriting Angular from the ground up for 2.0 makes sense, but it's not without its risks too. So, what does Angular need to avoid falling behind? Here are a few ideas (and hopes!)

Ease-of-use
One of Angular's greatest strengths was how easy it was to use; not just in the actual coding, but also in integration. Angular has always had that bonus over the competition – one of the biggest reasons it became so popular was because so many other projects allowed for easy Angular integration. However, the other side of the coin was Angular's equally difficult learning curve; before books and tutorials found their way onto the market, everyone was trying to find as much as they could about Angular in order to get the most out of the more complex or difficult parts of the framework. With 2.x being a complete rewrite, every developer is back in the same place again. What the Angular team needs to ensure is that Angular is just as welcoming as its new competition - React, Ember, and even Polymer offer a host of ways to get into their development mindsets. Angular needs to do the same.

Debugging
Does anyone actually like debugging? My current attempts at Python usually grind to a halt when I reach the debugging phase, and for a lot of developers there's always that whisper of "Urgh" under their breath when they finally get around to bugs. Angular isn't any different, and you can find a lot of articles and Stack Overflow questions all about debugging in Angular. For what it's worth, the Angular team seems to have learnt from their experiences with 1.x. They've worked directly with the team at Rangle.io to create Batarangle, which is a Chrome plugin that checks Angular 2 apps. Only time will tell how well debugging in Angular will work for every developer, but this is the sort of thing that the Angular team needs to give developers – work with other teams to build better tools that help developers breeze through the more difficult tasks.

The future devs vs the old
With the release of Angular 2 in the coming months we're going to see React and Angular 2 fight for dominance as the de facto framework on the JS market.
The rewrite of Angular is arguably both the biggest weakness and the biggest strength that Angular 2 offers. For previous Angular 1.x users there are two routes you can go down:
Take the jump to Angular 2 and learn everything again.
Decide the clean slate is an opportunity to give React a try – maybe even stick with it.
What does Angular need to do, after the release of 2, to get old users back on the Angular horse? A few of the writers that I've worked with in the past have talked about Angular as the Lego of the JS world – it's simpler to pick up and everything fits snugly together. There's a great simplicity in building good-looking Angular apps – the team needs to remind more jaded Angular 1.x fans that 2.x is the same Angular they love, rebuilt for the new challenges of 2016 onwards. It's still fun Lego, but shinier. If you're new to the framework and want to see why it's become such a beloved framework, then be sure to check out our Angular tech page; this page has all our best eBooks and videos, as well as the chance to preorder our upcoming Angular 2 titles to download the chapters as soon as they're finished.

Tic-Tac-OOP

Liz Tom
26 Feb 2016
5 min read
Hi there, if you're like me, you might have struggled with the idea behind object-oriented programming using JavaScript. I had no idea what I was doing! And while reading up about OOP helps, for me, the thing that really solidifies a new concept in my head is to build something. So, today we're going to build Tic Tac Toe.

First Things First
First, let's think about the various classes we might need to make this simple game. Well, if we break down the components of a game there's a board, players, and the actual game itself. JavaScript is a prototype-based language. This means that instead of writing classes and having objects instantiated from the class, you directly write objects. Let's get started with building out Tic Tac Toe and hopefully you'll be able to see some differences.

Ready Player One
So Tic Tac Toe generally has two players, X and O. We can achieve this by creating a Player object. What types of things do players have? Players have names, whether they are an X or an O, and maybe how many games they've won. There are a few ways to approach this. You can make an object literal:

playerOne = {
  name: 'Wonder Woman',
  marker: 'X'
};

Use the constructor pattern:

var Player = function(name, marker) {
  this.name = name;
  this.marker = marker;
};

var playerOne = new Player('Wonder Woman', 'X');

Or we can add each method and property directly to the prototype:

function Player() {
  Player.prototype.name = 'Wonder Woman';
  Player.prototype.marker = 'X';
}

var playerOne = new Player();

Now you can see updating properties on the last one might not be a great experience. I don't really want all my instances of Player to be Wonder Woman. I like to set my properties using the constructor method (so they're easy to update when I create new instances) and I like to add methods by adding them to the object's prototype. While you can declare functions when you use the constructor pattern, what ends up happening is that you recreate that method every time you declare a new instance of the object. Putting methods on the prototype has you declaring a method only once but still allows you access to it. People smarter than me say it helps keep memory usage lower. Anyway, back to the task at hand. We've made a Player object and an instance of a player. Player One (aka Wonder Woman) is ready to go! But sadly Wonder Woman has nowhere to go.

Building the Game
So what else do you need in a tic tac toe game? Perhaps a board? Some spaces to populate that board? A game? These all sound great to me! A game probably needs to keep track of who is playing and what board they're playing on. A board might keep track of the various spaces. Spaces probably need to know where they are located.

var Game = function(playerOne, playerTwo) {
  this.playerOne = playerOne;
  this.playerTwo = playerTwo;
  this.board = new Board();
  this.currentPlayer = playerOne;
  this.turn = 1; // player one moves first
};

var Board = function() {
  this.spaces = [];
  for (var x = 0; x < 3; x++) {
    for (var y = 0; y < 3; y++) {
      this.spaces.push(new Space(x, y));
    }
  }
};

var Space = function(xCoord, yCoord) {
  this.xCoord = xCoord;
  this.yCoord = yCoord;
};

Ok, so we have our objects. We've put some properties on those objects. Now it's time for some functionality. There are some basic things I want to be able to do with tic tac toe. So let's say I'm Wonder Woman. What are the kinds of questions I'm going to ask myself to play this game? Is it my turn? Can I use this space? Did I win? These questions are going to be the basic methods we're going to add to our awesome objects. First up: Is it my turn?

So we need a way to keep track of turns. Easy peasy.

Game.prototype = {
  // Flip whose turn it is; returns the player whose turn just ended.
  switchPlayer: function() {
    if (this.turn === 2) {
      this.turn = 1;
      this.currentPlayer = this.playerOne;
      return this.playerTwo;
    } else {
      this.turn = 2;
      this.currentPlayer = this.playerTwo;
      return this.playerOne;
    }
  }
};

Now we need to know: can a player even use that space, and also who is occupying that space?

Space.prototype = {
  // Claim the space for a player if nobody has marked it yet.
  markSpace: function(player) {
    if (!this.marked) {
      this.marked = player;
      return true;
    }
    return false;
  }
};

Did the player win?

Board.prototype = {
  winner: function(player) {
    var coords = [];
    this.spaces.forEach(function(space) {
      if (space.marked === player) {
        coords.push({ x: space.xCoord, y: space.yCoord });
      }
    });
    if (checkThree(coords)) return true;
    if (checkDiagonal(coords)) return true;
  }
};

var checkThree = function(coords) {
  // winning game logic here!
};

var checkDiagonal = function(coords) {
  // more winning game logic here! or you can combine the two into one function
};

I'll let you come up with the winning game logic; I put my logic in a function that my winner method called (one possible sketch appears after the author bio below). If you'd like to check out this game in action, you can go here. Thanks for joining me on this exciting journey. If you end up making a sweet tic tac toe game, I'd love to see it! Tweet at me: @lizzletom

If you're interested in learning more about the world of OOP in JavaScript check out our Introduction to Object-Oriented Programming using Python, JavaScript, and C#, a perfect jumping point to using OOP for any situation.

About the author
Liz Tom is a Software Developer at Pop Art, Inc in Portland, OR. Liz's passion for full stack development and digital media makes her a natural fit at Pop Art. When she's not in the office, you can find Liz attempting parkour and going to check out interactive displays at museums.
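The winning logic is deliberately left as an exercise above; the sketch below is just one way you might fill in those two stubs, not the author's original implementation (that lives behind the "go here" link). It assumes the same coords array of {x, y} objects built in Board.winner.

// One possible take on the win-checking stubs. `coords` holds the {x, y}
// positions the player has marked, as collected in Board.prototype.winner.
var checkThree = function(coords) {
  // Three marks sharing the same x (a column) or the same y (a row).
  for (var i = 0; i < 3; i++) {
    var sameX = coords.filter(function(c) { return c.x === i; });
    var sameY = coords.filter(function(c) { return c.y === i; });
    if (sameX.length === 3 || sameY.length === 3) return true;
  }
  return false;
};

var checkDiagonal = function(coords) {
  var has = function(x, y) {
    return coords.some(function(c) { return c.x === x && c.y === y; });
  };
  // Main diagonal (0,0),(1,1),(2,2) or anti-diagonal (0,2),(1,1),(2,0).
  return (has(0, 0) && has(1, 1) && has(2, 2)) ||
         (has(0, 2) && has(1, 1) && has(2, 0));
};

Both functions only ever look at the marks belonging to the player being checked, so they slot straight into the winner method without any other changes.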

AI on mobile: How AI is taking over the mobile devices marketspace

Sugandha Lahoti
19 Apr 2018
4 min read
If you look at the current trends in the mobile market space, a lot of mobile phone manufacturers portray artificial intelligence as the chief feature in their mobile phones. The total number of developers who build for mobile is expected to hit 14m mark by 2020, according to Evans Data survey. With this level of competition, developers have resorted to Artificial Intelligence to distinguish their app, or to make their mobile device stand out. AI on Mobile is the next big thing. AI on Mobile can be incorporated in multiple forms. This may include hardware, such as AI chips as seen on Apple’s iPhone X or software-based, such as Google’s TensorFlow for Mobile. Let’s look in detail how smartphone manufacturers and mobile developers are leveraging the power of AI for Mobile for both hardware and software specifications. Embedded chips and In-device AI Mobile Handsets nowadays are equipped with specialized AI chips. These chips are embedded alongside CPUs to handle heavy lifting tasks in smartphones to bring AI on Mobile. These built-in AI engines can not only respond to your commands but also lead the way and make decisions about what it believes is best for you. So, when you take a picture, the smartphone software, leveraging the power of AI hardware correctly identifies the person, object, or location being photographed and also compensates for low-resolution images by predicting the pixels that are missing. When we talk about battery life, AI allocates power to relevant functions eliminating unnecessary use of power. Also, in-device AI reduces data-processing dependency on cloud-based AI, saving both energy, time and associated costs. The past few months have seen a large number of AI-based silicon popping everywhere. The trend first began with Apple’s neural engine, a part of the new A11 processor Apple developed to power the iPhone X.  This neural engine powers the machine learning algorithms that recognize faces and transfer facial expressions onto animated emoji. Competing head first with Apple, Samsung revealed the Exynos 9 Series 9810. The chip features an upgraded processor with neural network capacity for AI-powered apps. Huawei also joined the party with Kirin 970 processor, a dedicated Neural Network Processing Unit (NPU) which was able to process 2,000 images per minute in a benchmark image recognition test. Google announced the open beta of its Tensor Processing Unit 2nd Gen. ARM announced its own AI hardware called Project Trillium, a mobile machine learning processor.  Amazon is also working on a dedicated AI chip for its Echo smart speaker. Google Pixel 2 features a Visual Core co-processor for AI. It offers an AI song recognizer, superior imaging capabilities, and even helps the Google Assistant understand the user commands/questions better. The arrival of AI APIs for Mobile Apart from in-device hardware, smartphones also have witnessed the arrival of Artificially intelligent APIs. These APIs add more power to a smartphone’s capabilities by offering personalization, efficient searching, accurate video and image recognition, and advanced data mining. Let’s look at a few powerful machine learning APIs and libraries targeted solely to Mobile devices. It all began with Facebook announcing Caffe2Go in 2016. This Caffe version was designed for running deep learning models on mobile devices. It condensed the size of image and video processing AI models by 100x, to run neural networks with high efficiency on both iOS and Android. 
Caffe2Go became the core of Style Transfer, Facebook’s real-time photo stylization tool. Then came Google’s TensorFlow Lite in 2017 announced at the Google I/O conference. Tensorflow Lite is a feather-light upshot for mobile and embedded devices. It is designed to be Lightweight, Speedy, and Cross-platform (the runtime is tailormade to run on various platforms–starting with Android and iOS.) TensorFlow Lite also supports the Android Neural Networks API, which can run computationally intensive operations for machine learning on mobile devices. Following TensorFlow Lite came Apple’s CoreML, a programming framework designed to make it easier to run machine learning models on iOS. Core ML supports Vision for image analysis, Foundation for natural language processing, and GameplayKit for evaluating learned decision trees. CoreML makes it easier for apps to process data locally using machine learning without sending user information to the cloud. It also optimizes models for Apple mobile devices, reducing RAM and power consumption. Artificial Intelligence is finding its way into every aspect of a mobile device, whether it be through hardware with dedicated AI chips or through APIs for running AI-enabled services on hand-held devices. And this is just the beginning. In the near future, AI on Mobile would play a decisive role in driving smartphone innovation possibly being the only distinguishing factor consumers think of while buying a mobile device.

What are APIs? Why should businesses invest in API development?

Packt Editorial Staff
25 Jul 2019
9 min read
Application Programming Interfaces (APIs) are like doors that provide access to information and functionality to other systems and applications. APIs share many of the same characteristics as doors; for example, they can be as secure and closely monitored as required. APIs can add value to a business by allowing the business to monetize information assets, comply with new regulations, and also enable innovation by simply providing access to business capabilities previously locked in old systems. This article is an excerpt from the book Enterprise API Management written by Luis Weir. This book explores the architectural decisions, implementation patterns, and management practices for successful enterprise APIs. In this article, we’ll define the concept of APIs and see what value APIs can add to a business. APIs, however, are not new. In fact, the concept goes way back in time and has been present since the early days of distributed computing. However, the term as we know it today refers to a much more modern type of APIs, known as REST or web APIs. The concept of APIs Modern APIs started to gain real popularity when, in the same year of their inception, eBay launched its first public API as part of its eBay Developers Program. eBay's view was that by making the most of its website functionality and information also accessible via a public API, it would not only attract, but also encourage communities of developers worldwide to innovate by creating solutions using the API. From a business perspective, this meant that eBay became a platform for developers to innovate on and, in turn, eBay would benefit from having new users that perhaps it couldn't have reached before. eBay was not wrong. In the years that followed, thousands of organizations worldwide, including known brands, such as Salesforce.com, Google, Twitter, Facebook, Amazon, Netflix, and many others, adopted similar strategies. In fact, according to the programmableweb.com (a well-known public API catalogue), the number of publicly available APIs has been growing exponentially, reaching over 20k as of August 2018. Figure 1: Public APIs as listed in programmableweb.com in August 2018 It may not sound like much, but considering that each of the listed APIs represents a door to an organization's digital offerings, we're talking about thousands of organizations worldwide that have already opened their doors to new digital ecosystems, where APIs have become the product these organizations sell and developers have become the buyers of them. Figure: Digital ecosystems enabled by APIs In such digital ecosystems, communities of internal, partner, or external developers can rapidly innovate by simply consuming these APIs to do all sorts of things: from offering hotel/flight booking services by using the Expedia API, to providing educational solutions that make sense of the space data available through the NASA API. There are ecosystems where business partners can easily engage in business-to-business transactions, either to resell goods or purchase them, electronically and without having to spend on Electronic Data Interchange (EDI) infrastructure. Ecosystems where an organization's internal digital teams can easily innovate as key enterprise information assets are already accessible. So, why should businesses care about all this? There is, in fact, not one answer but multiple, as described in the subsequent sections. APIs as enablers for innovation and bimodal IT What is innovation? 
According to a common definition, innovation is the process of translating an idea or invention into a good or service that creates value or for which customers will pay. In the context of businesses, according to an article by HBR, innovation manifests itself in two ways: Disruptive innovation: Described as the process whereby a smaller company with fewer resources is able to successfully challenge established incumbent businesses. Sustaining innovation: When established businesses (incumbents) improve their goods and services in the eyes of existing customers. These improvements can be incremental advances or major breakthroughs, but they all enable firms to sell more products to their most profitable customers. Why is this relevant? It is well known that established businesses struggle with disruptive innovation. The Netflix vs Blockbuster example reminds us of this fact. By the time disruptors are able to catch up with an incumbent's portfolio of goods and services, they are able to do so with lower prices, better business models, lower operation costs, and far more agility, and speed to introduce new or enhanced features. At this point, sustaining innovation is not good enough to respond to the challenge. With all the recent advances in technology and the internet, the rate at which disruptive innovation is challenging incumbents has only grown exponentially. Therefore, in order for established businesses to endure the challenge put upon them, they must somehow also become disruptors. The same HBR article describes a point of view on how to achieve this from a business standpoint. From a technology standpoint, however, unless the several systems that underpin a business are "enabled" to deliver such disruption, no matter what is done from a business standpoint, this exercise will likely fail. Perhaps by mere coincidence, or by true acknowledgment of the aforesaid, Gartner introduced the concept of bimodal IT in December 2013, and this concept is now mainstream. Gartner defined bimodal IT as the following: "The practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility. Mode 1 is traditional and sequential, emphasizing safety and accuracy. Mode 2 is exploratory and nonlinear, emphasizing agility and speed." Figure: Gartner's bimodal IT According to Gartner, Mode 1 (or slow) IT organizations focus on delivering core IT services on top of more traditional and hard-to-change systems of record, which in principle are changed and improved in longer cycles, and are usually managed with long-term waterfall project mechanisms. Whereas for Mode 2 (or fast) IT organizations, the main focus is to deliver agility and speed, and therefore they act more like a startup (or digital disruptor in HBR terms) inside the same enterprise. However, what is often misunderstood is how fast IT organizations can disruptively innovate, when most of the information assets, which are critical to bringing context to any innovation, reside in backend systems, and any sort of access has to be delivered by the slowest IT sibling. This dilemma means that the speed of innovation is constrained to the speed by which the relevant access to core information assets can be delivered. As the saying goes, "Where there's a will there's a way." APIs could be implemented as the means for the fast IT to access core information assets and functionality, without the intervention of the slow IT. 
By using APIs to decouple the fast IT from the slow IT, innovation can occur more easily. However, as with everything, it is easier said than done. In order to achieve such organizational decoupling using APIs, organizations should first build an understanding about what information assets and business capabilities are to be exposed as APIs, so fast IT can consume them as required. This understanding must also articulate the priorities of when different assets are required and by whom, so the creation of APIs can be properly planned for and delivered. Luckily for those organizations that already have mature service-oriented architectures (SOA), some of this work will probably already be in place. For organizations without such luck, this activity should be planned for and should be a fundamental component of the digital strategy. Then the remaining question would be: which team is responsible for defining and implementing such APIs; the fast IT or slow IT? Although the long answer to this question is addressed throughout the different chapters of this book, the short answer is neither and both. It requires a multi-disciplinary team of people, with the right technology capabilities available to them, so they can incrementally API-enable the existing technology landscape, based on business-driven priorities. APIs to monetize on information assets Many experts in the industry concur that an organization's most important asset is its information. In fact, a recent study by Massachusetts Institute of Technology (MIT) suggests that data is the single most important asset for organizations "Data is now a form of capital, on the same level as financial capital in terms of generating new digital products and services. This development has implications for every company's competitive strategy, as well as for the computing architecture that supports it." If APIs act as doors to such assets, then APIs also provide businesses with an opportunity to monetize them. In fact, some organizations are already doing so. According to another article by HBR, 50% of the revenue that Salesforce.com generates comes from APIs, while eBay generates about 60% of its revenue through its API. This is perhaps not such a huge surprise, given that both of these organizations were pioneers of the API economy. Figure: The API economy in numbers What's even more surprising is the case of Expedia. According to the same article, 90% of Expedia's revenue is generated via APIs. This is really interesting, as it basically means that Expedia's main business is to indirectly sell electronic travel services via its public API. However, it's not all that easy. According to the previous study by MIT, most of the CEOs for Fortune 500 companies don't yet fully acknowledge the value of APIs. An intrinsic reason for this could be the lack of understanding and visibility over how data is currently being (or not being) used. Assets that sit hidden on systems of record, only being accessed via traditional integration platforms, will not, in most cases, give insight to the business on how information is being used, and the business value it adds. APIs, on the other hand, are better suited to providing insight about how/by whom/when/why information is being accessed, therefore giving the business the ability to make better use of information to, for example, determine which assets have better capital potential. In this article we provided a short description of APIs, and how they act as an enabler to digital strategies. 
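To ground the "APIs as doors" idea from this article in something concrete, here is a deliberately small sketch of what exposing a locked-away information asset as a REST API might look like, using Node.js with Express. Everything in it (the getCustomerFromLegacySystem helper, the route, the port) is a hypothetical illustration rather than anything prescribed by the book this article is excerpted from.

// A minimal sketch: wrapping a legacy information asset behind an HTTP "door".
// Assumes Express is installed (npm install express); the legacy lookup below
// is a hypothetical stand-in for a system of record.
const express = require('express');

const app = express();

// Hypothetical adapter to a system of record (database, mainframe, SOA service...).
async function getCustomerFromLegacySystem(id) {
  // In a real project this would call the slow-IT system; here we fake it.
  return { id, name: 'Ada Lovelace', segment: 'gold' };
}

// The API itself: a door that can be monitored, secured and, eventually, monetized.
app.get('/customers/:id', async (req, res) => {
  try {
    const customer = await getCustomerFromLegacySystem(req.params.id);
    res.json(customer);
  } catch (err) {
    res.status(502).json({ error: 'Legacy system unavailable' });
  }
});

app.listen(3000, () => console.log('Customer API listening on port 3000'));

A fast-IT team can then build against GET /customers/:id without ever touching the system of record directly, which is exactly the decoupling between the two modes of IT described above.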
Define the right organisation model for business-driven APIs with Luis Weir's upcoming release Enterprise API Management.
To create effective API documentation, know how developers use it, says ACM
GraphQL API is now generally available
Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more

Why Google Dart Will Never Win The Battle For The Browser

Ed Gordon
30 Dec 2014
5 min read
This blog is not about programming languages as much as it's about products and what makes good products (or more specifically, why good products sometimes don't get used). I won't talk about the advantages or disadvantages of the syntax or how they work as programming languages, but I will talk about the product side. We can all have an opinion on that, right?

Real people use Dart. Really.
I think we've all seen recently a growth in the number of adopters for 'compile to JavaScript' languages – TypeScript and Dart being the primary ones, and an honourable mention to CoffeeScript for trying before most others. Asana just switched out their hundreds of thousands of lines of JS code to TypeScript. I know that apps like Blosom are swapping out the JS-y bits of their code piece by piece. The axiom of my blog is that these things offer real developers (which I'm not) real advantages, right now. They're used because they are good products. They add productivity to a user base that is famously short on time and always working to tight deadlines. They take away no functionality (or very little, for the pedants out there) of JavaScript, but you get all the added benefits that the creators deigned to add. And for the select few, they can be a good choice. For online applications where a product lifespan may be 5 years, or less, worries about code support for the next 20 years (anyone who uses Perl still) melt away. They aren't doing this because it's hipster, they're doing it because it works for them and that's cool. I dig that. They will never, however, "ultimately… replace JavaScript as the lingua franca of web development".

Just missed the bull's eye
The main issue from a product perspective is that they are, by design, a direct response to the perceived shortcomings of JavaScript. Their value, and destiny as a product, is to be used by people who have struggled with JavaScript – is there anyone in the world who learned Dart before they learned JavaScript? They are linked to JavaScript in a way that limits their potential to that of JavaScript. If Dart is the Mercedes-Benz of the web languages (bear with me), then JavaScript is just "the car" (that is, all cars). If you want to drive over the Alps, you can choose the comfort of a Merc if you can afford it, but it's always going to ultimately be a car – four wheels that take you from point to point. You don't solve the problems of 'the car' by inventing a better car. You replace it by creating something completely different. This is why, perhaps, they struggle to see any kind of adoption over the long term. Google Trends can be a great proxy for market size and adoption, and as you can see, "compile-to" languages just don't seem to be able to hold ground over a long period of time. After an initial peak of interest, the products tend to plateau or grow at a very slow rate. People aren't searching for information on these products because, in their limited capacity as 'alternatives to JavaScript', they offer no long-term benefit to the majority of developers who write JavaScript. They have dedicated fans, and loyal users, but that base is limited to a small number of people. They are a 'want' product. No one needs them. People want the luxury of static typing, but you don't need it. People want cleaner syntax, but don't need it. But people need JavaScript. For "compile-to" languages to ever be more than a niche player, they need to transition from a 'want' product to a 'need' product. It's difficult to do that when your product also 'needs' the thing that you're trying to outdo.

Going all out
In fact, all the 'compile to' tools, languages and libraries have a glass ceiling that becomes pretty visible from their Google Trends. Compare this to a Google language that IS its own product, Google Go, and we can see stark differences. Google Go is a language that offers an alternative to Python (and more, it's a fully featured programming language), but it's not even close to being Python. It can be used independently of Python – could you imagine if Google Go said, "We have this great product, but you can only use it in environments that already use Python. In fact, it compiles to Python. Yay."? This could work initially, but it would stink for the long-term viability of Go as a product that's able to grow organically, create its own ecosystem of tools and dedicated users, and carve out its own niche and area in which it thrives. Being decoupled from another product allows it to grow.

A summary of sorts
That's not to say that JavaScript is perfect. It itself actually started as a language designed to coat-tail the fame of Java (albeit a very different language). And when there are so many voices trying to compete with it, it becomes apparent that not all is well with the venerable king of the web. ECMAScript 6 (and 7, 8, 9 ad infinitum) will improve on it, and make it more accessible – eventually incorporating into it the 'differences' that set things like Dart and TypeScript apart, and taking the carpet from under their feet. It will remain the lingua franca of the web until someone creates a product that is not beholden to JavaScript and not limited to what JavaScript can, or cannot, do. Dart will never win the battle for the browser. It is a product that many people want, but few actually need.

Top 5 cybersecurity myths debunked

Guest Contributor
11 Jul 2018
6 min read
Whether it's for work or pleasure, we are all spending more time online than ever before. Given how advanced and user-friendly modern technology is, it is not surprising that the online world has come to dominate the offline. However, as our lives are increasingly digitized, the need to keep us and our information secure from criminals has become increasingly obvious. Recently, a virtually unknown marketing and data-aggregation company, Exactis, fell victim to a major data breach. According to statements, the company might've been responsible for exposing up to 340 million individual records on a publicly accessible server. In this day and age, data breaches are not a rare occurrence. Major corporations face cybersecurity problems on a daily basis. Clearly, there is a thriving criminal market for hackers. But how can the average internet user keep safe? Knowing these 5 myths will definitely help you get started!

Myth 1: A Firewall keeps me safe
As you would expect, hackers know a great deal about computers. The purpose of what they do is to gain access to systems that they should not have access to. According to research from Breach Investigation Reports, cybersecurity professionals only regard 17% of threats as being highly challenging. This implies that they view the vast majority of what they do as very easy. All businesses and organizations should maintain a firewall, but it should not lull you into a false sense of security. A determined hacker will use a variety of online and offline techniques to get into your systems. Just last month, Cisco, a well-known tech company, discovered 24 security vulnerabilities in their firewalls, switches, and security devices. On June 20, the company released the necessary updates, which counteract those vulnerabilities. While firewalls are a security measure, it is essential to understand that they are susceptible to something known as a zero-day attack. Zero-day attacks are unknown, or newly designed, intrusions that target vulnerabilities before a security patch is released.

Myth 2: HTTPS means I'm secure
Sending information over an HTTPS connection means that the information will be encrypted and secured, preventing snooping from outside parties. HTTPS ensures that data is safe as it is transferred between a web server and a web browser. While HTTPS will keep your information from being decrypted and read by a third party, it remains vulnerable. Though the HTTPS protocol has been developed to ensure secure communication, the infamous DROWN attack proved everyone wrong. As a result of DROWN, more than 11 million HTTPS websites had their virtual security compromised. Remember, from the perspective of a hacker who's looking for a way to exploit your website, the notion of unbreakable or unhackable does not exist.

Myth 3: My host ensures security
This is a statement that's never true. Hosting service providers are responsible for thousands of websites, so it is absurd to think that they can manage security on each one individually. They might have some excellent general security policies in place, yet they can't ensure total security for quite a few reasons. Just like any other company that collects and maintains data, hosting providers are just as susceptible to cyber attacks. Just last year, Deep Hosting, a Dark Web hosting provider, suffered a security breach, which led to some sites being exported. It's best not to assume that your host has it covered when it comes to your security. If you haven't set the protections up yourself, consider them non-existent until you've seen and configured them.

Myth 4: No Internet connection means no virtual security threats
This is a pervasive myth, but a myth nonetheless. Unless you are dealing with a machine that is literally never allowed to connect to a network, at some point it will communicate with other computers. Whenever this happens, there is the potential for malware and viruses to spread. In some instances, malware can infect your operating system via physical data-sharing devices like USB drives or CDs. Infecting your computer with malware could have detrimental outcomes. For instance, a ransomware application can easily encrypt vast quantities of data in just a few moments. Your best bet to maintain a secure system at all times is to run a reliable antimalware tool on your computer. Don't assume that just because a computer has remained offline, it can't be infected. In 2013, the first reports came in that scientists had developed a prototype malware that might be able to use inaudible audio signals to communicate. As a result of that, a malicious piece of software could communicate and potentially spread to computers that are not connected to a network.

Myth 5: A VPN ensures security
VPNs can be an excellent way of improving your overall online security by hiding your identity and making you much more difficult to trace. However, you should always be very careful about the VPN services that you use, especially if they are free. There are many free VPNs which exist for nefarious purposes. They might be hiding your IP address (many are not), but their primary function is to siphon away your personal data, which they will then sell. The simplest way to avoid these types of thefts is to, first of all, ensure that you thoroughly research and vet any service before using it. Check this list to be sure that a VPN service of your choice does not log data. Often a VPN's selling point is security and privacy. However, that's not the case at all times. Not too long ago, PureVPN, a service that stated in its policies that it maintains a strict no-log approach at all times, was exposed as having lied. As it turns out, the company handed over information to the FBI regarding the activity of a cyberbully, Ryan Lin, who used a number of security tools, including PureVPN, to conceal his identity.

Many users have fallen prey to virtual security myths and suffered detrimental consequences. Cybersecurity is something that we should all take more seriously, especially as we are putting more of our lives online than ever before. Knowing the above 5 cybersecurity myths is a useful first step in implementing better practices yourself.

About the author
Harold Kilpatrick is a cybersecurity consultant and a freelance blogger. He's currently working on a cybersecurity campaign to raise awareness around the threats that businesses can face online.

Cryptojacking is a growing cybersecurity threat, report warns
Top 5 cybersecurity assessment tools for networking professionals
How can cybersecurity keep up with the rapid pace of technological change?