Tech Guides

Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher

Vincy Davis
28 May 2019
8 min read
On the latest Recode Decode episode, Kara Swisher (Recode co-founder) interviewed DuckDuckGo CEO Gabriel Weinberg about data tracking and why it is time for Congress to act, since federal legislation is necessary in the current climate of constant surveillance. DuckDuckGo is an internet search engine that emphasizes protecting searchers' privacy. Its market share in the U.S. is about 1%, compared to the more than 88% share owned by Google. Below are some of the key highlights of the interview.

On how DuckDuckGo is different from Google

DuckDuckGo, an internet privacy company, helps users "escape the creepiness and tracking on the internet". It has been an alternative to Google for 11 years, handles about a billion searches a month, and is the fourth-largest search engine in the U.S. Weinberg states that "Google and Facebook are the largest traders of trackers", and claims that his company blocks trackers from hundreds of companies. DuckDuckGo also enables more encryption by steering users to the encrypted version of a website wherever one exists. This prevents Internet Service Providers (ISPs) from tracking the user.

When asked why he settled on the 'search business', Weinberg replied that coming from a tech background (tech policy at MIT), he has always been interested in search. After developing this business, he received many privacy queries. That is when he realized: "One, searches are essentially the most private thing on the internet. You just type in all your deepest, darkest secrets and search, right? The second thing is, you don't need to actually track people to make money on search," so he realized this would be a "better user experience, and just made the decision not to track people."

Read More: DuckDuckGo chooses to improve its products without sacrificing user privacy

The switch from contextual advertising to behavioral advertising

From the early days of the internet until the mid-2000s, the prevailing model was contextual advertising. It followed a very simple routine: "sites used to sell their own ads, they would put advertising based on the content of the article". Post mid-2000s, the model shifted to behavioral advertising, which includes the "creepy ads, the ones that kind of follow you around the internet." Weinberg added that when website publishers in the Google Network of content sites sold their biggest inventory, banner advertising went at the top of the page. To generate more revenue, the bottom of the page was sold to ad networks that target the site's content and audience. These advertisements are administered, sorted, and maintained by Google under the name AdSense, which gave Google all the behavioral data: if a user searched for something, Google could follow them around with that search. As these advertisements became more lucrative, publishers ceded most of their page over to behavioral advertising, and there has been "no real regulation in tech" to prevent this. Through these trackers, companies like Google, Facebook and many others collect user information, including purchase history, location history, browsing history and search history.
Read More: Ireland's Data Protection Commission initiates an inquiry into Google's online Ad Exchange services
Read More: Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices

Weinberg explains: "when you go to, now, a website that has advertising from one of these networks, there's a real-time bidding against you, as a person. There's an auction to sell you an ad based on all this creepy information you didn't even realize people captured."

People do 'care about privacy'

Weinberg says that "before you knew about it, you were okay with it because you didn't realize it was so invasive, but after Cambridge Analytica and all the stories about the tracking, that number just keeps going up and up and up." He also discussed the "do not track" setting available in most browsers' privacy settings: "People are like, 'No one ever goes into settings and looks at privacy.' That's not true. Literally, tens of millions of Americans have gone into their browser settings and checked this thing. So, people do care!" Weinberg believes 'do not track' is a better mechanism for privacy laws, because once the user flips the setting, no further consent popups are needed and sites may no longer track them. He also hopes Congress passes a 'do not track' law, as it would allow everyone in the country to opt out of tracking. Technically, the mechanism is tiny: when enabled, the browser simply attaches a "DNT: 1" header to every request it makes, as the sketch below shows.
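Below is a minimal sketch (my own illustration, not from the interview) of how a Go server might honor that header; recordVisit is a hypothetical stand-in for a real analytics call:

package main

import (
    "fmt"
    "log"
    "net/http"
)

// trackingMiddleware skips analytics for visitors whose browser sends
// the "DNT: 1" header set by the 'do not track' browser preference.
func trackingMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.Header.Get("DNT") != "1" {
            // Only record analytics when the visitor has not opted out.
            recordVisit(r.URL.Path)
        }
        next.ServeHTTP(w, r)
    })
}

// recordVisit is a placeholder for a real analytics hook.
func recordVisit(path string) {
    fmt.Println("visit:", path)
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello")
    })
    log.Fatal(http.ListenAndServe(":8000", trackingMiddleware(mux)))
}

The point is how little machinery 'do not track' needs on the receiving end: the signal arrives with every request, and respecting it is a one-line check.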
On challenging Google

One main issue DuckDuckGo faces is that not many people are aware of it. Weinberg says, "There's 20 percent of people that we think would be interested in switching to DuckDuckGo, but it's hard to convey all these privacy concepts." He also claimed that companies like Google alter people's searches through the 'filter bubble'. As an example, he added, "when you search, you expect to get the results right? But we found that it varies a lot by location". Last year, DuckDuckGo accused Google of search personalization that contributes to "filter bubbles". In 2012, DuckDuckGo ran a study suggesting Google's filter bubble may have significantly influenced the 2012 U.S. Presidential election by inserting tens of millions more links for Obama than for Romney in the run-up to that election.

Read More: DeepMind researchers provide theoretical analysis on recommender system, 'echo chamber' and 'filter bubble effect'

How to prevent online tracking

Other than using DuckDuckGo and avoiding, say, Google's internet home devices, Swisher asked Weinberg what other ways there are to protect ourselves from being tracked online. Weinberg says there are plenty of options: "For Google, there are actually alternatives in every category." For email, he suggested ProtonMail and FastMail. When asked about Facebook, he admitted that "there aren't great alternatives to it" and added cheekily, "Just leave it". He further noted that devices themselves carry a bunch of privacy settings, and mentioned the DuckDuckGo blog spreadprivacy.com, which provides privacy advice and tips. Users can also turn off ad tracking on their devices or use end-to-end encryption.

On facial recognition systems

Weinberg says "Facial recognition is hard". A person can wear some minor accessory to avoid being identified on camera. He admits "you're going to need laws" to regulate its use, and thinks San Francisco started a great trend in banning the technology.

Many other points were also discussed by Swisher and Weinberg, including Section 230 of the Communications Decency Act and controlling sensitive data on the internet. Weinberg also asserted that the U.S. needs a national bill like GDPR. Questions were raised about Amazon's growing advertising business alongside Google and Facebook. Weinberg also dismissed the probability of a DuckDuckGo for YouTube anytime soon.

Many users agree with Gabriel Weinberg that data tracking should be opt-in and that it is time to make 'do not track' the norm. A user on Hacker News commented, "Discounting Internet by axing privacy is a nasty idea. Privacy should be available by default without any added price tags." Another user added, "In addition to not stalking you across the web, DDG also does not store data on you even when using their products directly. For me that is still cause for my use of DDG." However, as Weinberg notes, there are still people who do not mind being tracked online, perhaps because they are unaware of the big trades that take place behind a user's single click. A user on Reddit put the underlying reason aptly: "Privacy matters to people at home, but not online, for some reason. I think because it hasn't been transparent, and isn't as obvious as a person looking in your windows. That slowly seems to be changing as more of these concerns are making the news, more breaches, more scandals. You can argue the internet is "wandering outside", which is true to some degree, but it doesn't feel that way. It feels private, just you and your computer/phone, but it's not. What we experience is not matching up with reality. That is what's dangerous/insidious about the whole thing. People should be able to choose when to make themselves "public", and you largely can't because it's complicated and obfuscated."

For more details about their conversation, check out the full interview.

Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling?
'Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse' say experts to House Oversight and Reform Committee
GDPR complaint in EU claim billions of personal data leaked via online advertising bids

6 ways you can give an Artificially Intelligent Makeover to Web Development

Sugandha Lahoti
19 Dec 2017
8 min read
The web is an ever-changing world! Users are always seeking personalized content and richer experiences - websites that can do whatever users want, however they want, and exactly as they want. In other words, end-users expect smarter applications with self-learning capabilities and hyper-customized user experiences. This poses a major challenge for developers: how do they design websites that deliver new and personalized content every time?

Traditional approaches to web development are not the answer. They can, in fact, be problematic. Why, you ask? Here are some clues. Building basic layouts and designing the website alone takes time, never mind customizing for dynamic content. Web app testing is a tedious, time-intensive process prone to errors. Even mundane web development decisions depend on the developer, slowing down the time taken to go live.

Automating the web development process, starting with the more structured, repetitive and clearly defined tasks, can help developers pay less attention to cumbersome details and focus on the more value-adding aspects of development, such as formulating design strategy and planning the ultimate user experience. Artificial Intelligence can help not just with this kind of intelligent automation, but with a lot more - from design conceptualization and website implementation to web analytics. This human-machine collaboration has the potential to transform the web as we know it.

How AI can improve web development

Through the lens of a typical software development lifecycle, let's look at some ways in which AI is transforming web development.

1. Automating complex requirement gathering and analysis

Using an AI-powered chatbot or voice assistant, for instance, one can automate the process of collecting client requirements and end-user stories without human intervention. It can also prepare a detailed description of the gathered data and use data extraction tools to generate insights that then drive the web design and development strategy. This is possible through a carefully constructed system that employs NLP, ML, computer vision and image recognition algorithms and tools, among others. Kore.ai is one such platform, which aims to give decision makers the insights they need to drive business outcomes through data-driven analytics.

2. Simplifying web design with AI virtual assistants

Designing basic layouts and templates of web pages is a tedious job for all developers. AI tools such as The Grid's virtual assistant Molly can help here by simplifying the design process. By prompting questions to the user (in this case the web owner or even the developer) and extracting content from their answers, AI assistants can create personalized content with the exact combination of branding, layout, design, and content required by that user. Take Adobe Sensei, for instance - it can automatically analyze inputs and recommend design elements to the user. These range from automating basic photo editing tasks such as cropping, using image recognition techniques, to creating elements in images that didn't exist before by studying the neighbouring pixels. Developers now need only focus on training a machine to think and perform like a designer.

3. Redefining web programming with self-learning algorithms

AI can also facilitate programming.
AI programs can perform basic tasks like updating and adding records to a database, predict which bits of code are most likely to be used to solve a problem, and then use those predictions to prompt developers to adopt a particular solution. An interesting example is Pix2Code, which aims at automating front-end development. In fact, AI algorithms can also be used to create self-modifying code from scratch - think of it as a fully functional piece of code written without human involvement. Developers can therefore build smarter apps and bots using AI tech at much faster rates than before. They would, however, need to train these machines and feed them good datasets to start with. The smarter the design and the more comprehensive the training, the better the results these systems produce. This is where the developers' skills make a crucial difference.

4. Putting testing and quality assurance on auto-pilot

AI algorithms can help an application test itself, with little or no human input. They can predict the key parameters of software testing processes based on historical data. They can also detect failure patterns and amplify failure prediction at a much higher efficiency than traditional QA approaches. Bug identification and remediation will thus no longer be a long, slow process. As we speak, Microsoft is readying the release of an AI bug finder for developers, Microsoft Security Risk Detection. In this new AI-powered QA environment, developers can discover more effective ways of testing, identify outliers faster, and work on effective code coverage techniques, all with no or basic testing experience. In simple terms, developers can just focus on perfecting the build while the AI handles complex test cases and the resulting bugs automatically.

5. Harnessing web analytics for SEO with AI

SEO strategies rely heavily on number crunching. Many web analytics tools are good, but their potential is currently limited by the processing capabilities of the humans who interpret that data for their websites. With AI-backed data mining and analytics, one can maximize the usefulness of a website's metadata and other user-generated data. Predictive engines built using AI technologies can generate insights that point developers to irregularities in their website architecture or highlight ill-fitting content from an SEO point of view. Using such insights, AI can suggest better ways to design websites and develop web content that connects with the target audience. Market Brew is one such AI-driven SEO platform, which uses AI to help developers plan the content for their websites around how search engines might perceive it.

6. Providing a superior end-user experience with chatbots

AI-powered chatbots can take customer support and interaction to the next level. A simple rule-based chatbot responds only to specific preset commands. An AI-infused chatbot, on the other hand, can simulate an actual conversation by learning something new from every exchange and tailoring answers and actions accordingly. Such bots can automate routine tasks and provide relevant information and services. Imagine the possibilities here - these chatbots can enhance visitor engagement by responding to queries and to comments on blog posts, and provide real-time assistance and personalization. eBay's AI-powered ShopBot, built on Facebook Messenger, is one such chatbot; it helps consumers narrow down the best deals from eBay's entire listings and answers customer-centric queries. The contrast with a rule-based bot is easiest to see in code, as the sketch below illustrates.
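Here is a minimal, illustrative sketch (not tied to any particular product) of the rule-based kind: every input must match a preset command exactly, and nothing is learned between exchanges - precisely the limitations an AI-infused bot overcomes.

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

// A rule-based chatbot: every input must match a preset command.
// An AI-infused bot would instead classify intent and learn from each exchange.
var replies = map[string]string{
    "hi":      "Hello! How can I help you today?",
    "pricing": "Our plans start at $10/month.",
    "hours":   "Support is available 9am-5pm, Monday to Friday.",
}

// respond looks the input up in the hard-coded command table.
func respond(input string) string {
    if reply, ok := replies[strings.ToLower(strings.TrimSpace(input))]; ok {
        return reply
    }
    return "Sorry, I only understand: hi, pricing, hours."
}

func main() {
    fmt.Println("Ask me something:")
    scanner := bufio.NewScanner(os.Stdin)
    for scanner.Scan() {
        fmt.Println(respond(scanner.Text()))
    }
}

Everything interesting about an AI chatbot - intent classification, learning from each conversation, tailored responses - happens in the space this hard-coded map leaves empty.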
Skill up, developers!

With the rise of AI, it is clear that developers will play a different role from the traditional programmer-cum-designer. Developers will need to adapt their technical skillsets to rise above and complement the web development work that AI is capable of taking over. For example, they will now need to focus more on training AI algorithms to learn web and mobile usage patterns for better recommendations. A data-driven approach is now required, with more focus on curating the data, taking the software through the process of learning by itself, and writing scripts to interact with the software. To do this, web developers need to get up to speed with the basics of machine learning, natural language processing, deep learning, and so on, and apply the related tools and techniques to their web development workflow.

The Future Perfect

Artificial Intelligence has found its way into everything imaginable. Within web development this translates to automated web design, intelligent application development, highly proficient recommendation engines, and much more. Today, the use of AI in web development is nice-to-have ammunition in a web developer's toolkit; soon, AI will make itself indispensable to the web development ecosystem, ushering in the intelligent web. Developers will be a hybrid of designers, programmers and ML engineers who have a good grasp of user experience, are comfortable thinking in abstracts and algorithms, and are equally well versed in translating them into AI-assisted, elegant code. The next milestone for AI in web development is building self-improving apps that can think beyond the confines of human thought. Such apps would have the ability to perceive connections between data points that have not previously been considered, or are out of reach of human intelligence. The ultimate goal of such machines, on the web and otherwise, would be to gain clairvoyance on aspects humans are missing or oblivious to. Here's hoping that when such a revolutionary AI hits the market, it impacts society for the greater good.

3 ways JupyterLab will revolutionize Interactive Computing

Amey Varangaonkar
17 Nov 2017
4 min read
The history of the Jupyter notebook is quite interesting. It started as a spin-off project of IPython in 2011, with support for the leading languages for data science such as R, Python, and Julia. As the project grew, Jupyter's core focus shifted to being more interactive and user-friendly. It soon became clear that Jupyter wasn't just an extension of IPython - leading to the 'Big Split' in 2014. Code reusability, easy sharing and deployment, and extensive support for third-party extensions - these are some of the factors that have made Jupyter the notebook of choice for most data professionals. And now, the Jupyter team plans to go a level beyond with JupyterLab - the next-gen Jupyter notebook with strong interactive and collaborative computing features.

What is JupyterLab? JupyterLab is the next-generation end-user version of the popular Jupyter notebook, designed to enhance interaction and collaboration among users. It takes all the familiar features of the Jupyter notebook and presents them through a powerful, user-friendly interface.

Here are 3 ways - or reasons, shall we say - to look forward to this exciting new project, and how it will change interactive computing as we know it.

1. Improved UI/UX

One of Jupyter's strongest and most popular traits is that it is very user-friendly, and the overall experience of working with it is second to none. With improvements in the UI/UX, JupyterLab offers a cleaner interface, with an overall feel very similar to the current Jupyter notebooks. Although JupyterLab has been built with a web-first vision, it also provides a native Electron app for a simplified user experience. The other key difference is that JupyterLab is quite command-centric, encouraging users to prefer keyboard shortcuts for quicker tasks. These shortcuts differ a bit from those of other text editors and IDEs, but they are customizable.

2. Better workflow support

Many data scientists start coding on an interactive shell and then migrate their code to a notebook for building and deployment purposes. With JupyterLab, users can perform all these activities more seamlessly and with minimal effort. It offers a document-less console for quick data exploration and an integrated text editor for running blocks of code outside the notebook.

3. Better interactivity and collaboration

Probably the defining feature that propels JupyterLab over Jupyter and the other notebooks is how interactive and collaborative it is. JupyterLab has a side-by-side editing feature and provides a crisp layout that allows you to view your data, the notebook, your command console and some graphical display, all at the same time. Better real-time collaboration is another big feature promised by JupyterLab: users will be able to share their notebooks Google Drive or Dropbox style, without having to switch to a different tool. It will also support a plethora of third-party extensions to this effect, with a Google Drive extension being the most talked about. Popular Python visualization libraries such as Bokeh will be integrated with JupyterLab, as will extensions to view and handle different file types, such as CSV for interactive rendering and GeoJSON for geographic data structures.

JupyterLab has gained a lot of traction in the last few years.
While it is still some time away from being generally available, the current indicators look quite strong. With over 2,500 stars and 240 enhancement requests on GitHub already, the strong interest among users is clear. Judging by its initial impressions on users, JupyterLab hasn't made a bad start at all, and looks well and truly set to replace the current Jupyter notebooks in the near future.

BrickCoin might just change your mind about cryptocurrencies

Savia Lobo
11 Apr 2018
3 min read
At the start of 2018, the cryptocurrency boom seemed to be at an end. Bitcoin's price plunged in a matter of weeks from a mid-December 2017 high of $20,000 to less than $10,000. Suddenly everything seemed unpredictable and volatile. The cryptocurrency honeymoon was over. However, while many are starting to feel cautious about investing, a new cryptocurrency might change the game. BrickCoin might well be the cryptocurrency to reinvigorate a world that has shifted from optimism to pessimism in just a couple of months. But what is BrickCoin? How is it different from other cryptocurrencies? And most importantly, why might you be confident in its success?

What is BrickCoin?

BrickCoin is also a blockchain-based currency, but one backed by real estate held in REITs (Real Estate Investment Trusts). It aims to be the first regulated, KYC- and AML-compliant real estate crypto. Real estate is comprehensible to regulators and is accepted by many as a robust asset class - a major distinguishing point between BrickCoin and other cryptocurrencies. Traditional money-saving methods - savings accounts and fixed deposits - are not inflation-proof, and they currently pay very low interest. On the other hand, complex investment options such as hedge funds are typically only available to wealthy individuals, as they require large initial investments; they also do not offer ready liquidity and are vulnerable to bankruptcy. BrickCoin comes to the rescue here, as it claims to be inflation-proof. Find out more about BrickCoin here.

The key features of BrickCoin

It is a savings token which can be bought with traditional currency or digital currency.
It represents an investment in a piece of commercial, debt-free real estate.
The real estate is held as part of a very secure, high-value, debt-free REIT.
BrickCoins are kept in a mobile digital wallet.
All transactions are fully managed, validated and trackable by blockchain technology.
BrickCoins can be converted into fiat currency instantly.

Also read about Crypto-ML, a machine learning powered cryptocurrency platform.

BrickCoin is essentially the next step in the evolution of cryptocurrency. It is a savings scheme backed by a non-inflationary asset - commercial, debt-free real estate - to deliver stable capital preservation. As a cryptocurrency, it allows savers to convert their money to and from BrickCoin tokens with the full security and convenience of blockchain technology. BrickCoin aims to be the first cryptocurrency that bridges the gap between the necessary reliance on fiat currencies and the asset-backed wealth-creation opportunities that are often out of reach for many ordinary savers. To see why a blockchain makes those transactions trackable and tamper-evident, consider the toy sketch below.
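The article does not detail BrickCoin's chain internals, so what follows is a generic, illustrative sketch of the hash-linking that underlies any blockchain ledger: each block stores the hash of its predecessor, so altering any past record invalidates every hash that follows.

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
)

// Block links a transaction record to the hash of the previous block,
// which is what makes the chain tamper-evident.
type Block struct {
    Record   string
    PrevHash string
    Hash     string
}

// hashBlock derives a block's hash from its record and its predecessor's hash.
func hashBlock(record, prevHash string) string {
    sum := sha256.Sum256([]byte(record + prevHash))
    return hex.EncodeToString(sum[:])
}

// addBlock appends a new record to the chain, linking it to the last block.
func addBlock(chain []Block, record string) []Block {
    prev := ""
    if len(chain) > 0 {
        prev = chain[len(chain)-1].Hash
    }
    return append(chain, Block{record, prev, hashBlock(record, prev)})
}

func main() {
    var chain []Block
    chain = addBlock(chain, "alice buys 10 tokens") // illustrative records only
    chain = addBlock(chain, "bob buys 5 tokens")
    for _, b := range chain {
        fmt.Printf("%s -> %s\n", b.Record, b.Hash[:12])
    }
}

A real chain adds consensus, signatures and distribution on top, but the tamper-evidence all flows from this simple linking.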
Crypto News
Cryptojacking is a growing cybersecurity threat, report warns
Coinbase Commerce API launches
Crypto-ML, a machine learning powered cryptocurrency platform

Crypto Op-Ed
There and back again: Decrypting Bitcoin's 2017 journey from $1000 to $20000
Will Ethereum eclipse Bitcoin?
Beyond the Bitcoin: How cryptocurrency can make a difference in hurricane disaster relief

Cryptocurrency Tutorials
Predicting Bitcoin price from historical and live data
How to mine bitcoin with your Raspberry Pi
Protecting Your Bitcoins

‘Computing technology at a tipping point’, says WEF Davos Panel

Melisha Dsouza
30 Jan 2019
9 min read
The ongoing World Economic Forum meeting 2019 has seen a vast array of discussions on political, technological and other industrial agendas. The meeting brings together the world's foremost CEOs, government officials, policy-makers, experts, academics, international organizations, youth, technology innovators and representatives of civil society, with the aim of driving positive change in the world on multiple fronts. This article focuses on the talk 'Computing Technology at a Tipping Point', moderated by Nicholas Carlson from Business Insider, with a panel consisting of Antonio Neri, president and Chief Executive Officer of Hewlett Packard Enterprise; Jeremy O'Brien, CEO of PsiQuantum; and Amy Webb, Adjunct Assistant Professor at NYU Stern School of Business. Their discussion explored questions of the day: why this is an important time for technology, the role of governments in encouraging a technological revolution, the role of community and business in optimizing tech, and the challenges we face as we set out to use next-generation computing technologies like quantum computing and AI.

Quantum Computing - The necessity of the future

The discussion kicked off with the importance of quantum computing in the present as well as the future. O'Brien described quantum computing as "nothing short of a necessary tool that humans need to build their future". According to him, QC is a "genuinely exponentially powerful technology", given the range of applications it can impact if put to use the right way - from human health and energy to molecular chemistry, among others. Webb calls 2019 the year of divergence, where we will move from the classic von Neumann architecture to a more diversified quantum age. Neri believes we are now at the end of Moore's law, which states that overall processing power for computers will double every two years. He says that two years from now we will generate twice the amount of data generated today, and there will be a major divergence between the data generated and the available computation power. This is why we need to focus on solving the architectural problems of processing algorithms and computing data, rather than on the amount of data itself.

Why is this an exciting time for tech?

O'Brien: Quantum computing and molecular simulation for techno-optimism

O'Brien expressed his excitement about quantum computing and molecular simulation, fields where developers are just touching the water. He has been in the QC field for the past 20 years and has faith in quantum computing; even though it is the next big thing to watch out for, he assures developers that it will not replace conventional computing. Quantum computers can in fact be used to improve the performance of classical computing systems in handling the huge amounts of data and information we face today. Beyond QC, another concept he believes 'will transform lives' is molecular simulation, which will design new pharmaceuticals and new chemicals, and help build really sophisticated computers to solve exponentially large problems.

Webb: The beginning of the end of smartphones

"We are in the midst of a great transformation. This is an explosion happening in slow motion." Based on data-driven models, she says this is the beginning of the end of smartphones.
Ten years from now, as our phones retrieve biometric information and information derived from what we wear and use, computing environments will look different. Citing the example of Magic Leap, which creates spatial glasses, she mentions how wearable computing devices will turn our environment into a computable space where data is visualized in a whole different way. She advises businesses to rethink how they function, even as the current cloud vs. edge and computer architectures change. Companies should start thinking in terms of ten years rather than the short term, since decisions made today will have long-term consequences. While this is the positive side, Webb is pessimistic that there is no global alignment on the use of data; systems have to be trained on the basis of GDPR and other data laws.

Neri: Continuous re-skilling to stay relevant

Humans should continuously re-skill themselves as times and technologies change, to avoid exclusion from new jobs as and when they arrive. He further states that in the field of artificial intelligence there should not be a concentration of power in a few entities like Baidu, Alibaba, Tencent, Google, Microsoft, Facebook and Apple. While these companies are at the forefront of deciding the future of AI, innovation should happen at all levels. We need guidelines and policy - not to regulate, but to guide the revolution. Business, community and government should start thinking about ethical and moral codes.

Government's role in technological optimism

The speakers emphasized the importance of governments' involvement in these 'exciting times' and how they can work towards making citizens feel safe against the possible abuse of technology.

Webb: Regulation of AI doesn't make sense

We need to have conversations on optimizing artificial intelligence using available data. She expresses the opinion that regulating AI doesn't make sense, because it shifts decisions from the people who understand and implement optimization to lawmakers who lack the technical know-how. Nowadays, people focus on regulating tech instead of optimizing it because most don't understand the nitty-gritty of a system, nor its limitations. Governments play a huge role in this optimization-or-regulation decision. She emphasizes the need to bring the right people to an agreement, "where companies are a hero to their shareholders and the government to their citizens". Governments should start talking about and exploring quantum computing so that its benefits are distributed equitably in the shortest amount of time.

Neri: A human-centered future of computing

He adds that for a human-centered future of computing, it is we who need to decide what is good or bad for us. He agrees with Webb that since technology evolves in ways we cannot anticipate, we need to come to reasonable conclusions before a crisis arrives. Further, he adds that governments should instill moral ethics when adopting and implementing technology and innovation.

The role of politicians in technology

During the discussion, a member of the European Parliament noted the common notion that politicians do not understand technology and cannot keep up with changing times. Observing that many companies do not think about governance, human rights, democracy and the possible abuse of their products, she argued that we need a minimum threshold to protect human rights and safeguard people against abuse.
Her question centered on ways to help politicians understand tech better before it is too late. Expressing her gratitude that the European Parliament was asking such a thoughtful question, Webb suggested that creating a framework that key people on all sides of the spectrum can agree to, along with a mechanism that incentivises everyone to play fairly, would help parliaments and other law-making bodies feel included in understanding technology. Neri also suggested a guiding principle: think ethically before using any technology, without stopping innovation.

Technological progress in China and its implications for the U.S.

Another question that caught our attention concerned the progress of technology in China and its implications for the U.S. Webb says that the tools, technologies, frameworks and data-gathering mechanisms used to mine, refine and monetize data are developing along different lines in the U.S. and China. In China, AI activities and the activities of Baidu, Alibaba and Tencent sit under the leadership of the Chinese Communist Party. She says it is hard to overlook what is happening in China with the BRI (Belt and Road Initiative), 5G, digital transformation, expansion in fibre and expansion in e-commerce; a new world order is being formed as a result. She worries that the U.S. and its allies will be locked out economically from the BRI countries, and that AI will be one of the factors propelling this.

The role of the military in technology

The last question pointed out that some of the worst abuses of technology can come from governments, and that the military has the potential to misuse technology. We need conversations on the ethical use of technology and on how to design technology to fit ethical norms. Neri says that corporations do have a point of view on the military's use of technology, and governments consult them on the impact of technology on the world as well. This is a hard topic, and the debate is ongoing even if it is not visible to the public. Webb says that the U.S. tech industry has always had ties with the government, and we live in a world of social media where conversations about this spiral out of control. She advises companies to meet quarterly to have conversations along these lines and to examine how their work with the military and government aligns with their core values.

Sustainability and technology

Neri states that 6% of global power is used to run data centers, and it is important to determine how to address this problem. The solutions proposed were:

Innovate in different ways.
Be mindful of the entire supply chain, from the time you procure minerals to build a system until you recycle it. We need to think of a circular economy.
Consider whether systems can be re-used by other companies; check which parts can be recycled and reused.
Use synthetic DNA to back up data - this could potentially use less energy.

To sustain human life on this planet, we need to optimize how we use resources, physical and virtual. QC tools will help invent the future; materials themselves can be designed using QC. You can listen to the entire talk on the World Economic Forum's official page.

What the US-China tech and AI arms race means for the world - Frederick Kempe at Davos 2019
Microsoft's Bing 'back to normal' in China
Facebook's outgoing Head of communications and policy takes the blame for hiring PR firm 'Definers' and reveals more

Modern Go Development

Xavier Bruhiere
06 Nov 2015
8 min read
The Go language indisputably generates a lot of discussion. Bjarne Stroustrup famously said:

There are only two kinds of languages: the ones people complain about and the ones nobody uses.

Many developers indeed share their usage retrospectives and the flaws they came to hate: no generics, no official tool for vendoring, built-in methods that break the rules Go's creators want us to endorse. The language ships with a bunch of principles and a strong philosophy. Yet the Go Gopher is making its way through companies. AWS is releasing its Go SDK, HashiCorp's tools are written in Go, and so are serious databases like InfluxDB and Cockroach. The language doesn't fit everywhere, but its concurrency model, its cross-platform binary format, and its lightning speed are powerful features. For the curious reader, Texlution digs deeper in Why Golang is doomed to succeed. Go is also intended to be simple; however, one should gain a clear understanding of the language's conventions and data structures before producing efficient code. In this post, we will carefully set up a Go project to introduce a robust starting point for further development.

Tooling

Let's kick off the work with a standard Go project layout. New toys in town try to rethink the way projects are organized, but I like to keep things simple as long as they just work. Assuming familiarity with the Go installation and the GOPATH mess, we can focus on the code's root directory.

➜ code tree -L 2
.
├── CONTRIBUTING.md
├── CHANGELOG.md
├── Gomfile
├── LICENCE
├── main.go
├── main_test.go
├── Makefile
├── shippable.yml
├── README.md
├── _bin
│   ├── gocov
│   ├── golint
│   ├── gom
│   └── gopm
└── _vendor
    ├── bin
    ├── pkg
    └── src

To begin with, README.md, LICENCE and CONTRIBUTING.md are the usual important documents for any code expected to be shared or used. Especially with open source, we should care about and clearly state what the project does, how it works and how one can (and cannot) use it. Writing a changelog is also a smart step in that direction.

Package manager

The package manager is certainly a huge matter of discussion among developers. The community was left to build upon the go get tool, and many solutions have arisen to bring deterministic builds to Go code. While most of them are good enough tools, Godep is the most widely used, but Gom is my personal favorite:

Simplicity with explicit declaration and tags

# Gomfile
gom 'github.com/gin-gonic/gin', :commit => '1a7ab6e4d5fdc72d6df30ef562102ae6e0d18518'
gom 'github.com/ogier/pflag', :commit => '2e6f5f3f0c40ab9cb459742296f6a2aaab1fd5dc'

Dependency groups

# Gomfile (continuation)
group :test do
    # testing libraries
    gom 'github.com/franela/goblin', :commit => 'd65fe1fe6c54572d261d9a4758b6a18d054c0a2b'
    gom 'github.com/onsi/gomega', :commit => 'd6c945f9fdbf6cad99e85b0feff591caa268e0db'
    gom 'github.com/drewolson/testflight', :commit => '20e3ff4aa0f667e16847af315343faa39194274a'
    # testing tools
    gom 'golang.org/x/tools/cmd/cover'
    gom 'github.com/axw/gocov', :commit => '3b045e0eb61013ff134e6752184febc47d119f3a'
    gom 'github.com/mattn/goveralls', :commit => '263d30e59af990c5f3316aa3befde265d0d43070'
    gom 'github.com/golang/lint/golint', :commit => '22a5e1f457a119ccb8fdca5bf521fe41529ed005'
    gom 'golang.org/x/tools/cmd/vet'
end

Self-contained project

# install gom binary
go get github.com/mattn/gom
# ... write Gomfile ...
# install production and development dependencies in `./_vendor`
gom -test install

We just declared and bundled the full requirements under the project's root directory.
This approach plays nicely with trendy containers.

# we don't even need Go to be installed
# install tooling in ./_bin
mkdir _bin && export PATH=$PATH:$PWD/_bin
docker run --rm -it --volume $PWD/_bin:/go/bin golang go get -u -t github.com/mattn/gom
# assuming the same Gomfile as above
docker run --rm -it --volume $PWD/_bin:/go/bin --volume $PWD:/app -w /app golang gom -test install

An application can quickly come to rely on a significant number of external resources. Dependency managers like Gom offer a simple workflow to avoid breaking-change pitfalls - a widespread curse in our fast-paced industry.

Helpers

The ambitious developer in love with productivity can complete this toolbox with powerful editor settings, an automatic fixer, a Go repl, a debugger, and so on. Despite being young, the language comes with a growing set of tools that help developers produce a healthy codebase.

Code

With basic foundations in place, let's develop a micro server powered by Gin, an impressive web framework I have had great experience with. The code below highlights common best practices one can use as a starter.

// {{ Licence informations }}
// {{ build tags }}

// Package {{ pkg }} does ...
//
// More specifically it ...
package main

import (
    // built-in packages
    "log"
    "net/http"

    // third-party packages
    "github.com/gin-gonic/gin"
    flag "github.com/ogier/pflag"
    // project packages placeholder
)

// Options stores cli flags
type Options struct {
    // Addr is the server's binding address
    Addr string
}

// Hello greets incoming requests
// Because exported identifiers appear in godoc, they should be documented correctly
func Hello(c *gin.Context) {
    // follow HTTP REST good practices with an adequate http code and a json-formatted response
    c.JSON(http.StatusOK, gin.H{"hello": "world"})
}

// Handler maps endpoints with callbacks
func Handler() *gin.Engine {
    // gin's default instance provides logging and crash recovery middlewares
    router := gin.Default()
    router.GET("/greeting", Hello)
    return router
}

func main() {
    // parse command line flags
    opts := Options{}
    flag.StringVar(&opts.Addr, "addr", ":8000", "server address")
    flag.Parse()

    if err := Handler().Run(opts.Addr); err != nil {
        // exit with a message and a status code of 1 on errors
        log.Fatalf("error running server: %v\n", err)
    }
}

We're going to take a closer look at two important parts this snippet is missing: error handling and the benefits of interfaces.

Errors

One tool we could have mentioned above is errcheck, which checks that you checked errors. While it sometimes produces cluttered code, Go's error handling strategy enforces rigorous development:

When justified, use errors.New("message") to provide a helpful output.
If one needs custom arguments to produce a sophisticated message, use fmt.Errorf("math: square root of negative number %g", f).
For even more specific errors, let's create new ones:

type CustomError struct {
    arg  int
    prob string
}

// Usage: return -1, &CustomError{arg, "can't work with it"}
func (e *CustomError) Error() string {
    return fmt.Sprintf("%d - %s", e.arg, e.prob)
}

Interfaces

Interfaces in Go unlock many patterns. In the golden age of components, we can leverage them for API composition and proper testing. The following example defines a Project structure with a Database attribute.

type Database interface {
    Write(string, string) error
    Read(string) (string, error)
}

type Project struct {
    db Database
}

func main() {
    db := backend.MySQL()
    project := &Project{db: db}
}

Project doesn't care about the underlying implementation of the db object it receives, as long as that object implements the Database interface (i.e. implements the Read and Write signatures). Meaning, given a clear contract between components, one can switch MySQL and Postgres backends without modifying the parent object. Apart from this separation of concerns, we can mock a Database and inject it to avoid heavy integration tests, as the sketch below shows.
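To make that last point concrete, here is a minimal sketch of such a mock, in the same snippet style as above (the InMemoryDB name is my own illustration, not part of the article's project):

// InMemoryDB is a trivial in-memory implementation of the Database
// interface above - a stand-in for MySQL or Postgres in unit tests.
type InMemoryDB struct {
    data map[string]string
}

func NewInMemoryDB() *InMemoryDB {
    return &InMemoryDB{data: make(map[string]string)}
}

// Write satisfies Database by storing the pair in the map.
func (m *InMemoryDB) Write(key, value string) error {
    m.data[key] = value
    return nil
}

// Read satisfies Database, returning an error for unknown keys.
func (m *InMemoryDB) Read(key string) (string, error) {
    value, ok := m.data[key]
    if !ok {
        return "", fmt.Errorf("key %q not found", key)
    }
    return value, nil
}

// In a test, the Project under scrutiny never notices the difference:
// project := &Project{db: NewInMemoryDB()}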
Hopefully this tiny, carefully written snippet doesn't hide too many horrors, and we're going to build it with confidence.

Build

We didn't adopt a Test Driven Development style, but let's catch up with some unit tests. Go provides a full-featured testing package, but we are going to level up the game thanks to a complementary combo. Goblin is a thin framework featuring behavior-driven development, close to the awesome Mocha for node.js. It also integrates with Gomega, which brings us fluent assertions. Finally, testflight takes care of managing the HTTP server for pseudo-integration tests.

// main_test.go
package main

import (
    "testing"

    . "github.com/franela/goblin"
    . "github.com/onsi/gomega"

    "github.com/drewolson/testflight"
)

func TestServer(t *testing.T) {
    g := Goblin(t)

    // special hook for gomega
    RegisterFailHandler(func(m string, _ ...int) { g.Fail(m) })

    g.Describe("ping handler", func() {
        g.It("should return ok status", func() {
            testflight.WithServer(Handler(), func(r *testflight.Requester) {
                res := r.Get("/greeting")
                Expect(res.StatusCode).To(Equal(200))
            })
        })
    })
}

This combination allows readable tests to produce readable output. Given the crowd of developers who scan tests to understand new code, we have added real value to the project. It would certainly attract even more kudos with a green test suite. The following pipeline of commands tries to validate a clean, bug-free, code-smell-free, future-proof and coffee-making codebase.

# lint the whole project package
golint ./...
# run tests and produce a cover report
gom test -covermode=count -coverprofile=c.out
# make this report human-readable
gocov convert c.out | gocov report
# push the result to https://coveralls.io/
goveralls -coverprofile=c.out -repotoken=$TOKEN

Conclusion

Countless posts conclude this way, but I'm excited to state that we have merely scratched the surface of proper Go coding. The language exposes flexible primitives and unique characteristics one will learn the hard way, one experiment after another. Being able to ship a single binary, or just a package repository address, is one such example, as is the JavaScript support. This article introduced methods to kick-start Go projects, manage dependencies, organize code, and set up guidelines and a testing suite. Tweak this opinionated guide to your personal taste, and remember to write simple, testable code.

About the author

Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin and the guitar. You can reach him at @XavierBruhiere.

The AMPed up web by Google

Amarabha Banerjee
23 Sep 2018
5 min read
Google apparently wants all web developers to adopt the AMP approach for their websites. The AMP project was announced by Google on October 7, 2015, and AMP pages first became available to web users in February 2016. Mobile search is now more popular than desktop search, so it is important for web pages to appear in Google's mobile search results, and this is why AMP is not optional for web publishers: without AMP, a publisher's articles are extremely unlikely to appear in the Top Stories carousel on mobile search in Google. This leaves developers with two options: either design the complete app in the AMP format, or maintain two formats, one built to their own design considerations and the other to the Google AMP format. Does that really work for developers? We will try to address that question in this article.

The trouble with Web content - Searchability and Indexing

The searchability of your application depends heavily on its structure and format. Being found on Google search depends on how easily Google can crawl and index your application. The main challenge for indexing is the vastness of the internet and the wide variety of applications that exist. The absence of a common structure or format makes the task of checking website content and categorizing it very difficult. This was the primary reason Google came up with the idea of Accelerated Mobile Pages: the purpose of adopting AMP is to make all web and mobile applications conform to a certain structure, so that they can be easily classified and categorized. Since the arrival of the 'mobile-first' approach - one that puts more emphasis on the mobile platform and on UI considerations for mobile devices - the AMP way has slowly become the preferred way of designing apps. But the real question is whether developers are adopting this design thinking willingly, or finding themselves out of other options as Google forces their hand in how they design their apps.

The Ground Realities of the Web - Diversity vs Uniformity

The truth is that the internet is a diverse playing ground, a place for information sharing. As such, the general consensus is not exactly in line with Google's vision of a uniform web. Google started off as a search engine whose main task was to be the gateway to information - to lead people to specific web addresses. From there, it has evolved into one of the leading beneficiaries of the world wide web, and the next step in Google's evolution seems, quite naturally, to be taking control over content and hosting. Google also recently announced that it is going to lay undersea internet cable from Japan to Guam, and from Guam to Australia. Google portrays this as an economic decision that will eventually save money once the cables are laid, but some see it as a step towards removing external dependencies and a step closer to total control over the internet. Google's recent partnership with WordPress is further evidence that Google is moving into the web hosting space. The AMP specification means that Google will have the final say over design specifications. Design diversity will suffer, as you would not want to spend time designing a site that won't be indexed by Google.
Hence developers will only have two options: use the pre-designed template provided by Google, or make two specific website designs, one to their own design considerations and the other to AMP. But Google will keep showing error signs if the AMP version doesn't match the main design, so the choice finally narrows down to choosing AMP.

The trouble with AMP

Content published using AMP is stored in a Google cache, so repeated views are loaded from that cache. This also means the user actually spends more time on Google's own page and sees Google's ads rather than the ones the content creator had put up - which by extension means a loss of revenue for the actual content creator. Using analytics is far more difficult on AMP-based pages. AMP pages are difficult to customize, and hence difficult to design without looking similar; the web might end up with similar-looking apps with similar buttons and UIs all across. The AMP model also makes its own decisions about how your content is shown: you don't get to choose the metadata being displayed, Google does. That means less control over your content. With Google controlling how much of your website's data is displayed, all pages are going to look similar, with very little metadata shown, and fake stories will appear alongside normal news thumbnails because there will be too little text displayed to judge whether a story is true or false. All of this comes with the caveat that AMPed-up pages will rank higher on Google. If that's the proverbial carrot used to lure web publishers to bring their content under Google's umbrella, then we must say it's going to be a tricky choice for developers. No one wants a slow web with irregular indexing and erroneous search results. But how far are we prepared to let go of individuality and design thinking in the process? That's a question to ponder.

Google wants web developers to embrace AMP. Great news for users, more work for developers.
Like newspapers, Google algorithms are protected by the First Amendment
Implementing Dependency Injection in Google Guice

You're not a web developer if you don't know JavaScript

Mario Casciaro
01 Jul 2015
6 min read
Mario Casciaro is a software engineer and technical lead with a passion for open source. After the recent publication of his successful book Node.js Design Patterns, we caught up with him to discuss his views on today's most important web development skills, and what the future holds.

The best tool for the job may not be in your skillset yet

I remember working on a small side project - something I try to do as much as possible, to put new skills into practice and try things outside of my job. It was a web application, something very similar to a social network, and I remember choosing Java with the Spring Framework as the main technology stack, and Backbone on the front end. At the time - around 4 years ago - I was an expert Java developer and considered it the technology with the most potential. It worked perfectly for enterprise web applications as well as mission-critical distributed applications, and even mobile apps. While Java is still a popular and valuable tool in 2015, my experience on this small side project made me rethink my opinion: I wouldn't use it today unless there was a particular need for it. I remember realizing at some point that I was spending a lot of my development time designing the object-oriented structure of the application and writing boilerplate code. Trying to find a solution, I migrated the project to Groovy and Grails, and on the front end I tried to implement a small homemade two-way binding framework. Things improved a little, but I still felt the need for something more agile on both ends, something more suited to the web.

The web moves fast, so always let your skills evolve

I wanted to try something radically different from the typical PHP, Ruby on Rails or Python on the server, or jQuery or Backbone on the client. Fortunately I discovered Node.js and Angular.js, and that changed everything. Using Node, I noticed my mindset shift from "how to do things" to "just get things done". Angular, on the other hand, revolutionized my approach to front-end development, allowing me to drastically cut the amount of boilerplate code I was writing. But most importantly, I realized that JavaScript and its ecosystem were becoming a seriously big thing. Today I would not even consider building a web application without JavaScript as my primary player. The number of packages on npm is staggering - a clear indication that the web has shifted towards JavaScript. The most impressive part of this story is the importance these new skills had in defining my career: if I wanted to build web applications, JavaScript and its amazing ecosystem had to be the focus of my learning efforts. This led me to find a job where Node, Angular and other cutting-edge JavaScript technologies played a crucial role in the success of the project I was in charge of creating. The culmination of my renewed interest in JavaScript is the book I published 6 months ago - Node.js Design Patterns - which contains the best of the experience I have accumulated since I devoted myself to the full-stack JavaScript mantra.

The technologies and the skills that matter today for a web developer

Today, if I had to advise someone approaching web development for the first time, I would definitely recommend starting with JavaScript. I wouldn't have said that 5-6 years ago, but today it's the only language that lets you get started on both the back end and the front end.
Moreover JavaScript, in combination with other web technologies such as HTML and CSS, gives you access to an even broader set of applications with the help of projects like nw.js and Apache Cordova. PHP, Ruby, and Python are still very popular languages for developing the server side of a web application, but for someone who already knows JavaScript, Node.js is the natural choice. Not only does it save you the time it takes to learn a new language, it also offers a level of integration with the front end that is impossible with other platforms. I'm talking, of course, about sharing code between the server and the client, and even implementing isomorphic applications which can run on both Node.js and the browser. React is probably the framework that offers the most interesting features in the area of isomorphic application development, and definitely something worth digging into. It's likely we'll also see a lot more from PouchDB, an isomorphic JavaScript database that will help developers build offline-enabled or even offline-first web applications more easily than ever before.

Always stay ahead of the curve

Today, as 4 years ago, the technologies that will play an important role in the web of tomorrow are already making an impact. WebRTC, for example, enables the creation of real-time peer-to-peer applications in the browser, without the need for any additional plugin. Developers are already using it to build fast and lightweight audio/video conferencing applications, or even complete BitTorrent clients in the browser! Another revolutionary technology is going to be Service Workers, which should dramatically improve the capabilities of offline applications. On the front end, Web Components are going to play a huge role, and the Polymer project has already demonstrated what this new set of standards will be able to create. With regards to JavaScript itself, web developers will have to become familiar with the ES6 standard sooner than expected, as cross-compilation tools such as Babel already allow us to use ES6 on almost any platform. But we should also keep an eye on ES7, as it will contain very useful features to simplify asynchronous programming. Finally, as the browser becomes the runtime environment of the future, the recently revealed WebAssembly promises to give the web its own "bytecode", allowing you to load code written in other languages from JavaScript. When WebAssembly becomes widely available, it will be common to see a complex 3D video game or a full-featured video editing tool running in the browser. JavaScript will probably remain the mainstream language for the web, but it will be complemented by the new possibilities introduced by WebAssembly.

If you want to explore the JavaScript ecosystem in detail, start with our dedicated JavaScript page. You'll find our latest and most popular titles, along with free tutorials and insights.
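As a taste of those asynchronous-programming features (the fetchUser and fetchPosts names here are illustrative stubs, not a real API), compare a plain ES6 Promise chain with the async/await syntax proposed for ES7, which Babel can already compile down today:

// Stubbed data sources so the sketch runs standalone
function fetchUser(id) { return Promise.resolve({ id: id, name: 'user' + id }); }
function fetchPosts(userId) { return Promise.resolve(['first post', 'second post']); }

// ES6: a Promise chain
function loadProfile(userId) {
  return fetchUser(userId).then(function (user) {
    return fetchPosts(user.id);
  });
}

// ES7 proposal: the same logic with async/await, reading like synchronous code
async function loadProfileAwait(userId) {
  var user = await fetchUser(userId);
  return await fetchPosts(user.id);
}

loadProfileAwait(1).then(function (posts) { console.log(posts); });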


DevOps might be the key to your Big Data project success

Ashwin Nair
11 Oct 2017
5 min read
So, you probably believe in the power of Big Data and the potential it has to change the world. Your company might have already invested in or is planning to invest in a big data project. That's great! But what if I were to tell you that only 15% of businesses successfully deploy their Big Data projects to production? That can't be a good sign, surely! Now, don't just go freeing up your Big Data budget. Not yet.

Big Data's Big Challenges

For all the hype around Big Data, research suggests that many organizations are failing to leverage its opportunities properly. A recent survey by NewVantage Partners, for example, explored the challenges facing organizations currently running their own Big Data projects or trying to adopt them. Here's what they had to say:

"In spite of the successes, executives still see lingering cultural impediments as a barrier to realizing the full value and full business adoption of Big Data in the corporate world. 52.5% of executives report that organizational impediments prevent realization of broad business adoption of Big Data initiatives. Impediments include lack of organizational alignment, business and/or technology resistance, and lack of middle management adoption as the most common factors. 18% cite lack of a coherent data strategy."

Clearly, even some of the most successful organizations are struggling to get a handle on Big Data. Interestingly, it's not so much about gaps in technology or even skills, but rather a lack of culture and organizational alignment that's making life difficult. This isn't actually that surprising. The problem of managing the effects of technological change goes far beyond Big Data - it's impacting the modern workplace in just about every department, from how people work together to how you communicate and sell to customers.

DevOps Distilled

It's out of this scenario that we've seen the irresistible rise of DevOps. DevOps, for the uninitiated, is an agile methodology that aims to improve the relationship between development and operations. It aims to ensure fluid collaboration between teams, with a focus on automating and streamlining monotonous and repetitive tasks within a given development lifecycle, thus reducing friction and saving time. We can begin to see, then, that this approach - usually used in typical software development scenarios - might actually offer a solution to some of the problems faced when it comes to big data.

A typical Big Data project

Like a software development project, a Big Data project will have multiple teams working on it in isolation. For example, a big data architect will look into the project requirements and design a strategy and roadmap for implementation, while the data storage and admin team will be dedicated to setting up a data cluster and provisioning infrastructure. Finally, you'll probably find data analysts who process, analyse and visualize data to gain insights. Depending on the scope and complexity of your project, it is possible that more teams are brought in - say, data scientists roped in to train and build custom machine learning models.

DevOps for Big Data: A match made in heaven

Clearly, there are a lot of moving parts in a typical Big Data project - each role performing considerably complex tasks. By adopting DevOps, you'll reduce any silos that exist between these roles, breaking down internal barriers and embedding Big Data within a cross-functional team.
It's also worth noting that this move doesn't just give you an operational efficiency advantage - it also gives you much more control and oversight over strategy. By building a cross-functional team, rather than asking teams to collaborate across functions (sounds good in theory, but it always proves challenging), there is a much more acute sense of a shared vision or goal. Problems can be solved together, and discussions can take place constantly and effectively. With the operational problems minimized, everyone can focus on the interesting stuff.

By bringing DevOps thinking into big data, you also set the foundation for what's called continuous analytics. Taking the principle of continuous integration, fundamental to effective DevOps practice, whereby code is integrated into a shared repository after every task or change to ensure complete alignment, continuous analytics streamlines the data science lifecycle by ensuring a fully integrated approach to analytics, where as much as possible is automated through algorithms. This takes away the boring stuff - once again ensuring that everyone within the project team can focus on what's important.

We've come a long way from Big Data being a buzzword - today, it's the new normal. If you've got a lot of data to work with, to analyze and to understand, you had better make sure you have the right environment set up to make the most of it. That means there's no longer an excuse for Big Data projects to fail, and certainly no excuse not to get one up and running. If it takes DevOps to make Big Data work for businesses, then it's a mindset worth cultivating and running with.
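To make the continuous-analytics idea less abstract, here is a toy sketch - the directory name and pipeline step are placeholders, not any particular product's API. Just as a commit triggers a CI build, new data landing in a folder triggers the analytics job automatically:

var fs = require('fs');
var path = require('path');

var DATA_DIR = './incoming'; // hypothetical landing folder for new data files

function runAnalytics(file) {
  // Placeholder for the real pipeline step: clean, aggregate, retrain, publish
  console.log('New data landed - re-running analytics for', file);
}

// Continuous analytics in miniature: every change in the data directory
// kicks off the pipeline, the way a commit kicks off a CI build
fs.watch(DATA_DIR, function (eventType, filename) {
  if (filename && path.extname(filename) === '.csv') {
    runAnalytics(path.join(DATA_DIR, filename));
  }
});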


UK's NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses

Fatema Patrawala
16 Sep 2019
7 min read
Last week, the UK's National Cyber Security Centre (NCSC) published a report on cyber incident trends in the UK from October 2018 to April 2019. The U.S. Department of Homeland Security Cybersecurity and Infrastructure Security Agency (CISA) has recommended this report as a way to understand and defend against the most prevalent cyber security threats. The NCSC report reveals five main threats and threat vectors that affected UK organizations: cloud services (Office 365 in particular); ransomware; phishing; vulnerability scanning; and supply chain attacks. The report examines each of these, presents specific methods used by threat actors, and provides tips for preventing and mitigating incidents.

NCSC report reveals cloud services and Office 365 as primary targets

The NCSC report highlights cloud services, and Office 365 in particular, as the primary target of attackers. The large-scale move to cloud services has put the IT infrastructure of many enterprises within reach of internet-based attacks, as these services are only protected by a username and password. Tools and scripts to try and guess users' passwords are abundant, and a successful login gives access to corporate data stored in all Office 365 services. For example, both SharePoint and Exchange could be compromised, as well as any third-party services an enterprise has linked to Azure AD.

Another common way of attacking Office 365 mentioned in the report is password spraying, in which attackers attempt a small number of commonly used passwords against multiple accounts. In most cases they aren't after one specific account, as this method can target a large number of accounts in one organisation without raising any suspicion. Credential stuffing is another common approach: it takes pairs of usernames and passwords from leaked data sets and tries them against other services, such as Office 365. According to the report, this is difficult to detect in logs, as an attacker may only need a single attempt to successfully log in if the stolen details match those of the user's Office 365 account. The report suggests several remediation strategies to prevent Office 365 accounts being compromised.

Ransomware attacks among enterprises continue to rise

Since the WannaCry and NotPetya attacks of 2017, ransomware attacks against enterprise networks have continued to rise in number and sophistication. The NCSC report notes that historically, ransomware was delivered as a standalone attack; today, attackers use their network access to maximise the impact of the ransomware. Cybercrime botnets such as Emotet, Dridex and Trickbot are commonly used as the initial infection vector, prior to retrieving and installing the ransomware, and the report also highlights the use of pen-testing tools such as Cobalt Strike. Ransomware such as Ryuk, LockerGoga, Bitpaymer and Dharma has been prevalent in recent months. Cases observed in the NCSC report often resulted from a trojanised document, sent via email, with the malware exploiting publicly known vulnerabilities and macros in Microsoft Office documents.

Some of the remediation strategies to prevent ransomware include:

- Reducing the chances of the initial malware reaching devices.
- Considering the use of URL reputation services, including those built into web browsers and offered by Internet service providers.
- Using email authentication via DMARC and DNS filtering products.
- Making it more difficult for ransomware to run once it is delivered.
- Having a tested backup of your data offline, so that it cannot be modified or deleted by ransomware.
- Effective network segregation, to make it more difficult for malware to spread across a network and thereby limit the impact of ransomware attacks.

Phishing is the most prevalent attack delivery method in the NCSC report

According to the NCSC report, phishing has been the most prevalent attack delivery method over the last few years, and in recent months just about anyone with an email address can be a target. Specific methods observed recently by the NCSC include:

- Targeting Office 365 credentials - persuading users to follow links to legitimate-looking login pages which prompt for O365 credentials. More advanced versions of this attack also prompt the user for Multi-Factor Authentication codes.
- Sending emails from real, but compromised, email accounts - quite often exploiting an existing email thread or relationship to add a layer of authenticity to a spear phish.
- Fake login pages - dynamically generated and personalised, pulling the real imagery and artwork from the victim's Office 365 portal.
- Using Microsoft services such as Azure or Office 365 Forms to host fake login pages - giving the address bar an added layer of authenticity.

Remediation strategies include implementing a multi-layered defence against phishing attacks, which reduces the chances of a phishing email reaching a user and minimises the impact of those that get through. Additionally, you can configure email anti-spoofing controls such as Domain-based Message Authentication, Reporting and Conformance (DMARC), Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM).

Vulnerability scanning is a common reconnaissance method

The NCSC report describes vulnerability scanning as a common reconnaissance method used to search for open network ports; to identify unpatched, legacy or otherwise vulnerable software; and to identify misconfigurations which could have an effect on security. Attackers identify known weaknesses in internet-facing services, which they then target using tested techniques or 'exploits'. This approach means the attack is more likely to work the first time, making detection by traditional intrusion prevention systems (IPS) and on-host security monitoring less likely. Once an attacker has a foothold on the edge of your infrastructure, they will attempt to run more network scans and re-use stolen credentials to pivot through to the core network. For remediation, the NCSC suggests hardening all internet-facing servers that an attacker might be able to find, and fully patching the software running on them. They also recommend a penetration test to determine what an attacker scanning for vulnerabilities could find, and potentially attack.

Supply chain attacks and threats from external service providers

Threats introduced to enterprise networks via their service providers continue to be a major problem, according to the report. Outsourcing - particularly of IT - results in external parties and their own networks being able to access and even reconfigure enterprise services. Hence, the network inherits the risk from these connected networks.
The NCSC report also gives several examples of attackers exploiting the connections of service providers to gain access to enterprise networks - for instance, the exploitation of Remote Management and Monitoring (RMM) tooling to deploy ransomware, as reported by ZDNet, and the public disclosure of a "sophisticated intrusion" at a major outsourced IT vendor, as reported by Krebs on Security.

A few remediation strategies to prevent supply chain attacks:

- Make supply chain security a consideration when procuring both products and services.
- If you use outsourced IT providers, ensure that any remote administration interfaces used by those service providers are secured.
- Ensure the way an IT service provider connects to, or administers, your systems meets the organisation's security standards.
- Take appropriate steps to segment and segregate networks. Segmentation and segregation can be achieved physically or logically using access control lists, network and computer virtualisation, firewalls, and network encryption such as Internet Protocol Security.
- Document the remote interfaces and internal accesses in use by your service provider, to ensure that they are fully revoked at the end of the contract.

To read the full report, visit the official NCSC website.

What's new in security this week?

A new Stuxnet-level vulnerability named Simjacker used to secretly spy over mobile phones in multiple countries for over 2 years: Adaptive Mobile Security reports

Lilocked ransomware (Lilu) affects thousands of Linux-based servers

Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
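As a practical footnote to the anti-spoofing controls recommended above (the domain below is a placeholder - substitute your own), Node.js can query a domain's published SPF and DMARC records in a few lines, which is a quick way to check what receiving mail servers will see:

var dns = require('dns');

var domain = 'example.com'; // placeholder domain

// SPF is published in the domain's TXT records
dns.resolveTxt(domain, function (err, records) {
  if (err) return console.error('TXT lookup failed:', err.code);
  records.forEach(function (chunks) {
    var record = chunks.join('');
    if (record.indexOf('v=spf1') === 0) console.log('SPF:  ', record);
  });
});

// The DMARC policy is published at _dmarc.<domain>
dns.resolveTxt('_dmarc.' + domain, function (err, records) {
  if (err) return console.error('No DMARC record found:', err.code);
  console.log('DMARC:', records[0].join(''));
});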

Is middleware dead? Cloud is the prime suspect!

Prasad Ramesh
17 Nov 2018
4 min read
The cloud is now ubiquitous, used for everything from storing photos to remotely running machines for complex AI tasks. But has it killed on-premises middleware setups and changed the way businesses manage the services used by their employees?

Is middleware dead?

Middleware is the bridge that connects an operating system to the different applications in a distributed system. Essentially, it is a transition layer of software that enables communication between the OS and applications; middleware acts as a pipe for data to flow from one application to another. If the communication between applications in a network is taken care of by this software, developers can focus on the applications themselves - hence middleware came into the picture. Middleware is used in enterprise networks.

Is middleware still relevant?

Middleware was a necessity for an IT business before cloud was a thing. But as cloud adoption has become mainstream, offering scalability and elasticity, middleware has become less important in modern software infrastructures. In on-premises setups, middleware handled tasks such as remote calls, communication with other devices in the network, transaction management and database interactions. All of this is now taken care of behind the scenes by the cloud service provider. Middleware is largely in decline - with cloud being a key reason. Specifically, some of the reasons middleware has lost favor include:

- Middleware maintenance can be expensive and quickly deplete resources, especially if you're using middleware on a large scale.
- Middleware can't scale as fast as cloud. If you need to scale, you'll need new hardware - this makes elasticity difficult, with sunk costs in your hardware resources.
- Sustaining large applications on middleware can become challenging over time.

How cloud solves middleware challenges

The reason cloud is killing off middleware is that it can simply do things better than traditional middleware. In just about every regard, from availability to flexibility to monitoring, using a cloud service makes life easier. It makes life easier for developers and engineers, while potentially saving organizations time in resource management. If you're making decisions about software infrastructure, it probably doesn't feel like a tough decision. Even institutions like banks, which have traditionally resisted software innovation, are embracing cloud: more than 80% of the world's largest banks and more than 85% of global banks are opting for the cloud, according to this Information Age article.

When is middleware the right option?

There might still be some life left in middleware yet. For smaller organizations, where an on-premises server setup will be used for a significant period of time - with cloud merely a possibility on the horizon - middleware still makes sense. Of course, no organization wants to think of itself as 'small' - even if you're just starting out, you probably have plans to scale. In this case, cloud will give you the flexibility that middleware inhibits. While you shouldn't invest in cloud solutions if you don't need them, it's hard to think of a scenario where cloud wouldn't provide an advantage over middleware. From tiny startups that need accessible and efficient hosting services, to huge organizations where scale is simply too big to handle alone, cloud is the best option in a massive range of use cases.

Is middleware really dead?

So yes, middleware is dead for most practical use case scenarios.
Most companies go with the cloud, given its advantages and flexibility. With options like multi-cloud, which lets you use different cloud services for different areas, there is even more flexibility in using the cloud.

Think Silicon open sources GLOVE: An OpenGL ES over Vulkan middleware

Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage

MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code
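To ground the definition of middleware as a pipe between applications, here is a toy, in-process sketch (not any specific product): a tiny message broker that decouples a producer application from a consumer application, which is the role enterprise middleware plays across a network.

// A minimal publish/subscribe broker - the 'pipe' that middleware provides
var subscribers = {};

var broker = {
  subscribe: function (topic, handler) {
    (subscribers[topic] = subscribers[topic] || []).push(handler);
  },
  publish: function (topic, message) {
    (subscribers[topic] || []).forEach(function (handler) { handler(message); });
  }
};

// The ordering app never talks to billing directly; the broker carries the data
broker.subscribe('order.created', function (order) {
  console.log('billing: invoicing order', order.id);
});

broker.publish('order.created', { id: 42, total: 99.5 });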


Is 2015 the Year of Deep Learning?

Akram Hussain
18 Mar 2015
4 min read
The new phenomenon to hit the world of 'Big Data' seems to be 'Deep Learning'. I've read many articles and papers where people question whether there's a future for it, or whether it's just a buzzword that will die out like many a term before it. Likewise, I have seen people who are genuinely excited and truly believe it is the future of artificial intelligence: the one solution that can greatly improve the accuracy of our data and the development of systems. Deep learning is currently a very active research area. It is by no means established as an industry standard, but rather one that is picking up pace and brings a strong promise of being a game changer when dealing with raw, unstructured data.

So what is Deep Learning?

Deep learning is a concept that grew out of machine learning. In very simple terms, we can think of machine learning as a method of teaching machines (using complex algorithms to form neural networks) to make improved predictions of outcomes based on patterns and behaviour in initial data sets. Deep learning goes a step further. The idea is based on a set of techniques used to train machines (neural networks) in processing information, with levels of accuracy approaching that of a human eye. Deep learning is currently one of the best providers of solutions to problems in image recognition, speech recognition, object recognition and natural language processing. There is a growing number of libraries available in a wide range of languages (Python, R, Java) and frameworks such as Caffe, Theano, Torch, H2O, Deeplearning4j and DeepDist.

How does Deep Learning work?

The central idea is the 'deep neural network'. Deep neural networks take traditional (artificial) neural networks and stack them on top of one another to form layers arranged in a hierarchy. Deep learning allows each layer in the hierarchy to learn more about the qualities of the initial data. To put this in perspective: the output of layer one becomes the input of layer two. The same process of filtering is repeated a number of times, refining the initial dataset until the level of accuracy allows the machine to identify its goal as accurately as possible.

Here is a simple example of deep learning. Imagine a face. We as humans are very good at making sense of what our eyes show us, all without even realising it. We can easily make out one's face shape, eyes, ears, nose, mouth and so on. We take this for granted and don't fully appreciate how difficult (and complex) it is to write programs for machines to do what comes naturally to us. The difficulty for machines here is pattern recognition - identifying edges, shapes, objects and so on. The aim is to develop these deep neural networks by increasing and improving the number of layers - training each network to learn more about the data to the point where (in our example) it matches human accuracy.

What is the future of Deep Learning?

Deep learning seems to have a bright future for sure. It is not a new concept; I would actually argue it is now practical rather than theoretical. We can expect to see the development of new tools, libraries and platforms, and even improvements to current technologies such as Hadoop, to accommodate the growth of deep learning. However, it may not be all smooth sailing.
Deep learning remains very difficult and time-consuming to master, especially when trying to optimise networks as datasets grow larger and larger - surely they will be prone to errors? Additionally, the hierarchy of networks formed will have to be scaled up for larger, complex and data-intensive AI problems. Nonetheless, the popularity of deep learning has seen large organisations invest heavily in it: Yahoo, Facebook, Google's $400 million acquisition of DeepMind, and Twitter's purchase of Madbits are just a few of the high-profile investments among many. 2015 really does seem like the year deep learning will show its true potential.

Prepare for the advent of deep learning by ensuring you know all there is to know about machine learning with our article. Read 'How to do Machine Learning with Python' now. Discover more Machine Learning tutorials and content on our dedicated page. Find it here.
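To make the layer-on-layer idea concrete, here is a hedged sketch of a forward pass through a tiny stacked network - the weights are arbitrary toy values, not a trained model - showing how each layer's output becomes the next layer's input:

function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

// One layer: each unit takes a weighted sum of the inputs and squashes it
function layer(weights) {
  return function (input) {
    return weights.map(function (row) {
      var sum = row.reduce(function (acc, w, i) { return acc + w * input[i]; }, 0);
      return sigmoid(sum);
    });
  };
}

// A miniature two-layer 'deep' network with arbitrary toy weights
var network = [
  layer([[0.5, -0.2], [0.3, 0.8]]), // hidden layer: 2 inputs -> 2 units
  layer([[1.0, -1.0]])              // output layer: 2 units -> 1 output
];

// Level one's output feeds level two - exactly the stacking described above
var output = network.reduce(function (signal, fn) { return fn(signal); }, [0.9, 0.1]);
console.log(output); // a single value between 0 and 1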


Amazon Echo vs Google Home: Next-gen IoT war

Vijin Boricha
06 Jul 2018
6 min read
IoT has been around for a while now, and big players like Google, Apple and Amazon have been creating buzz around smart devices for the past couple of years. But 2018 is seeing particular interest in smart speakers. That's no surprise: after Amazon's success with the Echo, it was obvious that other market leaders would want to compete in this area. Speaking of competition, Google recently revealed an impressive set of enhancements to its not-so-old Google Home at Google I/O 2018. Like Amazon Echo, Google Home has entered the arena where users can interact with a device to play music, get personal assistance, and control their smart home. With Google backed by its omnipresent search engine, Echo's place at the top looks a little dicey. With that said, let's get into the crux of the discussion, keeping in mind three major components: entertainment, personal assistant, and smart home controller.

Entertainment

The basic purpose of a speaker is to entertain you with music, but here your smart speaker can also interact with you and play your favourite tracks. If you are at a moderate distance from your device, all you have to do is wake the Echo with the command "Alexa" and Google Home with "Okay Google". Don't close your options here, as both devices provide alternative commands: the Echo also wakes up with "Echo", "Amazon" or "Computer", and Home with "Hey Google". Both devices do a fair job of hearing users, as they are always listening and their built-in microphones can pick up users over moderate background noise. They offer similar means of connection: your Echo can be plugged into your existing home system, while Home is capable of controlling any Google Cast-enabled speaker. When it comes to connecting these devices to your TV, Home does the job well by partially controlling a CEC (Consumer Electronics Control) supported television. On the other hand, Echo needs to be integrated with a Fire TV in order to control your TV. With this you must have already guessed the winner, but it does not end here: Google Home has the upper hand when it comes to connecting multiple speakers to play a single song. Amazon Echo, despite being more than a year older, still misses this feature.

Personal assistant

Considering the variety of personal services Google offers (Google Calendar, GTasks, and Google Maps), you might be expecting a straight win for Home here as well. However, Echo hasn't stayed behind in this race. Echo uses Alexa as its digital assistant, whereas Home uses the Google Assistant - the digital assistant shipped with other Google products such as the Pixel - to respond to voice commands. If you ask Google Home "Who is Harry Potter?", you will get a definite answer. You can follow that question with "Which other movie is he associated with?", and Home will again provide a definite answer, as it infers that the 'he' you referred to is actor Daniel Radcliffe. Similarly, Echo kept up with its standards when Alexa was asked about the weather; then, when asked "How about Thursday?", the response was accurate despite the word 'weather' not being used in the follow-up question. Surprisingly, Google falls short when it comes to personal tasks. The Echo can set reminders and stack up a to-do list, which Google Home still cannot. But it should just be a matter of time before these features are added to Google Home.
When it comes to assisting a group of people, Google Home supports up to six users and, when trained, is capable of recognizing family members' voices as long as they don't sound alike. So if one family member asks for a traffic or calendar update, Home will customize the response depending on who asked. Unfortunately, Echo lacks this capability. Google Home is still in the early stages of building out its functionality but already seems superior: apart from a few gaps, Home shares most of Echo's capabilities and, in addition, recognizes family members and responds to them accordingly, so Google steals the show here as well.

Smart home controller

Another major function of a smart speaker is controlling your smart home. Both devices have displayed versatile functionality, but the winner in this area is a surprise. Although Echo has a two-year head start in this market, Google Home hasn't been far behind in the race. Google has managed to integrate better with IFTTT (If This Then That) than Echo has - an integration that helps in crafting customizable commands. In a nutshell, Home has more command options than Echo. If a bulb is called "desk lamp" in the Philips app, Alexa will not respond to anything other than "desk lamp". Echo does offer a twist: you can group all your lights and command Alexa to turn them all on or off, and the job's done. The downside is that without this grouping, you have to control your appliances by their specific names. With Google Home, that's not the case: you can assign a nickname to a home appliance and it will follow your orders. So while you must group your appliances on Echo, the Google Assistant automatically groups a category of smart appliances, letting you refer to them with your choice of command.

Although Google Home has the upper hand in customizing commands, Echo is a much better device in terms of home automation control, as it supports a far wider variety of home appliances than Home. So this time the winner is Amazon Echo.

It is difficult to make a clear choice, as each device has its advantages and disadvantages. If home automation is what matters most to you, Amazon Echo would be an apt choice. On the other hand, if a personal assistant and music are all you're looking for, Google Home fits perfectly. So, what are you waiting for? Say the command and get things done for you.

Related Links

Windows 10 IoT Core: What you need to know

How to build an Arduino based 'follow me' drone

ROS Melodic Morenia released

Admiring the many faces of Facial Recognition with Deep Learning

Sugandha Lahoti
07 Dec 2017
7 min read
Facial recognition technology is not new; in fact, it has been around for more than a decade. However, with the recent rise of artificial intelligence and deep learning, facial technology has achieved new heights. In addition to facial detection, modern facial recognition technology recognizes faces with high accuracy and in unfavorable conditions. It can also recognize expressions and analyze faces to generate insights about an individual. Deep learning has enabled a power-packed face recognition system, all geared up to achieve widespread adoption.

How has deep learning modernised facial recognition?

Traditional facial recognition algorithms would recognize images and people using distinct facial features (placement of the eyes, eye color, nose shape, etc.). However, they failed to identify people correctly under different lighting or after slight changes in appearance (beard growth, aging, or a new pose). To develop facial recognition techniques for a dynamic and ever-changing face, deep learning is proving to be a game changer. Deep neural nets go beyond manual feature extraction: these AI-based neural networks rely on image pixels to analyze the features of a particular face, so they can scan faces irrespective of lighting, aging, pose, or emotion. Deep learning algorithms remember each time they recognize or fail to recognize a face, avoiding repeat mistakes and getting better with each attempt. Deep learning algorithms can also help in converting 2D images to 3D.

Facial Recognition Technology in Multimedia

Deep learning-enabled facial recognition technologies can be used to track audience reactions and measure different levels of emotion - essentially predicting how a member of the audience will react to the rest of a film, or what percentage of users will be interested in a particular movie genre. For example, Microsoft's Azure Emotion, an emotion API, detects emotions by analysing facial expressions in image or video content over time. Caltech and Disney have collaborated to develop a neural network which can track facial expressions: their deep learning-based Factorised Variational Autoencoders (FVAEs) analyze the facial expressions of an audience for about 10 minutes and then predict how they will react to the rest of the film. These techniques help estimate whether viewers are giving the expected reactions in the right places - a viewer is not expected to yawn at a comical scene, for instance. With this, Disney can also predict the earning potential of a particular movie and generate insights that may help producers create compelling trailers to maximize footfall.

Smart TVs are also equipped with sophisticated cameras and deep learning algorithms for facial recognition. They can recognize the face of the person watching and automatically show the channels and web applications programmed as that person's favorites. The British Broadcasting Corporation uses facial recognition technology built by CrowdEmotion: by tracking the faces of almost 4,500 audience members watching show trailers, it gauges exact customer emotions about a particular programme, which in turn helps generate insights for successful commercials.

Biometrics in Smartphones

A large number of smartphones nowadays come with built-in biometric capabilities.
Facial recognition in smartphones is used not only for unlocking and authorizing, but also for making secure transactions and payments. There has been a rise in chips with built-in deep learning ability, embedded directly into smartphones. With a neural net embedded inside the device, crucial face biometric data never leaves the device or gets sent to the cloud, which improves privacy and reduces latency. Real-world examples of such hardware include Intel's Nervana Neural Network Processor, Google's TPU, Microsoft's FPGA, and Nvidia's Tesla V100. Deep learning models embedded in a smartphone can construct a mathematical model of the face, which is then stored in a database. Using this mathematical face model, smartphones can easily recognize users even as their faces age or are obstructed by wearable accessories.

Apple's recently launched iPhone X facial recognition system, Face ID, maps thousands of points on a user's face using a projector and an infrared camera (which can operate under varied lighting conditions). This map is then passed to a bionic chip embedded in the smartphone, where a neural network constructs a mathematical model of the user's face, used for biometric face verification and recognition. Windows Hello is another facial recognition technology, used to unlock Windows smart devices equipped with infrared cameras. Qualcomm, a mobile technology organization, is working on a new depth-perception technology that will include an image signal processor and high-resolution 3D depth-sensing cameras for facial recognition.

Face recognition for travel

Facial recognition technologies can smooth the departure process for customers by eliminating the need for a boarding pass: a traveller is scanned by cameras installed at various checkpoints, so they don't have to produce a boarding pass at every step. Emirates is collaborating with Dubai Customs, Police and Airports on a facial recognition solution integrated with the UAE Wallet app. Known as the Together Initiative, the project allows travellers to register and store their biometric facial data at several kiosks placed in the check-in area, helping passengers avoid presenting physical documents at every touchpoint.

Face recognition can also be used to detect illegal immigration, by comparing photos of passengers taken immediately before boarding with the photos provided in their visa applications. Biometric Exit, an initiative by the US government, uses facial recognition to identify individuals leaving the country. The technology can also be used at train stations to reduce the waiting time for buying a ticket or passing through security barriers: Bristol Robotics Laboratory has developed software which uses infrared cameras to identify passengers as they walk onto the platform, with no need to carry tickets.

Retail and shopping

In retail, smart facial recognition technologies can enable fast checkout by keeping track of each customer as they shop across a store. Combined with machine learning and analytics, they can find trends in a shopper's purchasing behavior over time and devise personalized recommendations. Facial video analytics and deep learning algorithms can also identify loyal and VIP shoppers in a moving crowd, giving them a privileged VIP experience.
This gives those shoppers more reasons to come back and make repeat purchases. Facial biometrics can also accumulate rich statistics about demographics (age, gender, shopping history), and analyzing these statistics generates insights that help organizations develop their products and marketing strategies. FindFace is one such platform that uses sophisticated deep learning technologies to generate meaningful data about shoppers: its facial recognition system can verify faces with almost 99% accuracy and can route shopper data to a salesperson for personalized assistance. Facial recognition technology can also be used to make secure payment transactions simply by analysing a person's face - Alibaba has set up a Smile to Pay face recognition system at KFC restaurants that allows customers to make secure payments by merely scanning their face.

Facial recognition has emerged as a hot topic of interest and is poised to grow. On the flip side, organizations deploying such technology should incorporate privacy policies as a standard measure, since data collected by facial recognition software could also be misused for targeting customers with ads, or for other illegal purposes. They should implement a methodical and systematic approach to using facial recognition for the benefit of their customers. This will not only help businesses generate a new source of revenue, but will also usher in a new era of judicial automation.
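The "mathematical model of the face" these systems store is typically an embedding: a vector of numbers produced by a neural network, so that recognition reduces to comparing vectors. Here is a hedged sketch of that final comparison step - the four-dimensional vectors and the threshold are toy values (real systems use embeddings of 128 or more dimensions, with thresholds tuned per system):

// Cosine similarity between two face embeddings
function cosineSimilarity(a, b) {
  var dot = 0, normA = 0, normB = 0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

var enrolled = [0.12, -0.48, 0.33, 0.71]; // stored when the user enrolled
var probe    = [0.10, -0.45, 0.36, 0.69]; // captured by the sensor just now

var THRESHOLD = 0.95; // arbitrary; tuned to balance false accepts and rejects
console.log(cosineSimilarity(enrolled, probe) > THRESHOLD ? 'match' : 'no match');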


Application Flow With Generators

Wesley Cho
07 Oct 2015
5 min read
Oftentimes, developers like to fall back on using events to enforce the concept of a workflow: a rigid diagram of business logic that branches according to the application state and/or user choices (or, in pseudo-formal terms, a tree-like uni-directional flow graph). This graph may contain circular flows until the user meets the criteria to continue. One example of this is user authentication to access an application, where a natural circular logic arises of returning to the login form until the user submits a correct user/password combination - upon login, the application may decide to display a bunch of welcome messages pointing to various pieces of functionality and giving quick explanations.

However, eventing has a problem: it doesn't centralize the high-level business logic with obvious branching, so in an application of moderate complexity, developers may scramble to figure out which callbacks are supposed to trigger when, and in what order. Even worse, the logic can be split across multiple files, creating a multi-file spaghetti that makes it hard to find the meatball (or other preferred food of reader interest). It can be a taxing operation for developers, which in turn hampers productivity.

Enter Generators

Generators are an exciting new feature in ES6 that is mysterious to more inexperienced developers. They allow one to create a natural loop that blocks up until the yielded expression, which has some natural applications such as pagination for handling infinite scrolling lists like a Facebook newsfeed or Twitter feed. They are currently available in Chrome (and thus io.js, as well as Node.js via --harmony flags) and Firefox, but can be used in other browsers and io.js/Node.js through transpilation of JavaScript from ES6 to ES5 with excellent transpilers such as Traceur or Babel. If you want generator functionality, you can use regenerator.

One nice feature of generators is that they provide a central source of logic, and thus naturally fit the control structure we would like for encapsulating high-level application logic. Here is one example of how one can use generators along with promises:

var User = function () { … };

User.authenticate = function* authenticate() {
  var self = this;
  while (!this.authenticated) {
    yield function login(user, password) {
      return self.api.login(user, password)
        .then(onSuccess, onFailure);
    };
  }
};

function* InitializeApp() {
  yield* User.authenticate();
  yield Router.go('home');
}

var App = InitializeApp();

var Initialize = {
  next: function (step, data) {
    switch (step) {
      case 'login':
        return App.next();
      ...
    }
    ...
  }
};

Here we have application logic that first tries to authenticate the user; we can then implement a login method elsewhere in the application:

function login(user, password) {
  return App.next().value(user, password)
    .then(function success(data) {
      return Initialize.next('login', data);
    }, function failure(data) {
      return handleLoginError(data);
    });
}

Note the role of the Initialize object - it has a next key whose value is a function that determines what to do next, in tandem with App, the "instantiation" of a new generator. It also makes use of what we choose to yield: yielding a function lets us pass data in to attempt to log in a user. On success, the authenticated flag is set to true, which in turn breaks the user out of the User.authenticate part of the InitializeApp generator and into the next step, which in this case is to route the user to the homepage.
In this case, we are blocking the user from navigating to the homepage on application boot until they complete the authentication step.

Explanation of differences

The important piece in this code is the InitializeApp generator. Here we have a centralized control structure that clearly displays the high-level flow of the application logic. If one knows that a particular piece needs to be modified due to a bug, such as the authentication piece, it becomes obvious that one must start looking at User.authenticate and any piece of code directly concerned with executing it. This allows the methods to be split off into the appropriate sectors of the codebase - similar to event listeners and the callbacks that get fired on reception of an event, except we have the additional benefit of seeing what the high-level application state is.

If you aren't interested in using generators, this can be replicated with promises as well. There is a caveat to this approach though: using generators or promises hardcodes the high-level application flow. It can be modified, but these control structures are not as flexible as events. This does not negate the benefits of using events, but it does provide another powerful tool to help make long-term maintenance easier when designing an application that may be maintained by multiple developers with no prior knowledge of it.

Conclusion

Generators have many use cases that most developers working with JavaScript are not yet familiar with, and it is worth taking the time to understand how they work. Combining them with existing concepts allows some unique patterns to arise that can be of immense use when architecting a codebase, especially with possibilities such as encapsulating branching logic in a more readable format. This should help reduce mental burden, and allow developers to focus on building out rich web applications without the overhead of worrying about potentially missing business logic.

About the Author

Wesley Cho is a senior frontend engineer at Jiff (http://www.jiff.com/). He has contributed features and bug fixes to, and reported numerous issues against, libraries in the Angular ecosystem, including AngularJS, Ionic, UI Bootstrap, and UI Router, and has authored several libraries himself.