Tech News

VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!

Fatema Patrawala
30 Aug 2019
7 min read
VMware kicked off VMworld 2019 US in San Francisco last week on 25th August, and the event ended yesterday with a series of updates spanning Kubernetes, Azure, security and more. This year's theme, "Make Your Mark", was aimed at empowering VMworld 2019 attendees to learn, connect and innovate in the world of IT and business. 20,000 attendees from more than 100 countries descended on San Francisco for VMworld 2019.

VMware CEO Pat Gelsinger took the stage and articulated VMware's commitment and support for TechSoup, a one-stop IT shop for global nonprofits. Gelsinger also put emphasis on the company's 'any cloud, any application, any device, with intrinsic security' strategy. "VMware is committed to providing software solutions to enable customers to build, run, manage, connect and protect any app, on any cloud and any device," said Pat Gelsinger, chief executive officer, VMware. "We are passionate about our ability to drive positive global impact across our people, products and the planet." Let us take a look at the key highlights of the show.

VMworld 2019: CEO's take on shaping tech as a force for good

The opening keynote from Pat Gelsinger had everything one would expect: customer success stories, product announcements and the need for an ethical fix in tech. "As technologists, we can't afford to think of technology as someone else's problem," Gelsinger told attendees, adding, "VMware puts tremendous energy into shaping tech as a force for good." Gelsinger cited three benefits of technology that ended up opening Pandora's box: free apps and services led to severely altered privacy expectations; ubiquitous online communities led to a crisis in misinformation; and the promise of blockchain has led to illicit uses of cryptocurrencies. "Bitcoin today is not okay, but the underlying technology is extremely powerful," said Gelsinger, who has previously gone on record regarding the detrimental environmental impact of crypto.

This prism of engineering for good, alongside good engineering, can be seen in how emerging technologies are being utilised. With edge, AI and 5G, and cloud as the "foundation... we're about to redefine the application experience," as the VMware CEO put it.

Read also: VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision

Gelsinger's 2018 keynote centred on the theme of tech 'superpowers': cloud, mobile, AI, and edge. This time, more focus was given to how the edge was developing. Whether it is a thin edge, containing a few devices and an SD-WAN connection, a thick edge of a remote data centre with NFV, or something in between, VMware aims to have it all covered. "Telcos will play a bigger role in the cloud universe than ever before," said Gelsinger, referring to the rise of 5G. "The shift from hardware to software [in telco] is a great opportunity for US industry to step in and play a great role in the development of 5G."

VMworld 2019 introduces Tanzu to build, run and manage software on Kubernetes

VMware is moving away from virtual machines to containerized applications. On the product side, VMware Tanzu was introduced, a new product portfolio that aims to enable enterprise-class building, running, and management of software on Kubernetes. In Swahili, 'tanzu' means the growing branch of a tree, and in Japanese, 'tansu' refers to a modular form of cabinetry. For VMware, Tanzu is their growing portfolio of solutions that help build, run and manage modern apps.
Included in this is Project Pacific, a tech preview focused on transforming VMware vSphere into a Kubernetes-native platform. "With Project Pacific, we're bringing the largest infrastructure community, the largest set of operators, the largest set of customers directly to Kubernetes. We will be the leading enabler of Kubernetes," Gelsinger said.

Read also: VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform

Other product launches included an update to the collaboration program Workspace ONE, including an AI-powered virtual assistant, as well as the launch of CloudHealth Hybrid by VMware. The latter, built on the cloud cost management tool CloudHealth, aims to help organisations save costs across an entire multi-cloud landscape and will be available by the end of Q3.

Collaboration, not competition, with major cloud providers - Google Cloud, AWS & Microsoft Azure

VMware's extended partnership with Google Cloud, announced earlier this month, had already led the industry to consider the company's positioning amid the hyperscalers. VMware Cloud on AWS continues to gain traction - Gelsinger said Outposts, the hybrid tool announced at re:Invent last year, is being delivered upon - and the company also has partnerships in place with IBM and Alibaba Cloud. Further, VMware on Microsoft Azure is now generally available, with the facility to gradually switch across Azure data centres. By the first quarter of 2020, the plan is to make it available across nine global areas.

Read also: Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users

The company's decision not to compete, but to collaborate with the biggest public clouds has paid off. Gelsinger also admitted that the company may have contributed to some confusion over what hybrid cloud and multi-cloud truly meant, but his explanation was interesting. With organisations increasingly opting for different clouds for different workloads, and with environments constantly changing, Gelsinger described a frequent pain point for customers nearer the start of their journeys: do they migrate their applications or do they modernise? Increasingly, customers want both - the hybrid option. "We believe we have a unique opportunity for both of these," he said. "Moving to the hybrid cloud enables live migration, no downtime, no refactoring... this is the path to deliver cloud migration and cloud modernisation." As far as multi-cloud was concerned, Gelsinger argued: "We believe technologists who master the multi-cloud generation will own it for the next decade."

Collaboration with NVIDIA to accelerate GPU services on AWS

NVIDIA and VMware announced their intent to deliver accelerated GPU services for VMware Cloud on AWS to power modern enterprise applications, including AI, machine learning and data analytics workflows. These services will enable customers to seamlessly migrate VMware vSphere-based applications and containers to the cloud, unchanged, where they can be modernized to take advantage of high-performance computing, machine learning, data analytics and video processing applications. Through this partnership, VMware Cloud on AWS customers will gain access to a new, highly scalable and secure cloud service consisting of Amazon EC2 bare metal instances accelerated by NVIDIA T4 GPUs and the new NVIDIA Virtual Compute Server (vComputeServer) software.
"From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line," said Jensen Huang, founder and CEO, NVIDIA. "Together with VMware, we're designing the most advanced GPU infrastructure to foster innovation across the enterprise, from virtualization, to hybrid cloud, to VMware's new Bitfusion data center disaggregation."

Read also: NVIDIA's latest breakthroughs in conversational AI: Trains BERT in under an hour, launches Project Megatron to train transformer based models at scale

Apart from this, Gelsinger made special mention of VMware's most recent acquisitions, Pivotal and Carbon Black, and discussed where they fit in the VMware stack.

VMware's hybrid cloud platform for Next-gen Hybrid IT

VMware introduced new and expanded cloud offerings to help customers meet the unique needs of traditional and modern applications. VMware empowers IT operators, developers, desktop administrators, and security professionals with the company's hybrid cloud platform to build, run, and manage workloads on a consistent infrastructure across their data center, public cloud, or edge infrastructure of choice. VMware uniquely enables a consistent hybrid cloud platform spanning all major public clouds - AWS, Azure, Google Cloud, IBM Cloud - and more than 60 VMware Cloud Verified partners worldwide. More than 70 million workloads run on VMware; of these, 10 million are in the cloud, running in more than 10,000 data centers operated by VMware Cloud providers.

Take a look at the full list of VMworld 2019 announcements here.

What's new in cloud and virtualization this week?
VMware signs definitive agreement to acquire Pivotal Software and Carbon Black
Pivotal open sources kpack, a Kubernetes-native image build service
Oracle directors support billion dollar lawsuit against Larry Ellison and Safra Catz for NetSuite deal

Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs

Bhagyashree R
29 Aug 2019
3 min read
Yesterday, Microsoft announced that it supports the addition of its Extended File Allocation Table (exFAT) file system to the Linux kernel and publicly released its technical specifications. https://twitter.com/OpenAtMicrosoft/status/1166742237629308928

Launched in 2006, the exFAT file system is the successor to Microsoft's FAT and FAT32 file systems, which are widely used in a majority of flash memory storage devices such as USB drives and SD cards. It uses 64 bits to describe file size and allows for clusters as large as 32MB. As per the specification, it was implemented with simplicity and extensibility in mind. John Gossman, Microsoft Distinguished Engineer and Linux Foundation Board Member, wrote in the announcement, "exFAT is the Microsoft-developed file system that's used in Windows and in many types of storage devices like SD cards and USB flash drives. It's why hundreds of millions of storage devices that are formatted using exFAT "just work" when you plug them into your laptop, camera, and car."

As exFAT was previously proprietary, mounting these flash drives and cards on Linux machines required installing additional software such as a FUSE-based exFAT implementation. Supporting exFAT in the Linux kernel will give users a full-featured implementation that can also be more performant than the FUSE-based one. Its inclusion in OIN's Linux System Definition will also allow it to be cross-licensed in a royalty-free manner. Microsoft shared that the exFAT code incorporated into the Linux kernel will be licensed under GPLv2. https://twitter.com/OpenAtMicrosoft/status/1166773276166828033

In addition to supporting exFAT in the Linux kernel, Microsoft also hopes that its specifications become part of the Open Invention Network's (OIN) Linux System Definition. Keith Bergelt, OIN's CEO, told ZDNet, "We're happy and heartened to see that Microsoft is continuing to support software freedom. They are giving up the patent levers to create revenue at the expense of the community. This is another step of Microsoft's transformation in showing it's truly committed to Linux and open source." The next edition of the Linux System Definition is expected to be published in the first quarter of 2020, after which any member of the OIN will be able to use exFAT without paying a patent royalty.

The Linux Foundation also appreciated Microsoft's move to bring exFAT to the Linux kernel: https://twitter.com/linuxfoundation/status/1166744195199115264

Other developers also shared their excitement. A Hacker News user commented, "OMG, I can't believe we finally have a cross-platform read/write disk format. At last. No more Fuse. I just need to know when it will be available for my Raspberry Pi."

Read the official announcement by Microsoft to know more in detail.

Microsoft Edge Beta is now ready for you to try
Microsoft introduces public preview of Azure Dedicated Host and updates its licensing terms
CERN plans to replace Microsoft-based programs with an affordable open-source software

Introducing ActiveState State Tool, a CLI tool to automate dev & test setups, workflows, share secrets and manage ad-hoc tasks

Amrata Joshi
29 Aug 2019
3 min read
Today, the team at ActiveState, a software company known for building Perl, Python and Tcl runtime environments, introduced the ActiveState Platform Command Line Interface (CLI), the State Tool. The new CLI tool aims to automate manual tasks such as the setup of development and test systems. With this tool, all the instructions in a Readme can be reduced to a single command.

How can the State Tool benefit developers?

Eases ad-hoc tasks
The State Tool can address tasks that cause trouble for developers, such as project setups or environment setups that don't work the first time. It also helps developers manage dependencies, system libraries and other such tasks that affect productivity. These tasks usually end up consuming developers' coding time. The State Tool can be used to automate all of the ad-hoc tasks that developers come across on a daily basis.

Deployment of runtime environments
With this tool, developers can now deploy a consistent runtime environment into a virtual environment on their machine and across CI/CD systems with a single command.

Sharing secrets and cross-platform scripts
Developers can now centrally create secrets that can be securely shared among team members without the need for a password manager, email, or Slack. They can create and share cross-platform scripts that include secrets for starting builds and running tests, as well as for simplifying and speeding up common development tasks. Developers can incorporate secrets in their scripts by simply referencing their names.

Automation of workflows
All the workflows that developers handle can now be centrally automated with this tool. Jeff Rouse, vice president, product management, said in a statement, "Developers are a hardy bunch. They suffer through a thousand annoyances at project startup/restart time, but soldier on anyway. It's just the way things have always been done. With the State Tool, it doesn't have to stay that way. The State Tool addresses all the hidden costs in a project that sap developer productivity. This includes automating environment setup to secrets sharing, and even automating the day to day scripts that everyone counts on to get their jobs done. Developers can finally stop solving the same annoying problems over and over again, and just rely on the State Tool so they can spend more time coding."

To know more about this news, check out the official page.

Podcasting with Linux Command Line Tools and Audacity
GitHub's 'Hub' command-line tool makes using git easier
Command-Line Tools

‘Hire by Google’, the next product killed by Google; services to end in 2020

Vincy Davis
29 Aug 2019
5 min read
Google has notified users in a support note that it is taking down the Hire by Google service on September 1, 2020. In the vague note, no particular reason is specified by Google. It simply states, "While Hire has been successful, we're focusing our resources on other products in the Google Cloud portfolio."

Launched in 2017, the Hire by Google service is an applicant tracking system aimed at assisting small to medium businesses (SMBs) with candidate sourcing. Its integration with Google software (Google Search, Gmail, Google Calendar, Google Docs, Google Sheets and Google Hangouts) makes activities like applicant search, interview scheduling and feedback simpler. A profile on Google Hire can make a candidate more trackable, as recruiters and hiring managers can get more information about the candidate from websites such as LinkedIn, GitHub, and others. Even email communication with the candidate is tracked on the candidate profile available in Google Hire. Until now, Hire was only available to companies in the United States, United Kingdom and Canada.

In the FAQs following the note, Google has said that no new functionality will be added to the Hire product. It also states that until September 1, 2020, customers under contract will be provided support in accordance with the Technical Support Services Guidelines (TSS) of Hire. "After your next bill, there will be no additional charges for your standard usage of Hire up until the end of your contract term or September 1, 2020, whichever comes first", adds the note. It also specifies that the closing down of Hire will have no impact on users' G Suite agreements.

Which other Google products have been shut down

Google's decision to shut down its own projects is not new. Two months ago, Google announced that it was shutting down the Trips app, which was a substitute for Inbox's trip bundling functionality. This news came after the community favorite Google Inbox was discontinued in March 2019. In April this year, Google also ceased and deleted all user accounts on its Google+ social network platform. Per The Verge, the reason behind the closure of Google+ was the security liabilities the social network posed: it suffered two significant data leaks that put millions of Google+ users' data at risk. Google, however, stated that Google+ failing to meet the company's expectations for user growth and mainstream pickup was the reason for shutting it down.

In May, another popular Google product, Works with Nest, was given an end date of August 30, 2019. This was the result of Google's plan of action to bring all the Nest and Google Home products under one brand, 'Google Nest'. With an aim to make its smart home experience more secure and unified for users, all Nest account users were asked to migrate to Google Accounts, as it is the only serving front-end for using products across Nest and Google. This decision to phase out Works with Nest infuriated many Nest product users back then.

Read Also: Turbo: Google's new color palette for data visualization addresses shortcomings of the common rainbow palette, 'Jet'

With this trend of killing its own products, Google is attracting a lot of negative publicity. Many people are of the opinion that Google's side projects cannot be trusted for long-term adoption.
A user on Hacker News comments, "What is humorous to me is that Google is hurting users who typically have the most influence over SaaS integrations at their company (managers) by taking away a tool that helped them deal with the part of their job most of them hate the most (hiring/recruiting). If it hasn't been obvious yet to managers watching this, Google's software is not a safe investment for you to make for your company. It is only a matter of time until you will suddenly have to divert your time to figuring out how to migrate away from a Good Tool to a Less Good Tool because Google built it well then took it away. Swapping a tool like this is an abysmal resource sink for you and your company. This is not the first, second, third, fourth or even fifth time this has happened, but this one should hit close to home. Google's software is not a safe investment for you to make for your company."

Many are wondering that, if Hire was really as successful as Google states, what could be the reason behind its shutdown. Another comment on Hacker News reads, "Why do they cancel this product? Are they losing profit over this? Were they working on any new features? If no new features are required, would it be such a hassle to just keep the product working without assigning engineers to it? Only support?"

Interested users can read the FAQs on the Google support page for more information.

Google Chrome 76 now supports native lazy-loading
Google confirms and fixes 193 security vulnerabilities in Android Q
Cisco Talos researchers disclose eight vulnerabilities in Google's Nest Cam IQ indoor camera

The Julia team shares its finalized release process with the community

Bhagyashree R
29 Aug 2019
4 min read
The discussions regarding the Julia release process started last year when the language hit version 1.0. Yesterday, Stefan Karpinski, one of Julia's core developers, shared its finalized release process, giving details on the kinds of releases, the stages of the release process, the phases of a release, and more. "This information is collected from a small set of posts on discourse and conversations on Slack, so the information exists "out there", but this blog post brings it all together in a single place. We may turn this post into an official document if it's well-received," Stefan wrote.

Types of Julia releases

As with most programming languages that follow Semantic Versioning (SemVer), Julia has three types of releases: patch, minor, and major.

A patch release is represented by the last digit of Julia's version number. It will include things like bug fixes, low-risk performance improvements, and documentation updates. The team plans to release a patch every month for the currently active release branches; however, this will depend on the number of bug fixes. The team also plans to run PackageEvaluator (PkgEval) on the backports five days prior to the patch release. PkgEval is used to run tests for every registered package, update the web pages of Julia packages, and create status badges.

A minor release is represented by the middle digit of Julia's version number. Along with some bug fixes and new features, it will include changes that are unlikely to break your code and the package ecosystem. Any significant refactoring of the internals will also be included in a minor release. Since minor releases are branched every four months, developers can expect three minor releases every year.

A major release is represented by the first digit of Julia's version number. Typically, major releases consist of breaking changes, but the team assures they will be introduced only when there is an absolute need, for instance, fixing API design mistakes. They will also include low-level changes that can end up breaking some libraries but are essential for fundamental improvements to the language.

Julia's release process

There are three phases in the Julia release process. The development phase takes up 1-4 months, during which new features are introduced, bugs are fixed, and more. Before the feature freeze, alpha (early preview) and beta (later preview) versions are released for developers to test and give feedback on. After the feature freeze, a new unstable release branch is created. In the development phase, new features are merged onto the master branch, while bug fixes go onto the release branch.

The second phase, stabilization, also takes up 1-4 months, during which all known release-blocking bugs are fixed and release candidates are built. They are then checked for any more release-blocking bugs for one week, and if there are none, a final release is announced. After this starts the maintenance phase, where bug fixes are backported to the release branch. This continues until a particular release branch is declared to be unmaintained.

To ensure the quality of releases and maintain a predictable release rate, the Julia team overlaps the development and stabilization phases. "The development phase of each release is time-boxed at four months and the development phase of x.(y+1) starts as soon as the development phase for x.y is over. Come rain or shine we have a new feature freeze every four months: we pick a day and you've got to get your features merged by that day.
If new features aren't merged, they're not going in the release. But that's ok, they'll go in the next one," explains Karpinski.

Talking about long-term support, Karpinski wrote that there will be four active branches. The master branch is where all the new features, bug fixes, and breaking changes will go. The unstable release branch will include all the active bug fixing and performance work that happens prior to the next minor release. The stable release branch is where the most recently released minor or major version exists. The fourth one is the long-term support (LTS) branch, which is currently Julia 1.0; this branch continues to get applicable bug fixes until it is announced to be unmaintained.

Karpinski also shared the different fault tolerance personas in Julia. Check out his post on the Julia blog to get a better understanding of the Julia release process.

Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0
Julia Angwin fired as Editor-in-Chief of The Markup prompting mass resignations in protest
Creating a basic Julia project for loading and saving data [Tutorial]

TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more

Vincy Davis
29 Aug 2019
4 min read
Yesterday, the Program Manager for TypeScript, Daniel Rosenwasser, announced the release of TypeScript 3.6. This is a major release of TypeScript, as it contains many new language and compiler features such as stricter generators, more accurate array spread, improved UX around Promises, better Unicode support for identifiers, and more. TypeScript 3.6 also explores a new TypeScript playground, new editor features, and many breaking changes. TypeScript 3.6 beta was released last month.

Language and Compiler improvements

Stricter checking for Iterators and Generators
Previously, users of generators in TypeScript could not differentiate whether a value was yielded or returned from a generator. In TypeScript 3.6, due to changes in the Iterator and IteratorResult type declarations, a new Generator type has been introduced. It is an Iterator that will always have both the return and throw methods present. This allows a stricter generator checker to easily understand the difference between the values coming from its iterators. TypeScript 3.6 also infers certain uses of yield within the body of a generator function. The yield expression can be used explicitly to enforce the type of values that can be returned, yielded, and evaluated.

More accurate array spread
In pre-ES2015 targets, TypeScript uses the --downlevelIteration flag to support iterative constructs with arrays. However, many users found it undesirable that the emits produced had no defined property slots. To address this problem, TypeScript 3.6 presents a new __spreadArrays helper. It will "accurately model what happens in ECMAScript 2015 in older targets outside of --downlevelIteration."

Improved UX around Promises
TypeScript 3.6 explores new improvements in working with the Promise API, one of the most common ways to work with asynchronous data. TypeScript's error messages will now inform the user when they have forgotten to await a Promise, or to call .then() on it, before passing its contents to another function. Quick fixes will also be provided in some cases.

Better Unicode support for identifiers
TypeScript 3.6 contains better support for Unicode characters in identifiers when emitting to ES2015 and later targets.

import.meta support in SystemJS: The new version supports the transformation of import.meta to context.meta when the module target is set to system.

get and set accessors are allowed in ambient contexts: Previous versions of TypeScript did not allow the use of get and set accessors in ambient contexts. This has changed in TypeScript 3.6, since ECMAScript's class fields proposal has differing behavior from existing versions of TypeScript. The official post also adds, "In TypeScript 3.7, the compiler itself will take advantage of this feature so that generated .d.ts files will also emit get/set accessors."

Read Also: Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript
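To make the stricter iterator and generator checking described above concrete, here is a minimal sketch; the generator body and the values it produces are illustrative and not taken from the announcement:

```typescript
// Generator<Yielded, Returned, Next>: with TypeScript 3.6's updated
// declarations, the checker can tell yielded values apart from the
// final returned value, and it knows the type `yield` evaluates to.
function* counter(): Generator<number, string, boolean> {
  let i = 0;
  while (true) {
    // `yield` evaluates to the boolean later passed to `next()`.
    const stop = yield i++;
    if (stop) {
      break;
    }
  }
  return "done";
}

const iter = counter();
let result = iter.next(false);
while (!result.done) {
  // While `done` is false, `result.value` is narrowed to `number`.
  console.log(result.value.toFixed(0));
  result = iter.next(result.value === 5);
}
// Once `done` is true, `result.value` is narrowed to the returned `string`.
console.log(result.value.toUpperCase());
```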
New functions in TypeScript playground

The TypeScript playground allows users to compile TypeScript and check the JavaScript output. It has more compiler options than typescriptlang.org, and all the strict options are turned on by default in the playground. The following new functions are added in the TypeScript playground:
- The target option, which allows users to switch out of es5 to es3, es2015, esnext, etc.
- All the strictness flags
- Support for plain JavaScript files

The post also states that in future versions of TypeScript, more features like JSX support and polishing of automatic type acquisition can be expected.

Breaking Changes
- Class members named "constructor" are now simply constructor functions.
- DOM updates: the global window will no longer be defined as type Window. Instead, it is defined as type Window & typeof globalThis.
- In JavaScript files, TypeScript will only consult immediately preceding JSDoc comments to figure out declared types.
- TypeScript 3.6 will not allow certain escape sequences.

Developers have liked the new features in TypeScript 3.6. https://twitter.com/zachcodes/status/1166840093849473024 https://twitter.com/joshghent/status/1167005999204638722 https://twitter.com/FlorianRappl/status/1166842492718899200

Interested users can check out TypeScript's 6-month roadmap. Visit the Microsoft blog for full updates on TypeScript 3.6.

Next.js 9 releases with built-in zero-config TypeScript support, automatic static optimization, API routes and more
Babel 7.5.0 releases with F# pipeline operator, experimental TypeScript namespaces support, and more
TypeScript 3.5 releases with 'omit' helper, improved speed, excess property checks and more

Largest ‘women in tech’ conference, Grace Hopper Celebration, renounces Palantir as a sponsor due to concerns over its work with the ICE

Sugandha Lahoti
29 Aug 2019
4 min read
The Grace Hopper Celebration conference, the world's largest conference for women in tech, has said that it has dropped Palantir as a sponsor due to concerns over the company's work with the United States' Immigration and Customs Enforcement (ICE). This news came after concerned civilians published a petition on Change.org demanding that AnitaB.org, the organization for women in computing that produces the Grace Hopper Celebration conference, renounce Palantir as a sponsor. At the time of writing, 326 people had signed the petition, with the aim of reaching 500.

The petition reads, "Funding well-respected and impactful events such as GHC is one of the ways in which Palantir can try to buy positive public sentiment. By accepting Palantir's money, proudly displaying them as a sponsor, and giving them a platform to recruit, AnitaB.org is legitimizing Palantir's work with ICE to GHC's attendees, enabling ICE's mission, and helping Palantir minimize its role in human rights abuses."

The petition called on AnitaB.org to:
- Drop Palantir as a sponsor for GHC 2019 and future conferences
- Release a statement denouncing the prior sponsorship and Palantir's involvement with ICE
- Institute and publicly release an ethics vetting policy for future corporate sponsors and recruiters

https://twitter.com/techworkersco/status/1166740206461964288

Several activists and women in tech had urged Grace Hopper Celebration to renounce Palantir as its sponsor. https://twitter.com/jrivanob/status/1166734671624822784 https://twitter.com/sarahmaranara/status/1163231777772703744 https://twitter.com/RileyMancuso/status/1157088427977904131

Following this open opprobrium, AnitaB.org Vice President of Business Development and Partnership Success, Robert Read, released a statement yesterday: "At AnitaB.org we do our best to promote the basic rights and dignity of every person in all that we do, including our corporate sponsorship and events program. Palantir has been independently verified as providing direct technical assistance that enables the human rights abuses of asylum seekers and their children at US southern border detention centers. Therefore, at this time, Palantir will no longer be a sponsor of Grace Hopper Celebration 2019."

Prior to Grace Hopper Celebration, UC Berkeley's Privacy Law Scholars Conference dropped Palantir as a sponsor. This was because of the discomfort of many in the community with the company's practices, including among the program committee that selects papers and awards. Lesbians Who Tech, a leading LGBTQ organization, followed suit, confirming their boycott of Palantir with The Verge, after members of their community asked them to drop Palantir as a sponsor in light of its recent contract work with the US government. "Members of our community (the LGBTQ community) contacted us with concern around Palantir's participation with the job fair," a representative of Lesbians Who Tech said, "because of the recent news that the company's software has been used to aid ICE in efforts to gather, store, and search for data on undocumented immigrants, and reportedly playing a role in workplace raids."

Palantir is involved in conducting raids on immigrant communities as well as in enabling workplace raids: Mijente

According to reports, Palantir's mobile app FALCON is being used by ICE to carry out raids on immigrant communities as well as to enable workplace raids.
In May this year, new documents released by Mijente, an advocacy organization, revealed that Palantir was responsible for the 2017 operation that targeted and arrested family members of children crossing the border alone. The documents stand in stark contrast to what Palantir said its software was doing. As part of the operation, ICE arrested 443 people solely for being undocumented. Palantir's case management tool (Investigative Case Management) was shown to be used at the border to arrest undocumented people discovered in investigations of children who crossed the border alone, including the sponsors and family members of these children.

Several open source communities, activists and developers have been strongly demonstrating against Palantir for its involvement with ICE. This includes Entropic, which is debating the idea of banning Palantir employees from participating in the project. Back in August 2018, the Lerna team had taken a strong stand against ICE by modifying its MIT license to ban companies that have collaborated with ICE from using Lerna. Last month, a group of Amazon employees sent out an internal email to the We Won't Build It mailing list, calling on Amazon to stop working with Palantir.

Fairphone 3 launches as a sustainable smartphone alternative that supports easy module repairs
#Reactgate forces React leaders to confront the community's toxic culture head on
Stack Overflow faces backlash for removing the "Hot Meta Posts" section; community feels left out of decisions

JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3

Bhagyashree R
28 Aug 2019
3 min read
Last month, the ECMAScript proposal for the optional chaining operator reached stage 3 of the TC39 process. This essentially means that the feature is almost finalized and is awaiting feedback from users. The optional chaining operator aims to make accessing properties through connected objects easier when there is a chance of a reference or function being undefined or null. https://twitter.com/drosenwasser/status/1154456633642119168

Why the optional chaining operator is proposed in JavaScript

Developers often need to access properties that are deeply nested in a tree-like structure. To do this, they sometimes end up writing long chains of property accesses, which can make the code error-prone: if any of the intermediate references in these chains evaluates to null or undefined, JavaScript will throw a "TypeError: Cannot read property 'name' of undefined" error. The optional chaining operator aims to provide a more elegant way of recovering from such instances. It allows you to check for the existence of deeply nested properties in objects. If the operand before the operator evaluates to undefined or null, the expression returns undefined; otherwise, the property access, method or function call is evaluated normally.

MDN compares this operator with the dot (.) chaining operator. "The ?. operator functions similarly to the . chaining operator, except that instead of causing an error if a reference is null or undefined, the expression short-circuits with a return value of undefined. When used with function calls, it returns undefined if the given function does not exist," the document reads. The concept of optional chaining is not new. Several other languages also support a similar feature, including the null-conditional operator in C# 6 and later, the optional chaining operator in Swift, and the existential operator in CoffeeScript.

The optional chaining operator is represented by '?.'. Here's how its syntax looks:

obj?.prop       // optional static property access
obj?.[expr]     // optional dynamic property access
func?.(...args) // optional function or method call

Some properties of optional chaining
- Short-circuiting: The rest of the expression is not evaluated if an optional chaining operator encounters undefined or null on its left-hand side.
- Stacking: You can stack optional chaining operators, meaning you can apply more than one of them in a sequence of property accesses.
- Optional deletion: You can also combine the 'delete' operator with an optional chain.

Though it will be some time before the optional chaining operator lands in JavaScript, you can give it a try with a Babel plugin. To stay updated on its browser compatibility, check out the MDN web docs. Many developers are appreciating this feature. A developer on Reddit wrote, "Considering how prevalent 'Cannot read property foo of undefined' errors are in JS development, this is much appreciated. Yes, you can rant that people should do null guards better and write less brittle code. True, but better language features help protect users from developer laziness." Yesterday, the team behind V8, Chrome's JavaScript engine, also expressed their delight on Twitter: https://twitter.com/v8js/status/1166360971914481669

Read the Optional Chaining for JavaScript proposal to know more in detail.
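As a concrete illustration of the short-circuiting and optional-call behaviour described above, here is a minimal sketch of the proposed syntax, written as TypeScript; the interface, values, and property names are hypothetical:

```typescript
// A deeply nested, partially optional structure (hypothetical shape).
interface User {
  profile?: {
    address?: { city?: string };
    getDisplayName?: () => string;
  };
}

const user: User = { profile: { getDisplayName: () => "Ada" } };

// Without the operator, every level has to be checked by hand to avoid
// "TypeError: Cannot read property 'city' of undefined".
const cityVerbose =
  user.profile && user.profile.address ? user.profile.address.city : undefined;

// With optional chaining, the expression short-circuits to undefined as
// soon as a link in the chain is null or undefined.
const city = user.profile?.address?.city;        // undefined, no TypeError
// Optional call: invoked only if the function actually exists.
const name = user.profile?.getDisplayName?.();   // "Ada"

// Stacking: several ?. operators can appear in one access chain, as above.
console.log(cityVerbose, city, name);
```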
ES2019: What's new in ECMAScript, the JavaScript specification standard
Introducing QuickJS, a small and easily embeddable JavaScript engine
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more

DeepMind introduces OpenSpiel, a reinforcement learning-based framework for video games

Savia Lobo
28 Aug 2019
3 min read
A few days ago, researchers at DeepMind introduced OpenSpiel, a framework for writing games and algorithms for research in general reinforcement learning and search/planning in games. The core API and games are implemented in C++ and exposed to Python; algorithms and tools are written in both C++ and Python. It also includes a branch of pure Swift in the swift subdirectory. In their paper, the researchers write, "We hope that OpenSpiel could have a similar effect on general RL in games as the Atari Learning Environment has had on single-agent RL."

Read Also: Google Research Football Environment: A Reinforcement Learning environment for AI agents to master football

OpenSpiel allows evaluating written games and algorithms on a variety of benchmark games, as it includes implementations of over 20 different game types, including simultaneous-move, perfect and imperfect information games, gridworld games, an auction game, and several normal-form / matrix games. It includes tools to analyze learning dynamics and other common evaluation metrics. It also supports n-player (single- and multi-agent) zero-sum, cooperative and general-sum, one-shot and sequential games, and more. OpenSpiel has been tested on Linux (Debian 10 and Ubuntu 19.04); however, the researchers have not tested the framework on macOS or Windows. "Since the code uses freely available tools, we do not anticipate any (major) problems compiling and running under other major platforms," the researchers added.

The purpose of OpenSpiel is to promote "general multiagent reinforcement learning across many different game types, in a similar way as general game-playing but with a heavy emphasis on learning and not in competition form," the research paper mentions. The framework is "designed to be easy to install and use, easy to understand, easy to extend ("hackable"), and general/broad."

Read Also: DeepMind's AI uses reinforcement learning to defeat humans in multiplayer games

Design constraints for OpenSpiel

The two main design criteria that OpenSpiel is based on are:
- Simplicity: OpenSpiel provides easy-to-read, easy-to-use code that can be used to learn from and to build a prototype with, rather than fully optimized code that would require additional assumptions.
- Dependency-free: The researchers say, "dependencies can be problematic for long-term compatibility, maintenance, and ease-of-use." Hence, the OpenSpiel framework does not introduce dependencies, keeping it portable and easy to install.

Swift OpenSpiel: A port to use Swift for TensorFlow

The swift/ folder contains a port of OpenSpiel to use Swift for TensorFlow. This Swift port explores using a single programming language for the entire OpenSpiel environment, from game implementations to the algorithms and deep learning models, and is intended for serious research use. As the Swift for TensorFlow platform matures and gains additional capabilities (e.g. distributed training), expect the kinds of algorithms that are expressible and tractable to train to grow significantly.

Among OpenSpiel's tools for visualization and evaluation is the α-Rank algorithm, which leverages evolutionary game theory to rank AI agents interacting in multiplayer games. OpenSpiel currently supports using α-Rank for both single-population (symmetric) and multi-population games.

Developers are excited about this release and want to try out the framework.
https://twitter.com/SMBrocklehurst/status/1166435811581202443 https://twitter.com/sharky6000/status/1166349178412261376

To know more about this news in detail, head over to the research paper. You can also check out the GitHub page.

Terrifyingly realistic Deepfake video of Bill Hader transforming into Tom Cruise is going viral on YouTube
DeepCode, the AI startup for code review, raises $4M seed funding; will be free for educational use and enterprise teams with 30 developers
Baidu open sources ERNIE 2.0, a continual pre-training NLP model that outperforms BERT and XLNet on 16 NLP tasks

‘Npm install funding’, an experiment to sustain open-source projects with ads on the CLI terminal faces community backlash

Fatema Patrawala
28 Aug 2019
5 min read
Last week, one of the npm open source authors and maintainers, software developer Feross, announced an "npm install funding" experiment. Essentially, this enabled sponsors to "advertise on the npm package install terminals". In turn, the money raised from these ads would ensure npm maintainers are paid for their important contributions to the project, keeping packages up to date, reliable, and secure. Feross wrote on the GitHub page, "I think that the current model of sustaining open source is not working and we need more experimentation. This is one such experiment." He further wrote that if this experiment works, it could help make all open source healthier, too. For complex reasons, companies are generally hesitant or unwilling to fund OSS directly. When it does happen, it's never enough and it never reaches packages that are transitive dependencies (i.e. packages that no one installs explicitly and therefore no one knows exist). Feross believes that npm is essentially a public good, as it is consumed by huge numbers of users but no one pays for it. In his view, a funding model that usually works for public goods like this is advertising. But how does it work?

Read Also: Surprise NPM layoffs raise questions about the company culture

How was the 'npm install funding' project planned to work?

Feross's idea was that when developers install a library via the npm JavaScript package manager, they get a giant banner advertisement in their terminal (image source: GitHub thread). Feross asked companies to promote ads on the installation terminals of JavaScript packages that had expressed interest in participating in the funding experiment. The idea behind the funding is that companies buy ad space in people's terminals, and the funding project then shares its profits with the open-source projects that signed up to show the ads, as per ZDNet. Linode and LogRocket agreed to participate in this funding experiment. The experiment did run on a few open source projects that Feross maintains; one of them was StandardJS 14.

Feross raised $2,000 in npm install funds

Feross had so far earned $2,000 for his time spent releasing Standard 14, which took him five days. If he were able to raise additional funds, his next focus would be TypeScript support in StandardJS (one of the most common feature requests) and modernizing the various text editor plugins (many of which are currently unmaintained).

The community did not support promoting ads on the CLI, and the experiment finally came to a halt

As per ZDNet reports, the developer community has been debating this idea. There are arguments on both sides: some see it as a good way to finance their projects, while others are completely against seeing ads in their terminals. Most of the negative comments on this new funding scheme came from developers who are dissatisfied that these post-install ad banners will now be making their way into logs, making app debugging unnecessarily complicated. Robert Hafner, a developer from California, commented on a GitHub thread, "I don't want to have to view advertisements in my CI logs, and I hate what this would mean if other packages started doing this. Some JS packages have dozens, hundreds, or even more dependencies - can you imagine what it would look like if every package did this?" Some developers took it a step further and created the world's first ad blocker for a command line interface.
https://twitter.com/dawnerd/status/1165330723923849216

They also put pressure on Linode and LogRocket to stop showing the ads, and Linode eventually decided to drop out. https://twitter.com/linode/status/1165421512633016322

Additionally, on Hacker News, users are confused about this initiative and curious about how it will actually work out. One of them commented, "The sponsorship pays directly for maintainer time. That is, writing new features, fixing bugs, answering user questions, and improving documentation. As far as I can tell, this project is literally just a 200 line configuration file for a linter. Not even editor integrations for the linter, just a configuration file for it. Is it truly something that requires funding to 'add new features'? How much time does it take out of your day to add a new line of JSON to a configuration file, or is the sponsorship there to pay for all the bikeshedding that's probably happening in the issues and comments on the project? What sort of bugs are there in a linter configuration file? I'm really confused by all of this. > The funds raised so far ($2,000) have paid for Feross's time to release Standard 14 which has taken around five days. Five days to do what? Five full 8 hour days? Does it take 5 days to cut a GitHub release and push it to NPM? What about the other contributors that give up their time for free, are their contributions worthless? Rather than feeling like a way to support FOSS developers or FOSS projects, it feels like a rather backhanded attempt at monetization by the maintainer where Standard was picked out because it was his most popular project, and therefore would return the greatest advertising revenue. Do JavaScript developers, or people that use this project, have a more nuanced opinion than me? I do zero web development, is this type of stuff normal?"

After continuous backlash from the developer community, the project has come to a halt and no messages are promoted on the CLI. It is clear that while open-source funding still remains a major pain point for developers and maintainers, people don't really like ads in their CLI terminals.

What's new in tech this week!
"Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
Kotlin 1.3.50 released with 'duration and time Measurement' API preview, Dukat for npm dependencies, and much more!
React.js: why you should learn the front end JavaScript library and how to get started

Fairphone 3 launches as a sustainable smartphone alternative that supports easy module repairs

Sugandha Lahoti
28 Aug 2019
3 min read
Dutch company Fairphone has unveiled the third version of its sustainable smartphone - Fairphone 3. The new version builds on the company's vision of creating sustainable but performant devices while also minimizing electronic waste. https://www.youtube.com/watch?v=S0fbZerTUjY

Talking about specs, Fairphone 3 has all the specifications of a modern 2019 smartphone. The sustainable smartphone has a 5.7-inch 1080x2160 18:9 touchscreen with Gorilla Glass 5 on top, runs on the Snapdragon 632 SoC, and has 4GB of RAM, 64GB of expandable storage, and a 3,000 mAh removable battery with fast charging support. It also runs Android 9 Pie and has a 12 MP rear camera and an 8 MP selfie shooter, dual-SIM functionality, Wi-Fi, Bluetooth 4.2, GPS, NFC, and 4G network support.

Most importantly, Fairphone 3 contains 7 modules, which have been designed to support easy repairs in line with Fairphone's goals for long-lasting and sustainable phones. It also ships in sustainable and reusable packaging, delivered with its own protective bumper. The company revealed that you can save 30% of CO2 emissions or more just by maintaining the Fairphone 3 so that it lasts longer. The sustainable smartphone is made with responsibly sourced and conflict-free tin and tungsten, recycled copper and plastics, and Fairtrade gold. The phone also supports collection programs in countries like Ghana. To combat e-waste, the company will reward buyers for using Fairphone's recycling program to return their previous phones.

Not just reducing electronic waste, Fairphone is also working to make its supply chain fairer for all those involved. Fairphone is collaborating with the final assembly partner Arima to improve employee satisfaction by improving worker representation, health, and safety, and by paying a bonus to workers with the aim of bridging the gap between minimum and living wages in the factory.

Read Also: The ethical mobile OS, /e/-MVP beta2 ported to Android-Oreo, /e/ powered smartphone may be released soon!

Fairphone CEO Eva Gouwens sums up the company's goals for Fairphone 3, commenting, "We envision an economy where consideration for people and the planet is a natural part of doing business and according to this vision, we have created scalable ways to improve our supply chain and product. We developed the Fairphone 3 to be a real sustainable alternative on the market, which is a big step towards lasting change. By establishing a market for ethical products, we want to motivate the entire industry to act more responsibly since we cannot achieve this change alone."

Fairphone 3 is available for presale on the company's official website for €450. It will be in stock on September 3, when it will be offered across Europe by select retailers and operators. People appreciated Fairphone 3 as a sustainable smartphone on various social networks. A Hacker News user posted, "Modular, repairable phone with fair and traceable raw materials (as far as possible). The company is a Dutch social enterprise and constantly creative, innovative and progressing further. Really happy they have been profitable as well and had a successful investment round last year. I followed this project from Fairphone 1 and they really have made impressive progress.
FP3 looks great in specs and is with €450 priced a bit better than the previous one."

https://twitter.com/tonmoy_phy/status/1166424694544728064

Other interesting news in Tech

CERN plans to replace Microsoft-based programs with an affordable open-source software
Glitch hits 2.5 million apps, secures $30M in funding, and is now available in VS Code
VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision
Stripe's 'Negative Emissions Commitment' to pay for the removal and sequestration of CO2 to mitigate global warming

A year-old Webmin backdoor revealed at DEF CON 2019 allowed unauthenticated attackers to execute commands with root privileges on servers

Bhagyashree R
27 Aug 2019
4 min read
Earlier this month, at DEF CON 2019, a Turkish security researcher, Özkan Mustafa Akkuş, presented a zero-day remote code execution vulnerability in Webmin, a web-based system configuration tool for Unix-like systems. Following this disclosure, its developers revealed that the backdoor was found in Webmin 1.890. A similar backdoor was also detected in versions 1.900 to 1.920.

The vulnerability was found in a Webmin security feature that allows an administrator to enforce a password expiration policy for other users' accounts. The security researcher revealed that the vulnerability was present in the password reset page. It allows a remote, unauthenticated attacker to execute arbitrary commands with root privileges on affected servers; they just need to add a simple pipe command ("|") to the old password field through POST requests. This vulnerability is tracked as CVE-2019-15107.

The Webmin zero-day vulnerability was no accident

Jamie Cameron, the author of Webmin, talked in a blog post about how and when this backdoor was injected. He revealed that this backdoor was no accident and was, in fact, injected deliberately into the code by a malicious actor. "Neither of these were accidental bugs - rather, the Webmin source code had been maliciously modified to add a non-obvious vulnerability," he wrote.

The traces of this backdoor go back to April 2018, when the development build server of Webmin was exploited and a vulnerability was introduced to the 'password_change.cgi' script. The team then reverted this file to its checked-in version from GitHub. The attacker modified this file again in July 2018; however, this time they added the exploit to code that executed only when changing of expired passwords was enabled. The team then replaced the vulnerable build server with a new server running CentOS 7 in September 2018. But this also did not solve the problem, because the build directory that had the modified file was copied across from backups made on the original server.

After being informed about the zero-day exploit on 17th August 2019, the team released an updated Webmin version 1.930 and Usermin version 1.780 addressing the vulnerabilities. These releases also address cross-site scripting (XSS) vulnerabilities that were disclosed by a different security researcher. In order to ensure that such attacks are not repeated in the future, the team is taking a few steps:
- Updating the build process to use only checked-in code from GitHub, rather than a local directory that is kept in sync.
- Rotating all passwords and keys accessible from the old build system.
- Auditing all GitHub check-ins over the past year to look for commits that may have introduced similar vulnerabilities.

To know more in detail, check out the official announcement by Webmin.

Attackers are exploiting vulnerabilities revealed at DEF CON and Black Hat

A ZDNet report posted last week revealed that attackers are now exploiting the vulnerabilities that were made public earlier this month. Bad Packets reported on Twitter that it detected several "active exploitation attempts" by attackers on Friday. https://twitter.com/bad_packets/status/1164764172044787712 Many attackers are also targeting vulnerabilities in Pulse Secure VPN and Fortinet's FortiGate VPN.
Some of these vulnerabilities were discussed in a Black Hat talk titled ‘Infiltrating Corporate Intranet Like NSA: Pre-auth RCE on Leading SSL VPNs.’ Bad Packets shared in a blog post that its honeypots have detected “opportunistic mass scanning activity” targeting Pulse Secure VPN server endpoints vulnerable to CVE-2019-11510. This vulnerability discloses sensitive information that unauthenticated attackers can use to obtain private keys and user passwords.

https://twitter.com/bad_packets/status/1164592212270673920

Security researcher Kevin Beaumont tweeted that hackers are scanning the internet for vulnerable devices in order to retrieve VPN session files from Fortinet's FortiGate.

https://twitter.com/GossiTheDog/status/1164536461665996800

Puppet launches Puppet Remediate, a vulnerability remediation solution for IT Ops
New Bluetooth vulnerability, KNOB attack can manipulate the data transferred between two paired devices
Apple announces ‘WebKit Tracking Prevention Policy’ that considers web tracking as a security vulnerability

Google Chrome 76 now supports native lazy-loading

Bhagyashree R
27 Aug 2019
4 min read
Earlier this month, Google Chrome 76 got native support for lazy loading. Web developers can now use the new ‘loading’ attribute to lazy-load resources without having to rely on a third-party library or write custom lazy-loading code.

Why native lazy loading is introduced

Lazy loading aims to improve web performance in terms of both speed and data consumption. Images are generally the most requested resources on any website, and some web pages end up using a lot of data to load images that are outside the viewport. While this might not matter much to a WiFi user, it can consume a lot of cellular data. Out-of-viewport embedded iframes, not just images, can also consume a lot of data and contribute to slow page loads.

Lazy loading addresses this problem by deferring non-critical, below-the-fold image and iframe loads until the user scrolls closer to them. This results in faster page loading, minimized bandwidth for users, and reduced memory usage.

Previously, there were a few ways to defer the loading of images and iframes outside the viewport: you could use the Intersection Observer API or the ‘data-src’ attribute on the 'img' tag, and many developers built third-party libraries to provide even easier abstractions. Native support, however, eliminates the need for an external library. It also ensures that the deferred loading of images and iframes still works even if JavaScript is disabled on the client.

How you can use lazy loading

Even without this feature, Chrome already loads images at different priorities depending on their location with respect to the device viewport. The new ‘loading’ attribute, however, allows developers to completely defer the loading of images and iframes until the user scrolls near them. The distance-from-viewport threshold is not fixed; it depends on the type of resource being fetched, whether Lite mode is enabled, and the effective connection type. Default values per effective connection type are assigned in the Chromium source code and might change in a future release. Also, since the images are lazy-loaded, content can reflow; to prevent this, developers are advised to set an explicit width and height on images.

You can assign any one of the following three values to the ‘loading’ attribute:

‘auto’: The default behavior of the browser, equivalent to not including the attribute.
‘lazy’: Defers loading of the image or iframe until it reaches a calculated distance from the viewport.
‘eager’: Loads the resource immediately.

Support for native lazy loading in Chrome 76 got mixed reactions from users. A user commented on Hacker News, “I'm happy to see this. So many websites with lazy loading never implemented a fallback for noscript. And most of the popular libraries didn't account for this accessibility.” Another user expressed that it can hinder the user experience: “I may be the odd one out here, but I hate lazy loading. I get why it's a big thing on cellular connections, but I do most of my browsing on WIFI. With lazy loading, I'll frequently be reading an article, reach an image that hasn't loaded in yet, and have to wait for it, even though I've been reading for several minutes. Sometimes I also have to refind my place as the whole darn page reflows. I wish there was a middle ground... detect I'm on WIFI and go ahead and load in the lazy stuff after the above the fold stuff.”

Right now, Chrome is the only browser to support native lazy loading. However, other browsers may follow suit, considering Firefox has an open bug for implementing lazy loading and Edge is based on Chromium.

Why should your e-commerce site opt for Headless Magento 2?
Edge, Chrome, Brave share updates on upcoming releases, recent milestones, and more at State of Browsers event
Angular 8.0 releases with major updates to framework, Angular Material, and the CLI

Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset

Amrata Joshi
27 Aug 2019
3 min read
Two years ago, the team at the Facebook AI Research (FAIR) lab open-sourced fastText, a library used to build scalable solutions for text representation and classification. To make models work efficiently on datasets with a large number of categories, finding the best hyperparameters is crucial. However, searching for the best hyperparameters manually is difficult, because the effect of each parameter varies from one dataset to another. For this, Facebook developed an autotune feature in fastText that automatically finds the best hyperparameters for your dataset. Yesterday, they announced that they are open-sourcing the Hyperparameter autotuning feature for the fastText library.

What are hyperparameters?

Hyperparameters are parameters whose values are fixed before the training process begins. They are critical components of an application and can be tuned to control how a machine learning algorithm behaves. It is therefore important to search for the best hyperparameters, as the performance of an algorithm can depend heavily on their selection.

The need for Hyperparameter autotuning

It is difficult and time-consuming to search for the best hyperparameters manually, even for expert users. This new feature makes the task easier by automatically determining the best hyperparameters for building an efficient text classifier. To use autotuning, a researcher supplies the training data, a validation set, and a time constraint. The researcher can also constrain the size of the final model using the compression techniques in fastText. Building a size-constrained text classifier is useful for deploying models on devices or in the cloud while keeping a small memory footprint. A minimal sketch of what such an autotuning call can look like is included after this article's related links below.

With Hyperparameter autotuning, researchers can now easily build a memory-efficient classifier for a variety of tasks, including language identification, sentiment analysis, tag prediction, spam detection, and topic classification. The team's strategy for exploring hyperparameters is inspired by existing tools such as Nevergrad, but has been tailored to fastText to make use of the specific structure of its models. The autotune feature explores hyperparameters by initially sampling in a large domain that shrinks around the best combinations over time.

This new feature could be seen as a competitor to Amazon SageMaker Automatic Model Tuning. In Amazon's offering, however, the user needs to select the hyperparameters to be tuned, a range to explore for each parameter, and the total number of training jobs, whereas Facebook's Hyperparameter autotuning selects the hyperparameters automatically.

To know more about this news, check out Facebook's official blog post.

Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests
Facebook must face privacy class action lawsuit, loses facial recognition appeal, U.S. Court of Appeals rules
Facebook research suggests chatbots and conversational AI are on the verge of empathizing with humans
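As promised above, here is a minimal sketch of an autotuning call using fastText's Python bindings. The file names ("cooking.train", "cooking.valid") are hypothetical placeholders, and the keyword arguments reflect the fastText autotune documentation rather than anything quoted in Facebook's announcement, so treat this as an illustrative sketch:

```python
# A minimal sketch of hyperparameter autotuning with the fastText Python bindings.
# File names are placeholders; argument names are per the fastText autotune docs.
import fasttext

model = fasttext.train_supervised(
    input="cooking.train",                   # training data: one "__label__<tag> text" line per example
    autotuneValidationFile="cooking.valid",   # validation set the tuner optimizes against
    autotuneDuration=600,                     # time budget for the search, in seconds
    autotuneModelSize="2M",                   # optional: constrain the size of the final model
)

# Evaluate on the validation set: returns (number of examples, precision@1, recall@1)
print(model.test("cooking.valid"))

# Save the tuned, size-constrained model (quantized models are commonly saved as .ftz)
model.save_model("cooking_autotuned.ftz")
```

According to the fastText documentation, setting a model size constraint makes the tuner also search quantization parameters so that the saved model fits the requested memory footprint.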

Moscow's blockchain-based internet voting system uses an encryption scheme that can be easily broken

Sugandha Lahoti
27 Aug 2019
4 min read
Russia is looking forward to its September 2019 elections for representatives to the Parliament of the city (the Moscow City Duma). For the first time ever, Russia will use internet voting in its elections. The internet-based system will use a blockchain developed in-house by the Moscow Department of Information Technology. Since the news broke, security experts have been quite skeptical about the overall applicability of blockchain to elections.

Moscow's voting system has a critical flaw in the encryption scheme

Recently, French security researcher Pierrick Gaudry found a critical vulnerability in the encryption scheme used in the voting system. The scheme is ElGamal encryption, an asymmetric-key algorithm for public-key cryptography. Gaudry revealed that it can be broken in about 20 minutes on a standard personal computer, using only freely available software.

The main problem, Gaudry says, is the choice of the three cyclic groups of generators. These are multiplicative groups of finite fields of prime order, each of these primes being a Sophie Germain prime. The prime fields are all less than 256 bits long, and the 256x3 private key length is too short to guarantee strong security. Discrete logarithms in such a small setting can be computed in a matter of minutes, revealing the secret keys and allowing the encrypted data to be decrypted easily. Gaudry also showed that the implemented version of ElGamal worked in groups of even order, which means that it leaked a bit of the message. A toy demonstration of why such short ElGamal keys are breakable is included after this article's related links below.

What an attacker could do with these encryption keys is currently unknown, since the voting system's protocols were not yet available in English, so Gaudry could not investigate further. Following Gaudry's discovery, the Moscow Department of Information Technology promised to fix the reported issue. In a Medium blog post, they wrote, "We absolutely agree that 256x3 private key length is not secure enough. This implementation was used only in a trial period. In a few days, the key's length will be changed to 1024." (Gaudry has mentioned in his research paper that the current general recommendation is at least 2048 bits.)

Even after this response, Gaudry remained concerned about potential flaws introduced by the large, last-minute changes made to fix the key length issue. Gaudry's concerns proved true: recently another security researcher, Alexander Golovnev, found an attack on the revised encryption scheme.

The revised encryption algorithm still leaks messages

Alexander Golovnev is the current fellow of the Michael O. Rabin Postdoctoral Fellowship in Theoretical Computer Science at Harvard University. His research reveals that the new implementation of the encryption system also leaks a bit of the message. This is caused by using ElGamal with messages that are not mapped to the cyclic group under consideration. The flaw can be misused to count the number of votes cast for a candidate, which is illegal until the end of the election period.

Golovnev says that this security vulnerability is a major issue in the implemented cryptographic scheme. The attack does not recover the secret key, as required by the public testing scenario, but rather breaks the system without recovering the secret key. So far there has been no response or solution from the Moscow Department of Information Technology regarding this vulnerability.

Many people took to Twitter to express their disappointment at Moscow's lamentable internet voting system.
https://twitter.com/mjos_crypto/status/1166252479761330176

https://twitter.com/KevinRothrock/status/1163750923182780416

In 2018, Robert Mueller's report indicated that 12 Russian military officers had meddled with the 2016 U.S. presidential elections, hacking into the Democratic National Committee, the Democratic Congressional Campaign Committee, and the Clinton campaign. This year, Microsoft revealed that the Russian hacker group 'Fancy Bear' attempted to compromise IoT devices, including a VOIP phone, a printer, and a video decoder, across multiple locations. These attacks were discovered in April by security researchers at the Microsoft Threat Intelligence Center.

Microsoft reveals Russian hackers "Fancy Bear" are the culprit for IoT network breach in the US
FireEye reports infrastructure-crippling Triton malware linked to Russian government tech institute
Russian government blocks ProtonMail services for its citizens
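As promised above, here is a toy, self-contained Python sketch of why Gaudry's observation about key size matters. It uses textbook ElGamal over a deliberately tiny prime field chosen purely for illustration (the prime, generator, and message are all made up, and this is not the Moscow system's code, parameters, or attack tooling). The point is the one Gaudry makes: when the underlying group is too small, the discrete logarithm, and with it the private key, can be computed quickly.

```python
# Toy sketch (assumption: textbook ElGamal with made-up, deliberately tiny parameters,
# NOT the Moscow system's actual code or key sizes). It demonstrates that a small group
# lets an attacker recover the private key from the public key and then decrypt at will.
import random

p = 2 * 1019 + 1           # 2039, a safe prime (1019 is a Sophie Germain prime)
g = 7                      # 7 is a primitive root modulo 2039

# Key generation
x = random.randrange(2, p - 1)   # private key
h = pow(g, x, p)                 # public key

# Encrypt a message m (1 <= m < p) with a fresh ephemeral key k
m = 42
k = random.randrange(2, p - 1)
c1, c2 = pow(g, k, p), (m * pow(h, k, p)) % p

# "Attack": brute-force the discrete log of h = g^x (mod p) to recover the private key.
# This is instant here because p is tiny; for sub-256-bit primes, standard free software
# computes the same logarithm in minutes, which is Gaudry's point.
x_recovered = next(e for e in range(p - 1) if pow(g, e, p) == h)

# Decrypt the ciphertext with the recovered key (modular inverse via Fermat's little theorem)
m_recovered = (c2 * pow(pow(c1, x_recovered, p), p - 2, p)) % p
print(x == x_recovered, m_recovered == m)   # expected output: True True
```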