
Tech News


DeepMind AI’s AlphaStar achieves Grandmaster level in StarCraft II with 99.8% efficiency

Vincy Davis
04 Nov 2019
5 min read
Earlier this year in January, Google DeepMind's AI AlphaStar defeated two professional players, TLO and MaNa, at StarCraft II, a real-time strategy game. Two days ago, DeepMind announced that AlphaStar has now achieved the highest possible online competitive ranking in StarCraft II, called Grandmaster level. This makes AlphaStar the first AI to reach the top league of a widely popular game without any restrictions. AlphaStar used multi-agent reinforcement learning and rated above 99.8% of officially ranked human players. It achieved the Grandmaster level for all three StarCraft II races: Protoss, Terran, and Zerg. The DeepMind researchers have published the details of AlphaStar in the paper titled 'Grandmaster level in StarCraft II using multi-agent reinforcement learning'.

https://twitter.com/DeepMindAI/status/1189617587916689408

How did AlphaStar achieve the Grandmaster level in StarCraft II?

The DeepMind researchers were able to develop a robust and flexible agent by understanding the potential and limitations of open-ended learning, which helps AlphaStar cope with complex real-world domains. "Games like StarCraft are an excellent training ground to advance these approaches, as players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales," states the blog post.

StarCraft II requires players to balance high-level economic decisions with individual control of hundreds of units. Human players operate under physical constraints that limit their reaction time and rate of actions, so AlphaStar was given equivalent constraints, including delays due to network latency and computation time. To limit its actions per minute (APM), AlphaStar's peak statistics were kept substantially lower than those of humans. To align with standard human play, it could also view only a limited portion of the map, could register only a limited number of mouse clicks, and could take only 22 non-duplicated actions every five seconds.

AlphaStar uses a combination of general-purpose techniques: neural network architectures, imitation learning, reinforcement learning, and multi-agent learning. Games were sampled from a publicly available dataset of anonymized human replays, on which the agent was trained to predict the action of every player. These predictions were then used to seed a diverse set of strategies reflecting the different modes of human play.

Read More: DeepMind's AlphaStar AI agent will soon anonymously play with European StarCraft II players

Dario "TLO" Wünsch, a professional StarCraft II player, says, "I've found AlphaStar's gameplay incredibly impressive – the system is very skilled at assessing its strategic position, and knows exactly when to engage or disengage with its opponent. And while AlphaStar has excellent and precise control, it doesn't feel superhuman – certainly not on a level that a human couldn't theoretically achieve. Overall, it feels very fair – like it is playing a 'real' game of StarCraft."

According to the paper, AlphaStar had roughly 10^26 possible actions available at each time step, and it had to make thousands of actions before learning whether it had won or lost a game. One of the key strategies behind AlphaStar's performance was learning human strategies, which was necessary to ensure that the agents kept exploring those strategies throughout self-play.
The researchers say, "To do this, we used imitation learning – combined with advanced neural network architectures and techniques used for language modeling – to create an initial policy which played the game better than 84% of active players." AlphaStar also uses a latent variable to encode the distribution of opening moves from human games. This helped AlphaStar preserve high-level strategies and enabled it to represent many strategies within a single neural network.

By combining advances in imitation learning, reinforcement learning, and the League, the researchers were able to train AlphaStar Final, the agent that reached the Grandmaster level at the full game of StarCraft II without any modifications. AlphaStar used a camera interface, which gave it exactly the information that a human player would receive, and all of the interfaces and restrictions it faced were approved by a professional player. Finally, the results indicate that general-purpose learning techniques can scale AI systems to work in complex, dynamic environments involving multiple actors.

AlphaStar's feat has got many people excited about the future of AI.

https://twitter.com/mickdooit/status/1189604170489315334
https://twitter.com/KaiLashArul/status/1190236180501139461
https://twitter.com/JoshuaSpanier/status/1190265236571459584

Interested readers can read the research paper to check AlphaStar's performance, and head over to DeepMind's blog for more details.

Related reads:
Google AI introduces Snap, a microkernel approach to 'Host Networking'
Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim
Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
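The action-rate constraint described earlier (for example, a cap of 22 non-duplicated actions every five seconds) can be pictured with a small sliding-window budget. This is a conceptual sketch only, not DeepMind's code; the Action shape and the default numbers are assumptions taken from the article.

```ts
// Conceptual sketch only (not DeepMind's code): a sliding-window action
// budget of the kind described above. The Action type and default numbers
// (22 non-duplicated actions per 5 seconds) are assumptions for illustration.
interface Action {
  name: string;
  issuedAt: number; // milliseconds
}

class ActionBudget {
  private history: Action[] = [];

  constructor(
    private readonly maxDistinctActions = 22,
    private readonly windowMs = 5_000,
  ) {}

  /** Returns true if the action may be issued now, false if it must wait. */
  tryIssue(name: string, now: number = Date.now()): boolean {
    // Forget actions that have fallen out of the time window.
    this.history = this.history.filter((a) => now - a.issuedAt < this.windowMs);

    // Count only non-duplicated (distinct) action names inside the window.
    const distinct = new Set(this.history.map((a) => a.name));
    if (distinct.size >= this.maxDistinctActions && !distinct.has(name)) {
      return false; // budget exhausted: the agent must wait
    }

    this.history.push({ name, issuedAt: now });
    return true;
  }
}

// Usage: an agent trying to act faster than the budget allows gets throttled.
const budget = new ActionBudget();
console.log(budget.tryIssue("build_pylon", 0));  // true
console.log(budget.tryIssue("build_pylon", 10)); // true (duplicate of an already-allowed action)
```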


Snyk’s JavaScript frameworks security report 2019 shares the state of security for React, Angular, and other frontend projects

Bhagyashree R
04 Nov 2019
6 min read
Last week, Snyk, an open-source security platform, published its State of JavaScript frameworks security report 2019. The report focuses mainly on security vulnerabilities and risks in the React and Angular ecosystems, and also covers security practices in other common frontend projects, including Vue.js, Bootstrap, and jQuery.

https://twitter.com/snyksec/status/1189527376197246977

Key takeaways from the State of JavaScript frameworks security report

Security vulnerabilities in the core Angular and React projects

In the report, the 'react', 'react-dom', and 'prop-types' libraries were considered the core modules of React, since they often form the foundation for React web applications. Snyk's research team found three cross-site scripting (XSS) vulnerabilities in total: two in 'react' and one in 'react-dom'. The two vulnerabilities in the 'react' library were present in fairly old versions: the 0.5.x versions and versions prior to 0.14. The vulnerability in 'react-dom', however, was found in a recent release, version 16.x, and its occurrence depends on other preconditions, such as using the library within a server-rendering context. The Common Vulnerability Scoring System (CVSS) scores of these vulnerabilities ranged from 6.5 to 7.1, which means they were all of medium to high severity.

Coming to Angular, Snyk found 19 vulnerabilities across six different release branches of Angular 1.x (AngularJS), which is no longer maintained. Angular 1.5 had the highest number of vulnerabilities, with seven in total: three of high severity and four of medium severity. The good news is that with every new version, the vulnerabilities have decreased, both in severity and in count.

Security risks of indirect dependencies

React and Angular projects are often generated with a scaffolding tool that provides a boilerplate: the 'create-react-app' npm package for React and the '@angular/cli' npm package for Angular. In sample React and Angular projects created with these scaffolding tools, both included development dependencies with vulnerabilities; neither had any production dependency security issues. "It's worthy to note that Angular relies on 952 dependencies, which contain a total of two vulnerabilities; React relies on 1257 dependencies, containing three vulnerabilities and one potential license compatibility issue," the report states.

Here's the list of security vulnerabilities that were found in these sample projects:

Source: Snyk

Security vulnerabilities in the Angular module ecosystem

For the purposes of this study, the Snyk research team divided the Angular ecosystem into three areas: Angular ecosystem modules, malicious versions of modules, and developer tooling. The Angular module ecosystem has the following vulnerable modules:

Source: Snyk

On malicious versions of modules, the report lists three malicious versions across the 'angular-bmap', 'ng-ui-library', and 'ngx-pica' modules. The 'angular-bmap' 0.0.9 version included malicious code that collected sensitive password and credit card information from forms and sent it to an attacker-controlled URL. Thankfully, this version has now been taken down from the npm registry. The 'ng-ui-library' 1.0.987 release contains the same malicious code as 'angular-bmap' 0.0.9, yet the module is still being maintained.
The third module, 'ngx-pica' (versions 1.1.4 to 1.1.6), also contains the same malicious code as the above two modules. In developer tooling, the 'angular-http-server' module was found vulnerable to directory traversal twice.

Security vulnerabilities in the React module ecosystem

In React's case, Snyk found four malicious packages, namely 'react-datepicker-plus', 'react-dates-sc', 'awesome_react_utility', and 'reactserver-native'. These contain malicious code that harvests credit card and other sensitive information and attacks compromised modules in the React ecosystem.

Notable vulnerable modules found in React's ecosystem during this study:

The 'react-marked-markdown' module has a high-severity XSS vulnerability with no fix available as of now.
The 'preact-render-to-string' library is vulnerable to XSS in all versions prior to 3.7.2.
The 'react-tooltip' library is vulnerable to XSS attacks in all versions prior to 3.8.1.
The 'react-svg' library has a high-severity XSS vulnerability, disclosed by security researcher Ron Perris, affecting all versions prior to 2.2.18.
The 'mui-datatables' library has a CSV injection vulnerability.

"When we track all the vulnerable React modules we found, we count eight security vulnerabilities over the last three years with two in 2017, six in 2018 and two up until August 2019. This calls for responsible usage of open source and making sure you find and fix vulnerabilities as quickly as possible," the report suggests.

Along with listing the security vulnerabilities in React and Angular, the report also compares the overall security posture of the two projects, including secure coding conventions, built-in security capabilities, responsible disclosure policies, and dedicated security documentation.

Vue.js security

In total, four vulnerabilities were detected in the Vue.js core project between December 2017 and August 2018: three medium-severity issues and one low-severity regular expression denial of service vulnerability. As for Vue's module ecosystem, the report lists the following vulnerable modules:

The 'bootstrap-vue' library has a high-severity XSS vulnerability, disclosed in January 2019, affecting all versions prior to 2.0.0-rc.12.
The 'vue-backbone' library had a malicious version published.

Bootstrap security

The Snyk research team tracked a total of seven XSS vulnerabilities in Bootstrap. Of those seven, three were disclosed in 2019 for recent Bootstrap v3 versions, and three were disclosed in 2018, one of which affects the newer 4.x Bootstrap release. All of these vulnerabilities have security fixes and an upgrade path for users to remediate the risks. Among the vulnerable modules in the Bootstrap ecosystem are:

The 'bootstrap-markdown' library, which includes an unfixed XSS vulnerability affecting all versions.
The 'bootstrap-vuejs' library, which has a high-severity XSS vulnerability affecting all versions prior to bootstrap-vue 2.0.0-rc.12.
The 'bootstrap-select' library, which also includes a high-severity XSS vulnerability.

This article touched upon some of the key findings of the report. Check out the full report by Snyk to learn more.

Related reads:
React Native 0.61 introduces Fast Refresh for reliable hot reloading
React Conf 2019: Concurrent Mode preview out, CSS-in-JS, React docs in 40 languages, and more
Vue maintainers proposed, listened, and revised the RFC for hooks in Vue API
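For readers unfamiliar with how XSS typically enters a React application, the sketch below contrasts an unsafe and a safe rendering path. It is a generic illustration, not a reproduction of the specific vulnerabilities Snyk reported; userComment stands in for any attacker-controllable string.

```tsx
// Generic illustration of how XSS typically enters a React app; not a
// reproduction of the specific CVEs in the report. `userComment` stands in
// for any attacker-controllable string.
import React from "react";

const userComment = '<img src=x onerror="alert(document.cookie)">';

// Unsafe: raw user-supplied HTML is injected into the DOM verbatim,
// so the onerror handler above runs in the victim's browser.
export function UnsafeComment() {
  return <div dangerouslySetInnerHTML={{ __html: userComment }} />;
}

// Safe: rendered as a text child, so React escapes the markup automatically.
export function SafeComment() {
  return <div>{userComment}</div>;
}
```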


Netflix open sources Polynote, an IDE-like polyglot notebook with Scala support, Apache Spark integration, multi-language interoperability, and more

Vincy Davis
31 Oct 2019
4 min read
Last week, Netflix announced the open source launch of Polynote, a polyglot notebook. It comes with full-scale Scala support, Apache Spark integration, multi-language interoperability (including Scala, Python, and SQL), and IDE-like features such as interactive autocomplete and a rich text editor with LaTeX support. Polynote provides a seamless integration of Netflix's Scala-based, JVM-centred ML platform with Python's machine learning and visualization libraries. It is currently used by Netflix's personalization and recommendation teams and is also being integrated with the rest of the Netflix research platform.

The Netflix team says, "Polynote originated from a frustration with the shortcomings of existing notebook tools, especially with respect to their support of Scala," and adds, "we found that our users were also frustrated with the code editing experience within notebooks, especially those accustomed to using IntelliJ IDEA or Eclipse."

Key features supported by Polynote

Reproducibility

A traditional notebook generally relies on a read–eval–print loop (REPL) environment to build an interactive environment. According to Netflix, the expressions and results of a REPL evaluation are quite rigid, so Netflix built Polynote's code interpretation from scratch instead of relying on a REPL. This lets Polynote keep track of the variables defined in each cell by constructing the input state for a given cell from the cells that have run above it. By making the position of a cell important to its execution semantics, Polynote allows users to read the notebook from top to bottom, which improves reproducibility by increasing the chances that the notebook can be run sequentially.

Editing improvements

Polynote provides editing enhancements such as:

It integrates code editing with the Monaco editor for interactive auto-complete.
It highlights errors inline to help users rectify them quickly.
It offers a rich text editor for text cells, which allows users to easily insert LaTeX equations.

Visibility

One of the major guiding principles of Polynote is visibility. It enables a live view of what the kernel is doing at any given time, without requiring logs. A single glance at the user interface conveys a lot of information:

The notebook view and task list display the currently running cell, as well as the queue of cells waiting to run.
The exact statement currently running is highlighted in colour.
Job- and stage-level Spark progress information is shown in the task list.
The kernel status area provides information about the execution status of the kernel.

Polyglot

Currently, Polynote supports Scala, Python, and SQL cell types and enables users to seamlessly move from one language to another within the same notebook. When a cell runs, the kernel hands the typed input values over to the cell's language interpreter, and the interpreter provides the resulting typed output values back to the kernel. This allows every cell in a Polynote notebook to run with the same context and the same shared state, regardless of language.

Dependency and configuration management

To further ease reproducibility, Polynote stores configuration and dependency setup within the notebook itself, and provides a user-friendly Configuration section where users can set dependencies for each notebook. This allows Polynote to fetch the dependencies locally and load the Scala dependencies into an isolated ClassLoader.
This reduces the chances of Polynote's classes conflicting with the Spark libraries. When Polynote is used in Spark mode, it creates a Spark Session for the notebook, and the Python and Scala dependencies are automatically added to that session.

Data visualization

One of the most important use cases of a notebook is the ability to explore and visualize data. Polynote integrates with two open source visualization libraries, Vega and Matplotlib, and has native support for data exploration, including a data schema view, a table inspector, and a plot constructor. These features help users learn about their data without cluttering their notebooks.

Users have appreciated Netflix's decision to open source Polynote and have liked its features.

https://twitter.com/SpirosMargaris/status/1187164558382845952
https://twitter.com/suzatweet/status/1187531789763399682
https://twitter.com/julianharris/status/1188013908587626497

Visit the Netflix Tech Blog for more information on Polynote. You can also check out the Polynote website for more details.

Related reads:
Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels
Netflix adopts Spring Boot as its core Java framework
Netflix's culture is too transparent to be functional, reports the WSJ
Linux Foundation introduces strict telemetry data collection and usage policy for all its projects
Fedora 31 releases with performance improvements, dropping support for 32 bit and Docker package
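The position-dependent execution model described under Reproducibility can be pictured with a tiny sketch: each cell receives only the state produced by the cells above it. This is a conceptual illustration in TypeScript, not Polynote's actual (Scala) implementation; the cell contents and values are made up.

```ts
// Conceptual sketch of position-dependent cell execution (not Polynote's
// actual implementation): each cell's input state is exactly the output of
// the cells above it, which is what makes top-to-bottom reruns reproducible.
type CellState = Record<string, unknown>;

interface Cell {
  id: string;
  run: (input: CellState) => CellState; // returns the values this cell defines
}

function runNotebook(cells: Cell[]): CellState {
  let state: CellState = {};
  for (const cell of cells) {
    // Merge the cell's outputs into the accumulated state for the next cell.
    state = { ...state, ...cell.run(state) };
  }
  return state;
}

// Hypothetical two-cell notebook: cell "b" sees only what "a" defined above it.
const result = runNotebook([
  { id: "a", run: () => ({ x: 2 }) },
  { id: "b", run: (s) => ({ y: (s.x as number) * 10 }) },
]);
console.log(result); // { x: 2, y: 20 }
```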


Linux Foundation introduces strict telemetry data collection and usage policy for all its projects

Fatema Patrawala
31 Oct 2019
3 min read
Last week, the Linux Foundation introduced a new policy around the collection and use of telemetry data. Under the new policy, any Linux Foundation project must obtain permission from the Linux Foundation before using any telemetry data collection mechanism, and the proposed mechanism will undergo a detailed review.

The Linux Foundation's announcement follows closely after GitLab's telemetry data collection plan came to a halt. Last week, GitLab announced that it would begin collecting new data by inserting JavaScript snippets that interact with both GitLab and a third-party SaaS telemetry service. However, after receiving severe backlash from users, the company reversed its decision.

The official statement from the Linux Foundation reads, "Any Linux Foundation project is required to obtain permission from the Linux Foundation before using a mechanism to collect Telemetry Data from an open source project. In reviewing a proposal to collect Telemetry Data, the Linux Foundation will review a number of factors and considerations."

The Linux Foundation notes that software sometimes includes functionality to collect telemetry data through a "phone home" mechanism built into the software, with the end user deploying the software typically presented with an option to opt in to share this data with the developers. In doing so, certain personal and sensitive user information might also be shared without the user realizing it. To address such data exposure and to adhere to recent data privacy legislation like GDPR, the Linux Foundation has introduced this stringent telemetry data policy. Dan Lopez, a representative of the Linux Foundation, states, "by default, projects of the Linux Foundation should not collect Telemetry Data from users of open source software that is distributed on behalf of the project."

New policy for telemetry data

As per the new policy, if a project community wants to collect telemetry data, it must first coordinate with the Linux Foundation's legal team for a detailed review of the proposed telemetry data and collection mechanism. The review will include an analysis of:

the specific data proposed to be collected
demonstrating that the data is fully anonymized and does not contain any sensitive or confidential information of users
the manner in which users of the software are (1) notified of all relevant details of the telemetry data collection, use, and distribution; and (2) required to consent prior to any telemetry data collection being initiated
the manner in which the collected telemetry data is stored and used by the project community
the security mechanisms used to ensure that collection of telemetry data will not result in (1) unintentional collection of data; or (2) security vulnerabilities resulting from the "phone home" functionality

The Linux Foundation has also emphasized that telemetry data should not be collected unless and until the legal team approves the proposed collection. Additionally, any telemetry data collection approved by the Linux Foundation must be fully documented, must make the collected data available to all participants in the project community, and must at all times comply with the Linux Foundation's Privacy Policy.
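The opt-in requirement at the heart of the policy boils down to a simple gate: nothing is collected or "phoned home" unless the user has explicitly consented. The following is a minimal sketch of that pattern, with an assumed config shape, payload, and placeholder endpoint; it is not code from any Linux Foundation project.

```ts
// Minimal sketch of the opt-in pattern the policy requires: nothing is
// collected or "phoned home" unless the user has explicitly consented.
// The config shape, endpoint, and payload fields are assumptions.
interface TelemetryConfig {
  consentGiven: boolean; // must default to false
  endpoint: string;
}

async function reportUsage(config: TelemetryConfig, event: string): Promise<void> {
  if (!config.consentGiven) {
    return; // default behaviour: collect nothing
  }

  // Only fully anonymized, non-sensitive fields are sent.
  const payload = { event, timestamp: Date.now() };

  await fetch(config.endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

// Usage with a placeholder endpoint; consent defaults to off.
reportUsage(
  { consentGiven: false, endpoint: "https://telemetry.example.com/ingest" },
  "app_start",
).catch(console.error);
```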
Related reads:
A recap of the Linux Plumbers Conference 2019
IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation
Introducing kdevops, a modern DevOps framework for Linux kernel development
GitLab retracts its privacy invasion policy after backlash from community


Fedora 31 releases with performance improvements, dropping support for 32 bit and Docker package

Fatema Patrawala
31 Oct 2019
2 min read
Yesterday, the Fedora team announced the release of Fedora 31. This release brings a few visual changes and performance improvements for Fedora users.

Key changes and features in Fedora 31

Latest GNOME 3.34 release brings performance improvements

With the latest GNOME 3.34 update, Fedora Workstation users will find certain visual changes and performance improvements. It is easier to change the background or lock screen wallpaper with GNOME 3.34, and users can create application folders in the overview to organize the app drawer. Essentially, the new features included in GNOME 3.34 are directly reflected in this release. You can check out the official blog post by GNOME.org that covers the important GNOME 3.34 changes for Fedora 31.

Dropping 32-bit support

In this release, users will no longer find 32-bit bootable images; the team has completely dropped support for the 32-bit i686 kernel. However, some of the most popular 32-bit packages, like Steam and Wine, will continue to work.

Docker package removed

If you are using Docker, it is worth noting that this release enables cgroups v2 by default and removes the Docker package. The official Fedora wiki page highlights this change as follows: "The Docker package has been removed from Fedora 31. It has been replaced by the upstream package moby-engine, which includes the Docker CLI as well as the Docker Engine. However, we recommend instead that you use podman, which is a Cgroups v2-compatible container engine whose CLI is compatible with Docker's. Fedora 31 uses Cgroups v2 by default."

Updated packages in Fedora 31

Several packages have been updated; some of the notable upgrades are:

Glibc 2.30
NodeJS 12
Python 3

Updated Fedora flavors and improved hardware support

For desktop users, Fedora Workstation matters most, but this release also has a significant impact on other Fedora editions, such as Fedora Astronomy and Fedora IoT. The team has improved support for certain SoCs like the Rock64, RockPro64, and several other chips. For more details, take a look at the official Fedora changelog page.

Related reads:
Fedora announces the first preview release of Fedora CoreOS as an automatically updating Linux OS for containerized workloads
Fedora Workstation 31 to come with Wayland support, improved core features of PipeWire, and more
Fedora 30 releases with GCC 9.0, GNOME 3.32, performance improvements, and much more!


Adobe confirms security vulnerability in one of their Elasticsearch servers that exposed 7.5 million Creative Cloud accounts

Fatema Patrawala
31 Oct 2019
3 min read
Last week, Adobe admitted to being the victim of a serious security incident that exposed the personal information of nearly 7.5 million users. The information belonged to the company's popular Creative Cloud service. Adobe Creative Cloud has approximately 15 million subscribers, providing them access to a suite of popular Adobe products such as Photoshop, Lightroom, Illustrator, InDesign, Premiere Pro, Audition, After Effects, and many others.

The news was initially reported by security firm Comparitech, which partnered with security researcher Bob Diachenko to uncover the exposed database. They discovered that Adobe had left an Elasticsearch server accessible on the web without any password or authentication required. The leak was plugged by Adobe after it was alerted. The official statement from Adobe reads, "Late last week, Adobe became aware of a vulnerability related to work on one of our prototype environments. We promptly shut down the misconfigured environment, addressing the vulnerability."

The exposed database included details such as:

Email addresses
Account creation date
Which Adobe products they use
Subscription status
Whether the user is an Adobe employee
Member IDs
Country
Time since last login
Payment status

Adobe said that the data did not include passwords or payment and financial information. Even without such sensitive information, the consequence of an exposure like this is an increased possibility of targeted phishing emails and scams. "Fraudsters could pose as Adobe or a related company and trick users into giving up further info, such as passwords, for example," Comparitech said. It is therefore crucial that users turn on two-factor authentication to add a second layer of account protection.

Adobe is no stranger to data privacy problems: in October 2013, the company suffered a similar data breach that impacted 38 million users; additionally, 3 million encrypted customer credit cards and login credentials for an unknown number of users were exposed. Nor is this the only recent data breach to make headlines. In recent months, the personal data of Ecuadorian citizens, as well as users of NordVPN, a popular virtual private network, and StockX, an online marketplace for buying and selling sneakers, has been left unprotected and exposed on the web. This clearly shows that tech companies still have a long way to go to achieve end-to-end secure networks and servers.

Related reads:
Following Capital One data breach, GitHub gets sued and AWS security questioned by a U.S. Senator
British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach
US Customs and Border Protection reveal data breach that exposed thousands of traveler photos and license plate images

Mobile-aware phishing campaign targets UNICEF, the UN, and many other humanitarian organizations

Savia Lobo
30 Oct 2019
2 min read
A few days ago, researchers from Lookout Phishing AI reported a mobile-aware phishing campaign targeting non-governmental organizations around the world, including UNICEF, a variety of United Nations humanitarian organizations, the Red Cross, the UN World Food Programme, and others. The company has contacted law enforcement and the targeted organizations. "The campaign is using landing pages signed by SSL certificates, to create legitimate-looking Microsoft Office 365 login pages," Threatpost reports.

According to the Lookout Phishing AI researchers, "The infrastructure connected to this attack has been live since March 2019. Two domains have been hosting phishing content, session-services[.]com and service-ssl-check[.]com, which resolved to two IPs over the course of this campaign: 111.90.142.105 and 111.90.142.91. The associated IP network block and ASN (Autonomous System Number) is understood by Lookout to be of low reputation and is known to have hosted malware in the past."

The researchers have also detected some very interesting techniques used in this campaign. The phishing pages quickly detect mobile devices and log keystrokes directly as they are entered in the password field, while JavaScript logic on the pages delivers device-specific content based on the device the victim uses. "Mobile web browsers also unintentionally help obfuscate phishing URLs by truncating them, making it harder for the victims to discover the deception," Jeremy Richards, Principal Security Researcher at Lookout Phishing AI, wrote in his blog post.

Further, the SSL certificates used by the phishing infrastructure had two main ranges of validity: May 5, 2019 to August 3, 2019, and June 5, 2019 to September 3, 2019. The Lookout researchers said that six certificates are currently still valid, and they suspect that these attacks may still be ongoing.

Alexander García-Tobar, CEO and co-founder of Valimail, told Threatpost via email, "By using deviously coded phishing sites, hackers are attempting to steal login credentials and ultimately seek monetary gain or insider information."

To know more about this news in detail, read Lookout's official blog post.

Related reads:
UK's NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses
A new Stuxnet-level vulnerability named Simjacker used to secretly spy over mobile phones in multiple countries for over 2 years: Adaptive Mobile Security reports
Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs


Google AI introduces Snap, a microkernel approach to ‘Host Networking’

Savia Lobo
29 Oct 2019
4 min read
A few days ago, the Google AI team introduced Snap, a microkernel-inspired approach to host networking, at the 27th ACM Symposium on Operating Systems Principles. Snap is a userspace networking system with flexible modules that implement a range of network functions, including edge packet switching, virtualization for Google's cloud platform, traffic shaping policy enforcement, and a high-performance reliable messaging and RDMA-like service. The Google AI team says, "Snap has been running in production for over three years, supporting the extensible communication needs of several large and critical systems."

Why Snap?

Prior to Snap, the Google AI team says it was limited in its ability to develop and deploy new network functionality and performance optimizations in several ways. First, developing kernel code was slow and drew on a smaller pool of software engineers. Second, feature releases through kernel module reloads covered only a subset of functionality and often required disconnecting applications, while the more common case of requiring a machine reboot necessitated draining the machine of running applications. Unlike prior microkernel systems, Snap benefits from multi-core hardware for fast IPC and does not require the entire system to adopt the approach wholesale, as it runs as a userspace process alongside Google's standard Linux distribution and kernel.

Source: Snap research paper

Using Snap, the Google researchers also created a new communication stack called Pony Express that implements a custom reliable transport and communications API. Pony Express provides significant communication efficiency and latency advantages to Google applications, supporting use cases ranging from web search to storage.

Features of the Snap userspace networking system

Snap's architecture combines recent ideas in userspace networking, in-service upgrades, centralized resource accounting, programmable packet processing, kernel-bypass RDMA functionality, and optimized co-design of transport, congestion control, and routing. With these, Snap:

Enables a high rate of feature development with a microkernel-inspired approach of developing in userspace with transparent software upgrades. It also retains the centralized resource allocation and management capabilities of monolithic kernels and improves upon accounting gaps in existing Linux-based systems.
Implements a custom kernel packet injection driver and a custom CPU scheduler that enable interoperability without requiring the adoption of new application runtimes, while maintaining high performance across use cases that simultaneously require packet processing through both Snap and the Linux kernel networking stack.
Encapsulates packet processing functions into composable units called "engines", which enables both modular CPU scheduling and incremental, minimally disruptive state transfer during upgrades.
Through Pony Express, provides support for OSI layer 4 and 5 functionality through an interface similar to an RDMA-capable "smart" NIC. This enables transparently leveraging offload capabilities in emerging hardware NICs as a means to further improve server efficiency and throughput.
Delivers 3x better transport processing efficiency than the baseline Linux kernel and supports RDMA-like functionality at speeds of 5M ops/sec/core.
MicroQuanta: Snap's new lightweight kernel scheduling class

To dynamically scale CPU resources, Snap works in conjunction with a new lightweight kernel scheduling class called MicroQuanta. It provides a flexible way to share cores between latency-sensitive Snap engine tasks and other tasks, limiting the CPU share of latency-sensitive tasks while maintaining low scheduling latency. A MicroQuanta thread runs for a configurable runtime out of every period of time units, with the remaining CPU time available to other CFS-scheduled tasks, using a variation of a fair queuing algorithm for high- and low-priority tasks (rather than more traditional fixed time slots). MicroQuanta is a robust way for Snap to get priority on cores runnable by CFS tasks while avoiding starvation of critical per-core kernel threads. While other Linux real-time scheduling classes use both per-CPU tick-based and global high-resolution timers for bandwidth control, MicroQuanta uses only per-CPU high-resolution timers, which allows scalable time-slicing at microsecond granularity.

Snap is being received positively by many in the community.

https://twitter.com/copyconstruct/status/1188514635940421632

To know more about Snap in detail, you can read its complete research paper.

Related reads:
Amazon announces improved VPC networking for AWS Lambda functions
Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels
ReactOS 0.4.12 releases with kernel improvements, Intel e1000 NIC driver support, and more


Qt releases the technical preview of WebAssembly based QML open source design viewer

Vincy Davis
25 Oct 2019
2 min read
Two days ago, the Qt team released a technical preview of an open source QML design viewer based on Qt for WebAssembly. The design viewer enables QML applications to run in web browsers like Chrome, Safari, Firefox, and Edge. Qt for WebAssembly is a platform plugin that allows users to build Qt applications with web page integrations.

To run a custom QML application, a user has to define the main QML file and the import paths with a .qmlproject file. The project folder then has to be compressed as a ZIP file and uploaded to the design viewer. Users can also generate a resource file out of their project and upload the package.

Image source: Qt blog

Read More: Qt introduces Qt for MCUs, a graphics toolkit for creating a fluid user interface on microcontrollers

The Qt team has tested the design viewer with the latest versions of Chrome, Safari, Firefox, and Edge and found that QML applications run well on all of these browsers. "The startup and compilation time depends on your browser and configuration, but the actual performance of the application, once it is started, is indistinguishable from the same application running on the desktop," states the official blog. The design viewer also runs on Android and iOS, ships with most QML modules, and is based on a snapshot of Qt 5.14.

Many users have liked the web-based design viewer. A user on Hacker News comments, "One of the most beautiful things I have seen in 2019. Brilliant!" Another comment reads, "This looks pretty cool! I am actually shopping for a GUI framework for a new project and WebAssembly support is a potential critical feature."

Related reads:
Qt and LG Electronics partner to make webOS as the platform of choice for embedded smart devices
GitLab retracts its privacy invasion policy after backlash from community
Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim


GitLab retracts its privacy invasion policy after backlash from community

Vincy Davis
25 Oct 2019
3 min read
Yesterday, GitLab retracted its earlier decision to implement user-level product usage tracking on its websites after receiving negative feedback from users.

https://twitter.com/gitlab/status/1187408628531322886

Two days ago, GitLab had informed users that starting from its next, yet-to-be-released version (12.4), JavaScript snippets would be added to GitLab.com (GitLab's SaaS offering) and to GitLab's proprietary Self-Managed packages (Starter, Premium, and Ultimate). These JavaScript snippets would interact with both GitLab and third-party SaaS telemetry services.

Read More: GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more

GitLab.com users were notified that until they accepted the new service terms, their access to the web interface and API would be blocked. This meant that users with integrations against the API would experience a brief pause of service until the new terms were accepted by signing in to the web interface. Self-managed users, on the other hand, were told they could continue to use the free GitLab Core software without any changes.

The DevOps platform said that SaaS telemetry products are important tools for understanding how users behave inside web-based applications, and that the additional user information would help increase website speed and enrich the user experience. "GitLab has a lot of features, and a lot of users, and it is time that we use telemetry to get the data we need for our product managers to improve the experience," stated the official blog. The telemetry tools would use JavaScript snippets executed in the user's browser that send user information back to the telemetry service.

Read More: GitLab faces backlash from users over performance degradation issues tied to redis latency

The company had assured users that it would disclose all details of the collected user information in its privacy policy, that the third-party telemetry services would have data protection standards equivalent to its own, and that it would aim for SOC 2 compliance. Users who did not wish to be tracked could turn on the Do Not Track (DNT) mechanism in their browser, which would prevent the JavaScript snippet from loading on GitLab.com or GitLab Self-Managed. "The only downside to this is that users may also not get the benefit of in-app messaging or guides that some third-party telemetry tools have that would require the JavaScript snippet," added the official blog.

Following this announcement, GitLab received loads of negative feedback from users.

https://twitter.com/PragmaticAndy/status/1187420028653723649
https://twitter.com/Cr0ydon/status/1187380142995320834
https://twitter.com/BlindMyStare/status/1187400169303789568
https://twitter.com/TheChanceSays/status/1187095735558238208

Although GitLab has rolled back the telemetry changes for now and is reconsidering its decision, many users are warning the company to drop the idea completely.
https://twitter.com/atom0s/status/1187438090991751168
https://twitter.com/ry60003333/status/1187601207046524928
https://twitter.com/tresronours/status/1187543188703186949

Related reads:
DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020
GitLab goes multicloud using Crossplane with kubectl
Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim
PostGIS 3.0.0 releases with raster support as a separate extension
Electron 7.0 releases in beta with Windows on Arm 64 bit, faster IPC methods, nativetheme API and more
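The Do Not Track gate GitLab pointed users to typically works along these lines: the telemetry snippet is only injected when the browser does not signal DNT. This is a generic sketch, not GitLab's actual snippet; the script URL is a placeholder.

```ts
// Generic sketch of a Do Not Track gate (not GitLab's actual snippet): the
// third-party telemetry script is only injected when the browser does not
// signal DNT. The script URL is a placeholder.
function dntEnabled(): boolean {
  // Browsers expose the preference as the string "1" when DNT is switched on.
  return navigator.doNotTrack === "1" || (window as any).doNotTrack === "1";
}

function loadTelemetrySnippet(src: string): void {
  if (dntEnabled()) {
    return; // respect Do Not Track: never load the snippet
  }
  const script = document.createElement("script");
  script.async = true;
  script.src = src;
  document.head.appendChild(script);
}

loadTelemetrySnippet("https://telemetry.example.com/snippet.js"); // placeholder URL
```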

Are we entering the quantum computing era? Google’s Sycamore achieves ‘quantum supremacy’ while IBM refutes the claim

Vincy Davis
25 Oct 2019
6 min read
Two days ago, Google astonished many people around the world with claims of a major milestone in quantum computing, one that has largely been considered unattainable until now. In a paper titled "Quantum supremacy using a programmable superconducting processor", Google explains how its 53-qubit quantum computer, named Sycamore, took only 200 seconds to perform a computation that would otherwise take the world's fastest supercomputer 10,000 years. This, Google claims, is its attainment of 'quantum supremacy'. If confirmed, this would be the first major milestone in harnessing the principles of quantum mechanics to solve computational problems. Google's AI Quantum team and John Martinis, a physicist at the University of California, are the prime contributors to this achievement; the NASA Ames Research Center, Oak Ridge National Laboratory, and Forschungszentrum Jülich also helped Google implement the experiment.

In quantum computing, quantum supremacy is the potential ability of a device to solve problems that classical computers practically cannot. According to Sundar Pichai, Google's CEO, "For those of us working in science and technology, it's the 'hello world' moment we've been waiting for—the most meaningful milestone to date in the quest to make quantum computing a reality." The announcement comes exactly one month after the same paper was leaked online.

However, following Google's announcement, IBM is arguing that "an ideal simulation of the same task can be performed on a classical system in 2.5 days with far greater fidelity." According to IBM, as proposed by John Preskill in 2012, the original meaning of the term "quantum supremacy" is the point where quantum computers can do things that classical computers can't. Since Google has not yet reached this threshold, IBM argues that its claims are wrong. IBM says that in the published paper, Google assumed that the RAM storage requirements on a classical computer would be massive; if a different approach is employed, using both RAM and hard drive space to store and manipulate the state vector, the 10,000 years quoted by Google drops considerably. Thus, IBM refutes Google's claims and states that, by its strictest definition, the quantum supremacy standard has not been met by anyone yet.

IBM believes that "fundamentally, quantum computers will never reign 'supreme' over classical computers, but will rather work in concert with them, since each have their unique strengths." IBM further added that the term 'supremacy' is currently being misunderstood and urged everybody in the community to treat Google's claims "with a large dose of skepticism."

Read More: Has IBM edged past Google in the battle for Quantum Supremacy?

Though Google has not directly responded to IBM's accusations, a Google spokesperson told Forbes, "We welcome ideas from the research community on new applications that work only on NISQ-era processors like Sycamore and its successors. We've published our circuits so the whole community can explore this new space. We're excited by what's possible now that we have this unique new resource." Although IBM is skeptical of Google's claims, the news of Google's accomplishment is making waves all around the world.
https://twitter.com/rhhackett/status/1186949190695313409
https://twitter.com/nasirdaniya/status/1187152799055929346

Google's experiment with the Sycamore processor

To achieve this milestone, Google researchers developed Sycamore, a high-fidelity quantum processor consisting of a two-dimensional array of 54 transmon qubits. Each qubit in the processor is tunably coupled to its four nearest neighbors in a rectangular lattice, and the layout is forward-compatible with error correction. As a result, the chip has enough connectivity to let the qubit states quickly interact throughout the entire processor, a feature that distinguishes it from a classical computer.

Image source: Google blog

On the Sycamore quantum processor, "Each run of a random quantum circuit on a quantum computer produces a bitstring, for example 0000101. Owing to quantum interference, some bit strings are much more likely to occur than others when we repeat the experiment many times. However, finding the most likely bit strings for a random quantum circuit on a classical computer becomes exponentially more difficult as the number of qubits (width) and number of gate cycles (depth) grow," state John Martinis, Chief Scientist Quantum Hardware, and Sergio Boixo, Chief Scientist Quantum Computing Theory, Google AI Quantum.

Image source: Google blog

For the experiment, the Google researchers first ran random simplified circuits from 12 up to 53 qubits with the circuit depth kept constant, and checked the quantum computer's performance against classical simulations and a theoretical model. After verifying that the quantum system was working, they ran random hard circuits with 53 qubits, this time allowing the circuit depth to grow until the point where classical simulation became infeasible. In the end, they found that this quantum computation could not be emulated on a classical computer, which, says Google, opens "a new realm of computing to be explored."

The Google team is now working on quantum supremacy applications such as quantum physics simulation, quantum chemistry, and generative machine learning. Having procured "certifiable quantum randomness", Google is also testing this algorithm to develop a prototype that can provide certifiable random numbers.

Leaving IBM's accusations aside, many people are excited about Google's achievement.

https://twitter.com/christina_dills/status/1187074109550800897
https://twitter.com/Inzilya777/status/1187102111429021696

A few people believe that Google is making a hullabaloo out of a not-so-successful experiment. A user on Hacker News comments, "Summary: - Google overhyped their results against a weak baseline. This seems to be commonplace in academic publishing, especially new-ish fields where benchmarks aren't well-established. There was a similar backlash against OpenAI's robot hand, where they used simulation for the physical robotic movements and used a well-known algorithm for the actual Rubik's Cube solving. I still think it's an impressive step forward for the field."

Check out the video of Google's demonstration of quantum supremacy below.
https://youtu.be/-ZNEzzDcllU

Related reads:
Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
Made by Google 2019: Google's hardware event unveils Pixel 4 and announces the launch date of Google Stadia
After backlash for rejecting a uBlock Origin update from the Chrome Web Store, Google accepts ad-blocking extension
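The statistic Google used to verify Sycamore's output, linear cross-entropy benchmarking (XEB), can be made concrete with a toy simulation: samples drawn from the ideal output distribution score near 1, while uniform noise scores near 0. The sketch below illustrates only the statistic, assuming a Porter-Thomas-like "ideal" distribution; it does not simulate a quantum circuit.

```ts
// Toy illustration of linear cross-entropy benchmarking (XEB), not a
// quantum-circuit simulation. The "ideal" distribution below is a random
// Porter-Thomas-like distribution standing in for a random circuit's output;
// the qubit count and shot count are arbitrary.
const n = 10;            // simulated number of qubits
const dim = 1 << n;      // 2^n possible bitstrings

// Fake ideal output probabilities: exponential weights, normalized.
const weights = Array.from({ length: dim }, () => -Math.log(1 - Math.random()));
const total = weights.reduce((a, b) => a + b, 0);
const ideal = weights.map((w) => w / total);

// Draw one bitstring index from a probability distribution.
function sample(probs: number[]): number {
  let r = Math.random();
  for (let i = 0; i < probs.length; i++) {
    r -= probs[i];
    if (r <= 0) return i;
  }
  return probs.length - 1;
}

// Linear XEB fidelity: F = 2^n * mean(P_ideal(sampled bitstring)) - 1.
// A sampler matching the ideal distribution scores ~1; uniform noise scores ~0.
function xebFidelity(samples: number[]): number {
  const mean = samples.reduce((acc, s) => acc + ideal[s], 0) / samples.length;
  return dim * mean - 1;
}

const shots = 20_000;
const idealSamples = Array.from({ length: shots }, () => sample(ideal));
const noiseSamples = Array.from({ length: shots }, () => Math.floor(Math.random() * dim));

console.log("XEB, ideal sampler  ≈", xebFidelity(idealSamples).toFixed(2)); // ~1
console.log("XEB, uniform noise ≈", xebFidelity(noiseSamples).toFixed(2));  // ~0
```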


ASP.NET Core Security, Part 2 from C# Corner

Matthew Emerick
24 Oct 2019
1 min read
Eric Vogel follows up on his previous post on getting started with ASP.NET Core security. Now that .NET Core 3.0 is out, he shows how to upgrade the code from Part 1 to ASP.NET Core 3.0, put pages behind login, create user roles, and use existing roles to restrict access to pages.


PostGIS 3.0.0 releases with raster support as a separate extension

Fatema Patrawala
24 Oct 2019
3 min read
Last week, the PostGIS development team released PostGIS 3.0.0. This release works with PostgreSQL 9.5–12 and GEOS >= 3.6; developers using the postgis_sfcgal extension need to compile against SFCGAL 1.3.1 or higher. The major change in PostGIS 3.0.0 is that the raster functionality has been broken out into a separate extension. Take a look at the other breaking changes in this release below.

Breaking changes in PostGIS 3.0.0

Raster support now lives in a separate extension, postgis_raster.
Extension library files no longer include the minor version. Developers who need the old behavior can use the new configure switch --with-library-minor-version. This change is intended to smooth future pg_upgrade runs, since lib file names will not change between the 3.0, 3.1, and later 3.x releases.
ND box operators (overlaps, contains, within, equals) no longer look at dimensions that aren't present in both operands. Developers will need to REINDEX their ND indexes after upgrading.
Includes a 32-bit hash fix (requires reindexing hash(geometry) indexes).
Sorting now uses the Hilbert curve and PostgreSQL abbreviated compare.

New features in PostGIS 3.0.0

PostGIS used to expose a SQL function named geosnoop(geometry) to test the cost of deserializing and re-serializing from the PostgreSQL backend. This release brings that function back as postgis_geos_noop(geometry), with an SFCGAL counterpart.
Added ST_AsMVT support for Feature ID. ST_AsMVT transforms a geometry into the coordinate space of a Mapbox Vector Tile for a set of rows corresponding to a layer. It makes a best effort to keep and even correct validity, and might collapse geometry into a lower dimension in the process.
Added SP-GiST and GiST support for the ND box operators overlaps, contains, within, and equals. SP-GiST in PostGIS has been designed to support K-dimensional trees and other spatial partitioning indexes.
Added ST_3DLineInterpolatePoint. ST_Line_Interpolate_Point returns a point interpolated along a line.
Introduced Wagyu to validate MVT polygons. Wagyu can be chosen at configure time to clip and validate MVT polygons. This library is faster and produces more correct results than the GEOS default, but it might drop small polygons. It requires a C++11 compiler and uses CXXFLAGS (not CFLAGS).
With PostGIS 3.0, it is now possible to generate GeoJSON features directly, without any intermediate code, using the new ST_AsGeoJSON(record) function. GeoJSON is a common transport format between servers and web clients, and even between components of processing chains.
Added the ST_ConstrainedDelaunayTriangles SFCGAL function, which returns a constrained Delaunay triangulation around the vertices of the input geometry. This method needs the SFCGAL backend, supports 3D, and will not drop the Z values.

Additionally, the team has made other enhancements in this release. To know more, you can check out the official blog post by the PostGIS team.

Related reads:
PostgreSQL 12 Beta 1 released
Writing PostGIS functions in Python language [Tutorial]
Top 7 libraries for geospatial analysis
Percona announces Percona Distribution for PostgreSQL to support open source databases
After PostgreSQL, DigitalOcean now adds MySQL and Redis to its managed databases' offering
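The new ST_AsGeoJSON(record) behaviour described above can be exercised directly from Node using the node-postgres client: passing a whole row turns it into a GeoJSON Feature, with non-geometry columns becoming its properties. This is a minimal sketch assuming a hypothetical parks table and a DATABASE_URL connection string.

```ts
// Sketch of calling the new ST_AsGeoJSON(record) from Node with the
// node-postgres ("pg") client. The `parks` table, its columns, and the
// DATABASE_URL connection string are assumptions for illustration.
import { Client } from "pg";

async function fetchFeatures(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // Passing the whole row (t.*) makes PostGIS 3.0 build a GeoJSON Feature:
  // the geometry column becomes "geometry", other columns become "properties".
  const { rows } = await client.query(
    `SELECT ST_AsGeoJSON(t.*)::json AS feature
       FROM parks AS t
      LIMIT 10`
  );

  for (const row of rows) {
    console.log(row.feature); // one GeoJSON Feature object per row
  }

  await client.end();
}

fetchFeatures().catch(console.error);
```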

Electron 7.0 releases in beta with Windows on Arm 64 bit, faster IPC methods, nativetheme API and more

Fatema Patrawala
24 Oct 2019
3 min read
Last week, the team at Electron announced the release of Electron 7.0 in beta. It includes upgrades to Chromium 78, V8 7.8, and Node.js 12.8.1, and adds a Windows on Arm (64-bit) release, faster IPC methods, a new nativeTheme API, and much more. The release is published to npm under the beta tag and can be installed via npm install electron@beta, or by pinning the exact beta version with npm i electron@<version>. It is packed with upgrades, fixes, and new features.

Notable changes in Electron 7.0

This release contains stack upgrades: Electron 7.0 is built on Chromium 78, V8 7.8, and Node.js 12.8.1, and adds a Windows on Arm (64-bit) release.
The team has added ipcRenderer.invoke() and ipcMain.handle() for asynchronous request/response-style IPC. These are strongly recommended over the remote module.
A new nativeTheme API lets applications read and respond to changes in the OS's theme and color scheme.
The team has switched to a new TypeScript definitions generator, which generates more precise definitions files (d.ts). Electron previously used Doc Linter and Doc Parser, but those had a few issues, so the team moved to the new generator to produce better definition files without losing any information from the docs.

Other breaking changes

The team has removed deprecated APIs in this release:

Callback-based versions of functions that now use Promises.
Tray.setHighlightMode() (macOS).
app.enableMixedSandbox()
app.getApplicationMenu(), app.setApplicationMenu(), powerMonitor.querySystemIdleState(), powerMonitor.querySystemIdleTime(), webFrame.setIsolatedWorldContentSecurityPolicy(), webFrame.setIsolatedWorldHumanReadableName(), webFrame.setIsolatedWorldSecurityOrigin()

Session.clearAuthCache() no longer allows filtering the cleared cache entries. Native interfaces on macOS (menus, dialogs, etc.) now automatically match the dark mode setting on the user's machine. The team has updated the electron module to use @electron/get, and Node 8 is now the minimum supported Node version. The electron.asar file no longer exists; any packaging scripts that depend on its existence should be updated.

Additionally, the team announced that Electron 4.x.y has reached end-of-support as per the project's support policy. Developers and applications are encouraged to upgrade to a newer version of Electron.

To know more about this release, check out the Electron 7.0 GitHub page and the official blog post.

Related reads:
Electron 6.0 releases with improved Promise support, native Touch ID authentication support, and more
Electron 5.0 ships with new versions of Chromium, V8, and Node.js
The Electron team publicly shares the release timeline for Electron 5.0
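The new promise-based IPC pair and the nativeTheme API described above can be combined as in the sketch below. The channel name "read-config" and the file being read are assumptions for illustration; ipcMain.handle, ipcRenderer.invoke, and nativeTheme are the APIs the release notes describe.

```ts
// main.ts — the channel name "read-config" and the config file path are
// assumptions; ipcMain.handle/ipcRenderer.invoke and nativeTheme are the
// APIs introduced in Electron 7.
import { app, BrowserWindow, ipcMain, nativeTheme } from "electron";
import { promises as fs } from "fs";

app.whenReady().then(() => {
  const win = new BrowserWindow({ webPreferences: { nodeIntegration: true } });

  // ipcMain.handle() answers ipcRenderer.invoke() calls with a Promise result.
  ipcMain.handle("read-config", (_event, path: string) => fs.readFile(path, "utf8"));

  // nativeTheme reports, and reacts to changes in, the OS dark/light setting.
  console.log("dark mode?", nativeTheme.shouldUseDarkColors);
  nativeTheme.on("updated", () => {
    win.webContents.send("theme-changed", nativeTheme.shouldUseDarkColors);
  });

  win.loadFile("index.html");
});

// renderer.ts — request/response-style IPC without the remote module:
// import { ipcRenderer } from "electron";
// async function loadConfig(): Promise<string> {
//   return ipcRenderer.invoke("read-config", "./config.json");
// }
```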


Ghost 3.0, an open-source headless Node.js CMS, released with JAMStack integration, GitHub Actions, and more!

Savia Lobo
23 Oct 2019
4 min read
Yesterday, the team behind Ghost, an open-source headless Node.js CMS, announced its major new version, Ghost 3.0. The new version represents "a total of more than 15,000 commits across almost 300 releases". Ghost is now used by the likes of Apple, DuckDuckGo, OpenAI, The Stanford Review, Mozilla, Cloudflare, DigitalOcean, and many others. "To date, Ghost has made $5,000,000 in customer revenue whilst maintaining complete independence and giving away 0% of the business," the official website highlights.

https://twitter.com/Ghost/status/1186613938697338881

What's new in Ghost 3.0?

Ghost on the JAMStack

The team has revamped the traditional architecture using the JAMStack approach, which makes Ghost a completely decoupled headless CMS. Users can generate a static site and later add dynamic features to make it more powerful. The new architecture unlocks content management that is fundamentally built around APIs, webhooks, and frameworks for generating robust modern websites.

Continuous theme deployments with GitHub Actions

Manually making a zip, navigating to Ghost Admin, and uploading an update in the browser can be tedious. To deploy Ghost themes to production in a better way, the team combined Ghost with GitHub Actions, making it easy to continuously sync custom Ghost themes to live production sites with every new commit.

New WordPress migration plugin

Earlier versions of Ghost included a very basic WordPress migrator plugin that made it difficult to move data between the platforms or have a smooth experience. The new Ghost 3.0 compatible WordPress migration plugin provides a single-button download of the full WordPress content and image archive in a format that can be dragged and dropped into Ghost's importer.

Those who are new and want to explore Ghost 3.0 can create a new site in a few clicks with an unrestricted 14-day free trial, as all new sites on Ghost (Pro) are running Ghost 3.0. The team invites users to try out Ghost 3.0 and share feedback on the Ghost forum, or help out on GitHub to build the next features with the Ghost team. Ben Thompson's Stratechery, a subscription-based newsletter featuring in-depth commentary on tech and media news, recently posted an interview with Ghost CEO John O'Nolan that covers what Ghost is, where it came from, and much more.

Ghost 3.0 has received a positive response, not least for its move towards a static-site JAMStack approach. A user on Hacker News commented, "In my experience, Ghost has been the no-nonsense blog CMS that has been stable and just worked with very little maintenance. I like that they are now moving towards static site JAMStack approach, driven by APIs rather than the current SSR model. This lets anybody to customise their themes with the language / framework of choice and generating static builds that can be cached for improved loading times."

Another user, trying Ghost for the first time, commented, "I've never tried Ghost, although their website always appealed to me (one of the best designed website I know). I've been using WordPress for the past 13 years, for personal and also professional projects, which means the familiarity I've built with building custom themes never drew me towards trying another CMS. But going through this blog post announcement, I saw that Ghost can be used as a headless CMS with frontend frameworks.
And since I started using GatsbyJS extensively in the past year, it seems like something that would work _really_ well together. Gonna try it out! And congrats on remaining true to your initial philosophy."

To know more about the other features in detail, read the official blog post.

Related reads:
Google partners with WordPress and invests $1.2 million on "an opinionated CMS" called Newspack
Verizon sells Tumblr to WordPress parent, Automattic, for allegedly less than $3 million, a fraction of its acquisition cost
FaunaDB brings its serverless database to Netlify to help developers create apps
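Used as a headless CMS, Ghost 3.0 is consumed through its Content API rather than server-rendered themes. The sketch below fetches a few posts over the v3 content endpoint; the site URL and API key are placeholders, and the minimal GhostPost type is an assumption rather than the full response shape.

```ts
// Sketch of consuming Ghost 3.0 headlessly via its Content API. The site
// URL and API key are placeholders, and GhostPost covers only a few of the
// fields the real response contains.
interface GhostPost {
  id: string;
  title: string;
  html: string;
}

async function fetchPosts(siteUrl: string, contentApiKey: string): Promise<GhostPost[]> {
  const url = `${siteUrl}/ghost/api/v3/content/posts/?key=${contentApiKey}&limit=5`;
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`Ghost Content API responded with ${res.status}`);
  }
  const body = (await res.json()) as { posts: GhostPost[] };
  return body.posts;
}

// Usage with placeholder values:
fetchPosts("https://demo.ghost.io", "<content-api-key>")
  .then((posts) => posts.forEach((p) => console.log(p.title)))
  .catch(console.error);
```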