
Tech News

3709 Articles

Ionic React released; Ionic Framework pivots from Angular to a native React version

Sugandha Lahoti
15 Oct 2019
3 min read
Yesterday, the team behind the Ionic Framework announced the general availability of Ionic React, a native React version of Ionic Framework that pivots away from its traditionally Angular-focused approach. "Ionic React makes it easy to build apps for iOS, Android, Desktop, and the web as a Progressive Web App", states the team in a blog post. It uses TypeScript and combines the core Ionic experience with tooling and APIs tailored to React developers. It is a fully supported, enterprise-ready offering with services, advisory, tooling, and supported native functionality.

@ionic/react projects work like standard React projects, leveraging react-dom and the setup normally found in a Create React App (CRA) app. For routing and navigation, React Router is used under the hood. One difference is the use of TypeScript, which provides a more productive experience. To use plain JavaScript, you can rename files to use a .js extension and then remove the type annotations from each file.

Explaining the reason behind choosing React, the team says, "With Ionic, we envisioned being able to build rich JavaScript-powered controls and distribute them as simple HTML tags any web developer could assemble into an awesome app. We realized that building a version of the Ionic Framework for React made perfect sense. Combined with the fact that we had several React fans join the Ionic team over the years, there was a strong desire internally to see Ionic Framework support React as well."

How is Ionic React different from React Native?

The team realized that there was a gap in the React ecosystem that Ionic could fill as an easier mobile and Progressive Web App development solution. Developers were also interested in incorporating it into their existing React Native apps by building more screens of their app out of a native WebView frame.

There were two major reasons why the Ionic team built @ionic/react. First, it is DOM-native and uses the standard react-dom library. In contrast, React Native builds an abstraction on top of iOS and Android native UI controls. The team states, "When we looked at installs for react-dom compared to react-native, it was clear to us that vastly more React development was happening in the browser and on top of the DOM than on top of the native iOS or Android UI systems." Secondly, Ionic is one of the most popular frameworks for building PWAs, most notably through the Stencil project. React Native, on the other hand, does not officially support Progressive Web Apps; PWAs are, at best, an afterthought in the React Native ecosystem.

@ionic/react has been well appreciated by developers on Twitter.

https://twitter.com/dipakcreation/status/1183974237125693441
https://twitter.com/MichaelW_PWC/status/1183836080170323968
https://twitter.com/planetoftheweb/status/1183809368934043653

You can go through Ionic's blog for additional information and for getting started.

Ionic React RC is now out!
Ionic 4.1 named Hydrogen is out!
React Native Vs Ionic: Which one is the better mobile app development framework?
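The rename-to-.js step described above amounts to deleting the type annotations while leaving the logic untouched. A minimal sketch (the helper name and logic here are hypothetical, not taken from the Ionic templates):

```javascript
// A hypothetical helper as an @ionic/react TypeScript template might type it:
//   const cartLabel = (count: number): string =>
//     `Your Cart (${count} item${count === 1 ? "" : "s"})`;

// The same helper after renaming the file to .js and removing the annotations:
const cartLabel = (count) => `Your Cart (${count} item${count === 1 ? "" : "s"})`;

console.log(cartLabel(1)); // prints "Your Cart (1 item)"
console.log(cartLabel(3)); // prints "Your Cart (3 items)"
```

Everything else (components, routing, build setup) stays the same, since the type annotations are erased at build time anyway.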


Windows Server 2019 comes with security, storage and other changes

Prasad Ramesh
21 Dec 2018
5 min read
Today, Microsoft unveiled new features of Windows Server 2019. The new features are based on four themes: hybrid, security, application platform, and Hyper-Converged Infrastructure (HCI).

General changes

Windows Server 2019, being a Long-Term Servicing Channel (LTSC) release, includes Desktop Experience. During setup, there are two options to choose from: Server Core installations or Server with Desktop Experience installations. A new feature called System Insights brings local predictive analytics capabilities to Windows Server 2019. This feature is powered by machine learning and aims to help users reduce the operational expenses associated with managing issues in Windows Server deployments.

Hybrid cloud in Windows Server 2019

A feature called Server Core App Compatibility feature on demand (FOD) greatly improves app compatibility in the Windows Server Core installation option. It does so by including a subset of binaries and components from Windows Server with the Desktop Experience, without adding the Windows Server Desktop Experience graphical environment itself. The purpose is to increase the functionality of Windows Server while keeping a small footprint. This feature is optional and is available as a separate ISO that can be added to a Windows Server Core installation.

New measures for security

The security changes include a new protection platform along with changes to virtual machines, networking, and the web.

Windows Defender Advanced Threat Protection (ATP)

Windows Defender now includes Advanced Threat Protection (ATP). ATP has deep platform sensors and response actions to expose memory and kernel-level attacks. It can respond by suppressing malicious files and terminating malicious processes. There is also a new set of host-intrusion prevention capabilities called Windows Defender ATP Exploit Guard. The components of ATP Exploit Guard are designed to lock down and protect a machine against a wide variety of attacks and to block behaviors common in malware attacks.

Software Defined Networking (SDN)

SDN delivers many security features which increase customer confidence in running workloads, be it on-premises or as a cloud service provider. These enhancements are integrated into the comprehensive SDN platform first introduced in Windows Server 2016.

Improvements to shielded virtual machines

Users can now run shielded virtual machines on machines which are intermittently connected to the Host Guardian Service, leveraging the fallback HGS and offline mode features. There are also troubleshooting improvements for shielded virtual machines through support for VMConnect Enhanced Session Mode and PowerShell Direct. Windows Server 2019 now supports Ubuntu, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server inside shielded virtual machines.

Changes for a faster and safer web

Connections are coalesced to deliver uninterrupted and encrypted browsing. For automatic connection failure mitigation and ease of deployment, HTTP/2's server-side cipher suite negotiation has been upgraded.

Storage

Three storage changes are made in Windows Server 2019.

Storage Migration Service: a new technology that simplifies migrating servers to a newer Windows Server version. It has a graphical tool that inventories data on servers and transfers the data and configuration to newer servers. Users can optionally move the identities of the old servers to the new ones so that apps and users don't have to make changes.

Storage Spaces Direct: new features include:
- Deduplication and compression capabilities for ReFS volumes
- Native support for persistent memory
- Nested resiliency for two-node hyper-converged infrastructure at the edge
- Two-server clusters which use a USB flash drive as a witness
- Support for Windows Admin Center
- Display of performance history
- Scale up to 4 petabytes per cluster
- Mirror-accelerated parity that is two times faster
- Drive latency outlier detection
- Increased fault tolerance by manually delimiting the allocation of volumes

Storage Replica: Storage Replica is now also available in Windows Server 2019 Standard edition. A new feature called test failover allows mounting of destination storage to validate replication or backup data. Performance improvements have been made and Windows Admin Center support has been added.

Failover clustering

New features in failover clustering include:
- Cluster sets and Azure-aware clusters
- Cross-domain cluster migration
- USB witness
- Cluster infrastructure improvements
- Cluster Aware Updating support for Storage Spaces Direct
- File share witness enhancements
- Cluster hardening
- Failover Cluster no longer using NTLM authentication

Application platform changes in Windows Server 2019

Users can now run Windows and Linux-based containers on the same container host using the same Docker daemon. Changes are continually being made to improve support for Kubernetes. A number of improvements have been made to containers, such as changes to identity, compatibility, reduced size, and higher performance. Virtual network encryption now allows encryption of virtual network traffic between virtual machines that communicate within subnets marked as Encryption Enabled. There are also improvements to network performance for virtual workloads, time service, SDN gateways, a new deployment UI, and persistent memory support for Hyper-V VMs.

For more details, visit the Microsoft website.

OpenSSH, now a part of the Windows Server 2019
Microsoft announces Windows DNS Server Heap Overflow Vulnerability, users dissatisfied with patch details
Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019


TimescaleDB goes distributed; implements ‘Chunking’ over ‘Sharding’ for scaling-out

Sugandha Lahoti
22 Aug 2019
5 min read
TimescaleDB announced yesterday that it is going distributed; this version is currently in private beta, with the public version slated for later this year. TimescaleDB is built on PostgreSQL, and a major problem with PostgreSQL is scaling out. To address this, TimescaleDB does not implement traditional sharding, instead using 'chunking'.

What is TimescaleDB's chunking?

In TimescaleDB, chunking is the mechanism that scales PostgreSQL for time-series workloads. Chunks are created by automatically partitioning data by multiple dimensions (one of which is time). In a blog post, TimescaleDB specifies, "this is done in a fine-grain way such that one dataset may be comprised of 1000s of chunks, even on a single node."

Unlike sharding, which only offers the option to scale out, chunking offers a wide set of capabilities. These include scaling up (on the same node) as well as scaling out (across multiple nodes), along with elasticity, partitioning flexibility, data retention policies, data tiering, and data reordering. TimescaleDB also automatically partitions a table across multiple chunks on the same instance, whether on the same or different disks. Its multi-dimensional chunking auto-creates chunks, keeps recent data chunks in memory, and provides time-oriented data lifecycle management (e.g., for data retention, reordering, or tiering policies).

However, one issue is the management of the number of chunks (i.e., "sub-problems"). For this, TimescaleDB has come up with the hypertable abstraction to make partitioned tables easy to use and manage.

Hypertable abstraction makes chunking manageable

Hypertables are typically used to handle a large amount of data by breaking it up into chunks, allowing operations to execute efficiently. When the number of chunks is large, these data chunks can be distributed over several machines by using distributed hypertables.
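The multi-dimensional chunking idea can be sketched outside the database: each row is routed to a chunk keyed by its time interval plus a hash of a space dimension. A toy illustration only — the interval, partition count, and key scheme are invented here and are not TimescaleDB's actual internals:

```javascript
// Toy two-dimensional chunking: partition rows by day (time) and device hash (space).
const DAY_MS = 24 * 60 * 60 * 1000;
const SPACE_PARTITIONS = 4;

// Simple string hash for the space dimension.
function hashCode(str) {
  let h = 0;
  for (const ch of str) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h;
}

// Route a row to a chunk key; a real hypertable creates chunks on demand.
function chunkKey(timestampMs, deviceId) {
  const timeSlice = Math.floor(timestampMs / DAY_MS);       // time dimension
  const spaceSlice = hashCode(deviceId) % SPACE_PARTITIONS; // space dimension
  return `chunk_t${timeSlice}_s${spaceSlice}`;
}

// Rows from the same device on the same day land in the same chunk;
// a new day (or a different device) can map to a different chunk.
```

Because a "hot" time interval is split across many space partitions, reads on recent data spread over several chunks rather than hammering a single one, which is the mitigation discussed below.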
Distributed hypertables are similar to normal hypertables, but they add an additional layer of partitioning by distributing chunks across data nodes. They are designed for multi-dimensional chunking with a large number of chunks (from 100s to 10,000s), offering more flexibility in how chunks are distributed across a cluster. Users interact with a distributed hypertable just as they would with a regular hypertable (which itself looks just like a regular Postgres table).

Chunking does not put an additional burden on applications and developers, because they do not interact directly with chunks (and thus do not need to be aware of the partition mapping themselves, unlike in some sharded systems). The system also does not expose different capabilities for chunks than for the entire hypertable.

TimescaleDB goes distributed

TimescaleDB is already available for testing in private beta for selected users and customers. The initial licensed version is expected to be widely available later this year. This version will support features such as high write rates, query parallelism, predicate push-down for lower latency, elastically growing a cluster to scale storage and compute, and fault tolerance via physical replicas.

Developers were quite intrigued by the new chunking process. A number of questions were asked on Hacker News and duly answered by TimescaleDB's creators. One question concerned the hot partition problem. A user asks, "The biggest limit is that their "chunking" of data by time-slices may lead directly to the hot partition problem -- in their case, a "hot chunk." Most time series is 'dull time' -- uninteresting time samples of normal stuff. Then, out of nowhere, some 'interesting' stuff happens. It'll all be in that one chunk, which will get hammered during reads."

To which Erik Nordström, Timescale engineer, replied, "TimescaleDB supports multi-dimensional partitioning, so a specific "hot" time interval is actually typically split across many chunks, and thus server instances. We are also working on native chunk replication, which allows serving copies of the same chunk out of different server instances. Apart from these things to mitigate the hot partition problem, it's usually a good thing to be able to serve the same data to many requests using a warm cache compared to having many random reads that thrashes the cache."

Another question asked, "In this vision, would this cluster of servers be reserved exclusively for time series data or do you imagine it containing other ordinary tables as well?"

To which Mike Freedman, CTO of Timescale, answered, "We commonly see hypertables (time-series tables) deployed alongside relational tables, often because there exists a relation between them: the relational metadata provides information about the user, sensor, server, security instrument that is referenced by id/name in the hypertable. So joins between these time-series and relational tables are often common, and together these serve the applications one often builds on top of your data. Now, TimescaleDB can be installed on a PG server that is also handling tables that have nothing to do with its workload, in which case one does get performance interference between the two workloads. We generally wouldn't recommend this for more production deployments, but the decision here is always a tradeoff between resource isolation and cost."

Some thought sharding remains the better choice even if chunking improves performance.

https://twitter.com/methu/status/1164381453800525824

Read the official announcement for more information. You can also view the documentation.

TimescaleDB 1.0 officially released
Introducing TimescaleDB 1.0 RC, the first OS time-series database with full SQL support
Zabbix 4.2 release packed with modern monitoring system for data collection, processing and visualization


Introducing QuickJS, a small and easily embeddable JavaScript engine

Bhagyashree R
12 Jul 2019
3 min read
On Tuesday, Fabrice Bellard, the creator of FFmpeg and QEMU, and Charlie Gordon, a C expert, announced the first public release of QuickJS. Released under the MIT license, it is a "small but complete JavaScript engine" that comes with support for the latest ES2019 language specification.

Features in the QuickJS JavaScript engine

- Small and easily embeddable: The engine consists of a few C files and has no external dependency.
- Fast interpreter: The interpreter shows impressive speed by running 56,000 tests from the ECMAScript Test Suite in just 100 seconds, on a single-core CPU. A runtime instance completes its life cycle in less than 300 microseconds.
- ES2019 support: Support for the ES2019 specification is almost complete, including modules, asynchronous generators, and full Annex B support (legacy web compatibility). It does not currently support realms and tail calls.
- Compilation to executables: It can compile JavaScript source to executables without the need for any external dependency.
- Command-line interpreter: The command-line interpreter comes with contextual colorization and completion implemented in JavaScript.
- Garbage collection: It uses reference counting with cycle removal to free objects automatically and deterministically. This reduces memory usage and ensures deterministic behavior of the engine.
- Mathematical extensions: The 'qjsbn' version provides mathematical extensions that are fully backward-compatible with standard JavaScript. It supports big integers (BigInt), big floating-point numbers (BigFloat), and operator overloading, and comes with 'bigint' and 'math' modes.

This news sparked a discussion on Hacker News, where developers were all praise for Bellard's and Gordon's outstanding work on this project. A developer commented, "Wow. The core is a single 1.5MB file that's very readable, it supports nearly all of the latest standard, and Bellard even added his own extensions on top of that. It has compile-time options for either a NaN-boxing or traditional tagged union object representation, so he didn't just go for a single minimal implementation (unlike e.g. OTCC) but even had the time and energy to explore a bit. I like the fact that it's not C99 but appears to be basic C89, meaning very high portability. Despite my general distaste for JS largely due to websites tending to abuse it more than anything, this project is still immensely impressive and very inspiring, and one wonders whether there is still "space at the bottom" for even smaller but functionality competitive implementations."

Another wrote, "I can't wait to mess around with this, it looks super cool. I love the minimalist approach. If it's truly spec compliant, I'll be using this to compile down a bunch of CLI scripts I've written that currently use node. I tend to stick with the ECMAScript core whenever I can and avoid using packages from NPM, especially ones with binary components. A lot of the time that slows me down a bit because I'm rewriting parts of libraries, but here everything should just work with a little bit of translation for the OS interaction layer which is very exciting."

To know more about QuickJS, check out Fabrice Bellard's official website.

Firefox 67 will come with faster and reliable JavaScript debugging tools
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
React Native 0.59 is now out with React Hooks, updated JavaScriptCore, and more!
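The ES2019 features listed above are ordinary standard JavaScript, so the same code runs under Node as under QuickJS's qjs interpreter. A few of the additions the release mentions, plus BigInt from the qjsbn build:

```javascript
// Array.prototype.flat (ES2019): flatten nested arrays.
const flat = [1, [2, [3]]].flat(Infinity); // [1, 2, 3]

// Object.fromEntries (ES2019): build an object from key/value pairs.
const obj = Object.fromEntries([["a", 1], ["b", 2]]); // { a: 1, b: 2 }

// Optional catch binding (ES2019): `catch` without a parameter.
let caught = false;
try { JSON.parse("{"); } catch { caught = true; }

// BigInt (big integers, part of the qjsbn extensions): exact large integers.
const big = 2n ** 64n; // 18446744073709551616n
```

BigFloat and operator overloading, by contrast, are QuickJS-specific extensions and will not run in Node.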


Pi-hole 4.3.2 removes adblock style lists support and implements many core and web interface changes

Vincy Davis
25 Sep 2019
3 min read
Last week, Pi-hole, the open-source Linux network-level advertisement and internet tracker blocking application, released its latest version, Pi-hole 4.3.2. It includes many changes to its core and web interfaces. Users can run pihole -up from a terminal session to update to this version.

Adam Warner, one of the core contributors to Pi-hole, revealed that the major change in this release is the removal of support for adblock-style lists such as EasyList/EasyPrivacy. He alerted users that this may lead to a reduction in the number of domains blocked by Pi-hole. Warner also specified the motive behind the removal of adblock support: "these lists were never designed to be parsed into a HOST formatted file, and while it may catch some domains, there are far too many false positives produced by using them in this way. If you have lists in this format, Pi-hole will now ignore them, and attempts to get around the detection will likely end up with a broken gravity list."

Pi-hole uses dnsmasq, cURL, lighttpd, PHP, and other tools to block Domain Name System (DNS) requests for known tracking and advertising domains. Intended for a private network, Pi-hole is implemented on embedded devices with network capabilities, such as the Raspberry Pi. A Pi-hole can also block traditional website adverts on smart TVs, mobile operating systems, and more. If Pi-hole receives a request for an advert or tracking domain, it does not resolve the requested domain and responds to the requesting device with a blank webpage.

Users are happy with the Pi-hole 4.3.2 release and are all praise for it on Hacker News. A user said, "I'm a huge fan of this project! I have 3 set-up right now. One as a container on my Nuc at home for myself, and 2 other on old Pi's (one is a 1st gen B model) for family. A simple job to run every 2 months keeps everything up to date. For myself, I use Wireguard to only forward DNS packets to the PiHole when I'm outside the house. If you install a PiHole your help desk calls from family will drop by 90% (personal experience)."

Another user comments, "I have Pi-hole running on my LAN and it's amazing. It also helped me identify that my Amcrest PoE security cameras aggressively phone home, even when no cloud functionality is configured on them. All the reasons to keep them on their own VLAN and off the Internet."

Another comment read, "One unadvertised advantage of pi-hole is monitoring and blocking sites that you don't want kids to use, such as the thousands of io-games and whatnot."

Check out the Pi-hole 4.3.2 release notes for the full list of updates in this release.

Brave ad-blocker gives 69x better performance with its new engine written in Rust
Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end
Opera Touch browser lets you browse with one hand on your iPhone, comes with e2e encryption and built-in ad blockers too!
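The blocking behavior described above is, at its core, a DNS sinkhole: names on the block list resolve to nothing useful, while everything else is forwarded to a real resolver. A minimal sketch of that decision only — the domain list, addresses, and function names here are invented for illustration, not Pi-hole's actual code:

```javascript
// Hypothetical gravity list (Pi-hole compiles its real list from HOSTS-format sources).
const gravity = new Set(["ads.example.com", "tracker.example.net"]);

// Sinkhole resolution: blocked names get an unroutable answer, others pass upstream.
function resolve(domain, upstream) {
  if (gravity.has(domain)) return "0.0.0.0"; // blocked: the device loads nothing
  return upstream(domain);                   // forwarded to the real resolver
}

resolve("ads.example.com", () => "93.184.216.34"); // "0.0.0.0"
resolve("example.com", () => "93.184.216.34");     // "93.184.216.34"
```

Because the decision happens at the DNS layer, every device on the network benefits without any per-device configuration, which is why it works for smart TVs and mobile OSes too.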


Homebrew 2.2 releases with support for macOS Catalina

Vincy Davis
28 Nov 2019
3 min read
Yesterday, Mike McQuaid, the project manager of Homebrew, announced the release of Homebrew 2.2, the third major release of Homebrew this year. Highlights of this new version include support for macOS Catalina, faster handling of HOMEBREW_AUTO_UPDATE_SECS, faster post-install dependent checking in brew upgrade, and more.

Read More: After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side Public License adoption

New key features in Homebrew 2.2

- Homebrew now supports macOS Catalina (10.15); macOS Sierra (10.12) and older are unsupported.
- The no-op case for HOMEBREW_AUTO_UPDATE_SECS has become extremely fast, and the auto-update interval now defaults to 5 minutes instead of 1.
- brew upgrade no longer returns an unsuccessful error code if the formula is already up to date.
- brew upgrade's post-install dependent checking is now significantly faster and more reliable.
- Homebrew on Linux has been updated and has raised its minimum requirements.
- Starting from Homebrew 2.2, the software package management system uses OpenSSL 1.1.
- The Homebrew team has disabled brew tap-pin, since it was buggy and little used by Homebrew maintainers.
- Homebrew will stop supporting Python 2.7 by the end of 2019, when it reaches end of life.

Read More: Apple's MacOS Catalina in major turmoil as it kills iTunes and drops support for 32 bit applications

Many users are excited about this release and have appreciated the maintainers of Homebrew for their efforts.

https://twitter.com/DVJones89/status/1199710865160843265
https://twitter.com/dirksteins/status/1199944492868161538

A user on Hacker News comments, "While Homebrew is perhaps technically crude and somewhat inflexible compared to other and older package managers, I think it deserves real credit for being so easy to add packages to. I contributed Homebrew packages after a few weeks of using macOS, while I didn't contribute a single package in the ten years I ran Debian. I'm also impressed by the focus of the maintainers and their willingness for saying no and cutting features. We need more of that in the programming field. Homebrew is unashamedly solely for running the standard configuration of the newest version of well-behaved programs, which covers at least 90% of my use cases. I use Nix when I want something complicated or nonstandard."

To know about the features in detail, head over to Homebrew's official page.

Announcing Homebrew 2.0.0!
Homebrew 1.9.0 released with periodic brew cleanup, beta support for Linux, Windows and much more!
Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?
ActiveState adds thousands of curated Python packages to its platform
Firefox Preview 3.0 released with Enhanced Tracking Protection, Open links in Private tab by default and more
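The HOMEBREW_AUTO_UPDATE_SECS behavior boils down to a timestamp check: skip the implicit brew update if the last one ran recently enough. A sketch of that logic under stated assumptions — the function and parameter names are invented here; this is not Homebrew's actual Ruby implementation:

```javascript
// Default interval: 5 minutes in Homebrew 2.2 (previously 1 minute).
const AUTO_UPDATE_SECS = 300;

// Should an implicit `brew update` run before this command?
function shouldAutoUpdate(lastUpdateEpochSecs, nowEpochSecs, intervalSecs = AUTO_UPDATE_SECS) {
  return nowEpochSecs - lastUpdateEpochSecs >= intervalSecs;
}

shouldAutoUpdate(1000, 1200); // false: only 200 s elapsed, skip the update
shouldAutoUpdate(1000, 1400); // true: 400 s >= 300 s, run the update
```

The "extremely fast no-op case" mentioned above is the false branch: when the interval has not elapsed, the command proceeds immediately without touching the network.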

Amazon announces improved VPC networking for AWS Lambda functions

Amrata Joshi
04 Sep 2019
3 min read
Yesterday, the team at Amazon announced improved VPC (Virtual Private Cloud) networking for AWS Lambda functions, a major improvement to how AWS Lambda functions work with Amazon VPC networks.

If a Lambda function is not configured to connect to your VPCs, the function can access anything available on the public internet, including other AWS services, HTTPS endpoints for APIs, or endpoints and services outside AWS. The function then has no way to connect to your private resources inside your VPC. When a Lambda function is configured to connect to your own VPC, it creates an elastic network interface within the VPC and does a cross-account attachment.

Image Source: Amazon

These Lambda functions run inside the Lambda service's VPC but can only access resources over the network through your VPC. Even in this case, the user still won't have direct network access to the execution environment where the functions run.

What has changed in the new model?

AWS Hyperplane provides NAT (Network Address Translation) capabilities

The team is using AWS Hyperplane, the Network Function Virtualization platform used for Network Load Balancer and NAT Gateway, which has also supported inter-VPC connectivity for AWS PrivateLink. With the help of Hyperplane, the team provides NAT capabilities from the Lambda VPC to customer VPCs.

Network interfaces within your VPC are mapped to the Hyperplane ENI

The Hyperplane ENI (Elastic Network Interface), a network resource controlled by the Lambda service, allows multiple execution environments to securely access resources within the VPCs in your account. In the previous model, the network interfaces in your VPC were directly mapped to Lambda execution environments; now, the network interfaces within your VPC are mapped to the Hyperplane ENI.

Image Source: Amazon

How is Hyperplane useful?

Reduced latency: When a function is invoked, the execution environment now uses the pre-created network interface and establishes a network tunnel to it, which reduces latency.

Network interface reuse across functions: Each unique security group:subnet combination across functions in your account requires a distinct network interface. If such a combination is shared across multiple functions in your account, the same network interface can now be reused across those functions.

What remains unchanged?

- AWS Lambda functions still need IAM permissions for creating and deleting network interfaces in your VPC.
- Users can still control the subnet and security group configurations of the network interfaces.
- Users still need a NAT device (for example, VPC NAT Gateway) to give a function internet access, or VPC endpoints to connect to services outside of their VPC.
- The types of resources that your functions can access within the VPCs remain the same.

The official post reads, "These changes in how we connect with your VPCs improve the performance and scale for your Lambda functions. They enable you to harness the full power of serverless architectures."

To know more about this news, check out the official post.

What's new in cloud & networking this week?
Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more
VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!
The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models
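The ENI-sharing rule above can be illustrated by counting distinct security-group/subnet combinations: under the new model, that count, not the number of functions, drives how many Hyperplane ENIs are needed. A toy illustration — the function names and resource IDs are invented, and this is a conceptual sketch, not the AWS implementation:

```javascript
// Hypothetical functions, each with a VPC configuration.
const functions = [
  { name: "orders-api",    subnet: "subnet-aaa", securityGroup: "sg-111" },
  { name: "orders-worker", subnet: "subnet-aaa", securityGroup: "sg-111" }, // same combo: ENI reused
  { name: "reports",       subnet: "subnet-bbb", securityGroup: "sg-111" }, // new combo: new ENI
];

// One Hyperplane ENI per distinct securityGroup:subnet combination.
function requiredEnis(fns) {
  return new Set(fns.map((f) => `${f.securityGroup}:${f.subnet}`)).size;
}

requiredEnis(functions); // 2 ENIs serve all 3 functions
```

In the old model each execution environment needed its own interface, so scaling out functions multiplied ENIs; deduplicating by configuration is what makes the new model cheaper and faster to scale.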


Unreal Engine 4.20 released with focus on mobile and immersive (AR/VR/MR) devices

Sugandha Lahoti
20 Jul 2018
4 min read
Following the release of Unreal Engine 4.19 this April, Epic Games has launched Unreal Engine 4.20. This major update focuses on enhancing scalability and creativity, helping developers create more realistic characters and immersive environments for games, film, TV, and VR/AR devices.

Multiple optimizations for Mobile Game development

Epic Games brought over 100 optimizations created for Fortnite on iOS and Android to Unreal Engine 4.20. Hardware occlusion queries are now supported on high-end iOS and Android devices that support ES 3.1 or Vulkan. Developers can also iterate and debug on Android without having to repackage the UE4 project, and they now have unlimited Landscape Material layers on mobile devices.

Mixed Reality Capture

Unreal Engine 4.20 provides new Mixed Reality Capture functionality, which makes it easy to composite real players into a virtual space for mixed reality applications. It has three components: video input, calibration, and in-game compositing. You can use supported webcams and HDMI capture devices to pull real-world green-screened video into the Unreal Engine from a variety of sources. Setup and calibration are done through a standalone calibration tool that can be reused across Unreal Engine 4 titles.

Niagara Visual effects editor

The Niagara visual effects editor is available as an early access plugin. While Niagara builds on the same particle manipulation methods as Cascade (UE4's previous VFX editor), unlike Cascade, Niagara is fully modular. UE 4.20 adds multiple improvements to Niagara effect design and creation. All of Niagara's modules have been updated to support commonly used behaviors in building effects for games, and new UI features have been added for the Niagara stack that mimic the options developers have with UProperties in C++. Niagara now supports GPU simulation on DX11, PS4, Xbox One, OpenGL (ES3.1), and Metal platforms, while Niagara CPU simulation works on PC, PS4, Xbox One, OpenGL (ES3.1), and Metal. Niagara was showcased at GDC 2018; see the presentation "Programmable VFX with Unreal Engine's Niagara" for a complete overview.

Cinematic Depth of Field

Unreal Engine 4.20 also adds Cinematic Depth of Field, with which developers can achieve cinema-quality camera effects in real time. Cinematic DoF provides a cleaner depth-of-field effect with a cinematic appearance through a procedural Bokeh simulation. It also features dynamic resolution stability, supports the alpha channel, and includes settings to scale it down for console projects. For additional information, see the Depth of Field documentation.

Proxy LOD improvements

The Proxy LOD tool is now production-ready. This tool improves performance by reducing the rendering cost of poly count, draw calls, and material complexity, which yields significant gains when developing for mobile and console platforms. The production-ready version of the Proxy LOD tool has several enhancements over the experimental version found in UE 4.19. Improved normal control: the user may now supply the hard-edge cutoff angle and the method used in computing the vertex normal. Gap filling: the Proxy system automatically discards any inaccessible structures; gap filling results in fewer total triangles and better use of the limited texture resource.

Magic Leap One Early Access Support

With Unreal Engine 4.20, game developers can now build for Magic Leap One. Unreal Engine 4 support for Magic Leap One uses built-in UE4 frameworks such as camera control, world meshing, motion controllers, and forward and deferred rendering. For developers with access to hardware, Unreal Engine 4.20 can deploy and run on the device, in addition to supporting zero-iteration workflows through Play In Editor.

Read more: The hype behind Magic Leap's New Augmented Reality Headsets; Magic Leap's first AR headset, powered by Nvidia Tegra X2, is coming this Summer

Apple ARKit 2.0 and Google ARCore 1.2 Support

Unreal Engine 4.20 adds support for Apple's ARKit 2.0, with better tracking quality, vertical plane detection, face tracking, 2D and 3D image detection, and persistent and shared AR experiences. It also adds support for Google's ARCore 1.2, including vertical plane detection, Augmented Images, and Cloud Anchors to build collaborative AR experiences.

These are just a select few updates to the Unreal Engine. The full list of release notes is available on the Unreal Engine blog.

What's new in Unreal Engine 4.19? Game Engine Wars: Unity vs Unreal Engine

Natasha Mathur
30 Jul 2018
2 min read

AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer

AWS announced support for two new actions, redirect and fixed-response, for Elastic Load Balancing in Application Load Balancer last week.

Elastic Load Balancing automatically distributes incoming application traffic across targets such as Amazon EC2 instances, IP addresses, and containers. Application Load Balancer is one of the load balancer types that Elastic Load Balancing offers. It simplifies and improves the security of your application, as it uses only the latest SSL/TLS ciphers and protocols. It is best suited for load balancing of HTTP and HTTPS traffic and operates at the request level (layer 7). Redirect and fixed-response support simplifies the deployment process while leveraging the scale, availability, and reliability of Elastic Load Balancing. Let's discuss how these latest features work.

The new redirect action enables the load balancer to redirect incoming requests from one URL to another. This includes redirecting HTTP requests to HTTPS, allowing more secure browsing, a better search ranking, and a higher SSL/TLS score for your site. Redirects also help move users from an old version of an application to a new version.

The fixed-response action helps control which client requests are served by your applications. It lets you respond to incoming requests with HTTP error response codes and custom error messages directly from the load balancer, with no need to forward the request to the application. Using both redirect and fixed-response actions in your Application Load Balancer considerably improves the customer experience and the security of your user requests.

Redirect and fixed-response actions are now available for your Application Load Balancer in all AWS regions. For more details, check out the Elastic Load Balancing documentation page.
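To make the two actions concrete, here is a sketch of the action payloads accepted by the ELBv2 API (for example in the `Actions` list of a rule). The field names follow the ELBv2 API; the specific ports, status codes, and messages below are illustrative choices, not values from the announcement.

```python
import json

# Redirect action: send HTTP requests to HTTPS with a permanent redirect.
# "#{host}", "#{path}", and "#{query}" are the API's substitution
# placeholders that preserve the original request components.
redirect_action = {
    "Type": "redirect",
    "RedirectConfig": {
        "Protocol": "HTTPS",
        "Port": "443",
        "Host": "#{host}",
        "Path": "/#{path}",
        "Query": "#{query}",
        "StatusCode": "HTTP_301",
    },
}

# Fixed-response action: answer directly from the load balancer,
# without forwarding the request to any target.
fixed_response_action = {
    "Type": "fixed-response",
    "FixedResponseConfig": {
        "StatusCode": "503",
        "ContentType": "text/plain",
        "MessageBody": "Service temporarily unavailable",
    },
}

print(json.dumps([redirect_action, fixed_response_action], indent=2))
```

With boto3, either dict would be passed in the `Actions` parameter of `create_rule` on an `elbv2` client; the equivalent structures apply to the AWS CLI's `--actions` argument.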
Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial] Build an IoT application with AWS IoT [Tutorial]

Amrata Joshi
04 Jan 2019
3 min read

Introducing TXQR, data transfer via animated QR codes

TXQR is a project for transferring data via animated QR codes. It is written in Go and uses fountain erasure codes. Ivan Daniluk, its creator and a software engineer, has shared his experience building TXQR along with the results of using animated QR codes as a data transfer method.

QR Codes

QR codes, a type of visual encoding, allow different error recovery levels, with almost 30% redundancy at the highest level. QR Version 40 can encode up to 4296 alphanumeric or 2953 binary symbols. This raises two major issues: first, 3-4KB might just not be enough, and second, the more data in a QR code, the better the image quality and resolution need to be. But what if we need to transfer roughly 15KB of data on average consumer devices? Using animated QR codes with dynamic FPS and size changes could possibly work.

The basic design of TXQR

A client chooses the data to be sent, generates an animated QR code, and shows it in a loop until all the frames are received by the reader. The encoding is designed to allow frames to arrive in any order, as well as dynamic changes in FPS. If the reader is slower, it can display the message "please decrease FPS on the sender." The protocol is simple: each frame starts with a prefix "NUM/TOTAL|" (where NUM and TOTAL are integer values for the current and total frame counts respectively), and the rest is the file content. The original data is encoded using Base64, so only alphanumeric data is actually encoded in the QR code.

Gomobile

To get a .framework or .aar file to include in your iOS or Android project, you can write standard Go code and then run gomobile bind. You can then refer to it as any regular library and get autocomplete and type information. Ivan built a simple iOS QR scanner in Swift and modified it to read animated QR codes, fed the decoded chunks into the txqr decoder, and displayed the received file in a preview window.
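The framing scheme just described is simple enough to sketch. Below is a minimal Python version of the "NUM/TOTAL|" protocol; the original implementation is in Go, and the function names and default chunk size here are illustrative assumptions, not TXQR's actual API.

```python
import base64


def encode_frames(data: bytes, chunk_size: int = 1000) -> list:
    """Split data into TXQR-style frames: 'NUM/TOTAL|' + a Base64 chunk.

    chunk_size is the number of Base64 characters per frame (an
    assumption; the real sender tunes this to QR capacity).
    """
    encoded = base64.b64encode(data).decode("ascii")
    chunks = [encoded[i:i + chunk_size]
              for i in range(0, len(encoded), chunk_size)]
    total = len(chunks)
    return [f"{num}/{total}|{chunk}"
            for num, chunk in enumerate(chunks, start=1)]


def decode_frames(frames) -> bytes:
    """Reassemble frames received in any order."""
    parts = {}
    total = None
    for frame in frames:
        header, payload = frame.split("|", 1)
        num, total_str = header.split("/")
        total = int(total_str)
        parts[int(num)] = payload
    if total is None or len(parts) < total:
        raise ValueError("missing frames")
    encoded = "".join(parts[i] for i in range(1, total + 1))
    return base64.b64decode(encoded)
```

Because every frame carries its own index and the total count, the reader can start anywhere in the loop and stop as soon as all TOTAL distinct frames have been seen.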
Fountain codes

TXQR is used for unidirectional data transfer via an animated sequence of QR codes. The original approach repeated the encoded sequence over and over until the receiver got the complete data, which led to long delays whenever the receiver missed even one frame. As an article by Bojtos Kiskutya shows, LT (Luby Transform) codes can yield much better results for TXQR. LT codes are one implementation of the family of codes called fountain codes: a class of erasure codes that can produce a potentially infinite number of encoded blocks from the K source message blocks. The receiver can receive blocks from any point, in any order, with any erasure probability, and decoding can begin as soon as slightly more than K distinct blocks have been received. The name comes from picturing the encoded blocks as a fountain's water drops. Fountain codes are simple, and they solve a critical problem by harnessing randomness, mathematical logic, and probability distribution tuning.

This article covered TXQR's basic design, the basics of animated QR codes, fountain codes, Gomobile, and more. To learn about the experiments in detail, check out Ivan's GitHub.

AWS introduces 'AWS DataSync' for automated, simplified, and accelerated data transfer Google, Microsoft, Twitter, and Facebook team up for Data Transfer Project MySQL Data Transfer using Sql Server Integration Services (SSIS)
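To illustrate the fountain-code idea, here is a toy encoder and peeling decoder in Python. Each droplet XORs a random subset of the equally sized source blocks; a real LT code draws the subset size (the degree) from a robust soliton distribution rather than uniformly, so treat this as a sketch of the principle, not of TXQR's actual Go implementation.

```python
import random


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def fountain_encode(blocks, seed):
    """Yield an endless stream of (indices, payload) droplets.

    Each droplet is the XOR of a random subset of the source blocks
    (all blocks are assumed to be the same size). Uniform degree choice
    is a simplification of the soliton distribution used by LT codes.
    """
    rng = random.Random(seed)
    k = len(blocks)
    while True:
        degree = rng.randint(1, k)
        idxs = frozenset(rng.sample(range(k), degree))
        payload = b"\x00" * len(blocks[0])
        for i in idxs:
            payload = xor(payload, blocks[i])
        yield idxs, payload


def fountain_decode(droplets, k):
    """Peeling decoder: substitute recovered blocks into droplets and
    resolve any droplet whose degree drops to one."""
    known = {}
    pending = [[set(idxs), payload] for idxs, payload in droplets]
    progress = True
    while len(known) < k and progress:
        progress = False
        for d in pending:
            idxs, payload = d[0], d[1]
            # Peel out blocks we already know from this droplet.
            for i in list(idxs):
                if i in known:
                    idxs.discard(i)
                    payload = xor(payload, known[i])
            d[1] = payload
            if len(idxs) == 1:
                i = idxs.pop()
                if i not in known:
                    known[i] = payload
                    progress = True
    if len(known) < k:
        raise ValueError("not enough droplets to decode")
    return [known[i] for i in range(k)]
```

Because droplets are interchangeable, it no longer matters which frames the QR reader misses: any sufficiently large set of distinct droplets lets the peeling process recover all K source blocks.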
Guest Contributor
18 Sep 2018
3 min read

Elon Musk reveals big plans with Neuralink

Be it a tweet about taking the company private or smoking weed on a radio show, Elon Musk has been in the news for all the wrong reasons recently. He is in the news again, but this time for what he is best admired as: a modern-day visionary. As per reports, the Tesla and SpaceX founder is working on a 'superhuman' product that will connect your brain to a computer.

Musk, along with eight others, founded a company called Neuralink two years ago. The company has been developing implantable brain-computer interfaces, better known as BCIs. While in the short term the company's aim is to use the technology to treat brain diseases, Musk's eventual goal is human enhancement, which he believes will make us more intelligent and powerful than even AI. According to hints he gave a week ago, Neuralink may soon be close to announcing a product unlike anything we have seen: a brain-computer interface.

Appearing on the Joe Rogan Experience podcast last week, Musk stated that he'll soon be announcing a new Neuralink product which will connect your brain to a computer, thus making you superhuman. When asked about Neuralink, Musk said: "I think we'll have something interesting to announce in a few months that's better than anyone thinks is possible. Best case scenario, we effectively merge with AI. It will enable anyone who wants to have superhuman cognition. Anyone who wants. How much smarter are you with a phone or computer or without? You're vastly smarter, actually. You can answer any question pretty much instantly. You can remember flawlessly. Your phone can remember videos [and] pictures perfectly. Your phone is already an extension of you. You're already a cyborg. Most people don't realise you're already a cyborg. It's just that the data rate, it's slow, very slow. It's like a tiny straw of information flow between your biological self and your digital self. We need to make that tiny straw like a giant river, a huge, high-bandwidth interface."

If we visualize what Musk said, it feels like a scene straight from a Hollywood movie. However, many creations from a decade ago that were thought to belong solely in the world of science fiction have become a reality now. Musk argues that through our over-dependence on smartphones we have already taken the first step towards our cyborg future; Neuralink is an attempt to accelerate the process by leaps and bounds. That's not all: Musk was also quoted on CNBC as saying, "If your biological self dies, you can upload into a new unit. Literally, with our Neuralink technology." Read the full news on CNBC.

About Author

Sandesh Deshpande is currently working as a System Administrator for Packt Publishing. He is highly interested in Artificial Intelligence and Machine Learning.

Tesla is building its own AI hardware for self-driving cars Elon Musk's tiny submarine is a lesson in how not to solve problems in tech DeepMind, Elon Musk, and others pledge not to build lethal AI

Bhagyashree R
26 Sep 2018
2 min read

Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux

Written in C90, Wasmjit is a small embeddable WebAssembly runtime. It is portable to most environments, but it primarily targets a Linux kernel module that can host Emscripten-generated WebAssembly modules.

What are the benefits of Wasmjit?

Improved performance: Using Wasmjit you can run WebAssembly modules in kernel space (ring 0). This provides access to system calls as normal function calls, eliminating the user-kernel transition overhead, and also avoids the scheduling overhead of swapping page tables. The result is a performance boost for syscall-bound programs like web servers or FUSE file systems.

No need to run an entire browser: Wasmjit also comes with a host environment for running in user space on POSIX systems, which allows running WebAssembly modules without having to run an entire browser.

What tools do you need to get started?

The following are the tools you require to get started with Wasmjit:

A standard POSIX C development environment with cc and make
The Emscripten SDK
Optionally, kernel headers on Linux: the linux-headers-amd64 package on Debian, kernel-devel on Fedora

What's in the future?

Wasmjit currently supports x86_64 and can run a subset of Emscripten-generated WebAssembly on Linux, macOS, and within the Linux kernel as a kernel module. Coming releases should bring implementations and improvements along the following lines:

Enough Emscripten host bindings to run nginx.wasm
Introduction of an interpreter
A Rust runtime for Rust-generated wasm files
A Go runtime for Go-generated wasm files
An optimized x86_64 JIT
An arm64 JIT
A macOS kernel module

What to consider when using this runtime?

Wasmjit uses vmalloc(), a function for allocating a contiguous memory region in the virtual address space, for code and data section allocations. This prevents those pages from ever being swapped to disk, so indiscriminate access to the /dev/wasm device can make a system vulnerable to denial-of-service attacks. To mitigate this risk, a system-wide limit on the amount of memory used by the /dev/wasm device will be provided in the future.

To get started with Wasmjit, check out its GitHub repository.

Why is everyone going crazy over WebAssembly? Unity Benchmark report approves WebAssembly load times and performance in popular web browsers Golang 1.11 is here with modules and experimental WebAssembly port among other updates

Vincy Davis
02 Aug 2019
4 min read

MacOS terminal emulator, iTerm2 3.3.0 is here with new Python scripting API, a scriptable status bar, Minimal theme, and more

Yesterday, the team behind iTerm2, the GPL-licensed terminal emulator for macOS, announced the release of iTerm2 3.3.0. It is a major release with many new features, such as a new Python scripting API, a new scriptable status bar, two new themes, and more. iTerm2 is a successor to iTerm and works on all versions of macOS. It is an open source replacement for Apple's Terminal and is highly customizable, as it comes with a lot of useful features.

Major highlights in iTerm2 3.3.0

A new Python scripting API which can control iTerm2 and extend its behavior has been added. It allows users to write Python scripts easily, enabling extensive configuration and customization.
A new scriptable status bar has been added, with 13 built-in configurable components.
iTerm2 3.3.0 comes with two new themes. The first, called Minimal, helps reduce visual clutter. The second, called Compact, can move tabs into the title bar, saving space while maintaining the general appearance of a macOS app.

Other new features in iTerm2 3.3.0

The session, tab, and window titles have been given a new appearance to make them more flexible and comprehensible. It is now possible to configure these titles separately and to select what type of information each shows per profile. These titles are integrated with the new Python scripting API.
Tab titles have new icons, which show either the running app or a fixed icon per profile.
A new toolbelt called 'Actions' has been introduced. It provides shortcuts to frequent actions, like sending a snippet of text.
A new utility, 'it2git', which allows the git status bar component to show git state on a remote host, has been added.
Support for crossed-out text (SGR 9) and for automatically restarting a session when it ends has also been added.

Other improvements in iTerm2 3.3.0

Many visual improvements and an updated app icon
Various pages of preferences have been rearranged to make them more visually appealing
The password manager can be used to enter a password securely
A new option to log Automatic Profile Switching messages to the scripting console
Improved performance for long scrollback histories

Users love the new features in the iTerm2 3.3.0 release, especially the new Python API, the scriptable status bar, and the new Minimal theme.

https://twitter.com/lambdanerd/status/1157004396808552448 https://twitter.com/alloydwhitlock/status/1156962293760036865 https://twitter.com/josephcs/status/1157193431162036224 https://twitter.com/dump/status/1156900168127713280

A user on Hacker News comments, "First off, wow love the status bar idea." Another user says, "Kudos to Mr. Nachman on continuing to develop a terrific piece of macOS software! I've been running the 3.3 betas for a while and some of the new functionality is really great. Exporting a recording of a terminal session from the 'Instant Replay' panel is very handy!"

A few users are not impressed with iTerm2 3.3.0 and compare it with the Terminal app. One comment on Hacker News reads, "I like having options but wouldn't recommend iTerm. Apple's Terminal.app is more performant rendering text and more responsive to input while admittedly having somewhat less unnecessary features. In fact, iTerm is one of the slowest terminals out there! iTerm used to have a lot of really compelling stuff that was missing from the official terminal like tabs, etc that made straying away from the canonical terminal app worth it but most of them eventually made their way to Terminal.app so nowadays it's mostly just fluff."

For the full list of improvements in iTerm2 3.3.0, visit the iTerm2 changelog page.

Apple previews macOS Catalina 10.15 beta, featuring Apple Music, TV apps, security, zsh shell, DriverKit, and much more! WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and iPad and more Safari Technology Preview release 83 now available for macOS Mojave and macOS High Sierra
Owen Roberts
13 Jan 2017
5 min read

Free Web Development eBooks

So you want to get into web development, but where do you even begin? The world of web development has never been more important. Last year was the first time that the majority of purchases were made on the web rather than in a brick-and-mortar store. Web apps now offer fantastic ease of use for business clients and shoppers, and a poorly built or optimized app is often all it takes for your customers to leave your ideas behind and go somewhere else.

Luckily, we've got tons of free eBooks to get you started if you're looking to enter the wide world of web development with your best foot forward. And when you've got the basics down and are ready to develop your skills in new and interesting ways with frameworks, coding paradigms, and optimization, we've got you covered with our entire range of web development titles to buy, too. Here are our current top free learning resources we think are perfect to get started with. All you need to do is click the links below and log in, and you'll find the eBook in your account. You'll then be able to download the free web development guides as PDFs, which you can keep forever. It's that simple.

Thinking in HTML

For anyone who wants to understand the web and how it works, HTML is an essential place to start. If you need somewhere to begin your journey into HTML, this free eBook provides everything you need to get to grips with the code and start building your own web pages. Inside, you'll explore everything from how HTML code structures a web page, to formatting pages and including essentials like hyperlinks and images, to creating your own forms, giving you the perfect foundation to build on in all your future work. Download and read Thinking in HTML for free now.

Thinking in CSS

CSS is intrinsic to the modern web, and if you want to become a true web developer you must know it inside and out. Thinking in CSS is your gateway to the ins and outs of the language. This title gives you a crash course on the structure of CSS, how to use selectors to locate elements to restyle your web page, and how CSS interacts with the web pages you see every day. With this book by your side you'll be able to cut through CSS without breaking a sweat, letting you focus on making your web pages look great without getting lost in the language. Download and read Thinking in CSS for free now.

What you need to know about JavaScript

JavaScript and the web go hand in hand, and as a web developer you need a great handle on the language to ensure you're creating the best apps you can. From the absolute basics of JS syntax to combining ECMAScript 6 and Visual Studio Code, this primer is the perfect resource for a JavaScript master in the making. Download and read What you need to know about JavaScript for free now.

What you need to know about Angular 2

Angular was one of the biggest development revolutions of the decade, and you owe it to yourself to see why Angular 2 is going to be just as big; this title will show you how. Giving you everything you need to build your first basic Angular app, it is the ideal place to start for any Angular initiate. Download and read What you need to know about Angular 2 for free now.

Mastering JavaScript High Performance

So, you've got the basics of web development down: where do you go from there? Why not make sure your JavaScript code's performance is as good as possible with this title? Inside this 208-page guide you'll find a whole host of easy-to-apply tips, tricks, and best practices you can bring to your web development work going forward. With a few simple tricks you can rest easy knowing your web or mobile apps run faster and better for both you and your customers, so your competitors don't leave you in the dust. Download and read Mastering JavaScript High Performance for free now.

There we go: five titles to give you the start you need (and a bit more for when you're ready to dig deeper) to create your very own web apps. So what are you waiting for? Get downloading, get reading, and then get creating!

Fatema Patrawala
06 May 2019
5 min read

Amazon S3 is retiring support for path-style API requests; sparks censorship fears

Last Tuesday, Amazon announced that Amazon S3 will no longer support path-style API requests. Currently Amazon S3 supports two request URI styles in all regions: path-style (also known as V1), which includes the bucket name in the path of the URI (example: //s3.amazonaws.com/<bucketname>/key), and virtual-hosted style (also known as V2), which uses the bucket name as part of the domain name (example: //<bucketname>.s3.amazonaws.com/key).

The Amazon team mentions in the announcement: "In our effort to continuously improve customer experience, the path-style naming convention is being retired in favor of virtual-hosted style request format." They have asked customers to update their applications to use the virtual-hosted style request format when making S3 API requests, and to do so before September 30th, 2020 to avoid service disruptions. Customers using the AWS SDK can upgrade to the most recent version of the SDK to ensure their applications use the virtual-hosted style request format. They further mention: "Virtual-hosted style requests are supported for all S3 endpoints in all AWS regions. S3 will stop accepting requests made using the path-style request format in all regions starting September 30th, 2020. Any requests using the path-style request format made after this time will fail."

Users on Hacker News see this as a poor decision by Amazon and have noted one implication: collateral freedom techniques using Amazon S3 will no longer work. One of them commented strongly: "One important implication is that collateral freedom techniques [1] using Amazon S3 will no longer work. To put it simply, right now I could put some stuff not liked by Russian or Chinese government (maybe entire website) and give a direct s3 link to https://s3.amazonaws.com/mywebsite/index.html. Because it's https — there is no way man in the middle knows what people read on s3.amazonaws.com. With this change — dictators see my domain name and block requests to it right away. I don't know if they did it on purpose or just forgot about those who are less fortunate in regards to access to information, but this is a sad development. This censorship circumvention technique is actively used in the wild and loosing Amazon is no good."

The Amazon team suggests that if your application cannot use the virtual-hosted style request format, or if you have any questions or concerns, you may reach out to AWS Support. To know more about this news, check out the official announcement page from Amazon.

Update from the Amazon team on 8th May

Amazon's Chief Evangelist for AWS, Jeff Barr, sat with the S3 team to understand this change in detail. After getting a better understanding, he posted an update on why the team plans to deprecate the path-based model. Here is his comparison of the old vs. the new.

S3 currently supports two different addressing models: path-style and virtual-hosted style. Take a quick look at each one. The path-style model looks either like this (the global S3 endpoint):

https://s3.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
https://s3.amazonaws.com/jeffbarr-public/classic_amazon_door_desk.png

Or this (one of the regional S3 endpoints):

https://s3-us-east-2.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
https://s3-us-east-2.amazonaws.com/jeffbarr-public/classic_amazon_door_desk.png

Here, jbarr-public and jeffbarr-public are bucket names; images/ritchie_and_thompson_pdp11.jpeg and classic_amazon_door_desk.png are object keys. Even though the objects are owned by distinct AWS accounts and are in different S3 buckets (and possibly in distinct AWS regions), both of them are in the DNS subdomain s3.amazonaws.com. Hold that thought while we look at the equivalent virtual-hosted style references:

https://jbarr-public.s3.amazonaws.com/images/ritchie_and_thompson_pdp11.jpeg
https://jeffbarr-public.s3.amazonaws.com/classic_amazon_door_desk.png

These URLs reference the same objects, but the objects are now in distinct DNS subdomains (jbarr-public.s3.amazonaws.com and jeffbarr-public.s3.amazonaws.com, respectively). The difference is subtle, but very important. When you use a URL to reference an object, DNS resolution is used to map the subdomain name to an IP address. With the path-style model, the subdomain is always s3.amazonaws.com or one of the regional endpoints; with the virtual-hosted style, the subdomain is specific to the bucket. This additional degree of endpoint specificity is the key that opens the door to many important improvements to S3.

A select few in the community are in favor of this, as one user comment on Hacker News shows: "Thank you for listening! The original plan was insane. The new one is sane. As I pointed out here https://twitter.com/dvassallo/status/1125549694778691584 thousands of printed books had references to V1 S3 URLs. Breaking them would have been a huge loss. Thank you!" But for others, the Amazon team has failed to address the domain censorship issue, as another user says: "Still doesn't help with domain censorship. This was discussed in-depth in the other thread from yesterday, but TLDR, it's a lot harder to block https://s3.amazonaws.com/tiananmen-square-facts than https://tiananmen-square-facts.s3.amazonaws.com because DNS lookups are made before HTTPS kicks in." Read about this update in detail here.

Amazon S3 Security access and policies 3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations Amazon introduces S3 batch operations to process millions of S3 objects
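The mechanical part of the migration is rewriting path-style URLs into virtual-hosted style. Here is a minimal Python sketch of the conversion described above; it recognizes only the global endpoint and `s3-<region>` regional endpoints, and ignores edge cases such as bucket names containing dots (which complicate HTTPS certificate matching in the virtual-hosted style).

```python
from urllib.parse import urlparse


def to_virtual_hosted(url: str) -> str:
    """Rewrite a path-style S3 URL (V1) as virtual-hosted style (V2).

    A sketch: only https://s3.amazonaws.com/... and
    https://s3-<region>.amazonaws.com/... inputs are handled.
    """
    parts = urlparse(url)
    host = parts.netloc
    is_global = host == "s3.amazonaws.com"
    is_regional = host.startswith("s3-") and host.endswith(".amazonaws.com")
    if not (is_global or is_regional):
        raise ValueError("not a path-style S3 endpoint: " + host)
    # The first path segment is the bucket; the remainder is the key.
    bucket, _, key = parts.path.lstrip("/").partition("/")
    if not bucket:
        raise ValueError("no bucket name in path")
    return "{}://{}.{}/{}".format(parts.scheme, bucket, host, key)
```

Running it on Jeff Barr's global-endpoint example moves `jbarr-public` from the first path segment into the subdomain, exactly as in the virtual-hosted references above.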