
Tech News

3709 Articles

Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70

Bhagyashree R
13 Aug 2019
4 min read
Last year, Apple removed the Extended Validation (EV) certificate indicators from Safari on both iOS 12 and Mojave. Now, Google and Mozilla are following suit by removing the EV visual indicators starting from Chrome 77 and Firefox 70.

What are Extended Validation certificates?

Introduced in 2007, Extended Validation certificates are issued to applicants after they are verified as a genuine legal entity by a certificate authority (CA). The baseline requirements for an EV certificate are outlined by the CA/Browser Forum. Web browsers show a green address bar when visiting a website that uses an EV certificate, with the company name displayed alongside the padlock symbol. These certificates can be expensive: DigiCert charges $344 USD per year, Symantec prices its EV certificate at $995 USD a year, and Thawte at $299 USD a year.

Why Chrome and Firefox are removing EV indicators

In a survey conducted by Google, users of the Chrome and Safari browsers were asked how much they trusted a website with and without EV indicators. The results showed that browser identity indicators have little effect on users' secure choices: about 85 percent of users did not find anything strange about a Google login page with the fake URL "accounts.google.com.amp.tinyurl.com". Based on these results and prior academic work, the Chrome Security UX team concluded that positive security indicators are largely ineffective. "As part of a series of data-driven changes to Chrome's security indicators, the Chrome Security UX team is announcing a change to the Extended Validation (EV) certificate indicator on certain websites starting in Chrome 77," the team wrote in a Google group. Another reason behind this decision was that the EV indicator takes up valuable screen space.

Starting with Chrome 77, the information related to EV certificates will be shown in the Page Info bubble that appears when the lock icon is clicked, instead of in an EV badge.

Source: Google

Citing similar reasons, the team behind Firefox shared their intention yesterday to remove EV indicators from Firefox 70 for desktop. They also plan to move this information to the identity panel instead of showing it on the identity block. "The effectiveness of EV has been called into question numerous times over the last few years, there are serious doubts whether users notice the absence of positive security indicators and proof of concepts have been pitting EV against domains for phishing," the team wrote.

Many CAs market EV certificates as something that builds visitor confidence and protects visitors against phishing and identity fraud. Looking at these developments, Troy Hunt, a web security expert and the creator of "Have I Been Pwned?", concluded that EV certificates are now dead. In a blog post, he asked, "how long will it take the CAs selling EV to adjust their marketing to align with reality?"

Users have mixed feelings about this change. "Good riddance, IMO. They never meant much, to begin with, the validation procedures were basically "can you pay the fee?", and they only added to user confusion," a user said on Hacker News. Many users believe that EV indicators are valuable for financial transactions. A user commented on Reddit, "As a financial institution it was always much easier to just say "make sure it says <Bank name> in the URL bar and it's green" when having a customer on the phone than "Please click on settings -> advanced settings -> security -> display certificate and check the value subject"."

To know more, check out the official announcements by the Chrome and Firefox teams.

Google Chrome to simplify URLs by hiding special-case subdomains
Flutter gets new set of lint rules to build better Chrome OS apps
Mozilla releases WebThings Gateway 0.9 experimental builds targeting Turris Omnia and Raspberry Pi 4


Amazon EBS snapshots exposed publicly leaking sensitive data in hundreds of thousands, security analyst reveals at DefCon 27

Fatema Patrawala
13 Aug 2019
5 min read
Last week at the DefCon 27 security conference in Las Vegas, it was revealed that companies, governments, and startups are inadvertently leaking their own files from the cloud. Ben Morris, a senior security analyst at cybersecurity firm Bishop Fox, presented at DefCon on finding secrets in publicly exposed EBS volumes.

"You may have heard of exposed S3 buckets — those Amazon-hosted storage servers packed with customer data but often misconfigured and inadvertently set to "public" for anyone to access. But you may not have heard about exposed EBS snapshots, which poses as much, if not a greater, risk," Morris said.

"Did you know that Elastic Block Storage (Amazon EBS) has a "public" mode that makes your virtual hard disk available to anyone on the internet? Apparently hundreds of thousands of others didn't either, because they're out there exposing secrets for everyone to see. I tore apart petabytes of data for you and have some dirty laundry to air: encryption keys, passwords, authentication tokens, PII, you name it and it's here. Whole (virtual) hard drives to live sites and apps, just sitting there for anyone to read. So much data in fact that I had to invent a custom system to process it all," he added.

Ahead of his talk at DefCon, Morris also told a TechCrunch reporter that these elastic block storage (EBS) snapshots are the "keys to the kingdom." "They have the secret keys to your applications and they have database access to your customers' information," he said. "When you get rid of the hard disk for your computer, you know, you usually shred it or wipe it completely. But these public EBS volumes are just left for anyone to take and start poking at." He said that all too often cloud admins don't choose the correct configuration settings, leaving EBS snapshots inadvertently public and unencrypted.

"That means anyone on the internet can download your hard disk and boot it up, attach it to a machine they control, and then start rifling through the disk to look for any kind of secrets," he said.

Source: TechCrunch, Morris' DefCon slides explaining how EBS snapshots can be exposed.

Morris built a tool using Amazon's own internal search feature to query and scrape publicly exposed EBS snapshots. He then attached each snapshot, made a copy, and listed the contents of the volume on his system. "If you expose the disk for even just a couple of minutes, our system will pick it up and make a copy of it," he said. It took him two months, and just a few hundred dollars spent on Amazon cloud resources, to build up a database of exposed data. Morris validates each snapshot and then deletes the data.

Morris found dozens of snapshots exposed publicly in one region alone, including application keys, critical user and administrative credentials, source code, and more. He found data from several major companies, including healthcare providers and tech companies, exposed publicly. He also found VPN configurations, which could allow him to tunnel into a corporate network. Among the most damaging finds was a snapshot from a government contractor that provides data storage services to federal agencies. "On their website, they brag about holding this data," he said, referring to everything from collected intelligence on messages sent to and from the so-called Islamic State terror group to data on border crossings. Morris estimated the figure to be approximately 1,250 exposures across all Amazon cloud regions.

An Amazon spokesperson told TechCrunch that customers who set their Amazon EBS snapshots to public "have been notified and advised to take the snapshot offline if the setting was unintentional." Morris plans to release his proof-of-concept code in the coming weeks. "I'm giving companies a couple of weeks to go through their own disks and make sure that they don't have any accidental exposures," he said.

On Hacker News, users were astonished by the finding; some say they have never come across such a situation after working on AWS for years, while others suggest the exposure of Amazon EBS snapshots could be accidental or due to management pressure. One of the comments reads, "I've been working almost exclusively in the AWS space for about 10 years now. Clients anywhere from tiny little three-person consultancies to Fortune 100. Commercial, govcloud, dozens of clients. Never once have I ever found a use case for making public EBS snapshots. Who on Earth is thinking that it is a good idea to take an EBS snapshot and make it public? Note, several of those engagements did involve multiple accounts, and the need to share / copy AMIs and/or snapshots between accounts. But never making them public." Another user responded, "Laziness in attempting to share data with someone in another org? "Nope, can't access it" ... "Nope, still can't access it"... "My manager is harassing me to get access now"... "Look, just make it public then change it back after I get it copied"..."

Ex-Amazon employee hacks Capital One's firewall to access its Amazon S3 database; 100m US and 60m Canadian users affected
Amazon S3 is retiring support for path-style API requests; sparks censorship fears
Amazon S3 Security access and policies
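Ahead of Morris's proof-of-concept release, a team can run the kind of self-audit he recommends with a short boto3 script. The sketch below is illustrative, not Morris's tool: it assumes boto3 is installed and AWS credentials are configured, the function names are invented, and pagination is omitted for brevity. A snapshot is "public" when the special group "all" appears in its createVolumePermission attribute.

```python
# Minimal self-audit sketch: flag any of your own EBS snapshots whose
# createVolumePermission attribute contains the group "all" (i.e. public).
# Assumptions: boto3 installed, AWS credentials configured. Not Morris's tool.
try:
    import boto3  # only needed for the live audit, not the permission check
except ImportError:
    boto3 = None


def is_public(create_volume_permissions):
    # A snapshot is public when the group "all" appears in its permissions.
    return any(p.get("Group") == "all" for p in create_volume_permissions)


def audit_own_snapshots(region="us-east-1"):
    # Requires ec2:DescribeSnapshots and ec2:DescribeSnapshotAttribute.
    # Pagination is omitted here for brevity.
    ec2 = boto3.client("ec2", region_name=region)
    exposed = []
    for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
        attr = ec2.describe_snapshot_attribute(
            SnapshotId=snap["SnapshotId"],
            Attribute="createVolumePermission",
        )
        if is_public(attr.get("CreateVolumePermissions", [])):
            exposed.append(snap["SnapshotId"])
    return exposed
```

An exposed snapshot can be made private again with `modify_snapshot_attribute(SnapshotId=..., Attribute="createVolumePermission", OperationType="remove", GroupNames=["all"])` on the same client.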


GNU Radio 3.8.0.0 releases with new dependencies, Python 2 and 3 compatibility, and much more!

Amrata Joshi
13 Aug 2019
2 min read
Last week, the team behind GNU Radio announced the release of GNU Radio 3.8, a free and open-source software development toolkit. GNU Radio 3.8.0.0 comes with a few major changes and deprecations.

Major changes in GNU Radio 3.8.0.0

Dependencies
With this release, new dependencies have been introduced, including MPIR/GMP, Qt5, codec2, and gsm. The team has also removed a few dependencies, including libusb, Qt4, and CppUnit.

Python compatibility
This release is compatible with both Python 2 and Python 3. GNU Radio 3.8 will also be the last Py2k-compatible release series.

Gengen replaced
Gengen (GENerator GENerator), a tool that generates text generators, has been replaced by templates.

gnuradio-runtime
The team has reworked fractional tag time handling in the context of resamplers.

C++ generation
C++ generation has been introduced as an option in this release.

gr-utils
The gr_modtool has been improved.

Deprecations in GNU Radio 3.8

Modules
The modules gr-comedi, gr-fcd, and gr-wxgui have been removed.

gr-comedi
gr-comedi has been removed as it had no active code contributions during the 3.7 lifecycle.

gr-fcd
gr-fcd has been removed as it is currently untestable by the CI and received no code contributions.

Some users are excited to experiment with GNU Radio 3.8 in the near future. A user commented on Hacker News, "GNU Radio is one of those examples of free software being hyper-niche yet super successful. It's something I want to start playing with in the near future." To know more about this news, check out the official post by GNU Radio.

GNU C Library version 2.30 releases with POSIX-proposed functions, support for Unicode 12.1.0, new Linux functions and more!
GNU APL 1.8 releases with bug fixes, FFT, GTK, RE and more
Debian 10 codenamed 'buster' released, along with Debian GNU/Hurd 2019 as a port


Verizon sells Tumblr to WordPress parent, Automattic, for allegedly less than $3 million, a fraction of its acquisition cost

Vincy Davis
13 Aug 2019
4 min read
Yesterday, Tumblr staff announced to users that Automattic, the company that owns WordPress.com, plans to acquire Tumblr. Though the official post does not mention any details, it has been reported that Verizon sold Tumblr for less than $3 million. Automattic will also absorb 200 of Verizon's employees; other details of the deal remain undisclosed. The official blog post states, "We couldn't be more excited to be joining a team that has a similar mission. Many of you know WordPress.com, Automattic's flagship product. WordPress.com and Tumblr were both early pioneers among blogging platforms."

https://twitter.com/jeffdonof/status/1161034494465519620

Launched in 2007, Tumblr is a microblogging and social networking website that allows users to upload and share photos, music, and art, and to post short blogs. It hosts more than 450 million blogs and was once considered one of the major players among social media platforms. In 2013, Yahoo acquired Tumblr for $1.1 billion, when the company was one of the leading social media platforms. After poor returns from Tumblr, Yahoo wrote its value down to $230 million, and in 2017 Verizon took it over as part of its Yahoo acquisition.

In December 2018, Verizon announced a new policy banning all adult content on Tumblr. The policy came days after Tumblr was removed from Apple's iOS App Store over a child pornography incident. It infuriated many users, leading to a further decline in Tumblr's user count. Two months ago, it was reported that Verizon was keen to sell Tumblr to compensate for missed revenue targets.

Automattic acquiring the company is seen as a good sign by many, as WordPress.com is one of the most popular open-source blogging platforms. Although Tumblr has suffered from inconsistent ownership all along, it does have a loyal user base. Automattic's Chief Executive Officer, Matt Mullenweg, believes the new ownership and investment will make Tumblr blossom. "I was very impressed with the engagement and activity Tumblr has continued to have," he said on Hacker News. In an interview with the Wall Street Journal, Mullenweg said this is the biggest acquisition for the company in terms of price and headcount, and mentioned that Tumblr will act as a "complementary" site to WordPress.

Although Automattic allows adult content on its own platform, Mullenweg has said that Automattic will continue Verizon's policy of no adult content on Tumblr: "Adult content is not our forte either, and it creates a huge number of potential issues with app stores, payment providers, trust and safety."

https://twitter.com/photomatt/status/1161049101741494273

Many users are annoyed with Automattic's decision not to support adult content on Tumblr.

https://twitter.com/countchrisdo/status/1161136251631734784

Another user tweeted that Twitter and Reddit both allow adult content, so Automattic should show some care for the people affected by the ban. He added, "No one wants the NSFW ban to stay, but I guess you're fine with it as long as it lines your pockets." Another user asked, "I'm curious why you would choose to maintain Verizon's policy changes that alienated the majority of the user-base."

Many users are, however, happy that Tumblr has finally found a stable host in Automattic.

https://twitter.com/marcoarment/status/1161015149563645953
https://twitter.com/fraying/status/1161020130966437888
https://twitter.com/onalark/status/1161020459980222464

Some feel that Tumblr is a dead company and that Automattic's $3 million is money down the drain.

https://twitter.com/shiruken/status/1161058926936449025
https://twitter.com/1amnerd/status/1161208412752863233

A user on Hacker News comments, "Surprised by this news. Tumblr has lost a ton of momentum since its policy change, and the site itself doesn't have a very strong "brand" audience attached to it."

Tumblr open sources its Kubernetes tools for better workflow integration
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Verizon hosted Ericsson 2018 OSS/BSS User Group with a 'Quest For Easy' theme


You can now use fingerprint or screen lock instead of passwords when visiting certain Google services thanks to FIDO2-based authentication

Sugandha Lahoti
13 Aug 2019
2 min read
Google has announced FIDO2-based local user verification for Google Accounts, for a simpler authentication experience when viewing saved passwords for a website. In short, you can now use a fingerprint or screen lock instead of a password when visiting certain Google services.

This password-free authentication service leverages the FIDO2 standards, FIDO CTAP, and WebAuthn, which are designed to "provide simpler and more secure authentication experiences. They are a result of years of collaboration between Google and many other organizations in the FIDO Alliance and the W3C," according to a blog post from the company.

The new authentication process is designed to speed up logging into Google accounts while being more secure, replacing password typing with direct biometric authentication. Here is how it works: if you tap on any one of your saved passwords on passwords.google.com, Google will prompt you to "Verify that it's you," at which point you can authenticate using your fingerprint or any other method you usually use to unlock your phone (such as a PIN or a touch pattern). Google has not yet made clear which Google services can use the biometric method; the blog post cited Google's online password manager as the example.

Source: Google

Google is also being cautious about data privacy, noting, "Your fingerprint is never sent to Google's servers - it is securely stored on your device, and only a cryptographic proof that you've correctly scanned it is sent to Google's servers. This is a fundamental part of the FIDO2 design." The sign-in feature is currently available on all Pixel devices and will be made available to all Android phones running 7.0 Nougat or later "over the next few days."
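The privacy property Google describes (the biometric check happens locally, and only a signed proof of a fresh server challenge leaves the device) can be sketched as a toy challenge-response flow. This is a deliberately simplified model, not the FIDO2 protocol: real FIDO2/WebAuthn uses public-key signatures so the server never holds any signing secret, whereas this sketch substitutes HMAC purely to stay within the Python standard library, and all class and function names are invented.

```python
# Toy model of local-verification-then-signed-proof. NOT real FIDO2:
# WebAuthn registers a *public* key with the server; HMAC is used here
# only to keep the sketch dependency-free.
import hashlib
import hmac
import secrets


class Authenticator:
    """Stands in for the phone's secure hardware."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # never leaves the device

    def register(self):
        # In real FIDO2 this would return a public key, not the secret.
        return self._key

    def sign(self, challenge, user_verified):
        # The local fingerprint/PIN/pattern check gates signing.
        if not user_verified:
            raise PermissionError("local user verification failed")
        return hmac.new(self._key, challenge, hashlib.sha256).digest()


def server_verify(registered_key, challenge, proof):
    expected = hmac.new(registered_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)


device = Authenticator()
key_on_server = device.register()
challenge = secrets.token_bytes(16)            # fresh per login attempt
proof = device.sign(challenge, user_verified=True)
print(server_verify(key_on_server, challenge, proof))  # True
```

The key point the sketch illustrates is that the biometric data itself never appears in any message: only the challenge and the proof cross the wire, and a stale or replayed proof fails against a fresh challenge.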
Google Titan Security key with secure FIDO two factor authentication is now available for purchase
Google to provide a free replacement key for its compromised Bluetooth Low Energy (BLE) Titan Security Keys
Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users


Introducing Coil, an open-source Android image loading library backed by Kotlin Coroutines

Bhagyashree R
13 Aug 2019
3 min read
Yesterday, Colin White, a senior Android engineer at Instacart, introduced Coroutine Image Loader (Coil), a fast, lightweight, and modern image loading library for Android backed by Kotlin.

https://twitter.com/colinwhi/status/1160943333033648128

There are already a number of image loading libraries for Android, such as Glide, Fresco, Picasso, and Mirage. With Coil, however, the Instacart team aims to introduce a library that is "more modern and simpler."

Key features in Coil

Backed by Kotlin
Coil offers a "simple, elegant API" by leveraging Kotlin language features like extension functions, inlining, lambda params, and sealed classes. It provides strong support for non-blocking asynchronous computation and work cancellation, while ensuring maximum thread reuse, with the help of Kotlin Coroutines.

Leverages modern dependencies
Coil relies on standard, recommended dependencies such as OkHttp, Okio, and AndroidX Lifecycles. Square's OkHttp and Okio are efficient by default and enable Coil to avoid reimplementing things like disk caching and stream buffering. Likewise, AndroidX Lifecycles is the recommended way to track lifecycle state.

Lightweight
Coil's codebase has roughly 8x fewer lines of code than Glide's. It adds approximately 1,500 methods to your APK, which is comparable to Picasso and significantly fewer than Glide and Fresco.

Supports extension
Coil's image pipeline consists of three main classes: mappers, fetchers, and decoders. You can use these interfaces to augment or override the base behavior and add support for new file types.

Supports dynamic image sampling
Coil introduces a new feature, dynamic image sampling. Say you want to load a 500x500 image into a 100x100 ImageView: the library will load the image into memory at 100x100. But what if you want the quality of the full 500x500 image? In that case, the 100x100 image is used as a placeholder while the 500x500 image is read. Coil handles this automatically for all BitmapDrawables. The placeholder is set synchronously on the main thread, preventing white flashes where the ImageView is empty for one frame, and creating a visual effect where the image detail appears to fade in thanks to the crossfade animation.

To know more about Coil, check out its official documentation and GitHub repository.

25 million Android devices infected with 'Agent Smith', a new mobile malware
Mozilla launches Firefox Preview, an early version of a GeckoView-based Firefox for Android
Facebook released Hermes, an open-source JavaScript engine to run React Native apps on Android

PyTorch 1.2 is here with a new TorchScript API, expanded ONNX export, and more

Bhagyashree R
12 Aug 2019
3 min read
Last week, the PyTorch team announced the release of PyTorch 1.2. This version comes with a new TorchScript API with improved Python language coverage, expanded ONNX export, a standard nn.Transformer module, and more.

https://twitter.com/PyTorch/status/1159552940257923072

Here are some of the updates in PyTorch 1.2:

A new TorchScript API

TorchScript enables you to create models that are serializable and optimizable from PyTorch code. PyTorch 1.2 brings a new "easier-to-use TorchScript API" for converting nn.Modules into ScriptModules. torch.jit.script will now recursively compile the functions, methods, and classes that it encounters, and the preferred way to create a ScriptModule is torch.jit.script(nn_module_instance) instead of inheriting from torch.jit.ScriptModule. With this update, some items are considered deprecated, and developers are advised not to use them in new code: the @torch.jit.script_method decorator, classes that inherit from torch.jit.ScriptModule, the torch.jit.Attribute wrapper class, and the __constants__ array. TorchScript also has improved support for Python language constructs and Python's standard library. It supports iterator-based constructs such as for..in loops, zip(), and enumerate(), as well as the math and string libraries and other Python built-in functions.

Full support for ONNX Opset export

The PyTorch team has worked with Microsoft to bring full support for exporting ONNX Opset versions 7, 8, 9, and 10. PyTorch 1.2 includes the ability to export dropout, slice, flip, and interpolate in Opset 10. ScriptModule has been improved to support multiple outputs, tensor factories, and tuples as inputs and outputs. Developers can also register their own symbolics to export custom ops, and set the dynamic dimensions of inputs during export.
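The new TorchScript entry point described above can be sketched in a few lines. The Scale module below is invented for illustration (assumes PyTorch 1.2 or later is installed); note that it is an ordinary nn.Module, with no ScriptModule subclassing and no @torch.jit.script_method decorator. The final assertion demonstrates the 1.2 breaking change: comparison results are now torch.bool rather than torch.uint8.

```python
# Sketch of the 1.2-preferred scripting API: torch.jit.script() on an
# ordinary nn.Module instance. The Scale module is invented for illustration.
import torch
import torch.nn as nn


class Scale(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Plain Python control flow is compiled recursively by TorchScript.
        if float(x.sum()) > 0.0:
            return x * 2.0
        return x


scripted = torch.jit.script(Scale())  # no ScriptModule subclass needed
print(scripted(torch.tensor([1.0, 2.0])))  # tensor([2., 4.])

# Breaking change in 1.2: comparisons return torch.bool, not torch.uint8.
assert (torch.tensor([1]) < torch.tensor([2])).dtype == torch.bool
```

The scripted module can then be saved with scripted.save(...) and loaded without a Python dependency, which is the point of TorchScript serialization.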
A standard nn.Transformer

PyTorch 1.2 comes with a standard nn.Transformer module that allows you to modify its attributes as needed. Based on the paper "Attention Is All You Need," this module relies entirely on an attention mechanism for drawing global dependencies between input and output. It is designed so that its individual components can be used independently; for instance, you can use the nn.TransformerEncoder API without the larger nn.Transformer.

Breaking changes in PyTorch 1.2

The return dtype of comparison operations, including lt, le, gt, ge, eq, and ne, is now torch.bool instead of torch.uint8. The type of torch.tensor(bool) and torch.as_tensor(bool) has likewise changed to the torch.bool dtype. Some linear algebra functions have been removed in favor of renamed operations; the release notes include a table listing all the removed operations and their alternatives. (Source: PyTorch)

Check out the PyTorch release notes to know more in detail.

PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
Facebook open-sources PyText, a PyTorch based NLP modeling framework


Red Hat joins the RISC-V foundation as a Silver level member

Vincy Davis
12 Aug 2019
2 min read
Last week, RISC-V announced that Red Hat is the latest major company to join the RISC-V Foundation. Red Hat has joined as a Silver-level member, which carries annual dues of US$5,000 and includes five discounted registrations for RISC-V workshops. RISC-V states in the official blog post, "As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future."

RISC-V is a free and open-source hardware instruction set architecture (ISA) that aims to enable extensible software and hardware freedom in computing design and innovation. As a member of the RISC-V Foundation, Red Hat now officially agrees to support the use of RISC-V chips. Since RISC-V hardware and its software ecosystem are not yet competitive on performance, member companies are expected to continue using both Arm and RISC-V chips.

Read More: RISC-V Foundation officially validates RISC-V base ISA and privileged architecture specifications

In January, Raspberry Pi also joined the RISC-V Foundation, though it has not announced whether it will release a RISC-V developer board instead of using Arm-based chips. IBM has been a RISC-V Foundation member for many years. In October last year, Red Hat, the major distributor of open-source software and technology, was acquired by IBM for $34 billion, with an aim to deliver a next-generation hybrid multi-cloud platform; it follows naturally that IBM would want Red Hat to join the RISC-V Foundation as well. Other tech giants like Google, Qualcomm, Samsung, and Alibaba are also part of the RISC-V Foundation.

Alibaba's chipmaker launches open source RISC-V based 'XuanTie 910 processor' for 5G, AI, IoT and self-driving applications
Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap
AdaCore joins the RISC-V Foundation, adds support for C and Ada compilation


Telegram introduces new features: Slow mode switch, custom titles, comments widget and much more!

Amrata Joshi
12 Aug 2019
3 min read
Last week, the team at Telegram, the messaging app, introduced new features for group admins and users, including a Slow Mode switch, custom titles, video features, and much more.

What's new in Telegram?

Slow Mode switch
The Slow Mode feature allows a group admin to control how often members can send messages. Once the admin enables Slow Mode in a group, each user can send one message per the interval the admin chooses, and a timer shows users how long they must wait before sending their next message. The feature is intended to make group conversations more orderly and to raise the value of each individual message. The official post suggests admins "Keep it on permanently, or toggle as necessary to throttle rush hour traffic."

Image Source: Telegram

Custom titles
Group owners can now set custom titles for admins like 'Meme Queen', 'Spam Hammer' or 'El Duderino'. These custom titles are shown alongside the default admin labels. To add a custom title, edit an admin's rights in Group Settings.

Image Source: Telegram

Silent messages
Telegram now lets users message friends without any sound: just hold the send button to have any message or media delivered silently.

New features for videos
Videos shared on Telegram now show thumbnail previews as users scroll through them, to help find the moment they were looking for. If users add a timestamp like 0:45 to a video caption, it is automatically highlighted as a link, and tapping the timestamp plays the video from that spot.

Comments widget
The team has released a new tool called Comments.App for commenting on channel posts. With the comments widget, users can log in with just two taps and comment with text and photos, as well as like, dislike, and reply to comments from others.

Some users are excited about the news, though the quoted Hacker News commenter notes that end-to-end encryption is still not Telegram's default: "I really like Telegram. Only end-to-end encryption by default and in group chats would make it perfect." To know more about this news, check out the official post by Telegram.

Telegram faces massive DDoS attack; suspects link to the ongoing Hong Kong protests
Hacker destroys Iranian cyber-espionage data; leaks source code of APT34's hacking tools on Telegram
Trick or a treat: Telegram announces its new 'delete feature' that deletes messages on both the ends
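The Slow Mode behavior described above amounts to a per-user rate limit of one message per chosen interval, with a visible countdown. A minimal, hypothetical sketch of that logic follows; this illustrates the concept only and is not Telegram's implementation (the injectable clock parameter exists purely to make the sketch testable).

```python
# Concept sketch of a slow-mode throttle: one message per interval per user.
# Not Telegram's implementation; names are invented for illustration.
import time


class SlowMode:
    def __init__(self, interval_seconds, clock=time.monotonic):
        self.interval = interval_seconds
        self.clock = clock          # injectable for deterministic testing
        self.last_sent = {}         # user_id -> timestamp of last message

    def seconds_until_allowed(self, user_id):
        """The countdown shown to the user; 0 means they may post now."""
        last = self.last_sent.get(user_id)
        if last is None:
            return 0.0
        return max(0.0, self.interval - (self.clock() - last))

    def try_send(self, user_id):
        """Record and allow the message, or reject it while throttled."""
        if self.seconds_until_allowed(user_id) > 0:
            return False
        self.last_sent[user_id] = self.clock()
        return True
```

An admin toggling Slow Mode on or off, or changing the interval, maps to constructing the limiter with a different interval_seconds; each member is throttled independently.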


Ubuntu 19.10 will now support experimental ZFS root file-system install option

Vincy Davis
12 Aug 2019
4 min read
Last week, Ubuntu announced that the upcoming Ubuntu version 19.10 will support ZFS as a root file system, and should be used as an ‘experimental’ installer. The ZFS support will enable an easy to use interface, provide automated operations and offer high flexibility to Ubuntu users. Initially, Ubuntu 19.10 will be supported on desktop only, however, the layout has been kept extensible for servers, later on. Ubuntu has also warned users not to use ZFS on production systems yet; users can use it for experimental purposes and provide feedback. Ubuntu develops a new user space daemon - ‘zsys’ In order to make the basic and advanced concepts of ZFS easily accessible and transparent to users, Ubuntu is developing a new user space daemon, called zsys, which is a ZFS system tool. It will allow multiple ZFS systems to run in parallel on the same machine, and have other advantages like automated snapshots, separating user data from system and persistent data to manage complex zfs dataset layouts. Ubuntu is designing the system in such a way that people with little knowledge of ZFS will also be able to use it flexibly. Zsys’s cooperation with GRUB and ZFS on Linux initramfs will yield advanced features which will be made official by Ubuntu, later on. Users can check out the current progress and what’s next with zsys on the Ubuntu projects Github page. Progress update of Ubuntu 19.10 ZFS has already been shipped on Linux version 0.8.1. It supports features like native encryption, trimming support, checkpoints, raw encrypted zfs transmissions, project accounting and quota and many performance enhancements. Some post-release upstream fixes has been backported, to provide a better user experience and increase reliability. A new support has been added in the GNU GRUB menu. All existing ZFS on root user can enjoy these benefits, as soon as version Ubuntu 19.10 is updated. The post states that “We still have a lot to tackle and 19.10 will be only the beginning of the journey. 
However, the path forward is exciting and we hope to be able to bring something fresh and unique to ZFS users.”

Most users are happy with Ubuntu 19.10 supporting ZFS.

https://twitter.com/jtteag/status/1159143800821952514

A user on Hacker News comments, “Having been a ZFS fan since the twilight of OpenSolaris, I'm very glad to see ZoL taking off. Rolling it into Ubuntu and making it officially supported was a great move - after some frustration with trying to run ZFS on a CentOS box and having it occasionally break after a kernel update, having it easily available on Ubuntu was like a breath of fresh air. Having it readily available as a root filesystem, and having TRIM support at long last, is great news.”

A few users, however, are unhappy with Ubuntu 19.10 supporting ZFS, citing the maintenance burden. A Redditor says, “I'm a big fan of Ubuntu, use it on one of my own machines and recommend it to people. But almost every time they have decided to go it alone and make something a unique selling point it has backfired (Upstart, Mir, Unity, bzr, CouchDB, Ubuntu one). No other mainstream distro is going to adopt ZFS. Probably ubuntu will drop it in a few years when they realize they can't carry the maintenance burden. If you use ZFS for your file system then you won't be able to use standard recovery tools or access it from a dual boot. You won't be able to revert back to and older ubuntu version. You won't be able to install upstream kernels.”

For more details, head over to the Ubuntu blog.

Canonical, the company behind the Ubuntu Linux distribution, was hacked; Ubuntu source code unaffected
Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards
Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more
Sugandha Lahoti
12 Aug 2019
3 min read

Uber goes on a hiring freeze in its engineering teams after a painful second-quarter operating loss of $5.4 billion

Uber has stopped recruiting new candidates for its engineering teams in the U.S. and Canada after reporting its largest-ever quarterly loss of $5.4 billion in its second-quarter earnings call. The loss is attributed to heavy competition and IPO expenses. The second-quarter 2019 results were released Thursday, last week. Of this $5.4 billion, Uber paid out almost $4 billion in stock-based compensation as one-time charges related to its IPO, which inflated the loss figure. That leaves almost $1.2 billion burned on operations this quarter, of which 50% went to Uber Eats subsidies.

The investor report also highlights an increase in bookings (up 31%), active users (up 30%), trips (up 35%), and revenue (up 14%). In July, the Uber platform reached over 100 million Monthly Active Platform Consumers. Its core business, ridesharing, has improved its gross margin and unit economics quarter-over-quarter.

Uber has also frozen hiring for software engineer and product manager positions across the U.S. and Canada, citing that its hiring goals have been exceeded. According to Yahoo, which first reported the news, Uber has canceled scheduled on-site interviews for tech roles. Job applicants were informed that the positions are being put on hold due to a hiring freeze in engineering teams in the U.S. and Canada. In emails sent to job interviewees, Uber recruiters explained “there have been some changes” and the opportunity has been “put on hold for now,” according to emails reviewed by Yahoo Finance. Hiring remains unaffected for workers in Uber’s freight and autonomous vehicles businesses.

Uber also laid off 400 employees in its marketing department earlier this month. The cut amounted to a third of the 1,200-employee marketing team and followed Uber’s IPO and a first-quarter investor report with losses of $1 billion. The reorganized marketing team will be under the leadership of Mike Strickman. 
Many of Uber’s teams are “too big, which creates overlapping work, makes for unclear decision owners, and can lead to mediocre results,” CEO Dara Khosrowshahi wrote in an email sent to employees and shared with TechCrunch. “As a company, we can do more to keep the bar high, and expect more of ourselves and each other,” he added. Khosrowshahi said the restructuring aims to put the marketing team, and the company, back on track. The move suggests Uber is getting quite cautious about headcount in order to protect its strategic priorities.

https://twitter.com/RonOpti/status/1159982955487383552

In May, Uber drivers went on a two-hour strike in several major cities around the world, coinciding with Uber’s IPO. Labor groups organizing the strike protested the company’s poor payment and labor practices.

Uber and Lyft drivers go on strike a day before Uber IPO roll-out
Uber introduces Base Web, an open source “unified” design system for building websites in React
Uber open-sources Peloton, a unified Resource Scheduler
Savia Lobo
12 Aug 2019
5 min read

Microsoft contractors also listen to Skype and Cortana audio recordings, joining Amazon, Google and Apple in privacy violation scandals

In a recent report, Motherboard reveals, “Contractors working for Microsoft are listening to personal conversations of Skype users conducted through the app's translation service.” The allegation is based on a cache of internal documents, screenshots, and audio recordings obtained by Motherboard. These files also reveal that the contractors were listening to voice commands given to Cortana. Skype's FAQ does mention that the company collects and uses conversations to improve products and services, and that it may analyze the audio of phone calls a user wants translated in order to improve the chat platform's services; however, it nowhere informs users that some of the voice analysis may be done manually.

Earlier this year, Apple, Amazon, and Google faced scrutiny over how they handle users’ voice data obtained from their respective voice assistants. After the Guardian’s investigation into Apple employees’ listening in on Siri conversations was published, Apple announced it had temporarily suspended human transcribers from listening to conversations users had with Siri. Google agreed to stop listening to and transcribing Google Assistant recordings for three months in Europe. Google’s decision to halt its review process was disclosed after a German privacy regulator started investigating the program, after “a contractor working as a Dutch language reviewer handed more than 1,000 recordings to the Belgian news site VRT which was then able to identify some of the people in the clips,” TechCrunch reports. Amazon, on the other hand, now allows users to opt out of the program that allows contractors to manually review voice data. Bloomberg was the first to report, in April, that “Amazon had a team of thousands of workers around the world listening to Alexa audio requests with the goal of improving the software”. 
The anonymous Microsoft contractor who shared the cache of files with Motherboard said, “The fact that I can even share some of this with you shows how lax things are in terms of protecting user data.”

In an online chat, Frederike Kaltheuner, data exploitation program lead at activist group Privacy International, told Motherboard, “People use Skype to call their lovers, interview for jobs, or connect with their families abroad. Companies should be 100% transparent about the ways people's conversations are recorded and how these recordings are being used." She further added, “If a sample of your voice is going to human review (for whatever reason) the system should ask them whether you are ok with that, or at least give you the option to opt-out."

Pat Walshe, an activist from Privacy Matters, said in an online chat with Motherboard, "The marketing blurb for [Skype Translator] refers to the use of AI not humans listening in. This whole area needs a regulatory review." "I’ve looked at it (Skype Translator FAQ) and don’t believe it amounts to transparent and fair processing," he added.

A Microsoft spokesperson told Motherboard in an emailed statement, "Microsoft collects voice data to provide and improve voice-enabled services like search, voice commands, dictation or translation services. We strive to be transparent about our collection and use of voice data to ensure customers can make informed choices about when and how their voice data is used. Microsoft gets customers’ permission before collecting and using their voice data." The statement continues, "We also put in place several procedures designed to prioritize users’ privacy before sharing this data with our vendors, including de-identifying data, requiring non-disclosure agreements with vendors and their employees, and requiring that vendors meet the high privacy standards set out in European law. 
We continue to review the way we handle voice data to ensure we make options as clear as possible to customers and provide strong privacy protections."

How safe is user data with these smart assistants looped with manual assistance?

According to the documents and screenshots, when a contractor is given a piece of audio to transcribe, they are also given a set of approximate translations generated by Skype's translation system. “The contractor then needs to select the most accurate translation or provide their own, and the audio is treated as confidential Microsoft information, the screenshots show,” Motherboard reports. Microsoft said this data is only available to the transcribers “through a secure online portal,” and that the company takes steps to remove identifying information such as user or device identification numbers.

The contractor told Motherboard, "Some stuff I've heard could clearly be described as phone sex. I've heard people entering full addresses in Cortana commands or asking Cortana to provide search returns on pornography queries. While I don't know exactly what one could do with this information, it seems odd to me that it isn't being handled in a more controlled environment."

In such an environment, users may no longer feel safe: a company’s FAQ assures them that their data is protected while their conversations are in fact being listened to. A user on Reddit commented, “Pretty sad that we can not have a secure, private conversation from one place to another, anymore, without taking extraordinary measures, which congress also soon wants to poke holes in, by mandating back doors in these systems.”

https://twitter.com/masonremaley/status/1159140919247036416

After this revelation, people may rush to take steps such as uninstalling Skype or not sharing personal details within earshot of their smart home devices. However, such steps won’t erase anything the transcribers might have heard in the past. 
Will these revelations also reduce sales of smart home devices, directly affecting the IoT market for each company that offers them?

https://twitter.com/RidT/status/1159101690861301760

To know more about this news in detail, read Motherboard’s report.

Microsoft reveals Russian hackers “Fancy Bear” are the culprit for IoT network breach in the U.S.
Microsoft introduces public preview of Azure Dedicated Host and updates its licensing terms
Data Transfer Project: Now Apple joins Google, Facebook, Microsoft and Twitter to make data sharing seamless
Bhagyashree R
09 Aug 2019
4 min read

Matthew Flatt’s proposal to change Racket’s s-expressions based syntax to infix representation creates a stir in the community

RacketCon 2019 took place last month, from July 13 to 14, bringing together the Racket community to discuss ideas and future plans for the Racket programming language. Matthew Flatt, one of the core developers, took the stage to give his talk, State of Racket. He spoke about the growing community, performance improvements, and much more. He also touched upon his recommendation to change the surface syntax of Racket2, which has sparked a lot of discussion in the Racket community.

https://www.youtube.com/watch?v=dnz6y5U0tFs&t=390

Later in July, Greg Hendershott, who has contributed Racket projects like Rackjure and Travis-Racket and has driven a lot of community participation, expressed his concern about the change in a blog post. “I’m concerned the change won’t help grow the community; instead hurt it,“ he wrote. He further shared that he will shift his focus towards other programming languages, implying that he is stepping down as a Racket contributor.

Matthew Flatt recommends surface syntax change for removing technical barriers to entry

There is no official proposal for this change yet, but Flatt has discussed it a couple of times. According to Flatt’s recommendation, Racket2’s ‘lispy’ s-expressions should be replaced with something that is not a barrier to entry for new users. He suggests removing or reducing the use of parentheses and introducing infix operators, meaning the operator is written between its operands, for instance, a + b. “More significantly, parentheses are certainly an obstacle for some potential users of Racket. Given the fact of that obstacle, it's my opinion that we should try to remove or reduce the obstacle,“ Flatt writes in a mailing list. Racket is a general-purpose, multi-paradigm programming language based on the Scheme dialect of Lisp. It is also an ecosystem for language-oriented programming. 
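To see what is at stake, the contrast between the two notations can be sketched with a toy evaluator, written here in Python purely for illustration (the tuple encoding and function names are made up, not anything from Racket itself): fully parenthesized prefix s-expressions need no operator-precedence rules, while infix notation does.

```python
import operator

# Map each operator symbol to a Python binary function.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def eval_sexpr(expr):
    """Evaluate a prefix s-expression encoded as nested tuples: (op, arg1, arg2, ...)."""
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    vals = [eval_sexpr(a) for a in args]
    # Fold left over the arguments, as Lisp-family operators take many operands.
    result = vals[0]
    for v in vals[1:]:
        result = OPS[op](result, v)
    return result

# Racket-style prefix: (+ 1 (* 2 3)) -- grouping is explicit, no precedence needed.
print(eval_sexpr(("+", 1, ("*", 2, 3))))
# The infix spelling, 1 + 2 * 3, means the same thing only because the
# reader knows that * binds tighter than + -- the rule Flatt's proposal
# would have to introduce into Racket2's reader.
```

The point of the sketch is that the parenthesized form trades familiarity for uniformity: every application looks the same, which is exactly what makes s-expressions easy to manipulate in language-oriented programming, and exactly what newcomers find alien.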
Flatt further explained his rationale: the current syntax is a hindrance not only to potential users of Racket as a programming language, but also to those who want to use it as “a programming-language programming language”. He adds, “The idea of language-oriented programming (LOP) doesn't apply only to languages with parentheses, and we need to demonstrate that.” With this change, he hopes to make Racket2 more familiar and easier to accept for users outside the Racket community.

Some Racket developers believe changing the s-expression-based syntax is not “desirable”

Many developers in the Racket community share a sentiment similar to Greg Hendershott’s. A user on Hacker News added, “Getting rid of s expressions without it being part of a more cohesive improvement (like better supporting a new type system or something) just for mainstream appeal seems like an odd choice to me.” Another user added, “A syntax without s-expressions is not an innovative feature. For me, it's not even desirable, not at all. When I'm using non-Lispy languages like Rust, Ada, Nim, and currently a lot of Go, that's despite their annoying syntactic idiosyncrasies. All of those quirky little curly braces and special symbols to save a few keystrokes. I'd much prefer if all of these languages used s-expressions. That syntax is so simple that it makes you focus on the semantics.”

Others are more neutral about the suggested change. “To me, Flatt's proposal for Racket2 smells more like adding tools to better facilitate infix languages than deprecating S-expressions. Given Racket's pedagogical mission, it looks more like a move toward migrating the HtDP series of languages (Beginning Student, Intermediate Student, Intermediate Student with Lambda, and Advanced Student) to infix syntax than anything else. Not really the end of the world or a big change to the larger Racket community. 
Just another extension of an ecosystem that remains s-expression based despite Algol and Datalog shipping in the box,” one user wrote.

To know more about this change, check out the discussion on Racket’s mailing list. You can also share your proposals via Racket2 RFCs.

Racket 7.3 releases with improved Racket-on-Chez, refactored IO system, and more
Racket 7.2, a descendant of Scheme and Lisp, is now out!
Racket v7.0 is out with overhauled internals, updates to DrRacket, TypedRacket among others
Vincy Davis
09 Aug 2019
4 min read

Apple announces expanded security bug bounty program up to $1 million; plans to release iOS Security Research Device program in 2020

Apple made some major announcements at the Black Hat cybersecurity conference 2019, which concluded yesterday in Las Vegas. Apple’s head of security engineering, Ivan Krstić, announced that anybody who can hack an iPhone will get up to a $1 million reward. Apple has also released a new payout system for security researchers, with rewards depending on the type of vulnerability found. Krstić also unveiled Apple’s new iOS Security Research Device program, which will launch next year. As part of the program, qualified security researchers will be provided with special iPhones to hunt for flaws in them.

Apple expands its security bug bounty program

Apple first launched its bug bounty program in 2016. The previous program topped out at $200,000 and included only those in Apple’s invite-only bug bounty program. Yesterday, Apple announced that under its new security bug bounty program, anyone who can hack an iPhone will receive up to $1 million. The bounty program has also been opened to all security researchers, and it now covers all of Apple’s platforms, including iCloud, iOS, tvOS, iPadOS, watchOS, and macOS.

https://twitter.com/mikebdotorg/status/1159557138580004864

Apple has also released a new payout schedule, with payouts starting at $100,000 for finding a bug that allows lock screen bypass or unauthorized access to iCloud. Researchers can also earn up to a 50% bonus if they find a bug in pre-release software. The top payout is reserved for hackers who can demonstrate a zero-click kernel code execution with persistence.

https://twitter.com/Manzipatty/status/1159680310348537861
https://twitter.com/sdotknight/status/1159807563036340224
https://twitter.com/kennwhite/status/1159705960061030400

Apple’s new iOS Security Research Device program

Apple gave out details about its new iOS Security Research Device program, which will be out next year. 
In this program, Apple will supply special iPhones to security researchers to help them find security flaws in iOS. However, the program will be available only to researchers with substantial experience in security research, on any platform.

https://twitter.com/0x30n/status/1159553364159414272

The special devices will differ from regular iPhones: they will come with ssh, a root shell, and advanced debug capabilities to aid in identifying bugs. “This is an unprecedented fully Apple supported iOS security research platform,” said Krstić at the conference.

https://twitter.com/skbakken/status/1159556808198852608
https://twitter.com/marconielsen/status/1159584902339276801

Though many users have praised Apple for the substantial rewards and for initiating the security research device program, a few feel the amounts are not that large. Given the knowledge and expertise required to find these bugs, some suggest that Apple should pay these hackers more, as they are the ones saving Apple from a lot of negative P.R. by finding bugs that even Apple’s own employees are sometimes unable to find.

A user on Hacker News comments, “1M is a lot of money to me, a regular person, but when you consider that top security engineering talent could be making north of 500k in total compensation, 1M suddenly doesn’t seem all that impressive. It’s a good bet to make on their risk. Imagine paying a mere 1M to avoid a public fiasco where all of your users get owned. This just seems like good business. They could make it 5M, and it would still be worth it to them in the medium to long term.” Another user says, “I'm surprised by how cheap the vulnerabilities market is. A good exploit, against a popular product like Chrome, selling for 100k or even $1M may sound like a lot, but it's really pennies for any top software firm. 
And $1M is still a lot for a vulnerability by market prices.” Another comment on Hacker News reads, “When I read the article, my first reaction was "Only a million?" Considering the importance of a bug like this to Apple's business and the size of their cash hoard, this sounds like they don't actually care that much.”

To know about other highlights of the Black Hat cybersecurity conference 2019, head over to our full coverage.

Apple Card, iPhone’s new payment system, is now available for select users
Apple plans to suspend Siri response grading process due to privacy issues
Data Transfer Project: Now Apple joins Google, Facebook, Microsoft and Twitter to make data sharing seamless
Bhagyashree R
09 Aug 2019
5 min read

AMD competes with Intel by launching EPYC Rome, world’s first 7 nm chip for data centers, luring in Twitter and Google

On Wednesday, Advanced Micro Devices (AMD) unveiled its highly anticipated second-generation EPYC processor for data centers, code-named “Rome”. Since the launch, the company has announced agreements with many tech giants, including Intel’s biggest customers, Twitter and Google. Lisa Su, AMD’s president and CEO, said during her keynote at the launch event, “Today, we set a new standard for the modern data center. Adoption of our new leadership server processors is accelerating with multiple new enterprise, cloud and HPC customers choosing EPYC processors to meet their most demanding server computing needs.”

EPYC Rome: The world’s first 7 nm server chip

AMD first showcased the EPYC Rome chip, the world's first 7 nm server processor, at its Next Horizon 2018 event. Based on the Zen 2 microarchitecture, it features up to eight 7 nm-based chiplets with a 14 nm-based IO die in the center, interconnected by an Infinity Fabric. The chip aims to offer twice the performance per socket and about 4x the floating-point performance of the previous generation of EPYC chips.

https://www.youtube.com/watch?v=kC3ny3LBfi4

At the launch, a performance comparison based on the SPECrate 2017 int-peak benchmark showed the top-of-the-line 64-core AMD EPYC 7742 delivering double the performance of the top-of-the-line 28-core Intel Xeon Platinum 8280M. Priced at under $7,000, it is also far more affordable than Intel’s chip, which is priced at $13,000.

AMD competes with Intel, the dominant supplier of data center chips

AMD’s main competitor in the data center chip realm is Intel, the dominant supplier of data center chips with more than 90% of the market share. However, AMD was able to capture a small market share with the release of its first-generation EPYC server chips. Coming up with a second-generation chip that is performant yet affordable gives AMD an edge over Intel. 
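As a rough illustration of why the pricing matters, the figures quoted above can be turned into a back-of-the-envelope performance-per-dollar comparison. Note the assumptions: the $6,950 EPYC price stands in for “under $7,000”, and “double the performance” on SPECrate 2017 int-peak is taken at face value, so this is illustrative rather than a rigorous benchmark analysis.

```python
# Back-of-the-envelope comparison using the figures quoted in the article.
# relative_perf encodes the "double the performance" SPECrate 2017 int-peak claim.
epyc_7742 = {"relative_perf": 2.0, "price": 6_950}    # assumed stand-in for "under $7,000"
xeon_8280m = {"relative_perf": 1.0, "price": 13_000}  # Intel's quoted list price

def perf_per_dollar(chip):
    return chip["relative_perf"] / chip["price"]

# How many times more performance per dollar does EPYC Rome offer?
ratio = perf_per_dollar(epyc_7742) / perf_per_dollar(xeon_8280m)
print(f"EPYC Rome: roughly {ratio:.1f}x the performance per dollar")
```

On these numbers the EPYC part comes out at well over three times the Xeon's performance per dollar, which is the kind of gap that explains the customer wins described below.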
Donovan Norfolk, executive director of Lenovo’s data center group, told Data Center Knowledge, “Intel had a significant portion of the market for a long time. I think they’ll continue to have a significant portion of it. I do think that there are more customers that will look at AMD than have in the past.”

The delay in the launch of Intel’s 10 nm chips may also have worked in AMD’s favor: after a long wait, they officially launched earlier this month, while Intel’s 7 nm chips are expected to arrive in 2021.

The EPYC Rome chip has already grabbed the attention of many tech giants. Google is planning to use the EPYC server chip in its internal data centers and also wants to offer it to external developers as part of its cloud computing offerings. Twitter will start using EPYC servers in its data centers later this year. Hewlett Packard Enterprise is already using the chips in three ProLiant servers and plans to have 12 systems by the end of this year. Dell also plans to add second-gen EPYC servers to its portfolio this fall. Following AMD’s customer announcements, Intel shares were down 0.6% to $46.42 in after-hours trading.

Though AMD’s chips beat Intel’s in some computing tasks, they do lag in a few desirable, advanced features. Patrick Moorhead, founder of Moor Insights & Strategy, told Reuters, “Intel chip features for machine learning tasks and new Intel memory technology being with customers such as German software firm SAP SE (SAPG.DE) could give Intel an advantage in those areas.”

This news sparked a discussion on Hacker News. A user said, “This is a big win for AMD and for me it reconfirms that their strategy of pushing into the mainstream features that Intel is trying to hold hostage for the "high end" is a good one. 
Back when AMD first introduced the 64-bit extensions to the x86 architecture and directly challenged Intel who was selling 64 bits as a "high end" feature in their Itanium line, it was a place where Intel was unwilling to go (commoditizing 64-bit processors). That proved pretty successful for them. Now they have done it again by commoditizing "high core count" processors. Each time they do this I wonder if Intel will ever learn that you can't "get away" with selling something for a lot of money that can be made more cheaply forever.”

Another user commented, “I hope AMD turns their attention to machine learning tasks soon not just against Intel but NVIDIA also. The new Titan RTX GPUs with their extra memory and Nvlink allow for some really awesome tricks to speed up training dramatically but they nerfed it by only selling without a blower-style fan making it useless for multi-GPU setups. So the only option is to get Titan RTX rebranded as a Quadro RTX 6000 with a blower-style fan for $2,000 markup. $2000 for a fan. The only way to stop things like this will be competition in the space.”

To know more in detail, you can watch the EPYC Rome launch event:

https://www.youtube.com/watch?v=9Jn9NREaSvc

Intel’s 10th gen 10nm ‘Ice Lake’ processor offers AI apps, new graphics and best connectivity
Why Intel is betting on BFLOAT16 to be a game changer for deep learning training? Hint: Range trumps Precision.
Intel’s new brain inspired neuromorphic AI chip contains 8 million neurons, processes data 1K times faster