
Tech News


Microsoft adds Telemetry files in a “security-only update” without prior notice to users

Savia Lobo
12 Jul 2019
4 min read
The recent Windows 7 "security-only" update also includes telemetry components, which users may be unaware of. According to ZDNet, the components could be used to quietly monitor individual PCs, for anything from "innocuous data collection to outright spyware". Per Microsoft, "security-only updates" should not include quality fixes, diagnostic tools, or anything beyond security fixes. In 2016, Microsoft split Windows 7 and 8.1 patching into two tracks: a monthly rollup of updates and fixes, and, for those who want only essential patches, a security-only update package.

Why is this "security-only" update suspicious?

What was surprising about this month's security-only update, formally titled "July 9, 2019—KB4507456 (Security-only update)", is that it bundled the Compatibility Appraiser, KB2952664, which is designed to identify issues that could prevent a Windows 7 PC from updating to Windows 10.

An anonymous user commented on Woody Leonhard's post about the July 2019 security update, published on his website, AskWoody. Leonhard is a Senior Contributing Editor at InfoWorld and Senior Editor at Windows Secrets.

"Warning for group B Windows 7 users! The "July 9, 2019—KB4507456 (Security-only update)" is NOT "security-only" update. It replaces infamous KB2952664 and contains telemetry. Some details can be found in file information for update 4507456 (keywords: "telemetry", "diagtrack" and "appraiser") and under http://www.catalog.update.microsoft.com/ScopedViewInline.aspx?updateid=7cdee6a8-6f30-423e-b02c-3453e14e3a6e (in "Package details"->"This update replaces the following updates" and there is KB2952664 listed). It doesn't apply for IA-64-based systems, but applies both x64 and x86-based systems."

"Microsoft included the KB2952664 functionality (known as the "Compatibility Appraiser") in the Security Quality Monthly Rollups for Windows 7 back in September 2018. The move was announced by Microsoft ahead of time", another user, @PKCano, explains.

The user further added, "With the July 2019-07 Security Only Quality Update KB4507456, Microsoft has slipped this functionality into a security-only patch without any warning, thus adding the "Compatibility Appraiser" and its scheduled tasks (telemetry) to the update. The package details for KB4507456 say it replaces KB2952664 (among other updates)."

"Come on Microsoft. This is not a security-only update. How do you justify this sneaky behavior? Where is the transparency now?", the user concluded.

ZDNet states, "The Appraiser tool was offered via Windows Update, both separately and as part of a monthly rollup update two years ago; as a result, most of the declining population of Windows 7 PCs already has it installed".

Ed Bott, a technology writer at ZDNet, says the update is benign and that Microsoft is being truthful when it says "There is no GWX or upgrade functionality contained in this update." If so, why is Microsoft not briefing users about this update? Many users are confused about whether or not they should update their systems. A user commented on AskWoody, "So should this update be skipped or installed? This appears to pose a dilemma, at least right now. I hope that some weeks from now, by the time we are closer to a green DEFCON, this has been sorted out".
Another user speculated that the issue might be resolved in the next update: "Disabling (or deleting) these schedule tasks after installation (before reboot) should be enough to turn off the appraiser \Microsoft\Windows\Application Experience\ProgramDataUpdater \Microsoft\Windows\Application Experience\Microsoft Compatibility Appraiser \Microsoft\Windows\Application Experience\AitAgent but it's best to wait next month to see if the SO update comes clean".

ZDNet suggests this may be because Windows 7 is nearing its end-of-support date of January 14, 2020: "It's also possible that Microsoft thinks it has a strong case for making the Compatibility Appraiser tool mandatory as the Windows 7 end-of-support date nears".

To know more about this news, visit Microsoft's security update.

Related reads:
Microsoft quietly deleted 10 million faces from MS Celeb, the world's largest facial recognition database
Microsoft's Xbox team at E3 2019: Project Scarlett, AI-powered Flight Simulator, Keanu Reeves in Cyberpunk 2077, and more
Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap
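As a practical footnote, the mitigation the AskWoody commenter describes can be scripted. The following is a minimal, unofficial sketch that disables the three Appraiser scheduled tasks named in the comment above; it shells out to the standard Windows schtasks tool and must be run from an elevated (Administrator) prompt:

```python
import subprocess

# Task paths named in the AskWoody comment quoted above.
TASKS = [
    r"\Microsoft\Windows\Application Experience\ProgramDataUpdater",
    r"\Microsoft\Windows\Application Experience\Microsoft Compatibility Appraiser",
    r"\Microsoft\Windows\Application Experience\AitAgent",
]

for task in TASKS:
    # /Change /Disable leaves the task definition in place but stops it from running.
    result = subprocess.run(
        ["schtasks", "/Change", "/TN", task, "/Disable"],
        capture_output=True, text=True,
    )
    print(task, "->", "disabled" if result.returncode == 0 else result.stderr.strip())
```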


Azure Kinect Developer Kit is now generally available, will start shipping to customers in the US and China

Amrata Joshi
12 Jul 2019
3 min read
In February this year, at the Mobile World Congress (MWC), Microsoft announced the $399 Azure Kinect Developer Kit, an all-in-one perception system for computer vision and speech solutions. Recently, Microsoft announced that the kit is generally available and will begin shipping to customers in the U.S. and China who preordered it.

The Azure Kinect Developer Kit aims to offer developers a platform to experiment with AI tools as well as help them plug into Azure's ecosystem of machine learning services.

The Azure Kinect DK camera system features a 1MP (1,024 x 1,024 pixel) depth camera, a 360-degree microphone array, a 12MP RGB camera that provides an additional color stream aligned to the depth stream, and an orientation sensor. It uses the same time-of-flight sensor that the company developed for the second generation of its HoloLens AR visor. It also features an accelerometer and gyroscope (IMU) for sensor orientation and spatial tracking. Developers can also experiment with the field of view thanks to a global shutter and automatic pixel gain selection. The kit works with a range of compute types that can be used together to provide a "panoramic" understanding of the environment. This advancement might help Microsoft users in health and life sciences experiment with depth sensing and machine learning.

During the keynote, Microsoft Azure corporate vice president Julia White said, "Azure Kinect is an intelligent edge device that doesn't just see and hear but understands the people, the environment, the objects, and their actions." She further added, "It only makes sense for us to create a new device when we have unique capabilities or technology to help move the industry forward."

A few users are complaining about the product and expecting changes in the future. They have highlighted issues with the mics, the SDK, the sample code, and much more. A user commented on the Hacker News thread, "Then there's the problem that buries deep in the SDK is a binary blob that is the depth engine. No source, no docs, just a black box. Also, these cameras require a BIG gpu. Nothing is seemingly happening onboard. And you're at best limited to 2 kinects per usb3 controller. All that said, I'm still a very happy early adopter and will continue checking in every month or two to see if they've filled in enough critical gaps for me to build on top of."

Others seem to be excited and think the camera will be helpful in projects. Another user commented, "This is really cool!" The user further added, "This camera is way better quality, so it'll be neat to see the sort of projects can be done now."

To know more about the Azure Kinect Developer Kit, watch the video: https://www.youtube.com/watch?v=jJglCYFiodI

Related reads:
Microsoft Defender ATP detects Astaroth Trojan, a fileless, info-stealing backdoor
Microsoft will not support Windows registry backup by default, to reduce disk footprint size from Windows 10 onwards
Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities


Microsoft introduces passwordless feature in its Windows 10 devices, replacing passwords with Windows Hello face authentication, fingerprints, or a PIN

Amrata Joshi
12 Jul 2019
3 min read
For most of us, it is difficult to remember passwords across multiple devices and accounts, and if one account gets hacked, attackers may manage to gain access to all the others. Even though features like two-factor authentication (2FA) exist, not many people use them. To make things simpler for its customers, Microsoft has introduced a "Make your device passwordless" feature in its Windows 10 devices.

Just two days ago, the team at Microsoft announced Windows 10 Insider Preview Build 18936 in the Fast ring. The test build comes with a new sign-in option, "Make your device passwordless", in Settings. This means PCs can use Windows Hello face authentication, fingerprints, or a PIN code. The password option will no longer appear on the login screen if users opt in to the "Make your device passwordless" feature.

https://twitter.com/msftsecurity/status/1064926596778401792

According to Microsoft, a PIN code is far more secure than a password, even though a four-digit code appears very simple. The advantage is that it uses unknown variables and the code is stored on the device, not shared online. Windows 10 stores the private key on a device with a Trusted Platform Module (TPM), a secure chip that keeps the PIN local to that device only.

If a server is compromised or a password is stolen, an attacker can access the user's device or account. Such an attack wouldn't be effective against a Windows Hello PIN, because the passwordless feature still works through Azure Active Directory; it further locks down business devices and protects valuable data by removing the password.

This feature is currently available only to a set of Fast Ring Insiders and will be made available to others later this week. Users need a FIDO2-compatible security key to try out the new capabilities. Microsoft has made a public preview of FIDO2 security key support in Azure Active Directory available. The company has been trying to convince Windows 10 users to opt in to two-factor authentication processes such as basic SMS, Windows Hello, the separate Microsoft Authenticator app, or even physical security keys using the FIDO2 standard.

Related reads:
Microsoft Defender ATP detects Astaroth Trojan, a fileless, info-stealing backdoor
Microsoft will not support Windows registry backup by default, to reduce disk footprint size from Windows 10 onwards
Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities


25 million Android devices infected with 'Agent Smith', a new mobile malware

Vincy Davis
12 Jul 2019
4 min read
Two days ago, Check Point researchers reported a new mobile malware attack called 'Agent Smith' which has infected around 25 million Android devices. The malware is being used for financial gain through malicious advertisements. Concealed under the identity of a Google-related app, it exploited known Android vulnerabilities and automatically replaced installed apps with malicious versions, without any consent of the user.

The primary targets of this malware are based in Asian countries, especially India with over 15 million infected devices, as well as Pakistan, Bangladesh, Saudi Arabia, and the UK, with around 300k devices infected in the U.S. Currently, no malicious apps remain on the Google Play Store. However, before being removed, the malicious apps were downloaded over 10 million times. Researchers have estimated over 2.8 billion infections in total, on around 25 million unique devices.

Image Source: Check Point Research

How Agent Smith infected Android apps

A preliminary investigation revealed that the attack strongly resembled abuse of the Janus vulnerability, discovered in 2017, which allows attackers to modify the code in Android applications without affecting their signatures. The malicious apps had the ability to hide their app icons and claim to be Google-related updaters or vending modules. Check Point researchers found that Agent Smith's attack also resembled previous malware campaigns against Android apps, like Gooligan, HummingBad, and CopyCat.

The Agent Smith malware attacks step by step:

Image Source: Check Point Research

Firstly, a dropper app lures a victim into installing it voluntarily. The dropper has an inbuilt Feng Shui Bundle which works as an encrypted asset file. The dropper variants include photo utilities, games, or sex-related apps.

Next, the dropper automatically decrypts and installs its core malware APK, which is usually disguised as Google Updater, Google Update for U, or 'com.google.vending'. This core malware APK is then used to conduct malicious patching and app updates. The core malware's icon is hidden from the user at all times.

Lastly, the core malware extracts the device's installed app list. If it finds apps like WhatsApp, Flipkart, Jio, or Truecaller on its prey list (hard-coded or sent from the C&C server), the malware extracts the base APK of the targeted innocent app on the device, patches the APK with malicious ad modules, and installs the base APK back, making it seem like an update. During the final update installation process, Agent Smith relies on the Janus vulnerability to bypass Android's APK integrity checks. Finally, Agent Smith hijacks the compromised user apps to show malicious advertisements.

The hackers have used Agent Smith for financial gain only until now. However, with its ability to hide its icon from the launcher and successfully impersonate any popular existing app on a device, Agent Smith could cause serious harm, such as stealing credentials from banking, shopping, and other sensitive apps. It has also come to light that Google fixed the Janus vulnerability in 2017, but the fix has not made its way onto every Android phone.

"Android users should use ad blocker software, always update their devices when prompted, and only download apps from the Google Play Store", said Dustin Childs, the communications manager at cybersecurity company Trend Micro. Many Android users have expressed their concern about the Agent Smith malware attack.
https://twitter.com/TMWB1/status/1149337833695600640
https://twitter.com/AkiSolomos/status/1149487532272312324

Some iOS users now say that it is Google's security vulnerabilities that make users opt for iOS phones. A Redditor comments, "This is unfortunately why I am still an Apple customer. I do not trust android to keep my information safe. Hey Google, how about I pay you a $15 per month subscription and you stop using spyware on me?"

According to the researchers, the malware appears to be run by a Chinese Internet company located in Guangzhou that claims to help Chinese Android developers publish and promote their apps on overseas platforms. Check Point researchers have submitted their report to Google and law enforcement units to facilitate further investigation. The names of the malicious actors have not yet been revealed. Google has not yet released any official statement warning Android users about the Agent Smith malware attack.

For more details about the attack, head over to the Check Point research page.

Related reads:
An IoT worm Silex, developed by a 14 year old resulted in malware attack and taking down 2000 devices
China is forcing tourists crossing Xinjiang borders to install an Android app that sends personal information to authorities, reports the Vice News
React Native 0.60 releases with accessibility improvements, AndroidX support, and more


Facebook released Hermes, an open source JavaScript engine to run React Native apps on Android

Fatema Patrawala
12 Jul 2019
4 min read
Yesterday Facebook released a new JavaScript engine called Hermes under an open source MIT license. According to Facebook, the new engine will speed up start times for Android apps built with the React Native framework.

https://twitter.com/reactnative/status/1149347916877901824

Facebook software engineer Marc Horowitz unveiled Hermes at the Chain React 2019 conference held yesterday in Portland, Oregon. Hermes is a new tool for developers, primarily aimed at improving app startup performance in the same way Facebook does for its own apps, and at making apps more efficient on low-end smartphones. The supposed advantage of Hermes is that developers can target all three mobile platforms with a single code base; but as with any cross-platform framework, there are trade-offs in terms of performance, security, and flexibility. Hermes is available on GitHub for all developers to use. It also has its own Twitter account and home page.

In a demo, Horowitz showed that a React Native app with Hermes was fully loaded in half the time the same app without Hermes took, about two seconds faster. Check out the video below:

Horowitz emphasized that Hermes cuts the APK size (the size of the app file) to half the 41MB of a stock React Native app, and removes a quarter of the app's memory usage. In other words, with Hermes developers can get users interacting with an app faster, with fewer obstacles like slow download times and the constraints caused by multiple apps sharing limited memory, especially on lower-end phones. And these are exactly the phones Facebook is aiming at with Hermes, compared to the fancy high-end phones that well-paid developers typically use themselves.

"As developers we tend to carry the latest flagship devices. Most users around the world don't," he said. "Commonly used Android devices have less memory and less storage than the newest phones and much less than a desktop. This is especially true outside of the United States. Mobile flash is also relatively slow, leading to high I/O latency."

It's not every day a new JavaScript engine is born, and while there are plenty of engines for browsers, like Google's V8, Mozilla's SpiderMonkey, and Microsoft's Chakra, Horowitz notes that Hermes is not aimed at browsers or at server-side runtimes such as Node.js. "We're not trying to compete in the browser space or the server space. Hermes could in theory be for those kinds of use cases, that's never been our goal." The Register reports that Facebook has no plan to push Hermes beyond React Native to Node.js or to turn it into the foundation of a Facebook-branded browser, because it is optimized for mobile apps and wouldn't offer advantages over other engines in other usage scenarios.

Hermes tries to be efficient through bytecode precompilation, rather than loading JavaScript and then parsing it. Hermes employs ahead-of-time (AOT) compilation during the mobile app build process, which allows for more extensive bytecode optimization. Along similar lines, the Fuchsia Dart compiler for iOS is an AOT compiler. There are other ways to squeeze more performance out of JavaScript: the V8 engine, for example, offers a capability called custom snapshots, though this is a bit more technically demanding than using Hermes. Hermes also abandons the just-in-time (JIT) compiler used by other JavaScript engines to compile frequently interpreted code into machine code; in the context of React Native, the JIT doesn't do that much to ease mobile app workloads.
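The idea behind bytecode precompilation can be illustrated outside Hermes. As a loose analogy only (this uses CPython, not Hermes' toolchain), Python's standard py_compile module performs the parse-and-compile step ahead of time, so that at startup the runtime loads ready-made bytecode instead of parsing source:

```python
import py_compile

# Write a trivial module, then compile it to bytecode ahead of time.
with open("app_module.py", "w") as f:
    f.write("GREETING = 'hello from precompiled bytecode'\n")

# The .pyc file holds bytecode; importing it later skips the parse step,
# which is the startup cost Hermes' AOT pipeline avoids for JavaScript.
pyc_path = py_compile.compile("app_module.py", cfile="app_module.pyc")
print("bytecode written to:", pyc_path)
```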
The reason Hermes exists, as per Facebook, is to make React Native better. "Hermes allows for more optimization on mobile since developers control the build stack," said a Facebook spokesperson in an email to The Register. "For example, we implemented bytecode precompilation to improve performance and developed more efficient garbage collection to reduce memory usage."

In a discussion on Hacker News, Microsoft developer Andrew Coates claims that internal testing of Hermes and React Native in conjunction with Microsoft Office for Android shows a TTI of 1.1s with Hermes, compared to 1.4s with V8, and a 21.5MB runtime memory impact, compared to 30MB with V8.

Hermes is mostly compatible with ES6 JavaScript. To keep the engine small, support for some language features is missing, such as with statements and local mode eval(). Facebook's spokesperson also told The Register that the company plans to publish benchmark figures in the next week to support its performance claims.

Related reads:
Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
OpenID Foundation questions Apple's Sign In feature, says it has security and privacy risks
Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more


Introducing QuickJS, a small and easily embeddable JavaScript engine

Bhagyashree R
12 Jul 2019
3 min read
On Tuesday, Fabrice Bellard, the creator of FFmpeg and QEMU, and Charlie Gordon, a C expert, announced the first public release of QuickJS. Released under the MIT license, it is a "small but complete JavaScript engine" that comes with support for the latest ES2019 language specification.

Features in the QuickJS JavaScript engine

Small and easily embeddable: The engine consists of a few C files and has no external dependency.
Fast interpreter: The interpreter runs 56,000 tests from the ECMAScript Test Suite in just 100 seconds, on a single-core CPU. A runtime instance completes its life cycle in less than 300 microseconds.
ES2019 support: Support for the ES2019 specification is almost complete, including modules, asynchronous generators, and full Annex B support (legacy web compatibility). It does not currently support realms and tail calls.
No external dependency: It can compile JavaScript source to executables without the need for any external dependency.
Command-line interpreter: The command-line interpreter comes with contextual colorization and completion implemented in JavaScript.
Garbage collection: It uses reference counting with cycle removal to free objects automatically and deterministically. This reduces memory usage and ensures deterministic behavior of the engine.
Mathematical extensions: The 'qjsbn' version provides mathematical extensions that are fully backward compatible with standard JavaScript. It supports big integers (BigInt), big floating-point numbers (BigFloat), and operator overloading, and comes with 'bigint' and 'math' modes.

The news sparked a discussion on Hacker News, where developers praised Bellard's and Gordon's outstanding work on the project. A developer commented, "Wow. The core is a single 1.5MB file that's very readable, it supports nearly all of the latest standard, and Bellard even added his own extensions on top of that. It has compile-time options for either a NaN-boxing or traditional tagged union object representation, so he didn't just go for a single minimal implementation (unlike e.g. OTCC) but even had the time and energy to explore a bit. I like the fact that it's not C99 but appears to be basic C89, meaning very high portability. Despite my general distaste for JS largely due to websites tending to abuse it more than anything, this project is still immensely impressive and very inspiring, and one wonders whether there is still "space at the bottom" for even smaller but functionality competitive implementations."

Another wrote, "I can't wait to mess around with this, it looks super cool. I love the minimalist approach. If it's truly spec compliant, I'll be using this to compile down a bunch of CLI scripts I've written that currently use node. I tend to stick with the ECMAScript core whenever I can and avoid using packages from NPM, especially ones with binary components. A lot of the time that slows me down a bit because I'm rewriting parts of libraries, but here everything should just work with a little bit of translation for the OS interaction layer which is very exciting."

To know more about QuickJS, check out Fabrice Bellard's official website.

Related reads:
Firefox 67 will come with faster and reliable JavaScript debugging tools
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
React Native 0.59 is now out with React Hooks, updated JavaScriptCore, and more!
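For a feel of how the engine is driven in practice, here is a small sketch that runs a snippet through qjs, the QuickJS command-line interpreter mentioned above. It assumes the qjs binary is installed and on PATH; qjsc can compile the same file into a standalone executable instead:

```python
import subprocess
import tempfile

# A snippet using modern syntax that QuickJS supports.
JS_SOURCE = 'const xs = [1, 2, 3].map(x => x * 2); console.log(JSON.stringify(xs));'

with tempfile.NamedTemporaryFile("w", suffix=".js", delete=False) as f:
    f.write(JS_SOURCE)
    script_path = f.name

# Run the script with the qjs interpreter and capture its output.
result = subprocess.run(["qjs", script_path], capture_output=True, text=True)
print(result.stdout.strip())  # -> [2,4,6]
```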

Twitter experienced major outage yesterday due to an internal configuration issue

Fatema Patrawala
12 Jul 2019
4 min read
Yesterday Twitter went down across major parts of the world, including the US and the UK. Twitter users reported being unable to access the platform on the web and on mobile devices. The outage lasted approximately an hour.

According to DownDetector.com, the site began experiencing major issues at 2:46pm EST, with problems reported by users attempting to access Twitter through its website, its iPhone or iPad app, and Android devices. While the majority of reported problems were website issues (51%), nearly 30% came from iPhone and iPad app usage and another 18% from Android users, as per the outage report.

Twitter acknowledged on its status page that the platform was experiencing issues shortly after the first outages were reported online. The company listed the status as "investigating" and noted a service disruption was causing the seemingly global issue. "We are currently investigating issues people are having accessing Twitter," the statement read. "We will keep you updated on what's happening."

This month has seen several high-profile outages among social networks. Facebook and Instagram experienced a day-long outage affecting large parts of the world on July 3rd. LinkedIn went down for several hours on Wednesday. Cloudflare suffered two major outages in the span of two weeks this month: one was due to an internal software glitch, and another was caused when Verizon accidentally rerouted IP packets after it wrongly accepted a network misconfiguration from a small ISP in Pennsylvania, USA. Reddit was experiencing outages on its website and app earlier in the day, but appeared to be back up and running for most users an hour before Twitter went down, according to DownDetector.com. In March, Facebook and its family of apps experienced a 14-hour-long outage attributed to a server configuration change.

Twitter began operating normally nearly an hour later, at approximately 3:45pm EST. Users joked that they were "all censored for the last hour" when the site eventually came back up. On the status page, Twitter said the outage was caused by "an internal configuration change, which we're now fixing."

"Some people may be able to access Twitter again and we're working to make sure Twitter is available to everyone as quickly as possible," the company said in a follow-up statement.

https://twitter.com/TwitterSupport/status/1149412158121267200

On Hacker News, users discussed the number of outages at major tech companies and why this is happening. One of the comments reads, "Ok, this is too many high-profile, apparently unrelated outages in the last month to be completely a coincidence. Hypotheses: 1) software complexity is escalating over time, and logically will continue to until something makes it stop. It has now reached the point where even large companies cannot maintain high reliability. 2) internet volume is continually increasing over time, and periodically we hit a point where there are just too many pieces required to make it work (until some change the infrastructure solves that). We had such a point when dialup was no longer enough, and we solved that with fiber.
Now we have a chokepoint somewhere else in the system, and it will require a different infrastructure change 3) Russia or China or Iran or somebody is f*(#ing with us, to see what they are able to break if they needed to, if they need to apply leverage to, for example, get sanctions lifted 4) Just a series of unconnected errors at big companies 5) Other possibilities?"

On this comment another user adds, "I work at Facebook. I worked at Twitter. I worked at CloudFlare. The answer is nothing other than #4. #1 has the right premise but the wrong conclusion. Software complexity will continue escalating until it drops by either commoditization or redefining problems. Companies at the scale of FAANG(+T) continually accumulate tech debt in pockets and they eventually become the biggest threats to availability. Not the new shiny things. The sinusoidal pattern of exposure will continue."

Related reads:
Facebook, Instagram and WhatsApp suffered a major outage yesterday; people had trouble uploading and sending media files
Facebook family of apps hits 14 hours outage, longest in its history
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others


Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold ’Em Poker

Sugandha Lahoti
12 Jul 2019
5 min read
Researchers from Facebook and Carnegie Mellon University have developed an AI bot that has defeated human professionals in six-player no-limit Texas Hold'em poker.

Pluribus defeated pro players in both a "five AIs + one human player" format and a "one AI + five human players" format. Pluribus was tested in 10,000 games against five human players, as well as in 10,000 rounds where five copies of the AI played against one professional. This is the first time an AI bot has beaten top human players in a complex game with more than two players or two teams.

Pluribus was developed by Noam Brown of Facebook AI Research and Tuomas Sandholm of Carnegie Mellon University. It builds on Libratus, their previous poker-playing AI, which defeated professionals at heads-up Texas Hold'em, a two-player game, in 2017.

Mastering six-player poker is difficult for an AI given the number of possible actions. First, since the game involves six players, it has far more variables, and the bot cannot compute a perfect strategy for each game as it would for a two-player game. Second, poker involves hidden information: a player only has access to the cards they can see. The AI has to take into account how it would act with different cards, so it isn't obvious when it has a good hand.

Brown wrote on a Hacker News thread, "So much of early AI research was focused on beating humans at chess and later Go. But those techniques don't directly carry over to an imperfect-information game like poker. The challenge of hidden information was kind of neglected by the AI community. This line of research really has its origins in the game theory community actually (which is why the notation is completely different from reinforcement learning). Fortunately, these techniques now work really really well for poker."

What went behind Pluribus?

Initially, Pluribus engages in self-play, playing against copies of itself without any data from human or prior AI play used as input. The AI starts from scratch by playing randomly and gradually improves as it determines which actions, and which probability distribution over those actions, lead to better outcomes against earlier versions of its strategy. Pluribus's self-play produces a strategy for the entire game offline, called the blueprint strategy. During play, an online search algorithm can efficiently evaluate options by searching just a few moves ahead rather than only to the end of the game; Pluribus improves upon the blueprint strategy by searching for a better strategy in real time for the situations it finds itself in.

Real-time search

The blueprint strategy in Pluribus was computed using a variant of counterfactual regret minimization (CFR). The researchers used Monte Carlo CFR (MCCFR), which samples actions in the game tree rather than traversing the entire game tree on each iteration. Pluribus plays according to this blueprint strategy only in the first betting round (of four), where the number of decision points is small enough that the blueprint strategy can afford not to use information abstraction and can have many actions in the action abstraction. After the first round, Pluribus instead conducts a real-time search to determine a better, finer-grained strategy for the current situation it is in.

https://youtu.be/BDF528wSKl8

What is astonishing is that Pluribus uses very little processing power and memory, less than $150 worth of cloud computing resources.
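At the heart of CFR is regret matching: play each action with probability proportional to its accumulated positive regret. A minimal self-play sketch for rock-paper-scissors (illustrative only, nothing like Pluribus's actual code) shows the update rule:

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def regret_matching_strategy(regret_sum):
    # Play each action in proportion to its positive cumulative regret.
    positive = [max(r, 0.0) for r in regret_sum]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def payoff(a, b):
    # +1 if action a beats b, 0 on a tie, -1 on a loss.
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def train(iterations=100_000):
    my_regret, opp_regret = [0.0] * ACTIONS, [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = regret_matching_strategy(my_regret)
        opp_strat = regret_matching_strategy(opp_regret)
        a = random.choices(range(ACTIONS), weights=strat)[0]
        b = random.choices(range(ACTIONS), weights=opp_strat)[0]
        for alt in range(ACTIONS):
            # Regret: how much better each alternative would have done.
            my_regret[alt] += payoff(alt, b) - payoff(a, b)
            opp_regret[alt] += payoff(alt, a) - payoff(b, a)
        strategy_sum = [s + p for s, p in zip(strategy_sum, strat)]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # average strategy over all iterations

print(train())  # approaches the uniform Nash equilibrium [1/3, 1/3, 1/3]
```

Run long enough, both players' average strategies converge toward the equilibrium; CFR scales this same idea, with abstraction and sampling, to the enormous game tree of poker.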
The researchers trained the blueprint strategy for Pluribus in eight days on a 64-core server, requiring less than 512 GB of RAM. No GPUs were used. Stassa Patsantzis, a Ph.D. research student, appreciated Pluribus's resource-friendly compute budget. She commented on Hacker News, "That's the best part in all of this. I'm hoping that there is going to be more of this kind of result, signaling a shift away from Big Data and huge compute and towards well-designed and efficient algorithms."

She also noted that this is significantly less compute than ML algorithms use at DeepMind and OpenAI. "In fact, I kind of expect it. The harder it gets to do the kind of machine learning that only large groups like DeepMind and OpenAI can do, the more smaller teams will push the other way and find ways to keep making progress cheaply and efficiently", she added.

Real-life implications

AI bots such as Pluribus give a better understanding of how to build general AI that can cope with multi-agent environments, both with other AI agents and with humans. A six-player AI bot has broader implications in reality because two-player zero-sum interactions (in which one player wins and one player loses) are common in recreational games but very rare in real life. Such bots could be used for handling harmful content, dealing with cybersecurity challenges, or managing an online auction or navigating traffic, all of which involve multiple actors and/or hidden information.

Darren Elias, a four-time World Poker Tour title holder who helped test the program's skills, said Pluribus could spell the end of high-stakes online poker: "I don't think many people will play online poker for a lot of money when they know that this type of software might be out there and people could use it to play against them for money." Poker sites are actively working to detect and root out possible bots. Brown, Pluribus's developer, is more optimistic. He says it's exciting that a bot could teach humans new strategies and ultimately improve the game. "I think those strategies are going to start penetrating the poker community and really change the way professional poker is played," he said.

For more information on Pluribus and its workings, read Facebook's blog.

Related reads:
DeepMind's AlphaStar AI agent will soon anonymously play with European StarCraft II players
Google DeepMind's AI AlphaStar beats StarCraft II pros TLO and MaNa
OpenAI Five bots destroyed human Dota 2 players this weekend


A vulnerability found in Jira Server and Data Center allows attackers to remotely execute code on systems

Amrata Joshi
11 Jul 2019
2 min read
Yesterday, Atlassian Support released a Jira security advisory affecting Jira Server and Jira Data Center. The advisory reveals a critical severity security vulnerability, labeled CVE-2019-11581, which was introduced in version 4.4.0 of Jira Server and Jira Data Center.

How can one exploit this vulnerability?

For the issue to be exploitable, the attacker needs to meet either of the following conditions:

An SMTP server is configured in Jira and the Contact Administrators Form is enabled, which allows attackers to exploit the issue without authentication.
An SMTP server is configured in Jira and the attacker has "JIRA Administrators" access, in which case the issue can be exploited using a JIRA Administrator's credentials.

In either case, exploitation of this issue lets an attacker remotely execute code on systems that run a vulnerable version of Jira Server or Data Center. The official post reads, "All versions of Jira Server and Data Center from 4.4.0 before 7.6.14 (the fixed version for 7.6.x), from 7.7.0 before 7.13.5 (the fixed version for 7.13.x), from 8.0.0 before 8.0.3 (the fixed version for 8.0.x), from 8.1.0 before 8.1.2 (the fixed version for 8.1.x), and from 8.2.0 before 8.2.3 are affected by this vulnerability."

To address the issue, the team has fixed the vulnerability in versions 8.2.3, 8.1.2, 8.0.3, 7.13.5, and 7.6.14 of Jira Server and Jira Data Center. Atlassian recommends that users upgrade to the latest version.

How can users quickly mitigate this issue?

To mitigate, users can first disable the Contact Administrators Form and then block the /secure/admin/SendBulkMail!default.jspa endpoint from being accessed. This can be achieved by denying access in the reverse proxy, load balancer, or Tomcat directly. However, blocking the SendBulkMail endpoint prevents Jira Administrators from sending bulk emails to users. After upgrading Jira, users can re-enable the Contact Administrators Form and unblock the SendBulkMail endpoint.

To know more about this news, check out the Jira security advisory.

Related reads:
JIRA 101
Gadgets in JIRA
Securing your JIRA 4
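Since the advisory spells out exact affected ranges, a quick version check is easy to script. This is an unofficial sketch, not an Atlassian tool, that compares a Jira version string against the ranges quoted above:

```python
# Affected ranges from the advisory, as [from, before) pairs.
AFFECTED_RANGES = [
    ((4, 4, 0), (7, 6, 14)),
    ((7, 7, 0), (7, 13, 5)),
    ((8, 0, 0), (8, 0, 3)),
    ((8, 1, 0), (8, 1, 2)),
    ((8, 2, 0), (8, 2, 3)),
]

def parse_version(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def is_affected(version: str) -> bool:
    v = parse_version(version)
    # Tuple comparison gives correct semantic-version ordering here.
    return any(lo <= v < hi for lo, hi in AFFECTED_RANGES)

print(is_affected("7.13.4"))  # True  -> upgrade to 7.13.5 or later
print(is_affected("8.2.3"))   # False -> already a fixed version
```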


GE's 2 models of hospital anesthesia machines found with vulnerabilities; GE says they pose no harm unless connected to a hospital network

Amrata Joshi
11 Jul 2019
3 min read
As reported by ZDNet, security researchers from CyberMDX, a healthcare cybersecurity firm, have found vulnerabilities in two models of hospital anesthesia machines manufactured by General Electric (GE). The two vulnerable devices are the GE Aestiva and GE Aespire, models 7100 and 7900, and according to the researchers, the vulnerabilities reside in the two devices' firmware.

The US Department of Homeland Security's Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) has issued a medical advisory for the vulnerability, CVE-2019-10966. The vulnerability has been assigned a CVSS score of 5.3, which indicates medium severity as per the ICS-CERT report.

According to the researchers, attackers on the same network as the devices can send remote commands that alter the devices' settings. In a statement to ZDNet, a CyberMDX researcher said, "There is simply a lack of authentication." He further added, "The mentioned commands are supported by design. Some of them are only supported on an earlier version of the protocol, however there is another command that allows changing the protocol version (for backward compatibility). After sending a command to change the protocol version to an earlier one, an attacker can send all other commands."

The researcher claims the commands can be used to make unauthorized adjustments to the anesthesia machines' gas composition, including modifying the concentration of oxygen, CO2, N2O, and other anesthetic agents, or the gas' barometric pressure. If attackers get access to a hospital network where either of these devices is connected to a terminal server, they can potentially break into the machine without knowing its IP address or location, remotely change parameters without authorization, and make unauthorized adjustments.

According to the CyberMDX researchers, such unauthorized modifications can put patients at risk. Attackers can also silence device alarms for low or high levels of various agents and modify timestamps inside logs. In a statement to ZDNet, Elad Luz, Head of Research at CyberMDX, said, "The potential for manipulating alarms and gas compositions is obviously troubling." Luz further added, "More subtle but just as problematic is the ability to alter timestamps that reflect and document what happened in surgery."

However, as per a statement by GE Healthcare, the vulnerability is not in the device itself, and this particular situation doesn't grant access to data or pose a direct risk to patients. The GE Healthcare statement reads, "While the anesthesia device is in use, the potential gas composition parameter changes, potential device time change, or potential remote alarm silencing actions will not interfere in any way with the delivery of therapy to a patient at the point of delivery, and do not pose any direct clinical harm."

In an email to ZDNet, GE explained the mitigations; according to the company, the vulnerabilities can be avoided if the anesthesia machines aren't connected to a hospital network. If the machines aren't connected to a hospital network, they can't be exploited, even if a hacker has access to that network.

Related reads:
Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities
Deepfakes House Committee Hearing: Risks, Vulnerabilities and Recommendations
Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels

Stripe's API suffered two consecutive outages yesterday causing elevated error rates and response times

Vincy Davis
11 Jul 2019
3 min read
Yesterday, Stripe's API went down twice, from 16:36–17:02 UTC and again from 21:14–22:47 UTC. Though the API services were recovered after each disruption, the incidents caused elevated error rates and response times. Stripe has not yet specified the cause of the degradation; it has promised to share a root-cause analysis of the issue later.

Stripe posted continual updates about the repeated degradation and recovery on Twitter. Meanwhile, Stripe users expressed their frustration on the platform.

https://twitter.com/OBX_Kayak/status/1149091674620080128
https://twitter.com/secretveneers/status/1138061688186576896
https://twitter.com/shazbegg/status/1138035967095390209
https://twitter.com/katetomasdphil/status/1138075917283188736

The issue started at 16:36 UTC, with some Stripe payouts to GBP bank accounts being delayed. Stripe informed users that it was investigating the issue with its UK banking partner that had delayed some GBP payouts. Later, Stripe confirmed that all affected payouts had been processed and the issue had been resolved.

Stripe's CEO Patrick Collison commented on one of the Hacker News threads, "Stripe CEO here. We're very sorry about this. We work hard to maintain extreme reliability in our infrastructure, with a lot of redundancy at different levels. This morning, our API was heavily degraded (though not totally down) for 24 minutes. We'll be conducting a thorough investigation and root-cause analysis."

Later in the day, around 21:14 UTC, Stripe informed users that error rates and response times had increased again. Its API services were finally restored at 22:47 UTC. Stripe assured users that all delayed bank payments had been successfully deposited to the corresponding bank accounts.

Though many users were upset over Stripe's service degradation, some came out in support of Stripe.

https://twitter.com/macrodesiac_/status/1138072815603769348
https://twitter.com/nickjanetakis/status/1149079993437380608

A user on Hacker News comments, "This is causing a big problem for my business right now, but I am not mad at Stripe because you earned that level of credibility and respect in my opinion. I understand these things happen and am glad to know a team as excellent as Stripe's is on the job."

Many users have asked Stripe for a post-mortem analysis of the issue.

https://twitter.com/thinkdigitalco/status/1149092661082693633
https://twitter.com/DahmianOwen/status/1149071761188589568

This month, many other services, including GitLab, Google Cloud, Cloudflare, Facebook, Instagram, WhatsApp, and Apple's iCloud, also suffered major outages. A comment on Hacker News reads, "Most services are going down from time to time, it's just that the big one are widely used and so people notice quickly". Another user comments, "Between Cloudflare, Google, and now Stripe, I feel like there's been a huge cluster of services that never go down, going down. Curious to see Stripe's post-mortem here."

To check Stripe's current system status, head over to the Stripe Status page.

Related reads:
Stripe updates its product stack to prepare European businesses for SCA-compliance
Former Google Cloud CEO joins Stripe board just as Stripe joins the global Unicorn Club
Stripe open sources 'Skycfg', a configuration builder for Kubernetes
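Elevated error rates are exactly the situation that client-side retries with exponential backoff are designed for. A generic, illustrative sketch follows; the create_charge callable in the usage comment is a hypothetical placeholder, not Stripe's SDK:

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=0.5):
    """Retry a flaky API call with exponential backoff plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Sleep 0.5s, 1s, 2s, ... plus jitter so clients don't retry in lockstep.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.25))

# Usage (create_charge is a placeholder for any idempotent API call):
# charge = with_retries(lambda: create_charge(amount=1000, currency="gbp"))
```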


ISPA nominated Mozilla in the “Internet Villain” category for DNS over HTTPs push, withdrew nominations and category after community backlash

Fatema Patrawala
11 Jul 2019
6 min read
On Tuesday, the Internet Services Providers' Association (ISPA), the UK's trade association for providers of internet services, announced that the nomination of Mozilla Firefox had been withdrawn from the "Internet Villain" category. The decision came after a global backlash to the nomination of Mozilla for its DNS-over-HTTPS (DoH) push. ISPA withdrew the Internet Villain category as a whole from the ISPA Awards 2019 ceremony, which will be held today in London.

https://twitter.com/ISPAUK/status/1148636700467453958

The official blog post reads, "Last week ISPA included Mozilla in our list of Internet Villain nominees for our upcoming annual awards. In the 21 years the event has been running it is probably fair to say that no other nomination has generated such strong opinion. We have previously given the award to the Home Secretary for pushing surveillance legislation, leaders of regimes limiting freedom of speech and ambulance-chasing copyright lawyers. The villain category is intended to draw attention to an important issue in a light-hearted manner, but this year has clearly sent the wrong message, one that doesn't reflect ISPA's genuine desire to engage in a constructive dialogue. ISPA is therefore withdrawing the Mozilla nomination and Internet Villain category this year."

Mozilla Firefox, the preferred browser for many users, encourages privacy protection and offers feature options to keep one's Internet activity as private as possible. One recently proposed feature, DoH (DNS-over-HTTPS), which is still in the testing phase, did not go down well with the ISPA. The ISPA nominated Mozilla as one of its "Internet Villains" for 2019, saying in its announcement that Mozilla earned the nomination for supporting DoH.

https://twitter.com/ISPAUK/status/1146725374455373824

Mozilla responded by saying that this is one way to know it is fighting the good fight.

https://twitter.com/firefox/status/1147225563649564672

The announcement drew heavy criticism from the community, which rebuked ISPA for promoting online censorship and enabling rampant surveillance; some comments called ISPA the real Internet Villain in this scenario. Some of the tweet responses are given below:

https://twitter.com/larik47/status/1146870658246352896
https://twitter.com/gon_dla/status/1147158886060908544
https://twitter.com/ultratethys/status/1146798475507617793

Along with Mozilla, the Article 13 Copyright Directive and United States President Donald Trump also appeared on the nominations list. Here's how ISPA explained the nominations in its announcement:

"Mozilla – for their proposed approach to introduce DNS-over-HTTPS in such a way as to bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK.
Article 13 Copyright Directive – for threatening freedom of expression online by requiring 'content recognition technologies' across platforms
President Donald Trump – for causing a huge amount of uncertainty across the complex, global telecommunications supply chain in the course of trying to protect national security"

Why are the ISPs pushing back against DNS-over-HTTPS?

DoH means that your DNS requests are encrypted inside an HTTPS connection. Traditionally, DNS requests are unencrypted, and your DNS provider or ISP can monitor and control your browsing activity.
Without DoH, blocking and content filtering can easily be enforced through your DNS provider, and the ISP can do so whenever it wants. DoH takes that out of the equation, and hence you get a more private browsing experience.

Big broadband ISPs and politicians are concerned that large-scale third-party deployments of DoH, which encrypts DNS requests using the common HTTPS protocol for websites (DNS being the system that turns human-readable domain names into IP addresses), could disrupt their ability to censor, track, and control related internet services. That is, however, a particularly narrow way of looking at the technology, because at its core DoH is about protecting user privacy and making internet connections more secure. As a result, DoH is widely praised and supported by the broader internet community.

Mozilla is not alone in pushing DoH, but it found itself singled out by the ISPA because of its proposal to enable the feature by default within Firefox, which is yet to happen. Google is also planning to introduce its own DoH solution in its Chrome browser. The result could be that ISPs lose much of their control over DNS, breaking their internet censorship plans.

Is DoH useful for internet users? If so, how?

On one side of the coin, DoH lets users bypass any content filters enforced by the DNS provider or the ISP, which helps put a stop to internet censorship. On the other side, a parent can no longer set content filters through DNS if their kid uses DoH in Mozilla Firefox; DoH could potentially be a way to bypass parental controls, which could be a bad thing. This is precisely the reason the ISPA gave for nominating Mozilla for the Internet Villain category: it says that DNS-over-HTTPS will bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK. Also, using DoH means the local hosts file is no longer consulted, in case you are using it for ad blocking or any other purpose.

The internet community criticized the way ISPA handled the backlash and withdrew the category as a whole. One of the user comments on Hacker News reads, "You have to love how all their "thoughtful criticisms" of DNS over HTTPS have nothing to do with the things they cited in their nomination of Mozilla as villain. Their issue was explicitly "bypassing UK filtering obligations" not that load of flaming horseshit they just pulled out of their ass in response to the backlash."

https://twitter.com/VModifiedMind/status/1148682124263866368

Related reads:
Highlights from Mary Meeker's 2019 Internet trends report
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher
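To make the mechanics concrete, here is a minimal sketch of a DoH lookup against Cloudflare's public JSON endpoint (one of several DoH resolvers; requires the third-party requests package). The query and response travel inside ordinary HTTPS, so an on-path observer sees only encrypted traffic to the resolver:

```python
import requests

def doh_lookup(name: str, record_type: str = "A") -> list:
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": name, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    resp.raise_for_status()
    # "Answer" holds the resolved records, e.g. IP addresses for an A query.
    return [answer["data"] for answer in resp.json().get("Answer", [])]

print(doh_lookup("mozilla.org"))
```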


IntelliJ IDEA 2019.2 Beta 2 released with new Services tool window and profiling tools

Bhagyashree R
11 Jul 2019
4 min read
Yesterday, JetBrains announced the release of IntelliJ IDEA 2019.2 Beta 2, which marks the next step towards the stable release. The team has already implemented major features like profiling tools, better shell script support, and a new Services tool window. With this release, the team has given a final polish to existing features, including the Terminal, which now soft-wraps long lines better. This solves the previous problem of links breaking when lines wrapped.

Source: IntelliJ IDEA

Shell script support

This release comes with rich editing features for shell scripts, including word and path completion, quick documentation preview, and textual rename. It also allows integration with various external tools to provide enhanced shell script support. For instance, the IDE will prompt you to install ShellCheck to detect possible errors in your scripts, and will suggest quick fixes for them.

A new Services tool window

IntelliJ IDEA 2019.2 introduces a new Services tool window: a single stop to view all connections and run configurations that are configured to be reported to the Services view. The Services view incorporates windows for several tools, such as the Run Dashboard, Database Console, Docker, and Application Servers. You have the option of viewing all service types as nodes or tabs. To view a service type on a separate tab, use the Show in New tab action from the toolbar, or simply drag and drop the needed node onto the edge of the Services tool window. You can also create a custom tab to group various services, using the Group Services action from the context menu or from the toolbar.

Source: IntelliJ IDEA

Profiling tools for IntelliJ IDEA Ultimate

You can analyze the performance of your application right from the IDE using the new CPU Profiler integration and Memory Profiler integration on macOS, Linux, and Windows. The IDE also comes integrated with Java Flight Recorder and Async profiler, which help you get insight into how CPU and memory resources are allocated in your application. To run Java Flight Recorder or Async profiler, just click the icon on the main toolbar or the run icon in the gutter. These tools are only available in the professional and fully-featured commercial IDE, IntelliJ IDEA Ultimate.

Source: IntelliJ IDEA

Syntax highlighting for over 20 different programming languages

IntelliJ IDEA 2019.2 provides syntax highlighting for more than 20 languages. To provide this support, the upcoming version comes integrated with the TextMate text editor and a collection of built-in grammar files for various languages. You can find the full list of supported languages in Preferences / Settings | Editor | TextMate Bundles. If you require syntax highlighting for additional languages, you can download the TextMate bundle for the language in question and import it into IntelliJ IDEA.

Commit directly from the Local Changes

With this version, developers can commit directly from the Local Changes tab without having to go through a separate Commit dialog. While working on a commit, you can browse through the source code, view the file history, view the diff for the file in the same area as the commit, or use other features of the IDE. In previous versions, all these actions were impossible because the modal commit dialog blocked all other IDE functionality.
Additionally, there is a new feature for projects that use version control systems like Git or Mercurial. Just press the Commit shortcut (Ctrl+K on Windows/Linux, Cmd+K on macOS) and the IDE will select the modified files for the commit. You can then review the selected files and change the file or code chunk.

Source: IntelliJ IDEA

These were some of the features coming in IntelliJ IDEA 2019.2. You can read the entire release notes and stay updated via the IntelliJ IDEA blog to learn more. Developers are excited about the profiling tools and the other features bundled with this release:

https://twitter.com/Rahamat87523498/status/1149221123256492032
https://twitter.com/goKarumi/status/1148849477136146432
https://twitter.com/matsumana/status/1140659765518852097

Related reads:
What's new in IntelliJ IDEA 2018.2
IntelliJ IDEA 2018.3 Early Access Program is now open!
Netbeans, Intellij IDEA and PyCharm come to Haiku OS

Apple patched vulnerability in Mac’s Zoom Client; plans to address ‘video on by default’

Savia Lobo
11 Jul 2019
3 min read
After the recent disclosure of the vulnerability in Mac's Zoom Client, the vulnerable component was quickly patched. On July 9, the same day security researcher Jonathan Leitschuh revealed the vulnerability publicly, a patch was released that removes the local web server entirely and also allows users to manually uninstall Zoom. The Mac Zoom Client vulnerability allowed any malicious website to activate a user's camera and forcibly join them to a Zoom call without their authority. Apple said its update does not require any user interaction and is deployed automatically.

How can Mac users ensure they get these updates?

Because the vulnerability allowed the Zoom Client to be re-installed, the use of the local web server on Mac devices was stopped first; the local web server is removed entirely once the Zoom Client is updated. Mac users are prompted in the Zoom user interface (UI) to update their client after the patch is deployed. Once the update completes, the local web server is completely removed from the device.

Zoom has added a new option to its menu bar that allows users to manually and completely uninstall the Zoom Client, including the local web server. Once the patch is deployed, a new menu option appears that says "Uninstall Zoom". Clicking that button removes Zoom from the user's device entirely, along with the user's saved settings.

Plans to address 'video on by default'

Zoom has also announced a planned release this weekend (July 12) that will address another security concern: video being on by default. With the July 12 release:

First-time users who select the "Always turn off my video" box will automatically have their video preference saved. The selection will automatically be applied to the user's Zoom client settings, and their video will be OFF by default for all future meetings.
Returning users can update their video preferences and make video OFF by default at any time through the Zoom client settings.

Zoom spokesperson Priscilla McCarthy told TechCrunch, "We're happy to have worked with Apple on testing this update. We expect the web server issue to be resolved today. We appreciate our users' patience as we continue to work through addressing their concerns."

Regarding the quick action to patch the Zoom Client vulnerability, Leitschuh tweeted that the willingness to patch represented an "about face": "it went from rationalizing its existing strategy to planning a fix in a matter of hours", Engadget reports.

https://twitter.com/JLLeitschuh/status/1148686921528414208

To know more about this news in detail, read the Zoom blog.

Related reads:
Apple plans to make notarization a default requirement in all future macOS updates
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Apple to merge the iPhone, iPad, and Mac apps by 2021
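Users who want to verify the web server is gone can probe the loopback port it listened on. Leitschuh's disclosure cited localhost port 19421; this unofficial sketch simply checks whether anything still accepts connections there:

```python
import socket

def zoom_web_server_listening(port: int = 19421) -> bool:
    # The vulnerable Zoom helper accepted plain TCP connections on localhost.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex(("127.0.0.1", port)) == 0

# Expect False after the patches: nothing should be listening on the port anymore.
print(zoom_web_server_listening())
```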


RISC-V Foundation officially validates RISC-V base ISA and privileged architecture specifications

Vincy Davis
11 Jul 2019
2 min read
Yesterday, the RISC-V Foundation announced that the RISC-V base Instruction Set Architecture (ISA) and privileged architecture specifications have been ratified. The RISC-V Foundation drives the adoption and implementation of the free and open RISC-V ISA. The RISC-V base architecture acts as the interface between application software and hardware.

Krste Asanović, chairman of the RISC-V Foundation Board of Directors, says, "The RISC-V ecosystem has already demonstrated a large degree of interoperability among various implementations. Now that the base architecture has been ratified, developers can be assured that their software written for RISC-V will run on all similar RISC-V cores forever."

The RISC-V privileged architecture covers all aspects of RISC-V systems, including privileged instructions and the additional functionality required for running operating systems and attaching external devices. Privilege levels provide protection between different components of the software stack, backed by a core set of privileged ISA extensions. The ISA extensions have optional extensions and variants, including the machine ISA, supervisor ISA, and hypervisor ISA.

"The RISC-V privileged architecture serves as a contract between RISC-V hardware and software such as Linux and FreeBSD. Ratifying these standards is a milestone for RISC-V," said Andrew Waterman, chair of the RISC-V Privileged Architecture Task Group.

To know more about this announcement in detail, head over to the RISC-V blog.

Related reads:
Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap
AdaCore joins the RISC-V Foundation, adds support for C and Ada compilation
Western Digital RISC-V SweRV Core is now on GitHub