Tech News

Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations

Natasha Mathur
29 Nov 2018
4 min read
The Google Chrome team finally announced the release date for its Autoplay policy earlier this week. The policy had originally shipped with the Chrome 66 stable release back in May this year, but its full rollout was delayed. The latest policy change is now scheduled to arrive with Chrome 71 next month.

The Autoplay policy imposes restrictions that prevent video and audio from autoplaying in the web browser. For websites that want to autoplay their content, the new policy change will block playback by default. For most sites playback will resume automatically, but in other cases a small code adjustment will be required to resume the audio.

Additionally, Google has added a new approach to the policy that tracks users' past behavior with sites that have autoplay enabled. If a user regularly lets audio play for more than 7 seconds on a website, autoplay gets enabled for that website. This is done with the help of a "Media Engagement Index" (MEI), an index stored locally per Chrome profile on a device. The MEI tracks the number of visits to a site that include audio playback longer than 7 seconds. Each website gets a score between zero and one in the MEI, where higher scores indicate that the user doesn't mind audio playing on that website. For new user profiles, or if a user clears their browsing data, a pre-seeded list based on anonymized, aggregated MEI scores is used to decide which websites can autoplay. The pre-seeded site list is algorithmically generated, and only sites where enough users permit autoplay are added to it.

"We believe by learning from the user – and anticipating their intention on a per website basis – we can create the best user experience. If users tend to let content play from a website, we will autoplay content from that site in the future. Conversely, if users tend to stop autoplay content from a given website, we will prevent autoplay for that content by default", mentions the Google team.

The reason behind the delay

The Autoplay policy had been delayed by Google after receiving feedback from the Web Audio developer community, especially web game developers and WebRTC developers. As per the feedback, the autoplay change was affecting many web games and audio experiences, especially on sites that had not been updated for the change. Delaying the rollout gave web game developers enough time to update their websites. Moreover, Google also explored ways to reduce the negative impact of the policy on websites with audio enabled, and has since adjusted its implementation of Web Audio to reduce the number of websites originally impacted.

New adjustments made for the developers

Under the new adjustments, audio will resume automatically when the user has interacted with the page and the start() method of a source node is called. Source nodes represent the individual audio snippets that most games play, for example the sound that plays when a player collects a coin, or the background music for a particular stage within a game. Game developers call the start() function on source nodes whenever any of these sounds are needed, so these changes will enable autoplay in most web games as soon as the user starts playing.
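The MEI behavior described above is easy to picture with a short sketch. The Python below is a conceptual model only, not Chrome's implementation: the 7-second rule comes from the description above, while the class name, the normalization and the 0.3 autoplay threshold are invented for illustration.

```python
# Conceptual sketch of a Media Engagement Index-style score (not Chrome's code).
# Assumptions: per-site visit counts are tracked locally; a visit "counts" when
# audible playback lasts longer than 7 seconds; the score is normalized to 0..1.
from collections import defaultdict

class MediaEngagementIndex:
    def __init__(self, preseed=None):
        # preseed: optional {origin: score} list shipped for fresh profiles
        self.preseed = preseed or {}
        self.visits = defaultdict(int)                  # total visits per origin
        self.significant_playbacks = defaultdict(int)   # visits with >7s audible playback

    def record_visit(self, origin, audible_seconds):
        self.visits[origin] += 1
        if audible_seconds > 7:
            self.significant_playbacks[origin] += 1

    def score(self, origin):
        if self.visits[origin] == 0:
            return self.preseed.get(origin, 0.0)        # fall back to the pre-seeded list
        return self.significant_playbacks[origin] / self.visits[origin]

    def autoplay_allowed(self, origin, threshold=0.3):
        # threshold is illustrative; Chrome's real cutoff is not documented here
        return self.score(origin) >= threshold

mei = MediaEngagementIndex(preseed={"music.example.com": 0.8})
mei.record_visit("news.example.com", audible_seconds=12)
print(mei.autoplay_allowed("news.example.com"), mei.autoplay_allowed("music.example.com"))
```

In this toy model, a fresh profile falls back to the pre-seeded score, while a profile with history earns autoplay on sites where long audible playback is routine, which is the behavior the Chrome team describes.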
The Google team has also introduced a mechanism that allows users to disable the autoplay policy for cases where the automatic learning doesn't work as expected.

Along with the new autoplay policy update, Google will also stop showing existing annotations on YouTube videos to viewers starting from January 15, 2019, after which the remaining annotations will be removed.

"We always put our users first but we also don't want to let down the web development community. We believe that with our adjustments to the implementation of the policy, and the additional time we provided for web audio developers to update their code, that we will achieve this balance with Chrome 71", says the Google team.

For more information, check out Google's official blog post.

"ChromeOS is ready for web development" – A talk by Dan Dascalescu at the Chrome Web Summit 2018
Day 1 of Chrome Dev Summit 2018: new announcements and Google's initiative to close the gap between web and native
Meet Carlo, a web rendering surface for Node applications by the Google Chrome team

Microsoft announces Internet Explorer 10 will reach end-of-life by January 2020

Bhagyashree R
30 Jan 2019
2 min read
Microsoft shared in a blog post yesterday that, along with Windows 7, it is also ending security updates and technical support for Internet Explorer 10 by January 2020, and users are advised to upgrade to IE11 by then. Support for IE10 and below ended back in 2016, except on a few environments like Windows Server 2012 and some embedded versions, and now Microsoft is pulling the plug on those few remaining environments.

Microsoft wrote in its blog post, "We encourage you to use the time available to pilot IE11 in your environments. Upgrading to the latest version of Internet Explorer will ease the migration path to Windows 10, Windows Server 2016 or 2019, or Windows 10 IoT, and unlock the next generation of technology and productivity. It will also allow you to reduce the number of Internet Explorer versions you support in your environment."

Commercial customers of Windows Server 2012 and Windows Embedded 8 Standard can download IE11 via the Microsoft Update Catalog, or get the IE11 upgrade through Windows Update and Windows Server Update Services (WSUS), which Microsoft will publish later this year. IE10 will continue to receive updates for Windows 10, Windows Server 2016 or 2019, or Windows 10 IoT throughout 2019; you can find these updates on the Update Catalog and the WSUS channel as a Cumulative Update for Internet Explorer 10. Similarly, updates for IE11 will be labeled as Cumulative Update for Internet Explorer 11 on the Microsoft Update Catalog, Windows Update, and WSUS.

Many Hacker News users are also speculating that support for IE11 could end by 2025. One user said, "If anyone is wondering about IE11, MS says 'Internet Explorer 11 will continue receiving security updates and technical support for the lifecycle of the version of Windows on which it is installed.' Extended support for Windows 10 ends on October 14, 2025. Extended support for Windows Server 2016 ends on January 11, 2027. Presumably one of those 2 dates could be considered the termination date for IE11."

Another Hacker News user believes, "...it is good time to start considering ending IE11 support as well, especially with Chromium-Edge coming out later this year. Edge is getting a Chromium back-end with talk of Windows 7 and 8 support. So, perhaps that's a strategy to kill IE11 too (fingers crossed)."

Read the official announcement by Microsoft to know more details.

Microsoft Office 365 now available on the Mac App Store
Microsoft acquires Citus Data with plans to create a 'Best Postgres Experience'
Microsoft's Bing 'back to normal' in China

Introducing Luna, world’s first programming language with dual syntax representation, data flow modeling and much more!

Amrata Joshi
17 Jun 2019
3 min read
Luna, a data processing and visualization environment, provides a library of highly tailored, domain-specific components as well as a framework for building new components. Luna focuses on domains related to data processing, such as IoT, bioinformatics, data science, graphic design and architecture.

What's so interesting about Luna?

Data flow modeling
Luna is a data flow modeling whiteboard that allows users to draw components and the way data flows between them. Components in Luna are simply nested data flow graphs, and users can enter any component or its subcomponents to move from high to low levels of abstraction. It is also designed as a general-purpose programming language with two equivalent representations, visual and textual (a rough sketch of the data-flow idea follows at the end of this piece).

Data processing and visualizing
Luna components can visualize their results and use colors to indicate the type of data they exchange. Users can compare all the intermediate outcomes and understand the flow of data by looking at the graph. They can also tweak parameters and observe how they affect each step of the computation in real time.

Debugging
Luna can assist in analyzing network service outages and data corruption. If an error occurs, Luna tracks and displays its path through the graph so that users can easily follow it and understand where it comes from. It also records and visualizes information about performance and memory consumption.

Luna Explorer, the search engine
Luna comes with Explorer, a context-aware fuzzy search engine that lets users query libraries for desired components as well as browse their documentation. Since Explorer is context-aware, it can understand the flow of data, predict users' intentions, and adjust the search results accordingly.

Dual syntax representation
Luna is also the world's first programming language that features two equivalent syntax representations, visual and textual.

Automatic parallelism
Luna also features automatic parallelism built on the state-of-the-art Haskell GHC runtime system, which can run thousands of threads in a fraction of a second. It automatically partitions a program and schedules its execution over the available CPU cores.

Users seem to be happy with Luna. One user commented on Hacker News, "Luna looks great. I've been doing work in this area myself and hope to launch my own visual programming environment next month or so." Others are pleased that Luna pairs its graphs with a text syntax and supports building functional blocks. Another user commented, "I like that Luna has a text syntax. I also like that Luna supports building graph functional blocks that can be nested inside other graphs. That's a missing link in other tools of this type that limits the scale of what you can do with them."

To know more about this, check out the official Luna website.

Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
Polyglot programming allows developers to choose the right language to solve tough engineering problems
Researchers highlight impact of programming languages on code quality and reveal flaws in the original FSE study
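As a footnote to the data flow modeling described above, here is a rough Python analogy of components connected by data flow. It is not Luna syntax; the Node class and make_normalizer helper are invented purely to illustrate the idea of nested, composable data-flow graphs.

```python
# Rough Python analogy of the data-flow idea described above -- NOT Luna code.
# Each component is a node with inputs; evaluation follows the data-flow graph,
# and a component can itself wrap a nested graph of sub-components.
class Node:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def run(self):
        # Evaluate upstream nodes first, pass plain values straight through.
        return self.fn(*(i.run() if isinstance(i, Node) else i for i in self.inputs))

def make_normalizer(values):
    # A "component" made of nested nodes: normalize = value / max(values)
    maximum = Node(max, values)
    return Node(lambda vs, m: [v / m for v in vs], values, maximum)

readings = [3.0, 9.0, 6.0]
pipeline = make_normalizer(readings)
print(pipeline.run())  # [0.333..., 1.0, 0.666...]
```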

DARPA on the hunt to catch deepfakes with its AI forensic tools underway

Natasha Mathur
08 Aug 2018
5 min read
The U.S. Defense Advanced Research Projects Agency (DARPA) has come out with AI-based forensic tools to catch deepfakes, first reported by MIT Technology Review yesterday. According to MIT Technology Review, more tools are currently in development to expose fake images and revenge porn videos on the web. DARPA's deepfake mission project was announced earlier this year.

[Image: Alec Baldwin on Saturday Night Live, face-swapped with Donald Trump]

As mentioned in the MediFor blog post, "While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns". This is one of the major reasons why DARPA forensics experts are keen on finding methods to detect deepfake videos and images.

How did deepfakes originate?

Back in December 2017, a Reddit user named "DeepFakes" posted extremely real-looking explicit videos of celebrities. He used deep learning techniques to insert celebrities' faces into adult movies. Using deep learning, one can combine and superimpose existing images and videos onto original images or videos to create realistic-seeming fake videos. As per MIT Technology Review, "Video forgeries are done using a machine-learning technique -- generative modeling -- lets a computer learn from real data before producing fake examples that are statistically similar". Video tampering is done using two neural networks -- generative adversarial networks -- which work in conjunction "to produce ever more convincing fakes".

Why are deepfakes toxic?

An app named FakeApp was released earlier this year which made creating deepfakes quite easy. FakeApp uses neural networking tools developed by Google's AI division, and the app trains itself to perform image-recognition tasks using trial and error. Ever since its release, the app has been downloaded more than 120,000 times, and there are tutorials online on how to create deepfakes. Apart from this, there are regular requests on deepfake forums asking users for help in creating face-swap porn videos of ex-girlfriends, classmates, politicians, celebrities, and teachers. Deepfakes can even be used to create fake news, such as world leaders declaring war on a country. The toxic potential of this technology has led to growing concern, as deepfakes have become a powerful tool for harassing people. Once deepfakes found their way onto the world wide web, websites such as Twitter and PornHub banned them from being posted on their platforms. Reddit also announced a ban on deepfakes earlier this year, killing the "deepfakes" subreddit, which had more than 90,000 subscribers, entirely.

MediFor: DARPA's AI weapon to counter deepfakes

DARPA's Media Forensics group, also known as MediFor, works along with other researchers on developing AI tools to catch deepfakes. It is currently focusing on four techniques to catch the audiovisual discrepancies present in a forged video: analyzing lip sync, detecting speaker inconsistency, scene inconsistency and content insertions.

One technique comes from a team led by Professor Siwei Lyu of SUNY Albany. Lyu mentioned that they "generated about 50 fake videos and tried a bunch of traditional forensics methods. They worked on and off, but not very well". As deepfakes are created using static images, Lyu noticed that the faces in deepfake videos rarely blink, and that eye movement, if present, is quite unnatural.

An academic paper titled "In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking," by Yuezun Li, Ming-Ching Chang and Siwei Lyu explains a method to detect forged videos. It makes use of Long-term Recurrent Convolutional Networks (LRCN). According to the research paper, people blink on average about 17 times a minute, or 0.283 times per second; this rate increases with conversation and decreases while reading. There are several other techniques used for eye blink detection, such as detecting the eye state by computing the vertical distance between eyelids, measuring the eye aspect ratio (EAR), and using a convolutional neural network (CNN) to detect open and closed eye states. But Li, Chang, and Lyu take a different approach, relying on a Long-term Recurrent Convolutional Network (LRCN) model. They first perform pre-processing to identify facial features and normalize the video frame orientation, then pass cropped eye images into the LRCN for evaluation. This technique is quite effective and better than the other approaches, with a reported accuracy of 0.99 (LRCN) compared to 0.98 (CNN) and 0.79 (EAR); a small sketch of the EAR computation itself follows at the end of this piece.

However, Lyu says that a skilled video editor can fix non-blinking deepfakes by using images that show blinking eyes. But Lyu's team has an effective technique in the works to counter even that, though he hasn't divulged any details. Others at DARPA are on the lookout for similar cues, such as strange head movements and odd eye color, as these little details are bringing the team ever closer to reliable detection of deepfakes.

As mentioned in the MIT Technology Review post, "the arrival of these forensics tools may simply signal the beginning of an AI-powered arms race between video forgers and digital sleuths". MediFor also states that "If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video". Deepfakes need to stop, and DARPA seems all set to fight against them.

Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news
A new WPA/WPA2 security attack in town: Wi-fi routers watch out!
YouTube has a $25 million plan to counter fake news and misinformation
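To make the eye aspect ratio (EAR) cue mentioned above concrete, here is a minimal sketch that computes EAR from six eye landmarks, the kind produced by a facial landmark detector. It illustrates the baseline blink measure that Li, Chang, and Lyu compare against, not their LRCN model, and the sample coordinates are made up.

```python
# Minimal sketch of the eye aspect ratio (EAR) blink cue mentioned above,
# computed from six eye landmarks ordered p1..p6 around the eye.
import numpy as np

def eye_aspect_ratio(eye):
    # eye: array of shape (6, 2) with landmarks ordered p1..p6
    vertical_1 = np.linalg.norm(eye[1] - eye[5])   # p2-p6
    vertical_2 = np.linalg.norm(eye[2] - eye[4])   # p3-p5
    horizontal = np.linalg.norm(eye[0] - eye[3])   # p1-p4
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

open_eye = np.array([[0, 0], [2, 2], [4, 2], [6, 0], [4, -2], [2, -2]], float)
closed_eye = np.array([[0, 0], [2, 0.3], [4, 0.3], [6, 0], [4, -0.3], [2, -0.3]], float)
print(eye_aspect_ratio(open_eye), eye_aspect_ratio(closed_eye))  # EAR drops as the eye closes
```

A blink shows up as a brief dip in EAR across consecutive frames; per the paper's comparison, the learned LRCN approach still outperforms this simple geometric cue.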

Japanese Anime studio Khara is switching its primary 3D CG tools to Blender

Sugandha Lahoti
19 Aug 2019
4 min read
Popular Japanese animation studio Khara announced on Friday that it will be moving to the open source 3D software Blender as its primary 3D CG tool. Khara is a motion picture planning and production company and is currently working on "EVANGELION:3.0+1.0", a film to be released in June 2020. Initially, it will use Blender only partially for 'EVANGELION:3.0+1.0', but will make the full switch once that project is finished. Khara is also helping the Blender Foundation by joining the Development Fund as a corporate member. Last month, Epic Games granted Blender $1.2 million in cash; following Epic Games, Ubisoft also joined the Blender Development Fund and adopted Blender as its main DCC tool.

Why Khara opted for Blender

Khara had been using Autodesk's 3ds Max as its primary 3D CG tool so far. However, its project scale grew bigger than what was possible with 3ds Max. 3ds Max is also quite expensive; according to Autodesk's website, the annual fee for a single user is $2,396. Khara also had to reach out to small and medium-sized businesses for its projects. Another complaint was that Autodesk took time to release improvements to its proprietary software, something that happens at a much faster rate in an open source environment. Khara had also considered Maya as an alternative, but dropped the idea as it would have resulted in duplicated work and resources. Finally, it switched to Blender, as it is open source and free.

The studio was also intrigued by the new Blender 2.8 release, which provided a 3D creation tool that works like "paper and pencil". Blender's Grease Pencil feature enables you to combine the 2D and 3D worlds together right in the viewport. It comes with a new multi-frame edit mode with which you can change and edit several frames at the same time, and it has a Build modifier to animate drawings, similar to the Build modifier for 3D objects.

"I feel the latest Blender 2.8 is intentionally 'filling the gap' with 3ds Max to make those users feel at home when coming to Blender. I think the learning curve should be no problem.", said Takumi Shigyo, Project Studio Q Production Department. Khara founded "Project Studio Q, Inc." in 2017, a company focusing mainly on movie production and the training of anime artists.

Providing more information on the use of Blender, Hiroyasu Kobayashi, General Manager of Digital Dpt. and Director of Board of Khara, said in the announcement, "Preliminary testing has been done already. We are now at the stage to create some cuts actually with Blender as 'on live testing'. However, not all the cuts can be done by Blender yet. But we think we can move out from our current stressful situation if we place Blender into our work flows. It has enough potential 'to replace existing cuts'."

While Blender will be used for the bulk of the work, Khara does have a backup plan if there's anything Blender struggles with. Kobayashi added, "There are currently some areas where Blender cannot take care of our needs, but we can solve it with the combination with Unity. Unity is usually enough to cover 3ds Max and Maya as well. Unity can be a bridge among environments." Khara is also speaking with its partner companies about using Blender together.

Khara's transition was well appreciated by people.
https://twitter.com/docky/status/1162279830785646593
https://twitter.com/eoinoneillPDX/status/1154161101895950337
https://twitter.com/BesuBaru/status/1154015669110710273

Blender 2.80 released with a new UI interface, Eevee real-time renderer, grease pencil, and more
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects

Google open sources Filament - a physically based rendering engine for Android, Windows, Linux and macOS

Sugandha Lahoti
06 Aug 2018
2 min read
Google has just open-sourced Filament, its physically based rendering (PBR) engine for Android, which can also be used on Windows, Linux, and macOS. Filament provides a set of tools and APIs that help Android developers easily create high-quality 2D and 3D rendering. Filament is currently used in the Sceneform library, both at runtime on Android devices and as the renderer inside the Android Studio plugin. Apart from Filament, Google has also open sourced Materials, the full reference documentation for its material system, and made available Material Properties, a reference sheet for the standard material model.

Google's Filament comes packed with the following features:
The rendering system performs efficiently on mobile platforms; the primary target is OpenGL ES 3.x class GPUs.
The rendering system emphasizes overall picture quality, and artists can iterate often and quickly on their assets because the system lets them do so intuitively.
The physically based approach also allows developers to create visually believable materials even if they don't understand the theory behind the implementation.
The system relies on as few parameters as possible to reduce trial and error and allows users to quickly master the material model.
The system uses physical units everywhere possible: distances in meters or centimeters, color temperatures in Kelvin, light units in lumens or candelas, etc.
The rendering library is as small as possible so any application can bundle it without increasing the binary to unwanted sizes.

Filament APIs
There are two major APIs:
Native C++ API for Android, Linux, macOS, and Windows
Java/JNI API for Android, Linux, macOS, and Windows

Backends
OpenGL 4.1+ for Linux, macOS, and Windows
OpenGL ES 3.0+ for Android
Vulkan 1.0 for Android, Linux, macOS (with MoltenVK), and Windows

[Image: A sample material rendered with Filament. Source: GitHub]

You can check out the Filament documentation for an in-depth explanation of real-time PBR, the graphics capabilities, and the implementation of Filament.

Google open sources Seurat to bring high precision graphics to Mobile VR
Google releases Android Things library for Google Cloud IoT Core
Google updates biometric authentication for Android P, introduces BiometricPrompt API

NVIDIA announces CUDA 10.2 will be the last release to support macOS

Bhagyashree R
25 Nov 2019
3 min read
NVIDIA announced the release of CUDA 10.2 last week. This is the last version to support macOS for developing CUDA applications; macOS support will be dropped completely in the next release. Other updates include libcu++, new interoperability APIs, and more.

Key updates in CUDA 10.2

General CUDA 10.2 updates
New APIs: CUDA 10.2 ships with CUDA Virtual Memory Management APIs. New interoperability APIs are added for buffer allocation, synchronization, and streaming. However, these are in beta and may change in future releases.
Support for new operating systems: This release adds support for a few new operating systems, including Fedora 29, Red Hat Enterprise Linux (RHEL) 7.x and 8.x, OpenSUSE 15.x, SUSE SLES 12.4 and SLES 15.x, Ubuntu 16.04.6 LTS and Ubuntu 18.04.3 LTS. In CUDA 10.2, RHEL 6.x is deprecated and support will be dropped in the next release of CUDA.
Increased texture size limit for Maxwell+ GPUs: The 1D linear texture size limit for Maxwell+ GPUs in CUDA is now increased to 2^28.

Updates in CUDA tools
The NVIDIA CUDA Compiler (NVCC) now supports Clang 8.0 and Xcode 10.2 as host compilers. There is a new -forward-unknown-to-host-compiler option that allows forwarding options not recognized by NVCC to the host compiler. Visual Profiler and NVProf now allow tracing features for non-root and non-admin users on desktop platforms; events and metrics profiling is still restricted for non-root and non-admin users. Also, starting with CUDA 10.2, Visual Profiler and NVProf use the dynamic/shared CUPTI library, so users are required to set the path to the CUPTI library before launching Visual Profiler and NVProf.

Updates in CUDA libraries
cuBLAS: The cuBLAS library is a fast GPU-accelerated implementation of the standard basic linear algebra subroutines (BLAS). In CUDA 10.2, performance is further improved on some large and other GEMM sizes due to increased internal workspace size.
cuSOLVER: This library includes a collection of direct solvers that deliver significant acceleration for computer vision, CFD, and linear optimization apps. In this release, a new Tensor Cores Accelerated Iterative Refinement Solver (TCAIRS) is introduced, and the cusolverMg library adds 'cusolverMgGetrf' and 'cusolverMgGetrs' to support multi-GPU LU factorization.
cuFFT: This library provides GPU-accelerated FFT implementations that perform up to 10x faster than CPU-only alternatives. This release comes with improved performance and scalability for these use cases: multi-GPU non-power-of-2 transforms, R2C and Z2D odd-sized transforms, and 2D transforms with small sizes and large batch counts.

These were a few of the updates in CUDA 10.2. Read the official release notes to know what else has shipped with this release.

CUDA 10.1 released with new tools, libraries, improved performance and more
Implementing color and shape-based object detection and tracking with OpenCV and CUDA [Tutorial]
NVIDIA releases Kaolin, a PyTorch library to accelerate research in 3D computer vision and AI

Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?

Savia Lobo
14 Aug 2018
5 min read
On 31st July 2018, security researcher Eric Holmes gained access to Homebrew's GitHub repo with ease (he documents his experience in an in-depth Medium post). Homebrew is a free and open-source software package management system with well-known packages like node, git, and many more; it simplifies the installation of software on macOS. Eric gained git push access to Homebrew/brew and Homebrew/homebrew-core, and was able to get in and make his first commit to Homebrew's GitHub repo within 30 minutes.

Attack = higher chances of obtaining user credentials

After getting such easy access to Homebrew's GitHub repositories, Eric's prime motive was to uncover user credentials of some of the members of the Homebrew GitHub org. For this, he made use of an OSINT tool by Michael Henriksen called gitrob, which automates the credential search. However, he could not find anything interesting. Next, he explored Homebrew's previously disclosed issues on https://hackerone.com/Homebrew, which led him to the observation that Homebrew runs a Jenkins instance that is (intentionally) publicly exposed at https://jenkins.brew.sh.

Digging further, Eric found that the builds in the "Homebrew Bottles" project were making authenticated pushes to the BrewTestBot/homebrew-core repo, which led him to an exposed GitHub API token. The token opened commit access to these core Homebrew repos:
Homebrew/brew
Homebrew/homebrew-core
Homebrew/formulae.brew.sh

Eric stated in his post, "If I were a malicious actor, I could have made a small, likely unnoticed change to the openssl formulae, placing a backdoor on any machine that installed it." Via such a backdoor, intruders could have gained access to private company networks that use Homebrew, which could further lead to a data breach on a large scale.

Eric reported the issue to Homebrew developer Mike McQuaid, who publicly disclosed it on the blog at https://brew.sh/2018/08/05/security-incident-disclosure/. Within a few hours the credentials had been revoked, replaced and sanitised within Jenkins so they would not be revealed in future. Homebrew/brew and Homebrew/homebrew-core were updated so non-administrators on those repositories cannot push directly to master. The Homebrew team worked with GitHub to audit and ensure that the token wasn't used maliciously and didn't make any unexpected commits to the core Homebrew repos. (A small token-scope audit sketch follows at the end of this piece.)

As an ethical hacker, Eric reported the vulnerabilities he found to the Homebrew team and did no harm to the repo itself. But not all projects may have such happy endings.

How can one safeguard their systems from supply chain attacks?

The precautions Eric Holmes took were commendable: he informed the Homebrew developers. However, not every hacker has good intentions, and it is one's responsibility to keep a check on all the supply chains associated with an organization.

Keeping a check on all the libraries
One should not allow random libraries into the supply chain, because libraries are difficult to partition from an organization's custom code; both run with the same privileges, putting the company's security at risk. Make sure to enforce policies around which code the company allows: only projects with high popularity, active committers, and evidence of process should be let in.

Establishing guidelines
Each company should create guidelines for the secure use of the libraries it selects. That means defining up front what each library is expected to be used for, detailing for developers how to safely install, configure, and use each library within their code, and identifying dangerous methods and how to use them safely.

A thorough vigilance within the inventory
Every organization should keep a check on its inventory to know which open source libraries it is using, and should set up a notification system that keeps it abreast of new vulnerabilities affecting its applications and servers.

Protection during runtime
Organizations should also make use of runtime application security protection (RASP) to prevent both known and unknown library vulnerabilities from being exploited. If new vulnerabilities are noticed, a RASP infrastructure enables one to respond in minutes.

The software supply chain is an important part of creating and deploying applications quickly, so one should take care to prevent it from being misused. Read the detailed story of Homebrew's narrow escape on its blog post, and Eric's firsthand account of how he went about planning the attack and the motivation behind it in his Medium post.

DCLeaks and Guccifer 2.0: Hackers used social engineering to manipulate the 2016 U.S. elections
Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news
YouTube has a $25 million plan to counter fake news and misinformation
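One practical takeaway from the exposed-token incident above is to audit what your CI tokens can actually do. The sketch below is a generic aid, not Homebrew's tooling: it relies on GitHub's documented behaviour of echoing a classic personal access token's scopes in the X-OAuth-Scopes response header of any authenticated API call; the allowed_scopes default and the GITHUB_TOKEN environment variable are illustrative assumptions.

```python
# Minimal sketch: check what a (classic) GitHub personal access token can do
# before it ever lands in CI logs. GitHub echoes a classic token's scopes in the
# X-OAuth-Scopes response header of an authenticated API call.
import os
import requests

def audit_token(token, allowed_scopes=("public_repo",)):
    resp = requests.get(
        "https://api.github.com/user",
        headers={"Authorization": f"token {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    scopes = {s.strip() for s in resp.headers.get("X-OAuth-Scopes", "").split(",") if s.strip()}
    excessive = scopes - set(allowed_scopes)
    if excessive:
        print(f"Token for {resp.json()['login']} has broader scopes than expected: {sorted(excessive)}")
    return excessive

if __name__ == "__main__":
    audit_token(os.environ["GITHUB_TOKEN"])  # reports loudly if the token is over-scoped
```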

Surprise NPM layoffs raise questions about the company culture

Fatema Patrawala
02 Apr 2019
8 min read
Headlines about the recent NPM layoffs have raised questions about the company's culture and ethics. NPM, which stands for Node Package Manager, is now being mocked as "Not Politely Managed". The San Francisco startup NPM Inc, the company behind the widely used npm JavaScript package repository, laid off 5 employees in a wrong, unprofessional and unethical manner.

The incident is a reminder that many of us, while accepting those lucrative job offers, merely ask companies not to be unethical, and seldom expect them to actually be good. Indeed, social psychologist Roy Baumeister convincingly argues there's an evolutionary reason to focus more on getting people to avoid bad things than to do good things; among other reasons, humans are hardwired to consider potential threats that could harm us. Bob Sutton, author of influential books like Good Boss, Bad Boss and The No Asshole Rule, draws on Baumeister's work to highlight why it's so critical to stamp out poorly behaving leaders (and employees) in organizations.

Frédéric Harper, a developer advocate who was among those who lost their jobs, posted at length about the situation on Twitter. His concerns did not come from being laid off. That happens, he said, and will happen again. "It's the total lack of respect, empathy and professionalism of the process," he said. In an email to The Register, he said there appeared to be a disconnect between the company's professed values and its behavior.

NPM layoffs took root under the new leadership

The layoffs actually started last summer when the company hired a new CEO, Bryan Bogensberger, to take the company from about $3m in annual revenue to 10x-20x that, explained an early NPM employee who spoke with The Register on condition of anonymity. Bogensberger was previously the CEO and co-founder of Inktank, a leading provider of scale-out, open source storage systems that was acquired by Red Hat, Inc. for $175 million in 2014. He has been running NPM since around July or August 2018, a source explained, but wasn't actually announced as CEO until January 2019 because his paperwork wasn't in order.

Bogensberger brought in his own people, displacing longtime NPM staffers. "As he stacked the management ranks with former colleagues from a previous startup, there were unforced errors," another source explained to The Register. A culture of suspicion and hostility emerged under the new leadership. An all-hands meeting was held at NPM at which employees were encouraged to ask frank questions about the company's new direction. Those who spoke up were summarily fired last week, the individual said, at the recommendation of an HR consultant.
https://twitter.com/ThatMightBePaul/status/1112843936136159232

People were very surprised by the layoffs at NPM. "There was no sign it was coming. It wasn't skills based because some of them heard they were doing great," said CJ Silverio, ex-CTO at NPM, who was laid off last December. Silverio and Harper are both publicizing the layoff because they declined to sign the non-disparagement clause in the NPM severance package. A non-disparagement clause prevents public disclosure of the company's wrongdoing. A California law which came into effect in January, SB 1300, prohibits non-disparagement clauses in employment severance packages, but in general such clauses are legal. One of the employees fired last Friday was a month away from having stock options vest; the individual could have retained those options by signing a non-disparagement clause, but refused.
https://twitter.com/neverett/status/1110626264841359360

"We can not comment on confidential personnel matters," CEO Bryan Bogensberger said. "However, since November 1, we have approximately doubled in size to 55 people today, and continue to hire aggressively for many positions that will optimize and expand our ability to support and grow the JavaScript ecosystem over the long term."

JavaScript community sees it as a leadership failure

The community is full of outrage over this incident; many have regarded it as a 100% leadership failure. Others have commented that they would put NPM on their "do not apply" list of companies. The news comes as a huge disappointment, and questions are being asked about the continuity of the npm registry. Some have also floated creating a non-profit node packages registry, while others have downgraded their paid package subscription to a free one. Rebecca Turner, core contributor to the project and one of Harper's direct reports, has voluntarily resigned in solidarity with her colleagues who were let go.
https://twitter.com/ReBeccaOrg/status/1113121700281851904

How goodness inspires goodness in organizations

Compelling research by David Jones and his colleagues finds that job applicants prefer to work for companies that show real social responsibility: those that improve their communities, the environment, and the world. Employees are most likely to be galvanized by leaders who are perceived to be fair, virtuous, and self-sacrificing. Separate research by Ethical Systems founder Jonathan Haidt demonstrates that such leaders influence employees to feel a sense of "elevation", a positive emotion that lifts them up as a result of moral excellence. Liz Fong, a developer advocate at Honeycomb, tweeted about the npm layoff that she would never want to be a manager again if she had to go through this kind of process.
https://twitter.com/lizthegrey/status/1112902206381064192

Layoffs becoming more common and frequent in tech

Last week IBM was also in the news, being sued by former employees for violating laws prohibiting age discrimination in the workplace: the Older Workers Benefit Protection Act (OWBPA) and the Age Discrimination in Employment Act (ADEA). Another shocker last week was Oracle laying off a huge number of employees as part of an "organizational restructuring". The reason behind this layoff round was not clear; some said it was done to save money, others that people working on a legacy product were let go. While all of this does raise questions about company culture, it may not be wrong to say that the Internet and social media make corporate scandals harder than ever to hide. With real social responsibility easier than ever to see and applaud, we hope to see more of "the right things" actually getting done.

Update from NPM's statement, 10 days after the incident

After receiving public and community backlash over its actions, NPM published a statement on Medium on April 11: "we let go of 5 people in a company restructuring. The way that we undertook the process, unfortunately, made the terminations more painful than they needed to be, which we deeply regret, and we are sorry. As part of our mission, it's important that we treat our employees and our community well. We will continue to refine and review our processes internally, utilizing the feedback we receive to be the best company and community we can be." Does this mean that any company, for its own selfish motives, can remove its employees and later apologize to clean up its image?

Update on 14th June: special report from The Register

The Register published a special report last Friday saying that JavaScript package registry NPM Inc is planning to fight union-busting complaints brought to America's labor watchdog by fired staffers, rather than settling the claims. An NLRB filing obtained by The Register alleges several incidents in which those terminated claim executives took action against them in violation of labor laws. On February 27, 2019, the filing states, a senior VP "during a meeting with employees at a work conference in Napa Valley, California, implicitly threatened employees with unspecified reprisals for raising group concerns about their working conditions." The document also describes a March 25, 2019, video conference call in which it was "impliedly [sic] threatened that [NPM Inc] would terminate employees who engaged in union activities," and a message sent over the company's Keybase messaging system that threatened similar reprisals "for discussing employee layoffs." The alleged threats followed a letter presented to this VP in mid-February that outlined employee concerns about "management, increased workload, and employee retention."

The Register has heard accounts of negotiations between the tech company and its aggrieved former employees, from individuals apprised of the talks, during which a clearly fuming CEO Bryan Bogensberger called off settlement discussions, a curious gambit, if accurate, given the insubstantial amount of money on the table. NPM Inc has defended its moves as necessary to establish a sustainable business, but in prioritizing profit, arguably at the expense of people, it has alienated a fair number of developers who now imagine a future that doesn't depend as much on NPM's resources. The situation has deteriorated to the point that former staffers say the code for the npm command-line interface (CLI) suffers from neglect, with unfixed bugs piling up and pull requests languishing. The Register understands further staff attrition related to the CLI is expected. To know more about this story in detail, check out the report published by The Register.

The npm engineering team shares why Rust was the best choice for addressing CPU-bound bottlenecks
npm Inc. announces npm Enterprise, the first management code registry for organizations
npm JavaScript predictions for 2019: React, GraphQL, and TypeScript are three technologies to learn

PostgreSQL wins ‘DBMS of the year’ 2018 beating MongoDB and Redis in DB-Engines Ranking

Amrata Joshi
09 Jan 2019
4 min read
Last week, DB-Engines announced PostgreSQL as the Database Management System (DBMS) of the year 2018, as it gained more popularity in the DB-Engines Ranking last year than any of the other 343 monitored systems.

Jonathan S. Katz, PostgreSQL contributor, said, "The PostgreSQL community cannot succeed without the support of our users and our contributors who work tirelessly to build a better database system. We're thrilled by the recognition and will continue to build a database that is both a pleasure to work with and remains free and open source."

PostgreSQL, which will turn 30 this year, has won the DBMS title for the second time in a row. It has established itself as the preferred data store amongst developers and has been appreciated for its stability and feature set. In the DBMS market, various systems use PostgreSQL as their base technology, which itself shows how well-established PostgreSQL is.

Simon Riggs, major PostgreSQL contributor, said, "For the second year in a row, the PostgreSQL team thanks our users for making PostgreSQL the DBMS of the Year, as identified by DB-Engines. PostgreSQL's advanced features cater to a broad range of use cases all within the same DBMS. Rather than going for edge case solutions, developers are increasingly realizing the true potential of PostgreSQL and are relying on the absolute reliability of our hyperconverged database to simplify their production deployments."

How the DB-Engines Ranking scores are calculated

To determine the DBMS of the year, the team at DB-Engines subtracted the popularity scores of January 2018 from the latest scores of January 2019. The team used the difference of these numbers rather than a percentage, because a percentage change would favor systems with tiny popularity at the beginning of the year. The popularity of a system is calculated from parameters such as the number of mentions of the system on websites and the number of mentions in the results of search engine queries; the team at DB-Engines uses Google, Bing, and Yandex for this measurement. In order to count only relevant results, the team searches for <system name> together with the term database, e.g. "Oracle" and "database". The next measure is general interest in the system, for which the team uses the frequency of searches in Google Trends. The number of related questions and the number of interested users on well-known IT-related Q&A sites such as Stack Overflow and DBA Stack Exchange are also checked. The team further counts the number of job offers on the leading job search engines Indeed and Simply Hired, the number of profiles in professional networks such as LinkedIn and Upwork in which the system is mentioned, and the number of tweets in which the system is mentioned. The calculated result is a list of DBMSs sorted by how much they managed to increase their popularity in 2018 (a toy example of this calculation follows at the end of this piece).

1st runner-up: MongoDB

For 2018, MongoDB is the first runner-up; it previously won DBMS of the year in 2013 and 2014. Its growth in popularity has accelerated even further since then, and it remains the most popular NoSQL system. MongoDB keeps adding functionality that was previously outside the NoSQL scope. Last year, MongoDB also added ACID support, which convinced a lot of developers to rely on it for critical data. With improved support for analytics workloads, MongoDB is a great choice for a larger range of applications.

2nd runner-up: Redis

Redis, the most popular key-value store, took third place for DBMS of the year 2018; it had also placed in the top three in 2014. It is best known as a high-performance and feature-rich key-value store. Redis provides a loadable modules system, which means third parties can extend its functionality. These modules offer a graph database, full-text search, time-series features, JSON data type support and much more.

PipelineDB 1.0.0, the high performance time-series aggregation for PostgreSQL, released!
Devart releases standard edition of dbForge Studio for PostgreSQL
MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code
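The calculation described above boils down to a score delta. The toy Python below illustrates the method with made-up placeholder numbers, not real DB-Engines figures.

```python
# Toy illustration of the DB-Engines "DBMS of the year" method described above:
# subtract each system's January 2018 popularity score from its January 2019
# score and rank by the absolute gain. The numbers are placeholders only.
scores_jan_2018 = {"PostgreSQL": 400.0, "MongoDB": 330.0, "Redis": 130.0}
scores_jan_2019 = {"PostgreSQL": 466.0, "MongoDB": 387.0, "Redis": 149.0}

gains = {
    system: scores_jan_2019[system] - scores_jan_2018[system]
    for system in scores_jan_2018
}

for rank, (system, gain) in enumerate(sorted(gains.items(), key=lambda kv: kv[1], reverse=True), 1):
    print(f"{rank}. {system}: +{gain:.1f} points")
```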

Golang plans to add a core implementation of an internal language server protocol

Prasad Ramesh
24 Sep 2018
3 min read
Go, the popular programming language, is adding an internal language server that speaks the Language Server Protocol (LSP). This is expected to bring features like code autocompletion and diagnostics to Golang tooling.

LSP is used between a development tool and a language server to integrate features such as autocomplete, go to definition, find all references and the like into the tool. It was created by Microsoft to define a common language for enabling programming language analyzers to communicate, and it is growing in popularity, with adoption from companies like Codenvy, Red Hat, and Sourcegraph and a rapidly growing list of editor and language communities supporting it (a small sketch of the LSP wire format follows at the end of this piece).

Golang already has a language server available on GitHub. That version supports hover, jump to definition, workspace symbols, and find references, but it does not support code completion and diagnostics. Sourcegraph CEO Quinn Slack stated in a comment on Hacker News: "The idea is that with a Go language server becoming a core part of Go, it will have a lot more resources invested into it and it will surpass where the current implementation is now."

The Go language server made by Sourcegraph, currently available on GitHub, is not a core part of Golang; it uses tools and custom extensions not maintained by the Go team. The hope is that the core LSP implementation will be good enough that Sourcegraph can re-use it in the future, bringing the number of implementations down to just one. Slack said in a comment that they are very happy with this development: "We are 10,000% supportive of this, as we've discussed openly in the golang-tools group and with the Go team. The Go team was commendably empathetic about the optics here, and we urged them very, very, very directly to do this."

This core implementation of LSP by the Golang team is also beneficial for Sourcegraph from a business perspective. Sourcegraph sells a product that lets you search and browse all your code, which involves using language servers for certain features like hovers, definitions and references. Since the core work will be done by the Golang team, Sourcegraph won't have to invest more time into building its own implementation of a Go language server.

For more information, visit the Googlesource website.

Golang 1.11 is here with modules and experimental WebAssembly port among other updates
Why Golang is the fastest growing language on GitHub
Go 2 design drafts include plans for better error handling and generics
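For readers unfamiliar with LSP, the protocol itself is simple: JSON-RPC 2.0 messages framed with a Content-Length header. The sketch below shows that framing for an initialize request; it demonstrates the wire format defined by the LSP specification, not the Go team's server, and the rootUri value is a placeholder.

```python
# Small sketch of how Language Server Protocol messages travel on the wire:
# JSON-RPC 2.0 payloads preceded by a Content-Length header.
import json

def frame(message: dict) -> bytes:
    body = json.dumps(message).encode("utf-8")
    return f"Content-Length: {len(body)}\r\n\r\n".encode("ascii") + body

initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"processId": None, "rootUri": "file:///tmp/project", "capabilities": {}},
}
print(frame(initialize).decode("utf-8"))
```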

Microsoft Build 2019: Introducing WSL 2, the newest architecture for the Windows Subsystem for Linux

Amrata Joshi
07 May 2019
3 min read
Yesterday, on the first day of Microsoft Build 2019, the team at Microsoft introduced WSL 2, the newest architecture for the Windows Subsystem for Linux. With WSL 2, file system performance will increase and users will be able to run more Linux apps. The initial builds of WSL 2 will be available by the end of June this year.
https://twitter.com/windowsdev/status/1125484494616649728
https://twitter.com/poppastring/status/1125489352795201539

What's new in WSL 2?

Run Linux binaries
WSL 2 powers the Windows Subsystem for Linux to run ELF64 Linux binaries on Windows. The new architecture changes how these Linux binaries interact with Windows and the computer's hardware, but it still provides the same user experience as WSL 1.

Linux distros
With this release, individual Linux distros can be run either as a WSL 1 distro or as a WSL 2 distro, can be upgraded or downgraded at any time, and can run side by side. WSL 2 uses an entirely new architecture built on a real Linux kernel.

Increased speed
With this release, file-intensive operations like git clone, npm install, apt update, apt upgrade, and more will get faster. Initial tests run by the team have WSL 2 running up to 20x faster than WSL 1 when unpacking a zipped tarball, and around 2-5x faster when using git clone, npm install and cmake on various projects.

A Linux kernel with Windows
The team will ship an open source, real Linux kernel with Windows, which makes full system call compatibility possible. This is also the first time a Linux kernel will be shipped with Windows. The team is building the kernel in house, and the initial builds will ship version 4.19 of the kernel. The kernel has been designed in tune with WSL 2 and optimized for size and performance. The team will service this Linux kernel through Windows Update, so users will get the latest security fixes and kernel improvements without needing to manage it themselves. The configuration for this kernel will be available on GitHub once WSL 2 is released; the WSL kernel source will consist of links to a set of patches in addition to the long-term stable source.

Full system call compatibility
Linux binaries use system calls to perform functions such as accessing files, requesting memory, creating processes, and more. For WSL 1 the team created a translation layer that interprets most of these system calls and allows them to work on the Windows NT kernel, but it is challenging to implement all of these system calls, which is why some apps don't run properly in WSL 1. WSL 2 includes its own Linux kernel, which has full system call compatibility.

To know more about this news, check out Microsoft's blog post.

Microsoft introduces Remote Development extensions to make remote development easier on VS Code
Docker announces collaboration with Microsoft's .NET at DockerCon 2019
Microsoft and GitHub employees come together to stand with the 996.ICU repository

IBM’s DeepLocker: The Artificial Intelligence powered sneaky new breed of Malware

Melisha Dsouza
13 Aug 2018
4 min read
In the newfound age of Artificial Intelligence, where everything and everyone uses machine learning concepts to make life easier, the dark side of the same technology is often left unexplored. Cybersecurity is gaining a lot of attention these days. The most influential organizations have been brought down by undetected malware that managed to evade even the most secure cyber defense mechanisms, and the job just got easier for cyber criminals who exploit AI to empower themselves and launch attacks. Imagine combining AI with cyber attacks! At last week's Black Hat USA 2018 conference, IBM researchers presented their newly developed, AI-backed malware "DeepLocker". Weaponized AI seems here to stay.

Read also: Black Hat USA 2018 conference highlights for cybersecurity professionals

All you need to know about DeepLocker

Simply put, DeepLocker is a new generation of malware which can fly under the radar and go undetected until its target is reached. It uses an Artificial Intelligence model to identify its target using indicators like facial recognition, geolocation and voice recognition, all of which are easily available on the web these days! What's interesting is that the malware can hide its malicious payload in carrier applications, like a video conferencing application, and go undetected by most antivirus and malware scanners until it reaches specific victims.

Imagine sitting at your computer performing daily tasks. Considering that your profile pictures are available on the internet, your video camera can be used to find a match to your online picture. Once the target (your face) is identified, the malicious payload can be unleashed: your face serves as the key that unlocks the virus. This simple "trigger condition" to unlock the attack is almost impossible to reverse engineer, and the malicious payload will only be unlocked if the intended target is reached. It achieves this by using a deep neural network (DNN) AI model: the simple "if this, then that" trigger condition used by DeepLocker is transformed into a deep convolutional network of the AI model.

[Image: DeepLocker – AI-Powered Concealment. Source: SecurityIntelligence]

DeepLocker makes it really difficult for malware analysts to answer the three main questions: What target is the malware after, people's faces or some other visual clue? What specific instance of the target class is the valid trigger condition? And what is the ultimate goal of the attack payload?

Now that's some commendable work by the IBM researchers. IBM has always strived to make a mark in the field of innovation, and DeepLocker comes as no surprise, as IBM had the highest number of facial recognition patents granted in 2018.

Black Hat USA 2018 sneak preview

The main aim of the IBM researchers (Marc Ph. Stoecklin, Jiyong Jang and Dhilung Kirat) briefing the crowd at the Black Hat USA 2018 conference was to raise awareness that AI-powered threats like DeepLocker can be expected very soon, to demonstrate how attackers can build stealthy malware that circumvents commonly deployed defenses, and to provide insights into how to reduce risks and deploy adequate countermeasures.

To demonstrate DeepLocker's capabilities, they designed and demonstrated a proof of concept: the WannaCry virus was camouflaged in a benign video conferencing application so that it remained undetected by antivirus engines and malware sandboxes. As a triggering condition, an individual was selected, and the AI was trained to launch the malware when certain conditions, including facial recognition of the target, were met. The experiment was, undoubtedly, a success.

DeepLocker is just an experiment by IBM to show how open-source AI tools can be combined with straightforward evasion techniques to build targeted, evasive and highly effective malware. As the world of cybersecurity constantly evolves, security professionals will now have to up their game to combat hybrid malware attacks.

Found this article interesting? Read the Security Intelligence blog to discover more.

7 Black Hat USA 2018 conference cybersecurity training highlights
12 common malware types you should know
Social engineering attacks – things to watch out for while online

Meet Pypeline, a simple Python library for building concurrent data pipelines

Natasha Mathur
25 Sep 2018
2 min read
Pypeline, a new, simple and powerful Python library for creating concurrent data pipelines, came out last week. Pypeline has been designed for solving simple to medium data tasks that require concurrency and parallelism, and it can be used in places where using frameworks such as Spark or Dask feels unnatural.

Pypeline comprises an easy to use, familiar and functional API. It enables building data pipelines using Processes, Threads, and asyncio.Tasks via the exact same API. With Pypeline, you also have control over the memory and CPU resources used at each stage of your pipeline.

Pypeline basic usage

Using Pypeline, you can easily create multi-stage data pipelines with the help of functions such as map, flat_map, filter, etc. To do so, you define a computational graph specifying the operations to be performed at each stage, the number of resources, and the type of workers you want to use. Pypeline comes with three main modules, and each of them uses a different type of worker: processes, threads, and tasks.

Processes
You can create a pipeline based on multiprocessing.Process workers with the help of the process module, and then specify the number of workers at each stage. The maxsize parameter limits the maximum number of elements that a stage can hold at the same time.

Threads and tasks
You can create a pipeline using threading.Thread workers with the thread module, and a pipeline based on asyncio.Task workers with the asyncio_task module. Apart from creating multi-stage data pipelines this way, Pypeline also lets you compose pipelines with the pipe | operator (a short usage sketch follows at the end of this piece).

For more information, check out the official documentation.

How to build a real-time data pipeline for web developers – Part 1 [Tutorial]
How to build a real-time data pipeline for web developers – Part 2 [Tutorial]
Create machine learning pipelines using unsupervised AutoML [Tutorial]
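As a rough illustration of the API described above, here is a hedged sketch assuming the pypeln package layout (import pypeln as pl, with pl.process and pl.thread modules and workers/maxsize parameters); exact module and function names may differ between versions of the library, so treat this as a sketch rather than authoritative usage.

```python
# Hedged sketch of a two-stage pipeline in the style described above, assuming
# the pypeln package exposes pl.process and pl.thread modules; names may vary
# between library versions.
import pypeln as pl

def slow_square(x):
    # CPU-ish work done in a pool of worker processes
    return x * x

def keep_even(x):
    return x % 2 == 0

data = range(10)

stage = pl.process.map(slow_square, data, workers=4, maxsize=8)  # 4 process workers
stage = pl.thread.filter(keep_even, stage, workers=2)            # 2 thread workers

print(list(stage))  # e.g. [0, 4, 16, 36, 64] (order may vary)
```

The stages are consumed lazily when iterated, which is what lets the worker pools overlap their work.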

You can now install Windows 10 on a Raspberry Pi 3

Prasad Ramesh
14 Feb 2019
2 min read
The WoA Installer for Raspberry Pi 3 enables installing Windows 10 on the credit-card-sized computer. It is made by the same members who brought Windows 10 on ARM to the Lumia 950 and 950 XL.

Where to start?

To get started, you need a Raspberry Pi 3 Model B or B+, a microSD card of at least class 1, and a Windows 10 ARM64 image, which you can get from GitHub. You also need a recent version of Windows 10 and .NET Framework 4.6.1. The WoA Installer is just a tool which helps you deploy Windows 10 on the Raspberry Pi 3; it needs the Core Package in order to run, which you can find listed on the GitHub page.

Specification comparison

The minimum requirements for Windows 10 are:
Processor: 1 gigahertz (GHz) or faster processor or SoC
RAM: 1 gigabyte (GB) for 32-bit or 2 GB for 64-bit
Hard disk space: 16 GB for 32-bit OS, 20 GB for 64-bit OS

The Raspberry Pi 3B+ has specifications just good enough to run Windows 10:
SoC: Broadcom BCM2837B0 quad-core A53 (ARMv8) 64-bit @ 1.4GHz
RAM: 1GB LPDDR2 SDRAM

While this sounds good, a Hacker News user points out: "Caution: To do this you need to run a rat's nest of a batch file that runs a bunch of different code obtained from the web. If you're going to try this, try on devices you don't care about. Or spend innumerable hours auditing code. Pass -- for now."

You can check out the GitHub page for more instructions.

Raspberry Pi opens its first offline store in England
Introducing Strato Pi: An industrial Raspberry Pi
Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+, available now at $25