
Tech News

GNU Guix 1.0.0 released with an improved user interface, hassle-free installation and more

Savia Lobo
03 May 2019
3 min read
Yesterday, GNU Guix, a transactional package manager and an advanced distribution of the GNU system, announced the release of GNU Guix 1.0.0, or "One-point-oh". The release includes ISO-9660 installation images, a virtual machine image, and tarballs for installing the package manager on top of an existing GNU/Linux distribution, either from source or from binaries. Existing Guix users can update by running guix pull.

According to the official post, "For Guix, 1.0 is the result of seven years of development, with code, packaging, and documentation contributions made by 260 people, translation work carried out by a dozen of people, and artwork and web site development by a couple of individuals, to name some of the activities that have been happening. During those years we published no less than 19 “0.x” releases." The team describes the release as a major milestone for those who have been on board for several years.

Highlights of GNU Guix 1.0.0

The previous release, 0.16.0, shipped on December 6 last year, with 99 people having contributed over 5,700 commits by that point. The new One-point-oh release includes the following highlights since then.

Hassle-free system installation: The ISO installation image now runs a text-mode graphical installer, which makes system installation less tedious than before. The installer is fully translated into French, German, and Spanish.

Improved user interface: This release adds aliases for common operations such as guix search and guix install. Diagnostics are now colorized, more operations show a progress bar, there is a new --verbosity option recognized by all commands, and most commands are now "quiet" by default.

New package transformation options: A new --with-git-url package transformation option joins --with-branch and --with-commit. Guix now has a uniform mechanism for configuring keyboard layouts, a long overdue addition, and Xorg configuration has been streamlined with the new xorg-configuration record.

guix pack -R: This creates tarballs containing relocatable application bundles that rely on user namespaces. Starting from 1.0, guix pack -RR generates relocatable binaries that fall back to PRoot on systems where user namespaces are not supported.

Package additions and updates: More than 1,100 packages were added, bringing the total close to 10,000; 2,104 packages were updated; and several system services were contributed.

Multiple language availability: The manual has been fully translated into French, and the German and Spanish translations are nearing completion. A Simplified Chinese translation is planned, and anyone can help translate the manual into their language by joining the Translation Project.

The team also says that Guix 1.0 is a tool that is both serviceable for day-to-day computer usage and a great playground to explore, and that contributions are welcome across design, coding, maintenance, system administration, translation, testing, artwork, web services, funding, and organizing Guix install parties.

To know more about GNU Guix 1.0.0 in detail, read the official blog post.

GNU Shepherd 0.6.0 releases with updated translations, faster services, and much more
GNU Nano 4.0 text editor releases!
GNU Octave 5.1.0 releases with new changes and improvements

Puppet announces updates in a bid to help organizations manage their "automation footprint"

Richard Gall
03 May 2019
3 min read
There are murmurs on the internet that tools like Puppet are being killed off by Kubernetes. The reality is a little more complex. True, Kubernetes poses some challenges to various players in the infrastructure automation market, but these tools nevertheless remain important for engineers charged with managing infrastructure. Kubernetes is forcing the market to adapt, and with Puppet announcing new tools and features in Puppet Enterprise 2019.1 yesterday, it's clear that the team is making the necessary strides to remain a key part of the infrastructure automation landscape.

Update: This article was amended to highlight that Puppet Enterprise is a distinct product separate from Continuous Delivery for Puppet Enterprise.

What's new for Puppet Enterprise 2019.1?

There are two key elements to the Puppet announcement: enhanced integration with Puppet Bolt, an open source, agentless task runner, and improved capabilities in Continuous Delivery for Puppet Enterprise.

Puppet Bolt

Puppet Bolt, the Puppet team argues, offers a really simple way to get started with infrastructure automation "without requiring an agent installed on a remote target." The team explains that Puppet Bolt essentially allows users to expand the scope of what they can automate without losing the consistency and control you'd expect from a tool like Puppet. This has some significant benefits in the context of Kubernetes. Bryan Belanger, Principal Consultant at Autostructure, said, "We love using Puppet Bolt because it leverages our existing Puppet roles and classifications allowing us to easily make changes to large groups of servers and upgrade Kubernetes clusters quicker, which is often a pain if done manually." Belanger continued, "with the help of Puppet Bolt, we were also able to fix more than 1,000 servers within five minutes and upgrade our Kubernetes clusters within four hours, which included coding and tasks." A sketch of what a Bolt-style task can look like appears after this article.

Continuous Delivery for Puppet Enterprise

Updates to the Continuous Delivery product aim to make DevOps practices easier. The Puppet team is clearly trying to make it easier for organizations to empower their colleagues and build a culture where engineers are not simply encouraged to be responsible for code deployment, but are able to do it with minimal fuss. Module Delivery Pipelines mean modules can now be deployed independently without blocking others, while Simplified Puppet Deployments aims to make it easier for engineers who aren't familiar with Puppet to "push simple infrastructure changes immediately and easily perform complex rolling deployments to a group of nodes in batches in one step." There is also another dimension that aims to help engineers take proactive steps to tackle resiliency and security issues: with Impact Analysis, teams can look at the potential impact of a deployment before it's done.

Read next: “This is John. He literally wrote the book on Puppet” – An Interview with John Arundel

What's the big idea behind this announcement?

The overarching narrative coming from the top is about supporting teams to scale their DevOps processes, and about making organizations' 'automation footprint' more manageable. "IT teams need a simple way to get started with automation and a solution that grows with them as their automation footprint grows," Matt Waxman, Head of Product at Puppet, explains. "You shouldn’t have to throw away your existing scripts or tools to scale automation across your organization. Organizations need a solution that is extensible — one that complements their current automation efforts and helps them scale beyond individuals to multiple teams."

Puppet Enterprise 2019.1 will be generally available on May 7, 2019. Learn more here.
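The announcement itself contains no code, but to give a concrete sense of what Bolt's "agentless" model looks like in practice, here is a minimal sketch of a Bolt-style task written in Python. The parameter names (service, action) and the systemd service used are hypothetical, not taken from Puppet's announcement; Bolt copies a task script like this to each remote target and runs it there, passing parameters as PT_-prefixed environment variables, so nothing needs to be pre-installed on the target.

```python
#!/usr/bin/env python3
# Minimal sketch of a Puppet Bolt task (hypothetical parameters).
# Bolt copies the script to each remote target and executes it there; task
# parameters arrive as PT_-prefixed environment variables.
import json
import os
import subprocess

service = os.environ.get("PT_service", "nginx")   # hypothetical parameter
action = os.environ.get("PT_action", "restart")   # hypothetical parameter

result = subprocess.run(
    ["systemctl", action, service],
    capture_output=True,
    text=True,
)

# Tasks report their result back to Bolt as JSON on stdout.
print(json.dumps({
    "service": service,
    "action": action,
    "exit_code": result.returncode,
    "stderr": result.stderr.strip(),
}))
```

A task along these lines would typically be invoked with something like `bolt task run <module>::<task> service=nginx --targets <target-group>`, with the exact invocation depending on the module layout and the Bolt version in use.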

The major DNS blunder at Microsoft Azure affects Office 365, One Drive, Microsoft Teams, Xbox Live, and many more services

Amrata Joshi
03 May 2019
3 min read
It seems all is not well at Microsoft after yesterday's outage, with Microsoft's Azure cloud going up and down globally because of a DNS configuration issue. The outage, which started at 1:20 pm yesterday and lasted for more than an hour, affected Microsoft's cloud services, including Office 365, One Drive, Microsoft Teams, Xbox Live, and many others used by Microsoft's commercial customers. Because of the networking connectivity errors in Microsoft Azure, third-party apps and sites running on Microsoft's cloud were also affected.

Around 2:30 pm, Microsoft started gradually recovering Azure regions one by one. Microsoft has yet to completely troubleshoot the issue and has warned that it might take some time to get everyone back up and running.

This isn't the first time a DNS outage has affected Azure. In January this year, a few customers' databases went missing, which affected a number of Azure SQL databases that use custom KeyVault keys for Transparent Data Encryption (TDE).

https://twitter.com/AzureSupport/status/1124046510411460610

The Azure status page reads, "Customers may experience intermittent connectivity issues with Azure and other Microsoft services (including M365, Dynamics, DevOps, etc)." Microsoft engineers found that an incorrect name server delegation issue affected DNS resolution and network connectivity, which in turn affected compute, storage, app service, AAD, and SQL database resources. On the Microsoft 365 status page, too, Redmond's engineers blamed an internal DNS configuration error for the downtime: during the migration of the DNS system to Azure DNS, some domains for Microsoft services were incorrectly updated.

The good news is that no customer DNS records were impacted during the incident, and the availability of Azure DNS remained at 100% throughout; only records for Microsoft services were affected. According to Microsoft, the broken systems have been fixed, the three-hour outage has come to an end, and Azure's network infrastructure will soon be back to normal.

https://twitter.com/MSFT365Status/status/1124063490740826133

Users have reported issues with accessing the cloud service and are complaining. A user commented on HackerNews, "The sev1 messages in my inbox currently begs to differ. there's no issue maybe with the dns at this very moment but the platform is thoroughly fucked up." Users are also questioning the reliability of Azure. Another comment reads, "Man... Azure seems to be an order of magnitude worse than AWS and GCP when it comes to reliability."

To know more about the status of the situation, check out Microsoft's post.

Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio
Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!
Microsoft Cloud services' DNS outage results in deleting several Microsoft Azure database records

Apple convincingly lobbied against ‘right to repair’ bill in California citing consumer safety concern

Amrata Joshi
03 May 2019
3 min read
Apple is known for designing its products so that hardly anyone except Apple's own experts can easily repair them when something goes wrong. It now seems the company is trying hard to kill the 'Right To Repair' bill in California, which would work against Apple. The 'Right To Repair' bill, which has been adopted by 18 states, is currently under discussion in California. Under this bill, consumers would get the right to fix or mod their devices without any effect on their warranty. The company has managed to lobby California lawmakers and push the bill back to 2020.

https://twitter.com/kaykayclapp/status/1123339532068253696

According to a recent report by Motherboard, an Apple representative and a lobbyist have been privately meeting with legislators in California to encourage them to drop the bill, stoking fears of battery explosions among consumers who attempt to repair their own iPhones. The Apple representative argued that consumers might hurt themselves if they accidentally puncture the flammable lithium-ion batteries in their phones.

In a statement to The Verge, California Assemblymember Susan Talamantes Eggman, who first introduced the bill in March 2018 and again in March 2019, said, "While this was not an easy decision, it became clear that the bill would not have the support it needed today, and manufacturers had sown enough doubt with vague and unbacked claims of privacy and security concerns."

Apple's iPhone sales slowed down last quarter, so the company anticipates that consumers may buy new handsets instead of getting their old ones repaired. Still, the claim that batteries might get punctured is likely to worry many and is sure to attract speculation. Kyle Wiens, iFixit co-founder, laughs at the idea of an iPhone battery being punctured during a repair; he admits it is possible, but says it rarely happens. Wiens says, "Millions of people have done iPhone repairs using iFixit guides, and people overwhelmingly repair these phones successfully. The only people I've seen hurt themselves with an iPhone are those with a cracked screen, cutting their finger." He further added, "Whether it uses gasoline or a lithium-ion battery, most every car has a flammable liquid inside. You can also get badly hurt if you're changing a tire and your car rolls off the jack." A recent example from David Pierce, WSJ tech reviewer, however, suggests such mishaps are not unheard of.

https://twitter.com/pierce/status/1113242195497091072

With so much talk around repairing and replacing, it is difficult to predict whether the 'Right to Repair' bill, at least with respect to iPhones, will come into force anytime soon. Only in 2020 will we get a clearer picture of the bill, and of whether consumer safety is really at stake or the concern is about the company's benefit.

Apple plans to make notarization a default requirement in all future macOS updates
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Apple officially cancels AirPower; says it couldn't meet hardware's 'high standards'

You can now permanently delete your location history, and web and app activity data on Google

Sugandha Lahoti
03 May 2019
4 min read
Google keeps track of everything you do online, including the websites you visit, the ads you see, the videos you watch, and the things you search for. Soon, this is (partially) going to change. On Wednesday, Google launched a new feature allowing users to delete all or part of their location history and web and app activity data manually. This has been a long-requested feature among internet users, and Google says it "has heard user feedback that they need to provide simpler ways for users to manage or delete their data."

In the Q1 earnings shared by Google's parent company Alphabet, the company said that the EU's USD 1.49 billion fine on Google is one of the reasons its profit sagged in the first three months of this year. This was Google's third antitrust fine from the EU since 2017. In the Monday report, Alphabet said that profit in the first quarter fell 29 percent to USD 6.7 billion on revenue that climbed 17 percent to USD 36.3 billion.

"Without identifying you personally to advertisers or other third parties, we might use data that includes your searches and location, websites and apps you've used, videos and ads you've seen, and basic information you've given us, such as your age range and gender," the company explains on its Safety Center web page.

Google already lets you turn off location history and web and app activity, and you can manually delete data generated from searches and other Google services. The new feature, however, removes such information automatically. It offers a choice of how long your activity data is kept:

Keep until I delete manually
Keep for 18 months, then delete automatically
Keep for 3 months, then delete automatically

Based on the option chosen, any data older than that will be automatically deleted from your account on an ongoing basis. Surprisingly, Google still does not offer an option that says 'don't track me' or 'automatically delete after I close the website', which would ensure complete data privacy and security for users.

Source: Google Blog

Protecting privacy has not been one of Google's strong points in recent times. Last year, Google was caught in a scandal in which it tracked a person's location history even when they had turned the setting off. In November last year, Google came under scrutiny by the European Consumer Organisation (BEUC), which published a report stating that Google uses various methods to encourage users to enable the 'location history' and 'web and app activity' settings that are integrated into all Google user accounts. The report alleges that Google uses these features to facilitate targeted advertising. "These practices are not compliant with GDPR, as Google lacks a valid legal ground for processing the data in question. In particular, the report shows that users' consent provided under these circumstances is not freely given," BEUC, speaking on behalf of the countries' consumer groups, said. Google was also found helping the police use its location database to catch potential crime suspects, sometimes capturing innocent people in the process, per a recent New York Times investigation.

The new feature will be rolled out in the coming weeks for location history and for web and app activity data. It is likely to be extended to other data history as well, but this has not been officially confirmed. To enable this privacy feature, visit your Google account activity controls.

European Consumer groups accuse Google of tracking its users' location, calls it a breach of GDPR
Google's incognito location tracking scandal could be the first real test of GDPR
Google's Sidewalk Lab smart city project threatens privacy and human rights: Amnesty Intl, CA says

Valve reveals new Index VR Kit with detail specs and costs upto $999

Fatema Patrawala
02 May 2019
4 min read
Valve introduced its new VR headset kit, the Valve Index, only one month ago, saying preorders would begin on May 1st and units would ship in June. Today, Valve is fully detailing the Index headset for the first time and revealing exactly how much it will cost: $999. The price is relatively high by today's VR headset standards; in comparison, Facebook announced that the Oculus Quest and Oculus Rift S will ship on May 21st for $399. But Valve says it will let you buy parts piecemeal if you need to, which is a good deal if you do not wish to buy the whole kit. And if you've already got a Vive or Vive Pro and/or don't need the latest Knuckles controllers, you won't necessarily need to spend the whole $999 to get started. You can get the best look yet at the Index headset on the Valve Index website.

Like the HTC Vive, which was co-designed with Valve, the Index is still a tethered experience, with a 5-meter cable that plugs into a gaming PC. It also uses the company's laser-firing Lighthouse base stations to figure out where the headset is at any given time. That's how it lets you walk around a room's worth of space in VR, up to a huge 10 x 10 meter area. Valve is not using cameras for inside-out tracking; the company says the twin stereo RGB cameras here are designed for passthrough (letting you see the real world through the headset) and for the computer vision community.

Instead, Valve says the Index's focus is on delivering the highest-fidelity VR experience possible, meaning improved lenses, screens, and audio. It includes a pair of 1440 x 1600-resolution RGB LCDs rather than the higher-resolution OLED screens that much of the competition is already using. But Valve says its screens run faster (120Hz, with an experimental 144Hz mode) and are better at combating the "screen door effect" and the blur you see when you move your head, persistence issues that first-generation VR headsets struggled with. The Valve Index also has an IPD slider to adjust for the distance between your eyes, and lenses that Valve says offer a 20-degree larger field of view than the HTC Vive "for typical users."

Most interesting are the built-in "headphones" shown on the website, which aren't actually headphones at all but speakers. They are designed not to touch your ears, instead firing their sound toward your head. This is similar to how Microsoft's HoloLens visors produce audio, which means that while people around you could theoretically hear what you're doing, there will be less fiddling with the mechanism to get the audio aligned with your ears. Valve has also provided a 3.5mm headphone jack if you want to plug in your own headphones.

Another interesting part of the package is the Valve Index Controllers, formerly known as Knuckles, which can be purchased separately for $279 and might be the most intuitive way yet to get your hands into VR. While a strap holds the controller to your hand, 87 sensors track the position of your hands and fingers and even how hard you're pressing down. Theoretically, you could easily reach, grab, and throw virtual objects with such a setup, something that wasn't really possible with the HTC Vive or Oculus Touch controllers. Here's one gameplay example that Valve is showing off:

Source - Valve website

Another small improvement is to the company's Lighthouse base stations. Since they now use only a single laser and no IR blinker, Valve says they play nicer with other IR devices, which means you can turn your TV on and off without needing to power the base stations off first.

According to reports from Polygon, which got an early hands-on with the Valve Index, the Knuckles feel great, the optics are sharp, and it may be the most comfortable way yet to wear a VR headset over a pair of glasses. Polygon also explained the $999 price point: during Valve's demonstration, a spokesperson said that the Index is the sort of thing that is likely to appeal to a virtual reality enthusiast who (a) must have the latest thing and (b) enjoys sufficient disposable income to satisfy that desire. It's an interesting contrast with Facebook's strategy for the Rift, which is pushing hard for the price tipping point at which VR suddenly becomes a mass-market thing, like smartphones did a decade ago.

Get to know the pricing details of the Valve Index kit on its official page.

Top 7 tools for virtual reality game developers
Game developers say Virtual Reality is here to stay
Facebook releases DeepFocus, an AI-powered rendering system to make virtual reality more real

Docker announces collaboration with Microsoft’s .NET at DockerCon 2019

Savia Lobo
02 May 2019
4 min read
Using Docker and .NET together was first brought up in 2017, when Microsoft explained the cons of using them together. Last year's DockerCon update showed multiple .NET demos of how Docker can be used both for modern applications and for older applications that use traditional architectures. This made it easier for users to containerize .NET applications using tools from both Microsoft and Docker.

The team said that "most of their effort to improve the .NET Core Docker experience in the last year has been focused on .NET Core 3.0." "This is the first release in which we've made substantive runtime changes to make CoreCLR much more efficient, honor Docker resource limits better by default, and offer more configuration for you to tweak," Microsoft writes in one of its blog posts. The team also mentions that it is invested in making .NET Core a true container runtime and looks forward to hardening .NET's runtime to make it container-aware and able to function efficiently in low-memory environments.

Let's have a look at the different advantages of bringing Docker and .NET together.

Docker + .NET advantages

Less memory allocation and fewer GC heaps by default

With .NET Core 3.0, the team reduced the minimal generation 0 GC allocation budget to better align with modern processor cache sizes and cache hierarchy. With this, the initial allocation size, which was unnecessarily large, was significantly reduced without any perceivable loss of performance, bringing improvements of tens of percentage points. The team also mentions a new policy for determining how many GC heaps to create. This matters most on machines where a low memory limit is set but no CPU limit is set on a machine with many CPU cores. The GC now reserves a memory segment with a minimum size of 16 MB per heap, which limits the number of heaps the GC will create. Both changes result in lower memory usage by default and make the default .NET Core configuration better in more cases.

PowerShell added to .NET Core SDK container images

PowerShell Core has been added to the .NET Core SDK Docker container images, per requests from the community. Having PowerShell inside the .NET Core SDK container image enables two main scenarios that were not otherwise possible:

Write .NET Core application Dockerfiles with PowerShell syntax, for any OS.
Write .NET Core application/library build logic that can be easily containerized.

Note: PowerShell Core is available as part of the .NET Core 3.0 SDK container images; it is not part of the .NET Core 3.0 SDK itself.

.NET Core images now available via Microsoft Container Registry

Microsoft teams are now publishing container images to the Microsoft Container Registry (MCR). There are two primary reasons for this change:

Syndicate Microsoft-provided container images to multiple registries, like Docker Hub and Red Hat.
Use Microsoft Azure as a global CDN for delivering Microsoft-provided container images.

Platform matrix and support

With .NET Core, the team tries to support a broad set of distros and versions. The policy for each distro is as follows:

Alpine: support the tip version and retain support for one quarter (3 months) after a new version is released. Currently, 3.9 is tip, and the team will stop producing 3.8 images in a month or two.
Debian: support one Debian version per latest .NET Core version; this is also the default Linux image used for a given multi-arch tag. For .NET Core 3.0, the team may publish Debian 10 based images. However, they may produce Debian 9 based images for .NET Core 2.1 and 2.2, and Debian 8 images for earlier .NET Core versions.
Ubuntu: support one Ubuntu version per latest .NET Core version (currently 18.04). As new Ubuntu LTS versions approach, the team will start supporting non-LTS Ubuntu versions as a means of validating the new LTS versions.

For Windows, the team supports the cross-product of Nano Server and .NET Core versions.

ARM architecture

The team plans to add support for ARM64 on Linux with .NET Core 3.0, complementing the ARM32 and X64 support already in place. This will enable .NET Core to be used in even more environments.

Apart from these advantages, the team has also added support for Docker memory and CPU limits. To know more about this partnership in detail, read Microsoft's official blog post.

DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Are Debian and Docker slowly losing popularity?
Creating a Continuous Integration commit pipeline using Docker [Tutorial]

GitHub deprecates and then restores Network Graph after GitHub users share their disapproval

Vincy Davis
02 May 2019
2 min read
Yesterday, GitHub announced in a blog post that it was deprecating the Network Graph from the repository's Insights panel and that visits to the page would be redirected to the forks page instead. Following this announcement, the network graph was removed. On the same day, however, GitHub deleted the blog post and added the network graph back.

The network graph is one of the more useful features for developers on GitHub. It displays the branch history of the entire repository network, including branches of the root repository and branches of forks that contain commits unique to the network. GitHub users were alarmed to see the blog post announcing the removal of the network graph without any prior notification or a suitable replacement; for many users, this meant a significant amount of additional work.

https://twitter.com/misaelcalman/status/1123603429090373632
https://twitter.com/theterg/status/1123594154255187973
https://twitter.com/morphosis7/status/1123654028867588096
https://twitter.com/jomarnz/status/1123615123090935808

Following the backlash and requests to bring back the network graph, the Community Manager of GitHub posted on its community forum the same day that the change would be reverted based on users' feedback. Later, the blog post announcing the deprecation was removed and the network graph was back on the website. This has brought a huge sigh of relief among GitHub's users; the feature is popular for checking the state of a repository and the relationship between active branches.

https://twitter.com/dotemacs/status/1123851067849097217
https://twitter.com/AlpineLakes/status/1123765300862836737

GitHub has not yet officially commented on why it removed the network graph in the first place. A Reddit user put up an interesting shortlist of suspicions:

The cost-benefit analysis from "The Top" determined that the compute time for generating the graph was too expensive, and so they "moved" the feature to a more premium account.
"Moved" could also mean unceremoniously killing off the feature because some manager thought it wasn't shiny enough.
Microsoft buying GitHub made (and will continue to make) GitHub worse, and this is just a harbinger of things to come.

DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Apache Software Foundation finally joins the GitHub open source community
Microsoft and GitHub employees come together to stand with the 996.ICU repository
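Returning to the network graph itself: it is a GitHub UI feature rather than an API, but readers who relied on it can pull roughly the same raw information (the branches of the root repository and of its forks) from GitHub's public REST API. Below is a minimal sketch using Python and the requests library; the repository name is a placeholder, and pagination, authentication, and rate limiting are omitted for brevity.

```python
# Minimal sketch (not GitHub's implementation) of collecting the data the
# network graph visualizes, via GitHub's public REST API.
import requests

OWNER, REPO = "octocat", "Hello-World"   # placeholder repository
API = "https://api.github.com"

def branches(owner, repo):
    """Return {branch name: head commit SHA} for a repository."""
    resp = requests.get(f"{API}/repos/{owner}/{repo}/branches")
    resp.raise_for_status()
    return {b["name"]: b["commit"]["sha"] for b in resp.json()}

# Branches of the root repository.
network = {f"{OWNER}/{REPO}": branches(OWNER, REPO)}

# Branches of each fork in the repository network (first page of forks only).
forks = requests.get(f"{API}/repos/{OWNER}/{REPO}/forks").json()
for fork in forks:
    fork_owner, fork_repo = fork["full_name"].split("/")
    network[fork["full_name"]] = branches(fork_owner, fork_repo)

for repo_name, heads in network.items():
    print(repo_name, heads)
```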

Red Hat rebrands logo after 20 years; drops Shadowman

Fatema Patrawala
02 May 2019
3 min read
On Wednesday, Red Hat unveiled its new logo, an update to the "Shadowman" logo that has been associated with the company since 1997. The rebranding comes after nearly 20 years, as the company prepares for the close of its $34 billion acquisition by IBM.

Source - Red Hat

The new logo is a sign that the company is looking to freshen up its image for the next phase of its history.

Source - Red Hat

"Many people didn't know what Red Hat's logo was supposed to be," Tim Yeaton, Red Hat's executive vice president of corporate marketing, told Business Insider. The original Red Hat logo featured "Shadowman," supposedly a heroic spy, but research by Red Hat's internal team found that many thought Shadowman was "secretive," "sinister," and "sneaky." Yeaton says, "What this told us was not only do we need to tidy up the mark for better rendering, we probably need to modernize the mark to better reflect what we are and where we're going."

From there, Red Hat started its Open Brand Project, collecting feedback from employees on how to modernize the brand. Based on that feedback, it came up with 45 different hat-related logos before settling on the final version. The Red Hat team arrived at the final design after five months of research, exploration, and brainstorming. The new logo is meant to reflect Red Hat's open-source company culture and its potential to grow. It was designed by the company's branding team together with Paula Scher, a New York-based partner at the design firm Pentagram.

Red Hat employees show off the new logo and get it inked

Business Insider reported that when the new logo was made official on Wednesday, six employees, including a high-ranking executive, had the new logo tattooed on their bodies. Getting a Red Hat tattoo is not an unusual part of the open source company's culture; in fact, 17 employees already sport tattoos of the original logo, forming the so-called "tat-pack." Consuelo Madrigal, Red Hat's brand manager, who got herself a tattoo of the new logo, told Business Insider, "Culturally, this is not just the logo for us Red Hatters, It's our culture, it's our way of doing things, and I've learned a lot of that, and I loved it throughout the journey. It's a reminder of what we can do when we try new things. Finally, the outcome, what could happen if we all do it together. I'm just happy about all that's happening."

While the new logo has Red Hat employees excited internally, users on Hacker News have different views, with some feeling it looks like an amateur job. One user commented, "New symbol in the logo is OK, but the text is a disaster. Just stare at it for a second, especially the Re-d coupling. It's unkerned. The 'e' is about to topple over to the right, 'd' looks pregnant, 'a' is too top heavy. There's no visual balance, rhythm or consistency to how "Red Hat" looks. It basically looks as an amateur job."

Check out Red Hat's official blog post to learn about the other changes made to the logo.

Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program
Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend
Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers

Fedora 30 releases with GCC 9.0, GNOME 3.32, performance improvements, and much more!

Amrata Joshi
02 May 2019
2 min read
Last month, the Fedora team announced the release of the Fedora 30 Beta. Just last week, the team followed up with the release of Fedora 30 itself, which serves as the staging environment for Red Hat Enterprise Linux. The release comes with a number of improvements and performance optimizations.

What's new in Fedora 30?

GCC 9.0

This release uses GCC 9.0, which brings performance improvements across all applications recompiled with this version. The release also features a flicker-free boot process that hides the GRUB loader/kernel select screen by default and relies on creative theming to incorporate the bootsplash image into the loading process.

GNOME 3.32

The release ships with GNOME 3.32, which includes all-new app icons that use a new visual language reminiscent of Google's Material Design guidelines. GNOME 3.32 also provides more robust support for HiDPI displays, including experimental non-integer scaling.

Performance improvements

The release brings performance improvements, including upgrades to Bash 5.0, Boost 1.69, and glibc 2.29. Python 2 packages have been removed, and Ruby 2.6 and PHP 7.3 have been updated. Excessive linking in Fedora-built packages has been removed, which improves startup times and produces smaller metadata files. The release also brings UEFI support for ARMv7 devices, making it possible to install Fedora on UEFI-compatible ARM hardware much as you would on an arbitrary computer.

New packages for desktop environments

This release includes packages for the DeepinDE and Pantheon desktop environments; DeepinDE is used in Deepin Linux, called "the single most beautiful desktop on the market" by TechRepublic's Jack Wallen. These packages require a simple, manual installation process.

Most users are happy and excited about this news. A user commented on HackerNews, "Love this, switched today! Definitely the most easy to use distro out there and, especially in the case of Silverblue, the most modern by far (containers only!)." A few others are complaining about bugs in this release. Another user commented, "This is good distro for developers by developers. I wouldn't suggest it for everyday users though. There are too many beta quality bugs since it uses really bleeding edge releases."

To know more about this news, check out Fedora 30's official announcement.

Fedora 30 Beta released with desktop environment options, GNOME 3.32, and much more
Fedora 31 will now come with Mono 5 to offer open-source .NET support
Fedora 29 released with Modularity, Silverblue, and more

Amazon introduces S3 batch operations to process millions of S3 objects

Amrata Joshi
02 May 2019
3 min read
Just two days ago, Amazon announced the general availability of Amazon S3 Batch Operations, a storage management feature for processing millions of S3 objects in an easier, automated way. The feature was first previewed at AWS re:Invent 2018. Users can now set tags or access control lists (ACLs), copy objects to another bucket, initiate a restore from Glacier, and invoke an AWS Lambda function on each object. Developers and IT administrators can change object properties and metadata, and execute storage management tasks, with a single API request. For example, S3 Batch Operations allows customers to replace object tags, change access controls, add object retention dates, copy objects from one bucket to another, and even trigger Lambda functions against existing objects stored in S3. S3's existing support for inventory reports is used to drive the batch operations.

With Batch Operations, users no longer need to write code, set up server fleets, or figure out how to partition the work and distribute it to a fleet. Users can create a job in minutes with a couple of clicks, and S3 uses massive, behind-the-scenes parallelism to manage the job. Users can create, monitor, and manage their batch jobs using the S3 CLI, the S3 Console, or the S3 APIs.

Important terminology for batch operations

Bucket: An S3 bucket holds a collection of any number of S3 objects, with optional per-object versioning.
S3 Inventory report: An S3 inventory report is generated when a daily or weekly bucket inventory is run. A report can be configured to include all of the objects in a bucket or to focus on a prefix-delimited subset.
Manifest: A manifest is an inventory report or a file in CSV format that identifies the objects to be processed in the batch job.
Batch action: The desired action on the objects described by the manifest.
IAM role: An IAM role provides S3 with permission to read the objects in the inventory report, perform the desired actions, and write the optional completion report.
Batch job: A batch job references all of the elements above. Each job has a status and a priority; higher-priority (numerically) jobs take precedence over those with lower priority.

Most users are happy about this news, expecting the performance of their projects to improve. A user commented on HackerNews, "This S3 request rate performance increase removes any previous guidance to randomize object prefixes to achieve faster performance. That means you can now use logical or sequential naming patterns in S3 object naming without any performance implications."

To know more about this news, check out Amazon's blog post.

Amazon finally agrees to let shareholders vote on selling facial recognition software
Eero's acquisition by Amazon creates a financial catastrophe for investors and employees
Amazon Alexa is HIPAA-compliant: bigger leap in the health care sector
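The announcement itself contains no code, but as a rough illustration of the S3 APIs mentioned above, here is a minimal sketch of creating a batch job with boto3, the AWS SDK for Python. The account ID, bucket names, role ARN, tag values, and manifest ETag are placeholders, and the exact parameters should be checked against the AWS documentation.

```python
# Minimal sketch of creating an S3 Batch Operations job with boto3.
# The job tags every object listed in a CSV manifest; all identifiers below
# are placeholders.
import boto3

s3control = boto3.client("s3control", region_name="us-east-1")

response = s3control.create_job(
    AccountId="111122223333",                                 # placeholder account ID
    ConfirmationRequired=True,                                # job waits for confirmation
    Priority=10,                                              # higher numbers take precedence
    RoleArn="arn:aws:iam::111122223333:role/s3-batch-role",   # IAM role S3 assumes
    Operation={
        "S3PutObjectTagging": {
            "TagSet": [{"Key": "project", "Value": "alpha"}]
        }
    },
    Manifest={
        "Spec": {
            "Format": "S3BatchOperations_CSV_20180820",
            "Fields": ["Bucket", "Key"],
        },
        "Location": {
            "ObjectArn": "arn:aws:s3:::example-bucket/manifest.csv",
            "ETag": "example-etag-of-manifest-object",
        },
    },
    Report={
        "Bucket": "arn:aws:s3:::example-bucket",
        "Format": "Report_CSV_20180820",
        "Enabled": True,
        "Prefix": "batch-reports",
        "ReportScope": "AllTasks",
    },
)

print("Created batch job:", response["JobId"])
```

A job created this way can then be monitored and managed from the S3 Console, the CLI, or the API (for example with a describe_job call), as the article notes.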

Eric Schmidt, former Google CEO, steps down from Alphabet’s board after an 18-year long stint

Sugandha Lahoti
02 May 2019
4 min read
Eric Schmidt is stepping down from Alphabet's board of directors after an 18-year stint. He is not seeking re-election to the board when his current term expires on June 19, 2019, Alphabet announced on Tuesday. Schmidt will continue to serve the board as Technical Advisor.

https://twitter.com/ericschmidt/status/1123324575436214272

Schmidt joined Google in March 2001 as chairman and became the company's CEO in August 2001. In 2011, Google announced that Schmidt would step down as CEO but take the new title of executive chairman and act as an adviser to co-founders Larry Page and Sergey Brin. In December 2017 he announced that he was stepping down from that role but staying on Alphabet's board.

Along with Schmidt, Diane Greene, former CEO of Google Cloud, is also not seeking re-election to the board at the expiration of her current term on June 19, 2019. Both retirement announcements come just a day after Alphabet had its worst earnings call in six years. According to Bloomberg, the stock fell as much as 8.6 percent Tuesday, the most intraday since October 2012, and traded down 7.6 percent to $1,197.85 at 9:37 a.m. in New York. Sales came in at $29.5 billion, excluding payments to distribution partners, Alphabet said in a statement on Monday. Google was also fined $1.7 billion by the EU for abusive ad practices last month, which further dented the earnings result.

Although Schmidt has undoubtedly been a key player in Google's growth, his recent actions have come under the spotlight. Multiple shareholders say Schmidt and Greene were involved in the decision to quietly pay out $135 million to Android creator Andy Rubin amid a sexual misconduct inquiry, which instigated the Google Walkout. Schmidt was later named in a lawsuit accusing the company of covering up harassment by multiple executives. Greene was also involved in pushing for Google's infamous Project Maven, which focused on analyzing drone footage and could eventually have been used to improve drone strikes on the battlefield.

Schmidt played an important role in President Obama's two election victories. Per a New York Times report, Schmidt was intimately involved in building Obama's voter-targeting operation in 2012, recruiting digital talent, choosing technology, and coaching campaign manager Jim Messina on campaign infrastructure. He has been appointed to numerous White House advisory positions, giving him privileged insight into the administration's policies in technology, science, and military defense, as well as unusual access to top policymakers. Now Schmidt is involved in Joe Biden's presidential campaign, which was announced last week. In a bid to broaden its appeal with younger voters and small donors, CNBC reports, Biden has turned to Civis Analytics, a data science software and consulting company backed by former Google chairman Eric Schmidt.

https://twitter.com/lutherlowe/status/1123397234438103040

Back in 2009, Eric Schmidt was accused of dismissing privacy concerns. When asked during an interview for CNBC's "Inside the Mind of Google" special whether users should be sharing information with Google as if it were a "trusted friend," Schmidt responded, "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place."

Schmidt said that he is stepping down in order to help "the next generation of talent to serve." He said he would be teaching more, working at his philanthropic organization, Schmidt Futures, and using his role as technical advisor to "coach Alphabet and Google businesses/tech." Schmidt is replaced on the board by Robin L. Washington, effective April 25. She has experience across finance and operations and will serve on Alphabet's Leadership Development and Compensation Committee. Washington has been Executive Vice President and Chief Financial Officer of Gilead Sciences, Inc., a biopharmaceutical company, since February 2014.

#NotOkGoogle: Employee-led town hall reveals hundreds of stories of retaliation at Google
Google announces new policy changes for employees to report misconduct amid complaints of retaliation and harassment
#GoogleWalkout organizers face backlash at work, tech workers show solidarity

Oakland Privacy Advisory Commission lay out privacy principles for Oaklanders and propose ban on facial recognition tech

Amrata Joshi
30 Apr 2019
5 min read
Privacy issues are becoming a growing matter of concern. With Silicon Valley coming under the radar every now and then and lawmakers taking a stand for user privacy, many governments are now making an effort in this direction. In the US, lawmakers have already started working on lawsuits and regulations around violations of consumer data privacy, and states like California have taken steps on issues related to privacy and surveillance. Last week, the Oakland Privacy Advisory Commission released two key documents as part of an initiative to protect Oaklanders' privacy: a proposed ban on facial recognition and the City of Oakland Privacy Principles.

https://twitter.com/cfarivar/status/1123081921498636288

Proposal to ban facial recognition tech

The commission's document addresses regulations on Oakland's acquisition and use of surveillance technology. It defines Face Recognition Technology as "an automated or semi-automated process that assists in identifying or verifying an individual based on an individual's face." According to the document, it will be unlawful for any city staff to retain, obtain, request, access, or use any Face Recognition Technology or any information obtained from it. City staff's unintentional receipt, access to, or use of such information does not violate the ban, provided that staff did not request or solicit it and that the access, receipt, or use is logged in the city's Annual Surveillance Report.

Oakland privacy principles laid out by the commission

The Oakland Privacy Advisory Commission has laid out the following privacy principles for Oaklanders' data:

Design and use equitable privacy practices. Community safety and access to city services should not come at the cost of any Oaklander's right to privacy. The commission aims to collect information in a way that won't discriminate against any Oaklander or Oakland community. Whenever possible, alternatives to the collection of personal data will be communicated at the time of data collection.

Limit collection and retention of personal information. Personal information should be collected and stored only when and for as long as is justified by the purpose of collecting it in the first place. Information related to Oaklanders' safety, health, or access to city services should be protected, and Oaklanders' views on the collection of information will be considered by the commission.

Manage personal information with diligence. Oaklanders' personal information should be treated with respect and handled with care, regardless of how or by whom it was collected. To maintain the security of its systems, the software and applications that interact with Oaklanders' personal information are regularly updated and reviewed, and personal information gathered from different departments is combined only when there is a need. The commission notes that encryption, minimization, deletion, and anonymization can reduce misuse of personal information, and it aims to make effective use of these tools and practices.

Extend privacy protections to our relationships with third parties. The responsibility to protect Oaklanders' privacy extends to vendors and partners. Oaklanders' personal information should be shared with third parties only to provide city services, and only when doing so is consistent with these privacy principles. The commission will disclose the identity of parties with whom it shares personal information, where the law permits.

Safeguard individual privacy in public records disclosures. Providing relevant information to interested parties about city services and governance is essential to democratic participation and civic engagement. The commission will protect Oaklanders' individual privacy interests and the city's information security interests while preserving the fundamental objective of the California Public Records Act of encouraging transparency.

Be transparent and open. Oaklanders have the right to access and understand explanations of why and how their personal information is collected, used, managed, and shared. The commission aims to communicate these explanations to Oakland communities in plain and accessible language on the City of Oakland website.

Be accountable to Oaklanders. The commission publicly reviews and discusses departmental requests for acquiring and using technology that can be used for surveillance purposes. It encourages Oaklanders to share their views and concerns regarding any system or department that collects and uses their personal information or has the potential to do so, and to share their views on compliance with these principles.

Oakland has clearly signalled that development at the cost of Oaklanders' privacy won't be acceptable, but there is still a long way to go for cities around the world with respect to their user privacy laws.

Russia opens civil cases against Facebook and Twitter over local data laws
Microsoft says tech companies are "not comfortable" storing their data in Australia thanks to the new anti-encryption law
Harvard Law School launches its Caselaw Access Project API and bulk data service making almost 6.5 million cases available

Google’s Sidewalk Lab smart city project threatens privacy and human rights: Amnesty Intl, CA says

Fatema Patrawala
30 Apr 2019
6 min read
Sidewalk Toronto, a joint venture between Sidewalk Labs, which is owned by Google parent company Alphabet Inc., and Waterfront Toronto, is proposing a high-tech neighbourhood called Quayside for the city’s eastern waterfront. In March 2017, Waterfront Toronto had shared a Request for proposal for this project with the Sidewalk Labs team. It ultimately got approval by Oct 2017 and is currently led by Eric Schmidt, Alphabet Inc CEO and Daniel Doctoroff, Sidewalk Labs CEO. As per reports from Daneilla Barreto, a digital activism coordinator for Amnesty International Canada, the project will normalize the mass surveillance and is a direct threat to human rights. https://twitter.com/AmnestyNow/status/1122932137513164801 The 12-acre smart city, which will be located between East Bayfront and the Port Lands, promises to tackle the social and policy challenges affecting Toronto: affordable housing, traffic congestion and the impacts of climate change. Imagine self-driving vehicles shuttling you around a 24/7 neighbourhood featuring low-cost, modular buildings that easily switch uses based on market demand. Picture buildings heated or cooled by a thermal grid that doesn’t rely on fossil fuels, or garbage collection by industrial robots. Underpinning all of this is a network of sensors and other connected technology that will monitor and track environmental and human behavioural data. That last part about tracking human data has sparked concerns. Much ink has been spilled in the press about privacy protections and the issue has been raised repeatedly by citizens in two of four recent community consultations held by Sidewalk Toronto. They have proposed to build the waterfront neighbourhood from scratch, embed sensors and cameras throughout and effectively create a “digital layer”. This digital layer may result monitoring actions of individuals and collection of their data. In the Responsible Data Use Policy Framework released last year, the Sidewalk Toronto team made a number of commitments with regard to privacy, such as not selling personal information to third parties or using it for advertising purposes. Daneilla further argues that privacy was declared a human right and is protected under the Universal Declaration of Human Rights adopted by the United Nations in 1948. However, in the Sidewalk Labs conversation, privacy has been framed as a purely digital tech issue. Debates have focused on questions of data access, who owns it, how will it be used, where it should all be stored and what should be collected. In other words it will collect the minutest information of an individual’s everyday living. For example, track what medical offices they enter, what locations they frequent and who their visitors are, in turn giving away clues to physical or mental health conditions, immigration status, whether if an individual is involved in any kind of sex work, their sexual orientation or gender identity or, the kind of political views they might hold. It will further affect their health status, employment, where they are allowed to live, or where they can travel further down the line. All of these raise a question: Do citizens want their data to be collected at this scale at all? And this conversation remains long overdue. Not all communities have agreed to participate in this initiative as marginalized and racialized communities will be affected most by surveillance. 
The Canadian Civil Liberties Association (CCLA) has threatened to sue Sidewalk Toronto project, arguing that privacy protections should be spelled out before the project proceeds. Toronto’s Mayor John Tory showed least amount of interest in addressing these concerns during a panel on tech investment in Canada at South by Southwest (SXSW) on March 10. Tory was present in the event to promote the city as a go-to tech hub while inviting the international audience at SXSW at the other industry events. Last October, Saadia Muzaffar announced her resignation from Waterfront Toronto's Digital Strategy Advisory Panel. "Waterfront Toronto's apathy and utter lack of leadership regarding shaky public trust and social license has been astounding," the author and founder of TechGirls Canada said in her resignation letter. Later that month, Dr. Ann Cavoukian, a privacy expert and consultant for Sidewalk Labs, put her resignation too. As she wanted all data collection to be anonymized or "de-identified" at the source, protecting the privacy of citizens. Why big tech really want your data? Data can be termed as a rich resource or the “new oil” in other words. As it can be mined in a number of ways, from licensing it for commercial purposes to making it open to the public and freely shareable.  Apparently like oil, data has the power to create class warfare, permitting those who own it to control the agenda and those who don’t to be left at their mercy. With the flow of data now contributing more to world GDP than the flow of physical goods, there’s a lot at stake for the different players. It can benefit in different ways as for the corporate, it is the primary beneficiaries of personal data, monetizing it through advertising, marketing and sales. For example, Facebook for past 2 to 3 years has repeatedly come under the radar for violating user privacy and mishandling data. For the government, data may help in public good, to improve quality of life for citizens via data--driven design and policies. But in some cases minorities and poor are highly impacted by the privacy harms caused due to mass surveillance, discriminatory algorithms among other data driven technological applications. Also public and private dissent can be discouraged via mass surveillance thus curtailing freedom of speech and expression. As per NY Times report, low-income Americans have experienced a long history of disproportionate surveillance, the poor bear the burden of both ends of the spectrum of privacy harms; are subject to greater suspicion and monitoring while applying for government benefits and live in heavily policed neighborhoods. In some cases they also lose out on education and job opportunities. https://twitter.com/JulieSBrill/status/1122954958544916480 In more promising news, today the Oakland Privacy Advisory Commission released 2 key documents one on the Oakland privacy principles and the other on ban on facial recognition tech. https://twitter.com/cfarivar/status/1123081921498636288 They have given emphasis to privacy in the framework and mentioned that, “Privacy is a fundamental human right, a California state right, and instrumental to Oaklanders’ safety, health, security, and access to city services. We seek to safeguard the privacy of every Oakland resident in order to promote fairness and protect civil liberties across all of Oakland’s diverse communities.” Safety will be paramount for smart city initiatives, such as Sidewalk Toronto. 
But we need more Oakland-like laws and policies that protect and support privacy and human rights, policies under which we can use technology safely and nothing happens to us that we didn’t consent to.

#NotOkGoogle: Employee-led town hall reveals hundreds of stories of retaliation at Google
Google announces new policy changes for employees to report misconduct amid complaints of retaliation and harassment
#GoogleWalkout organizers face backlash at work, tech workers show solidarity

article-image-dockerhub-database-breach-exposes-190k-customer-data-including-tokens-for-github-and-bitbucket-repositories
Savia Lobo
30 Apr 2019
3 min read
Save for later

DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories

On Friday, DockerHub informed its users of a security breach in its database via an email written by Kent Lamb, Director of Docker Support. The breach exposed sensitive information for approximately 190K users, including some usernames and hashed passwords as well as tokens for GitHub and Bitbucket repositories. The company said this number is only five percent of DockerHub's entire user base. Lamb explained that the security incident took place a day prior, on April 25, when the company discovered unauthorized access to a single Hub database storing a subset of non-financial user data.

"For users with autobuilds that may have been impacted, we have revoked GitHub tokens and access keys, and ask that you reconnect to your repositories and check security logs to see if any unexpected actions have taken place," Lamb said in his email.

The GitHub and Bitbucket access tokens stored in Docker Hub allow developers to modify their project's code and help auto-build the images on Docker Hub. A third party that gains access to these tokens could reach the code within the private repositories and easily modify it, depending on the permissions stored in the token. Misusing these tokens to modify code and deploy compromised images can lead to serious supply-chain attacks, as Docker Hub images are commonly used in server configurations and applications.

“A vast majority of Docker Hub users are employees inside large companies, who may be using their accounts to auto-build containers that they then deploy in live production environments. A user who fails to change his account password may have their account’s autobuilds modified to include malware”, ZDNet reports.

Meanwhile, the company has asked users to change their password on Docker Hub and on any other accounts that shared this password. For users with autobuilds that may have been impacted, the company has revoked GitHub tokens and access keys, and asked the users to reconnect to their repositories and check security logs to see whether any unexpected actions have taken place.

Addressing DockerHub’s security exposure, a post on the Microsoft website states, “While initial information led people to believe the hashes of the accounts could lead to image:tags being updated with vulnerabilities, including official and microsoft/ org images, this was not the case. Microsoft has confirmed that the official Microsoft images hosted in Docker Hub have not been compromised.”

Docker said that it is enhancing its overall security processes, that it is still investigating the incident, and that it will share details when available.

A user on HackerNews commented, “I find it frustrating that they are not stating when exactly did the breach occur. The message implies that they know, due to the "brief period" claim, but they are not explicitly stating one of the most important facts. No mention in the FAQ either. I'm guessing that they are either not quite certain about the exact timing and duration, or that the brief period was actually embarrassingly long.”

https://twitter.com/kennwhite/status/1122117406372057090
https://twitter.com/ewindisch/status/1121998100749594624
https://twitter.com/markhood/status/1122067513477611521

To know more about this news, head over to the official DockerHub post.
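One practical mitigation against the supply-chain risk described above is to pin base images to immutable content digests rather than mutable tags, so that a later, tampered push of the same tag cannot silently enter your builds. The sketch below is illustrative only and is not part of Docker's advisory: a minimal Python script, under stated assumptions, that scans a Dockerfile for FROM lines that are not pinned to an @sha256 digest. The file path and image names are hypothetical.

```python
import re
import sys
from pathlib import Path

# Matches the image reference after FROM, e.g. "FROM python:3.9-slim AS builder".
# Note: references to earlier build stages (e.g. "FROM builder") will also be
# flagged; this sketch keeps the parsing deliberately simple.
FROM_RE = re.compile(r"^\s*FROM\s+(?P<image>\S+)", re.IGNORECASE | re.MULTILINE)
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def unpinned_images(dockerfile_text: str) -> list[str]:
    """Return base-image references that are not pinned to a sha256 digest."""
    flagged = []
    for match in FROM_RE.finditer(dockerfile_text):
        image = match.group("image")
        if image.lower() == "scratch":  # "FROM scratch" has nothing to pin
            continue
        if not DIGEST_RE.search(image):
            flagged.append(image)
    return flagged

if __name__ == "__main__":
    # Hypothetical default path; pass your own Dockerfile as the first argument.
    path = Path(sys.argv[1] if len(sys.argv) > 1 else "Dockerfile")
    flagged = unpinned_images(path.read_text())
    if flagged:
        print("Unpinned base images (consider pinning with @sha256:<digest>):")
        for image in flagged:
            print(f"  {image}")
        sys.exit(1)
    print("All base images are digest-pinned.")
```

In practice, you can typically look up the digest of an image you already trust with `docker images --digests` or `docker inspect`, then copy it into the FROM line (for example, `FROM python:3.9-slim@sha256:<digest>`, where `<digest>` is the value you verified), so that future pulls of a compromised tag cannot affect your builds.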
Hacker destroys Iranian cyber-espionage data; leaks source code of APT34’s hacking tools on Telegram
Liz Fong-Jones on how to secure SSH with Two Factor Authentication (2FA)
WannaCry hero, Marcus Hutchins pleads guilty to malware charges; may face up to 10 years in prison