
Tech News - Security

470 Articles

Chrome 69 privacy issues: automatic sign-ins and retained cookies; Chrome 70 to correct these

Prasad Ramesh
27 Sep 2018
4 min read
There are privacy concerns with Chrome 69, the latest release of the popular browser. The concerns revolve around signing into Chrome and the handling of cookies, both of which have changed in the new release.

What are the privacy concerns with Chrome 69?

The Google Chrome 69 update brought a new interface, UI changes, and a feature that automatically signs you into Chrome if you sign into any of Google's services. This was met with heavy criticism from privacy-conscious users. This is not the first time Google has been questioned over user privacy and the data it collects; Google changed its privacy policy to avoid GDPR fines on the scale of billions of dollars. Previously, users had the option to sign in to Chrome with their Google credentials, but the Chrome 69 update changes that: signing into any Google service now automatically signs you into Chrome. Google noted that this would not turn on the sync feature by default.

Another concern with Chrome 69 is that clearing all browsing history and cookies clears everything except Google sites. So, after clearing all browsing history and data, you are still left with Google cookies and data on your computer if you use Chrome.

Source: Google Blog

What are people saying?

In a blog post, Johns Hopkins professor Matthew Green stated: “Google has transformed the question of consenting to data upload from something affirmative that I actually had to put effort into — entering my Google credentials and signing into Chrome — into something I can now do with a single accidental click. This is a dark pattern.”

Christoph Tavan, CTO and co-founder of @contentpass, tweeted that cookies from Google sites remain on your machine even after clearing all browser data.
https://twitter.com/ctavan/status/1044282084020441088

John Graham-Cumming, Cloudflare CTO, tweeted that he won’t be using Chrome anymore:
https://twitter.com/jgrahamc/status/1044123160243826688

A comment on Reddit reads: “This is actually ok. It's not incredibly invasive, and it just creates a chrome user profile when you sign in. They say that it will solve the confusion of the two separate sign ins.”

What does Google have to say about this?

Chrome 70, to be released in mid-October, will roll back this move. In a blog post, Chrome Product Manager Zach Koch states: “While we think sign-in consistency will help many of our users, we’re adding a control that allows users to turn off linking web-based sign-in with browser-based sign-in—that way users have more control over their experience. For users that disable this feature, signing into a Google website will not sign them into Chrome.”

Google Chrome engineer Adrienne Porter Felt replied with an explanation of why automatic sign-in was turned on by default in Chrome 69. Porter Felt stated that the intent is to prevent a ‘common’ confusion where the login state of the browser ends up being different from the login state of the content area. The reply from a Google engineer is not sufficient, notes Green. In the Chrome blog post, Google also addressed the concerns about cookies: “We’re also going to change the way we handle the clearing of auth cookies. In the current version of Chrome, we keep the Google auth cookies to allow you to stay signed in after cookies are cleared. We will change this behavior so that all cookies are deleted and you will be signed out.”

Ending thoughts

It is concerning that signing into any Google product automatically signs you into Chrome.
Moreover, syncing is just an accidental click away, and many people wouldn’t want their data to be synced like that. If sync is not turned on by default, then why is Google signing you in by default in the first place? It makes sense where multiple accounts are in play, but in any case there should be a prompt for signing into Chrome that makes users consciously choose to sign in. Had the user backlash not happened, the next step might have been auto-sync on login. This design choice has definitely eroded trust and goodwill among many Chrome users, some of whom are now seriously looking for viable alternatives.

Google Chrome’s 10th birthday brings in a new Chrome 69
Microsoft Cloud Services get GDPR Enhancements
Google’s new Privacy Chief officer proposes a new framework for Security Regulation


Ex-Googler who quit Google on moral grounds writes to Senate about company’s “Unethical” China censorship plan

Melisha Dsouza
27 Sep 2018
4 min read
“I am part of a growing movement in the tech industry advocating for more transparency, oversight and accountability for the systems we build.” - Jack Poulson, former Google Scientist Project Dragonfly is making its rounds on the internet yet again. Jack Poulson, a former Google scientist who quit Google in September 2018, over its plan to build a censored search engine in China, has written a letter to the U.S. senators revealing new details of this project. The letter lists several details of Google's work on the Chinese search engine that had been reported but never officially confirmed by the company. He affirms that some company employees may have "actively subverted" an internal privacy review of the system. Poulson was strictly opposed to the idea of Google supporting China’s censorship on subjects by blacklisting keywords such as human rights, democracy, peaceful protest, and religion in its search engine. In protest to this project more than 1,000 employees had signed an open letter asking the company to be transparent. Many employees, including Poulson, took the drastic step of resigning from the company altogether. Now, in fear of Google’s role in violating human rights in China, Poulson has sent a letter to members of the Senate Committee on Commerce, Science, and Transportation. The letter stated that there has been "a pattern of unethical and unaccountable decision making from company leadership" at Google. He has requested Keith Enright, Google’s chief privacy officer, to respond to concerns raised by 14 leading human rights groups, who said in late August that Dragonfly could result in Google "directly contributing to, or [becoming] complicit in, human rights violations." The letter highlights a major flaw in the process of developing the Chinese search platform. He says there was "a catastrophic failure of the internal privacy review process, which one of the reviewers characterized as [having been] actively subverted." Citing anonymous sources familiar to the project, the Intercept affirms that the "catastrophic failure" Poulson mentioned, relates to an internal dispute between Google employees- those who work on privacy issues and engineers who developed the censored search system. The privacy reviewers were led to believe that the code used for developing the engine did not involve user data. After The Intercept exposed the project in early August, the privacy reviewers reviewed the code and felt that their colleagues working on Dragonfly had seriously and purposely misled them. The engine did involve user data and was designed to link users’ search queries to their personal phone number, track their internet movements, IP addresses, and information about the devices they use and the links they clicked on. Poulson told the senators that he could "directly verify" that a prototype of Dragonfly would allow a Chinese partner company to "search for a given user’s search queries based on their phone number." The code incorporates an extensive censorship blacklist developed in accordance with the Chinese government. It censors words like the English term "human rights", the Mandarin terms for 'student protest' and 'Nobel prize', and very large numbers of phrases involving 'Xi Jinping' and other members of the CCP. The engine is explicitly coded to ensure only Chinese government-approved air quality data would be returned in response to Chinese users' search. 
This incident takes us back to August 2018, when, in an open letter to Google CEO Sundar Pichai, US Senator for Florida Marco Rubio, leading a bipartisan group of senators, expressed his concern that the project is "deeply troubling" and risks making “Google complicit in human rights abuses related to China’s rigorous censorship regime”. If Google does go ahead with this project, other non-democratic nations can follow suit and demand customization of the search engine as per their own rules, even if those rules violate human rights. Citizens will have to think twice before leaving any internet footprint that could be traced by the government. To gain deeper insights on this news, you can head over to The Intercept.

1k+ Google employees frustrated with continued betrayal, protest against Censored Search engine project for China
Skepticism welcomes Germany’s DARPA-like cybersecurity agency – The federal agency tasked with creating cutting-edge defense technology
Google’s ‘mistakenly deployed experiment’ covertly activated battery saving mode on multiple phones today


Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma

Natasha Mathur
27 Sep 2018
5 min read
An ex-employee filed a lawsuit against Facebook, last week, alleging that Facebook is not providing enough protection to the content moderators whose job involve reviewing disturbing content on the platform. Why is Selena Scola, a content moderator, suing Facebook? “Plaintiff Selena Scola seeks to protect herself and all others similarly situated from the dangers of psychological trauma resulting from Facebook's failure to provide a safe workplace for the thousands of contractors who are entrusted to provide the safest environment possible for Facebook users”, reads the lawsuit. Facebook receives millions of videos, images, and broadcast posts of child sexual abuse, rape, torture, bestiality, beheadings, suicide, and murder. In order to make Facebook a safe platform for users, it relies on machine learning augmented by content moderators. This ensures that any image that violates the corporation’s term of use is removed completely from the platform, as quickly as possible. “Facebook’s content moderators are asked to review more than 10 million potentially rule-breaking posts per week. Facebook aims to do this with an error rate of less than one percent, and seeks to review all user-reported content within 24 hours”, says the lawsuit. Although this safeguard helps with maintaining the safety on the platform, content moderators witness thousands of such extreme content every day. Because of this constant exposure to disturbing graphics, content moderators go through a lot of trauma, with many ending up developing Post-traumatic stress disorder (PTSD), highlights the lawsuit. What does the law say about workplace safety? Facebook claims to have a workplace safety standards draft already in place, like many other tech giants, to protect content moderators. They say it includes providing moderators with mandatory counseling, mental health supports, altering the resolution, and audio, of traumatizing images. It also aimed to train its moderators to recognize the physical and psychological symptoms of PTSD. We have, however, found it difficult to locate the said document. However, as per the lawsuit, “Facebook ignores the workplace safety standards it helped create. Instead, the multibillion-dollar corporation affirmatively requires its content moderators to work under conditions known to cause and exacerbate psychological trauma”. This is against the California law which states, “Every employer shall do every other thing reasonably necessary to protect the life, safety, and health of Employees. This includes establishing, implementing, and maintaining an effective injury prevention program. Employers must provide and use safety devices and safeguards reasonably adequate to render the employment and place of employment safe”. Facebook hires content moderators on a contract basis Tech giants such as Facebook generally have a two-level workforce in place. The top level comprises Facebook’s official employees such as engineers, designers, and managers. These enjoy the majority of benefits such as high salary, and lavish perks among others. Employees such as Content moderators come under the lower level. Majority of these workers are not even permanent employees at Facebook, as they’re employed on a contract basis. Because of this, they often get paid low, miss out on the benefits that regular employees get, as well as have limited access to Facebook management. One of the employees, who wished to remain anonymous told the Guardian last year, “We were underpaid and undervalued”. 
He earned roughly $15 per hour. This was for removing terrorist related content from Facebook, after a two-week training period. They usually come from a poor financial background, with many having families to support. Taking up a job as opposed to being unemployed seems to be a better option for them. Selena Scola was employed by Pro Unlimited (a contingent labor management company in New York) as a Public Content Contractor from approximately June 19, 2017, until March 1, 2018, at Facebook’s offices in Menlo Park and Mountain View, California. During the entirety of this period, Scola was employed solely by Pro Unlimited, an independent contractor of Facebook. She had never been directly employed by Facebook in any capacity. Scola is also suing Pro Unlimited. “According to the Technology Coalition, if a company contracts with a third-party vendor to perform duties that may bring vendor employees in contact with graphic content, the company should clearly outline procedures to limit unnecessary exposure and should perform an initial audit of a contractor’s wellness procedures for its employees,” says the lawsuit. Scola is not the only one who has complained about the company. Over a hundred conservative Facebook employees formed an online group to protest against the company’s “intolerant” liberal culture, last month. The mass exodus of high profile executives is also indicative of a deeper people and a cultural problem at Facebook. Additionally, Facebook has been in many controversies regarding user’s data, fake news, and hate speech. The Department of Housing and Urban Development (HUD) had filed a complaint against Facebook last month, for selling ads which discriminate against users on the basis of race, religion, and sexuality. Similarly, Facebook was found guilty of discriminatory advertisements. Apparently, Facebook provided the third-party advertisers with an option to exclude religious minorities, immigrants, LGBTQ individuals, and other protected groups from seeing their ads. Given the increasing number of controversial cases against Facebook, it's high time for the company to take the right measures towards solving these issues. The lawsuit is currently Scola v Facebook Inc and Pro Unlimited Inc, filed in Superior Court of the State of California. For more information, read the official lawsuit. How far will Facebook go to fix what it broke: Democracy, Trust, Reality Facebook COO, Sandberg’s Senate testimony: On combating foreign influence, fake news, and upholding election integrity Time for Facebook, Twitter and other social media to take responsibility or face regulation


Google’s new Privacy Chief officer proposes a new framework for Security Regulation

Natasha Mathur
25 Sep 2018
4 min read
Google announced Keith Enright as its new Chief Privacy Officer yesterday; Enright has spent a decade leading Google’s privacy legal team and has long been heavily involved in speaking out on Google’s privacy and security practices. As Chief Privacy Officer, Enright will be responsible for setting the privacy program at Google, which includes keeping security tools, policies, and practices user-focused. “My team’s goal is to help you enjoy the benefits of technology while remaining in control of your privacy,” mentions Enright on the Google outreach page.

Google has already been taking measures on security: last month it launched a “Protect your Election” program, which included security policies to defend against state-sponsored phishing attacks. “This is an important time to take on this new role. There is real momentum to develop baseline rules of the road for data protection. Google welcomes this and supports comprehensive, baseline privacy regulation. People deserve to feel comfortable that all entities that use personal information will be held accountable for protecting it,” as per the Google blog.

Lately, a number of companies have been raising their voices on security-related issues. For instance, YouTube’s CBO and the German OpenStreetMap community spoke out against Article 13 of the EU’s controversial copyright law. With organizations citing the EU’s privacy laws as strict, Enright proposed a new privacy framework that lays out Google’s view of the requirements, scope, and enforcement expectations in data protection laws. The framework builds on established privacy regimes and the services that depend on personal data, and it is meant to keep pace with evolving data protection laws around the world. “These principles help us evaluate new legislative proposals and advocate for responsible, interoperable and adaptable data protection regulations. How these principles are put into practice will shape the nature and direction of innovation”, says Enright.

The principles in the new framework are based on established privacy regimes and apply to organizations responsible for making decisions about the collection and use of personal information. Enright will discuss the principles in the framework and Google’s work on privacy and security with the U.S. Senate later this week. The new framework states the requirements, scope, and accountability in data protection laws as follows:

Requirements
Collecting and using personal information responsibly.
Maintaining transparency is mandatory for helping individuals be informed.
Placing reasonable limitations on the means of collecting, using, and disclosing personal information.
Maintaining the quality of personal information.
Making it practical for individuals to control the use of their personal information.
Giving individuals the ability to access, correct, delete, and download personal information about them.
Including the requirements needed to secure personal information.

Scope and Accountability
Holding organizations accountable for compliance.
More focus on the risk of harm to individuals and communities.
Direct consumer services should be distinguished from enterprise services.
Personal information should be defined flexibly to ensure the proper incentives and handling.
Rules should be applied to all organizations that process personal information.
Regulations should be designed to improve the ecosystem and accommodate changes in technology and norms.
A geographic scope that accords with international norms should be applied.
Encouraging global interoperability.

“Sound practices combined with strong and balanced regulations can help provide individuals with confidence that they’re in control of their personal information,” says Enright. For more information, check out the official framework.

EU slaps Google with $5 billion fine for the Android antitrust case
Ex-Google CEO, Eric Schmidt, predicts an internet schism by 2028
Google plans to let the AMP Project have an open governance model, soon!


Baidu Security Lab's MesaLink, a cryptographic memory safe library alternative to OpenSSL

Aarthi Kumaraswamy
20 Sep 2018
3 min read
X-Lab, Baidu’s security lab focused on researching and developing industry-leading security solutions, today released the latest version of MesaLink, a cryptographic memory safe library for securing end-to-end communications. Encrypted communication is a cornerstone of Internet security, as it provides protection for a wide variety of applications like cloud computing, blockchain, autonomous driving, and the Internet of Things. Existing solutions for securing end-to-end communications are implemented in programming languages like C/C++, which makes them particularly susceptible to memory safety vulnerabilities. The Heartbleed bug, for example, is a serious memory safety vulnerability in the OpenSSL cryptographic software library that allows attackers to steal information protected by encryption.

“OpenSSL, one of the most prominent implementations of the SSL/TLS protocol, has been protecting the Internet for the past two decades,” said Tao Wei, Chief Security Scientist at Baidu, Inc. “It has made a significant contribution to the evolution of the Internet. However, cryptography and protocol implementations of SSL/TLS are complex, and SSL/TLS is nearly impossible to implement without vulnerabilities. When Heartbleed was discovered in 2014, it affected two-thirds of the Internet, causing detrimental loss around the globe. Heartbleed is considered one of the most serious vulnerabilities since the commercialization of the Internet.”

MesaLink, unlike OpenSSL, is based on Baidu’s advanced Hybrid Memory Safety Model, which has revolutionized memory safety systems at the software architecture level. MesaLink is well-guarded against a whole class of memory safety vulnerabilities and withstands most exploits. MesaLink aims to be a drop-in replacement for the widely adopted OpenSSL library: by providing OpenSSL-compatible APIs, it enables developers of preexisting projects to smoothly transition to MesaLink. For example, curl, a popular library for transferring data, recently integrated MesaLink, which now easily extends its presence into a wide variety of applications where OpenSSL used to dominate. Another promising example is Android, in which MesaLink is able to transparently establish secure communications for any installed app without changing a single line of code.

Beyond memory safety and OpenSSL compatibility, MesaLink also provides competitive performance. With secure and efficient cryptographic APIs, MesaLink reduces the time to establish a trusted communication channel between the client and server, providing a faster web browsing experience to users.

“Heartbleed is an example of why C/C++ cannot meet the memory safety expectations in SSL/TLS implementations,” added Wei. “To eliminate vulnerabilities like Heartbleed, the MesaLink project was created. We expect MesaLink could be the next OpenSSL that protects secure communication on the Internet for the foreseeable future.”

MesaLink has already been adopted in products like smart TVs and set-top boxes. As part of Baidu's Open AI System Security Alliance and AIoT Security Solutions, it has enabled more than 2 million smart TVs to securely connect to the cloud.

Baidu releases EZDL – a no-code platform for building AI and machine learning models
Baidu Apollo autonomous driving vehicles get machine learning based auto-calibration system
Baidu announces ClariNet, a neural network for text-to-speech synthesis


How Twitter is defending against the Silhouette attack that discovers user identity

Savia Lobo
20 Sep 2018
5 min read
Twitter Inc. has disclosed how it is defending against a new cyber attack technique, Silhouette, which discovers the identity of logged-in Twitter users. The issue was first reported to Twitter in December 2017 through its vulnerability rewards program by a group of researchers from Waseda University and NTT. The researchers submitted a draft of their paper for the IEEE European Symposium on Security and Privacy in April 2018. Following this, Twitter’s security team prioritized the issue, routed it to several relevant teams, and contacted several other at-risk sites and browser companies to urgently address the problem. The researchers, too, recognized the significance of the problem, and a cross-functional squad was formed to address it.

The Silhouette attack

This attack exploits variability in the time taken by web pages to load. The threat is established by exploiting a function called ‘user blocking’ that is widely adopted in Social Web Services (SWSs). Here the malicious user can also control the visibility of pages from legitimate users. As a preliminary step, the malicious third party creates personal accounts within the target SWS (referred to below as “signaling accounts”) and uses these accounts to systematically block some users on the same service, thereby constructing a combination of non-blocked/blocked users. This pattern can be used as information for uniquely identifying user accounts. At the time of identification execution, that is, when a user visits a website on which a script for identifying account names has been installed, that user will be forced to communicate with pages of each of those signaling accounts. This communication, however, is protected by the Same-Origin Policy, so the third party will not be able to directly obtain the content of a response from such a communication.

The action taken against the Silhouette attack

The Waseda University and NTT researchers provided various ideas for mitigating the issue in their research paper. The ideal solution was to use the SameSite attribute for the Twitter login cookies. This would mean that requests to Twitter from other sites would not be considered logged-in requests, and if the requests aren't logged-in requests, identity can't be detected. However, this feature was an expired draft specification and had only been implemented by Chrome. Although Chrome is one of the biggest browser clients by usage, Twitter needed to cover other browsers as well, so it decided to look into other options to mitigate the issue.

Twitter decided to reduce the response size differences by loading a page shell and then loading all content with JavaScript using AJAX. Page-to-page navigation for the website already works this way. However, the server processing differences were still significant for the page shell, because the shell still needed to provide header information, and those queries made a noticeable impact on response times. Twitter’s CSRF protection mechanism for POST requests checks whether the origin and referer headers of the request are sourced from Twitter. This proved effective in addressing the vulnerability, but it prevented the initial load of the website: users might load Twitter from a Google search result or by typing the URL into the browser. To address this case, Twitter created a blank page on its site which did nothing but reload itself. Upon reload, the referer would be set to twitter.com, and so the page would load correctly. There is no way for non-Twitter sites to follow that reload.
The blank page is super-small, so while a roundtrip load is incurred, it doesn't impact load times too much. With this solution, Twitter was able to apply the fix across its various high-level web stacks. There were a bunch of other considerations Twitter had to make. Some of them include:

Twitter supports a legacy version of the site (known internally as M2) that operates without the need for JavaScript, so it made sure that the reloading solution didn't require JavaScript.
It made use of CSP for security, making sure that the blank reloading page followed Twitter’s own CSP rules, which can vary from service to service.
Twitter needed to pass through the original HTTP referrer to make sure metrics were still accurately attributing search engine referrals.
It had to make sure the page wasn't cached by the browser, or the blank page would reload itself indefinitely. Thus, Twitter used cookies to detect those loops, showing a short friendly message and a manual link if the page appeared to be reloading more than once.

Implementing the SameSite cookie on major browsers

Although Twitter has implemented the mitigation, it has also discussed the SameSite cookie attribute with other major browser vendors. All major browsers have now implemented SameSite cookie support, including Chrome, Firefox, Edge, Internet Explorer 11, and Safari. Rather than adding the attribute to Twitter’s existing login cookie, the team added two new cookies for SameSite, to reduce the risk of logout should a browser or network issue corrupt the cookie when it encounters the SameSite attribute. Adding the SameSite attribute to a cookie is not at all time-consuming: one just needs to add "SameSite=lax" to the set-cookie HTTP header. However, Twitter's servers depend on Finagle, which is a wrapper around Netty, which does not support extensions to the Cookie object. As per a Twitter post, “When investigating, we were surprised to find a feature request from one of our own developers the year before! But because SameSite was not an approved part of the spec, there was no commitment from the Netty team to implement. Ultimately we managed to add an override into our implementation of Finagle to support the new cookie attribute.” Read more about this in detail on Twitter’s blog post.

The much loved reverse chronological Twitter timeline is back as Twitter attempts to break the ‘filter bubble’
Building a Twitter news bot using Twitter API [Tutorial]
Facebook, Twitter open up at Senate Intelligence hearing, the committee does ‘homework’ this time
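The header change described above is tiny in practice. As a rough, hypothetical illustration (this is not Twitter's code, and the cookie name and value below are made up), a Python handler only needs to append the attribute to its Set-Cookie header:

```python
# Minimal sketch, not Twitter's implementation: issue a login cookie carrying
# the SameSite attribute. With SameSite=Lax the browser withholds the cookie
# from cross-site subresource and script-initiated requests, which is the
# property the Silhouette defence relies on. Cookie name/value are made up.
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoginHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header(
            "Set-Cookie",
            "auth_token=example-session-id; Secure; HttpOnly; SameSite=Lax",
        )
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"logged in\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), LoginHandler).serve_forever()
```

Using Lax rather than Strict keeps top-level navigations to the site logged in while still blocking the cross-site subresource requests the attack depends on, which matches the "SameSite=lax" value quoted in the article.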

‘Peekaboo’ Zero-Day Vulnerability allows hackers to access CCTV cameras, says Tenable Research

Melisha Dsouza
20 Sep 2018
3 min read
Earlier this week, Tenable Inc announced that its research team had discovered a zero-day vulnerability, dubbed 'Peekaboo', in NUUO software. NUUO licenses its software to at least 100 other brands, including Sony, Cisco Systems, D-Link, Panasonic, and many more. The vulnerable device is the NVRMini2, a network-attached storage device and network video recorder. The vulnerability would allow cybercriminals to view, disable or otherwise manipulate video footage using administrator privileges. To give you a small gist of the situation, hackers could replace the live video surveillance feed with a static image of the area, which could assist criminals in entering someone’s premises undetected by the CCTV. Cameras with this bug could be manipulated and taken offline, worldwide. And this is not the first time that NUUO devices have been affected by a vulnerability: just last year, there were reports of NUUO NVR devices being specifically targeted by the Reaper IoT botnet.

"The Peekaboo flaw is extremely concerning because it exploits the very technology we rely on to keep us safe" - Renaud Deraison, co-founder and chief technology officer, Tenable

Vulnerabilities discovered by Tenable

The two vulnerabilities, CVE-2018-1149 and CVE-2018-1150, are tied to the NUUO NVRMini2 web server software.

#1 CVE-2018-1149: Allows an attacker to sniff out affected gear

This vulnerability helps attackers sniff out affected gear using Shodan. The attacker can then trigger a buffer-overflow attack that allows them to access the camera’s web server Common Gateway Interface (CGI), which acts as a gateway between a remote user and the web server. The attack delivers a very large cookie file to the CGI handler; because the CGI does not validate the user’s input properly, the attacker gains access to the web server portion of the camera.

#2 CVE-2018-1150: Takes advantage of backdoor functionality

This bug takes advantage of backdoor functionality in the NUUO NVRMini2 web server. When the backdoor PHP code is enabled, it allows an unauthenticated attacker to change the password for any registered user except the administrator of the system.

‘Peekaboo’ affects firmware versions older than 3.9.0. Tenable states that NUUO was notified of this vulnerability in June and was given 105 days to issue a patch before the bugs were publicly disclosed. Tenable’s GitHub page provides more details on potential exploits tested with one of NUUO’s NVRMini2 devices. NUUO is planning to issue a security patch. Meanwhile, users are advised to restrict access to their NUUO NVRMini2 deployments. Owners of devices connected directly to the internet are especially at risk; affected end users are urged to disconnect these devices from the internet until a patch is released. For more information on Peekaboo, head over to the Tenable Research Advisory blog post.

Alarming ways governments are using surveillance tech to watch you
Windows zero-day vulnerability exposed on ALPC interface by a vulnerability researcher with ‘no formal degrees’
IoT botnets Mirai and Gafgyt target vulnerabilities in Apache Struts and SonicWall


NSA’s EternalBlue leak leads to 459% rise in illicit crypto mining, Cyber Threat Alliance report

Melisha Dsouza
20 Sep 2018
3 min read
"Illicit mining is the 'canary in the coal mine' of cybersecurity threats. If illicit cryptocurrency mining is taking place on your network, then you most likely have worse problems and we should consider the future of illicit mining as a strategic threat." - Neil Jenkins, Chief Analytic Officer for the Cyber Threat Alliance A leaked software tool from the US National Security Agency has led to a surge in Illicit cryptocurrency mining, researchers said on Wednesday. The report released by the Cyber Threat Alliance, an association of cybersecurity firms and experts, states that it detected a 459 percent increase in the past year of illicit crypto mining- a technique used by hackers to steal the processing power of computers to create cryptocurrency. One reason for the sharp rise in illicit mining was the leak last year by a group of hackers known as the Shadow Brokers of EternalBlue. The EternalBlue was a software developed by the NSA to exploit vulnerabilities in the Windows operating system. There are still countless organizations that are being victimized by this exploit, even after a patch for EternalBlue has been made available for 18 months. Incidentally, the rise in hacking coincides with the growing use of virtual currencies such as bitcoin, ethereum or monero. Hackers have discovered ways to tap into the processing power of unsuspecting computer users to illicitly generate currency. Neil Jenkins said in a blog post that the rise in malware for crypto mining highlights "broader cybersecurity threats". Crypto mining which was once non-existent is, now, virtually on every top firm’s threat list. The report further added that 85 percent of illicit cryptocurrency malware mines monero, and 8 percent mines bitcoin. Even though Bitcoin is well known as compared to Monero, according to the report, the latter offers more privacy and anonymity which help cyber criminals hide their mining activities and their transactions using the currency. Transaction addresses and values are unclear in monero by default, making it incredibly difficult for investigators to find the cybercrime footprint. The blog advises network defenders to make it harder for cybercriminals to carry out illicit mining by improving practices of cyber hygiene. Detection of cyber mining and Incident response plans to the same should also be improved. Head over to techxplore for more insights on this news. NSA researchers present security improvements for Zephyr and Fucshia at Linux Security Summit 2018 Top 15 Cryptocurrency Trading Bots Cryptojacking is a growing cybersecurity threat, report warns  


The much loved reverse chronological Twitter timeline is back as Twitter attempts to break the ‘filter bubble’

Natasha Mathur
19 Sep 2018
3 min read
Twitter’s CEO Jack Dorsey announced on Monday that Twitter’s bringing back the much-loved and original ‘reverse chronological order theme’ for the Twitter news feed. You can enable the reverse chronological theme by making setting changes. https://twitter.com/jack/status/1042038232647647232 Twitter is also working on providing users with a way to easily toggle between the two different themes i.e. a timeline of tweets most relevant to you and a timeline of all the latest tweets. To change to the reverse chronological order timeline, go to settings on the twitter, then select privacy option, go to the content section and uncheck the box that says “Timeline- show the best tweets first”. Twitter also removed the ‘in case you missed it’ section from the settings. https://twitter.com/TwitterSupport/status/1041838957896450048 The Reverse Chronological theme was Twitter’s original content presentation style, much before it made the ‘top tweets algorithm’ as a default option, back in 2016. When Twitter announced that it was changing its timeline so that it wouldn’t show the tweets in chronological order anymore, a lot of people were unhappy. In fact, people despised the new theme so much that a new hashtag #RIPTwitter was trending back then. Twitter with its new algorithm in 2016 focussed mainly on bringing the top, most happening, tweets to light. But, a majority of Twitter users felt differently. People enjoyed the simpler reverse-chron Twitter where people could get real-time updates from their close friends, family, celebrities, etc, not the twitter that shows only the most relevant tweets stacked together. Twitter defended the new approach as it tweeted yesterday that “We’ve learned that when showing the best Tweets first, people find Twitter more relevant and useful. However, we've heard feedback from people who at times prefer to see the most recent Tweets”. Also, Twitter has been making a lot of changes recently after Twitter CEO, Jack Dorsey testified before the House Energy and Commerce Committee regarding Twitter’s algorithms and content monitoring. Twitter mentioned that they want people to have more control over their timeline. https://twitter.com/TwitterSupport/status/1042155714205016064 Public reaction to this new change has been largely positive with a lot of people criticizing the company’s Top Tweet timeline.   https://twitter.com/_Case/status/1041841407739260928 https://twitter.com/terryb600/status/1041847173770620929   https://twitter.com/smithant/status/1041884671921930240 https://twitter.com/alliecoyne/status/1041850426159583232 https://twitter.com/fizzixrat/status/1041881429477654528 One common pattern observed is that people brought up Facebook a lot while discussing this new change. https://twitter.com/_Case/status/1042068118997270528 https://twitter.com/Depoetic/status/1041842498459578369 https://twitter.com/schachin/status/1041925075698503680 Twitter seems to have dodged a bullet by giving back to its users what they truly want. Twitter’s trying to shed its skin to combat fake news and data scandals, says Jack Dorsey Facebook, Twitter open up at Senate Intelligence hearing, committee does ‘homework’ this time Jack Dorsey to testify explaining Twitter algorithms before the House Energy and Commerce Committee  


Google makes Trust Services Root R1, R2, R3, and R4 Inclusion Request

Melisha Dsouza
17 Sep 2018
3 min read
After Google launched its Certification Authority in August 2017, it has now put in a request to the Mozilla certification store for the inclusion of the Google Trust Services R1, R2, R3, and R4 roots, as documented in the corresponding bug. Google's application states the following:

"Google is a commercial CA that will provide certificates to customers from around the world. We will offer certificates for server authentication, client authentication, email (both signing and encrypting), and code signing. Customers of the Google PKI are the general public. We will not require that customers have a domain registration with Google, use domain suffixes where Google is the registrant, or have other services from Google."

What are Google Trust Services Roots?

To adopt an independent infrastructure and build the "foundation of a more secure web," Google Trust Services allows the company to issue its own TLS/SSL certificates for securing its web traffic via HTTPS, instead of relying on third-party certs. The main aim of launching GTS was to bring security and authentication certificates up to par with Google’s rigorous security standards. This means invalidating the old, insecure HTTP standard in Chrome and deprecating Adobe Flash, a web program known to be insecure and a resource hog. GTS will provide HTTPS certificates for everything from public websites to API servers, and it covers all Alphabet companies, not just Google. Developers who build products that connect to Google’s services will have to include the new root certificates. All GTS roots expire in 2036, while GS Root R2 expires in 2021 and GS Root R4 in 2038. Google will also be able to cross-sign its CAs, using GS Root R3 and GeoTrust, to ease potential timing issues while setting up the root CAs. To know more about these trust services, you can visit GlobalSign.

Some noticeable points in this request are:
Google has supplied a key generation ceremony audit report.
Other than the disclosed intermediates and required test certificates, no issuance has been detected from these roots.
Section 1.4.2 of the CPS expressly forbids the use of Google certificates for "man-in-the-middle purposes".
Appendix C of the current CPS indicates that Google limits the lifetime of server certificates to 365 days.

The following concerns exist with the roots:
From the transfer on 11 August 2016 through 8 December 2016, it would not have been clear at the time whether any policies applied to these new roots. The applicable CPS (Certification Practice Statement) during that period makes no reference to these roots. Google does state in its current CPS that these roots were operated according to that CPS.
From the transfer on 11 August 2016 through the end of Google’s audit period on 30 September 2016, these roots were not explicitly covered by either Google’s audit or GlobalSign’s audit.

The discussion was concluded by adding this policy to the main Mozilla Root Store Policy (section 8). With these changes and the filing of the bug, Mozilla plans to take no action against GTS based on what has been discovered and discussed. To get a complete insight into this request, head over to Google Groups.

Let’s Encrypt SSL/TLS certificates gain the trust of all Major Root Programs
Pay your respects to Inbox, Google’s email innovation is getting discontinued
Google’s prototype Chinese search engine ‘Dragonfly’ reportedly links searches to phone numbers

Google’s prototype Chinese search engine ‘Dragonfly’ reportedly links searches to phone numbers

Melisha Dsouza
17 Sep 2018
3 min read
Last month, the Intercept reported that Google is building a prototype search engine for China called 'Dragonfly', which led to Google employees pressuring the company to abandon the project on ethical grounds. Google then appeased its employees, stating that the project was simply an exploration and nowhere near completion. Now, there are fresh reports from the Intercept that Google’s custom search engine would link Chinese users’ search queries to their personal phone numbers, making it easier for the government to track their searches. This means those who search for banned information could be interrogated or detained if security agencies got hold of Google's search records. According to The Intercept, Dragonfly will be designed for Android devices and would remove content considered to be sensitive by China’s authoritarian Communist Party regime, which includes information about freedom of speech, dissidents, peaceful protest, and human rights. Citing anonymous sources familiar with the plan, including a Google whistleblower having "moral and ethical concerns" about Google’s role in censorship, the Intercept revealed that "programmers and engineers at Google have created a custom Android app" which has already been demonstrated to the Chinese government. The finalized version could be launched in the next six to nine months, after approval from Chinese officials.

What this means for other nations and for Google

China has strict cyber surveillance, and the fact that this tech giant is bending to China’s demands is a topic of concern for US legislators as well as citizens of other countries. Last week, in an open letter to Google CEO Sundar Pichai, US Senator for Florida Marco Rubio, leading a bipartisan group of senators, expressed his concern that the project is "deeply troubling" and risks making “Google complicit in human rights abuses related to China’s rigorous censorship regime”. He also requested answers to several unanswered questions; for instance, what has changed since Google’s 2010 withdrawal from China to make the tech giant comfortable cooperating with China’s rigorous censorship regime. The project is also drawing attention from users all over the globe.

Source: Reddit

Google has not yet confirmed the existence of Dragonfly, and has publicly declined to comment on reports about the project. The only comment released to Fox News from a Google spokesperson on Sunday was that it is just doing 'exploratory' work on a search service in China and that it is 'not close to launching a search product.' In protest against this project, more than 1,000 employees signed an open letter last month asking the company to be transparent. Now, some employees have taken the next step by resigning from the company altogether. This is not the first time that Google employees have resigned in protest over one of the company's projects: earlier this year, Project Maven, a drone initiative for the US government that could weaponize Google's AI research, caused a stir among at least a dozen employees who reportedly quit over the initiative. Scrutiny of Google’s approach to privacy has continued to intensify, and it is about time the company started taking into consideration all aspects of users' internet privacy. To know more about Project 'Dragonfly', head over to The Intercept.
Google’s ‘mistakenly deployed experiment’ covertly activated battery saving mode on multiple phones today
Did you know your idle Android device sends data to Google 10 times more often than an iOS device does to Apple?
Bloomberg says Google, Mastercard covertly track customers’ offline retail habits via a secret million dollar ad deal


IoT botnets Mirai and Gafgyt target vulnerabilities in Apache Struts and SonicWall

Savia Lobo
12 Sep 2018
4 min read
Unit 42 of Palo Alto Networks reported two new variants of the IoT botnets Mirai and Gafgyt last week, on September 7, 2018. The former targets vulnerabilities in Apache Struts, and the latter targets older, unsupported versions of SonicWall’s Global Management System (GMS). Researchers at Palo Alto Networks said, “Unit 42 found the domain that is currently hosting these Mirai samples previously resolved to a different IP address during the month of August. During that time this IP was intermittently hosting samples of Gafgyt that incorporated an exploit against CVE-2018-9866, a SonicWall vulnerability affecting older versions of SonicWall Global Management System (GMS). SonicWall has been notified of this development.”

Mirai variant botnet exploit in Apache Struts

The Mirai botnet exploit targets 16 different vulnerabilities, which include the Apache Struts arbitrary command execution vulnerability CVE-2017-5638, via crafted Content-Type, Content-Disposition, or Content-Length HTTP headers. The same Apache Struts vulnerability was associated with the massive Equifax data breach in September 2017. This botnet had previously targeted routers and other IoT-based devices, as revealed around the end of May 2018; however, this is the first instance of a Mirai variant targeting a vulnerability in Apache Struts. The new variant is also targeting vulnerabilities such as the Linksys E-series device remote code execution flaw, a D-Link router remote code execution flaw, an OS command injection security flaw affecting Zyxel routers, an unauthenticated command injection flaw affecting AVTECH IP devices, and more. Here’s the complete list of all exploits incorporated in this Mirai variant.

Gafgyt variant exploit in SonicWall GMS

The Gafgyt variant targets CVE-2018-9866, a security flaw discovered in July that affects old, unsupported versions of SonicWall Global Management System (GMS), that is, versions 8.1 and older. The vulnerability targeted by this exploit is caused by the lack of sanitization of XML-RPC requests to the set_time_config method. There is currently no fix for the flaw other than for GMS users to upgrade to version 8.2. Researchers noted that these samples first surfaced on August 5, less than a week after the publication of a Metasploit module for this vulnerability. Some of its configured commands include launching the Blacknurse DDoS attack. Unit 42 researchers said, “Blacknurse is a low bandwidth DDoS attack involving ICMP Type 3 Code 3 packets causing high CPU loads first discovered in November 2016. The earliest samples we have seen supporting this DDoS method are from September 2017.”

The researchers also mentioned, "The incorporation of exploits targeting Apache Struts and SonicWall by these IoT/Linux botnets could indicate a larger movement from consumer device targets to enterprise targets. These developments suggest these IoT botnets are increasingly targeting enterprise devices with outdated versions."

In an email directed to us, SonicWall mentions that "The vulnerability disclosed in this post is not an announcement of a new vulnerability in SonicWall Global Management System (GMS). The issue referenced only affects an older version of the GMS software (version 8.1) which was replaced by version 8.2 in December 2016. Customers and partners running GMS version 8.2 and above are protected against this vulnerability.
Customers still using GMS version 8.1 should apply a hotfix supplied by SonicWall in August 2018 and plan for an immediate upgrade, as GMS 8.1 went out of support in February 2018. SonicWall and its threat research team continuously updates its products to provide industry-leading protection against the latest security threats, and it is therefore crucial that customers are using the latest versions of our products. We recommend that customers with older versions of GMS, which are long out of support, should upgrade immediately from www.mysonicwall.com."

To know more about these IoT botnet attacks in detail, visit the Palo Alto Networks Unit 42 blog post.

Build botnet detectors using machine learning algorithms in Python [Tutorial]
Cisco and Huawei Routers hacked via backdoor attacks and botnets
How to protect yourself from a botnet attack
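For defenders, one low-effort way to spot probes for the Struts flaw this Mirai variant uses (CVE-2017-5638) is to look for OGNL expressions smuggled into the Content-Type header. The sketch below is a generic, hypothetical example rather than anything from the Unit 42 report; it assumes the web server has been configured to log the Content-Type request header (not a default log field) and the log path is a placeholder.

```python
# Hypothetical log-scanning sketch for CVE-2017-5638 probes. It assumes the
# access log records the Content-Type request header (not logged by default)
# and uses a deliberately broad OGNL marker ("%{...}") to flag candidates.
import re
import sys

OGNL_MARKER = re.compile(r"%\{.*\}")

def scan(path: str) -> None:
    with open(path, "r", errors="replace") as handle:
        for lineno, line in enumerate(handle, start=1):
            # Flag lines where an OGNL-looking expression appears alongside
            # a Content-Type field, the header this exploit abuses.
            if "Content-Type" in line and OGNL_MARKER.search(line):
                print(f"{path}:{lineno}: possible Struts OGNL probe")
                print("    " + line.strip())

if __name__ == "__main__":
    for log_file in sys.argv[1:] or ["access.log"]:  # placeholder path
        scan(log_file)
```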


OpenSSL 1.1.1 released with support for TLS 1.3, improved side channel security

Melisha Dsouza
12 Sep 2018
3 min read
Yesterday (11th of September), the OpenSSL team announced the stable release of OpenSSL 1.1.1. With work having been in progress for two years and more than 500 commits, the release comes with many notable upgrades. The most important new feature in OpenSSL 1.1.1 is TLSv1.3, which was published last month as RFC 8446 by the Internet Engineering Task Force. Applications working with OpenSSL 1.1.0 can gain the benefits of TLSv1.3 by upgrading to the new OpenSSL version.

TLS 1.3 features
A reduction in the number of round trips required between the client and server, improving connection times.
0-RTT or “early data”: the ability for clients to start sending encrypted data to the server straight away, without any round trips with the server.
Removal of various obsolete and insecure cryptographic algorithms, and encryption of more of the connection handshake, improving security.

For more details on TLS 1.3 read: Introducing TLS 1.3, the first major overhaul of the TLS protocol with improved security and speed

Updates in OpenSSL 1.1.1

A complete rewrite of the OpenSSL random number generator
The OpenSSL random number generator has been completely rewritten to introduce capabilities such as:
The default RAND method now utilizes an AES-CTR DRBG according to NIST standard SP 800-90Ar1.
Support for multiple DRBG instances with seed chaining.
A public and a private DRBG instance.
DRBG instances are made fork-safe.
All global DRBG instances are kept on the secure heap if it is enabled.
The public and private DRBG instances are per thread, for lock-free operation.

Support for various new cryptographic algorithms
The algorithms now supported by OpenSSL 1.1.1 include:
SHA3, SHA512/224 and SHA512/256
EdDSA (including Ed25519 and Ed448)
X448 (adding to the existing X25519 support in 1.1.0)
Multi-prime RSA
SM2, SM3, SM4
SipHash
ARIA (including TLS support)

Side-channel attack security improvements
This upgrade also introduces significant side-channel attack security improvements, maximum fragment length TLS extension support, and a new STORE module implementing a uniform, URI-based reader of stores containing keys, certificates, CRLs, and numerous other objects.

OpenSSL 1.0.2 will receive full support only until the end of 2018 and security fixes only until the end of 2019. The team advises users of OpenSSL 1.0.2 to upgrade to OpenSSL 1.1.1 at the earliest. Head over to the OpenSSL blog for further details on the news.

GNU nano 3.0 released with faster file reads, new shortcuts and usability improvements
Haiku, the open source BeOS clone, to release in beta after 17 years of development
Ripgrep 0.10.0 released with PCRE2 and multi-line search support
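Since Python's ssl module is built on whatever OpenSSL the interpreter was compiled against, a build linked to OpenSSL 1.1.1 exposes TLSv1.3 to applications with essentially no code changes. A minimal sketch, assuming a Python 3.7+ interpreter linked against OpenSSL 1.1.1 and using example.com purely as a stand-in host:

```python
# Check that the interpreter's OpenSSL build offers TLS 1.3, then make a
# client connection that refuses anything older. The host is a stand-in.
import socket
import ssl

print("linked against:", ssl.OPENSSL_VERSION)
print("TLS 1.3 available:", ssl.HAS_TLSv1_3)

context = ssl.create_default_context()
# Require TLS 1.3; on builds without 1.3 support the handshake will fail.
context.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("negotiated:", tls.version())  # expected: TLSv1.3
```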

German OpenStreetMap protest against “Article 13” EU copyright reform making their map unusable

Sugandha Lahoti
10 Sep 2018
3 min read
The European Union’s copyright reform bill is currently up for a vote in the European Parliament on September 12. Its Article 13 has been on the receiving end of a backlash, with many organizations protesting against it. Last week it was YouTube’s CBO speaking out, and this week the German OpenStreetMap community has made its map unusable to protest against the EU copyright reform.

According to Article 13, there is an “obligation on information society service providers storing and giving access to large amounts of works and other subject-matter uploaded by their users to take appropriate and proportionate measures to ensure the functioning of agreements concluded with right holders and to prevent the availability on their services of content identified by rightholders in cooperation with the service providers”.

This Article 13 is a revamped version that the EU has come out with after the older version of the copyright reform bill was rejected by the Parliament back in July. The older version also received heavy criticism from policy experts and digital rights groups on the grounds of violating the fundamental rights of internet users. This legislation has the potential to change the balance of power between producers of music, news, and film and the dominant websites that host their work. On one side, people say that if passed, this law would mean the end of the free Internet: platforms would have to algorithmically pre-filter all user uploads and block fair-use content, satire, memes, and so on. On the other side, supporters of the law say that their hard work is being compromised because they are not being fairly compensated for it. These supporters are creators who depend upon being paid for the sharable content they create, such as musicians, authors, and filmmakers.

People have largely supported OpenStreetMap’s decision. A Hacker News user pointed out, “Good for them. The Internet as we know it is being attacked from multiple angles right now, with the EU filtering proposals, AU/5Eyes anti-encryption proposals, etc.” Another commenter called it, “Oh no, more evil political hacking!” You can read more such opinions on Hacker News. You can also find some of the most common questions around the proposed Directive on the EU website.

Mozilla, Internet Society, and web foundation wants G20 to address “techlash” fuelled by security and privacy concerns.
Facebook COO, Sandberg’s Senate testimony: On combating foreign influence, fake news, and upholding election integrity.
Twitter’s CEO, Jack Dorsey’s Senate Testimony: On Twitter algorithms, platform health, role in elections and more.


Snort 3 beta available now!

Melisha Dsouza
10 Sep 2018
2 min read
On 29th August 2018, the team at Snort released the next-generation Snort IPS, Snort 3, in beta, following its fourth alpha. Along with all the Snort 2.X features, this version of Snort++ includes new features as well as bug fixes for the base version of Snort. Here are some key features of Snort++:

Support for multiple packet processing threads
Shared configuration and attribute table
Simple, scriptable configuration
Pluggable key components
Autodetection of services for portless configuration
Support for sticky buffers in rules
Autogenerated reference documentation
Better cross-platform support
Easier component testing
Support for pipelining of packet processing, hardware offload and data plane integration, and proxy mode

Below is a brief gist of these upgrades.

Easy Configuration
LuaJIT is used for configuration, with a consistent and executable syntax.

Better Detection of Services
The team has worked closely with Cisco Talos to update rules to meet their needs, including a feature they call "sticky buffers." The Hyperscan search engine and regex fast patterns make rules faster and more accurate.

HTTP Support
Snort 3 has a stateful HTTP inspector that handles 99 percent of the HTTP Evader cases; the aim is to achieve 100 percent coverage soon. The HTTP support also includes new rule options.

Better Performance
Deep packet inspection now performs better. Snort 3 supports multiple packet-processing threads and scales linearly, with a much smaller amount of memory required for shared configs.

JSON event logging
This can be used to integrate with tools such as the Elastic Stack. Check out the Snort blog post for more details on the same.

More Plugins!
Snort 3 was designed to be extensible. It has over 225 plugins of various types, and it is easy for users to add their own codec, inspector, rule action, rule option, or logger.

In addition to all these features, users can also watch out for additional upgrades like next-generation DAQ, connection events, and search engine acceleration, among others. To know more about the release of Snort 3, head over to Snort’s official page.

OpenFaaS releases full support for stateless microservices in OpenFaaS 0.9.0
Mastodon 2.5 released with UI, administration, and deployment changes
GNOME 3.30 released with improved Desktop performance, Screen Sharing, and more
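The JSON event logging called out above also makes alerts easy to consume from a script before they ever reach the Elastic Stack. Below is a minimal sketch, assuming Snort 3's alert_json logger has been configured to write one JSON object per line to the (hypothetical) path shown; the exact field names depend on the logger configuration, so the ones printed here are assumptions.

```python
# Rough consumer for Snort 3 alert_json output (one JSON object per line).
# The log path and the field names printed below are assumptions that depend
# on how the alert_json logger was configured.
import json

ALERT_LOG = "/var/log/snort/alert_json.txt"  # hypothetical location

def read_alerts(path: str):
    with open(path, "r", errors="replace") as handle:
        for line in handle:
            line = line.strip()
            if not line:
                continue
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or corrupt lines

if __name__ == "__main__":
    for alert in read_alerts(ALERT_LOG):
        # Print a few commonly configured fields, falling back gracefully
        # when a field is absent from this particular logger setup.
        print(alert.get("timestamp", "?"),
              alert.get("src_addr", "?"), "->", alert.get("dst_addr", "?"),
              alert.get("msg", alert))
```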