
Tech News


Slack was down for an hour yesterday, causing disruption during work hours

Fatema Patrawala
30 Jul 2019
2 min read
Yesterday, Slack reported an outage which started at 7:23 a.m. PDT and was fully resolved at 8:48 a.m. PDT. The Slack status page said that some people had issues sending messages while others couldn't access their channels at all. Slack said it was fully up and running again about an hour after the issues emerged.

https://twitter.com/SlackStatus/status/1155869112406437889

According to Business Insider, more than 2,000 users reported issues with Slack via Downdetector. Employees around the globe rely on Slack to communicate, organize tasks and share information. Downdetector's live outage map showed a concentration of reports in the United States and a few in Europe and Japan. Slack has not yet shared the cause of the disruption on its status page. Slack suffered a similar outage last month, caused by server unavailability.

Users took to Twitter, sending funny memes and gifs about how much they depend on Slack to communicate.

https://twitter.com/slabodnick/status/1155858811518930946
https://twitter.com/gbhorwood/status/1155864432527867905
https://twitter.com/envyvenus/status/1155857852625555456
https://twitter.com/nhetmalaluan/status/1155863456991436800

On Hacker News, users were annoyed and said that such issues have become quite common. One user commented, "This is becoming so often it's embarrassing really. The way it's handled in the app is also not ideal to say the least - only indication that something is wrong is that the text you are trying to send is greyed out."

Read more
- Why did Slack suffer an outage on Friday?
- How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
- Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services


PureScript npm installer’s infected dependencies prevented it from running successfully

Bhagyashree R
29 Jul 2019
4 min read
Earlier this month, Harry Garrood, a PureScript maintainer, found that PureScript's npm installer was infected with malicious code. Though the issue is now addressed, developers are advised to update the installer as soon as possible.

Which dependencies of the PureScript npm installer were infected

Garrood got suspicious when some developers started submitting issues on the GitHub repository of PureScript's npm installer saying that it got stuck during installation. He found that malicious code had been added to various dependencies of the installer, specifically ones maintained by @shinnn, the original author of the PureScript npm installer. It was first inserted into the load-from-cwd-or-npm npm package (version 3.0.2) and later into the rate-map npm package (version 1.0.3). @shinnn and the maintainers of rate-map and load-from-cwd-or-npm said that the malicious code was published by an attacker who gained access to their npm accounts.

The purpose of this code was to sabotage the PureScript npm installer to prevent the download from completing, halting the installer during the "Check if a prebuilt binary is provided for your platform" step. In the first attempt at this exploit, the load-from-cwd-or-npm package was infected so that any call to the loadFromCwdOrNpm() method would return a PassThrough stream instead of the expected package. In the second, more advanced attempt, the source file of rate-map was modified to prevent a download callback from firing.

The resolution and next steps

All the dependencies maintained by @shinnn have been dropped as of v0.2.5, and all previous versions of the PureScript installer are now marked as deprecated. If you have installed any version of the PureScript npm package prior to 0.13.2, you will still be downloading packages maintained by @shinnn, so it is recommended that you update the installer as soon as possible. npm has also removed load-from-cwd-or-npm@3.0.2 and rate-map@1.0.3 from the registry. Garrood further suggests, "If you want to be absolutely sure you do not have malicious code on your machine, you should delete your node_modules directories and your package-lock.json files, and set a lower bound of 0.13.2 on the purescript package."

This news triggered a discussion on Hacker News. While some think that community etiquette is to blame, others believe that npm packages are easy targets for such attacks. A user commented, "This is not the first time this year we see an npm issue, and it could have been much worse than this. All package managers, in general, create risks, but how the community etiquette evolves around package managers is just as important. Something is wrong with the latter here."

Another user added, "Part of the problem is the bounty for attacking NPM packages is high. You get a high profile exploit and lots of people talking about it, or you can even get some of your evil JS code running on thousands of sites on the back end or the front end. Compounded by the fact there is no decent base class library for JS like you'd get for .NET [0]. Want to do anything you could do by default with .NET BCL? Like open a url, save a file (with nice api) or parse some XML? Then npm i ... it is. And hope it doesn't pull in an exploit. As a mitigation I recommend people consider writing their own code (NIH) for simple stuff not npm i all the things.
[0] I'm comparing to .NET but same could be said of Java/Python/Ruby etc."

To know more in detail, check out Garrood's blog post.
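To make Garrood's remediation advice concrete, here is a minimal TypeScript sketch, assuming the semver npm package is installed; the helper names and the hard-coded list are ours, for illustration only:

```ts
// Illustrative only: checks a purescript version against the recommended
// 0.13.2 lower bound and flags the two dependency versions reported as infected.
import * as semver from "semver";

// The two package versions npm removed from the registry.
const INFECTED = new Set(["load-from-cwd-or-npm@3.0.2", "rate-map@1.0.3"]);

function meetsLowerBound(version: string): boolean {
  // Garrood's advice: set a lower bound of 0.13.2 on the purescript package.
  return semver.gte(version, "0.13.2");
}

function isInfected(name: string, version: string): boolean {
  return INFECTED.has(`${name}@${version}`);
}

console.log(meetsLowerBound("0.13.0"));        // false: predates the fix
console.log(meetsLowerBound("0.13.2"));        // true
console.log(isInfected("rate-map", "1.0.3"));  // true
```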
Read more
- Is the Npm 6.9.1 bug a symptom of the organization's cultural problems?
- Surprise NPM layoffs raise questions about the company culture
- npm Inc. announces npm Enterprise, the first management code registry for organizations


Go introduces generics and a new contracts draft design at GopherCon 2019

Vincy Davis
29 Jul 2019
3 min read
Update: On 31st July, Ian Lance Taylor posted a detailed explanation of the benefits and costs of including generics in Go. He also briefly discussed the draft design to convey how generics will be added to the language. Taylor says, "Our goal is to arrive at a design that makes it possible to write the kinds of generic code, without making the language too complex to use or making it not feel like Go anymore." Check out the Golang blog for more details.

On 26th July at GopherCon 2019, Ian Lance Taylor introduced generics in Go. He briefly explained the need for, implementation of, and benefits from generics for the Go language. Next, Taylor reviewed the Go contracts design draft, which proposes adding optional type parameters to types and functions.

https://twitter.com/ymotongpoo/status/1154957680651276288
https://twitter.com/lelenanam/status/1154819005925867520

Taylor also proposed guidelines for implementing the generic design in Go.

https://twitter.com/chimeracoder/status/1154794627548897280

In all three years of Go user surveys, the lack of generics has been listed as one of the three highest priorities for fixing the language. Taylor describes generic programming as enabling "the representation of functions and data structures in a generic form, with types factored out." Generic code is written using types that are specified later; an unspecified type is called a type parameter, and a type parameter is supported only where permitted by contracts. Generic code provides a strong basis for sharing code and building programs. It can be compiled using an interface-based approach, which saves time because the package is compiled only once; if generic code is compiled multiple times instead, it can carry a compile-time cost.

[Image: examples of functions that can be written generically in Go. Source: Sourcegraph]

Go already has two generic data structures built into the language, slices and maps. Generics would allow data structures to be written only once, put in a package, and reused. The contracts draft design states that a clear contract should be maintained between generic code and the code that calls it. With the new changes, users may find the language more complex; however, the Go team expects most users not to write generic code themselves, but to use packages written by others using generic code.

Developers are very happy that the Go generics proposal is simple to understand and lets users depend on already-written generic packages. This will save them time, as they need not rewrite type-specific functions in Go.

https://twitter.com/lizrice/status/1154802013982449666
https://twitter.com/protolambda/status/1155286562659282952
https://twitter.com/arschles/status/1154793543149375488
https://twitter.com/YvanDaSilva/status/1155432594818969600
https://twitter.com/mickael/status/1154799370610466816

Users have also admired the new contracts design draft by the Go team.

https://twitter.com/miyagawa/status/1154810546002153473
https://twitter.com/t_colgate/status/1155380984671551488
https://twitter.com/francesc/status/1154796941227646976

Head over to the Google proposal page for more details on the new contracts draft design.
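The Go syntax for type parameters was still a draft in flux at the time of the talk, so as a language-neutral illustration of the core idea, here is the same pattern sketched in TypeScript; the function and parameter names are ours, and the caller-supplied comparator stands in for the role a contract plays in the Go draft:

```ts
// T is a type parameter: the element type is supplied by the caller rather
// than fixed in the function, so one implementation serves many types. The
// `less` comparator spells out the operation the generic code may rely on.
function minOf<T>(values: T[], less: (a: T, b: T) => boolean): T {
  if (values.length === 0) throw new Error("minOf requires a non-empty array");
  let best = values[0];
  for (const v of values.slice(1)) {
    if (less(v, best)) best = v;
  }
  return best;
}

console.log(minOf([3, 1, 2], (a, b) => a < b));          // 1
console.log(minOf(["pear", "apple"], (a, b) => a < b));  // "apple"
```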
Read more
- Ian Lance Taylor, Golang team member, adds another perspective to Go being Google's language
- Is Golang truly community driven and does it really matter?
- Go User Survey 2018 results: Golang goes from strength to strength, as more engineers than ever are using it at work


Mozilla releases WebThings Gateway 0.9 experimental builds targeting Turris Omnia and Raspberry Pi 4

Bhagyashree R
29 Jul 2019
4 min read
In April, the Mozilla IoT team relaunched Project Things as "WebThings", with its two components: WebThings Gateway and WebThings Framework. WebThings is an open-source implementation of W3C's Web of Things standard for monitoring and controlling connected devices on the web. On Friday, the team announced the release of WebThings Gateway 0.9 and the availability of its first experimental builds for the Turris Omnia router. This release is also the first version of WebThings Gateway to support the new Raspberry Pi 4. Along with that, the team released WebThings Framework 0.12.

W3C's Web of Things standard

The Internet of Things (IoT) has a lot of potential, but it suffers from a lack of interoperability across platforms. The Web of Things aims to solve this by building a decentralized IoT that uses the web as its application layer. It provides mechanisms to formally describe IoT interfaces so that IoT devices and services can interact with each other, independent of their underlying implementation. To connect real-world things to the web, each thing is assigned a URI to make it linkable and discoverable. The standard is currently going through the W3C standardization process.

Updates in WebThings Gateway 0.9 and WebThings Framework 0.12

WebThings Gateway is a software distribution for smart home gateways that allows users to monitor and control their smart home devices over the web, without a middleman. Among the protocols it supports are HomeKit, ZigBee, Thread, MQTT, Weave, and AMQP; supported languages include JavaScript (Node.js), Python, Rust, Java, and C++.

The experimental builds of WebThings Gateway 0.9 are based on OpenWrt, a Linux operating system for embedded devices. They come with a new first-time setup for configuring the gateway as a router and Wi-Fi access point itself, instead of connecting to an existing Wi-Fi network.

[Image source: Mozilla]

However, Mozilla noted that the router configurations are still pretty basic and not yet ready to replace your existing wireless router. "This is just our first step along the path to creating a full software distribution for wireless routers," reads the announcement. We can expect support for other wireless routers and router developer boards in the near future.

This version ships with a new type of add-on called notifier add-ons. In previous gateway versions, push notifications were the only way of notifying users of an event, but that mechanism is not supported by all browsers and is not always the most convenient option. As a solution, Mozilla came up with notifier add-ons, with which you can create a set of "outlets" that act as outputs for a defined rule. For instance, you can set up a rule to get an SMS or an email whenever any motion is detected in your home. You can also configure a notifier with a title, a message, and a priority level.

[Image source: Mozilla]

WebThings Gateway 0.9 and WebThings Framework 0.12 also bring a few changes to Thing Descriptions to align them more closely with the latest W3C drafts. A Thing Description provides a vocabulary for describing physical devices connected to the web in a machine-readable format with a default JSON encoding. The "name" member is now changed to "title", and the gateway exposes some experimental new properties of the Thing Descriptions. To know more, check out Mozilla's official announcement. To get started, head over to its GitHub repository.
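As a rough illustration of that renaming, here is what a minimal Thing Description might look like after the change, written as a TypeScript object literal; the device and its property are invented for this sketch, so consult the W3C Web of Things drafts for the authoritative vocabulary:

```ts
// A hypothetical Thing Description for a motion sensor. The top-level member
// is now "title" (formerly "name"); the rest follows the general shape of a
// Thing Description with its default JSON encoding.
const motionSensor = {
  title: "Hallway Motion Sensor", // formerly the "name" member
  "@type": ["MotionSensor"],
  properties: {
    motion: {
      type: "boolean",
      description: "Whether motion is currently detected",
      links: [{ href: "/things/hallway-motion/properties/motion" }],
    },
  },
};

console.log(JSON.stringify(motionSensor, null, 2));
```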
Read more
- Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
- Mozilla developers have built BugBug which uses machine learning to triage Firefox bugs
- Mozilla re-launches Project Things as WebThings, an open platform for monitoring and controlling devices


Amazon Transcribe Streaming announces support for WebSockets

Savia Lobo
29 Jul 2019
3 min read
Last week, Amazon announced that its automatic speech recognition (ASR) service, Amazon Transcribe, now supports WebSockets. According to Amazon, "WebSocket support opens Amazon Transcribe Streaming up to a wider audience and makes integrations easier for customers that might have existing WebSocket-based integrations or knowledge".

Amazon Transcribe allows developers to easily add speech-to-text capability to their applications. Amazon announced the general availability of Amazon Transcribe at the AWS San Francisco Summit in 2018. With the Amazon Transcribe API, users can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech; real-time transcripts from a live audio stream are also possible. Until now, the Amazon Transcribe Streaming API has been available using HTTP/2 streaming. The new WebSockets support adds another integration option for bringing real-time voice capabilities to projects built with Transcribe.

What are WebSockets?

WebSockets are a protocol built atop TCP, like HTTP. HTTP is excellent for short-lived requests, but it does not handle persistent, real-time communications well. That is why the first Amazon Transcribe Streaming API used HTTP/2 streams, which solve many of the issues HTTP had with real-time communications. Amazon states that while "an HTTP connection is normally closed at the end of the message, a WebSocket connection remains open". With this advantage, messages can be sent bi-directionally with no bandwidth or latency added by handshaking and negotiating a connection. WebSocket connections are full-duplex, meaning the server and client can both transmit data at the same time. WebSockets were also designed "for cross-domain usage, so there's no messing around with cross-origin resource sharing (CORS) as there is with HTTP".

Amazon Transcribe Streaming using WebSockets

When the WebSocket protocol is used to stream audio, Amazon Transcribe transcribes the stream in real time. The user encodes the audio with event stream encoding, and Amazon Transcribe responds with a JSON structure, also encoded using event stream encoding. The key components of a WebSocket request to Amazon Transcribe are:

- Creating a pre-signed URL to access Amazon Transcribe.
- Creating binary WebSocket frames containing event-stream-encoded audio data.
- Handling WebSocket frames in the response.

The languages Amazon Transcribe currently supports for real-time transcription are British English (en-GB), US English (en-US), French (fr-FR), Canadian French (fr-CA), and US Spanish (es-US). To know more about the WebSockets API in detail, visit Amazon's official post.
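Here is a minimal TypeScript sketch of that three-step flow, with the hard parts (SigV4 pre-signing and event stream encoding/decoding) left as declared stubs since Amazon's post covers their details; the stub names are ours, not Amazon's:

```ts
// Stubs for the pieces covered in Amazon's post (assumptions, not real APIs):
declare function createPresignedUrl(): string; // SigV4-signed wss:// URL for Transcribe
declare function encodeAudioEvent(chunk: ArrayBuffer): ArrayBuffer; // event stream encoding
declare function decodeTranscriptEvent(frame: ArrayBuffer): { transcript?: string };

// 1. Open a WebSocket to the pre-signed URL.
const socket = new WebSocket(createPresignedUrl());
socket.binaryType = "arraybuffer";

// 2. Send binary frames of event-stream-encoded audio.
function sendAudioChunk(chunk: ArrayBuffer): void {
  socket.send(encodeAudioEvent(chunk));
}

// 3. Handle the response frames, which carry event-stream-encoded JSON.
socket.onmessage = (event: MessageEvent) => {
  const result = decodeTranscriptEvent(event.data as ArrayBuffer);
  if (result.transcript) {
    console.log("Partial transcript:", result.transcript);
  }
};
```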
Read more
- Understanding WebSockets and Server-sent Events in Detail
- Implementing a non-blocking cross-service communication with WebClient [Tutorial]
- Introducing Kweb: A Kotlin library for building rich web applications


‘City Power Johannesburg’ hit by a ransomware attack that encrypted all its databases, applications and network

Savia Lobo
26 Jul 2019
4 min read
Yesterday, a ransomware attack hit City Power Johannesburg, the electricity distributor for parts of Johannesburg, South Africa's largest city. City Power notified citizens via Twitter that the virus had encrypted all its databases, applications and network, and that its ICT team was trying to fix the issue.

https://twitter.com/CityPowerJhb/status/1154277777950093313

Due to the attack, City Power's website was preventing users from lodging complaints or purchasing pre-paid electricity.

https://twitter.com/CityPowerJhb/status/1154278402003804160

The city municipality, which owns City Power, tweeted that the attack also "affected our response time to logged calls as some of the internal systems to dispatch and order material have been slowed by the impact". Chris Baraniuk, a freelance science and technology journalist, tweeted, "The firm tells me more than 250,000 people would have had trouble paying for pre-paid electricity, potentially leaving them cut off". City Power hasn't yet released information on the scale of the impact.

The ransomware attack occurred amidst existing power outages

According to iAfrikan, the ransomware attack struck while the city was "experiencing a strain on the power grid due to increased use of electricity during Johannesburg's recent cold winter weather". The strain on the grid has resulted in multiple power outages in different parts of the city. According to Bleeping Computer, Business Insider South Africa reported that an automated voice message on City Power's phone helpline said, "Dear customers, please note that we are currently experiencing a problem with our prepaid vending system. We are working on this issue and hope to have it resolved by one o'clock today (25 July 2019)".

The city municipality tweeted yesterday that "most of the IT applications and networks that were affected by the cyberattack have been cleaned up and restored." The municipality apologized for the inconvenience and assured customers that none of their details were compromised.

https://twitter.com/CityPowerJhb/status/1154626973056012288

Many users have raised requests tagging the municipality and the electricity distribution board on Twitter. City Power replied, "Technicians will be dispatched to investigate and work on restorations". Later it tweeted asking them to cancel their requests, as power had been restored.

https://twitter.com/GregKee/status/1154397914191540225

A tweet today at 10:47 am (SAST) from City Power says, "Electricity supply points to be treated as live at all times as power can be restored anytime. City Power regrets any inconvenience that may be caused by the interruption".

https://twitter.com/CityPowerJhb/status/1154674533367988224

Luckily, City Power Johannesburg escaped paying a ransom

A ransomware attack blocks a company's or individual's systems until a large ransom, in credit or in Bitcoin, is paid to the attackers to release them. According to Business Insider South Africa, attackers usually turn the information in the databases into "gibberish, intelligible only to those with the right encryption key. Attackers then offer to sell that key to the victim, allowing for the swift reversal of the damage".

There have been many such instances this year, and Johannesburg was lucky to escape paying a huge ransom. Early this month, a Ryuk ransomware attack encrypted Lake City's IT network in the United States, and officials had to approve a payment of nearly $500,000 to restore operations. Similarly, Jackson County officials in Georgia, USA, paid $400,000 to cyber-criminals to resolve a ransomware infection, and La Porte County, Indiana, paid $130,000 to recover data from its encrypted computer systems.

According to The Next Web, the "ever-growing list of ransomware attacks has prompted the United States Conference of Mayors to rule that they would not pay ransomware demands moving forward." Jim Trainor, who formerly led the Cyber Division at FBI Headquarters and is now a senior vice president in the Cyber Solutions Group at risk management and insurance brokerage firm Aon, told CSO, "I would highly encourage a victim of a ransomware attack to work with the FBI and report the incident". The FBI "strongly encourages businesses to contact their local FBI field office upon discovery of a ransomware infection and to file a detailed complaint at www.ic3.gov". Maintaining good security habits is the best way to deal with ransomware attacks, according to the FBI: "The best approach is to focus on defense-in-depth and have several layers of security as there is no single method to prevent compromise or exploitation," they tell CSO.

To know more about the City Power Johannesburg ransomware attack in detail, head over to Bleeping Computer's coverage.

Read more
- Microsoft releases security updates: a "wormable" threat similar to WannaCry ransomware discovered
- Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
- Anatomy of a Crypto Ransomware

Alibaba’s chipmaker launches open source RISC-V based ‘XuanTie 910 processor’ for 5G, AI, IoT and self-driving applications

Vincy Davis
26 Jul 2019
4 min read
Alibaba's chip subsidiary Pingtouge, launched in 2018, made a major announcement yesterday: it is launching its first product, the XuanTie 910 processor, built on the open-source RISC-V instruction set architecture. The XuanTie 910 is expected to reduce the costs of related chip production by more than 50%, reports Caixin Global.

The XuanTie 910, also known as T-Head, will soon be available in the market for commercial use. Pingtouge will also release some of the XuanTie 910's code on GitHub for free to help the global developer community create innovative applications. No release dates have been revealed yet.

What are the properties of the XuanTie 910 processor?

The XuanTie 910 is a 16-core processor that achieves 7.1 CoreMark/MHz, with a main frequency of up to 2.5GHz. It can be used to build high-end edge-based microcontrollers (MCUs), CPUs, and systems-on-chip (SoC), for applications like 5G telecommunication, artificial intelligence (AI), and autonomous driving. The XuanTie 910 gives a 40% performance increase over mainstream RISC-V processors, along with a 20% increase in instructions. According to Synced, the XuanTie 910 has two unconventional properties:

- It has a 2-stage pipelined out-of-order triple-issue design with two memory accesses per cycle.
- Its computing, storage and multi-core capabilities are superior due to an extended instruction set: the XuanTie 910 adds more than 50 instructions to RISC-V.

Last month, The Verge reported that an internal ARM memo instructed staff to stop working with Huawei. With the US blacklisting China's telecom giant Huawei and banning American companies from doing business with it, ARM appears to be following the American strategy: although ARM is based in the U.K. and owned by Japan's SoftBank Group, it does have "US origin technology", as claimed in the internal memo. This may be one of the reasons why Alibaba is increasing its efforts in developing RISC-V, so that Chinese tech companies can become independent of Western technologies. A XuanTie 910 processor can assure Chinese companies of a stable future, with no fear of it being banned by Western governments. Other than being cost-effective, RISC-V also has advantages like more flexibility compared to ARM. With complex licence policies and high power requirements, it is going to be a challenge for ARM to compete against RISC-V and MIPS (Microprocessor without Interlocked Pipeline Stages) processors.

A Hacker News user comments, "I feel like we (USA) are forcing China on a path that will make them more competitive long term." Another user says, "China is going to be key here. It's not just a normal market - China may see this as essential to its ability to develop its technology. It's Made in China 2025 policy. That's taken on new urgency as the west has started cutting China off from western tech - so it may be normal companies wanting some insurance in case intel / arm cut them off (trade disputes etc) AND the govt itself wanting to product its industrial base from cutoff during trade disputes"

Some users also feel that it is technology that wins when two big economies keep bringing out innovative technologies. A comment on Hacker News reads, "Good to see development from any country. Obviously they have enough reason to do it. Just consider sanctions. They also have to protect their own market. Anyone that can afford it, should do it. Ultimately it is a good thing from technology perspective."

Not all US tech companies are wary of partnering with Chinese counterparts. Two days ago, Salesforce, an American cloud-based software company, announced a strategic partnership with Alibaba that aims to help Salesforce localize its products in mainland China, Hong Kong, Macau, and Taiwan. This will enable Salesforce customers to market, sell, and operate through services like Alibaba Cloud and Tmall.

Read more
- Winnti Malware: Chinese hacker group attacks major German corporations for years, German public media investigation reveals
- The US Justice Department opens a broad antitrust review case against tech giants
- Salesforce is buying Tableau in a $15.7 billion all-stock deal


CERN plans to replace Microsoft-based programs with affordable open-source software

Amrata Joshi
26 Jul 2019
3 min read
Last month, CERN, one of the world's leading scientific research organizations, announced plans to stop using Microsoft-based programs and look for affordable open-source software instead. For the past 20 years, CERN has been using Microsoft products at a discounted "academic institution" rate. Things changed in March, when its previous contract ended and Microsoft revoked CERN's academic status; per a CERN blog post, licensing costs have increased under the new contract.

CERN is now focusing on its year-old project, the Microsoft Alternatives project (MAlt), and plans to migrate to open-source software. MAlt's principles of engagement are: delivering the same service to every category of CERN personnel, avoiding vendor lock-in to decrease risk and dependency, keeping hands on the data, and addressing the common use cases. The official post reads, "The Microsoft Alternatives project (MAlt) started a year ago to mitigate anticipated software license fee increases. MAlt's objective is to put us back in control using open software. It is now time to present more widely this project and to explain how it will shape our computing environment."

https://twitter.com/Razican/status/1138818892825055233

This summer, MAlt will start with a pilot mail service for the IT department and volunteers. CERN plans to migrate all of its staff to the new mail service and also move Skype for Business clients and analogue phones to a softphone pilot.

Microsoft agreed to increase CERN's fees over a ten-year period so that the institution could adapt, but CERN still found the costs unsustainable. Emmanuel Ormancey, a CERN system analyst, wrote in a blog post, "Although CERN has negotiated a ramp-up profile over ten years to give the necessary time to adapt, such costs are not sustainable."

Considering CERN's collaborative nature and its wide community, a large number of licenses are required to deliver services to everyone, and per-product costs become unaffordable when traditional per-user business models are applied. Commercial software licenses with a per-user fee structure were therefore unaffordable for CERN, and many other public research institutions have been affected by this new licensing structure as well.

Some users still think Microsoft was the better choice and argue that it will be difficult for CERN to migrate. A user commented on Hacker News, "Migrating away from Microsoft won't be easy. Despite high licensing costs, Windows, AD and Exchange are still great solutions with millions of people familiar with them, good documentation and support." Others are happy about CERN's decision to support open source. Another user commented, "It is awesome to see how CERN is supporting open source. They have been long time users of our open core GitLab with 12,000 users https://about.gitlab.com/customers/cern/"

To know more about this news, check out the official post.

Read more
- Softbank announces a second AI-focused Vision Fund worth $108 billion with Microsoft, Apple as major investors
- Why are experts worried about Microsoft's billion dollar bet in OpenAI's AGI pipe dream?
- Ex-Microsoft employee arrested for stealing over $10M from store credits using a test account


Softbank announces a second AI-focused Vision Fund worth $108 billion with Microsoft, Apple as major investors

Sugandha Lahoti
26 Jul 2019
3 min read
SoftBank has announced a second Vision Fund with a focus on accelerating AI innovation. This second investment fund, totalling $108 billion, is backed by investors including Apple, Foxconn, Microsoft, and Standard Chartered Bank. SoftBank Group's own investment in the fund will be $38 billion, and the group is reportedly still in discussions with other participants, so the total amount of the fund is expected to increase.

The first SoftBank Vision Fund, worth $97 billion, was mainly backed by the governments of Saudi Arabia and Abu Dhabi and was used for investments in Uber, WeWork, Grab, Grofers, Paytm, and Oyo. Notably, the second Vision Fund does not currently include any participation from the Saudi Arabian government. Following the revelation of Saudi Arabia's role in the murder of journalist Jamal Khashoggi, SoftBank received a lot of backlash for taking funding from Saudi Arabia. Speaking at a quarterly earnings call in early November 2018, Masayoshi Son, CEO of SoftBank, condemned Khashoggi's murder, describing it as an "act against humanity and also journalism and free speech… a horrible and deeply regrettable act." He said that the killing of Khashoggi could have an impact on the bank's Vision Fund.

The Wall Street Journal reported earlier this week, citing unnamed sources, that Saudi Arabia and Abu Dhabi had indicated they were likely to invest again, but that Riyadh's funding would be less than the $45 billion it put into the first fund. The WSJ also reported that, with Microsoft a major investor in this fund, SoftBank executives told Microsoft they would encourage the fund's roughly 75 companies to shift from Amazon's cloud platform to Microsoft's. Quite an interesting way to capture the market!

https://twitter.com/KateClarkTweets/status/1154128555728494593
https://twitter.com/seyitaylor/status/1154150531897679873

According to the Financial Times, SoftBank said it had signed memoranda of understanding (MoUs) to invest in the fund with an unnamed Taiwanese investor and seven Japanese financial groups, including the top three banks (Mizuho, Sumitomo Mitsui Banking Corporation and MUFG Bank) as well as Dai-ichi Life Insurance and Daiwa Securities.

Read the official report here.

Read more
- Why are experts worried about Microsoft's billion dollar bet in OpenAI's AGI pipe dream?
- SoftBank CEO says Khashoggi murder could have an impact on Saudi-backed $100 billion Vision Fund pouring money into Silicon Valley
- Ericsson's expired software certificate issue causes massive outages in UK's O2 and Japan's SoftBank network services


GitHub has blocked an Iranian software developer's account

Richard Gall
25 Jul 2019
3 min read
GitHub's importance to software developers can't be overstated. In the space of a decade it has become central to millions of people's professional lives. To have it taken away, then, must be incredibly hard: not only does it cut you off from your work, it also cuts you off from your identity as a developer. But that's what appears to have happened today to Hamed Saeedi, an Iranian software developer.

Writing on Medium, Saeedi revealed that he received an email from GitHub today explaining that his account has been restricted "due to U.S. trade controls law restrictions." As Saeedi notes, he is not a paying GitHub customer and only uses the platform's free services, which makes the fact that he was blocked surprising.

Does GitHub really think a developer is developing dangerous software in a public repo?

Digging into the terms and conditions around U.S. trade laws, Saeedi found a paragraph stating that the platform cannot "...be used for services prohibited under applicable export control laws, including purposes related to the development, production, or use of nuclear, biological, or chemical weapons or long range missiles or unmanned aerial vehicles." The implication, in Saeedi's reading at least, is that he is using GitHub for precisely that.

The impact of this move on Saeedi is massive. The incident has echoes of Slack terminating Iranian users' accounts at the end of 2018, but, as one Twitter user noted, this is even more critical because "GitHub is hosting all the efforts of a programmer/engineer."

How have GitHub and the developer community responded?

GitHub hasn't, as of writing, responded publicly to the incident. However, it would be reasonable to assume that the organization would lean heavily on existing trade sanctions against Iran as an explanation for its actions. The ethical and moral implications notwithstanding, it's a move that would protect the company; given increased scrutiny of the geopolitical impact of technology, and current Iran/U.S. tensions, perhaps it isn't that surprising. But it has received condemnation from a number of developers on Twitter. One commented on the need to break up GitHub's monopoly, while another suggested that the incident emphasised the importance of #deletegithub, a small movement that sees GitHub (and other ostensibly 'free' software) as compromised and failing to live up to the ideals of free and open source software.

Mikhail Novikov, a developer on the GatsbyJS team, had words of solidarity for Saeedi, reading the situation in the context of the U.S. President's rhetoric towards Iran:

https://twitter.com/freiksenet/status/1154297497290006528?s=20

It appears that other Iranian users have been affected in the same way; however, it remains unclear to what extent GitHub has been restricting Iranian accounts.

Lyft releases an autonomous driving dataset “Level 5” and sponsors research competition

Amrata Joshi
25 Jul 2019
3 min read
This week, the team at Lyft released a subset of their autonomous driving data, the Level 5 Dataset, and announced that it will sponsor a research competition. The Level 5 Dataset includes over 55,000 human-labelled 3D annotated frames, a drivable surface map, and an HD spatial semantic map for contextualizing the data.

The team has been perfecting its hardware and autonomy stack for the last two years. The sensor hardware must be built and properly calibrated, a localization stack is needed, and an HD semantic map must be created; only then is it possible to unlock higher-level functionality like 3D perception, prediction, and planning. The dataset allows a broad cross-section of researchers to contribute to downstream research in self-driving technology.

The team is iterating on the third generation of Lyft's self-driving car and has already patented a new sensor array and a proprietary ultra-high dynamic range (100+ dB) camera. Since HD mapping is crucial to autonomous vehicles, the teams in Munich and Palo Alto have been working on high-quality lidar-based geometric maps and high-definition semantic maps that are used by the autonomy stack. The team is also working on high-quality, cost-effective geometric maps that would use only a camera phone to capture the source data.

Lyft's autonomous platform team has been deploying partner vehicles on the Lyft network. Along with partner Aptiv, the team has provided over 50,000 self-driving rides to Lyft passengers in Las Vegas, making it the largest paid commercial self-driving service in operation. Waymo vehicles are also now available on the Lyft network in Arizona, expanding the opportunity for passengers to experience self-driving rides.

To advance self-driving vehicles, the team will also launch a competition for training algorithms on the dataset. The dataset makes it possible for researchers to work on problems such as prediction of agents over time, scene depth estimation from cameras with lidar as ground truth, and many more. The blog post reads, "We have segmented this dataset into training, validation, and testing sets — we will release the validation and testing sets once the competition opens." It further reads, "There will be $25,000 in prizes, and we'll be flying the top researchers to the NeurIPS Conference in December, as well as allowing the winners to interview with our team. Stay tuned for specific details of the competition!"

To know more about this news, check out the Medium post.
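As a purely hypothetical sketch of working with data of this kind (the record shape below is invented; the actual dataset ships with its own format and tooling), here is how one might tally 3D annotations per object class in TypeScript:

```ts
// Invented record shape for a single human-labelled 3D annotation.
interface Annotation3D {
  frameId: string;
  label: string;                    // e.g. "car", "pedestrian", "cyclist"
  center: [number, number, number]; // box center in meters
  size: [number, number, number];   // width, length, height
  yaw: number;                      // heading in radians
}

// Count annotations per class, a first step in checking dataset balance.
function countByLabel(annotations: Annotation3D[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const a of annotations) {
    counts.set(a.label, (counts.get(a.label) ?? 0) + 1);
  }
  return counts;
}

const sample: Annotation3D[] = [
  { frameId: "f0", label: "car", center: [1, 2, 0], size: [2, 4.5, 1.6], yaw: 0 },
  { frameId: "f0", label: "pedestrian", center: [5, 1, 0], size: [0.6, 0.6, 1.8], yaw: 1.2 },
];
console.log(countByLabel(sample)); // Map { "car" => 1, "pedestrian" => 1 }
```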
Read more
- Lyft announces Envoy Mobile, an iOS and Android client network library for mobile application networking
- Uber and Lyft drivers go on strike a day before Uber IPO roll-out
- Lyft introduces Amundsen, a data discovery and metadata engine for its researchers and data scientists


VLC media player affected by a major vulnerability in a third-party library, libebml; updating to the latest version may help

Savia Lobo
25 Jul 2019
4 min read
A few days ago, the German security agency CERT-Bund revealed it had found a remote code execution (RCE) flaw in the popular open-source VLC media player that could allow hackers to install, modify, or run any software on a victim's device without their authority, and could also be used to disclose files on the host system. The vulnerability (listed as CVE-2019-13615) was first announced by WinFuture and received a vulnerability score of 9.8, making it a "critical" problem.

According to a release by CERT-Bund, "A remote, anonymous attacker can exploit a vulnerability in VLC to execute arbitrary code, create a denial of service state, disclose information, or manipulate files." According to Threat Post, "Specifically, VLC media player's heap-based buffer over-read vulnerability exists in mkv::demux_sys_t::FreeUnused() in the media player's modules/demux/mkv/demux.cpp function when called from mkv::Open in modules/demux/mkv/mkv.cpp."

VLC is not vulnerable, VideoLAN says

Yesterday, VideoLAN, the makers of VLC, tweeted that VLC is not vulnerable. They said "the issue is in a 3rd party library, called libebml, which was fixed more than 16 months ago. VLC since version 3.0.3 has the correct version shipped, and @MITREcorp did not even check their claim."

https://twitter.com/videolan/status/1153963312981389312

VideoLAN said a reporter opened a bug on their public bug tracker, which is outside their reporting policy; the issue should have been mailed privately to the security alias. "We could not, of course, reproduce the issue, and tried to contact the security researcher, in private", VideoLAN tweeted. VideoLAN said the reporter was using Ubuntu 18.04, an older version of Ubuntu, and "clearly has not all the updated libraries. But did not answer our questions."

VideoLAN says it wasn't contacted before the CVE was issued

VideoLAN is unhappy that MITRE Corp did not approach them before issuing a CVE for the VLC vulnerability, which is a direct violation of MITRE's own policies.

[Image: MITRE's CVE assignment policy. Source: cve.mitre.org]

https://twitter.com/videolan/status/1153965979988348928

When VideoLAN complained and asked if they could manage their own CVEs (like another CNA), "we had no answer and @usnistgov NVD told us that they basically couldn't do anything for us, not even fixing the wrong information", they tweeted.

https://twitter.com/videolan/status/1153965981536010240

VideoLAN said even CERT-Bund did not contact them for clarification, adding, "So, when @certbund decided to do their "disclosure", all the media jumped in, without checking anything nor contacting us."

https://twitter.com/videolan/status/1153971024297431047

The VLC CVE on the National Vulnerability Database has now been updated. NVD has downgraded the severity of the issue from a base score of 9.8 (critical) to 5.5 (medium), and the changelog now specifies that the "Victim must voluntarily interact with attack mechanism."

Dan Kaminsky, an American security researcher, tweeted, "A couple of things, though: 1) Ubuntu 18.04 is not some ancient version 2) Playing videos with VLC is both a first-class user demand and a major attack surface, given the realities of content sourcing. If Ubuntu can't secure VLC dependencies, VLC probably has to ship local libs."

https://twitter.com/dakami/status/1154118377197035520

Last month, VideoLAN fixed two high-severity bugs in their security update for the VLC media player. The update included fixes for 33 vulnerabilities in total, of which two were marked critical, 21 medium and 10 low. Jean-Baptiste Kempf, president of VideoLAN and an open-source developer, wrote, "This high number of security issues is due to the sponsoring of a bug bounty program funded by the European Commission, during the Free and Open Source Software Audit (FOSSA) program".

To know more about this news in detail, you can read WinFuture's blog post.

Read more
- The EU Bounty Program enabled in VLC 3.0.7 release, this version fixed the most number of security issues
- A zero-day vulnerability on Mac Zoom Client allows hackers to enable users' camera, leaving 750k companies exposed
- VLC's updating mechanism still uses HTTP over HTTPS


The US Justice Department opens a broad antitrust review case against tech giants

Fatema Patrawala
25 Jul 2019
6 min read
The U.S. Justice Department is opening a broad antitrust review into whether dominant technology firms are unlawfully stifling competition, the Wall Street Journal reported yesterday. The review is geared toward examining the practices of online platforms that dominate internet search, social media and retail services, which includes Facebook, Google, Amazon and Apple, according to the report.

The move is the strongest by the DOJ so far against big tech, which faces increased scrutiny from both political parties because of the companies' expanded market power and the tremendous amount of consumer data they control. The review is designed to go above and beyond recent plans for scrutinizing the tech sector that were crafted by the Justice Department and the FTC.

DOJ will examine big tech's growth in size and reach

The Justice Department will examine issues including how the most dominant tech firms have grown in size and expanded their reach into additional businesses. It is also interested in how these companies have leveraged the powers that come with having very large networks of users, the department said. There is no defined end-goal for the review other than to understand whether there are antitrust problems that need addressing, officials said. The inquiry could eventually lead to more focused investigations of specific company conduct, they said.

The review also presents risks for the companies beyond whether antitrust issues are identified: the department won't ignore other company practices that may raise concerns about compliance with other laws, officials said. "Without the discipline of meaningful market-based competition, digital platforms may act in ways that are not responsive to consumer demands," Justice Department antitrust chief Makan Delrahim said in a statement. "The department's antitrust review will explore these important issues." At a broader level, the division will work in close coordination with Deputy Attorney General Jeffrey Rosen, the officials said.

DOJ hosted a meeting with critics of social media giants

The WSJ further mentioned that the department had recently hosted a private presentation where officials heard from critics of Facebook, who walked through their concerns about the social-media giant and advocated for its breakup. Tech and antitrust observers had believed issues related to Facebook's dominance would be handled by the FTC.

Both the FTC and the Justice Department have made clear that they view tech-sector competition issues as a priority. Under agreements brokered in recent months between Mr. Delrahim and FTC Chairman Joseph Simons, the Justice Department obtained clearance to proceed with a probe of whether Google has engaged in illegal monopolization tactics, as well as jurisdiction over Apple for similar issues. The FTC has already undertaken a lengthy consumer-protection investigation of Facebook's privacy practices, and imposed a $5 billion fine on Facebook, for which the company was already prepared.

Justice Department officials said those agreements weren't meant to be open-ended or all-encompassing. The department isn't trying to pre-empt the FTC's work, they said, and suggested the two agencies might explore different tech practices by the same company, as well as different legal theories for possible cases. Apart from the Justice Department and FTC scrutiny, a House antitrust subcommittee is also taking a broad look at potential anticompetitive conduct in the tech sector. Executives from Facebook, Google, Apple and Amazon all testified before the panel last week.

"I don't think big is necessarily bad, but I think a lot of people wonder how such huge behemoths that now exist in Silicon Valley have taken shape under the nose of the antitrust enforcers," Mr. Barr told senators. "You can win that place in the marketplace without violating the antitrust laws, but I want to find out more about that dynamic."

The market and community reactions

Hours after the news broke on Tuesday, shares of all four companies were down: Apple by about 0.4%, Amazon by about 1.13%, Alphabet by about 0.96%, and Facebook by about 1.65%, according to a CNBC report.

Rob Seamans, a professor at NYU Stern, put forth his views and proposed a few ideas on how lawmakers could regulate big tech. One is to break big tech up horizontally, dividing a big tech firm into two smaller but similar-looking firms. Another is a vertical break-up, in which the big tech firm's platform remains the same but it has to spin off anything that uses the platform; for example, Google's search platform would no longer be able to provide its own maps or other "edge services."

https://twitter.com/robseamans/status/1153849902629314562

On Hacker News, this development has gained significant attention, with discussions revolving around the question of whether the tech companies have made any real scientific progress. One comment reads, "I just finished the Eric Weinstein/Peter Thiel podcast, and came away mostly agreeing with their assessment that we've really stagnated when it comes to progressing scientifically. I definitely feel like there's this illusion of tech innovation coming from these big companies that suck up all the tech talent, but at the end of the day the best and brightest are working on optimizing ad clicks (FB, Goog) or getting people to buy crap (Amazon) or working on incremental hardware improvements (Apple). If anything, I would hope any outcome against big tech would level the playing field when it comes to attracting talent, and create an environment where working on true "moonshot" tech was not so risky."

Democratic presidential candidates like Elizabeth Warren have been calling for the breakup of companies like Google and Facebook since the start of the year, and a few Republicans have voiced concerns about whether tech companies disfavor conservative voices on their platforms. Last month, US regulators had already planned to probe Google on antitrust issues, with Facebook, Amazon and Apple also under legal scrutiny. This month, the EU Commission opened an antitrust case against Amazon on the grounds of violating EU competition rules under Article 101.

Read more
- EU Commission opens an antitrust case against Amazon on grounds of violating EU competition rules
- US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
- Facebook sets aside $5 billion in anticipation of an FTC penalty for its user data practices

Tesla reports a $408 million loss in its Q2 earnings call; CTO and co-founder, JB Straubel steps down

Sugandha Lahoti
25 Jul 2019
3 min read
Tesla had a disappointing Wednesday earnings call, reporting a $408 million loss. Tesla founder Elon Musk also announced that CTO JB Straubel is stepping down and will move to a senior advisor role. This marks the fourth in a series of high-profile exits over the past year, as Tesla is still struggling to prove it is profitable. Most recently, Steve MacManus, the former Vice President of Engineering at Tesla, joined Apple as a Senior Director. The combination of a worse-than-expected loss and losing a co-founder sent the stock plunging 11% in late trading after the announcement.

Straubel joined Tesla in March 2004 and became a member of the board. He initially served as principal engineer of drive systems and in May 2005 became head of technology. At Tesla, Straubel was responsible for overseeing the technical and engineering design of the vehicles, notably around batteries. He also took an active role in new technology evaluation, R&D, technical diligence review of key vendors and partners, IP, and systems validation testing. According to his company bio, he helped launch programs like the Supercharger network and the Tesla Energy business. In addition to his work at Tesla, Straubel was also on the Board of Directors of SolarCity.

"I'm not disappearing, and I just wanted to make sure that people understand that this was not some, you know, lack of confidence in the company or the team or anything like that," Straubel said. Elon Musk thanked Straubel for his time at Tesla on the Q2 earnings call on Wednesday: "I want to thank JB for his fundamental role in creating and building Tesla. If we hadn't had lunch in 2003, Tesla wouldn't exist, basically," Musk added.

https://twitter.com/nealboudette/status/1154162074391646208

"It's a significant transition for Tesla, as Straubel has been one of the most important members of Tesla management," Dan Levy, an analyst at Credit Suisse, wrote in a note to clients reported by Bloomberg. Drew Baglino, vice president of technology, will take over from Straubel as CTO.

As for the earnings report, experts say they are concerned. Tesla's adjusted net loss was $1.12 per share, worse than the $0.31 loss expected. The company's shares have dropped more than 20% so far this year, while the Standard & Poor's 500 index has surged by 20%. However, the overall loss of $408m was an improvement over the unexpectedly large loss of $702m reported in the first quarter. "Overall, a bad report that will inevitably lead to more questions about its ability to stabilize and turn a profit," said Clement Thibault, a senior analyst at financial markets platform Investing.com.

Tesla initially promised to be profitable in the third quarter of 2018 and has since pushed back that target multiple times. According to the earnings report, profit is now expected in the fourth quarter of 2019, with the current quarter to be break-even, as the company's focus is less on profit and more on volume growth, capacity expansion and cash generation.

https://twitter.com/TezzlaCFO/status/1154135552050028545

Read more
- Tesla Autonomy Day takeaways: Full Self-Driving computer, Robotaxis launching next year, and more
- Researchers successfully trick Tesla autopilot into driving into opposing traffic via "small stickers as interference patches on the ground"
- Elon Musk's Neuralink unveils a "sewing machine-like" robot to control computers via the brain


Developers should be in charge of application security: WhiteSource security report

Savia Lobo
24 Jul 2019
6 min read
Security these days is a major concern for all organizations dealing with user data. New apps are being developed daily, crunching user data to provide users with better services, great deals, discounts, and much more. Application security has become a top priority and needs attention at every stage of software development. Hence, over the years, software testing has shifted from testing just before release to testing during the early stages of the software development lifecycle (SDLC). This helps developers discover vulnerabilities early and tackle them with less effort.

A recent report from WhiteSource, an open-source security and license compliance management platform, highlights how developers should be in charge of application security and how organizations are investing heavily to produce secure code.

The development team should be in charge of software security

According to the WhiteSource report, day-to-day operational responsibility for application security lies largely on the software development side, with 71% of respondents stating that ownership sits with DevOps teams, development team leaders, or the developers themselves. This is because fixing a vulnerability in the development or coding phase produces better-secured applications, and when vulnerabilities are handled by development teams, security teams can focus on bigger security concerns for the organization as a whole. Compared to the earlier waterfall method, where software testing was done just before release, the DevOps approach moves testing to earlier phases to avoid bottlenecks at later stages. The report says that 36% of organizations have moved past the initial implementation of testing at the build stage and are starting to integrate security testing tools at earlier points in the SDLC, like the IDE and their repositories.

How are organizations investing in secure code?

It is possible for a vulnerability to escape the final test rounds and affect users after release. This can bring customer dissatisfaction, bad reviews of the application, customer loss, and other disadvantages. Organizations are therefore doing their best to resolve vulnerabilities through testing tools, training, and time spent handling security vulnerabilities, the WhiteSource report says. "Along with training, developers are tooling up with a range of application security testing (AST) technologies with 68% of developers reporting using at least one of the following technologies: SAST, DAST, SCA, IAST or RASP," the report says. For organizations working with DevOps, the question is not whether they should integrate automated tools into their pipeline, but which ones they should adopt first.

- Static Application Security Testing (SAST), also known as "white-box testing", lets developers find security vulnerabilities in the application source code earlier in the SDLC.
- Dynamic Application Security Testing (DAST), also known as "black-box testing", helps find security vulnerabilities and weaknesses in a running application (typically web apps).
- Interactive Application Security Testing (IAST) combines static and dynamic techniques to improve testing. According to Veracode, IAST analyzes code for security vulnerabilities while the app is run by an automated test, a human tester, or any activity "interacting" with the application functionality.
- Run-time Application Security Protection (RASP) lets an app run continuous security checks on itself and respond to live attacks by terminating an attacker's session and alerting defenders to the attack.

Security in the development phase, an added task for developers

With the help of such technologies, issues can be flagged before and after production, adding visibility to the application's security and enabling teams to be proactive. However, the issues are constantly thrown at developers, who have to research and remediate them. "It is unreasonable to ask developers to handle all security alerts, especially as most application security tools are developed for security teams focused on coverage (detecting all potential issues), rather than accuracy and prioritization," the WhiteSource team notes. The report states, "Developers claim that they are spending a considerable amount of their time on dealing with remediations, with 42% reporting that they spend between 2 to 12 hours a month on these tasks, while another 33% say that they spend 12 to 36 hours on them."

How can developers ensure security when choosing their open-source components?

Developers said they check for known vulnerabilities when choosing an open-source component, ensuring "their open source components are secure from the earliest stages of development". In the survey, respondents from North America (the U.S. and Canada) showed a higher level of awareness about checking the vulnerability status of the open-source components they were choosing, while for European respondents, open source compliance rated higher among their priorities. Asked how their organization detects vulnerable open source components in their applications:

- 34% said they have tools that continuously detect open source vulnerabilities in their applications
- 28% use a code scanner to review software once or twice a year
- 14% manually check for open source vulnerabilities, but only the high-severity ones
- 24% said the security team notifies them

Once developers discover a known vulnerability in their product, they need a quick and effective path to remediating it. Most of them turn first to GitHub's Security Alerts tool for help, WhiteSource reports, alongside other free security tools in the market that offer similar help.

Detection vs remediation of vulnerabilities

Developers take a proactive approach to detecting vulnerabilities, but the same isn't true of remediation: "25% of developers only report on detected vulnerabilities and 53% are taking actions only in specific cases," the report states. "Developers are investing many hours in research and remediation so why aren't we seeing more developers taking action? The reason probably lies in the fact that most application security tools' main goal is to detect, alert and report." We cannot just blame developers when a vulnerability is found; they also need equally good tools that speed up vulnerability remediation, while manual processes are time-consuming and demand a particular skill set, which poses its own challenges.

WhiteSource concludes that next-generation application security tools will be those that are developer-focused, closing the loop from detection of an issue all the way through validation, research, and remediation.
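One way to automate the "continuously detect open source vulnerabilities" approach the report describes is to wire a dependency audit into CI. Below is a minimal TypeScript sketch that runs npm audit and fails the build on high-severity findings; npm audit --json is a real npm command, but its JSON report shape has varied across npm versions, so treat the field names here as an assumption to verify against your npm version:

```ts
// A hedged sketch: run `npm audit --json` in CI and fail on high/critical findings.
// The metadata.vulnerabilities shape below matches npm 6-era output.
import { execSync } from "child_process";

function highSeverityCount(): number {
  let output: string;
  try {
    output = execSync("npm audit --json", { encoding: "utf8" });
  } catch (err: any) {
    // npm audit exits non-zero when vulnerabilities are found;
    // the JSON report is still on stdout.
    output = err.stdout;
  }
  const report = JSON.parse(output);
  const v = report.metadata?.vulnerabilities ?? {};
  return (v.high ?? 0) + (v.critical ?? 0);
}

const count = highSeverityCount();
if (count > 0) {
  console.error(`Found ${count} high/critical vulnerabilities; failing the build.`);
  process.exit(1);
}
```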
To know about this survey in detail, read the WhiteSource developer security report.

Read more
- Kazakhstan government intercepts nationwide HTTPS traffic to re-encrypt with a govt-issued root certificate – Cyber-security or Cyber-surveillance?
- "Why was Rust chosen for Libra?", US Congressman questions Facebook on Libra security design choices
- Introducing Abscissa, a security-oriented Rust application framework by iqlusion