
Tech News


XHCI (USB 3.0+) issues have finally been resolved!

Amrata Joshi
11 Mar 2019
2 min read
Users have been facing issues with the XHCI (USB 3 host controller) bus driver for quite some time. Last month, waddlesplash, a member of the Haiku team, worked on fixing the XHCI bus driver, and a few users contributed small fixes that helped the driver boot Haiku within QEMU. Still, a few issues remained that caused device lockups, such as USB mouse/keyboard stalls.

The kernel-related issues have now been resolved: devices no longer lock up, and performance has been greatly improved, reaching 120MB/s on some USB3 flash drives and XHCI chipsets. Users can now try the improved, more efficient driver. The only remaining issue is a hard stall on boot with certain USB3 flash drives on NEC/Renesas controllers. The work on USB2 flash drives plugged into USB3 ports, and on mounting flash drives, is finished. Most of the controller-initialization issues were fixed in hrev52772, and the broken transfer-finalization logic and random device stalls have also been fixed. A race condition in request submission has been resolved, dead code has been removed, the style has been cleaned up, and the device structure has been improved. The Haiku team believes this driver will be more useful to other OS developers as a reference than FreeBSD's, OpenBSD's, or Linux's.

To know more about this news, check out Haiku's official blog post.

Related reading:
USB 4 will integrate Thunderbolt 3 to increase the speed to 40Gbps
USB-IF launches ‘Type-C Authentication Program’ for better security
Google releases two new hardware products, Coral dev board and a USB accelerator built around its Edge TPU chip


GCC 9.1 releases with improved diagnostics, simpler C++ errors and much more

Amrata Joshi
11 Mar 2019
2 min read
Just two months ago, the team behind GCC (the GNU Compiler Collection) previewed certain changes coming to GCC 9.1. Last week, the team released GCC 9.1 with improved diagnostics, better location information, and simpler C++ errors.

What's new in GCC 9.1?

Changes to diagnostics
GCC 9.1 has a new look for its diagnostics: the team added a left-hand margin that shows line numbers. Diagnostics can now label regions of the source code to show relevant information; for example, the left-hand and right-hand sides of a “+” operator are labeled, so GCC highlights them inline. The team has also added a JSON output format, giving GCC 9.1 a machine-readable output format for diagnostics.

Simpler C++ errors
When dealing with C++, the compiler usually has to consider several functions at a given call site and reject all of them for different reasons, and g++'s error messages need to give a specific reason for rejecting each function. This makes even simple cases difficult to read. This release adds special-casing to simplify g++ errors for common cases.

Improved C++ location information
A major issue in GCC's internal representation is that not every node in the syntax tree has a source location. For GCC 9.1, the team worked on this problem, so most places in the C++ syntax tree now retain location information for longer.

Users can now emit optimization information
GCC 9.1 can automatically vectorize loops, reorganizing them to work on multiple iterations at once. Users now have an option, -fopt-info, that emits information about which optimizations were performed.

Improved runtime library
This release comes with improved experimental support for C++17, including <memory_resource>. There is also support for opening file streams with wide character paths on Windows.

Arm-specific changes
Support for the deprecated Armv2 and Armv3 architectures and their variants has been removed, as has support for the Armv5 and Armv5E architectures.

To know more about this news, check out Red Hat's blog post.

Related reading:
DragonFly BSD 5.4.1 released with new system compiler in GCC 8 and more
The D language front-end support finally merged into GCC 9
GCC 8.1 Standards released!


Resecurity reports ‘IRIDIUM’ behind Citrix data breach; 200+ government agencies, oil and gas companies, and technology companies also targeted

Melisha Dsouza
11 Mar 2019
4 min read
Last week, Citrix, the American cloud computing company, disclosed that it suffered a data breach on its internal network; the company was informed of the attack by the FBI. In a statement posted on Citrix's official blog, the company's Chief Security Information Officer Stan Black said, “the FBI contacted Citrix to advise they had reason to believe that international cybercriminals gained access to the internal Citrix network. It appears that hackers may have accessed and downloaded business documents. The specific documents that may have been accessed, however, are currently unknown.” The FBI told Citrix that the hackers likely used a tactic known as password spraying to exploit weak passwords. The blog further states that “Once they gained a foothold with limited access, they worked to circumvent additional layers of security”.

In the wake of these events, the security firm Resecurity reached out to NBC News, claiming it had reason to believe the attacks were carried out by an Iranian-linked group known as IRIDIUM. Resecurity says that IRIDIUM "has hit more than 200 government agencies, oil and gas companies, and technology companies including Citrix," and claims that IRIDIUM breached Citrix's network in December 2018. Charles Yoo, Resecurity's president, said that the hackers extracted at least six terabytes, and possibly up to 10 terabytes, of sensitive data stored on the Citrix enterprise network, including email correspondence, files in network shares, and other services used for project management and procurement: “It's a pretty deep intrusion, with multiple employee compromises and remote access to internal resources."

Yoo further added that his firm has been tracking the Iranian-linked group for years and has reason to believe that IRIDIUM broke into Citrix's network about 10 years ago and has been “lurking inside the company's system ever since.” There is no evidence that the attackers directly penetrated U.S. government networks; however, the breach carries the potential risk that the hackers could eventually reach sensitive government networks. According to Black, “At this time, there is no indication that the security of any Citrix product or service was compromised.”

Resecurity said it first reached out to Citrix on December 28, 2018, to share an early warning about “a targeted attack and data breach”. According to Yoo, analysis indicated that the hackers were focused in particular on FBI-related projects, NASA and aerospace contracts, and work with Saudi Aramco, Saudi Arabia's state oil company. “Based on the timing and further dynamics, the attack was planned and organized specifically during Christmas period,” Resecurity says in a blog post. A spokesperson for Citrix confirmed to The Register that "Stan’s blog refers to the same incident" described by Resecurity.

Twitter was abuzz with users expressing confusion over the timeline of events and wondering about the consequences if IRIDIUM had truly been lurking in Citrix's network for 10 years:

https://twitter.com/dcallahan2/status/1104301320255754241
https://twitter.com/MalwareYoda/status/1104170906740350977
https://twitter.com/Maliciouslink/status/1104375001715798016

The data breach is worrisome, considering that Citrix sells workplace software to government agencies and handles sensitive computer projects for the White House Communications Agency, the U.S. military, the FBI, and many American corporations.

Related reading:
U.S. Senator introduces a bill that levies jail time and hefty fines for companies violating data breaches
Internal memo reveals NASA suffered a data breach compromising employees social security numbers
Equifax data breach could have been “entirely preventable”, says House oversight and government reform committee staff report
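Password spraying, the tactic the FBI cited, inverts a classic brute-force attack: rather than hammering one account with many passwords, the attacker tries a small list of common passwords across many accounts, keeping per-account failures below lockout thresholds. A minimal illustrative sketch (all account names and passwords here are hypothetical):

```python
# Illustrative sketch of password-spraying order (hypothetical data):
# the outer loop iterates over passwords, so each account only ever
# sees a handful of attempts, spread out over the whole run.
COMMON_PASSWORDS = ["Winter2018!", "Password1", "Welcome123"]
ACCOUNTS = ["alice", "bob", "carol", "dave"]

def spray_attempts(accounts, passwords):
    """Yield (account, password) pairs in spraying order."""
    for password in passwords:      # one password per pass...
        for account in accounts:    # ...tried against every account
            yield account, password

attempts = list(spray_attempts(ACCOUNTS, COMMON_PASSWORDS))
print(len(attempts))                                # 12 attempts total
print(sum(1 for a, _ in attempts if a == "alice"))  # only 3 per account
```

Each account sees only three attempts in total, which is why the technique is far quieter than brute-forcing a single account, and why the FBI's description points to weak, commonly reused passwords rather than a software flaw.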


Flickr says Creative Commons photos won’t be subject to 1,000 picture limit

Fatema Patrawala
11 Mar 2019
2 min read
On November 1st, 2018, Flickr announced that it would be limiting free accounts to just 1,000 pictures and deleting any pictures on accounts over that number, but it made an exception: any Creative Commons licensed photos uploaded before the November 1st, 2018 deadline would be allowed to stay. Last Friday, the company made that policy permanent: all Creative Commons photos will be allowed on Flickr for good, regardless of upload date, even on accounts that would otherwise surpass the 1,000-picture limit.

In light of this change, Flickr also removed the ability to change licenses on photos in bulk, which makes it difficult for users to just hit a button and circumvent the 1,000-picture limit. That's for good reason: the company says it wants users to think about and understand the consequences of making a photo open to use by anyone under a Creative Commons license before they flip the switch to avoid the limit. It's unclear whether users already at the 1,000-photo limit will be able to upload new Creative Commons photos past that, but that seems to be what Flickr is implying.

Additionally, Flickr is adding “In memoriam” status for accounts of users who have passed away, which locks the account and preserves all the pictures on it. It is also available for Pro accounts that would be over the 1,000-picture limit when their subscription inevitably lapses. Flickr has put up a page for submitting accounts to be memorialized; upon receiving a request, it evaluates whether the account qualifies, after which the account's username is updated to reflect the “in memoriam” status and login for the account is locked to prevent anyone from signing in.

Lastly, Flickr announced that it will finally be removing the last major vestige of the company's former Yahoo stewardship: it has decided to do away with the mandatory Yahoo login requirement and will transition existing accounts away from Yahoo over the next few weeks.

Related reading:
RSA Conference 2019 Highlights: Top 5 cybersecurity products announced
Google Cloud security launches three new services for better threat detection and protection in enterprises


Blue Oak Council publishes model license version 1.0.0 to simplify software licensing for everyone

Natasha Mathur
11 Mar 2019
2 min read
Blue Oak Council Inc, a Delaware nonprofit corporation, published its model license version 1.0.0 last week. The new license demonstrates the techniques licenses use to make software free and simple for everyone to use and build on. The licensing materials published by Blue Oak are written in everyday language, making it easy for developers, lawyers, and others to understand software licensing without relying on legal help.

The Blue Oak model license 1.0.0 covers purpose, acceptance, copyright, notices, excuse, patent, reliability, and no liability. The license states that it gives everyone as much permission to work with the software as possible, and it protects contributors from liability. Users must agree to the rules of the license to receive it, and should refrain from doing things that would defy those rules. Additionally, everyone who gets a copy of any part of the software (with or without changes) must also receive the text of the license or a link to it. If anyone notifies a user in writing that they have not complied with the notices requirement, the user can keep their license by taking all practical and necessary steps needed to comply within 30 days of the notice; failing that, the license ends immediately.

Apart from this, Blue Oak Council has also published example provisions for contracts and grants, along with a corporate open source policy that works with permissive licenses. There is also a list of permissive public software licenses drawn from the OSI and SPDX lists, rated from gold to lead based on criteria such as the clarity of drafting, simplicity, and practicality of conditions.

For more information, check out the official Blue Oak Council blog post.

Related reading:
Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)
Neo4j Enterprise Edition is now available under a commercial license
Free Software Foundation updates its licensing materials, adds Commons Clause and Fraunhofer FDK AAC license


CSS Working Group approves adding support for trigonometric functions in CSS

Bhagyashree R
11 Mar 2019
2 min read
In a meeting conducted last month, the CSS Working Group agreed to introduce several trigonometry functions in CSS. Created in 1997 by the World Wide Web Consortium (W3C), the CSS Working Group is responsible for discussing new features and tackling issues in CSS. So far, the group has approved the following 10 functions:

Sine: sin()
Cosine: cos()
Tangent: tan()
Arccosine: acos()
Arcsine: asin()
Arctangent: atan()
Arctangent of two numbers x and y: atan2()
Square root: sqrt()
Square root of the sum of squares of its arguments: hypot()
Power of: pow()

CSS is no longer limited to changing colors or fonts; developers have slowly come to rely on CSS for much more complex tasks. CSS 3, its overhauled version, comes with web animations, gradients, new selectors, and more. However, CSS lacked the ability to work with angles or to perform mathematical operations more advanced than adding, subtracting, multiplying, or dividing two values.

The decision comes after multiple requests by web developers to introduce trigonometric functions to simplify many use cases that involve angles, such as syncing rotation angles and converting between angles and x/y dimensions. Until now, developers implementing these use cases had to either hardcode values, use JavaScript, or use a preprocessor, which is a pain. Explaining the need for trigonometric functions in CSS, one developer said, “In static markup, the solution is to hard-code approximate values, but that often leaves pixel gaps or discontinuities from rounding errors. In dynamic situations, as others have mentioned, the only solution is JavaScript (with lots of converting back and forth between radians for the JS functions and degrees or turns for my design and for SVG properties, which is the only time I usually need Math.PI!).”

Developers are also requesting reciprocal functions for calculating cotangent, secant, and cosecant; these would be convenient but are currently not a priority.

To read the discussion by the CSS Working Group, check out its GitHub repository.

Related reading:
Erlang turns 20: Tracing the journey from Ericsson to Whatsapp
How you can replace a hot path in JavaScript with WebAssembly
Bootstrap 5 to replace jQuery with vanilla JavaScript
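The workaround the quoted developer describes, converting by hand between degrees and radians to place elements on a circle, is easy to see outside CSS. A small Python sketch of the same math that CSS sin() and cos() would absorb:

```python
import math

def point_on_circle(cx, cy, radius, angle_deg):
    """Return the (x, y) position of an element placed on a circle,
    converting degrees (natural for CSS/SVG) to radians (required by
    the math library, just as by JavaScript's Math.sin/Math.cos)."""
    angle_rad = math.radians(angle_deg)  # the conversion CSS would hide
    x = cx + radius * math.cos(angle_rad)
    y = cy + radius * math.sin(angle_rad)
    return round(x, 2), round(y, 2)

# Place four menu items evenly around a circle of radius 100:
positions = [point_on_circle(0, 0, 100, i * 90) for i in range(4)]
print(positions)
```

With the approved functions, the equivalent could live entirely in a stylesheet, something like `left: calc(100px * cos(90deg))`, with no JavaScript round trip (the exact CSS syntax is still being specified, so treat that line as a sketch).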

A security researcher reveals his discovery of 800+ million leaked emails available online

Savia Lobo
09 Mar 2019
4 min read
Security researcher Bob Diachenko shared his discovery of an unprotected, 150GB MongoDB instance containing a huge number of emails that were publicly accessible to anyone with an internet connection. “Some of the data was much more detailed than just the email address and included personally identifiable information (PII)”, he wrote.

The discovered database contained four separate collections of data totalling 808,539,939 records. The largest part of the database was named ‘mailEmailDatabase’ and held three folders:

Emailrecords (798,171,891 records)
emailWithPhone (4,150,600 records)
businessLeads (6,217,358 records)

He cross-checked a random selection of records against Troy Hunt's HaveIBeenPwned database. The researcher states, “I started to analyze the content in an attempt to identify the owner and responsibly disclose it – even despite the fact that this started to look very much like a spam organization dataset.”

In addition to the email databases, the Mongo instance also uncovered details on the possible owner of the database: a company named Verifications.io, which offered an ‘Enterprise Email Validation’ service. Once emails were uploaded for verification, they were also stored in plain text. “Once I reported my discovery to Verifications.io the site was taken offline and is currently down at the time of this publication. Here is the archived version”, the researcher said.

According to Diachenko, the service works like this: someone uploads a list of email addresses that they want to validate. Verifications.io has a list of mail servers and internal email accounts that it uses to “validate” an email address, which it does by literally sending the person an email. If it does not bounce, the email is validated; if it bounces, it is put in a bounce list so it can easily be excluded later on.

Diachenko illustrated the risk: “‘Mr. Threat Actor’ has a list of 1000 companies that he wants to hack into. He has a bunch of potential users and passwords but has no idea which ones are real. He could try to log in to a service or system using ALL of those accounts, but that type of brute force attack is very noisy and would likely be identified.” Instead, the threat actor uploads all of his potential email addresses to a service like Verifications.io. The email verification service then sends tens of thousands of emails to validate these users (some real, some not); each user on the list receives their own spam message saying “hi”. The threat actor then receives a cleaned, verified, and valid list of users at these companies, telling him who works there and who does not, from which he can start a more focused phishing or brute-forcing campaign.

According to Wired, “The data doesn't contain Social Security numbers or credit card numbers, and the only passwords in the database are for Verifications.io's own infrastructure. Overall, most of the data is publicly available from various sources, but when criminals can get their hands on troves of aggregated data, it makes it much easier for them to run new social engineering scams, or expand their target pool.”

Security researcher Troy Hunt is adding the Verifications.io data to his service HaveIBeenPwned, which helps people check whether their data has been compromised in data exposures and breaches. He says that 35 percent of the trove's 763 million email addresses are new to the HaveIBeenPwned database. The Verifications.io data dump is also the second-largest ever added to HaveIBeenPwned in terms of number of email addresses, after the 773 million in the repository known as Collection 1, which was added earlier this year. Hunt says some of his own information is included in the Verifications.io exposure.

To know more about this news in detail, read Bob Diachenko's post.

Related reading:
Leaked memo reveals that Facebook has threatened to pull investment projects from Canada and Europe if their data demands are not met
Switzerland's e-voting system source code leaked ahead of its bug bounty program; slammed for being ‘poorly constructed’
GDPR complaint claims Google and IAB leaked ‘highly intimate data’ of web users for behavioral advertising
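The validation loop Diachenko describes, send a real message and sort addresses by whether it bounces, can be sketched in a few lines (the function names and the simulated mail check below are hypothetical, for illustration only):

```python
# Sketch of the bounce-based "validation" flow described above.
def classify_addresses(addresses, bounces):
    """Split addresses into validated and bounce lists, where
    bounces(addr) reports whether a probe message bounced."""
    validated, bounced = [], []
    for addr in addresses:
        (bounced if bounces(addr) else validated).append(addr)
    return validated, bounced

# Simulated mail delivery: only these addresses "exist".
known_good = {"alice@example.com", "bob@example.com"}
probe = lambda addr: addr not in known_good  # True means it bounced

valid, bounce = classify_addresses(
    ["alice@example.com", "ghost@example.com", "bob@example.com"],
    probe,
)
print(valid)   # ['alice@example.com', 'bob@example.com']
print(bounce)  # ['ghost@example.com']
```

This is exactly why the exposed database is valuable to an attacker: the service does the noisy probing, and the attacker receives only the clean, validated list.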


ChaCha20-Poly1305 vulnerability issue affects OpenSSL 1.1.1 and 1.1.0

Savia Lobo
09 Mar 2019
2 min read
On Wednesday, March 6, the OpenSSL team disclosed a low-severity vulnerability in ChaCha20-Poly1305, an AEAD cipher for which OpenSSL incorrectly allows a nonce of up to 16 bytes to be set.

The OpenSSL team states that ChaCha20-Poly1305 requires a unique nonce input for every encryption operation. RFC 7539 specifies that the nonce value (IV) should be 96 bits (12 bytes). OpenSSL allows a variable nonce length and front-pads the nonce with 0 bytes if it is less than 12 bytes. However, it also incorrectly allows a nonce of up to 16 bytes to be set; in this case only the last 12 bytes are significant, and any additional leading bytes are ignored.

OpenSSL versions 1.1.1 and 1.1.0 are affected by this issue; it does not impact OpenSSL 1.0.2.

The OpenSSL advisory notes that it is a requirement of using this cipher that nonce values be unique: “Messages encrypted using a reused nonce value are susceptible to serious confidentiality and integrity attacks. If an application changes the default nonce length to be longer than 12 bytes and then makes a change to the leading bytes of the nonce expecting the new value to be a new unique nonce then such an application could inadvertently encrypt messages with a reused nonce”. Also, the ignored bytes in a long nonce are not covered by the integrity guarantee of this cipher, so any application that relies on the integrity of those ignored leading bytes may be further affected.

Any OpenSSL internal use of this cipher, including in SSL/TLS, is safe because no such use sets such a long nonce value. However, user applications that use this cipher directly and set a non-default nonce length longer than 12 bytes may be vulnerable.

To know more about this issue in detail, head over to the OpenSSL blog post.

Related reading:
Drupal releases security advisory for ‘serious’ Remote Code Execution vulnerability
New research from Eclypsium discloses a vulnerability in Bare Metal Cloud Servers that allows attackers to steal data
Google releases a fix for the zero-day vulnerability in its Chrome browser while it was under active attack
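The nonce handling the advisory describes can be sketched in Python (this models the documented behavior, not OpenSSL's actual C implementation):

```python
IV_LEN = 12  # RFC 7539 nonce (IV) length in bytes

def effective_nonce(nonce: bytes) -> bytes:
    """Model of the behavior described in the advisory: short nonces
    are front-padded with zero bytes; nonces of 13-16 bytes are
    accepted, but only the last 12 bytes are significant."""
    if len(nonce) > 16:
        raise ValueError("nonce longer than 16 bytes is rejected")
    if len(nonce) < IV_LEN:
        return nonce.rjust(IV_LEN, b"\x00")  # front-pad with zeros
    return nonce[-IV_LEN:]                   # leading bytes ignored

# Two 16-byte nonces that differ only in their leading 4 bytes
# collapse to the same effective nonce, i.e. silent nonce reuse:
n1 = b"AAAA" + b"\x01" * 12
n2 = b"BBBB" + b"\x01" * 12
print(effective_nonce(n1) == effective_nonce(n2))  # True
```

An application that tried to derive fresh nonces by varying those leading bytes would therefore reuse nonces, which is exactly the confidentiality and integrity failure the advisory warns about.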


JUnit 5.4 released with an aggregate artifact for reducing your Maven and Gradle files

Bhagyashree R
08 Mar 2019
2 min read
Last month, the team behind the JUnit framework announced the release of JUnit 5.4. This release allows ordering extensions and test case execution, provides an aggregate artifact for slimming your Maven and Gradle files, and more.

Some new features in JUnit 5.4

Ordering test case execution
JUnit 5.4 lets you explicitly define a test execution order. To enable test ordering, annotate the class with the ‘@TestMethodOrder’ extension and specify the ordering type: Alphanumeric, OrderAnnotation, or Random. Alphanumeric orders test execution based on the method name of the test case; for a custom-defined execution order, you can use the OrderAnnotation type; and to order test cases pseudo-randomly, you can use the Random type.

Extension ordering
With this release, you can order not only test case execution but also how programmatically registered extensions (those registered with @RegisterExtension) are executed. This is useful when the setup/teardown behavior of a test is complex and spans separate domains, for instance when testing how a cache and a database are used together.

Aggregate artifact
Previously, a large number of dependencies were required to use JUnit 5. This release changes that by providing the ‘junit-jupiter’ aggregate artifact, which includes ‘junit-jupiter-api’ and ‘junit-jupiter-params’ and collectively covers most of the dependencies you will need when using JUnit 5. It also helps reduce the size of the Maven and Gradle files of projects using JUnit 5.

TempDir
In JUnit 5.4, the team has added @TempDir, originally part of the JUnit Pioneer third-party library, as a native feature of the JUnit framework. You can use the @TempDir extension to handle the creation and cleanup of temporary files.

TestKit
With TestKit, you can perform a meta-analysis on a test suite: it lets you check the number of executed, passed, failed, and skipped tests, as well as a few other behaviors.

To read the full list of updates in JUnit 5.4, check out the official announcement.

Related reading:
Apache NetBeans IDE 10.0 released with support for JDK 11, JUnit 5 and more!
JUnit 5.3 brings console output capture, assertThrow enhancements and parallel test execution
Unit testing with Java frameworks: JUnit and TestNG [Tutorial]


Ionic 4.1 named Hydrogen is out!

Bhagyashree R
08 Mar 2019
2 min read
After releasing Ionic 4.0 in January this year, the Ionic team announced the release of Ionic 4.1 on Wednesday. The release is named “Hydrogen”, following the team's scheme of naming releases after elements of the periodic table. Along with a few bug fixes, Ionic 4.1 comes with features like a skeleton text update, indeterminate checkboxes, and more.

Some of the new features in Ionic 4.1

Skeleton text update
Using the ion-skeleton-text component, developers can make skeleton screens for list items look more natural. ‘ion-skeleton-text’ can be used inside media controls like ‘ion-avatar’ and ‘ion-thumbnail’, and the size of skeletons placed inside avatars and thumbnails is automatically adjusted to their containers. You can also style skeletons with a custom border-radius, width, height, or any other CSS styles for use outside of Ionic components.

Indeterminate checkboxes
A new property named ‘indeterminate’ has been added to the ‘ion-checkbox’ component. When its value is true, the checkbox is shown in a half-on/half-off state. This property is handy when you have a ‘check all’ checkbox but only some of the options in the group are selected.

CSS display utilities
Ionic 4.1 comes with a few new CSS classes for hiding elements and for responsive design: ion-hide and ion-hide-{breakpoint}-{dir}. To hide an element, use the ‘ion-hide’ class; use the ion-hide-{breakpoint}-{dir} classes to hide an element based on breakpoints for certain screen sizes.

To know more about the other features in detail, visit Ionic's official website.

Related reading:
Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
Ionic v4 RC released with improved performance, UI Library distribution and more
The Ionic team announces the release of Ionic React Beta

Microsoft researchers introduce a new climate forecasting model and a public dataset to train these models

Natasha Mathur
08 Mar 2019
3 min read
Microsoft researcher Lester Mackey and his teammates, along with grad students Jessica Hwang and Paulo Orenstein, have come out with a new machine learning based forecasting model, along with a comprehensive dataset called SubseasonalRodeo for training subseasonal forecasting models: systems capable of predicting temperature or precipitation two to six weeks in advance in the western contiguous United States. The SubseasonalRodeo dataset can be found at the Harvard Dataverse, and the researchers present the details of their work in the paper “Improving Subseasonal Forecasting in the Western U.S. with Machine Learning”.

“What has perhaps prevented computer scientists and statisticians from aggressively pursuing this problem is that there hasn’t been a nice, neat, tidy dataset for someone to just download ..and use, so we hope that by releasing this dataset, other machine learning researchers.. will just run with it,” says Hwang.

The Microsoft team states that the large amount of high-quality historical weather data, together with today's computational power, makes statistical forecast modeling worthwhile, and that combining physics-based and statistics-based approaches leads to better predictions. The team's forecasting system combines two regression models trained on its SubseasonalRodeo dataset. The dataset consists of weather measurements dating as far back as 1948, including temperature, precipitation, sea surface temperature, sea ice concentration, and relative humidity and pressure, consolidated from sources like the National Center for Atmospheric Research, the National Oceanic and Atmospheric Administration's Climate Prediction Center, and the National Centers for Environmental Prediction.

The first of the two models is a local linear regression with multitask model selection, or MultiLLR. The data used was limited to an eight-week span in any year around the day for which the prediction was being made, and a selection process using a customized backward stepwise procedure combined two to 13 of the most relevant predictors to make a forecast. The second model is a multitask k-nearest-neighbor autoregression, or AutoKNN, which incorporates the historical data of only the measurement being predicted, either the temperature or the precipitation.

The researchers state that although each model on its own performed better than the competition's baseline models, namely a debiased version of the operational U.S. Climate Forecasting System (CFSv2) and a damped persistence model, the two models address different parts of the subseasonal forecasting challenge: the first uses only recent history to make its predictions, while the second doesn't account for other factors. The team's final forecasting model is therefore a combination of the two.

The team will be further expanding its work in the Western United States and will continue its collaboration with the Bureau of Reclamation and other agencies. “I think that subseasonal forecasting is fertile ground for machine learning development, and we've just scratched the surface,” says Mackey.

For more information, check out the official Microsoft blog.

Related reading:
Italian researchers conduct an experiment to prove that quantum communication is possible on a global scale
Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US
Researchers unveil a new algorithm that allows analyzing high-dimensional data sets more effectively, at NeurIPS conference
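The AutoKNN idea, predicting a series from the values that followed its most similar historical windows, can be illustrated with a toy pure-Python sketch (this shows k-nearest-neighbor autoregression in general, not the researchers' multitask implementation):

```python
def knn_autoregress(series, window, k):
    """Predict the next value of `series`: find the k historical
    windows closest (by squared distance) to the most recent window
    and average the values that immediately followed them."""
    target = series[-window:]
    candidates = []
    for i in range(len(series) - window):  # windows with a known successor
        hist = series[i:i + window]
        dist = sum((a - b) ** 2 for a, b in zip(hist, target))
        candidates.append((dist, series[i + window]))
    candidates.sort(key=lambda c: c[0])    # nearest windows first
    neighbors = candidates[:k]
    return sum(v for _, v in neighbors) / len(neighbors)

# Toy "temperature" series with a repeating pattern:
temps = [10, 12, 14, 12, 10, 12, 14, 12, 10, 12, 14]
print(knn_autoregress(temps, window=3, k=2))  # 12.0
```

Because the series repeats, the two nearest historical windows are exact matches for the current one, and both were followed by 12, so the prediction is 12.0. The real model works the same way in spirit but over multitask weather data.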
Notepad++ drops code-signing for its releases from version 7.6.4 onwards

Bhagyashree R
08 Mar 2019
3 min read
On Wednesday, Don Ho, the developer of Notepad++, announced the release of Notepad++ 7.6.4. He also shared that from this release onwards, users will not see the blue, trusted User Access Control (UAC) popup, as Notepad++ has dropped code signing for its releases. UAC is a Windows security feature that helps prevent unauthorized changes to the operating system.

Why did Notepad++ decide to drop code signing for its releases?

DigiCert, a US-based X.509 SSL certificate authority, donated a three-year code signing certificate to Notepad++ in 2016, which has now expired. When Don Ho tried to purchase a new certificate from Certum, another certification authority, he was required to provide a Common Name (CN). The problem is that since Notepad++ is not a company or organization, Certum did not allow him to use Notepad++ as the CN. He also feels that these code signing certificates are overpriced. He added in the blog post, “Notepad++ has done without a certificate for more than 10 years, I don’t see why I should add the dependency now (and be an accomplice of this overpricing industry). I decide to do without it.”

This sparked a discussion on Hacker News, where many users supported the developer’s decision. One of the users commented, “Well I don't care if the developer paid the certificate, and I don't see why someone that develops FOSS should pay money for something that doesn't bring to him any of that money back. At least for open source software certificates should be offered for free, in my opinion.”

Don Ho mentioned in the announcement that this decision will have no effect on Notepad++ security whatsoever, but it will be less flexible than before:

As always, every release will come with the SHA256 hashes of the installer and other packages.
The SHA256 hashes of components such as ‘SciLexer.dll’, ‘GUP.exe’, and ‘nppPluginList.dll’ will be checked by Notepad++ itself.

Markdown support was planned to land in Notepad++ 7.6.3, but the needed file wasn’t deployed correctly by the installer. This bug is now fixed in Notepad++ 7.6.4. Additionally, this release fixes a few security vulnerabilities and some crash bugs identified in the European Commission's Free and Open Source Software Auditing bug bounty program.

To read the original announcement, visit Notepad++’s official website.

EU to sponsor bug bounty programs for 14 open source projects from January 2019
Browser based Visualization made easy with the new P5.js
5 Reasons to learn programming
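The integrity check described above, comparing a download's SHA256 digest against the value published on the release page, takes only a few lines. A minimal sketch in Python (the file name and digest in the example are placeholders, not real Notepad++ release values):

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Compute the SHA256 hex digest of a file, reading it in chunks
    so large installers don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_hex):
    """Return True only if the file's digest matches the published one."""
    return sha256_of(path) == expected_hex.lower()

# Hypothetical usage against a digest copied from a release page:
# verify("npp.7.6.4.Installer.exe", "3c0cd1...")  # placeholder digest
```

Unlike a code signing certificate, a hash published alongside the download only proves the file wasn't corrupted or swapped in transit; it doesn't prove who produced it, which is the flexibility trade-off the announcement refers to.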
Introducing ‘Quarkus’, a Kubernetes native Java framework for GraalVM & OpenJDK HotSpot

Melisha Dsouza
08 Mar 2019
2 min read
Yesterday, Red Hat announced the launch of ‘Quarkus’, a Kubernetes Native Java framework that offers developers “a unified reactive and imperative programming model” in order to address a wider range of distributed application architectures. The framework uses Java libraries and standards and is tailored for GraalVM and HotSpot. Quarkus has been designed with serverless, microservices, containers, Kubernetes, FaaS, and the cloud in mind, and it provides an effective solution for running Java in these new deployment environments.

Features of Quarkus

- Fast startup, enabling automatic scaling up and down of microservices on containers and Kubernetes, as well as on-the-spot FaaS execution.
- Low memory utilization, to help optimize container density in microservices architecture deployments that require multiple containers.
- A unified imperative and reactive programming model for microservices development.
- A full-stack framework that leverages libraries like Eclipse MicroProfile, JPA/Hibernate, JAX-RS/RESTEasy, Eclipse Vert.x, Netty, and more.
- An extension framework that third-party framework authors can leverage and extend.

Twitter was abuzz with Kubernetes users expressing their excitement at this news, describing Quarkus as a “game changer” in the world of microservices:

https://twitter.com/systemcraftsman/status/1103759828118368258
https://twitter.com/MarcusBiel/status/1103647704494804992
https://twitter.com/lazarotti/status/1103633019183738880

This open source framework is available under the Apache Software License 2.0 or a compatible license. You can head over to the Quarkus website for more information.

Using lambda expressions in Java 11 [Tutorial]
Bootstrap 5 to replace jQuery with vanilla JavaScript
Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?
RSA Conference 2019 Highlights: Top 5 cybersecurity products announced

Melisha Dsouza
08 Mar 2019
4 min read
The theme at the ongoing RSA Conference 2019 is “Better”. As the official RSA page explains, “This means working hard to find better solutions. Making better connections with peers from around the world. And keeping the digital world safe so everyone can get on with making the real world a better place.” Keeping up with the theme of the year, the conference saw some exciting announcements, keynotes, and seminars presented by some of the top security experts and organizations. Here is our list of the top 5 new cybersecurity products announced at RSA Conference 2019:

#1 X-Force Red Blockchain Testing service

IBM announced the X-Force Red Blockchain Testing service to test vulnerabilities in enterprise blockchain platforms. The service will be run by IBM's in-house X-Force Red security team and will test the security of back-end processes for blockchain-powered networks. It will evaluate the whole implementation of enterprise blockchain platforms, including chain code, public key infrastructure, and hyperledgers. Alongside, the service will assess the hardware and software applications that are typically used to control access and manage blockchain networks.

#2 Microsoft Azure Sentinel

Azure Sentinel will help developers “build next-generation security operations with cloud and AI”. It gives them a holistic view of security across the enterprise. The service collects data across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds. It can then detect previously uncovered threats and minimize false positives using analytics and threat intelligence. Azure Sentinel also helps investigate threats with AI and hunt suspicious activities at scale, while responding to incidents rapidly with built-in orchestration and automation of common tasks.
#3 Polaris Software Integrity Platform

The Polaris Software Integrity Platform is an integrated, easy-to-use solution that enables security and development teams to quickly build secure, high-quality software. The service lets developers integrate and automate static, dynamic, and software composition analysis with the tools they are already familiar with. The platform also provides security teams with a holistic view of application security risk across their portfolio and the SDLC, and it enables developers to address security flaws in their code as they write it, without switching tools, using the Polaris Code Sight IDE plugin.

#4 CyberArk Privileged Access Security Solution v10.8

The CyberArk Privileged Access Security Solution v10.8 automates detection, alerting, and response for unmanaged and potentially risky Amazon Web Services (AWS) accounts. This version also features Just-in-Time capabilities to deliver flexible user access to cloud-based or on-premises Windows systems. The Just-in-Time provisional access lets administrators configure how much access time is granted to Windows systems, whether cloud-based or on-premises, reducing operational friction. The solution can now identify privileged accounts in AWS, unmanaged Identity and Access Management (IAM) users (such as Shadow Admins), and EC2 instances and accounts. This will help track AWS credentials and accelerate the onboarding process for these accounts.

#5 Cyxtera AppGate SDP IoT Connector

Cyxtera’s IoT Connector, a feature within AppGate SDP, secures unmanaged and under-managed IoT devices with 360-degree perimeter protection. It isolates IoT resources using Cyxtera's Zero Trust model. Each AppGate IoT Connector instance scales for both volume and throughput and handles a wide array of IoT devices. AppGate operates in-line and limits access to prevent lateral attacks while allowing devices to seamlessly perform their functions.
It can be easily deployed without replacing existing hardware or software.

Apart from these, other products launched at the conference included CylancePERSONA, CrowdStrike Falcon for Mobile, Twistlock 19.03, and much more. To stay updated with all the events, keynotes, seminars, and releases happening at RSA Conference 2019, head over to the official blog.

The Erlang Ecosystem Foundation launched at the Code BEAM SF conference
NSA releases Ghidra, a free software reverse engineering (SRE) framework, at the RSA security conference
Google teases a game streaming service set for Game Developers Conference
Google Cloud security launches three new services for better threat detection and protection in enterprises

Melisha Dsouza
08 Mar 2019
2 min read
This week, Google Cloud Security announced a host of new services to empower customers with advanced security functionality that is easy to deploy and use. These include the Web Risk API, Cloud Armor, and HSM keys.

#1 Web Risk API

The Web Risk API has been released in beta to help ensure the safety of users on the web. The Web Risk API includes data on more than a million unsafe URLs; billions of URLs are examined each day to keep this data up to date. Client applications can use a simple API call to check URLs against Google's lists of unsafe web resources. These lists include social engineering sites, deceptive sites, and sites that host malware or unwanted software.

#2 Cloud Armor

Cloud Armor is a Distributed Denial of Service (DDoS) defense and Web Application Firewall (WAF) service for Google Cloud Platform (GCP), based on the technologies used to protect services like Search, Gmail, and YouTube. Cloud Armor is now generally available, offering L3/L4 DDoS defense as well as IP allow/deny capabilities for applications or services behind the Cloud HTTP(S) Load Balancer. It allows users to permit or block incoming traffic based on IP addresses or ranges using allow lists and deny lists. Users can also customize their defenses and mitigate multivector attacks through Cloud Armor's flexible rules language.

#3 HSM keys to protect data in the cloud

Cloud HSM is now generally available. It allows customers to protect encryption keys and perform cryptographic operations in FIPS 140-2 Level 3 certified HSMs, without the operational overhead of HSM cluster management, scaling, and patching. The Cloud HSM service is fully integrated with Cloud Key Management Service (KMS), allowing users to create and use customer-managed encryption keys (CMEK) that are generated and protected by a FIPS 140-2 Level 3 hardware device.

You can head over to Google Cloud Platform’s official blog to know more about these releases.
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
Google’s Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
Build Hadoop clusters using Google Cloud Platform [Tutorial]
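A Web Risk lookup of the kind described above amounts to a single authenticated GET request. The sketch below only builds such a request URL and makes no network call; the endpoint shape follows Google's published REST conventions for the service, and the API key is a placeholder:

```python
from urllib.parse import urlencode

# Assumed endpoint, per Google's REST conventions for the Web Risk service.
WEB_RISK_ENDPOINT = "https://webrisk.googleapis.com/v1/uris:search"

def build_search_url(uri, threat_types, api_key):
    """Build the lookup URL asking Web Risk whether `uri` appears on any
    of the given threat lists (e.g. MALWARE, SOCIAL_ENGINEERING,
    UNWANTED_SOFTWARE)."""
    params = [("threatTypes", t) for t in threat_types]  # one entry per list
    params += [("uri", uri), ("key", api_key)]
    return WEB_RISK_ENDPOINT + "?" + urlencode(params)

url = build_search_url(
    "http://example.com/suspicious",
    ["MALWARE", "SOCIAL_ENGINEERING"],
    "YOUR_API_KEY",  # placeholder credential
)
print(url)
```

In the real service, an empty response body indicates the URL was not found on any of the requested lists, while a match returns the threat types it appears under; consult Google's Web Risk reference for the exact response schema.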