
Tech News


Oracle releases open source and commercial licenses for Java 11 and later

Savia Lobo
13 Sep 2018
3 min read
Oracle announced that it will provide JDK releases under two licensing options:

- Under the open source GNU General Public License v2, with the Classpath Exception (GPLv2+CPE)
- Under a commercial license, for those using the Oracle JDK as part of an Oracle product or service, or who do not wish to use open source software

These options replace the historical Binary Code License (BCL) for Oracle Java SE technologies, which combined free and paid commercial terms. The BCL has been the primary license for Oracle Java SE technologies for well over a decade. It historically covered 'commercial features' that were not available in OpenJDK builds. Over the past year, however, Oracle has contributed those features to the OpenJDK Community, including Java Flight Recorder, Java Mission Control, Application Class-Data Sharing, and ZGC. From Java 11 onwards, therefore, Oracle JDK builds and OpenJDK builds will be essentially identical.

Minor differences between Oracle JDK 11 and OpenJDK

- Oracle JDK 11 emits a warning when the -XX:+UnlockCommercialFeatures option is used, whereas in OpenJDK builds this option results in an error. This difference remains in order to make it easier for users of Oracle JDK 10 and earlier releases to migrate to Oracle JDK 11 and later.
- The javac --release command behaves differently for the Java 9 and Java 10 targets, because in those releases the Oracle JDK contained additional modules that were not part of the corresponding OpenJDK releases: javafx.base, javafx.controls, javafx.fxml, javafx.graphics, javafx.media, and javafx.web. This difference remains in order to provide a consistent experience for specific kinds of legacy use. These modules are either now available separately as part of OpenJFX, are now in both OpenJDK and the Oracle JDK because they were commercial features that Oracle contributed to OpenJDK (e.g., Flight Recorder), or were removed from Oracle JDK 11 (e.g., JNLP).
- The Oracle JDK has always required third-party cryptographic providers to be signed by a known certificate, while the cryptography framework in OpenJDK has an open cryptographic interface that does not restrict which providers can be used. Oracle JDK 11 will continue to require a valid signature, and Oracle OpenJDK builds will continue to allow the use of either a valid signature or an unsigned third-party crypto provider.

Read more about this news in detail on the Oracle blog.

State of OpenJDK: Past, Present and Future with Oracle
Oracle announces a new pricing structure for Java
Oracle reveals issues in Object Serialization. Plans to drop it from core Java


How cybersecurity can help us secure cyberspace

Richard Gall
27 Mar 2018
7 min read
With cybercrime on the rise, companies have started taking serious measures to prevent system breaches, and cybersecurity has become the need of the hour. This article explores how cyberattacks can bring companies to their knees, and looks at some of the cybersecurity strategies an organization can adopt to safeguard itself from prevalent attacks.

Malware, phishing, ransomware, DDoS - these terms have become widespread today due to the increasing number of cyberattacks. The cyber threats that organizations face have grown steadily during the last few years and can disrupt even the most resilient organizations.

3 cyber attacks that shook the digital world

2011: Sony

Who can forget the notorious Sony hack of April 2011? Sony's PlayStation Network was hacked by a hacking group called "OurMine," compromising the personal data of 77 million users. This cyberattack made Sony pay more than 15 million dollars in compensation to the people whose accounts were hacked. The hack was made possible through a simple SQL injection; it could have been prevented with parameterized queries (sketched later in this piece), and its impact limited by data encryption. Not long after this hack, in 2014, Sony Pictures was attacked through malware by a hacker group called "Guardians of Peace," which stole more than 100 terabytes of confidential data. Sony had once again not paid heed to its security audit, which showed flaws in the firewall and in several routers and servers. The result was a failure of infrastructure management and a monetary loss of 8 million dollars in compensation.

2013: 3 billion Yahoo accounts hacked

Yahoo has been the target of attackers three times. During its takeover by Verizon, Yahoo disclosed that every one of its 3 billion accounts had been hacked in 2013. One of the worst things about this attack was that it was discovered only in 2016, a whopping three years after the breach.

2017: WannaCry

One of the most infamous ransomware attacks of 2017, WannaCry spanned more than 150 countries, targeting businesses running outdated Windows machines by leveraging some of the leaked NSA tools. The attack, which has been linked to North Korea, hit thousands of targets, including public services and large corporations. The effects of WannaCry were so rampant that Microsoft, in an unusual move to curb the ransomware, released Windows patches for systems it had stopped updating. Somewhat unsurprisingly, WannaCry owed its success to outdated technologies (such as SMBv1) and to organizations leaving their systems unpatched for months, failing to protect themselves from the lurking attack.

How cyber attacks damage businesses

Cyberattacks are clearly bad for business. They lead to:

- Monetary loss
- Data loss
- Breach of confidential information
- Breach of trust
- Infrastructure damage
- Impending litigation and compensation
- Remediation costs
- Bad reputation and reduced marketability

This is why cybersecurity is so important - investing in it is smart from a business perspective, as it could save you a lot of money in the long run.

Emerging cybersecurity trends

Tech journalist and analyst Art Wittmann once said, "the idea that security starts and ends with the purchase of a prepackaged firewall is simply misguided". It's a valuable thing to remember when thinking about cybersecurity today. It's about more than just buying software; it's also about infrastructure design, culture, and organizational practices. Cybersecurity is really a range of techniques and strategies designed to tackle different threats from a variety of sources.
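To make one of those techniques concrete: the Sony breach above hinged on a simple SQL injection, the class of bug that parameterized queries remove by keeping user input out of the SQL text. Below is a minimal sketch in TypeScript using the node-postgres (pg) client; the users table, column names, and connection string are illustrative assumptions, not details from the incident.

```ts
import { Client } from "pg";

// Illustrative sketch only: the users table, columns, and DATABASE_URL
// connection string are assumptions, not details of any real incident.
async function findUserByEmail(email: string) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    // UNSAFE (don't do this): splicing input into the SQL text lets a
    // crafted value such as "x' OR '1'='1" rewrite the query itself.
    // await client.query(`SELECT id, email FROM users WHERE email = '${email}'`);

    // SAFE: a parameterized query sends the SQL text and the value
    // separately, so the driver never interprets user input as SQL.
    const result = await client.query(
      "SELECT id, email FROM users WHERE email = $1",
      [email]
    );
    return result.rows;
  } finally {
    await client.end();
  }
}

findUserByEmail("alice@example.com").then(console.log).catch(console.error);
```

The same principle - never splice untrusted input into a query string - applies to any database driver or ORM.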
Gartner predicts that worldwide cybersecurity spending will climb to $96 billion in 2018. This rapid market growth is being driven by numerous emerging trends, including:

- Cloud computing
- Internet of Things
- Machine learning
- Artificial intelligence
- Biometrics and multi-factor authentication
- Remote access and BYOD (bring your own device)

Effective cybersecurity strategies

The most effective strategy to mitigate and minimize the effects of a cyberattack is to build solid cybersecurity. Here are some of the ways in which an organization can strengthen its cybersecurity efforts:

Understand the importance of security: In the cyber age, you have to take the role of security seriously and protect the organization with the help of a security team. When building a security team, you should take into account the types of risks that could affect the organization, how these risks will impact the business, and remedial measures in case of a breach.

Top-notch security systems: You cannot compromise on the quality of the systems installed to secure your business. Always remember what is at stake: should an attack arise, you need the best quality of security for your business.

Implement a Red and Blue Team: The organization should use Red Team and Blue Team tactics, where the Red Team uses penetration techniques to access sensitive data and the Blue Team defends the system against complex attacks. The teams can be appointed internally, or the job can be outsourced to experts.

Security audits: Security audits are conducted with the aim to protect, detect, and respond. The security team must actively investigate its own security systems to make sure that everything is up to par to defend against a lurking attack, should one occur. The team must also be proactive with countermeasures to defend the organization's walls against malicious lurkers, and employees must be properly educated to take precautions and act wisely in case a breach occurs.

Continuous monitoring: Securing your organization against cyberattacks is a continuous process, not a one-time activity. A security team must be appointed to audit the organization's security systems regularly, penetration testing must be conducted at regular intervals, and the results of these tests must be taken seriously, with mitigation steps to correct any weak or problematic systems.

Enhance your security posture: In the event of a breach, once the security team has confirmed it, they need to react quickly - but don't start investigating without a plan. The compromised device should be located, its behavior analyzed, and remedial actions set underway.

Vigilance: In the words of the world's most famous hacker, Kevin Mitnick, "Companies spend millions of dollars on firewalls, encryption, and secure access devices, and it's money wasted; none of these measures address the weakest link in the security chain." It cannot be stressed enough how important it is to be ever vigilant. The security team must stay current with the latest threat intelligence and always be on the lookout for the latest malicious programs that disrupt organizations.

Think ahead: The question is never "if"; the real question is "when." Attackers come sneaking when you are not looking.
It is absolutely critical that organizations take a proactive stance to protect themselves by dropping the "if" attitude and adopting the "when" attitude.

If you liked this post, explore the book from which it was taken: Cybersecurity - Attack and Defense Strategies. Written by Yuri Diogenes and Erdal Ozkaya, Cybersecurity - Attack and Defense Strategies uses a practical approach to the cybersecurity kill chain to explain the different phases of an attack, including the rationale behind each phase, followed by scenarios and examples that bring the theory into practice.

Yuri Diogenes is a Senior Program Manager on Microsoft's C+E Security CxP Team and a professor at EC-Council University in its master's degree program in cybersecurity. Erdal Ozkaya holds a doctorate in cybersecurity, works for Microsoft as a cybersecurity architect and security advisor, and is also a part-time lecturer at Charles Sturt University in Australia.


Google’s Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud

Melisha Dsouza
25 Oct 2018
3 min read
Earlier this week, Google announced its plans to launch a 'Cloud Robotics platform' for developers in 2019. Since the early onset of 'cloud robotics' in 2010, Google has explored various aspects of the field. Now, with the launch of the Cloud Robotics platform, Google will combine the power of AI, robotics, and the cloud to deploy cloud-connected collaborative robots. The platform will encourage efficient robotic automation in highly dynamic environments. The core infrastructure of the platform will be open source, and users will pay only for the services they use.

Features of the Cloud Robotics platform:

#1 Critical infrastructure

The platform will introduce secure and robust connectivity between robots and the cloud. Kubernetes will be used for the management and distribution of digital assets, and Stackdriver will assist with logging, monitoring, alerting, and dashboarding. Developers will gain access to Google's data management and AI capabilities, ranging from Cloud Bigtable to Cloud AutoML. Standardized data types and open APIs will help developers build reusable automation components. Moreover, open APIs support interoperability, which means integrators can compose end-to-end solutions with collaborative robots from different vendors.

#2 Specialized tools

The tools provided with the platform will help developers build, test, and deploy software for robots with ease. Automation solutions can easily be composed and deployed in customers' environments through system integrators, and operators can monitor robot fleets and ongoing missions as well. Users pay only for the services they use - and if a user decides to move to another cloud provider, they can take their data with them.

#3 Fostering powerful first-party services and third-party innovation

Google's initial Cloud Robotics services can be applied to use cases like robot localization and object tracking. The services will process sensor data from multiple sources and use machine learning to obtain information and insights about the state of the physical world. The platform will encourage an ecosystem of hardware and applications that can be used and re-used for collaborative automation.

#4 Industrial automation made easy

Industrial automation requires extensive custom integration. Collaborative robots can help improve the flexibility of the overall process, saving costs and avoiding vendor lock-in. That said, it is difficult to program robots to understand and react to the unpredictable changes of the physical human world. The Google Cloud platform aims to solve these issues by providing flexible automation services such as the Cartographer service, the Spatial Intelligence service, and the Object Intelligence service.

Watch this video to know more about these services: https://www.youtube.com/watch?v=eo8MzGIYGzs&feature=youtu.be

Alternatively, head over to Google's blog to know more about this announcement.

What's new in Google Cloud Functions serverless platform
Cloud Filestore: A new high performance storage option by Google Cloud Platform
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence


Google open sources Active Question Answering (ActiveQA), a Reinforcement Learning based Q&A system

Natasha Mathur
15 Oct 2018
3 min read
Google announced last week that it is open-sourcing Active Question Answering (ActiveQA), a research project that involves training artificial agents for question answering using reinforcement learning. As part of open-sourcing the project, Google has released a TensorFlow package for the ActiveQA system.

The TensorFlow ActiveQA package comprises three main components, along with the code necessary to train and run the ActiveQA agent:

- The first component is a pre-trained sequence-to-sequence model that takes a question as input and returns its reformulations.
- The second component is an answer selection model that uses a convolutional neural network to score each triplet of original question, reformulation, and answer. The selector uses pre-trained, publicly available word embeddings (GloVe).
- The third component is a question answering system (the environment), which uses BiDAF, a popular question answering system.

"ActiveQA system.. learns to ask questions that lead to good answers. However, because training data in the form of question pairs, with an original question and a more successful variant, is not readily available, ActiveQA uses reinforcement learning, an approach to machine learning concerned with training agents so that they take actions that maximize a reward, while interacting with an environment", reads the Google AI blog.

The concept of ActiveQA was first introduced in Google's ICLR 2018 paper "Ask the Right Questions: Active Question Reformulation with Reinforcement Learning".

ActiveQA differs markedly in its approach from traditional QA systems. Traditional QA systems use supervised learning techniques along with labeled data to train a system that can answer arbitrary input questions. However, such a system cannot deal with uncertainty the way humans would: it cannot reformulate questions, issue multiple searches, or evaluate the responses. This leads to poor-quality answers.

ActiveQA, on the other hand, comprises an agent that consults the QA system repeatedly. The agent reformulates the original question many times, which helps it select the best answer. Each reformulated question is evaluated on the basis of how good the corresponding answer is. If the answer is good, the learning algorithm adjusts the model's parameters accordingly, so that the reformulation which led to the right answer is more likely to be generated again (a sketch of this loop follows below). This approach lets the agent engage in a dynamic interaction with the QA system, leading to better-quality answers.

As per an example mentioned by Google, consider the question "When was Tesla born?". The agent reformulates the question in two different ways - "When is Tesla's birthday" and "Which year was Tesla born" - and retrieves the answers to both questions from the QA system. Using all this information collectively, it returns the answer "July 10, 1856".

"We envision that this research will help us design systems that provide better and more interpretable answers, and hope it will help others develop systems that can interact with the world using natural language", mentions Google.

For more information, read the official Google AI blog.
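To make the reformulate-answer-score loop concrete, here is a schematic rendering of one ActiveQA episode in TypeScript. This is not Google's code - the released package is Python/TensorFlow - and the three stub functions are hypothetical stand-ins for the trained seq2seq reformulator, the BiDAF environment, and the CNN answer selector; only the control flow is meant to be faithful.

```ts
// Hypothetical stubs: in the real system these are trained models from
// Google's TensorFlow package, not hand-written functions.
function reformulate(question: string): string[] {
  // seq2seq model: takes a question, returns reformulations of it
  return ["When is Tesla's birthday", "Which year was Tesla born"];
}

function answerQuestion(question: string): string {
  // environment: a black-box QA system such as BiDAF
  return "July 10, 1856";
}

function scoreTriplet(original: string, reformulation: string, answer: string): number {
  // CNN selector: scores each (original question, reformulation, answer) triplet
  return answer.length > 0 ? 1 : 0;
}

// One ActiveQA episode: reformulate the question, query the QA system with
// each variant, score every triplet, and keep the best answer. In training,
// the score doubles as the reinforcement-learning reward that makes
// reformulations leading to good answers more likely to be generated again.
function activeQA(original: string): { question: string; answer: string; score: number } {
  const candidates = [original, ...reformulate(original)];
  const scored = candidates.map((q) => {
    const answer = answerQuestion(q);
    return { question: q, answer, score: scoreTriplet(original, q, answer) };
  });
  return scored.reduce((best, cur) => (cur.score > best.score ? cur : best));
}

console.log(activeQA("When was Tesla born?"));
```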
Google, Harvard researchers build a deep learning model to forecast earthquake aftershocks location with over 80% accuracy
Google strides forward in deep learning: open sources Google Lucid to answer how neural networks make decisions
Google moving towards data centers with 24/7 carbon-free energy


Azure Functions 3.0 released with support for .NET Core 3.1!

Savia Lobo
12 Dec 2019
2 min read
On 9th December, Microsoft announced that the go-live release of Azure Functions 3.0 is now available. Among the many capabilities and functionality added in this release, one notable addition is support for the newly released .NET Core 3.1 - an LTS (long-term support) release - and Node 12.

Users can now build and deploy 3.0 functions in production. Azure Functions 3.0 brings new capabilities, including the ability to target .NET Core 3.1 and Node 12, and greater backwards compatibility for existing apps running on older language versions, without any code changes.

"While the runtime is now ready for production, and most of the tooling and performance optimizations are rolling out soon, there are still some tooling improvements to come before we announce Functions 3.0 as the default for new apps. We plan to announce Functions 3.0 as the default version for new apps in January 2020," the official announcement mentions.

Users running on earlier versions of Azure Functions will continue to be supported, and the company does not plan to deprecate 1.0 or 2.0 at present. "Customers running Azure Functions targeting 1.0 or 2.0 will also continue to receive security updates and patches moving forward—to both the Azure Functions runtime and the underlying .NET runtime—for apps running in Azure. Whenever there's a major version deprecation, we plan to provide notice at least a year in advance for users to migrate their apps to a newer version," Microsoft mentions.

https://twitter.com/rickvdbosch/status/1204115191367114752
https://twitter.com/AzureTrenches/status/1204298388403044353

To know more about this in detail, read Azure Functions' official documentation.

Creating triggers in Azure Functions [Tutorial]
Azure Functions 2.0 launches with better workload support for serverless
Serverless computing wars: AWS Lambdas vs Azure Functions
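Since Functions 3.0 adds Node 12 alongside .NET Core 3.1, a function on the Node worker still looks exactly as it did on 2.x, which is why existing apps run without code changes. Below is a minimal HTTP-triggered sketch in TypeScript, assuming the standard @azure/functions typings; the greeting logic is purely illustrative.

```ts
import { AzureFunction, Context, HttpRequest } from "@azure/functions";

// Minimal HTTP-triggered function. The trigger/binding details live in the
// accompanying function.json, exactly as they did under Functions 2.x.
const httpTrigger: AzureFunction = async function (
  context: Context,
  req: HttpRequest
): Promise<void> {
  // Purely illustrative logic: greet by query parameter or request body.
  const name = req.query.name || (req.body && req.body.name) || "world";
  context.res = {
    status: 200,
    body: `Hello, ${name} - from Functions 3.0 on Node 12`,
  };
};

export default httpTrigger;
```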


Anti-paywall add-on is no longer available on the Mozilla website

Sugandha Lahoti
03 Dec 2018
4 min read
The anti-paywall add-on has been removed from the Mozilla website. The author of the add-on, Florent Daigniere, confirmed that it has been removed from both Chrome and Mozilla. "This was done because the add-on violated the Firefox Add-on Distribution Agreement and the Conditions of Use," Daigniere wrote. "It appears to be designed and promoted to allow users to circumvent paywalls, which is illegal."

Last year, Daigniere released the anti-paywall browser extension, which maximizes the chances of bypassing paywalls. On asking Mozilla why the add-on was removed, he got the reply:

"There are various laws in the US that prohibit tools for circumventing access controls like a paywall. Both Section 1201 of the Digital Millennium Copyright Act (DMCA) and the Computer Fraud and Abuse Act (CFAA) are examples. We are responding to a specific complaint that named multiple paywall-bypassing add-ons. It did not target only your add-on."

This news was one of the top stories on Hacker News, where people largely oppose Mozilla's move:

"Making it harder to install addons (and breaking all the old ones) is one of the things contributing to Mozilla losing share to Chrome. People used to use Firefox over Chrome because of all the great addons, which they then broke, leaving users with less reason not to use Chrome."

"I used to default to Firefox for work. Then they killed the old addons, which broke a major part of my workflow (FireFTP's 'open a file and as you edit it it automatically re-uploads' feature). So there was a lot less keeping me stuck to it."

"This extension just seems to strip tracking data and pretend to be a Google bot. It baffles me that this is somehow concerning enough to be taken down. And anyway, isn't making exemptions for Google's robots sort-of against their policy?"

Users also offered advice and suggestions to Daigniere on how he could proceed:

"I would consult with an attorney to determine legal options for an adequate defense and expected expenses. A consult is not a contract and you can change your mind if you are unwilling to take the risk with a lawsuit. I suspect the takedown notice is a DMCA takedown based upon a flawed assumption of the law. The hard part about this is arguing the technical merits of the case before non-technical people. While the takedown notice is probably in error they could still make a good argument around bypassing their security controls. You could appeal to the EFF or ACLU. If they are willing to take your case it will be pro bono."

"I'd just move on. To be honest sites with those types of paywalls should not be indexed. The loophole you are taking advantage of here is a bait and switch by these sites. They want the search traffic but don't want public access. Most of us have already adapted, however, and avoid these sites or pay for them. Your plugin title blatantly describes that you're avoiding paying for something they are charging for so even though it may not be illegal it's not something I'd waste energy fighting for."

"Rename the plugin and change the description. The message from Mozilla states that the problem is the intent of the plugin. The technological measures it actually takes are not illegal per se, but are illegal when used to circumvent paywalls. If you present this as a plug-in that allows you to view websites as the Google bot views them, for educational and debugging purposes, there is no problem. You can give the fact that it won't see the paywall as an example. It's actually useful for that purpose: you are not lying. It's just that most people will install the plugin for its 'side effects'. Their use of it will still be illegal, but the intent will not be illegal."

Read more of this conversation on Hacker News.

The State of Mozilla 2017 report focuses on internet health and user privacy
Mozilla criticizes EU's terrorist content regulation proposal, says it's a threat to user rights
Mozilla v. FCC: Mozilla challenges FCC's elimination of net neutrality protection rules

Developer community mourns the loss of Joe Armstrong, co-creator of Erlang

Sugandha Lahoti
22 Apr 2019
5 min read
Dr. Joe Armstrong, one of the creators of Erlang, passed away over the weekend at the age of 68. Dr. Armstrong's wife said that he died from an infection of the lungs following a recent diagnosis of pulmonary fibrosis. His lungs were donated to lung research.

Francesco Cesarini, founder of Erlang Solutions, tweeted about Joe's passing.
https://twitter.com/FrancescoC/status/1119596234166218754

Robert Virding, co-creator of Erlang, also paid his respects.
https://twitter.com/rvirding/status/1119610591885307904

The developer community has mourned the loss of Joe Armstrong, with a large number of developers taking to social media platforms to offer their condolences to Dr. Armstrong's family and pay their respects to him.

Dr. Armstrong's work with concurrent programming

Dr. Armstrong was best known for helping lay the foundations, in the '70s and '80s, of the most widely used concurrency models as we know them today. In concurrent programming, multiple events, code snippets, or programs are perceived to be executing at the same time. Unlike imperative languages, which use routines, or object-oriented languages, which use objects, concurrency-oriented languages use processes, actors, and agents as the main building blocks. Dr. Armstrong helped propel concurrent programming at a time when there was no IoT, web, massive multi-user online games, video streaming, automated trading, or online transactions.

The Erlang programming language

Erlang was co-created by Joe Armstrong alongside Robert Virding and Mike Williams in the 1980s at the Ericsson Computer Science Labs. While working there, Dr. Armstrong and his colleagues were looking for an approach to developing fault-tolerant and scalable systems, which resulted in Erlang-style concurrency. He later received a PhD in computer science from the Royal Institute of Technology in Stockholm, Sweden, in 2003. He is also the author of a number of key books on Erlang, including Concurrent Programming in Erlang and Programming Erlang: Software for a Concurrent World, and was featured in Coders at Work.

Erlang was originally built as a proprietary language for use only at Ericsson, to improve telephony applications. It was designed to be a fault-tolerant, distributed, real-time system that offered pattern matching and functional programming in one handy package. It was open-sourced to the public in 1998. Since then, it has enabled businesses big and small to create reliable systems, and it has remained one of the most popular open source languages, with compelling features such as concurrent processes, memory management, scheduling, distribution, and networking. The server of WhatsApp, the most popular messaging platform, is almost completely implemented in Erlang. In 2018, Erlang celebrated 20 years of being open source, tracing its journey from Ericsson to WhatsApp.

Erlang also inspired Elixir, a general-purpose programming language that runs on the Erlang virtual machine. Elixir is built on top of Erlang and shares the same abstractions for building distributed, fault-tolerant applications. Using Erlang modules in Elixir has helped in the creation of Nerves, which helps in building embedded software, and the web framework Phoenix.

Remembering Dr. Joe Armstrong

Many developers have shared their sentiments on Dr. Armstrong's passing, with most describing him as a kind and compassionate developer who was more interested in teaching than in his ego.
Thomas Gebert, a software developer, shared an email thread in which he asked Joe Armstrong about concurrency. He states, "Dr. Armstrong's enthusiasm about Erlang, distributed programming, and pretty much everything else about computers was really a good springboard for self-education." Even though Thomas asked some newbie questions about concurrency, Dr. Armstrong responded with an incredibly long, well-written email explaining much of the minutiae of how Erlang avoids common pitfalls, along with general concurrency theory. Thomas adds, "He was really good about explaining things in a way simple enough for me to understand, without coming off as patronizing or rude."

A lot of people also took to Twitter to share their experiences working with Dr. Armstrong.
https://twitter.com/zxq9_notits/status/1119602063506206725
https://twitter.com/glv/status/1119706037689491456
https://twitter.com/ktosopl/status/1119612076190601217
https://twitter.com/jboner/status/1119651034933100544

"He and I discussed distributed storage. Well detailed response from him that sent me reading for days. I aspire to be like him," reads a comment on Hacker News. Such was his popularity.

Here are some of his memorable quotes on a varied set of topics of interest to him:

"All significant energy gains in the last 50 odd years are the result of new hardware NOT software."
https://twitter.com/joeerl/status/1115988725111169025

"Prediction: One day computers might become useful."
https://twitter.com/joeerl/status/1114558139217711104

"One on the disadvantages of having a PhD in computer science is that I get asked really difficult questions. Like - "In gmail on my iPhone I press archive - can I get my mail back?" and "Why have they changed the interface?" Why no easy questions like what's a monad?"
https://twitter.com/joeerl/status/1113847695612022785

The Erlang Ecosystem Foundation launched at the Code BEAM SF conference
Elixir 1.7, the programming language for Erlang virtual machine, releases
Introducing Mint, a new HTTP client for Elixir


Angular CLI 8.3.0 releases with a new deploy command, faster production builds, and more

Bhagyashree R
26 Aug 2019
3 min read
Last week, the Angular team announced the release of Angular CLI 8.3.0. Along with a redesigned website, this release comes with a new deploy command and improves the previously introduced differential loading.

https://twitter.com/angular/status/1164653064898277378

Key updates in Angular CLI 8.3.0

Deploy directly from the CLI to a cloud platform with the new deploy command

Starting from Angular CLI 8.3.0, a new deploy command executes the deploy CLI builder associated with your project. It is essentially a simple alias for ng run MY_PROJECT:deploy. Many third-party builders implement deployment capabilities for different platforms, and you can add them to your project with ng add [package name]. After a package with deployment capability is added, your project's angular.json file is automatically updated with a deploy section, and you can then deploy your project simply by executing ng deploy. Currently, the deploy command supports deployment to Firebase, Azure, Zeit, Netlify, and GitHub. You can also create a builder yourself to use the ng deploy command, in case you are deploying to a self-managed server or there is no builder for the cloud platform you are using.

Improved differential loading

Angular CLI 8.0 introduced the concept of differential loading to maximize the browser compatibility of your web application. Most modern browsers support ES2015, but there might be cases when your app's users have a browser that doesn't. To target a wide range of browsers, you can use polyfill scripts and ship a single bundle containing all your compiled code and any polyfills that may be needed. However, that increased bundle size shouldn't penalize users who have modern browsers. This is where differential loading comes in: the CLI builds two separate bundles as part of your deployed application. The first bundle targets modern browsers, while the second targets legacy browsers and includes all necessary polyfills. Though this increases your application's browser compatibility, the production build used to take twice the time. Angular CLI 8.3.0 fixes this by changing how the command runs: the build targeting ES2015 is produced first and then directly down-leveled to ES5, instead of rebuilding the app from scratch. In case you encounter any issue, you can fall back to the previous behavior with NG_BUILD_DIFFERENTIAL_FULL=true ng build --prod.

Many Angular developers are excited about the new updates in Angular CLI 8.3.0.
https://twitter.com/vikerman/status/1164655906262409216
https://twitter.com/Santosh19742211/status/1164791877356277761

Some did question the usefulness of the deploy command. A developer on Reddit shared their perspective: "Honestly, I think Angular and the CLI are already big and complex enough. Every feature possibly creates bugs and needs to be maintained. While the CLI is incredibly useful and powerful there have been also many issues in the past. On the other hand, I must admit that I can't judge the usefulness of this feature: I've never used Firebase. Is it really so hard to deploy on it? Can't this be done with a couple of lines of a shell script? As already said: One should use CI/CD anyway."

To know more in detail about the new features in Angular CLI 8.3.0, check out the official docs. Also, check out the @angular-schule/ngx-deploy-starter repository to create a new builder for utilizing the deploy command.
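For that last case - writing a builder yourself - the Architect API keeps things small. Below is a minimal sketch of a custom deploy builder, assuming the @angular-devkit/architect API that projects like ngx-deploy-starter build on; the actual upload step is a hypothetical placeholder you would replace with whatever your server or platform needs.

```ts
import { BuilderContext, BuilderOutput, createBuilder } from "@angular-devkit/architect";
import { JsonObject } from "@angular-devkit/core";

// A minimal custom deploy builder: `ng deploy` resolves to this function via
// the deploy section that `ng add` (or you) wrote into angular.json.
async function deploy(options: JsonObject, context: BuilderContext): Promise<BuilderOutput> {
  const project = context.target ? context.target.project : "app";
  context.logger.info(`Deploying ${project}...`);

  // Hypothetical placeholder: push the built dist/ output to your
  // self-managed server (rsync, object storage, FTP, ...). This step is
  // yours to implement; it is not part of the Architect API.
  // await uploadDistFolder(options);

  return { success: true };
}

export default createBuilder(deploy);
```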
Angular 8.0 releases with major updates to framework, Angular Material, and the CLI
Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0


Michelangelo PyML: Introducing Uber’s platform for rapid machine learning development

Amey Varangaonkar
25 Oct 2018
3 min read
Transportation network giant Uber has developed Michelangelo PyML, a Python-powered platform for rapid prototyping of machine learning models. The aim of the platform is to offer machine learning as a service, democratizing machine learning and making it possible to scale AI models to meet business needs efficiently. Michelangelo PyML is an extension of Michelangelo, the platform Uber developed for large-scale machine learning in 2017. It will make it possible for Uber's data scientists and engineers to build intelligent Python-based models that run at scale for online as well as offline tasks.

Why Uber chose PyML for Michelangelo

Uber developed Michelangelo in September 2017 with a clear focus on high performance and scalability. It currently enables Uber's product teams to design, build, deploy, and maintain machine learning solutions at scale, and powers roughly one million predictions per second. However, that also came at the cost of flexibility. Users were mainly faced with two critical issues:

- Models could be trained only with the algorithms that Michelangelo natively supported. To run unsupported algorithms, the platform's capability had to be extended with additional training and deployment components, which caused a lot of inconvenience at times.
- Users could not use any feature transformations apart from those offered by Michelangelo's DSL (domain-specific language).

Apart from these constraints, Uber also observed that data scientists usually prefer Python over other programming languages, given the rich suite of Python libraries and frameworks available for effective analytics and machine learning. Many data scientists also gathered and worked with data locally using tools such as pandas, scikit-learn, and TensorFlow, as opposed to big data tools such as Apache Spark and Hive, which they could spend hours setting up.

How PyML improves Michelangelo

Based on the challenges faced in using Michelangelo, Uber decided to revamp the platform by integrating PyML to make it more flexible. PyML provides a concrete framework for data scientists to build and train machine learning models that can be deployed quickly, safely, and reliably across different environments - without any restriction on the types of data they can use or the algorithms they can choose to build the model. That makes it an ideal choice to integrate with a platform like Michelangelo. By integrating Python-based models that can operate at scale with Michelangelo, Uber will now be able to handle online as well as offline queries and serve smart predictions quite easily. This could be a potential masterstroke by Uber, as it tries to boost its business and revenue growth after it slowed down over the last year.

Why did Uber created Hudi, an open source incremental processing framework on Apache Hadoop?
Uber's Head of corporate development, Cameron Poetzscher, resigns following a report on a 2017 investigation into sexual misconduct
Uber's Marmaray, an Open Source Data Ingestion and Dispersal Framework for Apache Hadoop


Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32

Bhagyashree R
01 Apr 2019
2 min read
Last week, the team behind Ubuntu announced the release of Ubuntu 19.04 Disco Dingo Beta, which comes with Linux 5.0 support, GNOME 3.32, and more. The stable version is expected to release on April 18th, 2019. Following are some of the updates in Ubuntu 19.04 Disco Dingo:

Updates in the Linux kernel

Ubuntu 19.04 is based on Linux 5.0, which was released last month. It comes with support for the AMD Radeon RX Vega M graphics processor, complete support for the Raspberry Pi 3B and the 3B+, Qualcomm Snapdragon 845, and much more.

Toolchain upgrades

The tools are upgraded to their latest releases. The upgraded toolchain includes glibc 2.29, OpenJDK 11, Boost 1.67, Rustc 1.31, updated GCC 8.3, Python 3.7.2 as default, Ruby 2.5.3, PHP 7.2.15, and more.

Updates in Ubuntu Desktop

This release ships with the latest GNOME 3.32, giving it a refreshed visual design. It also brings a few performance improvements and new features:

- GNOME Disks now supports VeraCrypt, a utility used for on-the-fly encryption.
- A panel has been added to the Settings menu to help users manage Thunderbolt devices.
- More shell components are cached in GPU RAM, which reduces load and increases the FPS count.
- Desktop zoom works much more smoothly.
- An option has been added to automatically submit error reports via the error reporting dialog window.

Other updates include new Yaru icon sets, Mesa 19.0, QEMU 13.1, and libvirt 14.0. This release will be supported for 9 months, until January 2020. Users who require long-term support are recommended to use Ubuntu 18.04 LTS instead. To read the full list of updates, visit Ubuntu's official website.

Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users
Ubuntu releases Mir 1.0.0
Ubuntu free Linux Mint Project, LMDE 3 'Cindy' Cinnamon, released

Cloudflare’s Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly

Melisha Dsouza
12 Nov 2018
5 min read
Cloudflare's cloud computing platform, Workers, doesn't use containers or virtual machines to deploy computing. Workers allows users to build serverless applications on Cloudflare's data centers. It provides a lightweight JavaScript execution environment to augment existing applications or create entirely new ones, without having to configure or maintain infrastructure.

Why did Cloudflare create Workers?

Cloudflare previously offered only a limited set of features and options that developers could build on, with little flexibility for customers to build features themselves. To enable users to write code on its servers deployed around the world, Cloudflare had to allow untrusted code to run with low overhead, processing millions of requests per second at very high speed. Customers couldn't write their own code without the team's supervision. Traditional virtualization and container technologies like Kubernetes would be expensive to use - let alone running thousands of Kubernetes pods across Cloudflare's 155 data centers, which would be extremely resource-intensive. Enter Cloudflare's 'Workers' to solve these issues.

Features of Workers

#1 'Isolates' - run code from multiple customers

Isolates is a technology built by the Google Chrome team to power the JavaScript engine in that browser, V8. Isolates are lightweight contexts that group variables with the code allowed to mutate them. A single process can run hundreds or thousands of Isolates, easily switching between them. Isolates thus make it possible to run untrusted code from different customers within a single operating system process. They start very quickly (a given Isolate can start around a hundred times faster than a Node process on a machine) and do not allow one Isolate to access the memory of another.

#2 Cold starts

Workers rethinks the 'cold start' that occurs when a new copy of code has to be started on a machine. In the Lambda world, this means spinning up a new containerized process, which can delay requests by as much as ten seconds, ending up in a terrible user experience. A Lambda can only process one single request at a time, so a new Lambda has to be cold-started every time an additional concurrent request is received; and if a Lambda doesn't get a request soon enough, it will be shut down and it all starts again. Since Workers don't have to start a process, Isolates start in 5 milliseconds. Workers scale and deploy quickly, thoroughly upgrading existing serverless technologies.

#3 Context switching

A normal context switch performed by an OS can take as much as 100 microseconds. Multiplied by all the Node, Python, or Go processes running on average Lambda servers, this leads to heavy overhead, splitting the CPU's power between running the customer's code and switching between processes. An Isolate-based system runs all of the code in a single process, which means there are no expensive context switches: the machine can spend virtually all of its time running your code.

#4 Memory

V8 was designed to be multi-tenant: it runs the code from the many tabs in a user's browser in isolated environments within a single process. Since memory is often the highest cost of running a customer's code, V8 lowers it and dramatically changes the cost economics.

#5 Security

It is not trivially safe to run code from multiple customers within the same process. Testing, fuzzing, penetration testing, and bounties are required to build a truly secure system of that complexity. The open-source nature of V8 helps in creating an isolation layer that lets Cloudflare take care of the security aspect.

Cloudflare's Workers also allows users to build responses from multiple background service requests, whether to the Cloudflare cache, the application origin, or third-party APIs. Users can build conditional responses for inbound requests to assess and subsequently block or reroute malicious or unauthorized requests. All of this at just a third of what AWS costs, remarked an astute Twitter observer.
https://twitter.com/seldo/status/1061461318765555713
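Concretely, a Worker is just JavaScript handling fetch events inside an Isolate, using the service-worker-style API. A minimal sketch in TypeScript follows (the FetchEvent type comes from TypeScript's webworker lib or the @cloudflare/workers-types package); the Authorization-header rule is an illustrative stand-in for the conditional blocking and rerouting described above, not a Cloudflare example.

```ts
// The FetchEvent type comes from TypeScript's webworker lib (or the
// @cloudflare/workers-types package); in plain JavaScript the annotations
// are simply dropped. The Authorization rule below is illustrative only.
addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  // Conditional response: block unauthorized requests right at the edge...
  if (!request.headers.get("Authorization")) {
    return new Response("Unauthorized", { status: 401 });
  }
  // ...and pass everything else through to the application origin.
  return fetch(request);
}
```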
Running code through WebAssembly

One of the disadvantages of using Workers is that, as an Isolate-based system, it cannot run arbitrary compiled code. Users have to either write their code in JavaScript or in a language that targets WebAssembly (e.g., Go or Rust). If a user cannot recompile their processes, they won't be able to run them in an Isolate. This has been nicely summarized in the above-mentioned tweet: its author notes that WebAssembly modules are already in the npm registry, which creates the potential for npm to become the dependency management solution for every programming language. He mentions that the "availability of open source libraries to achieve the task at hand is the primary reason people pick a programming language". This leads us to the question: how does software development change when you can use any library anytime?

You can head over to the Cloudflare blog to understand more about containerless cloud computing.

Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers
Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites


What to expect in Unreal Engine 4.23?

Vincy Davis
12 Jul 2019
3 min read
A few days ago, Epic released the first preview of Unreal Engine 4.23 for the developer community to check out its features and report back on any issues before the final release. This version adds Skin Weight Profiles, VR Scouting tools, and new pro video codecs, along with many updates to features such as XR, animation, core, virtual production, gameplay and scripting, audio, and more. The previous version, Unreal Engine 4.22, focused on adding photorealism in real-time environments.

Some updates in Unreal Engine 4.23

XR

- HoloLens 2 native support.
- Stereo Panoramic Capture tool improvements: with updates to the Stereo Panoramic Capture tool, it will be much easier to capture high-quality stereoscopic stills and videos of the virtual world in industry-standard formats, and to view those captures in an Oculus or GearVR headset.

Animation

- Skin Weight Profiles: the new Skin Weight Profile system will enable users to override the original Skin Weights that are stored with a Skeletal Mesh.
- Animation Streaming: aimed at improving memory management for animation data.
- Sub Anim Graphs: new Sub Anim Graphs will allow dynamic switching of sub-sections of an Animation Graph, enabling multi-user collaboration and memory savings for vaulted or unavailable items.

Core

- Unreal Insights tool: this will help developers collect and analyze data about the Engine's behavior in a uniform fashion. The system has three components:
  - The Trace System API gathers information from runtime systems in a consistent format and captures it for later processing. Multiple live sessions can contribute data at the same time.
  - The Analysis API processes data from the Trace System API and converts it into a form that the Unreal Insights tool can use.
  - The Unreal Insights tool provides an interactive visualization of data processed through the Analysis API, giving developers a unified interface for stats, logs, and metrics from their application.

Virtual production

- Remote control over HTTP
- Extended Live Link plugin
- New VR Scouting tools
- New pro video codecs
- nDisplay: warp and blend for curved surfaces
- Virtual camera improvements

Gameplay & scripting

- UMG Widget Diffing: expanded and improved Blueprint diffing now supports Widget Blueprints as well as Actor and Animation Blueprints.

Audio

- Open Sound Control: enables a native implementation of the Open Sound Control (OSC) standard in an Unreal Engine plugin.
- Wave Table Synthesis: the new monophonic wavetable synthesizer leverages UE4's built-in curve editor to author time-domain wavetables, enabling a wide range of sound design capabilities that can be driven by gameplay parameters.

There are many more updates for the Editor, the Niagara editor, physics simulation, the rendering system, and the Sequencer multi-track editor in Unreal Engine 4.23. The Unreal Engine team has notified users that the preview release is not fully quality tested and should be considered unstable until the final release.

Users are excited to try the latest version of Unreal Engine 4.23.
https://twitter.com/ClicketyThe/status/1149070536762372096
https://twitter.com/cinedatabase/status/1149077027565309952
https://twitter.com/mygryphon/status/1149334005524750337

Visit the Unreal Engine page for more details.
Unreal Engine 4.22 update: support added for Microsoft's DirectX Raytracing (DXR)
Unreal Engine 4.20 released with focus on mobile and immersive (AR/VR/MR) devices
What's new in Unreal Engine 4.19?


PHP 7.4 releases with type declarations, shorthand syntax in Arrow functions, and more!

Vincy Davis
29 Nov 2019
2 min read
Yesterday, the PHP development team announced the availability of PHP 7.4, the fourth feature update to the PHP 7 series. PHP 7.4 comes with numerous improvements and new features.

Key features in PHP 7.4

- Class properties now support type declarations.
- Arrow functions provide a shorthand syntax for defining functions with implicit by-value scope binding.
- Full variance support is only available if autoloading is used; within a single file, only non-cyclic type references are possible.
- Numeric literals can contain underscores between digits.
- Weak references allow the programmer to retain a reference to an object that does not prevent the object from being destroyed.
- Exceptions can now be thrown from __toString(); this was previously not permitted, as it used to result in a fatal error.
- CURLFile now supports stream wrappers in addition to plain file names.
- The FILTER_VALIDATE_FLOAT filter now supports the min_range and max_range options, with the same semantics as FILTER_VALIDATE_INT.
- A new FFI extension provides a simple way to call native functions, access native variables, and create/access data structures defined in C libraries.
- A new IMG_FILTER_SCATTER image filter applies a scatter filter to images.

Read More: The Union Types 2.0 proposal gets a go-ahead for PHP 8.0

Users are happy with the new features in the PHP 7.4 release.
https://twitter.com/heiglandreas/status/1199989039249678337

To see the full list of changes, head over to the PHP archive page. You can also check out the PHP manual to learn how to migrate from PHP 7.3.x to PHP 7.4.x.

PEAR's (PHP Extension and Application Repository) web server disabled due to a security breach
Symfony leaves PHP-FIG, the framework interoperability group
Google App Engine standard environment (beta) now includes PHP 7.2
Redox OS will soon permanently run rustc, the compiler for the Rust programming language, says Redox creator Jeremy Soller
Homebrew 2.2 releases with support for macOS Catalina

Developers can now incorporate Unity features into native iOS and Android apps

Sugandha Lahoti
18 Jun 2019
2 min read
Yesterday, Unity announced that from Unity 2019.3.a2 onwards, Android and iOS developers will be able to incorporate Unity features into their apps and games. Developers will be able to integrate the Unity runtime components and their content (augmented reality, 3D/2D real-time rendering, 2D mini-games, and more) into a native platform project, using Unity as a library. "We know there are times when developers using native platform technologies (like Android/Java and iOS/Objective C) want to include features powered by Unity in their apps and games," said J.C. Cimetiere, senior technical product manager for mobile platforms, in a blog post.

How it works

The overall mobile app build process stays the same: Unity creates the iOS Xcode and Android Gradle projects. To enable this feature, however, the Unity team has modified the structure of the generated iOS Xcode and Android Gradle projects as follows:

- A library part - an iOS framework and an Android Archive (AAR) file - that includes all source files and plugins
- A thin launcher part that includes app representation data and runs the library part

The team has also released step-by-step instructions on how to integrate Unity as a library on iOS and Android, including basic sample projects.

Currently, Unity as a Library supports full-screen rendering only; rendering on only a part of the screen is not supported. Loading more than one instance of the Unity runtime is also not supported, and developers need to adapt third-party plugins (native or managed) for them to work properly.

Unity hopes that this integration will boost AR marketing by helping brands and creative agencies easily insert AR directly into their native mobile apps.

Unity Editor will now officially support Linux
Unity has launched the 'Obstacle Tower Challenge' to test AI game players
Obstacle Tower Environment 2.0: Unity announces Round 2 of its 'Obstacle Tower Challenge' to test AI game players


Anaconda 5.2 releases!

Sunith Shetty
01 Jun 2018
2 min read
The Anaconda team has announced the release of Anaconda Distribution 5.2. The new version brings several changes in terms of platform updates, user-facing improvements, and backend improvements.

Anaconda is a free, open-source distribution of Python that provides a fast, easy, and powerful way to perform data science and machine learning tasks. It is an efficient platform for carrying out large-scale data processing, scientific computing, and more. With over 6 million users, it includes more than 250 data science packages suitable for all major operating systems - Windows, Linux, and macOS - and every package version is managed by the conda package management system.

Some of the noteworthy changes in Anaconda Distribution 5.2:

Major highlights

- More than 100 packages have been updated or added (notable updates include Qt v5.9.5, OpenSSL v1.0.2o, NumPy 1.14.3, SciPy v1.1.0, Matplotlib v2.2.2, and Pandas 0.23.0).
- Windows installers now control their environment more carefully, so that even if menu shortcuts fail to be created, installation does not run into problems.
- The macOS pkg installers' developer certificate has been updated to Anaconda, Inc.

User-facing improvements

- All default channels now point to repo.anaconda.com instead of repo.continuum.io.
- More dynamic shortcut working-directory behavior improves Windows multi-user installations.
- To prevent usability issues, Windows installers now disallow the characters ! % ^ = in the installation path.

Backend improvements

- Security fixes for more than 20 packages, based on in-depth Common Vulnerabilities and Exposures (CVE) analysis.
- Improved behavior of --prune, because the history file is now updated correctly in the conda-meta directory.
- The Windows installer now uses a trimmed-down value for the PATH environment variable, to avoid DLL-hell problems with existing software.

In addition to these, several changes have been made across all x86 platforms, Linux distributions, and Windows distributions. For the complete list of changes, refer to the release notes. To download the new version, get the installer from the official page; alternatively, update an existing installation with conda update conda followed by conda install anaconda=5.2.

30 common data science terms explained
Data science on Windows is a big no
10 Machine Learning Tools to watch in 2018