
Tech News - Cloud & Networking


AWS introduces ‘AWS DataSync’ for automated, simplified, and accelerated data transfer 

Natasha Mathur
27 Nov 2018
3 min read
The AWS team yesterday introduced AWS DataSync, an online data transfer service for automating data movement. AWS DataSync transfers data from on-premises storage to Amazon S3 or Amazon Elastic File System (Amazon EFS) and vice versa. Let's have a look at what's new in AWS DataSync.

Key functionalities

Move data 10x faster: AWS DataSync uses a purpose-built data transfer protocol along with a parallel, multi-threaded architecture that can run 10 times as fast as open-source data transfer tools. This speeds up migrations as well as recurring data processing workflows for analytics, machine learning, and data protection.

Per-gigabyte fee: DataSync is a managed service with a simple per-gigabyte fee; you pay only for the amount of data you transfer. There are no upfront costs and no minimum fees.

DataSync Agent: The AWS DataSync Agent is a crucial part of the service. It connects your existing storage to the in-cloud service to automate, scale, and validate transfers, so you don't have to write scripts or modify your applications.

Easy setup: DataSync is easy to set up and use (both Console and CLI access are available). All you need to do is deploy the DataSync agent on-premises, connect it to your file systems using the Network File System (NFS) protocol, select Amazon EFS or S3 as your AWS storage, and start moving data (a code sketch follows at the end of this piece).

Secure data transfer: AWS DataSync transfers data securely over the Internet or AWS Direct Connect, with automatic encryption and data integrity validation. This minimizes the in-house development and management needed for fast, secure transfers.

Simplify and automate data transfer: With AWS DataSync, you can perform one-time data migrations, transfer on-premises data for timely in-cloud analysis, and automate replication to AWS for data protection and recovery.

AWS DataSync is available now in the US East, US West, Europe, and Asia Pacific Regions. For more information, check out the official AWS DataSync blog post.

Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018
Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
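Coming back to the 'Easy setup' flow described above, here is a minimal sketch of the equivalent API calls using Python and boto3 (the AWS SDK for Python). It is not code from the announcement: it assumes a DataSync agent has already been deployed and activated on-premises, and the hostname, ARNs, and bucket name shown are hypothetical placeholders.

```python
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# Register the on-premises NFS share as the source location.
# The hostname and agent ARN below are placeholders.
nfs_location = datasync.create_location_nfs(
    ServerHostname="nfs.example.internal",
    Subdirectory="/exports/data",
    OnPremConfig={
        "AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-0abc"]
    },
)

# Register an S3 bucket as the destination location.
s3_location = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::my-datasync-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/datasync-s3-access"},
)

# Create a transfer task linking source and destination, then run it.
task = datasync.create_task(
    SourceLocationArn=nfs_location["LocationArn"],
    DestinationLocationArn=s3_location["LocationArn"],
    Name="onprem-to-s3-migration",
)
execution = datasync.start_task_execution(TaskArn=task["TaskArn"])
print("Started:", execution["TaskExecutionArn"])
```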


Day 1 at the Amazon re:Invent conference - AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!

Melisha Dsouza
27 Nov 2018
6 min read
Looks like Christmas has come early this year for AWS developers! Following Microsoft's Surface devices and its own wide range of Alexa products, Amazon has once again made a series of big releases at the Amazon re:Invent 2018 conference. These announcements include AWS RoboMaker to help developers test and deploy robotics applications, AWS Transfer for SFTP (a fully managed SFTP service for Amazon S3), EC2 A1 instances powered by Arm-based AWS Graviton processors, Amazon EC2 C5n instances featuring 100 Gbps of network bandwidth, and much more! Let's take a look at what developers can expect from these releases.

#1 AWS RoboMaker helps developers develop, test, and deploy robotics applications at scale

AWS RoboMaker allows developers to develop, simulate, test, and deploy intelligent robotics applications at scale. Code can be developed inside a cloud-based development environment and tested in a Gazebo simulation. Finally, the finished code can be deployed to a fleet of one or more robots. RoboMaker uses an open-source robotics software framework, Robot Operating System (ROS), with connectivity to cloud services. The service suite includes AWS machine learning, monitoring, and analytics services that enable a robot to stream data, navigate, communicate, comprehend, and learn. RoboMaker can work with robots of many different shapes and sizes running in many different physical environments. After a developer designs and codes an algorithm for the robot, they can also monitor how the algorithm performs in different conditions or environments. You can check out an interesting simulation of a robot using RoboMaker at the AWS site. To learn more about ROS, read The Open Source Robot Operating System (ROS) and AWS RoboMaker.

#2 AWS Transfer for SFTP – Fully Managed SFTP Service for Amazon S3

AWS Transfer for SFTP is a fully managed service that enables the direct transfer of files to and from Amazon S3 using the Secure File Transfer Protocol (SFTP). Users just have to create a server, set up user accounts, and associate the server with one or more Amazon Simple Storage Service (S3) buckets (a short API sketch follows after #3). AWS allows users to migrate their file transfer workflows to AWS Transfer for SFTP by integrating with existing authentication systems and providing DNS routing with Amazon Route 53. Together with other AWS services, a customer's data in S3 can be used for processing, analytics, machine learning, and archiving. Along with control over user identity, permissions, and keys, users have full access to the underlying S3 buckets and can make use of many S3 features, including lifecycle policies, multiple storage classes, several options for server-side encryption, and versioning. On the outbound side, users can generate reports, documents, manifests, custom software builds, and so forth using other AWS services, and then store them in S3 for easy, controlled distribution to customers and partners.

#3 EC2 Instances (A1) Powered by Arm-Based AWS Graviton Processors

Amazon has launched EC2 instances powered by Arm-based AWS Graviton processors, built around Arm cores. The A1 instances are optimized for performance and cost, and are a great fit for scale-out workloads where the load can be shared across a group of smaller instances. This includes containerized microservices, web servers, development environments, and caching fleets. AWS Graviton processors are custom-designed by AWS and deliver targeted power, performance, and cost optimizations.
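Here is what the #2 workflow above might look like in code: a minimal sketch using Python and boto3, assuming an IAM role granting S3 access already exists. The role ARN, bucket path, user name, and public key are hypothetical placeholders, not values from the announcement.

```python
import boto3

transfer = boto3.client("transfer", region_name="us-east-1")

# Create a managed SFTP server endpoint with service-managed authentication.
server = transfer.create_server(IdentityProviderType="SERVICE_MANAGED")

# Add a user whose SFTP home directory maps into an S3 bucket.
# The role ARN, bucket path, and SSH public key are placeholders.
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="alice",
    Role="arn:aws:iam::123456789012:role/sftp-s3-access",
    HomeDirectory="/my-sftp-bucket/alice",
    SshPublicKeyBody="ssh-rsa AAAA... alice@example.com",
)

print("SFTP endpoint ready, server ID:", server["ServerId"])
```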
A1 instances are built on the AWS Nitro System, which maximizes resource efficiency for customers while still supporting familiar AWS and Amazon EC2 instance capabilities such as EBS, networking, and AMIs.

#4 Introducing Amazon EC2 C5n Instances featuring 100 Gbps of Network Bandwidth

AWS announced the availability of C5n instances that can utilize up to 100 Gbps of network bandwidth, providing significantly higher network performance across all instance sizes, ranging from 25 Gbps of peak bandwidth on smaller instance sizes to 100 Gbps on the largest instance size. They are powered by 3.0 GHz Intel Xeon Scalable processors (Skylake) and support the Intel Advanced Vector Extensions 512 (AVX-512) instruction set. These instances also feature a 33% higher memory footprint compared to C5 instances and are ideal for applications that can take advantage of improved network throughput and packet rate performance. Based on the next-generation AWS Nitro System, C5n instances make 100 Gbps networking available to network-bound workloads. Workloads on C5n instances take advantage of the security, scalability, and reliability of Amazon's Virtual Private Cloud (VPC). The improved network performance will accelerate data transfer to and from S3, reducing the data ingestion wait time for applications and speeding up delivery of results.

#5 Introducing AWS Global Accelerator

AWS Global Accelerator is a network layer service that enables organizations to seamlessly route traffic to multiple regions while improving availability and performance for their end users. It supports both TCP and UDP protocols, and performs health checks of a user's target endpoints while routing traffic away from unhealthy applications. AWS Global Accelerator uses AWS' global network to direct internet traffic from an organization's users to their applications running in AWS Regions, based on a user's geographic location, application health, and configurable routing policies. You can head over to the AWS blog to get an in-depth view of how this service works.

#6 Amazon's 'Machine Learning University'

In addition to these announcements at re:Invent, Amazon also released a blog post introducing its 'Machine Learning University', where the company announced that the same machine learning courses used to train engineers at Amazon can now be accessed by all developers through AWS. These courses, available as part of a new AWS Training and Certification Machine Learning offering, will help organizations accelerate the growth of machine learning skills amongst their employees. With more than 30 self-service, self-paced digital courses and over 45 hours of courses, videos, and labs, developers can rest assured that ML fundamentals, real-world examples, and labs will help them explore the domain. What's more? The digital courses are available at no charge; developers only pay for the services used in labs and exams during their training.

This announcement came right after Amazon Echo Auto was launched at Amazon's hardware event. In what Amazon describes as bringing 'Alexa to vehicles', the Amazon Echo Auto is a small dongle that plugs into the car's infotainment system, giving drivers the smart assistant and voice control for hands-free interactions. Users can ask for things like traffic reports, add products to shopping lists, and play music through Amazon's entertainment system. Head over to What's new with AWS to stay updated on upcoming AWS announcements.
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS


Introducing TigerGraph Cloud: A database as a service in the Cloud with AI and Machine Learning support

Savia Lobo
27 Nov 2018
3 min read
Today, TigerGraph, the world's fastest graph analytics platform for the enterprise, introduced TigerGraph Cloud, which it calls the simplest, most robust, and most cost-effective way to run scalable graph analytics in the cloud. With TigerGraph Cloud, users can easily get their TigerGraph services up and running. They can also tap into TigerGraph's library of customizable graph algorithms to support key use cases, including AI and machine learning. It provides data scientists, business analysts, and developers with a cloud-based service for applying SQL-like queries for faster and deeper insights into data, and it enables organizations to tap into the power of graph analytics within hours.

Features of TigerGraph Cloud

Simplicity: It forgoes the need to set up, configure, or manage servers, schedule backups or monitoring, or look for security vulnerabilities.

Robustness: TigerGraph relies on the same framework, providing point-in-time recovery, powerful configuration options, and stability, that has been used for its own workloads over several years.

Application Starter Kits: It offers out-of-the-box starter kits for quicker application development for use cases such as Anti-Fraud, Anti-Money Laundering (AML), Customer 360, Enterprise Graph analytics, and more. These starter kits include graph schemas, sample data, preloaded queries, and a library of customizable graph algorithms (PageRank, Shortest Path, Community Detection, and others). TigerGraph makes it easy for organizations to tailor such algorithms for their own use cases (a minimal illustration of PageRank follows at the end of this piece).

Flexibility and elastic pricing: Users pay for exactly the hours they use and are billed on a monthly basis. They can spin up a cluster for a few hours for minimal cost, or run larger, mission-critical workloads with predictable pricing. This new cloud offering will also be available for production on AWS, with other cloud availability forthcoming.

Yu Xu, founder and CEO of TigerGraph, said, "TigerGraph Cloud addresses these needs, and enables anyone and everyone to take advantage of scalable graph analytics without cloud vendor lock-in. Organizations can tap into graph analytics to power explainable AI - AI whose actions can be easily understood by humans - a must-have in regulated industries. TigerGraph Cloud further provides users with access to our robust graph algorithm library to support PageRank, Community Detection and other queries for massive business advantage."

Philip Howard, research director at Bloor Research, said, "What is interesting about TigerGraph Cloud is not just that it provides scalable graph analytics, but that it does so without cloud vendor lock-in, enabling companies to start immediately on their graph analytics journey."

According to TigerGraph, "Compared to TigerGraph Cloud, other graph cloud solutions are up to 116x slower on two hop queries, while TigerGraph Cloud uses up to 9x less storage. This translates into direct savings for you."

TigerGraph also announced new marquee customers, including Intuit, Zillow, and PingAn Technology, among other leading enterprises in cybersecurity, pharmaceuticals, and banking. To know more about TigerGraph Cloud in detail, visit its official website.
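For readers unfamiliar with the algorithms named in the starter kits, the following minimal Python sketch shows what PageRank computes on a toy directed graph. It illustrates the algorithm itself only; it is not TigerGraph code, and the graph, damping factor, and iteration count are arbitrary choices for the example.

```python
# Minimal power-iteration PageRank on a toy directed graph.
# Illustrative only; TigerGraph ships its own tunable implementation.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
damping, iterations = 0.85, 50

# Start with a uniform rank distribution.
rank = {node: 1.0 / len(graph) for node in graph}

for _ in range(iterations):
    # Each node keeps a base share, then receives rank from its in-links.
    new_rank = {node: (1.0 - damping) / len(graph) for node in graph}
    for node, neighbors in graph.items():
        share = damping * rank[node] / len(neighbors)
        for neighbor in neighbors:
            new_rank[neighbor] += share
    rank = new_rank

for node, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")
```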
MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
OpenStack Foundation to tackle open source infrastructure problems, will conduct conferences under the name 'Open Infrastructure Summit'


Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store

Sugandha Lahoti
27 Nov 2018
3 min read
At the ongoing Amazon re:Invent 2018, Amazon announced that AWS Key Management Service (KMS) has been integrated with AWS CloudHSM. Users now have the option to create their own KMS custom key store: they can generate, store, and use their KMS keys in hardware security modules (HSMs) through KMS. The KMS custom key store satisfies compliance obligations that would otherwise require the use of on-premises hardware security modules (HSMs), and it supports AWS services and encryption toolkits that are integrated with KMS.

Previously, AWS CloudHSM was not widely integrated with other AWS managed services. So, if someone required direct control of their HSMs but still wanted to use and store regulated data in AWS managed services, they had to choose between changing those requirements, not using a given AWS service, or building their own solution. With a custom key store, users can configure their own CloudHSM cluster and authorize KMS to use it as a dedicated key store for keys, rather than the default KMS key store (a code sketch follows at the end of this piece). When using a KMS CMK in a custom key store, the cryptographic operations under that key are performed exclusively in the developer's own CloudHSM cluster.

Master keys stored in a custom key store are managed in the same way as any other master key in KMS and can be used by any AWS service that encrypts data and supports KMS customer-managed CMKs. The use of a custom key store does not affect KMS charges for storing and using a CMK. However, it does come with an increased cost and a potential impact on performance and availability.

Things to consider before using a custom key store

- Each custom key store requires the CloudHSM cluster to contain at least two HSMs. CloudHSM charges vary by region, and the pricing comes to at least $1,000 per month per HSM if each device is permanently provisioned.
- The number of HSMs determines the rate at which keys can be used. Users should keep in mind the intended usage patterns for their keys and ensure appropriate provisioning of HSM resources.
- The number of HSMs and the use of Availability Zones (AZs) impacts the availability of a cluster.
- Configuration errors may result in a custom key store being disconnected or key material being deleted. Users need to manually set up HSM clusters, configure HSM users, and potentially restore HSMs from backup. These are security-sensitive tasks for which users should have the appropriate resources and organizational controls in place.

Read more about KMS custom key stores on Amazon.

How Amazon is reinventing Speech Recognition and Machine Translation with AI
AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
Introducing Automatic Dashboards by Amazon CloudWatch for monitoring all AWS Resources
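To make the setup above concrete, here is a minimal sketch using Python and boto3. It is illustrative rather than official sample code: it assumes an active CloudHSM cluster with the kmsuser account already configured, and the cluster ID, certificate file, and password shown are hypothetical placeholders.

```python
import time
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Register an existing CloudHSM cluster as a custom key store.
# The cluster ID, trust anchor certificate, and password are placeholders.
with open("customerCA.crt") as f:
    trust_anchor = f.read()

store = kms.create_custom_key_store(
    CustomKeyStoreName="my-custom-key-store",
    CloudHsmClusterId="cluster-1a23b4cdefg",
    TrustAnchorCertificate=trust_anchor,
    KeyStorePassword="kmsuser-password",
)
store_id = store["CustomKeyStoreId"]

# Connecting is asynchronous; poll until the store reports CONNECTED.
kms.connect_custom_key_store(CustomKeyStoreId=store_id)
while True:
    state = kms.describe_custom_key_stores(CustomKeyStoreId=store_id)[
        "CustomKeyStores"][0]["ConnectionState"]
    if state == "CONNECTED":
        break
    time.sleep(30)

# Create a CMK whose key material lives only in the CloudHSM cluster.
key = kms.create_key(
    CustomKeyStoreId=store_id,
    Origin="AWS_CLOUDHSM",
    Description="CMK backed by our own CloudHSM cluster",
)
print("Created CMK:", key["KeyMetadata"]["KeyId"])
```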


Introducing Automatic Dashboards by Amazon CloudWatch for monitoring all AWS Resources

Savia Lobo
26 Nov 2018
1 min read
Last week, Amazon CloudWatch, a monitoring and management service, introduced Automatic Dashboards for monitoring all AWS resources. These Automatic Dashboards are available in all AWS public regions at no additional charge.

Through CloudWatch Automatic Dashboards, users can now get aggregated views of the health and performance of all their AWS resources. This allows users to quickly monitor and explore account-based and resource-based views of metrics and alarms, and to easily drill down to understand the root cause of performance issues. Once a cause is identified, users can act quickly by going directly to the AWS resource.

Features of these Automatic Dashboards:

- They are pre-built with recommended best practices for AWS services
- They remain resource-aware
- They are dynamically updated to reflect the latest state of important performance metrics
- Users can filter and troubleshoot down to a specific view, without additional code, to reflect the latest state of their AWS resources

To know more about Automatic Dashboards in detail, visit the official website.

AWS updates the face detection, analysis and recognition capabilities in Amazon Rekognition
Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS
AWS announces more flexibility in its Certification Exams, drops its exam prerequisites


Linux is reverting the STIBP support due to major slowdowns in Linux 4.20

Bhagyashree R
23 Nov 2018
2 min read
Linux 4.20 has shown major performance regressions, and the reason behind them is Single Thread Indirect Branch Predictors (STIBP), as shared by Phoronix yesterday. This support is being reverted in the upcoming Linux 4.19.4 and 4.14.83 point releases.

Linus Torvalds, the creator of the Linux kernel, was also surprised by the performance hit on Linux 4.20 resulting from the STIBP introduction. He posted to the kernel mailing list that the performance impact was not communicated before the patches were merged, and believes that this should not be enabled by default:

"This was marked for stable, and honestly, nowhere in the discussion did I see any mention of just *how* bad the performance impact of this was. When performance goes down by 50% on some loads, people need to start asking themselves whether it was worth it. It's apparently better to just disable SMT entirely, which is what security-conscious people do anyway. So why do that STIBP slow-down by default when the people who *really* care already disabled SMT? I think we should use the same logic as for L1TF: we default to something that doesn't kill performance. Warn once about it, and let the crazy people say 'I'd rather take a 50% performance hit than worry about a theoretical issue'."

The tests done by Michael Larabel also revealed that Linux 4.20 is facing significant performance issues in many workloads, more than with some of the earlier Spectre and Meltdown mitigations. This has measurably affected PHP, Python, Java, and many other workloads, and even gaming performance to some extent.

The STIBP support for cross-hyperthread Spectre V2 mitigation was backported to the Linux 4.14 and 4.19 LTS series, and is now being reverted. You can find the reverts in Greg Kroah-Hartman's linux-stable-rc tree (source: Phoronix).

On current Linux 4.20 Git, STIBP still remains in place, and a better approach to handling the performance issues is being reviewed. Michael Larabel expects the new patch series to be ready for merging before Linux 4.20 ships, in approximately one month's time. To know more, check out Michael Larabel's post on Phoronix: Linux Stable Updates Are Dropping The Performance-Pounding STIBP.

Read Next

Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
Red Hat releases Red Hat Enterprise Linux 8 beta; deprecates Btrfs filesystem
Soon, RHEL (Red Hat Enterprise Linux) won't support KDE
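As a practical aside, a running kernel reports which Spectre-class mitigations are active (including STIBP, on kernels that carry it) through standard sysfs files. The small Python sketch below, which is ours rather than anything from the article, simply reads and prints them:

```python
from pathlib import Path

# The kernel exposes its active CPU-vulnerability mitigations in sysfs.
# On kernels carrying STIBP, the spectre_v2 entry mentions it when active.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```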

Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!

Melisha Dsouza
22 Nov 2018
3 min read
On 19th November, Microsoft announced the first release candidate (RC) of Azure DevOps Server 2019. Azure DevOps Server 2019 includes the new, fast, and clean Azure DevOps user interface and delivers the codebase of Microsoft Azure DevOps, optimized for customers who prefer to self-host.

Features of Azure DevOps Server 2019

In addition to existing SQL Server support, Azure DevOps Server now also supports Azure SQL. Customers can self-host Azure DevOps in their own datacenter using an on-premises SQL Server, or self-host Azure DevOps in the cloud and take advantage of Azure SQL capabilities and performance, like backup features and scaling options, while reducing the administrative overhead of running the service. Customers can also use the globally available Microsoft-hosted service to take advantage of automatic updates and automatic scaling of Azure DevOps Server.

Azure DevOps Server 2019 includes a new release management interface. Customers can easily understand how their deployment is taking place; it gives them better visibility of which bits are deployed to which environments and why. Customers can also mix and match agents self-hosted on-premises and in any cloud on Windows, Mac, or Linux, while easily deploying to IaaS or PaaS in Azure as well as on-premises infrastructure.

A new navigation and improved user experience in Azure DevOps

A newly introduced feature is the 'my work flyout'. This feature was developed after feedback that when customers are in one part of the product and want information from another part, they don't want to lose the context of their current task. With this new feature, customers can access the flyout from anywhere in the product, giving them a quick glance at crucial information like work items, pull requests, and all favorites.

For teams that use pull requests (PRs) and branch policies, there may be occasions when members need to override and bypass those policies. So that teams can verify that policy overrides are being used in the right situations, a new notification filter has been added to allow users and teams to receive email alerts any time a policy is bypassed.

The Tests tab now gives rich, in-context test information for Pipelines. It provides an in-progress test view, a full-page debugging experience, in-context test history, reporting of aborted test executions, and a run-level summary.

The UI has undergone significant testing, and the team notes that for self-hosting customers the new navigation model may require updates to internal documentation and training. A direct upgrade to Azure DevOps Server is supported from Team Foundation Server 2012 and newer. Previous versions of Team Foundation Server will stay on the old user interface.

Check the Azure DevOps Server requirements and compatibility page to understand the dependencies required for a self-hosted installation. Head over to Microsoft's blog for more information on this news. You can download Azure DevOps Server 2019 RC1 and check out the release notes for all the features and information for this release.

A multi-factor authentication outage strikes Microsoft Office 365 and Azure users
Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report


Autodesk acquires PlanGrid for $875 million, to digitize and automate construction workflows

Savia Lobo
21 Nov 2018
3 min read
Yesterday, Autodesk, a software corporation serving the architecture, engineering, construction, and manufacturing industries, announced that it has acquired the leading provider of construction productivity software, PlanGrid, for $875 million net of cash. The transaction is expected to close during Autodesk's fourth quarter of fiscal 2019, which ends January 31, 2019. With this acquisition of the San Francisco-based startup, Autodesk will be able to offer a more comprehensive, cloud-based construction platform.

PlanGrid's software, launched in 2011, gives builders real-time access to project plans, punch lists, project tasks, progress photos, daily field reports, submittals, and more.

Autodesk's CEO, Andrew Anagnost, said, "There is a huge opportunity to streamline all aspects of construction through digitization and automation. The acquisition of PlanGrid will accelerate our efforts to improve construction workflows for every stakeholder in the construction process."

According to TechCrunch, "The company, which is a 2012 graduate of Y Combinator, raised just $69 million, so this appears to be a healthy exit for them." In a 2015 interview at TechCrunch Disrupt in San Francisco, CEO and co-founder Tracy Young had said the industry was ripe for change: "The heart of construction is just a lot of construction blueprints information. It's all tracked on paper right now and they're constantly, constantly changing." When Young started the company in 2011, her idea was to move all that paper to the cloud and display it on an iPad.

According to Young, "At PlanGrid, we have a relentless focus on empowering construction workers to build as productively as possible. One of the first steps to improving construction productivity is the adoption of digital workflows with centralized data. PlanGrid has excelled at building beautiful, simple field collaboration software, while Autodesk has focused on connecting design to construction. Together, we can drive greater productivity and predictability on the job site."

Jim Lynch, Construction General Manager at Autodesk, said, "We'll integrate workflows between PlanGrid's software and both Autodesk Revit software and the Autodesk BIM 360 construction management platform, for a seamless exchange of information between all project members."

Autodesk and PlanGrid have developed complementary construction integration ecosystems that customers can use to connect other software applications. The acquisition is expected to expand the integration partner ecosystem, giving customers a customizable platform to test and scale new ways of working. To know more about this news in detail, visit Autodesk's official press release.

IBM acquired Red Hat for $34 billion making it the biggest open-source acquisition ever
Could Apple's latest acquisition yesterday of an AR lens maker signal its big plans for its secret Apple car?
Plotly releases Dash DAQ: a UI component library for data acquisition in Python


Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix

Melisha Dsouza
19 Nov 2018
3 min read
On the 4th of November, Linux 4.20-rc1 was released with a host of notable changes, right from AMD Vega 20 support getting squared away, AMD Picasso APU support, Intel 2.5G Ethernet support, and the removal of Speck, to other new hardware support additions and software features. The release that was supposed to upgrade the kernel's performance did not succeed in doing so. On the contrary, the kernel is much slower compared to previous stable Linux kernel releases.

In a blog post on Phoronix, Michael Larabel, lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org, discussed the results of some tests conducted on the kernel. He bisected the 4.20 kernel merge window to explore the reasons for the significant slowdowns in many real-world workloads. The article attributes this performance degradation to mitigations for the Spectre flaws in the processor.

To mitigate the Spectre flaw, an intentional kernel change was made, termed "STIBP", for cross-hyperthread Spectre mitigation on Intel processors. Single Thread Indirect Branch Predictors (STIBP) prevent cross-hyperthread control of decisions that are made by indirect branch predictors. The STIBP addition in Linux 4.20 affects systems that have up-to-date microcode with this support and where the CPU has Hyper-Threading enabled.

Performance issues in Linux 4.20

Michael has done a detailed analysis of the kernel's performance; here are some of his findings. Many synthetic and real-world tests showed that Intel Core i9 performance was not up to the mark. The Rodinia scientific OpenMP tests took 30% longer, Java-based DaCapo tests took up to ~50% more time to complete, and the code compilation tests also grew longer. There was lower PostgreSQL database server performance and longer Blender3D rendering times. All this was noticed on Core i9 7960X and Core i9 7980XE test systems, while AMD Threadripper 2990WX performance was unaffected by the Linux 4.20 upgrade.

The latest Linux kernel Git benchmarks also saw a significant pullback in performance from the early days of the Linux 4.20 merge window up through the very latest kernel code as of today. The affected systems included a low-end Core i3 7100 as well as Xeon E5 v3 and Core i7 systems. The tests found that the Smallpt renderer slowed down significantly, PHP performance took a major dive, and HMMer also faced a major setback compared to the current Linux 4.19 stable series.

What is surprising is that there are mitigations against Spectre, Meltdown, Foreshadow, etc. in Linux 4.19 as well, but 4.20 shows an additional performance drop on top of all the previously outlined performance hits this year. In the entire testing phase, the AMD systems didn't appear to be impacted. This means that if a user disables Spectre V2 mitigations for better performance, the system's security could be compromised.

You can head over to Phoronix for a complete analysis of the test outputs and more information on this news.

Soon, RHEL (Red Hat Enterprise Linux) won't support KDE
Red Hat releases Red Hat Enterprise Linux 8 beta; deprecates Btrfs filesystem
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project


Oracle’s Thomas Kurian to replace Diane Greene as Google Cloud CEO; is this Google’s big enterprise Cloud market move?

Melisha Dsouza
19 Nov 2018
4 min read
On 16th November, the CEO of Google Cloud, Diane Greene, announced in a blog post that she will be stepping down after three years of running Google Cloud. The position will be taken up by Thomas Kurian, who worked at Oracle for the past 22 years. Kurian will be joining Google Cloud on November 26th and transitioning into the Google Cloud leadership role in early 2019, while Greene stays on as CEO until the end of January 2019. After that, she will continue as a Director on the Alphabet board.

Google Cloud led by Diane Greene

Diane Greene has been leading Google's cloud computing division since early 2016. She has been considered Google's best bet at building its second-largest source of revenue while competing with Amazon and Microsoft in providing computing infrastructure for businesses. However, there are speculations that this decision indicates the project hasn't gone as well as planned. Although the cloud division has seen notable advances under Greene's leadership, Amazon and Microsoft have stayed a step ahead in their cloud businesses. According to Canalys, Amazon has roughly a third of the global cloud market, which contributes more to revenue than its sales on Amazon.com. Microsoft has roughly half of Amazon's market share, and currently owns 8 percent of the global market for cloud infrastructure services.

Maribel Lopez of Lopez Research states, "When Diane Greene came in they had a really solid chance of being the No. 2 provider. Microsoft has really closed the gap and is the No. 2 provider for most enterprise customers by a significant margin."

Greene acquired customers such as Twitter, Target, and HSBC for Google Cloud, and major Fortune 1000 enterprises depend on Google Cloud for their future. Under her leadership, Google established a training and professional services organization and Google partner organizations, and came up with ways to help enterprises adopt AI through the Advanced Solutions Lab. Google's industry verticals have achieved massive traction in health, financial services, retail, gaming and media, energy and manufacturing, and transportation. Along with the Cloud ML and Cloud IoT groups, Google acquired Apigee, Kaggle, Qwiklabs, and several promising small startups. Greene also oversaw projects like creating custom chips for machine learning, gaining traction for the artificial intelligence used on the platform.

While the AI-centric approach brought Google into the limelight, Meaghan McGrath, who tracks Google and other cloud providers at Technology Business Research, says, "They've been making the right moves and saying the right things, but it just hasn't shown through in performance financially." She further stresses that Google is still hamstrung by a perception that it doesn't really know how to work with corporate IT departments, an area where Microsoft has made its mark.

Kurian to join Google

Thomas Kurian worked at Oracle for the past 22 years and had been president of product development since 2015. Kurian told employees in an email on September 5th that he was taking "extended time off from Oracle". The company said in a statement at the time that "we expect him to return soon." 23 days later, Oracle put out a filing saying that Kurian had resigned "to pursue other opportunities." Google and Oracle do not have a pleasant history together.

The two companies are involved in an eight-year legal battle over Google's use of the Java programming language, without a license, in developing its Android operating system for smartphones. Oracle owns the intellectual property behind Java. In March, the Federal Circuit reversed a district court's ruling that had favored Google, sending the case back to the lower court to determine the damages Google now must pay Oracle. CNBC reports that one former Google employee, who asked not to be named because of the sensitivity of the matter, is not optimistic that Kurian will be well received, since Kurian still has to figure out how to work with Googlers. It will be interesting to see how the face of Google Cloud changes under Kurian's leadership. You can head over to Google's blog to read more about this announcement.

Intel Optane DC Persistent Memory available first on Google Cloud
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
10 useful Google Cloud AI services for your next machine learning project [Tutorial]

OpenStack Foundation to tackle open source infrastructure problems, will conduct conferences under the name ‘Open Infrastructure Summit’

Melisha Dsouza
16 Nov 2018
3 min read
At the OpenStack Summit in Berlin this week, the OpenStack Foundation announced that from now on, all its bi-annual conferences will be conducted under the name 'Open Infrastructure Summit'. According to TechCrunch, the Foundation itself won't rebrand its name, but the nature of what the Foundation does is changing. The board will now adopt new projects outside of the core OpenStack project, with a process for adding "pilot projects" and fostering them for a minimum of 18 months. The focus for these projects will be on continuous integration and continuous delivery (CI/CD), container infrastructure, edge computing, data centers, and artificial intelligence and machine learning. OpenStack currently has these pilot projects in development: Airship, Kata Containers, StarlingX, and Zuul.

OpenStack says that the idea is not to manage multiple projects for their own sake, or to increase the Foundation's revenue; rather, the scope is focused on people who run or manage infrastructure. There are no new boards of directors or foundations for each project. The team also assures its members that the actual OpenStack technology isn't going anywhere. OpenStack Foundation CTO Mark Collier said, "We said very clearly this week that open infrastructure starts with OpenStack, so it's not separate from it. OpenStack is the anchor tenant of the whole concept." He added, "All that we are doing is actually meant to make OpenStack better."

Adding his insights on the decision, Canonical founder Mark Shuttleworth is worried that the focus on multiple projects will "confuse people about OpenStack." He further adds, "I would really like to see the Foundation employ the key contributors to OpenStack so that the heart of OpenStack had long-term stability that wasn't subject to a popularity contest every six months."

Boris Renski, co-founder of OpenStack, stated that, as of today, a number of companies are back to doubling down on OpenStack as their core focus. He attributes this to the foundation's focus on edge computing, with the highest interest in OpenStack being shown by China.

The OpenStack Foundation's decision to tackle open source infrastructure problems, while keeping the core of the actual OpenStack project intact, is refreshing. The only possible competition it may face is from the Linux Foundation-backed Cloud Native Computing Foundation.

Read Next

OpenStack Rocky released to meet AI, machine learning, NFV and edge computing demands for infrastructure
Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud
Introducing OpenStack Foundation's Kata Containers 1.0


Red Hat releases Red Hat Enterprise Linux 8 beta; deprecates Btrfs filesystem

Sugandha Lahoti
16 Nov 2018
3 min read
Earlier this month, Red Hat released RHEL 7.6. Now, the Red Hat Enterprise Linux (RHEL) 8 beta is available, with more container friendliness than ever. This RHEL release is based on Fedora 28, the Red Hat community Linux release from May 2018, and uses the upstream Linux kernel 4.18 as its foundation.

RHEL 8 beta introduces the concept of Application Streams. With this, userspace components can update more quickly than core operating system packages, without having to wait for the next major version of the operating system. Application Streams also let you keep multiple versions of the same package around.

RHEL 8 beta features

- RHEL 8 beta introduces a single, consistent control panel through the RHEL Web Console. System admins of all experience levels can easily manage RHEL servers locally and remotely, including virtual machines.
- RHEL 8 beta uses IPVLAN to support efficient Linux networking in containers by connecting containers nested in virtual machines (VMs) to networking hosts.
- RHEL 8 beta also has a new TCP/IP stack with Bandwidth and Round-trip propagation time (BBR) congestion control. This increases performance and minimizes latency for services like streaming video or hosted storage.
- RHEL 8 is made more secure with OpenSSL 1.1.1, TLS 1.3 support, and system-wide cryptographic policies.
- Red Hat's lightweight, open-standards-based container toolkit comes with Buildah (container building), Podman (running containers), and Skopeo (sharing/finding containers).
- RPM's YUM package manager has also been updated. Yum 4 delivers faster performance, fewer installed dependencies, and more choices of package versions to meet specific workload requirements.

File systems in RHEL 8 beta

Red Hat has deprecated the Btrfs filesystem. This has really confused developers, who are surprised that Red Hat would opt out of it, especially considering that it is also used for ChromeOS's Crostini Linux application container. From Hacker News:

"I'm still incredibly sad about that, especially as Btrfs has become a really solid filesystem over the last year or so in the upstream kernel."

"Indeed, Btrfs is uniquely capable and important. It has lightweight snapshots of directory trees, and fully supports NFS exports and kernel namespaces, so it can easily solve technical problems that currently can't be easily solved using ZFS or other filesystems."

Stratis is the new volume-managing file system in RHEL 8 beta. Stratis abstracts away the complexities inherent to data management via an API. Also, file system snapshots provide a faster way of conducting file-level tasks, like cloning virtual machines, while saving space by consuming new storage only when data changes.

Existing customers and subscribers can test the Red Hat Enterprise Linux 8 beta. You can also view the README file for instructions on how to download and install the software.

RedHat shares what to expect from next week's first-ever DNSSEC root key rollover
Soon, RHEL (Red Hat Enterprise Linux) won't support KDE
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available


Uber becomes a Gold member of the Linux Foundation

Savia Lobo
15 Nov 2018
2 min read
Yesterday, at Uber Open Summit 2018, the company announced that it is joining the Linux Foundation as a Gold Member with a promise to support the open source community via the Linux Foundation.

Jim Zemlin, Executive Director of the Linux Foundation, said, "Uber has been influential in the open source community for years, and we're very excited to welcome them as a Gold member at the Linux Foundation. Uber truly understands the power of open source and community collaboration, and I am honored to witness that first hand as a part of Uber Open Summit 2018."

By being a member, Uber will support the Linux Foundation's mission and help the community in building ecosystems that accelerate open source technology development. Uber will also work towards solving complex technical problems and further promote open source adoption globally. Zemlin said, "Their expertise will be instrumental for our projects as we continue to advance open solutions for cloud-native technologies, deep learning, data visualization and other technologies that are critical to businesses today."

Thuan Pham, Uber CTO, said, "The Linux Foundation not only provides homes to many significant open source projects but also creates an open environment for companies like Uber to work together on developing these technologies. We are honored to join the Linux Foundation to foster greater collaboration with the open source community."

To know more about this membership in detail, head over to Uber Engineering.

Michelangelo PyML: Introducing Uber's platform for rapid machine learning development
Uber posted a billion dollar loss this quarter. Can Uber Eats revitalize the Uber growth story?
Uber announces the 2019 Uber AI Residency

Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS

Melisha Dsouza
15 Nov 2018
3 min read
Yesterday, at Devoxx Belgium, Amazon announced the preview of Amazon Corretto, a free distribution of OpenJDK that offers long-term support. With Corretto, users can develop and run Java applications on popular operating systems. The team further mentioned that Corretto is multiplatform and production-ready, with long-term support that will include performance enhancements and security fixes. They also plan to make Corretto the default OpenJDK on Amazon Linux 2 in 2019. The preview currently supports Amazon Linux, Windows, macOS, and Docker, with additional support planned for general availability. Corretto is run internally by Amazon on thousands of production services and is certified as compatible with the Java SE standard.

Features and benefits of Corretto

1. Amazon Corretto lets users run the same environment in the cloud, on premises, and on their local machine. During the preview, Corretto supports Java application development on popular operating systems like Amazon Linux 2, Windows, and macOS.
2. Users can upgrade versions only when they feel the need to do so.
3. Since it is certified to meet the Java SE standard, Corretto can be used as a drop-in replacement for many Java SE distributions.
4. Corretto is available free of cost, with no additional paid features or restrictions.
5. Corretto is backed by Amazon, and the patches and improvements in Corretto enable Amazon to address high-scale, real-world service concerns. Corretto can meet heavy performance and scalability demands.
6. Customers will get long-term support, with quarterly updates including bug fixes and security patches. AWS will provide urgent fixes to customers outside of the quarterly schedule.

On Hacker News, users are discussing how the product's documentation could be better formulated. Some users feel that Amazon's JVM is quite complex. Users are also noting that Oracle offers the same service at a price, with one user pointing out the differences between Oracle's service and Amazon's. The most notable feature of this release appears to be the long-term support offered by Amazon.

Head over to Amazon's blog to read more about this release. You can also find the source code for Corretto on GitHub.

Amazon tried to sell its facial recognition technology to ICE in June, emails reveal
Amazon addresses employees dissent regarding the company's law enforcement policies at an all-staff meeting, in a first
Amazon splits HQ2 between New York and Washington, D.C. after making 200+ states compete over a year; public sentiments largely negative


The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project

Melisha Dsouza
13 Nov 2018
3 min read
At Ceph Day Berlin yesterday (November 12), the Linux Foundation announced the launch of the Ceph Foundation. A total of 31 organizations have come together to launch the Ceph Foundation, including Arm, Intel, Harvard, and many more. The foundation aims to bring industry members together to support the Ceph open source community.

What is Ceph?

Ceph is an open source distributed storage technology that provides storage services for many of the world's largest container and OpenStack deployments. The range of organizations using Ceph is vast. They include financial institutions like Bloomberg and Fidelity, cloud service providers like Rackspace and Linode, car manufacturers like BMW, and software firms like SAP and Salesforce.

The main aim of the Ceph Foundation

The main focus of the foundation is to raise money via annual membership fees from industry members. The combined pool of funds will then be spent in support of the Ceph community. The team has already raised around half a million dollars for its first year, which will be used to support the Ceph project infrastructure, cloud infrastructure services, internships, and community events. The new foundation will provide a forum for community members and industry stakeholders to meet and discuss project status, development and promotional activities, community events, and strategic direction. The Ceph Foundation replaces the Ceph Advisory Board formed back in 2015. According to a Linux Foundation statement, the Ceph Foundation will "organize and distribute financial contributions in a coordinated, vendor-neutral fashion for immediate community benefit".

Ceph has ambitious plans for new initiatives once the foundation is fully operational. Some of these include:

- Expansion of and improvements to the hardware lab used to develop and test Ceph
- An events team to help plan various programs and targeted regional or local events
- Investment in strategic integrations with other projects and ecosystems
- Programs around interoperability between Ceph-based products and services
- Internships, training materials, and much more!

The Ceph Foundation will provide an open, collaborative, and neutral home for project stakeholders to coordinate their development and community investments in the Ceph ecosystem. You can head over to their blog to know more about this news.

Facebook's GraphQL moved to a new GraphQL Foundation, backed by The Linux Foundation
NIPS Foundation decides against name change as poll finds it an unpopular superficial move; instead increases 'focus on diversity and inclusivity initiatives'
Node.js and JS Foundation announce intent to merge; developers have mixed feelings