
Tech News - Cloud & Networking

376 Articles

GitHub has passed an incredible 100 million repositories

Richard Gall
12 Nov 2018
2 min read
It has been a big year for GitHub. The code sharing platform celebrated its 10th birthday this year, was bought by Microsoft for an impressive $7.5 billion, and has now reached an astonishing 100 million repositories. While there have been rumblings of discontent following the Microsoft acquisition, threats to leave GitHub don't appear to have come to fruition. True, it has only been a matter of weeks since Microsoft formally took over, but there are no signs that GitHub is losing favor with developers.

1 in 3 of all GitHub repositories were created in 2018

According to GitHub, 1 in 3 of the 100 million repositories were created in 2018. That demonstrates the astonishing growth of the platform, and just how embedded it is in the day-to-day life of software engineers. This is further underlined by more data in GitHub's Octoverse report, published in October. "We've seen more new accounts in 2018 so far than in the first six years of GitHub combined," the report states.

Perhaps the new relationship with Microsoft has actually helped push GitHub from strength to strength - MicrosoftDocs/azure-docs is the fastest growing repository of 2018. Some credit should probably go to Microsoft itself, too: the organization has done a lot to change its image and ethos, becoming much more friendly toward open source software.

Meanwhile, at Packt, we've been delighted to play a small part in helping GitHub reach its 100 million milestone. Earlier this year we hit 2,000 project repos.


Cloudflare’s Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly

Melisha Dsouza
12 Nov 2018
5 min read
Cloudflare’s cloud computing platform Workers doesn’t use containers or virtual machines to deploy computing. Workers lets users build serverless applications on Cloudflare's data centers. It provides a lightweight JavaScript execution environment to augment existing applications or create entirely new ones without having to configure or maintain infrastructure.

Why did Cloudflare create Workers?

Cloudflare previously offered a limited set of features and options, with little flexibility for customers to build features themselves. To let users write code that runs on its servers deployed around the world, Cloudflare had to allow untrusted code to run with low overhead while processing millions of requests per second at very high speed. Traditional virtualization and container technologies like Kubernetes would have been expensive: running thousands of Kubernetes pods across Cloudflare's 155 data centers would be resource intensive. Enter Cloudflare's Workers to solve these issues.

Features of Workers

#1 'Isolates' - run code from multiple customers

Isolates are a technology built by the Google Chrome team to power V8, the browser's JavaScript engine. They are lightweight contexts that group variables with the code allowed to mutate them. A single process can run hundreds or thousands of Isolates while easily switching between them, which makes it possible to run untrusted code from different customers within a single operating system process. Isolates start very quickly (a given Isolate can start around a hundred times faster than a Node process on the same machine) and do not allow one Isolate to access the memory of another.

#2 Cold starts

A 'cold start' happens whenever a new copy of code has to be started on a machine. In the Lambda world, this means spinning up a new containerized process, which can delay requests for as much as ten seconds, resulting in a terrible user experience. A Lambda can only process a single request at a time, so a new Lambda has to be cold-started every time an additional concurrent request is received. If a Lambda doesn't get a request soon enough, it is shut down and the cycle starts again. Since Workers don't have to start a process, Isolates start in 5 milliseconds; they scale and deploy quickly, comfortably outpacing existing serverless technologies.

#3 Context switching

A normal context switch performed by the OS can take as much as 100 microseconds. Multiplied across all the Node, Python, or Go processes running on an average Lambda server, this adds up to heavy overhead, splitting the CPU's power between running the customer's code and switching between processes. An Isolate-based system runs all of the code in a single process, which means there are no expensive context switches; the machine can spend virtually all of its time running your code.

#4 Memory

V8 was designed to be multi-tenant: it runs the code from the many tabs in a user's browser in isolated environments within a single process. Since memory is often the highest cost of running a customer's code, V8 lowers it dramatically and changes the cost economics.

#5 Security

Running code from multiple customers within the same process is not inherently safe; testing, fuzzing, penetration testing, and bug bounties are required to build a truly secure system of that complexity. The open-source nature of V8 helps in creating an isolation layer that lets Cloudflare take care of the security aspect.

Cloudflare's Workers also allows users to build responses from multiple background service requests, whether to the Cloudflare cache, the application origin, or third-party APIs. They can build conditional responses for inbound requests to assess and subsequently block or reroute malicious or unauthorized requests. All of this at just a third of what AWS costs, remarked an astute Twitter observer.

https://twitter.com/seldo/status/1061461318765555713

Running code through WebAssembly

One disadvantage of Workers is that, being an Isolate-based system, it cannot run arbitrary compiled code. Users have to either write their code in JavaScript, or in a language that targets WebAssembly (e.g. Go or Rust). And if a user cannot recompile their processes, they won't be able to run them in an Isolate. This is nicely summarised in the tweet mentioned above: its author notes that WebAssembly modules are already in the npm registry, creating the potential for npm to become the dependency management solution for every programming language. He mentions that the "availability of open source libraries to achieve the task at hand is the primary reason people pick a programming language". This leads us to the question: how does software development change when you can use any library anytime?

You can head over to the Cloudflare blog to understand more about containerless cloud computing.

Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers
Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
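To make the programming model concrete, here is a minimal Worker sketch in TypeScript, written against the service-worker-style fetch API that Workers exposed at launch. It is an illustration rather than production code: the `/admin` path and the `x-served-by` header are invented for the example, and the `FetchEvent` type is assumed to come from the Workers/service-worker type definitions.

```typescript
// A minimal Cloudflare Worker: every request is handled inside a V8 Isolate
// at the edge, with no container or process spin-up.
addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);

  // Conditional response: block a (hypothetical) path outright at the edge,
  // before the request ever reaches the origin.
  if (url.pathname.startsWith('/admin')) {
    return new Response('Forbidden', { status: 403 });
  }

  // Otherwise pass the request through to the origin (or cache) and
  // annotate the response. Copying the Response makes its headers mutable.
  const upstream = await fetch(request);
  const response = new Response(upstream.body, upstream);
  response.headers.set('x-served-by', 'edge-worker'); // illustrative header name
  return response;
}
```

Because the script runs in an Isolate rather than a dedicated process, hundreds of such Workers from different customers can share one OS process, which is exactly the cold-start and context-switching advantage described above.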


Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA

Melisha Dsouza
12 Nov 2018
3 min read
On the 9th of November, at 4:30 am US/Pacific time, the Google Kubernetes Engine suffered a service disruption: users could not reliably launch node pools through the Cloud Console UI. The team responded to the issue saying that they would get back to users with more information by Friday, 9th November, 04:45 am US/Pacific time. However, the issue was not resolved by the given time. Another status update was posted by the team assuring users that mitigation work was underway by the engineering team, and that another update with current details would follow by 06:00 pm US/Pacific. In the meantime, affected customers were advised to use the gcloud command to create new node pools.

An update announcing that the issue was finally resolved was posted on Sunday, the 11th of November, stating that services had been restored on Friday at 14:30 US/Pacific time. However, no proper explanation has been provided of what led to the service disruption. Google did mention that an internal investigation will be done and appropriate improvements to its systems will be implemented to help prevent or minimize future recurrences of the issue.

According to a user's summary on Hacker News, "Some users here are reporting that other GCP services not mentioned by Google's blog are experiencing problems. Some users here are reporting that they have received no response from GCP support, even over a time span of 40+ hours since the support request was submitted." According to another user, "When everything works, GCP is the best. Stable, fast, simple, reliable. When things stop working, GCP is the worst. They require way too much work before escalating issues or attempting to find a solution". Looking at the timeline of the downtime, we can't help but agree. Users have also expressed disappointment over how the outage was managed.

Source: Hacker News

With users demanding a root cause analysis of the situation, it is only fitting that Google provides one so users can trust the company better. You can check out Google Cloud's blog post detailing the timeline of the downtime.

Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
Build Hadoop clusters using Google Cloud Platform [Tutorial]


Amazon consumer business migrated to Redshift with plans to move 88% of its Oracle DBs to Aurora and DynamoDB by year end

Natasha Mathur
12 Nov 2018
3 min read
Amazon is getting quite close to moving away from Oracle. Andy Jassy, CEO of Amazon Web Services, tweeted last week about turning off the Oracle data warehouse and moving to Redshift. Jassy's tweet reads as a response to Oracle CTO Larry Ellison's constant taunts and punch lines.

https://twitter.com/ajassy/status/1060979175098437632

News about Amazon's shift away from Oracle first surfaced in January this year, followed by a CNBC report in August about Amazon's plans to move off Oracle by 2020. As per the report, Amazon had already started to migrate most of its infrastructure internally to Amazon Web Services. The move away from Oracle, however, has been harder than expected. Amazon faced an outage in one of its biggest warehouses on Prime Day (one of Amazon's biggest sales days of the year) last month, as reported by CNBC. The major cause of the outage was Amazon's migration from Oracle's database to its own technology, Aurora PostgreSQL.

Moreover, Amazon and Oracle have traded regular word battles in recent years over the performance of their database software and cloud tools. For instance, Larry Ellison slammed Amazon, saying, "Let me tell you an interesting fact: Amazon does not use [Amazon web services] to run their business. Amazon runs their entire business on top of Oracle, on top of the Oracle database. They have been unable to migrate to AWS because it's not good enough." Ellison also took aim at Amazon during the Oracle OpenWorld conference last year, saying "Oracle's services are just plain better than AWS" and that Amazon is "one of the biggest Oracle users on Planet Earth".

"Amazon's Oracle data warehouse was one of the largest (if not THE largest) in the world. RIP. We have moved on to newer, faster, more reliable, more agile, more versatile technology at more lower cost and higher scale. #AWS Redshift FTW." tweeted Werner Vogels, CTO, Amazon.

Public reaction to Amazon's decision has been largely positive, with people supporting the migration away from Oracle:

https://twitter.com/eonnen/status/1061082419057442816
https://twitter.com/adamuaa/status/1061094314909057024
https://twitter.com/nayar_amit/status/1061154161125773312

Oracle makes its Blockchain cloud service generally available
Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer


Facebook GEneral Matrix Multiplication (FBGEMM), a high-performance kernel library, open sourced to run deep learning models efficiently

Melisha Dsouza
08 Nov 2018
3 min read
Yesterday (on the 7th of November), Facebook open-sourced its high-performance kernel library FBGEMM: Facebook GEneral Matrix Multiplication. The library offers optimized on-CPU performance for the reduced-precision calculations used to accelerate deep learning models, and it has delivered 2x performance gains when deployed at Facebook (in comparison to their current production baseline). Users can deploy it using the Caffe2 front end, and it will soon be callable directly from the PyTorch 1.0 Python front end.

Features of FBGEMM

FBGEMM is optimized for server-side inference. It delivers accuracy and efficiency when performing quantized inference using contemporary deep learning frameworks. It is a low-precision, high-performance matrix-matrix multiplication and convolution library that enables large-scale production servers to run the most powerful deep learning models efficiently. The library exploits opportunities to overcome the unique challenges of matrix multiplication at lower precision with bandwidth-bound pre- and post-GEMM operations.

At Facebook, FBGEMM has benefited many AI services: it increased the speed of English-to-Spanish translations by 1.3x, reduced DRAM bandwidth usage in the recommendation system used in feeds by 40%, and sped up character detection by 2.4x in Rosetta, the machine learning system for understanding text in images and videos.

FBGEMM supplies modular building blocks from which an overall GEMM pipeline can be constructed by plugging and playing different front-end and back-end components. It combines small compute with bandwidth-bound operations and exploits cache locality by fusing post-GEMM operations with the macro kernel, while providing support for accuracy-loss-reducing operations.

Why does GEMM matter?

Floating point operations (FLOPs) are mostly consumed by fully connected (FC) operators in the deep learning models deployed in Facebook's data centers. These FC operators are just plain GEMM, which means that their overall efficiency directly depends on GEMM efficiency. Beyond that, 19% of the deep learning frameworks at Facebook implement convolution as im2col followed by GEMM. However, straightforward im2col adds overhead from the copy and replication of input data, so some deep learning libraries implement direct (im2col-free) convolution for improved efficiency. FBGEMM provides a way to fuse im2col with the main GEMM kernel to minimize im2col overhead.

Facebook says that recent industry and research work indicates that inference using mixed precision works well without adversely affecting accuracy, and FBGEMM uses this as an alternative strategy to improve inference performance with quantized models. Newer generations of GPUs, CPUs, and specialized tensor processors natively support lower-precision compute primitives, so the deep learning community is moving toward low-precision models. FBGEMM provides a way to perform efficient quantized inference on the current and upcoming generation of CPUs.

Head over to Facebook's official blog to understand more about this library and how it is implemented.

A new data breach on Facebook due to malicious browser extensions allowed almost 81,000 users' private data up for sale, reports BBC News
90% Google Play apps contain third-party trackers, share user data with Alphabet, Facebook, Twitter, etc: Oxford University Study
Facebook open sources a set of Linux kernel products including BPF, Btrfs, Cgroup2, and others to address production issues
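FBGEMM itself is a C++ library, but the reduced-precision pattern it accelerates is easy to sketch. The TypeScript below is a conceptual illustration, not FBGEMM's API: it quantizes float matrices to int8, multiplies them with int32 accumulation, and dequantizes the result the way a fused "post-GEMM" step would.

```typescript
// Conceptual sketch of reduced-precision GEMM (not FBGEMM's actual API).
type Matrix = number[][];

// Symmetric quantization: map floats in [-absMax, absMax] onto int8 [-127, 127].
function quantize(m: Matrix): { q: Int8Array[]; scale: number } {
  const absMax = Math.max(...m.flat().map(Math.abs)) || 1;
  const scale = absMax / 127;
  const q = m.map(row => Int8Array.from(row, v => Math.round(v / scale)));
  return { q, scale };
}

// Integer matrix multiply with 32-bit accumulation, as low-precision kernels do.
function gemmInt8(a: Int8Array[], b: Int8Array[]): Int32Array[] {
  const cols = b[0].length;
  return a.map(row => {
    const out = new Int32Array(cols);
    for (let j = 0; j < cols; j++) {
      let acc = 0; // int32 accumulator avoids int8 overflow
      for (let k = 0; k < row.length; k++) acc += row[k] * b[k][j];
      out[j] = acc;
    }
    return out;
  });
}

// Dequantize: one float multiply per output element (a "post-GEMM" operation
// FBGEMM fuses into the macro kernel instead of running as a second pass).
function dequantize(c: Int32Array[], scaleA: number, scaleB: number): Matrix {
  return c.map(row => Array.from(row, v => v * scaleA * scaleB));
}

const A: Matrix = [[0.5, -1.2], [2.0, 0.1]];
const B: Matrix = [[1.0, 0.0], [0.3, -0.7]];
const qa = quantize(A);
const qb = quantize(B);
console.log(dequantize(gemmInt8(qa.q, qb.q), qa.scale, qb.scale));
// Approximately [[0.14, 0.84], [2.03, -0.07]], close to the exact float product.
```

The int8 inputs occupy a quarter of the memory bandwidth of float32, which is where the DRAM-bandwidth savings described above come from; FBGEMM's contribution is doing this with vectorized kernels and fused pre/post-GEMM operations rather than naive loops like these.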


Kernel 4.20-rc1 is out

Melisha Dsouza
06 Nov 2018
3 min read
Linus Torvalds announced on 4th November that kernel 4.20-rc1 is tagged and pushed out, and the merge window is closed. Linux 4.20 brings a lot of prominent changes, from AMD Vega 20 support getting squared away, AMD Picasso APU support, Intel 2.5G Ethernet support, the removal of Speck, and peer-to-peer PCI memory support, to other new hardware support additions and software features.

Here are some of the features of 4.20-rc1:

- 70% of the patch is driver updates, including changes in the GPU drivers.
- Arch updates in x86, arm64, arm, powerpc, and the new C-SKY architecture.
- Updates in the header files, networking, core mm and kernel, and tooling, which has been upgraded as well.
- The kernel will have more than 350 thousand lines of new code!
- The AMD Vega 20 7nm workstation GPU support is now largely squared away for when the graphics card is released in the months ahead, along with GPUVM performance improvements for the AMDGPU kernel driver.
- The Intel DRM driver now has full PPGTT support for Haswell/Ivybridge/Valley View hardware.
- Support for the Hygon Dhyana CPUs, the new Chinese data center processors based on AMD Zen.
- Scheduler improvements that should benefit asymmetric CPU systems like ARM big.LITTLE processors.
- Faster context switching on IBM POWER9.
- Several Btrfs performance improvements.
- Intel 2.5G Ethernet support, added via the new "IGC" driver.
- Xbox One S controller rumble support, along with Logitech high-resolution scrolling and the new Apple Trackpad 2 driver, among the input hardware improvements.
- The Linux kernel is now VLA-free (free of variable-length arrays) for better code portability, performance, and security.
- The Speck crypto code was removed, this algorithm being quite controversial given its roots inside the NSA.
- The highly anticipated WireGuard secure VPN tunnel is held off until the next cycle, as are the FreeSync / Adaptive-Sync / HDMI VRR bits for DRM.

Some pull requests arrived with a delay and will be taken care of in the second week of the merge window, which lasts two weeks in total. Linus is considering making an explicit rule that he will stop taking new pull requests some time during the second week unless there is a good reason for the delay. He also hopes that by the time the next merge window rolls around there will be new automation in place, so that everybody automatically gets notified when their pull request hits mainline.

You can head over to Phoronix.com for a detailed list of all the new improvements added to 4.20-rc1. You can also read the changelog for further details.

Soon, RHEL (Red Hat Enterprise Linux) won't support KDE
Microsoft releases ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool
Facebook open sources a set of Linux kernel products including BPF, Btrfs, Cgroup2, and others to address production issues

Soon, RHEL (Red Hat Enterprise Linux) won’t support KDE

Amrata Joshi
05 Nov 2018
2 min read
Late last week, Red Hat announced that RHEL has deprecated KDE (K Desktop Environment) support. KDE Plasma Workspaces (KDE) is an alternative to the default GNOME desktop environment for RHEL, and a major future release of Red Hat Enterprise Linux will no longer support using KDE instead of the default GNOME desktop environment.

In the 1990s, the Red Hat team was firmly against KDE and put a lot of effort into GNOME, since Qt was under a not-quite-free license at the time. Steve Almy, principal product manager of Red Hat Enterprise Linux, told the Register, "Based on trends in the Red Hat Enterprise Linux customer base, there is overwhelming interest in desktop technologies such as Gnome and Wayland, while interest in KDE has been waning in our installed base."

Red Hat heavily backs the GNOME desktop environment, which is developed as an independent open-source project and used by a number of other distros. And although Red Hat is signaling the end of KDE support in RHEL, KDE is very much its own independent project that will continue on its own, with or without support from future RHEL editions.

Almy said, "While Red Hat made the deprecation note in the RHEL 7.6 notes, KDE has quite a few years to go in RHEL's roadmap." The note is simply a warning that certain functionality may be removed or replaced in a future RHEL release with similar or more advanced functionality. KDE, as well as everything listed in Chapter 51 of the Red Hat Enterprise Linux 7.6 release notes, will continue to be supported for the life of Red Hat Enterprise Linux 7.

Read more about this news on the official website of Red Hat.

Red Hat released RHEL 7.6
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation


Microsoft releases ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool

Savia Lobo
05 Nov 2018
2 min read
Microsoft developer David Fowler revealed ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool, over the weekend on November 3. ProcDump for Linux is a reimagining of the classic ProcDump tool from the Sysinternals suite of tools for Windows. It provides a convenient way for Linux developers to create core dumps of their applications based on performance triggers.

Requirements for ProcDump
- The tool currently supports Red Hat Enterprise Linux / CentOS 7, Fedora 26, Mageia 6, and Ubuntu 14.04 LTS, with other versions being tested.
- It requires gdb >= 7.6.1 and zlib (build-time only).

Limitations of ProcDump
- Runs only on Linux kernels version 3.5+.
- Does not have full feature parity with the Windows version of ProcDump; specifically, it lacks the stay-alive functionality and custom performance counters.

Installing ProcDump
ProcDump can be installed in two ways: via a package manager, which is the preferred method, or via a .deb package.

To know more about ProcDump in detail, visit its GitHub page.

Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
‘We are not going to withdraw from the future’ says Microsoft’s Brad Smith on the ongoing JEDI bid, Amazon concurs
Microsoft brings an open-source model of Component Firmware Update (CFU) for peripheral developers


Kubeflow 0.3 released with simpler setup and improved machine learning development

Melisha Dsouza
02 Nov 2018
3 min read
Early this week, the Kubeflow project launched its latest version, Kubeflow 0.3, just three months after version 0.2 was out. This release comes with easier deployment and customization of components, along with better multi-framework support.

Kubeflow is the machine learning toolkit for Kubernetes: an open source project dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable. Users get an easy-to-use ML stack anywhere Kubernetes is already running, and this stack can self-configure based on the cluster it deploys into.

Features of Kubeflow 0.3

1. Declarative and extensible deployment
Kubeflow 0.3 comes with a deployment command-line script, kfctl.sh, which allows consistent configuration and deployment of Kubernetes resources and non-K8s resources (e.g. clusters, filesystems, etc.). The Minikube deployment provides a single-command, shell-script-based setup, and users can also use MicroK8s to easily run Kubeflow on their laptop.

2. Better inference capabilities
Version 0.3 makes it possible to do batch inference with GPUs (but non-distributed) for TensorFlow using Apache Beam; batch and streaming data processing jobs that run on a variety of execution engines can be easily written with Apache Beam. Running TFServing in production is now easier thanks to a newly added liveness probe and the use of fluentd to log requests and responses to enable model retraining. The release also takes advantage of the NVIDIA TensorRT Inference Server to offer more options for online prediction using both CPUs and GPUs. This server is a containerized, production-ready AI inference server which maximizes utilization of GPU servers by running multiple models concurrently on the GPU, and it supports all the top AI frameworks.

3. Hyperparameter tuning
Kubeflow 0.3 introduces a new K8s custom controller, StudyJob, which allows a hyperparameter search to be defined using YAML, making it easy to use hyperparameter tuning without writing any code.

4. Miscellaneous updates
- The upgrade includes the release of a K8s custom controller for Chainer.
- Cisco has created a v1alpha2 API for PyTorch that brings parity and consistency with the TFJob operator.
- It is easier to handle production workloads for PyTorch and TFJob because of the new features added to them.
- There is also support for gang-scheduling using Kube Arbitrator, to avoid stranding resources and deadlocking in clusters under heavy load.
- The 0.3 Kubeflow Jupyter images ship with TF Data-Validation, a library used to explore and validate machine learning data.

You can check the examples added by the team to understand how to leverage Kubeflow:
- The XGBoost example indicates how to use non-DL frameworks with Kubeflow.
- The object detection example illustrates leveraging GPUs for online and batch inference.
- The financial time series prediction example shows how to leverage Kubeflow for time series analysis.

The team has said that the next major release, 0.4, will be coming by the end of this year. It will focus on ease of use, letting users perform common ML tasks without having to learn Kubernetes; the team also plans to make it easier to track models by providing a simple API and database for tracking models, and intends to upgrade the PyTorch and TFJob operators to beta.

For a complete list of updates, visit the 0.3 Change Log on GitHub.

Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl


Early access to Sailfish 3 is here!

Savia Lobo
02 Nov 2018
3 min read
This week, Sailfish OS announced the early release of its third-generation software, Sailfish 3, and made it available to all Sailfish users who had opted in for early access updates. Sami Pienimäki, CEO & co-founder of Jolla Ltd, said in his release post, "we are expanding the Sailfish community program, "Sailfish X", with a few of key additions next week: on November 8 we release the software for various Sony Xperia XA2 models."

Why the name 'Sailfish'?

Sailfish 3.0.0 is named after Lemmenjoki, the legendary national park in Northern Lapland. The team has always aimed at respecting its Finnish roots when naming software versions: previous releases covered lakes and rivers, and the naming is now set to explore Finland's beautiful national parks.

Sailfish 3 will be rolled out in phases, with many features deployed across several software releases. The first phase, Sailfish 3.0.0, has been available as an early access version since October 31st. The customer release is expected to roll out in the coming weeks, and the next release, 3.0.1, is expected in early December.

Security and corporate features of Sailfish 3

Sailfish 3 has a deeper level of security, which makes it a go-to option for various corporate and organizational solutions and other use cases. Some of the new enhanced features in Sailfish 3 include Mobile Device Management (MDM), fully integrated VPN solutions, enterprise WiFi, data encryption, and better and faster performance. It also offers full support for regional infrastructures, including steady releases & OS upgrades, local hosting, training, and a flexible feature set to support specific customer needs.

User experience highlights for Sailfish 3.0.0
- New Top Menu: quick settings and shortcuts can now be accessed anywhere.
- Light ambiences: a fresh new look for Sailfish OS.
- Data encryption: memory card encryption is now available; device file system encryption is coming in later releases.
- New keyboard gestures: quickly change keyboard layouts with one swipe.
- USB On-The-Go storage: connect to different kinds of external storage devices.
- Camera improvements: the new lock screen camera roll lets you review the photos you just took without unlocking the device.

Further, thanks to a rewritten way of launching apps and loading views, UI performance is much better in Sailfish 3. Sami mentions, "You can start to enjoy the faster Sailfish already now with the 3.0.0 release and the upcoming major Qt upgrade will further improve the responsiveness & performance resulting to 50% better overall performance."

To know more about Sailfish 3 in detail, visit its official website.

GitHub now allows issue transfer between repositories; a public beta version
Introducing Howler.js, a Javascript audio library with full cross-browser support
BabyAI: A research platform for grounded language learning with human in the loop, by Yoshua Bengio et al

Red Hat released RHEL 7.6

Amrata Joshi
01 Nov 2018
4 min read
On Tuesday, Red Hat announced the general availability of RHEL (Red Hat Enterprise Linux) 7.6, just three months after the beta release. RHEL 7.6 is a consistent hybrid cloud foundation for enterprise IT, built on open source innovation and designed to help organizations keep pace with emerging cloud-native technologies. It also supports IT operations across enterprise IT's four footprints. Red Hat Enterprise Linux 7.6 addresses a range of IT challenges, emphasizing security and compliance, management and automation, and Linux container innovations.

Features in RHEL 7.6

RHEL 7.6 addresses security concerns
IT security has always been a key challenge for many IT departments, and it does not get easier in complex hybrid and multi-cloud environments. Red Hat Enterprise Linux 7.6 responds by introducing support for Trusted Platform Module (TPM) 2.0 hardware modules as part of Network Bound Disk Encryption (NBDE). NBDE provides security across networked environments, whereas TPM works on-premise to add an additional layer of security, tying disks to specific physical systems. Together, these two layers of security for hybrid cloud operations help keep information on disks physically more secure.

RHEL 7.6 also makes it easier to manage firewalls, with improvements to nftables, a packet filtering framework, and it simplifies the configuration of counter-intrusion measures. Updated cryptographic algorithms for RSA and elliptic-curve cryptography (ECC) are enabled by default, helping organizations that handle sensitive information keep pace with Federal Information Processing Standards (FIPS) compliance and standards bodies like the National Institute of Standards and Technology (NIST).

Management and automation get better
Red Hat Enterprise Linux 7.6 makes Linux adoption easier by enhancing the Red Hat Enterprise Linux Web Console, which provides a graphical overview of Red Hat system health and status. It is now easier to find updates on the system summary page, and the console provides automated configuration of single sign-on for identity management as well as a firewall control interface, making life easier for security administrators.

RHEL 7.6 also ships with the extended Berkeley Packet Filter (eBPF), which provides a safer, more efficient mechanism for monitoring activity within the kernel. It will soon help enable additional performance monitoring and network tracing tools.

Red Hat Enterprise Linux 7.6 additionally provides support for Red Hat Enterprise Linux System Roles, a collection of Ansible modules designed to provide a consistent way to automate and remotely manage Red Hat Enterprise Linux deployments. Each module provides a ready-made automated workflow for handling common and complex tasks in Linux environments. This automation helps remove the possibility of human error from these tasks, which in turn frees up IT teams to focus more on adding business value.

Red Hat's lightweight container toolkit
Red Hat Enterprise Linux 7.6 supports the rise of cloud-native technologies by introducing Red Hat's lightweight container toolkit, comprising CRI-O, Buildah, Skopeo, and now Podman. Each of these tools is built on fully open source and community-backed technologies, based on open standards like the Open Container Initiative (OCI) format.

Podman complements Buildah and Skopeo while sharing the same foundations as CRI-O. It enables users to run containers and groups of containers (pods) from a familiar command-line interface, eliminating the need for a daemon. This in turn helps reduce the complexity of container creation while making it easier for developers to build containers on workstations, in continuous integration/continuous development (CI/CD) systems, and within high-performance computing (HPC) or big data scheduling systems.

For more information on this release, check out Red Hat's official website.

Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
4 reasons IBM bought Red Hat for $34 billion


Facebook open sources a set of Linux kernel products including BPF, Btrfs, Cgroup2, and others to address production issues

Bhagyashree R
31 Oct 2018
3 min read
Yesterday, Facebook open sourced a suite of Linux kernel components and tools. The suite includes products that can be used for resource control and utilization, workload isolation, load balancing, measuring, monitoring, and much more. Facebook already uses these products on a massive scale throughout its infrastructure, and many other organizations are adopting them as well. The following are some of the products that have been open sourced:

Berkeley Packet Filter (BPF)
BPF is a highly flexible Linux kernel code execution engine. It enables safe and easy modifications of kernel behavior with custom code by allowing bytecode to run at various hook points. Currently, it is widely used for networking, tracing, and security in a number of Linux kernel subsystems.

What can you do with it?
- Extend Linux kernel behavior for a variety of purposes such as load balancing, container networking, kernel tracing, monitoring, and security.
- Solve production issues where user-space solutions alone aren't enough by executing the user-space code in the kernel.

Btrfs
Btrfs is a copy-on-write (CoW) filesystem, which means that instead of overwriting data in place, all updates to metadata or file data are written to a new location on disk. Btrfs mainly focuses on fault tolerance, repair, and easy administration. It supports features such as snapshots, online defragmentation, pooling, and integrated multiple-device support, and it is the only filesystem implementation that works with resource isolation.

What can you do with it?
- Address and manage large storage subsystems by leveraging features like snapshots, load balancing, online defragmentation, pooling, and integrated multiple-device support.
- Manage, detect, and repair errors with data and metadata checksums, mirroring, and file self-healing.

Netconsd (netconsole daemon)
Netconsd is a UDP-based daemon that provides lightweight transport for Linux netconsole messages. It receives and processes log data from the Linux kernel and serves it up as structured data. Simply put, it is a kernel module that sends all kernel log messages over the network to another computer, without involving user space.

What can you do with it?
- Detect, reorder, or request retransmission of missing messages with the provided metadata.
- Extract meaningful signal from the data logged by netconsd to rapidly identify and diagnose misbehaving services.

Cgroup2
Cgroup2 is a Linux kernel feature that allows you to group and structure workloads and control the amount of system resources assigned to each group. It consists of controllers for memory, I/O, CPU, and more. Using cgroup2, you can isolate workloads, prioritize them, and configure the distribution of resources.

What can you do with it?
- Create isolated groups of processes, then control and measure the distribution of memory, IO, CPU, and other resources for each group.
- Detect resource shortages using PSI pressure metrics for memory, IO, and CPU.
- Deal with increasing resource pressure more proactively and prevent conflicts between workloads.

Along with these products, Facebook has open-sourced Pressure Stall Information (PSI), oomd, and many others. You can find the complete list of these products on the Facebook Open Source website, and also check out the official announcement.
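Since cgroup2 exposes everything as files under a unified hierarchy, any language can drive it. The TypeScript/Node sketch below is illustrative only: it assumes a cgroup2 hierarchy mounted at /sys/fs/cgroup, sufficient privileges, and that the memory and cpu controllers are enabled in the parent's cgroup.subtree_control; the group name is made up for the example.

```typescript
// Illustrative sketch: create a cgroup2 group, cap its resources, and move
// a process into it by writing to the kernel's file-based interface.
import { mkdirSync, writeFileSync, readFileSync } from "fs";
import { join } from "path";

const CGROUP_ROOT = "/sys/fs/cgroup"; // assumes a unified (v2) hierarchy

function createGroup(name: string, memoryMax: string, cpuWeight: number): string {
  const dir = join(CGROUP_ROOT, name);
  mkdirSync(dir, { recursive: true });       // making the directory creates the cgroup
  writeFileSync(join(dir, "memory.max"), memoryMax);         // e.g. "512M"
  writeFileSync(join(dir, "cpu.weight"), String(cpuWeight)); // relative share, 1-10000
  return dir;
}

function addProcess(dir: string, pid: number): void {
  // Writing a PID to cgroup.procs moves that process into the group.
  writeFileSync(join(dir, "cgroup.procs"), String(pid));
}

const group = createGroup("batch-workload", "512M", 50); // name is illustrative
addProcess(group, process.pid);
console.log("memory.max =", readFileSync(join(group, "memory.max"), "utf8").trim());
```

The isolation, prioritization, and PSI pressure metrics described above are reachable through this same read/write-a-file interface; the pressure data, for instance, lives in the memory.pressure, io.pressure, and cpu.pressure files of the same directory.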
Facebook open sources QNNPACK, a library for optimized mobile deep learning
Facebook introduces two new AI-powered video calling devices "built with Privacy + Security in mind"
Facebook's Glow, a machine learning compiler, to be supported by Intel, Qualcomm and others


Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report

Melisha Dsouza
31 Oct 2018
3 min read
Xilinx Inc. has reportedly won orders from Microsoft Corp.'s Azure cloud unit to supply half of the co-processors currently used on Azure servers to handle machine-learning workloads, replacing chips made by Intel Corp., according to people familiar with Microsoft's plans as reported by Bloomberg.

Microsoft's decision adds another chip supplier in order to serve more customers interested in machine learning. To date, this domain was served by Intel's Altera division. Now that Xilinx has bagged the deal, does this mean Intel will no longer serve Microsoft? Bloomberg reported Microsoft's confirmation that it will continue its relationship with Intel for its current offerings, and a Microsoft spokesperson added that "There has been no change of sourcing for existing infrastructure and offerings". Sources familiar with the arrangement also noted that Xilinx chips will have to achieve performance goals to determine the scope of their deployment.

Cloud vendors these days are investing heavily in research and development centered on machine learning. The past few years have seen a growing need for flexible chips that can be configured to run machine-learning services, and companies like Microsoft, Google, and Amazon, massive buyers of server chips, are always looking for alternatives to standard processors to increase the efficiency of their data centers.

Holger Mueller, an analyst with Constellation Research Inc., told SiliconANGLE that "Programmable chips are key to the success of infrastructure-as-a-service providers as they allow them to utilize existing CPU capacity better. They're also key enablers for next-generation application technologies like machine learning and artificial intelligence."

Earlier this year, Xilinx CEO Victor Peng made clear his plans to focus on data center customers, saying "data center is an area of rapid technology adoption where customers can quickly take advantage of the orders of magnitude performance and performance per-watt improvement that Xilinx technology enables in applications like artificial intelligence (AI) inference, video and image processing, and genomics".

Last month, Xilinx made headlines with the announcement of a new breed of computer chips designed specifically for AI inference. These chips combine FPGAs with two higher-performance Arm processors plus a dedicated AI compute engine, targeting the application of deep learning models in consumer and cloud environments. The chips promise higher throughput, lower latency, and greater power efficiency than existing hardware. Xilinx is clearly taking noticeable steps to make itself seen in the AI market.

Head over to Bloomberg for the complete coverage of this news.

Microsoft Ignite 2018: New Azure announcements you need to know
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure's Intelligent Cloud

Fedora 29 released with Modularity, Silverblue, and more

Bhagyashree R
31 Oct 2018
3 min read
After releasing the Fedora 29 beta last month, the Fedora community has announced the stable release of Fedora 29. This is the first release to include Fedora Modularity across all Fedora variants, that is, Workstation, Server, and AtomicHost. Other updates include an upgrade to GNOME 3.30, ZRAM for ARM images, and a Vagrant image for Fedora Scientific. Additionally, the default Node.js interpreter is updated to Node.js 10.x, Python 3.6 is updated to Python 3.7, and Ruby on Rails is updated to 5.2.

Fedora Modularity

Modularity gives you the option to install additional versions of software on independent life cycles. You no longer have to make your whole OS upgrade decisions based on individual package versions; you can keep your OS up to date while keeping the right version of an application, even when the default version in the distribution changes. These are the advantages it brings:

Moving fast and slow
Different users have different needs: while developers want the latest versions possible, system administrators prefer stability over a longer period. With Fedora Modularity you can, depending on your use case, let some parts of the system move slowly and other parts move faster by choosing between the latest release and stability.

Automatically rebuild containers
Many containers are built manually and are not actively maintained; very often they are not patched with security fixes but are still used by many people. To allow maintaining and building multiple versions, Modularity provides a build environment for packagers, and these containers get automatically rebuilt every time the underlying packages are updated.

Automating packager workflow
Fedora contributors often have to maintain their packages in multiple branches, performing a series of manual steps during the build process. Modularity allows packagers to maintain a single source for multiple outputs and brings additional automation to the package build process.

Fedora Silverblue

This release introduces the newly named Fedora Silverblue, formerly known as Fedora Atomic Workstation. It provides atomic upgrades, easy rollbacks, and workflows that are familiar from OSTree-based servers. Additionally, it delivers desktop applications as Flatpaks, which gives better isolation and solves longstanding issues with using yum/dnf for desktop applications.

GNOME 3.30

The default desktop environment of Fedora 29 is based on GNOME 3.30. This version of GNOME comes with improved desktop performance and screen sharing, and it supports automatic updates for Flatpak, a next-generation technology for building and distributing applications on Linux.

Read the full announcement of the Fedora 29 release on its official website.

Swift is now available on Fedora 28
Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes
GNOME 3.30 released with improved Desktop performance, Screen Sharing, and more


Codefresh’s Fixvember, a DevOps hackathon to encourage developers to contribute to open source

Sugandha Lahoti
30 Oct 2018
2 min read
Open source is getting a lot of attention these days, and to incentivize people to contribute to it, Codefresh has launched "Fixvember", a do-it-from-home DevOps hackathon. Codefresh is a Kubernetes-native CI/CD platform that allows for creating powerful pipelines based on DinD (Docker-in-Docker) as a service, and it provides self-service test environments, release management, and a Docker and Helm registry.

Codefresh's Fixvember is a DevOps hackathon in which Codefresh will provide DevOps professionals with a limited-edition t-shirt for contributing to open source. The event encourages developers (and not just Codefresh users) to make at least three contributions to open source projects, including building automation, adding better testing, and fixing bugs. The focus is on making engineers more successful by following DevOps best practices. Adding a Codefresh YAML to an open-source repo may also earn developers additional prizes or recognition.

Codefresh debuts Fixvember in sync with the launch of public-facing builds on the Codefresh platform. To increase the adoption of CI/CD processes, it is offering a sizable free tier with everything teams need: 120 builds/month, a private Docker registry, a Helm repository, and Kubernetes/Helm release management, all for free.

Developers can participate by following these steps:
Step 1: Sign up at codefresh.io/fixvember
Step 2: Make 3 open source contributions that improve DevOps. This could be adding or updating a Codefresh pipeline in a repo, adding tests or validation to a repo, or just fixing bugs.
Step 3: Submit your results using your special email link.

"I can't promise the limited-edition t-shirt will increase in value, but if it does, I bet it will be worth $1,000 by next year. The FDA prevents me from promising any health benefits, but it's possible this t-shirt will actually make you smarter," joked Dan Garfield, Chief Technology Evangelist for Codefresh. "Software engineers sometimes have a hero complex that adding cool new features is the most valuable thing. But, being 'Super Fresh' means you do the dirty work that makes new features deploy successfully. Adding automated pipelines, writing tests, or even fixing bugs are the lifeblood of these projects."

Read more about Fixvember on the Codefresh blog.

Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
JFrog, a DevOps based artifact management platform, bags a $165 million Series D funding
Is your Enterprise Measuring the Right DevOps Metrics?