
Tech News - Cloud & Networking

376 Articles

Yandex launched an intelligent public cloud platform, Yandex.Cloud

Savia Lobo
06 Sep 2018
2 min read
Yesterday, Russia's largest search engine, Yandex, launched its intelligent public cloud platform, Yandex.Cloud. The platform has been tested by more than 50 Russian and international companies since April. Yandex.Cloud is easy to use and offers flexible pay-per-use pricing. It also provides easy access to all of Yandex's technologies, so companies can use it to complement an existing IT infrastructure or even as an alternative to it. The platform will help companies and industries of different sizes boost their efficiency or expand their business without large-scale investment. Yandex plans to roll out Yandex.Cloud gradually, first to users of Yandex services for business, and then to everyone by the end of 2018. The platform enables companies to store and use databases containing personal data in Russia, as required by law.

Features of the Yandex.Cloud public cloud platform

A scalable virtual infrastructure
The new public cloud platform includes a scalable virtual infrastructure with multiple management options: users can manage it from a graphical interface or the command line. It also includes developer tools for popular programming languages such as Python and Go.

Automated services
Labour-intensive management tasks for popular database systems such as PostgreSQL, ClickHouse (Yandex's open source high-performance database management system), and MongoDB have been automated.

AI-based Yandex services
Yandex.Cloud includes AI-based services such as SpeechKit speech recognition and synthesis and Yandex.Translate machine translation.

Yan Leshinsky, Head of Yandex.Cloud, said, "Yandex has an entire ecosystem of successful products and services that are used by millions of people on a daily basis. Yandex.Cloud provides access to the same infrastructure and technologies that we use to power Yandex services, creating unique opportunities for any business to develop their products and services based on this platform."

To know more about Yandex.Cloud, visit its official website.

Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
Machine learning APIs for Google Cloud Platform
Cloud Filestore: A new high-performance storage option by Google Cloud Platform
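Since the platform automates management of database systems such as PostgreSQL, an application would typically connect to a managed cluster through standard client libraries. A minimal sketch, assuming a hypothetical cluster hostname, port, and credentials (none of these values come from the announcement):

```python
# Sketch: building a libpq-style connection string for a hypothetical
# managed PostgreSQL cluster. Host, port, and credentials are placeholders.
def build_dsn(host, port, dbname, user, password, sslmode="verify-full"):
    """Assemble a key=value DSN string understood by libpq-based clients."""
    return (
        f"host={host} port={port} dbname={dbname} "
        f"user={user} password={password} sslmode={sslmode}"
    )

dsn = build_dsn("rc1a-example.mdb.example.net", 6432, "appdb", "app", "s3cret")
# A client such as psycopg2 would consume it as psycopg2.connect(dsn)
# (requires a live cluster, so the call is not made here).
```

Requiring TLS (`sslmode=verify-full`) by default reflects the usual posture for a database reachable over a cloud network.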

Atlassian acquires OpsGenie, launches Jira Ops to make incident response more powerful

Bhagyashree R
05 Sep 2018
2 min read
Yesterday, Atlassian made two major announcements: the acquisition of OpsGenie and the release of Jira Ops. Both products aim to help IT operations teams resolve downtime quickly and reduce the occurrence of incidents over time. Atlassian is an Australian enterprise software company that develops collaboration software for teams, with products including Jira, Confluence, HipChat, Bitbucket, and Stash.

OpsGenie: Alert the right people at the right time

OpsGenie is an IT alert and notification management tool that routes critical alerts to the right people (operations and software development teams). It uses a sophisticated combination of scheduling, escalation paths, and notifications that take things like time zones and holidays into account. OpsGenie is a prompt and reliable alerting system with the following features:

It integrates with monitoring, ticketing, and chat tools to notify the team over multiple channels, providing the information your team needs to begin resolution immediately.
It provides various notification methods, such as email, SMS, push, phone call, and group chat, to ensure alerts are seen.
You can build and modify schedules and define escalation rules within one interface.
It tracks everything related to alerts and incidents, helping you gain insight into areas of success and opportunities for improvement.
You can define escalation policies and on-call schedules with rotations to notify the right people and escalate when necessary.

Jira Ops: Resolve incidents faster

Jira Ops is a unified incident command center that gives the response team a single place for response coordination. It integrates with OpsGenie, Slack, Statuspage, PagerDuty, and xMatters. It guides the response team through the response workflow and automates common steps, such as creating a new Slack room for each incident. Jira Ops is available through Atlassian's early access program. It helps you resolve downtime quickly by providing the following functionality:

It quickly alerts you about what is affected and what the associated impacts are.
You can check the status, severity level, and duration of the incident.
You can see real-time response activities.
You can also find the associated Slack channel, current incident manager, and technical lead.

You can find more details on OpsGenie and Jira Ops on Atlassian's official website.

Atlassian sells Hipchat IP to Slack
Atlassian open sources Escalator, a Kubernetes autoscaler project
Docker isn't going anywhere
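The escalation-policy idea described above — notify a first responder, then escalate after a delay — can be sketched as a small data structure. The target names and delays here are invented for illustration, not taken from OpsGenie:

```python
# Sketch of an escalation policy: each step names a target and how many
# minutes after the alert fires that target should be notified.
ESCALATION_POLICY = [
    {"after_minutes": 0,  "notify": "on-call engineer"},
    {"after_minutes": 10, "notify": "team lead"},
    {"after_minutes": 30, "notify": "engineering manager"},
]

def targets_notified(minutes_since_alert):
    """Return everyone who should have been notified by this point."""
    return [
        step["notify"]
        for step in ESCALATION_POLICY
        if step["after_minutes"] <= minutes_since_alert
    ]
```

Under this sketch, an alert that is still unacknowledged at the 15-minute mark would have paged the first two targets.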

The future of Jenkins is cloud native and a faster development pace with increased stability

Prasad Ramesh
04 Sep 2018
4 min read
Jenkins has been a success for more than a decade, largely due to its extensibility, its community, and its general-purpose design. But some challenges and problems in it have become more pronounced. Kohsuke Kawaguchi, the creator of Jenkins, is now planning steps to solve these problems and make the platform better.

Challenges in Jenkins

With growing competition in continuous integration (CI), the following limitations in Jenkins get in teams' way. Some of them discourage admins from installing and using plugins.

Service instability: CI is a critical service nowadays. People are running bigger workloads, need more plugins, and expect high availability; like instant messaging platforms, the service needs to be online all the time. Jenkins cannot keep up with this expectation, and a large instance requires a lot of overhead to keep running. It is common for someone to restart Jenkins every day, and that delays processes. Errors need to be contained to a specific area without impacting the whole service.

Brittle configuration: Installing or upgrading plugins and tweaking job settings have caused side effects, which makes admins lose confidence that they can make these changes safely. There is a fear that the next upgrade might break something, cause problems for other teams, and affect delivery.

Assembly required: Jenkins requires assembling service blocks to make it work as a whole. As CI has become mainstream, users want something that can be deployed in a few clicks. Having too many choices is confusing and leads to uncertainty when assembling. This is not something that can be solved by creating more plugins.

Reduced development velocity: It is difficult for a contributor to make a change that spans multiple plugins. The tests do not give enough confidence to ship code; many of them do not run automatically, and the coverage is not deep.

Changes and steps to make Jenkins better

There are two key efforts here: Cloud Native Jenkins and Jolt. Cloud Native Jenkins is a CI engine that runs on Kubernetes and has a different architecture; Jolt will continue Jenkins 2 at a faster development pace with increased stability.

Cloud Native Jenkins

Cloud Native Jenkins is a sub-project in the context of the Cloud Native SIG and will use Kubernetes as its runtime. It will have a new extensibility mechanism to retain what works and to continue the development of the automation platform's ecosystem. Data will live on cloud managed data services to achieve high availability and horizontal scalability, relieving admins of additional responsibilities. Configuration as Code and Jenkins Evergreen help with the brittleness. There are also plans to make Jenkins secure by default and to continue with Jenkins X, which has been received very well. The aim is to get things going in five clicks through easy integration with key services.

Jolt in Jenkins

Cloud Native Jenkins is not usable for everyone and targets only a particular set of functionality. It also requires a platform that has limited adoption today, so Jenkins 2 will be continued at a faster pace; for this, Jolt in Jenkins is introduced. It is inspired by what happened in the development of Java SE: a change in the release model, shedding parts in order to move faster. There will be a major version number change every couple of months. The platform needs to stay largely compatible, and the pace needs to justify any inconvenience put on users.

For more, visit the official Jenkins blog.

How to build and enable the Jenkins Mesos plugin
Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
Everything you need to know about Jenkins X, the new cloud native CI/CD solution on Kubernetes
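Configuration as Code, mentioned above as the answer to brittle configuration, lets admins describe a Jenkins instance in a version-controlled YAML file instead of clicking through the UI. A minimal illustrative sketch — the keys follow the JCasC plugin's conventions, but every value here is invented:

```yaml
# jenkins.yaml — minimal Configuration as Code sketch (illustrative values)
jenkins:
  systemMessage: "Configured as code; manual changes will be overwritten"
  numExecutors: 2
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD}"  # injected from the environment
```

Because the file lives in version control, an upgrade that breaks configuration can be diffed and rolled back, which addresses exactly the loss of admin confidence described above.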

Ubuntu-free Linux Mint project, LMDE 3 'Cindy' Cinnamon, released

Savia Lobo
04 Sep 2018
2 min read
The Linux Mint project community announced the release of LMDE 3 Cinnamon, codenamed 'Cindy'. LMDE (Linux Mint Debian Edition) is a Linux Mint project whose main goal is to see how viable the distribution would be, and how much work would be necessary, if Ubuntu were ever to disappear. LMDE aims to be similar to Linux Mint, but without the use of Ubuntu; instead, the LMDE package base is provided by Debian. LMDE 3 Cindy includes some bug and security fixes, while the Debian base packages remain unchanged. Mint and desktop components are updated continuously; once ready, newly developed features land directly in LMDE. These changes are then staged for inclusion in the next Linux Mint point release, whose date has not yet been disclosed.

System requirements for LMDE 3 'Cindy' Cinnamon

1GB RAM (2GB recommended for comfortable usage)
15GB of disk space (20GB recommended)
1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don't fit on the screen)

Known issues and workarounds

Locked root account: The root account is now locked by default. To use the recovery console (from the Grub menu) or log in as root, first give root a new password: sudo passwd root

Secure Boot: If the computer uses Secure Boot, it needs to be disabled; Debian Stretch, and therefore LMDE 3, does not support it.

VirtualBox Guest Additions: To add support for shared folders, drag and drop, proper acceleration, and display resolution in VirtualBox, click the "Devices" menu of VirtualBox and choose "Insert Guest Additions CD Image". Choose "download" when asked and follow the instructions. Read Installing the VirtualBox Guest Additions for more details.

Sound and microphone issues: If there is any issue with the microphone or the sound output, install pavucontrol. This adds "PulseAudio Volume Control" to the menu, an application with more configuration options than the default volume control.

Issues with KDE apps: If you experience issues with KDE apps (Okular, Gwenview, KStars, etc.), run the following command: apt install kdelibs-bin kdelibs5-data kdelibs5-plugins

Read more about this release in detail in the LMDE 3 documentation.

Facebook and Arm join Yocto Project as platinum members for embedded Linux development
Bodhi Linux 5.0.0 released with updated Ubuntu core 18.04 and a modern look
Google becomes a new platinum member of the Linux Foundation

Amazon stocks surge past $2000, expect Amazon to join Apple in the $1 trillion market cap club anytime now

Fatema Patrawala
31 Aug 2018
2 min read
As per Market Watch reports yesterday, Amazon shares surged past the $2,000 milestone for the first time. 31 of the 47 analysts surveyed had price targets above that level, projecting a market capitalization of $1 trillion for the e-commerce giant if the stock rises to at least $2,050.27. The stock (AMZN, +0.03%) ran up as much as 1.4% to an all-time intraday high of $2,025.57 earlier in the session, before settling up 0.3% in afternoon trade. Amazon would become the second company, after Apple Inc. (AAPL, +0.21%), to top $1 trillion in market cap if the stock reaches at least $2,050.27. Apple became the first-ever $1 trillion U.S. company on August 2 this year.

In a research note, analyst Greg Melich at MoffettNathanson highlights the growth potential of Amazon's cloud business. He wrote that despite growing competition from other technology-sector heavyweights like Microsoft Corp and Google's Alphabet Inc, AWS will gain market share and expand profitability. Melich says, "We are often asked, is Amazon a retailer, a tech company, or a budding media juggernaut? The answer is all of the above. Amazon's retail business remains $1,200 of value in the sum-of-the-parts valuation of the stock price, while AWS accounts for about $900."

Read the full coverage on the Market Watch blog.

Amazon calls Senator Sanders' claims about 'poor working conditions' "inaccurate and misleading"
Amazon may be planning to move from Oracle by 2020
Amazon Echo vs Google Home: Next-gen IoT war
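As a sanity check on the figures above: a $1 trillion market cap at a threshold price of $2,050.27 per share implies the share count the projection assumes.

```python
# Back-of-the-envelope check of the $1 trillion threshold quoted above.
TARGET_MARKET_CAP = 1_000_000_000_000  # $1 trillion
THRESHOLD_PRICE = 2050.27              # price per share at which the cap is hit

implied_shares = TARGET_MARKET_CAP / THRESHOLD_PRICE
print(f"Implied shares outstanding: {implied_shares / 1e6:.1f} million")
# → Implied shares outstanding: 487.7 million
```

That roughly 488 million figure is an implication of the two numbers in the report, not a count published in the article itself.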

Microsoft Azure now supports NVIDIA GPU Cloud (NGC)

Vijin Boricha
31 Aug 2018
2 min read
Yesterday, Microsoft announced NVIDIA GPU Cloud (NGC) support on its Azure platform. With this, data scientists, researchers, and developers can build, test, and deploy GPU computing projects on Azure. Users can run containers from NGC on Azure, gaining access to on-demand GPU computing that scales with their requirements, which eliminates the complexity of software integration and testing.

The need for NVIDIA GPU Cloud (NGC)

It is challenging and time-consuming to build and test reliable software stacks for popular deep learning software such as TensorFlow, Microsoft Cognitive Toolkit, PyTorch, and NVIDIA TensorRT, due to operating system-level and framework dependencies. Finding, installing, and testing the correct dependencies is a hassle, as it must be done in a multi-tenant environment and across many systems. NGC eliminates these complexities by offering pre-configured containers with GPU-accelerated software.

Users can now access 35 GPU-accelerated containers for deep learning software, high-performance computing applications, high-performance visualization tools, and more, enabled to run on the following Microsoft Azure instance types with NVIDIA GPUs:

NCv3 (1, 2 or 4 NVIDIA Tesla V100 GPUs)
NCv2 (1, 2 or 4 NVIDIA Tesla P100 GPUs)
ND (1, 2 or 4 NVIDIA Tesla P40 GPUs)

According to NVIDIA, the same NGC containers also work across Azure instance types with different types or quantities of GPUs.

Using NGC containers with Azure is easy: sign up for a free NGC account, then visit the Microsoft Azure Marketplace to find the pre-configured NVIDIA GPU Cloud Image for Deep Learning and high-performance computing. Once you launch the NVIDIA GPU instance on Azure, you can pull the containers you want from the NGC registry into your running instance.

You can find detailed steps for setting up NGC in the Using NGC with Microsoft Azure documentation.

Microsoft Azure's new governance DApp: An enterprise blockchain without mining
NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499
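The instance table above maps cleanly to a small lookup structure; a sketch of picking an Azure instance family by GPU model, with family and GPU names taken from the announcement (each family offers 1, 2, or 4 GPUs):

```python
# Azure instance families with NVIDIA GPUs, as listed in the announcement.
AZURE_GPU_FAMILIES = {
    "NCv3": "Tesla V100",
    "NCv2": "Tesla P100",
    "ND":   "Tesla P40",
}

def families_with_gpu(gpu_model):
    """Return the instance families that offer the requested GPU model."""
    return sorted(f for f, gpu in AZURE_GPU_FAMILIES.items() if gpu == gpu_model)
```

A scheduler or provisioning script could use such a table to choose the cheapest family that satisfies a container's GPU requirement.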

OpenStack Rocky released to meet AI, machine learning, NFV and edge computing demands for infrastructure

Savia Lobo
31 Aug 2018
4 min read
Yesterday, OpenStack announced its 18th release, Rocky. The release addresses new infrastructure demands driven by AI, machine learning, NFV, and edge computing by starting with a bare metal foundation and enabling containers, VMs, and GPUs. Rocky is the second OpenStack update of 2018 and follows the Queens milestone, which became available on February 28. Rocky is named after the mountains stretching across British Columbia, the location of the previous OpenStack Summit.

Highlights and improvements in OpenStack Rocky

Rocky includes several enhancements, with two key highlights:

Improvements to the Ironic project's bare metal provisioning service
Fast Forward Upgrades

Improvements to the Ironic project's bare metal provisioning service

OpenStack Ironic brings more sophisticated management and automation capabilities to bare metal infrastructure. It is also a driver for Nova, allowing multi-tenancy. This means users can manage physical infrastructure the same way they are used to managing VMs, especially with the new Ironic features landing in Rocky:

User-managed BIOS settings: The BIOS (basic input/output system) performs hardware initialization and has many configuration options that support a variety of use cases when customized. Different BIOS options can help users gain performance, configure power management, or enable technologies such as SR-IOV or DPDK. Ironic now lets users manage BIOS settings, supporting use cases like NFV and giving users more flexibility.

Conductor groups: In Ironic, the "conductor" uses drivers to execute operations on the hardware. Ironic has introduced the "conductor_group" property, which can be used to restrict which nodes a particular conductor (or conductors) has control over. This allows users to isolate nodes based on physical location, reducing network hops for increased security and performance.

RAM disk deployment interface: This is a new Ironic interface for diskless deployments. It is seen in large-scale and high-performance computing (HPC) use cases where operators want fully ephemeral instances for rapidly standing up a large-scale environment.

Fast Forward Upgrades (FFU)

The Fast Forward Upgrade feature from the TripleO project helps users overcome upgrade hurdles and get onto newer OpenStack releases faster. FFU lets a TripleO user on release "N" speed through intermediary releases to get onto release "N+3" (the current iteration of FFU covers Newton to Queens). This gives users access to the ease-of-operations enhancements and new developments, such as vGPU support, present in Queens.

Additional highlights in Rocky

Cyborg: In Rocky, Cyborg introduces a new REST API for FPGAs, an accelerator used in machine learning, image recognition, and other HPC use cases. This allows users to dynamically change the functions loaded on an FPGA device.

Qinling: Introduced in Rocky, Qinling (pronounced "CHEEN-LEENG") is a function-as-a-service (FaaS) project that delivers serverless capabilities on top of OpenStack clouds. It allows users to run functions on OpenStack clouds without managing servers, VMs, or containers, while still connecting to other OpenStack services such as Keystone.

Masakari: Masakari supports high availability by providing automatic recovery from failures. It also expands its monitoring capabilities to include internal failures in any instance, such as a hung OS, data corruption, or a scheduling failure.

Octavia: The load balancing project adds support for UDP (User Datagram Protocol), bringing load balancing to edge and IoT use cases. UDP is the transport protocol frequently used in voice, video, and other real-time applications.

Magnum: This project makes container orchestration engines and their resources first-class resources in OpenStack. Magnum became a Certified Kubernetes installer in the Rocky cycle; passing the conformance tests gives users confidence that Magnum interacts correctly with Kubernetes.

To know more about the other highlights in detail, visit Rocky's release notes.

Automating OpenStack Networking and Security with Ansible 2 [Tutorial]
Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud
Deploying OpenStack – the DevOps Way
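The conductor_group mechanic described earlier is essentially a filter: a conductor only takes charge of nodes whose "conductor_group" property matches its own group. A hedged sketch of the idea (the node records are invented; this is not Ironic's actual internal code):

```python
# Sketch of conductor-group filtering: a conductor manages only the nodes
# whose "conductor_group" property matches its own. Node records are invented.
nodes = [
    {"name": "node-1", "conductor_group": "dc-east"},
    {"name": "node-2", "conductor_group": "dc-west"},
    {"name": "node-3", "conductor_group": "dc-east"},
]

def nodes_for_conductor(nodes, conductor_group):
    """Return the nodes a conductor in the given group is responsible for."""
    return [n["name"] for n in nodes if n["conductor_group"] == conductor_group]
```

Grouping by physical location this way is what keeps a conductor's traffic to its nodes within one site, which is the "reducing network hops" benefit the release notes describe.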

Russian censorship board threatens to block search giant Yandex due to pirated content

Sugandha Lahoti
30 Aug 2018
3 min read
Update, 31st August 2018: Yandex has refused to remove the pirated content. According to a statement from the company, Yandex believes the law is being misinterpreted: while pirated content must be removed from sites hosting it, the removal of links to such content from search engines falls outside the scope of the current legislation. "In accordance with the Federal Law On Information, Information Technologies, and Information Protection, the mechanics are as follows: pirated content should be blocked by site owners and on the so-called mirrors of these sites," Yandex says. A Yandex spokesperson said that the company works in "full compliance" with the law: "We will work with market participants to find a solution within the existing legal framework." Check out more info on Interfax.

Roskomnadzor has found Russian search giant Yandex guilty of linking to pirated content. The Federal Service for Supervision of Communications, Information Technology and Mass Media, or Roskomnadzor, is the Russian federal executive body responsible for censorship in media and telecommunications. The Moscow City Court found the website guilty of including links to pirated content last week. The search giant was ordered to remove those links, and the mandate was reiterated by Roskomnadzor this week. Per the authorities, if Yandex does not take action by the end of the day, its video platform will be blocked by the country's ISPs.

Last week, major Russian broadcasters Gazprom-Media, National Media Group (NMG), and others protested against pirated content by removing their TV channels from Yandex's 'TV Online' service. They said they would allow their content to appear again only if Yandex removed pirated content completely. Following this, Gazprom-Media filed a copyright infringement complaint with the Moscow City Court, which subsequently compelled Yandex to remove links to pirated TV shows belonging to Gazprom-Media.

Pirated content has been a long-standing challenge for the telecom sector and is yet to be eradicated. It leads to lost revenue, and watching illegally distributed movies violates copyright and intellectual property law. The Yandex website is heavily populated with pirated content, especially TV shows and movies.

In a statement to Interfax, Deputy Head of Roskomnadzor Vadim Subbotin warned that Yandex.video will be blocked Thursday night (August 30) if the pirate links aren't removed. "If the company does not take measures, then according to the law, the Yandex.Video service must be blocked. There's nowhere to go," Subbotin said. The search giant has not yet responded to the accusation. You can check out the detailed coverage of the news on Interfax.

Adblocking and the Future of the Web
Facebook, Twitter take down hundreds of fake accounts with ties to Russia and Iran
YouTube has a $25 million plan to counter fake news and misinformation

Storj Labs' new Open Source Partner Program to generate revenue opportunities for open source companies

Melisha Dsouza
30 Aug 2018
3 min read
At the Linux Foundation's Open Source Summit in Vancouver, Storj Labs, a leader in decentralized cloud storage, launched its Open Source Partner Program. The program enables open-source projects to generate revenue when their users store data in the cloud. It was launched to bridge the "major economic disconnect between the 24-million total open-source developers and the $180 billion cloud market," as stated by Ben Golub, Storj's executive chairman and interim CEO.

How does the Open Source Partner Program work?

Open-source projects simply need to integrate Storj into their existing cloud application infrastructure. Since Storj uses an Amazon Web Services (AWS) S3-compliant interface, this integration should be easy. Storj provides blockchain-encrypted, distributed cloud storage that improves data security, reliability, and performance compared with traditional cloud storage approaches. Client-side encryption ensures that data can only be accessed by its owners. In return, open-source projects that use the Storj network receive a continuous revenue stream: 60% of gross revenue goes to Storj's storage farmers, and 40% is split among open-source developers. Storj tracks data storage usage through simple data connectors integrated with partners' platforms. Partners are also given help desk support and tools to test the network's performance and capabilities.

What's in it for open source companies?

Monetization has always been a challenge for open source companies; ultimately, even the ones that only provide free products require revenue to sustain themselves. Open source drives a sizable majority of the $200-billion-plus cloud computing market, yet comparatively little of that revenue currently makes its way directly back to open-source projects and companies. The Open Source Partner Program aims to help such companies grow and meet their financial goals.

What's in it for Storj?

While the revenue program benefits open source companies, it can also be viewed as an effective marketing strategy for Storj. Open source projects are all the rage these days, and the more companies turn to Storj for decentralized cloud storage, the more popularity and recognition Storj gets. Storj and open source companies alike value openness, decentralization, and broad-based individual empowerment, which is why the program strikes a natural balance in supporting open source projects.

Storj Labs has already signed over ten major open-source partners, including Confluent, Couchbase, FileZilla, MariaDB, MongoDB, and Nextcloud. These partners will be given early access to the V3 network private alpha. You can get a complete overview of the program in Storj's blog post.

5 reasons why your business should adopt cloud computing
Demystifying Clouds: Private, Public, and Hybrid clouds
Google's second innings in China: Exploring cloud partnerships with Tencent and others
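The 60/40 revenue split described above is simple to state precisely. A sketch of how a month's gross revenue would divide — the function and the dollar figure are illustrative, not Storj's accounting code:

```python
# Sketch of the stated revenue split: 60% of gross revenue to storage
# farmers, 40% split among open-source partner projects.
def split_revenue(gross):
    """Return (farmers_share, open_source_share) for a gross revenue amount."""
    farmers_share = gross * 0.60
    open_source_share = gross * 0.40
    return farmers_share, open_source_share

farmers, open_source = split_revenue(100_000.0)
# On $100,000 of gross revenue: $60,000 to farmers, $40,000 to partners.
```

How the 40% pool is then apportioned among individual partner projects is not specified in the announcement.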

Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits

Sugandha Lahoti
30 Aug 2018
3 min read
Google today announced that it is stepping back from managing the Kubernetes infrastructure and is granting the Cloud Native Computing Foundation (CNCF) $9M in GCP credits for a successful transition. The credits are split over three years to cover infrastructure costs. Google is also handing operational control of the Kubernetes project to the CNCF community, which will now own day-to-day operational tasks such as testing and builds, as well as maintaining and operating the image repository and download infrastructure.

Kubernetes was first created by Google in 2014. Since then, Google has provided the cloud resources that support the project's development, including CI/CD testing infrastructure, container downloads, and other services like DNS, all running on Google Cloud Platform. In passing the reins to the CNCF, Google's goal is to make sure "Kubernetes is ready to scale when your enterprise needs it to". The $9M grant will be dedicated to building the worldwide network and storage capacity required to serve container downloads. A large part of the grant will also fund scalability testing, which runs 150,000 containers across 5,000 virtual machines.

"Since releasing Kubernetes in 2014, Google has remained heavily involved in the project and actively contributes to its vibrant community. We also believe that for an open source project to truly thrive, all aspects of a mature project should be maintained by the people developing it. In passing the baton of operational responsibilities to Kubernetes contributors with the stewardship of the CNCF, we look forward to seeing how the project continues to evolve and experience breakneck adoption," said Sarah Novotny, Head of Open Source Strategy for Google Cloud.

The CNCF includes a large number of companies, such as Alibaba Cloud, AWS, Microsoft Azure, IBM Cloud, Oracle, and SAP. All of them will profit from the work of the CNCF and the Kubernetes community, and with this move Google is perhaps also spreading the load of running the Kubernetes infrastructure across these members. As mentioned in its blog post, Google looks forward to the new ideas and efficiencies that Kubernetes contributors bring to the project's operations.

To learn more, check out the CNCF announcement post and the Google Cloud Platform blog.

Kubernetes 1.11 is here!
Google Kubernetes Engine 1.10 is now generally available and ready for enterprise use
Kubernetes Container 1.1 Integration is now generally available
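The scalability-testing figure quoted above implies a straightforward container density, worth working out explicitly:

```python
# Back-of-the-envelope density for the scalability tests mentioned above.
containers = 150_000
vms = 5_000

containers_per_vm = containers // vms
print(f"Average density: {containers_per_vm} containers per VM")
# → Average density: 30 containers per VM
```

An average of 30 containers per VM is an implication of the two published numbers; the announcement itself does not state a per-VM figure.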

389 Directory Server set to replace OpenLDAP as Red Hat and SUSE withdraw support for OpenLDAP in their Enterprise Linux offerings

Bhagyashree R
29 Aug 2018
2 min read
Red Hat and SUSE have withdrawn their support for OpenLDAP in their Enterprise Linux offerings; it will be replaced by Red Hat's own 389 Directory Server. The openldap-server packages were deprecated starting from Red Hat Enterprise Linux (RHEL) 7.4 and will not be included in any future major release of RHEL. SUSE, in their release notes, have mentioned that the OpenLDAP server is still available in the Legacy Module for migration purposes, but it will not be maintained for the entire SUSE Linux Enterprise Server (SLE) 15 lifecycle.

What is OpenLDAP?

OpenLDAP is an open source implementation of the Lightweight Directory Access Protocol (LDAP) developed by the OpenLDAP Project. It is a collective effort to develop a robust, commercial-grade, open source LDAP suite of applications and development tools.

What is 389 Directory Server?

The 389 Directory Server is an LDAP server developed by Red Hat as part of the community-supported Fedora Project. The name "389" comes from the port number used by LDAP. It supports many operating systems, including Fedora, Red Hat Enterprise Linux 3 and above, Debian, and Solaris 8 and above. The 389 Directory Server packages provide the core directory services components for Identity Management (IdM) in Red Hat Enterprise Linux and the Red Hat Directory Server (RHDS). The package is not supported as a stand-alone solution for providing LDAP services.

Why did Red Hat and SUSE withdraw their support?

According to Red Hat, customers prefer the Identity Management (IdM) solution in Red Hat Enterprise Linux over the OpenLDAP server for enterprise use cases. This is why they decided to focus on the technologies that Red Hat has deep understanding of and expertise in, and has been investing in for more than a decade. By focusing on the Red Hat Directory Server and IdM offerings, Red Hat will be able to better serve the customers of those solutions and increase the value of their subscriptions.
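To make the directory-lookup idea behind both servers concrete, here is a minimal, self-contained Python sketch of an LDAP-style search: entries are attribute maps keyed by distinguished name (DN), and a simple equality filter such as `(uid=alice)` selects matching entries under a base DN. The entries and names here are hypothetical; a real deployment would query OpenLDAP or 389 Directory Server over port 389 with a proper client library, not this toy matcher.

```python
import re

def parse_filter(expr):
    """Parse a minimal LDAP equality filter like '(uid=alice)' into (attr, value)."""
    m = re.fullmatch(r"\((\w+)=([^)]+)\)", expr)
    if not m:
        raise ValueError(f"unsupported filter: {expr}")
    return m.group(1), m.group(2)

def search(entries, base_dn, expr):
    """Return the DNs under base_dn whose attribute values match the filter."""
    attr, value = parse_filter(expr)
    return [
        dn for dn, attrs in entries.items()
        if dn.endswith(base_dn) and value in attrs.get(attr, [])
    ]

# Hypothetical directory entries keyed by distinguished name (DN).
directory = {
    "uid=alice,ou=people,dc=example,dc=com": {"uid": ["alice"], "cn": ["Alice A."]},
    "uid=bob,ou=people,dc=example,dc=com": {"uid": ["bob"], "cn": ["Bob B."]},
}

print(search(directory, "dc=example,dc=com", "(uid=alice)"))
# → ['uid=alice,ou=people,dc=example,dc=com']
```

Real LDAP filters (RFC 4515) also support substring, presence, and boolean combinations; the equality case above is just the smallest illustrative slice.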
To know more about Red Hat and SUSE withdrawing their support for OpenLDAP, check out Red Hat's announcement and the SUSE release notes.

Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices

Gremlin makes chaos engineering with Docker easier with new container discovery feature

Richard Gall
28 Aug 2018
3 min read
Gremlin, the product that's bringing chaos engineering to a huge range of organizations, announced today that it has added a new feature to its product: container discovery. Container discovery will make it easier to run chaos engineering tests alongside Docker. Chaos engineering and containers have always been closely related; arguably the loosely coupled architectural style of modern software driven by containers has, in turn, led to an increased demand for chaos engineering to improve software resiliency. Matt Fornaciari, Gremlin CTO, explains that "with today's updates to the Gremlin platform, DevOps teams will be able to drastically improve the reliability of Docker in production."

Read next: How Gremlin is making chaos engineering accessible [Interview]

What does Gremlin's new container discovery feature do?

Container discovery will do two things: it will make it easier for engineers to identify specific Docker containers, but more importantly, it will also allow them to simulate attacks or errors within those containerized environments. The real benefit is that it makes the testing process much easier for engineers. Containers are, the press release notes, "often highly dynamic, ephemeral, and difficult to pinpoint at a given moment," which means identifying and isolating a particular container to run a 'chaos test' on can ordinarily be very challenging and time consuming.

Gremlin has been working with the engineering team at Under Armour. Paul Osman, Senior Engineering Manager, says that "we use Gremlin to test various failure scenarios and build confidence in the resiliency of our microservices." This new feature could save the engineers a lot of time, as he explains: "the ability to target containerized services with an easy-to-use UI has reduced the amount of time it takes us to do fault injection significantly."

Read next: Chaos Engineering: managing complexity by breaking things

Why is Docker such a big deal for Gremlin?
As noted above, chaos engineering and containers are part of the same wider shift in software architectural styles. With Docker leading the way when it comes to containerization, its market share growing healthily, making it easier to perform resiliency tests on containers is incredibly important for the product. It's not a stretch to say that Gremlin have probably been working on this feature for some time, with users placing it high on their list of must-haves.

Chaos engineering is still in its infancy; this year's Skill Up report found that it remains on the periphery of many developers' awareness. However, that could quickly change, and it appears that Gremlin are working hard to make chaos engineering not only more accessible but also more appealing to companies for whom software resiliency is essential.
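The discovery-and-target flow described earlier can be sketched in a few lines of Python. The container records and label names below are hypothetical stand-ins for what an agent would learn from the Docker daemon; the point the sketch makes is that attack targets are selected by attributes such as labels and state rather than by fixed container IDs, which keeps a chaos test valid even as ephemeral containers churn.

```python
def discover_targets(containers, label_key, label_value):
    """Select running containers whose label matches, mimicking attribute-based discovery."""
    return [
        c["id"] for c in containers
        if c["state"] == "running" and c["labels"].get(label_key) == label_value
    ]

# Hypothetical snapshot of the Docker daemon's container list at one moment.
containers = [
    {"id": "a1b2", "state": "running", "labels": {"service": "checkout"}},
    {"id": "c3d4", "state": "exited",  "labels": {"service": "checkout"}},
    {"id": "e5f6", "state": "running", "labels": {"service": "search"}},
]

targets = discover_targets(containers, "service", "checkout")
print(targets)  # only the running checkout container is selected for the attack
# → ['a1b2']
```

Because the selection re-runs against a fresh snapshot each time, the same test definition keeps working after containers restart under new IDs.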

Open Mainframe Project introduces Zowe: A new open-source framework to simplify development on z/OS, supported by IBM

Bhagyashree R
28 Aug 2018
3 min read
IBM, with its partners Rocket Software and CA Technologies, has announced the launch of Zowe at the ongoing Open Source Summit in Vancouver, Canada. It is the first z/OS open source project and is part of the Linux Foundation's Open Mainframe Project community.

Why is Zowe being introduced?

Rapid technology advancements and rising expectations around user experience demand more productive and better integrated capabilities for z/OS, the operating system for IBM mainframes. Zowe enables delivery of such an environment through an extensible open source framework. It aims to create an ecosystem of Independent Software Vendors (ISVs), system integrators, clients, and end users. By using it, development and operations teams can securely manage, control, script, and develop on the mainframe like any other cloud platform.

What are its components?

The four main components of Zowe are: the Explorer server, the API Mediation Layer, zLUX, and the Zowe CLI.

Source: Zowe

Zowe APIs and Explorers

The z/OS Management Facility (z/OSMF) supports the use of REST APIs, which are public APIs that your application can use to work with system resources and extract system data. With the help of these REST APIs, Zowe submits jobs, works with the Job Entry Subsystem (JES) queue, and manipulates UNIX System Services (USS) or Multiple Virtual Storage (MVS) datasets. Explorers are visual representations of these APIs that are wrapped in the Zowe web UI application. They create an extensible z/OS framework that provides new z/OS REST services to transform enterprise tools and DevOps processes to incorporate new technology, languages, and modern workflows.

Zowe API Mediation Layer

The key components of the API Mediation Layer are:

API Gateway: Built using Netflix Zuul and Spring Boot technology, its purpose is to forward API requests to the appropriate corresponding service through the microservice endpoint UI.
Discovery Service: Built on Eureka and Spring Boot technology, it acts as the central point in the API Gateway that accepts announcements of REST services and serves as a repository for active services.
API Catalog: Used to view the services running in the API Mediation Layer, along with the corresponding API documentation for each service.

Zowe Web UI

The web UI, named zLUX, modernizes and simplifies working on the mainframe. The UI works with the underlying REST APIs for data, jobs, and subsystems, and presents the information in a full-screen mode. It gives users a unifying experience where various applications can work together.

Zowe Command Line Interface (CLI)

The Zowe CLI allows user interactions with z/OS from different platforms. These platforms, which can be cloud or distributed systems, are able to submit jobs, issue TSO and z/OS console commands, integrate z/OS actions into scripts, and produce responses as JSON documents with the help of the Zowe CLI.

Currently, Zowe is available in beta and is not intended for production use. The Zowe Leadership Committee is targeting a stable release by the end of the year. To know more about the launch of Zowe, refer to IBM's announcement on their official website.

IBM Files Patent for "Managing a Database Management System using a Blockchain Database"
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
IBM launches Nabla containers: A sandbox more secure than Docker containers
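The gateway-plus-discovery pattern described above can be reduced to a toy sketch: services announce themselves to a discovery service, and the gateway forwards a request path to whichever active instance is registered for that service name. The service names and endpoints below are invented for illustration; the real Zowe layer is built on Netflix Zuul and Eureka, not on this simplified Python model.

```python
class DiscoveryService:
    """Central repository of active REST services (the Eureka role)."""
    def __init__(self):
        self._services = {}

    def register(self, name, endpoint):
        # A service announces itself with a name and a base endpoint.
        self._services[name] = endpoint

    def lookup(self, name):
        return self._services.get(name)

class ApiGateway:
    """Forwards '/api/<service>/...' requests to the registered endpoint (the Zuul role)."""
    def __init__(self, discovery):
        self.discovery = discovery

    def route(self, path):
        # '/api/jobs/submit' -> ['', 'api', 'jobs', 'submit']
        _, _, service, rest = path.split("/", 3)
        endpoint = self.discovery.lookup(service)
        if endpoint is None:
            raise LookupError(f"no active service named {service!r}")
        return f"{endpoint}/{rest}"

discovery = DiscoveryService()
discovery.register("jobs", "https://zosmf.example.com/jobs-api")
gateway = ApiGateway(discovery)
print(gateway.route("/api/jobs/submit"))
# → https://zosmf.example.com/jobs-api/submit
```

The design benefit mirrored here is that callers only ever know the gateway's stable address; services can move or scale as long as they keep their registration current.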
Red Hat infrastructure migration solution for proprietary and siloed infrastructure

Savia Lobo
24 Aug 2018
3 min read
Red Hat recently introduced its infrastructure migration solution to help provide an open pathway to digital transformation. The Red Hat infrastructure migration solution provides an enterprise-ready pathway to cloud-native application development via Linux containers, Kubernetes, automation, and other open source technologies. It helps organizations accelerate transformation by more safely migrating and managing workloads to an open source infrastructure platform, thus reducing cost and speeding innovation.

Joe Fernandes, Vice President, Cloud Platforms Products at Red Hat, said, "Legacy virtualization infrastructure can serve as a stumbling block too, rather than a catalyst, for IT innovation. From licensing costs to closed vendor ecosystems, these silos can hold organizations back from evolving their operations to better meet customer demand. We're providing a way for enterprises to leapfrog these legacy deployments and move to an open, flexible, enterprise platform, one that is designed for digital transformation and primed for the ecosystem of cloud-native development, Kubernetes, and automation."

The Red Hat program consists of three phases:

Discovery Session: Red Hat Consulting engages with an organization in a complimentary Discovery Session to better understand the scope of the migration and document it effectively.
Pilot Migrations: An open source platform is deployed and operationalized using Red Hat's hybrid cloud infrastructure and management tooling. Pilot migrations are carried out to demonstrate typical approaches, establish initial migration capability, and define the requirements for a larger-scale migration.
Migration at scale: IT teams are able to migrate workloads at scale. Red Hat Consulting also helps streamline operations across the virtualization pool and navigate complex migration cases.
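The three-phase approach above can be illustrated with a small sketch: the whole estate is inventoried during discovery, a small pilot batch migrates first to prove out the process, and only then does the remainder move at scale. The workload names and pilot size are invented for illustration and are not part of Red Hat's actual tooling.

```python
def plan_migration(workloads, pilot_size=2):
    """Split a discovered workload inventory into a pilot batch and an at-scale batch."""
    pilot, remainder = list(workloads[:pilot_size]), list(workloads[pilot_size:])
    return {
        "discovery": list(workloads),  # everything documented in the Discovery Session
        "pilot": pilot,                # small batch to establish migration capability
        "at_scale": remainder,         # migrated only after the pilot defines requirements
    }

workloads = ["crm-db", "billing-app", "intranet", "reporting", "ci-runners"]
plan = plan_migration(workloads)
print(plan["pilot"])     # → ['crm-db', 'billing-app']
print(plan["at_scale"])  # → ['intranet', 'reporting', 'ci-runners']
```

The ordering constraint is the point: nothing enters the at-scale batch until the pilot has validated the approach on representative workloads.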
After the Discovery Session, recommendations are provided for a more flexible open source virtualization platform based on Red Hat technologies. These include:

Red Hat Virtualization: Offers an open software-defined infrastructure and centralized management platform for virtualized Linux and Windows workloads. It is designed to give customers greater efficiency for traditional workloads, while creating a launchpad for cloud-native and container-based application innovation.
Red Hat OpenStack Platform: Built on the enterprise-grade backbone of Red Hat Enterprise Linux, it helps users build an on-premise cloud architecture that provides resource elasticity, scalability, and increased efficiency.
Red Hat Hyperconverged Infrastructure: A portfolio of solutions that includes Red Hat Hyperconverged Infrastructure for both Virtualization and Cloud. Customers can use it to integrate compute, network, and storage in a form factor designed to provide greater operational and cost efficiency.

Using the new migration capabilities based on Red Hat's management technologies, including Red Hat Ansible Automation, new workloads can be delivered in an automated fashion with self-service, and IT can more quickly re-create workloads across hybrid and multi-cloud environments.

Read more about the Red Hat infrastructure migration solution on Red Hat's official blog.

Zefflin Systems unveils ServiceNow Plugin for Red Hat Ansible 2.0
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
Installing Red Hat CloudForms on Red Hat OpenStack

Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation

Sugandha Lahoti
24 Aug 2018
2 min read
Red Hat has rolled out the Red Hat Enterprise Linux 7.6 beta in its push to become a cloud powerhouse. This release focuses on security and compliance, automation, and cloud deployment features.

Linux security improvements

On the security side, the improvements include:

GnuTLS library with Hardware Security Module (HSM) support
Strengthened OpenSSL for mainframes
Enhancements to the nftables firewall
Integration of the extended Berkeley Packet Filter (eBPF) to provide a safer mechanism for monitoring Linux kernel activity

Hybrid cloud deployment-related changes

Red Hat Enterprise Linux 7.6 introduces a variety of cloud deployment improvements. Red Hat's Paul Cormier considers the hybrid cloud to be the default technology choice: "Enterprises want the best answers to meet their specific needs, regardless of whether that's through the public cloud or on bare metal in their own datacenter."

For starters, Red Hat Enterprise Linux 7.6 uses Trusted Platform Module (TPM) 2.0 hardware modules to enable Network Bound Disk Encryption (NBDE). This provides two layers of security features for hybrid cloud operations: the network-based mechanism works in the cloud, while the on-premises TPM helps keep information on disks more secure.

The release also introduces Podman, part of Red Hat's lightweight container toolkit, which adds enterprise-grade security features to containers. Podman complements Buildah and Skopeo by enabling users to run, build, and share containers using the command line interface. It can also work with CRI-O, a lightweight Kubernetes container runtime.

Management and automation

The latest beta version also adds enhancements to the Red Hat Enterprise Linux Web Console, including:

Showing available updates on the system summary pages
Automatic configuration of single sign-on for identity management, helping to simplify this task for security administrators
An interface to control firewall services

These are just a select few updates.
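The two-layer NBDE idea can be pictured as an unlock policy: an encrypted volume is bound to several "pins" (here, a network-based secret source and an on-premises TPM), and it opens only when enough of them can supply their secret at boot. The pin names and threshold logic below are a conceptual sketch only, not the actual Clevis/LUKS implementation that RHEL uses.

```python
def can_unlock(available_pins, configured_pins, threshold=1):
    """Unlock succeeds when at least `threshold` configured pins can supply a secret."""
    usable = [p for p in configured_pins if p in available_pins]
    return len(usable) >= threshold

# Hypothetical pins bound to the encrypted volume.
configured = ["tang-network", "tpm2"]

# In the datacenter: the network key server is reachable and the TPM is present.
print(can_unlock({"tang-network", "tpm2"}, configured))  # → True

# Disk pulled from its machine and taken offsite: neither pin can supply a secret,
# so the data stays locked.
print(can_unlock(set(), configured))                     # → False
```

Raising the threshold to 2 would model a stricter policy in which both the network mechanism and the local TPM must be present before the volume opens.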
For more detailed coverage, go through the release notes available on the Red Hat Blog.

Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
What RedHat and others announced at KubeCon + CloudNativeCon 2018
RedHat and others launch Istio 1.0 service mesh for microservices