
Tech News - Cloud Computing

175 Articles

Hortonworks partner with Google Cloud to enhance their Big Data strategy

Gebin George
22 Jun 2018
2 min read
Hortonworks, a leader in global data management solutions, has partnered with Google Cloud to enhance the Hortonworks Data Platform (HDP) and Hortonworks DataFlow (HDF). The partnership promises next-generation data analytics for hybrid and multi-cloud deployments, enabling customers to leverage new innovations from the open source community via HDP and HDF on GCP for faster business innovation.

HDP's integration with Google Cloud brings the following features:

- Flexibility for ephemeral workloads: On-demand analytical workloads can be spun up within minutes, with no add-on cost and at unlimited elastic scale.
- Faster analytics: Take advantage of Apache Hive and Apache Spark for interactive query, machine learning, and analytics.
- Automated cloud provisioning: Simplifies the deployment of HDP and HDF on GCP, making it easier to configure and secure workloads and to make optimal use of cloud resources.

In addition, HDF has gone through the following enhancements:

- Hybrid data architecture: Smooth and secure flow of data from any source, from on-premise to cloud.
- Real-time streaming analytics: Build streaming applications with ease and capture real-time insights without writing a single line of code.

With the combination of HDP, HDF, and the Hortonworks DataPlane Service, Hortonworks can uniquely deliver consistent metadata, security, and data governance across hybrid cloud and multi-cloud architectures.

Arun Murthy, Co-Founder and Chief Product Officer at Hortonworks, said, "Partnering with Google Cloud lets our joint customers take advantage of the scalability, flexibility and agility of the cloud when running analytic and IoT workloads at scale with HDP and HDF. Together with Google Cloud, we offer enterprises an easy path to adopt cloud and, ultimately, a modern data architecture."
Similarly, Google Cloud's product management director, Sudhir Hasbe, said, "Enterprises want to be able to get smarter about both their business and their customers through advanced analytics and machine learning. Our partnership with Hortonworks will give customers the ability to quickly run data analytics, machine learning and streaming analytics workloads in GCP while enabling a bridge to hybrid or cloud-native data architectures."

Refer to the Hortonworks blog and the Google Cloud blog for more information on the services and enhancements.


Google Cloud collaborates with Unity 3D; a connected gaming experience is here!

Savia Lobo
20 Jun 2018
2 min read
Google Cloud announced its alliance with Unity at the Unite Berlin conference this week. Unity is a popular game development platform for real-time 3D game and content creation. Google Cloud stated that the two companies are building a suite of managed services and tools for creating connected games, focused on real-time multiplayer experiences. With this, Google Cloud becomes the default cloud provider for developers building connected games with Unity, helping them easily build and scale their games. Developers will also be able to take advantage of Google Cloud right from the Unity development environment, without needing to become cloud experts.

The collaboration also aims to create an open source, community-driven solution for connecting players in multiplayer games, built together with the world's leading game companies.

In addition, Unity will be migrating all of the core infrastructure powering its services and offerings to Google Cloud, running its business on the same cloud on which Unity game developers will develop, test, and globally launch their games.

John Riccitiello, Chief Executive Officer of Unity Technologies, said, "Migrating our infrastructure to Google Cloud was a decision based on the company's impressive global reach and product quality. Now, Unity developers will be able to take advantage of the unparalleled capabilities to support their cloud needs on a global scale."

Google Cloud plans to release new products and features over the coming months. Keep yourself updated on this alliance by checking out Unity's homepage.


Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment

Savia Lobo
18 Jun 2018
2 min read
For all those who wish to run their SAP solutions on the cloud, Alibaba has granted that wish. At SAPPHIRE NOW 2018, Alibaba showcased SAP products and solutions on its cloud platform. One can now run SAP solutions on Alibaba Cloud with their choice of operating system.

Alibaba Cloud is among the world's top three IaaS providers according to Gartner, and the largest provider of public cloud services in China according to IDC. It provides a comprehensive suite of cloud computing services to businesses all over the world, including merchants doing business in Alibaba Group marketplaces, startups, corporations, and government organizations. Using Alibaba Cloud's global infrastructure, enterprises can leverage its robust infrastructure and computing power to achieve greater business value.

Alibaba Cloud has expanded its support for SAP systems by providing:

- Linux support: SAP HANA, SAP MaxDB, and SAP ASE
- Windows support: SAP MaxDB and SQL Server, to run SAP Business Suite and other applications on the SAP Application Server ABAP

Alibaba has also passed the certification to run SAP Business One on HANA on its cloud. The partnership of SAP and Alibaba brings a versatile, one-stop cloud computing environment: Alibaba Cloud's reliable, high-performance, and secure infrastructure interoperating with enterprise-level business application solutions from SAP. With SAP, the Alibaba Cloud platform gains robust global IT infrastructure and computing strength, and delivers enhanced ERP services in cloud environments, which in turn helps enterprises drive their digital transformation.

Read more about this on the official SAP on Alibaba Cloud website.


Juniper Networks comes up with 5G/IoT-ready routing platform, MX Series 5G

Gebin George
14 Jun 2018
3 min read
Juniper Networks, one of the industry leaders in automated, scalable, and secure networks, today announced the fifth generation of its MX Series 5G Universal Routing Platform. The series has more offerings for cutting-edge infrastructure and technologies like cloud and IoT, enabling high-level network programmability. It improves programmability, performance, and flexibility for rapid cloud deployment by introducing a new set of software, and supports complex networks and service-intensive applications such as secure SD-WAN-based services.

Manoj Leelanivas, executive vice president and chief product officer at Juniper Networks, said, "Cloud is eating the world, 5G is ramping up, IoT is presenting a host of new challenges, and security teams simply can't keep up with the sheer volume of cyber attacks on today's network. One thing service providers should not have to worry about among all this is the unknown of what lies ahead."

A few highlights of this release are as follows:

Juniper Penta Silicon

The Penta Silicon is considered the heart of the 5G platform: a next-generation, 16 nm, service-optimized packet-forwarding engine that delivers up to 50% better power efficiency than the existing Junos Trio chipset. Penta Silicon has native support for MACsec and an IPsec crypto engine, enabling end-to-end secure connectivity at scale. It also supports Flexible Ethernet (FlexE).

MX 5G Control User-Plane Separation (CUPS)

The 3GPP CUPS standard allows customers to separate the evolved packet core user plane (GTP-U) and control plane (GTP-C) with a standard interface, helping service providers scale each independently as needed. The MX Series 5G platform is the first networking platform to support a standards-based, hardware-accelerated 5G user plane in both existing and future MX routers. It enables converged services (wireless and wireline) on the same platform while also allowing integration with third-party 5G control planes.

MX10008 and MX10016 Universal Chassis

The MX Series continues to innovate in the areas of cloud and enterprise networking, and the previously announced PTX and QFX Universal Chassis gain two new MX variants with today's announcement: the MX10008 and MX10016. A variety of line cards and software are available to satisfy specific networking use cases across the data center, enterprise, and WAN.

Refer to the official Juniper website for details on the MX Series 5G.


Microsoft supercharges its Azure AI platform with new features

Gebin George
14 Jun 2018
2 min read
Microsoft recently announced a few innovations to its AI platform, powered by Microsoft Azure. These updates align with Microsoft's digital transformation strategy of helping organizations augment their machine learning capabilities for better performance.

Cognitive Search

Cognitive Search is a new feature in the Azure portal that leverages the power of AI to understand content and append the information to Azure Search. It supports different file readers (such as PDF and Office documents) and enables capabilities like OCR, key phrase extraction, language detection, image analysis, and even facial recognition. An initial search pulls the data from the various sources, and cognitive skills are then applied to store the data in an optimized index.

Azure ML SDK for Python

In the Azure Machine Learning ecosystem, this additional SDK lets developers and data scientists execute key AML workflows (model training, model deployment, and scoring) directly through a single control-plane API within Python.

Azure ML Packages

Microsoft now offers Azure ML Packages, a rich set of pip-installable extensions to Azure ML. These streamline the process of building efficient ML models by building on the deep learning capabilities of the Azure AI platform.

ML.NET

This cross-platform, open source framework is meant for .NET developers and provides enterprise-grade software libraries of the latest innovations in machine learning from platforms that include Bing, Office, and Windows. The service is available on the AI platform in preview.

Project Brainwave

This service is also available on the Azure ML portal in preview. The architecture is built to process deep neural networks, using hardware acceleration to enable fast AI.

Have a look at the Azure AI portal for more details.


Oracle announces Oracle Soar, a tools package to ease application migration on cloud

Savia Lobo
13 Jun 2018
2 min read
Oracle recently released Oracle Soar, a brand-new package of tools and services to help customers migrate their applications to the cloud. Oracle Soar comprises a set of automated migration tools along with professional services, i.e. a complete migration solution. It is a semi-automated solution that fits in with Oracle's recent efforts to stand apart from other cloud providers by offering advanced automated services.

Tools available within the Oracle Soar package:

- Discovery assessment tool
- Process analyzer tool
- Automated data and configuration migration utilities
- Rapid integration tool

The automated process is powered by True Cloud Method, Oracle's proprietary approach to supporting customers throughout their cloud journey. Customers are also guided by a dedicated Oracle concierge service that ensures the migration aligns with modern industry best practices, and they can monitor the status of their cloud transition via an intuitive mobile application that provides a step-by-step implementation guide for what needs to be done each day.

With Soar, customers can save up to 30% in cost and time, with simple migrations taking as little as 20 weeks to complete. Oracle Soar is currently available for Oracle E-Business Suite, Oracle PeopleSoft, and Oracle Hyperion Planning customers moving to Oracle ERP Cloud, Oracle SCM Cloud, and Oracle EPM Cloud.

Read more about Oracle Soar on Oracle's official blog post.

SAP Cloud Platform is now generally available on Microsoft Azure

Savia Lobo
11 Jun 2018
3 min read
Microsoft announced that SAP Cloud Platform is now generally available on Azure. SAP Cloud Platform enables developers to build SAP applications and extensions using a PaaS development platform with integrated services. With the platform generally available, developers can now deploy Cloud Foundry-based SAP Cloud Platform on Azure. It is currently available in the West Europe region, and Microsoft is working with SAP to enable more regions in the months to come.

With SAP HANA's availability on Microsoft Azure, one can expect:

Largest SAP HANA-optimized VM sizes in the cloud

Microsoft will soon launch the Azure M-series, supporting large-memory virtual machines with sizes up to 12 TB, based on Intel Xeon Scalable (Skylake) processors and offering the most memory available of any VM in the public cloud. The M-series will help customers push the limits of virtualization in the cloud for SAP HANA.

Availability of a range of SAP HANA certified VMs

For customers who wish to use smaller instances, Microsoft also offers smaller M-series VM sizes, ranging from 192 GB to 4 TB across 10 different VM sizes, extending Azure's SAP HANA certified M-series. These smaller M-series VMs offer on-demand, SAP-certified instances with the flexibility to spin up or scale up in less time, and to spin down to save costs, within a pay-as-you-go model available worldwide. Such flexibility and agility is not possible with a private cloud or on-premises SAP HANA deployment.

24 TB bare-metal instances and optimized price per TB

For customers that need a higher-performance dedicated offering for SAP HANA, Microsoft now offers additional SAP HANA TDIv5 options of 6 TB, 12 TB, 18 TB, and 24 TB configurations, in addition to the current configurations from 0.7 TB to 20 TB. For customers who require more memory but the same number of cores, these configurations deliver a better price per TB deployed.

More options for SAP HANA in the cloud

SAP HANA on Azure has 26 distinct offerings from 192 GB to 24 TB, scale-up certification up to 20 TB, and scale-out certification up to 60 TB. With global availability in 12 regions and plans to increase to 22 regions in the next 6 months, Azure now offers the most choice for SAP HANA workloads of any public cloud.

Microsoft Azure also enables customers to extract insights and analytics from SAP data with services such as the Azure Data Factory SAP HANA connector to automate data pipelines, Azure Data Lake Store for hyper-scale data storage, and Power BI, an industry-leading self-service visualization tool, to create rich dashboards and reports from SAP ERP data.

Read more about SAP Cloud Platform on Azure on the Microsoft Azure blog.


Google Kubernetes Engine 1.10 is now generally available and ready for enterprise use

Savia Lobo
04 Jun 2018
3 min read
Google recently announced that Google Kubernetes Engine 1.10 is generally available and ready for enterprise use. Enterprises have long faced challenges in areas such as security, networking, logging, and monitoring. With Kubernetes Engine 1.10, Google introduces new features with robust built-in security for enterprise use:

- Shared Virtual Private Cloud (VPC): enables better control of network resources
- Regional Persistent Disks and Regional Clusters: ensure higher availability and stronger SLAs
- Node Auto-Repair GA and Custom Horizontal Pod Autoscaler: enable greater automation

New features in Google Kubernetes Engine 1.10

Networking

One can deploy workloads in Google's global Virtual Private Cloud (VPC) using a Shared VPC model. This gives you the flexibility to manage access to shared network resources using IAM permissions while still isolating departments. Shared VPC lets organization administrators delegate administrative responsibilities, such as creating and managing instances and clusters, to service project admins, while maintaining centralized control over network resources like subnets, routers, and firewalls.

Storage

Kubernetes Engine now supports the new Regional Persistent Disk (Regional PD), which makes it easy to build highly available solutions. A Regional PD provides persistent network-attached block storage with synchronous replication of data between two zones within a region. With Regional PDs, one does not have to worry about application-level replication and can instead take advantage of replication at the storage layer. This replication offers a convenient building block for implementing highly available solutions on Kubernetes Engine.

Reliability

Regional clusters, to be made available soon, allow one to create a Kubernetes Engine cluster with a multi-master, highly available control plane that spreads the masters across three zones in a region, an important feature for clusters with higher uptime requirements. Regional clusters also offer a zero-downtime upgrade experience when upgrading Kubernetes Engine masters. In addition, the node auto-repair feature is now generally available; it monitors the health of the nodes in one's cluster and repairs nodes that are unhealthy.

Auto-scaling

In Kubernetes Engine 1.10, the Horizontal Pod Autoscaler supports three custom metric types in beta:

- External: for scaling based on Cloud Pub/Sub queue length
- Pods: for scaling based on the average number of open connections per pod
- Object: for scaling based on Kafka running in the cluster

To know more about these features in detail, visit the Google blog.
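To give a flavor of how external-metric autoscaling is wired up, here is a hedged sketch of a HorizontalPodAutoscaler manifest scaling on Pub/Sub queue length. The deployment and subscription names are hypothetical, and the fields follow the autoscaling/v2beta1 schema of this Kubernetes generation; check the GKE documentation for the exact metric name and API version in your cluster:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: pubsub-worker-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pubsub-worker          # hypothetical worker Deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External               # scale on a metric from outside the cluster
    external:
      metricName: pubsub.googleapis.com|subscription|num_undelivered_messages
      metricSelector:
        matchLabels:
          resource.labels.subscription_id: my-subscription   # hypothetical subscription
      targetAverageValue: "100"  # aim for ~100 undelivered messages per replica
```

The autoscaler adds replicas when the average backlog per pod rises above the target and removes them as the queue drains.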


Amazon Neptune, AWS’ cloud graph database, is now generally available

Savia Lobo
31 May 2018
2 min read
Last year, Amazon Web Services (AWS) announced its fast, reliable, and fully managed cloud graph database, Amazon Neptune, at AWS re:Invent 2017. Recently, AWS announced that Neptune is generally available.

Graph databases store the relationships between connected data as graphs. This enables applications to access the data in a single operation, rather than through a bunch of individual queries for all the data. Neptune likewise makes it easy for developers to build and run applications that work with highly connected datasets, and because it is an AWS-managed graph database service, developers also get high scalability, security, durability, and availability.

Along with general availability come a large number of performance enhancements and updates, including:

- AWS CloudFormation support
- AWS Command Line Interface (CLI)/SDK support
- An update to Apache TinkerPop 3.3.2
- Support for IAM roles with bulk loading from Amazon Simple Storage Service (S3)

Some of the benefits of Amazon Neptune:

- Neptune's query processing engine is highly optimized for two of the leading graph models, Property Graph and W3C's RDF, and their associated query languages, Apache TinkerPop Gremlin and SPARQL, giving customers the flexibility to choose the right approach for their specific graph use case.
- Neptune storage scales automatically, without downtime or performance degradation, as customer data grows.
- It allows developers to design sophisticated, interactive graph applications that can query billions of relationships with millisecond latency.
- There are no upfront costs, licenses, or commitments: customers pay only for the Neptune resources they use.

To know more about Amazon Neptune, visit its official blog.
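To make the idea of relationships-as-data concrete, here is a small, self-contained Python sketch of a property-graph traversal. It is a toy illustration only; a real Neptune application would use a Gremlin client or a SPARQL endpoint, and the vertices and helper below are invented for the example:

```python
# Toy property graph: vertices carry labels/properties, and edges are stored
# in an adjacency list, so following a relationship is a direct lookup
# rather than a relational join.
vertices = {
    "alice": {"label": "person", "name": "Alice"},
    "bob":   {"label": "person", "name": "Bob"},
    "carol": {"label": "person", "name": "Carol"},
}
edges = {  # vertex -> list of (edge_label, neighbour)
    "alice": [("knows", "bob")],
    "bob":   [("knows", "carol")],
    "carol": [],
}

def out(vertex, edge_label):
    """Follow all outgoing edges with the given label (in the spirit of Gremlin's out() step)."""
    return [dst for lbl, dst in edges[vertex] if lbl == edge_label]

# Friends-of-friends of Alice in one traversal:
fof = [w for v in out("alice", "knows") for w in out(v, "knows")]
```

Because edges live next to the vertices, the friends-of-friends query is two direct lookups, whereas a relational model would need self-joins on an edge table for every hop.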


Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud

Savia Lobo
30 May 2018
3 min read
VMware recently released its brand-new Infrastructure-as-a-Service (IaaS) cloud, VMware Integrated OpenStack (VIO) 5.0. The release, announced at the OpenStack Summit in Vancouver, Canada, is fully based on the new OpenStack Queens release. VIO provides customers with a fast and efficient solution for deploying and operating OpenStack clouds that are highly optimized for VMware's NFV and software-defined data center (SDDC) infrastructure, with advanced automation and onboarding. Existing VIO users can use OpenStack's built-in upgrade capability to upgrade seamlessly to VIO 5.0.

VIO 5.0 will be available in both Carrier and Data Center editions. The VIO Carrier Edition addresses specific requirements of communication service providers (CSPs). Its improvements include:

- Accelerated data plane performance: Support for the NSX Managed Virtual Distributed Switch in Enhanced Data Path mode and DPDK gives customers significant improvements in application response time, reduced network latencies, and breakthrough network performance via data-plane techniques optimized in VMware vSphere.
- Scalable multi-tenant resources: Provides resource guarantees and resource isolation to each tenant, and supports elastic resource scaling that allows CSPs to add new resources dynamically across different vSphere clusters to adapt to traffic conditions or to transition from pilot phase to production in place.
- OpenStack for 5G and edge computing: Customers get full control over micro data centers and apps at the edge via automated, API-driven orchestration and lifecycle management. The solution targets enterprise use cases such as utilities, oil and gas drilling platforms, point-of-sale applications, security cameras, and manufacturing plants, as well as telco-oriented use cases such as Multi-Access Edge Computing (MEC), latency-sensitive VNF deployments, and operational support systems (OSS).

VIO 5.0 also enables CSP and enterprise customers to use Queens advancements to support mission-critical workloads across container and cloud-native application environments. Some new features include:

- High scalability: Run up to 500 hosts and 15,000 VMs in a single region. VIO 5.0 also introduces support for multiple regions at once, with monitoring and metrics at scale.
- High availability for mission-critical workloads: Create snapshots, clones, and backups of attached volumes to dramatically improve VM and application uptime, via enhancements to the Cinder volume driver.
- Unified virtualized environment: Deploy and run both VM and container workloads on a single virtualized infrastructure manager (VIM), with a single network fabric based on VMware NSX-T Data Center. This architecture lets customers seamlessly deploy hybrid workloads where some components run in containers while others run in VMs.
- Advanced security: Consolidated and simplified user and role management based on enhancements to Keystone, including application credentials and system role assignment. VIO 5.0 takes security to new levels with encryption of internal API traffic, Keystone-to-Keystone federation, and support for major identity management providers, including VMware Identity Manager.
- Optimized and standardized DNS services: Scalable, on-demand DNS as a service via Designate. Customers can auto-register any VM or Virtual Network Function (VNF) to a corporate-approved DNS server instead of manually registering newly provisioned hosts.

To know more about the other features in detail, read VMware's official blog.

Epicor partners with Microsoft Azure to adopt Cloud ERP

Savia Lobo
29 May 2018
2 min read
Epicor Software Corporation recently announced a partnership with Microsoft to accelerate Cloud ERP adoption, with the aim of delivering Epicor's enterprise solutions on the Microsoft Azure platform. The company plans to deploy its Epicor Prophet 21 enterprise resource planning (ERP) suite on Microsoft Azure, enabling customers to grow and innovate faster as they look to digitally transform their businesses with the reliable, secure, and scalable features of Azure. With the Epicor and Microsoft collaboration, customers can now access the power of Epicor ERP and Prophet 21 running on Microsoft Azure.

With Microsoft as a partner, Epicor:

- Leverages a range of technologies, such as the Internet of Things (IoT), artificial intelligence (AI), and machine learning (ML), to deliver ready-to-use, accurate solutions for mid-market manufacturers and distributors.
- Plans to explore Microsoft technologies for advanced search, speech-to-text, and other use cases to deliver modern human/machine interfaces that improve productivity for customers.

Steve Murphy, CEO of Epicor, said, "Microsoft's focus on the 'Intelligent Cloud' and 'Intelligent Edge' complements our customer-centric focus." He further stated that, after looking at several cloud options, they felt Microsoft Azure offered the best foundation for building and deploying enterprise business applications that enable customers' businesses to adapt and grow. As most prospects these days ask about Cloud ERP, Epicor says that by deploying such a model it is ready to offer customers the ability to move to the cloud with the confidence that Microsoft Azure provides.

Read more about this in detail on Epicor's official blog.


Platform 13: OpenStack Queens, the first fully containerized version released

Gebin George
28 May 2018
2 min read
Red Hat released the 13th version of its OpenStack cloud platform, based on OpenStack Queens. OpenStack follows a rapid six-month release cycle, and this release focuses on using open source OpenStack to bridge the gap between private and public cloud. Red Hat OpenStack Platform (RHOP) 13 will be generally available in June through the Red Hat customer portal and as part of both the Red Hat infrastructure and cloud suites.

Red Hat's general manager of OpenStack said, "RHOP 13 is the first complete containerized OpenStack. Our customers have been asking us to make it easy to run Red Hat OpenShift Container Platform (RHOCP), Red Hat's Kubernetes offering. We want to make this as seamless as possible."

The release comes with interesting cross-portfolio support to accelerate Red Hat's hybrid cloud offering. This includes:

- Red Hat CloudForms, which helps manage day-to-day tasks in hybrid infrastructure.
- Red Hat Ceph Storage, a scalable storage solution that enables provisioning of hundreds of virtual machines from a single snapshot to build a massive storage solution.
- Red Hat OpenShift Container Platform, which enables running cloud-native workloads with ease. The OpenShift architecture supports running both Linux and Kubernetes containers on a single workload.

RHOP 13 also comes with a varied set of feature enhancements and upgrades:

Containerization capabilities

OpenStack Platform 13 builds upon the containerization capabilities and services introduced with the release of OpenStack Platform 12. It enables containerization of all services, including networking and storage.

Security capabilities

With the inclusion of OpenStack Barbican, RHOP 13 adds tenant-level lifecycle management for sensitive data such as passwords, security certificates, and keys. With Barbican's features, encryption-based services are available for extensive data protection.

For the official release notes, please refer to the official OpenStack blog.

Is cloud mining profitable?

Richard Gall
24 May 2018
5 min read
Cloud mining has become one of the biggest trends in Bitcoin and cryptocurrency. The reason is simple: it makes mining Bitcoin incredibly easy. By using the cloud rather than your own hardware to mine Bitcoin, you avoid the stress and inconvenience of managing hardware. Instead of using the processing power of hardware you own, you share the processing power of the cloud (or more specifically, of a remote data center).

In theory, cloud mining should be much more profitable than mining with your own hardware. However, it's easy to be caught out. At best some schemes are useless - at worst, they could be seen as a bit of a pyramid scheme. For this reason, it's essential you do your homework.

Although there are some risks associated with cloud mining, it does have benefits. Arguably it makes Bitcoin, and cryptocurrency in general, more accessible to ordinary people. Provided people get to know the area - what works and what definitely doesn't - it could be a positive opportunity for many people.

How to start cloud mining

Let's first take a look at the different methods of cloud mining. If you're going to do it properly, it's worth taking some time to consider your options. At a top level there are three different types of cloud mining.

Renting out your hashing power

This is the most common form of cloud mining. To do this, you simply 'rent out' a certain amount of your computer's hashing power. In case you don't know, hashing power is essentially your hardware's processing power: the rate at which it can run the hash function at the heart of proof-of-work mining.

Hosted mining

As the name suggests, this is where you use an external machine to mine Bitcoin. To do this, you'll have to sign up with a cloud mining provider. If you do, you'll need to be clear on their terms and conditions, and take care when calculating profitability.

Virtual hosted mining

Virtual hosted mining is a hybrid approach to cloud mining. 
To do this, you use a personal virtual server and then install the required mining software. This approach can be a little more fun, especially if you want to build your own Bitcoin mining setup, but of course it poses challenges too. Depending on what you want to achieve, any of these options may be right for you.

Which cloud mining provider should you choose?

As you'd expect from a trend that's growing rapidly, there's a huge number of cloud mining providers out there that you can use. The downside is that there are plenty of dubious providers that aren't going to be profitable for you. For this reason, it's best to do your research and read what others have to say.

One of the most popular cloud mining providers is Hashflare. With Hashflare, you can mine a number of different cryptocurrencies, including Bitcoin, Ethereum, and Litecoin. You can also select your 'mining pool', which is something many providers won't let you do. Controlling the profitability of cloud mining can be difficult, so having control over your mining pool could be important. A mining pool is a bit like a syndicate - a group of people pool together their processing resources, and the payout is split according to the amount of work each contributes towards creating a 'block', which is essentially a record, or ledger, of transactions.

Hashflare isn't the only cloud mining solution available. Genesis Mining is another very high-profile provider. It's incredibly accessible - you can begin a Bitcoin mining contract for just $15.99. Of course, the more you invest, the better the deal you'll get. For a detailed exploration and comparison of cloud mining solutions, this TechRadar article is very useful. Take a look before you make any decisions!

How can I ensure cloud mining is profitable?

It's impossible to ensure profitability. Remember - cloud mining providers are out to make a profit. 
Although you might well make a profit, it's not necessarily in their interests to be paying money out to you. Calculating cloud mining profitability can be immensely complex. To do it properly you need to be clear on all the elements that will impact profitability. These include:

- The cryptocurrency you are mining
- How much mining will cost per unit of hashing power
- The growth rate of block difficulty
- How the network hashrate might increase over the length of your mining contract

There are lots of mining calculators out there that you can use to estimate how profitable cloud mining is likely to be. This article is particularly good at outlining how to calculate cloud mining profitability. Its conclusion poses a question that's worth considering before you start: is it "profitable because the underlying cryptocurrency went up, or because the mining itself was profitable?" As the writer points out, if it is the cryptocurrency's value, then you might just be better off buying the cryptocurrency. 
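The factors above can be combined into a rough back-of-the-envelope model. This is a simplified sketch, not a substitute for a real mining calculator, and every input below is a hypothetical number, not market data:

```python
def cloud_mining_profit(my_hashrate, network_hashrate, block_reward,
                        blocks_per_day, coin_price, daily_fee,
                        contract_days, monthly_hashrate_growth):
    """Rough profit model for a fixed-hashrate cloud mining contract.

    Each day you earn your share of the network's hash power times the
    coin issued that day; that share shrinks as the network hashrate
    (and hence difficulty) grows, while your rented hashrate stays fixed.
    """
    profit = 0.0
    for day in range(contract_days):
        # network hashrate compounds monthly
        network_today = network_hashrate * (1 + monthly_hashrate_growth) ** (day / 30)
        coins_earned = (my_hashrate / network_today) * blocks_per_day * block_reward
        profit += coins_earned * coin_price - daily_fee
    return profit
```

If the result only comes out positive under optimistic price assumptions, that is the article's closing point in numbers: you might be better off simply buying the coin.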
Verizon chooses Amazon Web Services(AWS) as its preferred cloud provider

Savia Lobo
18 May 2018
2 min read
Verizon Communications Inc. recently announced that it is migrating about 1,000 of its business-critical applications and database back-end systems to Amazon Web Services (AWS). Verizon had bought Terremark, a cloud and managed services provider, in 2011 as part of its public and private cloud strategy. That strategy included building its own cloud offering infrastructure-as-a-service to its customers. AWS stayed ahead of the competition by continually adding services for its customers, while Verizon's cloud business could not keep pace and was also overtaken by Microsoft and Google. As a result, in 2016 Verizon closed down its public cloud offering, sold off its cloud and managed hosting service assets to IBM, and sold a number of data centres to Equinix.

Verizon first started working with AWS in 2015 and already has many business and consumer applications running in the cloud. The current migration to AWS is part of a corporate-wide initiative at Verizon to increase agility and reduce costs through the use of cloud computing. Some benefits of this migration include:

- AWS gives Verizon access to a more comprehensive set of cloud capabilities, ensuring its developers are able to invent on behalf of its customers.
- Verizon has built AWS-specific training facilities where its employees can quickly get up to speed on AWS technologies and learn how to innovate with speed and at scale.
- AWS enables Verizon to quickly deliver the best, most efficient customer experiences.
- Verizon also aims to make the public cloud a core part of its digital transformation, upgrading its database management approach by replacing proprietary solutions with Amazon Aurora.

To know more about the AWS and Verizon partnership, read the AWS blog post. 

What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018

Savia Lobo
17 May 2018
6 min read
Earlier this month, over 4,000 developers attended the Cloud Native Computing Foundation's flagship event, the KubeCon + CloudNativeCon 2018 conference, held in Copenhagen from May 2nd to 4th. The conference featured a series of announcements on microservices, containers, and other open source tools for building applications for the web. Top vendors including Google, Red Hat, Oracle, and many more announced a myriad of releases and improvements related to Kubernetes. Read our article on the big vendor announcements at KubeCon + CloudNativeCon Europe. Let's run through the top seven vendors and their release highlights from the conference.

Google released Stackdriver Kubernetes Monitoring and open sourced gVisor

Released in beta, Stackdriver Kubernetes Monitoring enables both developers and operators to observe Kubernetes comprehensively and simplifies operations for them. Features of Stackdriver Kubernetes Monitoring include:

- Scalable, comprehensive observability: Stackdriver Kubernetes Monitoring aggregates logs, events and metrics from the Kubernetes environment to help you understand the behaviour of your application. This rich, unified set of signals helps developers build higher-quality applications faster, and helps operators speed up root cause analysis and reduce mean time to resolution (MTTR).
- Seamless integration with Prometheus: Stackdriver Kubernetes Monitoring integrates seamlessly with Prometheus, a leading open source monitoring approach for Kubernetes, without any changes required.
- Unified view: Stackdriver Kubernetes Monitoring provides a unified view of signals from infrastructure, applications and services across multiple Kubernetes clusters. With this, developers, operators and security analysts can effectively manage Kubernetes workloads and easily observe system information from various sources in flexible ways. 
Examples include inspecting a single container, or scaling up to explore massive, multi-cluster deployments.
- Get started easily, on cloud or on premises: Stackdriver Kubernetes Monitoring comes pre-integrated with Google Kubernetes Engine, so you can use it immediately with your Kubernetes Engine workloads. It also integrates easily with Kubernetes deployments on other clouds or on-premises infrastructure, giving you access to a unified collection of logs, events, and metrics for your application regardless of where the containers are deployed.

Google has also open sourced gVisor, a sandboxed container runtime. gVisor, which is lighter-weight than a virtual machine, enables secure isolation for containers. It integrates with Docker and Kubernetes, making it simple to run sandboxed containers in production environments. gVisor is written in Go to avoid the security pitfalls that can plague kernels.

Red Hat shared an open source toolkit called the Operator Framework

Red Hat, in collaboration with the Kubernetes open source community, has shared the Operator Framework to make it easier to build Kubernetes applications. The Operator Framework is an open source toolkit designed to manage Kubernetes-native applications, called Operators, in an effective, automated and scalable manner. The Operator Framework comprises:

- The Operator SDK, which helps developers build Operators based on their operational expertise, without requiring knowledge of the complexities of the Kubernetes API.
- The Operator Lifecycle Manager, which supervises the lifecycle of all the Operators running across a Kubernetes cluster and keeps a check on the services associated with them.
- Operator Metering, soon to be added, which will allow creating usage reports for Operators providing specialized services. 
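At the core of any Operator is a reconcile loop: compare the desired state declared in a custom resource with the actual state of the cluster, and act on the difference. A framework-free sketch of that control loop, with plain dictionaries standing in for cluster state (the names here are illustrative, not the Operator SDK API):

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to converge actual state to desired state.

    desired/actual map resource name -> spec; this mirrors (in miniature)
    what an Operator's control loop does against the Kubernetes API.
    """
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))        # missing entirely
        elif actual[name] != spec:
            actions.append(("update", name))        # present but drifted
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))        # no longer wanted
    return actions
```

An Operator built with the SDK runs this kind of loop continuously, encoding the operational expertise (how to create, upgrade, or clean up the application) in the create/update/delete handlers.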
Oracle added new open serverless support and key Kubernetes features to Oracle Container Engine

According to a report, security, storage and networking are the major challenges that companies face while working with containers. To address these challenges, Oracle Container Engine has added:

- Governance, compliance and auditing features such as identity and access management, role-based access control, support for the Payment Card Industry Data Security Standard, and cluster management auditing capabilities.
- Scalability features: support for small and virtualized environments, predictable IOPS, and the ability to run Kubernetes on NVIDIA Tesla GPUs.
- New networking features, including load balancing and virtual cloud networks.
- Storage features: the OCI volume provisioner and flexvolume driver.

Additionally, Oracle Container Engine supports Helm and Tiller, and the ability to run existing apps with Kubernetes.

Kublr announced that its version 1.9 provides easy configuration of Kubernetes clusters for enterprise users

Kublr unveiled an advanced configuration capability in its version 1.9. The feature is designed to give customers the flexibility to tailor Kubernetes clusters to specific use cases, including:

- GPU-enabled nodes for data science applications
- Hybrid clusters spanning data centers and clouds
- Custom Kubernetes tuning parameters
- Other advanced requirements

New features in Kublr 1.9 include:

- Kubernetes 1.9.6 and a new dashboard
- Improved backups in AWS with full cluster restoration
- Centralized monitoring, IAM, and custom cluster specification

Read more about Kublr 1.9 on the Kublr blog.

Kubernetes announced the availability of Kubeflow 0.1

The Kubernetes community brought forward a power-packed tooling package known as Kubeflow 0.1. 
Kubeflow 0.1 provides a basic set of packages for developing, training, and deploying machine learning models. The package:

- Supports Argo, for managing ML workflows
- Offers JupyterHub to create interactive Jupyter notebooks for collaborative and interactive model training
- Provides a number of TensorFlow tools, including a training controller for native distributed training. The training controller can be configured for CPUs or GPUs and adjusted to fit the size of a cluster with a single click.

Additional features such as a simplified setup via a bootstrap container, improved accelerator integration, and support for more ML frameworks like Spark ML, XGBoost, and scikit-learn are planned for the 0.2 release of Kubeflow.

CNCF (Cloud Native Computing Foundation) announced a new Certified Kubernetes Application Developer program

The Cloud Native Computing Foundation has launched the Certified Kubernetes Application Developer (CKAD) exam and a corresponding Kubernetes for Developers course. The CKAD exam certifies that users can design, build, configure, and expose cloud-native applications on top of Kubernetes. A Certified Kubernetes Application Developer can define application resources and use core primitives to build, monitor, and troubleshoot scalable applications and tools in Kubernetes. Read more about this program on the Cloud Native Computing Foundation blog.

DigitalOcean launched a managed Kubernetes service

Cloud computing platform DigitalOcean launched DigitalOcean Kubernetes, a simple and cost-effective solution for deploying, orchestrating, and managing container workloads in the cloud. With the DigitalOcean Kubernetes service, developers can save time and deploy their container workloads without having to configure everything from scratch. The company is also offering early access to the service. Read more on the DigitalOcean blog. 
Apart from these seven vendors, many others such as Datadog, Humio, and Weaveworks also announced features, frameworks, and services based on Kubernetes, serverless, and cloud computing. These are far from the only announcements: see the KubeCon + CloudNativeCon 2018 website for everything else rolled out at the event. 