
Tech News - Cloud & Networking

376 Articles

Hortonworks partners with Google Cloud to enhance its Big Data strategy

Gebin George
22 Jun 2018
2 min read
Hortonworks, a leader in global data management solutions, has partnered with Google Cloud to enhance the Hortonworks Data Platform (HDP) and Hortonworks DataFlow (HDF). The partnership promises next-generation data analytics for hybrid and multi-cloud deployments, and will let customers leverage new innovations from the open source community via HDP and HDF on GCP for faster business innovation.

HDP's integration with Google Cloud brings the following features:

- Flexibility for ephemeral workloads: on-demand analytical workloads can be stood up within minutes, with no added cost and at unlimited elastic scale.
- Faster analytics: take advantage of Apache Hive and Apache Spark for interactive query, machine learning and analytics.
- Automated cloud provisioning: simplifies the deployment of HDP and HDF on GCP, making it easier to configure and secure workloads and make optimal use of cloud resources.

In addition, HDF gains the following enhancements:

- Hybrid data architectures: smooth, secure flow of data from any source, from on-premises to cloud.
- Real-time streaming analytics: build streaming applications with ease, capturing real-time insights without having to write a single line of code.

With the combination of HDP, HDF and the Hortonworks DataPlane Service, Hortonworks can uniquely deliver consistent metadata, security and data governance across hybrid and multi-cloud architectures.

Arun Murthy, Co-Founder and Chief Product Officer at Hortonworks, said: "Partnering with Google Cloud lets our joint customers take advantage of the scalability, flexibility and agility of the cloud when running analytic and IoT workloads at scale with HDP and HDF. Together with Google Cloud, we offer enterprises an easy path to adopt cloud and, ultimately, a modern data architecture."
Similarly, Sudhir Hasbe, Google Cloud's director of product management, said: "Enterprises want to be able to get smarter about both their business and their customers through advanced analytics and machine learning. Our partnership with Hortonworks will give customers the ability to quickly run data analytics, machine learning and streaming analytics workloads in GCP while enabling a bridge to hybrid or cloud-native data architectures."

Refer to the Hortonworks blog and the Google Cloud blog for more information on the services and enhancements.

Google Cloud collaborates with Unity 3D; a connected gaming experience is here
How to Run Hadoop on Google Cloud – Part 1
AT&T combines with Google Cloud to deliver cloud networking at scale


Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads

Savia Lobo
21 Jun 2018
2 min read
Nvidia announced at the Computer Vision and Pattern Recognition (CVPR) conference that it will make Kubernetes available on its GPUs. Although the technology is not yet generally available, developers are invited to use it to test the software and provide feedback.

Kubernetes on NVIDIA GPUs lets developers and DevOps engineers build and deploy scalable GPU-accelerated deep learning training, and create inference applications on multi-cloud GPU clusters. Using it, developers can handle the growing number of AI applications and services by automating the deployment, maintenance, scheduling and operation of GPU-accelerated application containers. One can orchestrate deep learning and HPC applications on heterogeneous GPU clusters, specify attributes such as GPU type and memory requirements, and use integrated metrics and monitoring capabilities to analyze and improve GPU utilization on clusters.

Interesting features of Kubernetes on NVIDIA GPUs include:

- GPU support in Kubernetes via the NVIDIA device plugin
- Easy specification of GPU attributes, such as GPU type and memory requirements, for deployment in heterogeneous GPU clusters
- Visualization and monitoring of GPU metrics and health with an integrated GPU monitoring stack of NVIDIA DCGM, Prometheus and Grafana
- Support for multiple underlying container runtimes, such as Docker and CRI-O
- Official support on all NVIDIA DGX systems (DGX-1 Pascal, DGX-1 Volta and DGX Station)

Read more about this news on the Nvidia Developer blog.

NVIDIA brings new deep learning updates at CVPR conference
Kublr 1.9.2 for Kubernetes cluster deployment in isolated environments released!
Distributed TensorFlow: Working with multiple GPUs and servers
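To make the GPU-scheduling idea concrete, here is a minimal sketch of how a workload asks Kubernetes for a GPU. It assumes a cluster running the NVIDIA device plugin, which exposes GPUs as the extended resource "nvidia.com/gpu"; the pod name and image below are illustrative, not from the article.

```python
import json

def gpu_pod_manifest(name, image, gpus=1):
    """Build a Kubernetes Pod manifest that requests `gpus` NVIDIA GPUs.

    With the NVIDIA device plugin installed, GPUs are requested through
    resource limits, just like CPU or memory.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {"limits": {"nvidia.com/gpu": gpus}},
            }],
            "restartPolicy": "Never",
        },
    }

# Hypothetical training pod asking the scheduler for one GPU.
manifest = gpu_pod_manifest("cuda-test", "nvidia/cuda:9.0-base", gpus=1)
print(json.dumps(manifest, indent=2))
```

The scheduler then places the pod only on nodes advertising free "nvidia.com/gpu" capacity, which is what makes heterogeneous GPU clusters workable.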


Google Cloud collaborates with Unity 3D; a connected gaming experience is here!

Savia Lobo
20 Jun 2018
2 min read
Google Cloud announced its alliance with Unity at the Unite Berlin conference this week. Unity is a popular development platform for real-time 3D games and content creation. Google Cloud said it is building a suite of managed services and tools for creating connected games, with a strong focus on real-time multiplayer experiences. With this, Google Cloud becomes the default cloud provider for developers building connected games with Unity, helping them easily build and scale their games. Developers will be able to take advantage of Google Cloud right from the Unity development environment, without needing to become cloud experts.

The two companies will also collaborate on an open source project for connecting players in multiplayer games. The project aims to create open source, community-driven solutions built in collaboration with the world's leading game companies. In addition, Unity will migrate all of the core infrastructure powering its services and offerings to Google Cloud, running its business on the same cloud on which Unity game developers will develop, test and globally launch their games.

John Riccitiello, Chief Executive Officer of Unity Technologies, said: "Migrating our infrastructure to Google Cloud was a decision based on the company's impressive global reach and product quality. Now, Unity developers will be able to take advantage of the unparalleled capabilities to support their cloud needs on a global scale."

Google Cloud plans to release new products and features over the coming months. Keep yourself updated on this alliance by checking out Unity's homepage.

AI for Unity game developers: How to emulate real-world senses in your NPC agent behavior
Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)
Unity 2D & 3D game kits simplify Unity game development for beginners


Microsoft condemns ICE activity at U.S. border but still faces public and internal criticism

Richard Gall
19 Jun 2018
3 min read
Microsoft yesterday released a statement condemning the forcible separation of families at the U.S. border. The statement was made in response to public criticism of Microsoft after a blog post published earlier this year resurfaced; in it, Microsoft's Azure Government team explained that it was supporting ICE, and was "proud" to do so. In the statement, Microsoft said:

"Microsoft is not working with U.S. Immigration and Customs Enforcement or U.S. Customs and Border Protection on any projects related to separating children from their families at the border, and contrary to some speculation, we are not aware of Azure or Azure services being used for this purpose. As a company, Microsoft is dismayed by the forcible separation of children from their families at the border."

However, despite Microsoft's comment, it is clear that Azure Government is being used by ICE. In a post published in January, Tom Keane, a General Manager at Microsoft, wrote:

"ICE's decision to accelerate IT modernization using Azure Government will help them innovate faster while reducing the burden of legacy IT. The agency is currently implementing transformative technologies for homeland security and public safety, and we're proud to support this work with our mission-critical cloud."

Clearly, Microsoft is distancing itself from the actions of ICE, but it may be too late. While it is unclear whether Azure Government is being used by ICE as it implements the current wave of child incarceration, the link has already formed in the minds of the public and of Microsoft employees. Keane's words now have a chilling subtext: when he writes that Azure Government can help ICE employees "make more informed decisions faster" and allow them "to utilize deep learning capabilities to accelerate facial recognition and identification," it is hard not to think that the "innovation" Microsoft is helping government agencies embrace is simply supporting state-sanctioned violence against children.

ICE has been cosying up to the tech world in 2018. Earlier this year, in April, ICE's CTO spoke at a conference hosted by GitHub in Washington D.C. Although the appearance was criticised in certain corners, it largely went unnoticed by the public. Given Microsoft's acquisition of GitHub in early June, the incident takes on a new complexion in this strange narrative.

Microsoft faces criticism from employees over its relationship with ICE
Gizmodo reported serious dissent from Microsoft employees. One employee told the website "this is the sort of thing that would make me question staying." Another is quoted as saying that they will "seriously consider leaving if I'm not happy with how they handle this." The incident mirrors a number of other cases this year in which employees of major tech firms have criticized their organizations over government contracts; in May, for example, a number of Google employees quit over artificial intelligence ties to the Pentagon. However, things could get worse for Microsoft. For Google, the fallout was largely internal, but given the horrific reports from the U.S. border, questions about tech complicity in government actions will be propelled to the forefront of international debate.


Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment

Savia Lobo
18 Jun 2018
2 min read
For all those who wish to run their SAP solutions in the cloud, Alibaba has granted that wish. At SAPPHIRE NOW 2018, Alibaba showcased SAP products and solutions on its cloud platform; one can now run SAP solutions on Alibaba Cloud with one's choice of operating system.

Alibaba Cloud is among the world's top three IaaS providers according to Gartner, and the largest provider of public cloud services in China according to IDC. It provides a comprehensive suite of cloud computing services to businesses all over the world, including merchants with businesses located within Alibaba Group marketplaces, startups, corporations and government organizations. Using Alibaba Cloud's global infrastructure, enterprises can leverage its robust infrastructure and computing power to achieve greater business value.

Alibaba Cloud has expanded its support for SAP systems by providing:

- Linux support: SAP HANA, SAP MaxDB, and SAP ASE
- Windows support: SAP MaxDB and SQL Server to run SAP Business Suite and other applications on the SAP Application Server ABAP

Alibaba has also passed certification to run SAP Business One on HANA on its cloud. The partnership of SAP and Alibaba brings a versatile, one-stop cloud computing environment: Alibaba Cloud's reliable, high-performance and secure infrastructure interoperating with enterprise-level business application solutions from SAP. With SAP, the Alibaba Cloud platform gains a robust global IT infrastructure and added computing strength. It also delivers enhanced ERP services in cloud environments, which in turn helps enterprises drive their digital transformation.

Read more on the SAP on Alibaba Cloud official website.


Juniper Networks comes up with 5G- and IoT-ready routing platform, MX Series 5G

Gebin George
14 Jun 2018
3 min read
Juniper Networks, one of the industry leaders in automated, scalable and secure networks, today announced the fifth generation of its MX Series Universal Routing Platform. The series has more offerings for cutting-edge infrastructure and technologies such as cloud and IoT, enabling high-level network programmability. By introducing a new set of software, it improves programmability, performance and flexibility for rapid cloud deployment. The platform supports complex networks and service-intensive applications, such as secure SD-WAN-based services.

Manoj Leelanivas, executive vice president and chief product officer at Juniper Networks, said: "Cloud is eating the world, 5G is ramping up, IoT is presenting a host of new challenges, and security teams simply can't keep up with the sheer volume of cyber attacks on today's network. One thing service providers should not have to worry about among all this is the unknown of what lies ahead."

A few highlights of this release:

Juniper Penta Silicon
Penta Silicon is considered the heart of the 5G platform: a next-generation, 16 nm, service-optimized packet-forwarding engine that delivers up to 50% better power efficiency over the existing Junos Trio chipset. Penta Silicon has native support for MACsec and an IPsec crypto engine, enabling end-to-end secure connectivity at scale. In addition, Penta Silicon supports Flexible Ethernet (FlexE).

MX 5G Control User-Plane Separation (CUPS)
The 3GPP CUPS standard allows customers to separate the evolved packet core's user plane (GTP-U) and control plane (GTP-C) with a standard interface, helping service providers scale each independently as needed. The MX Series 5G platform is the first networking platform to support a standards-based, hardware-accelerated 5G user plane in both existing and future MX routers. It enables converged wireless and wireline services on the same platform while also allowing integration with third-party 5G control planes.

MX10008 and MX10016 Universal Chassis
The MX series continues to innovate in the areas of cloud and enterprise networking, and the previously announced PTX and QFX Universal Chassis gain two new MX variants with today's announcement: MX10008 and MX10016. A variety of line cards and software are available to satisfy specific networking use cases across the data center, enterprise and WAN.

Refer to the official Juniper website for details on the MX Series 5G.

Five developer centric sessions at IoT World 2018
Cognitive IoT: How Artificial Intelligence is remoulding Industrial and Consumer IoT
Windows 10 IoT Core: What you need to know

Microsoft supercharges its Azure AI platform with new features

Gebin George
14 Jun 2018
2 min read
Microsoft recently announced a few innovations to its AI platform powered by Microsoft Azure. The updates align with Microsoft's digital transformation strategy of helping organizations augment their machine learning capabilities for better performance.

Cognitive Search
Cognitive Search is a new feature in the Azure portal that leverages the power of AI to understand content and append the information to Azure Search. It supports different file readers, such as those for PDFs and Office documents, and enables cognitive skills like OCR, key phrase extraction, language detection, image analysis and even facial recognition. An initial search pulls data from various sources and then applies cognitive skills to store the data in an optimized index.

Azure ML SDK for Python
In the Azure Machine Learning ecosystem, this additional SDK lets developers and data scientists execute key AML workflows (model training, model deployment, and scoring) directly through a single control-plane API within Python.

Azure ML Packages
Microsoft now offers Azure ML Packages, a rich set of pip-installable extensions to Azure ML. They streamline the process of building efficient ML models by building on the deep learning capabilities of the Azure AI platform.

ML.NET
This cross-platform, open source framework is meant for .NET developers and provides enterprise-grade software libraries of the latest innovations in machine learning from platforms that include Bing, Office, and Windows. The service is available on the AI platform in preview.

Project Brainwave
This service is also available on the Azure ML portal in preview. The architecture is built to process deep neural networks, using hardware acceleration to enable fast AI.

You can have a look at the Azure AI portal for more details.
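To illustrate the Cognitive Search enrichment idea (pull content, apply cognitive skills, write to an index), here is a sketch of the kind of skillset definition involved. The "@odata.type" skill names and field paths below follow the general shape of Azure Search's REST API but should be treated as illustrative assumptions rather than a verified request body.

```python
import json

def build_skillset(name, description):
    """Compose a hypothetical enrichment pipeline: detect each document's
    language, then feed that language code into key phrase extraction."""
    return {
        "name": name,
        "description": description,
        "skills": [
            {
                "@odata.type": "#Microsoft.Skills.Text.LanguageDetectionSkill",
                "inputs": [{"name": "text", "source": "/document/content"}],
                "outputs": [{"name": "languageCode", "targetName": "language"}],
            },
            {
                "@odata.type": "#Microsoft.Skills.Text.KeyPhraseExtractionSkill",
                "inputs": [
                    {"name": "text", "source": "/document/content"},
                    {"name": "languageCode", "source": "/document/language"},
                ],
                "outputs": [{"name": "keyPhrases", "targetName": "keyPhrases"}],
            },
        ],
    }

skillset = build_skillset("demo-skillset", "language detection + key phrases")
print(json.dumps(skillset, indent=2))
```

The point of the shape is the chaining: each skill's outputs become addressable fields on the enriched document, which later skills and the final index mapping can consume.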
New updates to Microsoft Azure services for SQL Server, MySQL, and PostgreSQL
Epicor partners with Microsoft Azure to adopt Cloud ERP
SAP Cloud Platform is now generally available on Microsoft Azure


Oracle announces Oracle Soar, a tools package to ease application migration to the cloud

Savia Lobo
13 Jun 2018
2 min read
Oracle recently released Oracle Soar, a brand new tools-and-services package to help customers migrate their applications to the cloud. Oracle Soar comprises a set of automated migration tools along with professional services, i.e. a complete solution for migration. It is a semi-automated solution that fits with Oracle's recent efforts to stand apart from other cloud providers by offering advanced automated services.

Tools available within the Oracle Soar package:

- Discovery assessment tool
- Process analyzer tool
- Automated data and configuration migration utilities
- Rapid integration tool

The automated process is powered by True Cloud Method, Oracle's proprietary approach to supporting customers throughout their cloud journey. Customers are also guided by a dedicated Oracle concierge service that ensures the migration aligns with modern industry best practices, and they can monitor the status of their cloud transition via an intuitive mobile application that provides a step-by-step implementation guide for what needs to be done each day.

With Soar, customers can save up to 30% in cost and time, with simple migrations taking as little as 20 weeks to complete. Oracle Soar is currently available for Oracle E-Business Suite, Oracle PeopleSoft and Oracle Hyperion Planning customers moving to Oracle ERP Cloud, Oracle SCM Cloud and Oracle EPM Cloud.

Read more about Oracle Soar on Oracle's official blog post.

Oracle reveals issues in Object Serialization. Plans to drop it from core Java.
Oracle Apex 18.1 is here!
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018


SAP Cloud Platform is now generally available on Microsoft Azure

Savia Lobo
11 Jun 2018
3 min read
Microsoft stated that SAP Cloud Platform is now generally available on Azure. The SAP Cloud Platform enables developers to build SAP applications and extensions using a PaaS development platform along with integrated services. With the platform generally available, developers can now deploy Cloud Foundry-based SAP Cloud Platform on Azure. It is currently available in the West Europe region, and Microsoft is working with SAP to enable more regions in the months to come.

With SAP HANA's availability on Microsoft Azure, one can expect:

Largest SAP HANA optimized VM size in the cloud
Microsoft will soon launch the Azure M-series, which will support large-memory virtual machines with sizes up to 12 TB, based on Intel Xeon Scalable (Skylake) processors and offering the most memory available of any VM in the public cloud. The M-series will help customers push the limits of virtualization in the cloud for SAP HANA.

Availability of a range of SAP HANA certified VMs
For customers who wish to use smaller instances, Microsoft also offers smaller M-series VM sizes. These range from 192 GB to 4 TB across 10 different VM sizes and extend Azure's SAP HANA certified M-series. The smaller M-series offer on-demand, SAP-certified instances with the flexibility to spin up or scale up in less time, and to spin down to save costs within a pay-as-you-go model available worldwide. Such flexibility and agility is not possible with a private cloud or on-premises SAP HANA deployment.

24 TB bare-metal instances and optimized price per TB
For customers who need a higher-performance dedicated offering for SAP HANA, Microsoft now offers additional SAP HANA TDIv5 options of 6 TB, 12 TB, 18 TB, and 24 TB configurations, in addition to the current configurations from 0.7 TB to 20 TB. For customers who require more memory but the same number of cores, these configurations deliver a better price per TB deployed.

A lot more options for SAP HANA in the cloud
SAP HANA on Azure spans 26 distinct offerings from 192 GB to 24 TB, with scale-up certification up to 20 TB and scale-out certification up to 60 TB. With global availability in 12 regions and plans to increase to 22 regions in the next 6 months, Azure now offers the most choice for SAP HANA workloads of any public cloud.

Microsoft Azure also enables customers to extract insights and analytics from SAP data with services such as the Azure Data Factory SAP HANA connector to automate data pipelines, Azure Data Lake Store for hyper-scale data storage, and Power BI, an industry-leading self-service visualization tool, to create rich dashboards and reports from SAP ERP data.

Read more about SAP Cloud Platform on Azure on the Microsoft Azure blog.

How to perform predictive forecasting in SAP Analytics Cloud
Epicor partners with Microsoft Azure to adopt Cloud ERP
New updates to Microsoft Azure services for SQL Server, MySQL, and PostgreSQL


Atlassian open sources Escalator, a Kubernetes autoscaler project

Savia Lobo
07 Jun 2018
2 min read
Atlassian recently announced the release of its open source Kubernetes autoscaler project, Escalator. The project aims to resolve autoscaling issues where clusters were not fast enough in scaling up or down. On scale-up, when clusters hit capacity, users would have to wait a long time for additional Kubernetes workers to boot and absorb the extra load; many builds cannot tolerate extended delays and would fail. On scale-down, when load had subsided, the autoscaler would not scale down fast enough. This is not much of an issue when the node count is small, but it becomes a problem when the count reaches the hundreds.

Escalator, written in Go, is the solution
To address these cluster scalability problems, Atlassian created Escalator, a batch- and job-optimized autoscaler for Kubernetes. Escalator has two main goals:

- Provide preemptive scale-up with a buffer-capacity feature, to prevent users from ever hitting the "cluster full" situation.
- Support aggressive scale-down of machines when they are no longer required.

Atlassian also wanted to expose Prometheus metrics for the Ops team, to gauge how well the clusters were working. With Escalator, one need not wait for EC2 instances to boot and join the cluster, and one pays only for the number of machines actually needed; it has saved Atlassian thousands of dollars per day, depending on the workloads run. Escalator is now released as open source to the Kubernetes community, so others can take advantage of it too. The company plans to expand the tool to its external Bitbucket Pipelines users and to explore ways to manage more service-based workloads.

Read more about Escalator on the Atlassian blog. You can also check out its GitHub repo.
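The buffer-capacity idea behind preemptive scale-up (always keep spare headroom so pods never see a full cluster) can be sketched in a few lines. This toy function is a simplification under assumed inputs; Escalator itself is a Go daemon that also handles taints, cooldowns, and node draining.

```python
import math

def desired_nodes(requested_cores, cores_per_node, buffer_percent):
    """Toy preemptive scale-up rule: size the cluster so that, on top of the
    cores currently requested, `buffer_percent` extra headroom stays free.
    Always keep at least one node so new work can land immediately."""
    needed = requested_cores * (1 + buffer_percent / 100.0)
    return max(1, math.ceil(needed / cores_per_node))

# 90 cores requested on 8-core nodes with a 10% buffer:
# 90 * 1.1 = 99 cores needed -> ceil(99 / 8) = 13 nodes.
print(desired_nodes(90, 8, 10))
```

Because the target is computed from requested (not used) capacity plus a buffer, the cluster grows before it fills, which is exactly what lets batch jobs avoid the long wait for new workers to boot.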
The key differences between Kubernetes and Docker Swarm
Microsoft's Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)
Kubernetes Containerd 1.1 Integration is now generally available

5 things you shouldn’t miss in DockerCon 2018 next week

Vijin Boricha
07 Jun 2018
5 min read
DockerCon 2018 is around the corner, taking place at the Moscone Center in San Francisco next week, from 12 to 15 June. More than 6,000 developers, architects, system admins, and other IT professionals are expected to get their hands on the latest enhancements in the container ecosystem. DockerCon is where people from the Docker community come together to learn, share, and collaborate; attendees range from beginners to intermediate and advanced experts interested in learning something new and enhancing their skill sets. So if you are interested in learning modern ways of working with Docker, this is your perfect chance: two full days of training, over 100 sessions and hands-on labs, free workshops and more. If you haven't yet scheduled your agenda, the DockerCon Agenda Builder will help you browse and search the sessions you are looking forward to. With that said, here are some interesting sessions you should not miss on your trip to DockerCon 2018.

Automated Hardware Testing Using Docker for Space
We already know how hard space is to cope with, but that is not keeping Docker from thinking beyond web content. Space software development is difficult because the software runs on highly constrained embedded hardware. But Docker and its DevOps mentality helped DART create a scalable, rapidly deployable test infrastructure for NASA's mission to hit an asteroid at 6 km/s. This presentation covers how Docker can be used both as an embedded development environment and as a scalable test environment, and how it evolved testing from human-based to automated. Lastly, it summarizes the do's and don'ts of automated hardware testing, how you can play a key role in making a difference, and what Docker hopes to achieve in the near future.

Democratizing Machine Learning on Kubernetes
One of the biggest challenges today is understanding how to build a platform that runs common open source ML libraries such as TensorFlow. This session covers deploying a distributed TensorFlow training cluster with GPU scheduling on Kubernetes. It also explains how distributed training functions, its various options and when to choose which, and closes with best practices for using distributed TensorFlow on top of Kubernetes. In the end, you will be given a public GitHub repository of the entire work presented in the session.

Serverless Panel (Gloo function gateway)
DockerCon 2018 follows your journey to containerization: modernizing traditional applications, adding microservices, and then serverless environments. One of the interesting development areas in 2018 is Gloo, which is designed for microservice, monolithic, and serverless applications. It is a high-performance, plugin-extendable, platform-agnostic function gateway that enables enterprise application developers to modernize traditional applications: Gloo containerizes a traditional application and uses microservices to add functions to it. Developers can then leverage orchestrated, routed, portable serverless frameworks on top of Docker EE or AWS Lambda to create hybrid cloud applications.

Don't Have A Meltdown! Practical Steps For Defending Your Apps
With recent cybercrime events such as Meltdown and Spectre, security has become one of the major concerns for application developers and operations teams. This session demonstrates best practices, configuration, and tools to effectively defend your container deployments from common attacks, covering risks and preventive measures around authentication, injection, sensitive data, and more. The events discussed are inspired by the OWASP Top 10 and other well-known, large-scale attacks. By the end of this session, you will understand the important security risks in your application and how to go about mitigating them.

Tips & Tricks of the Docker Captains
This session focuses entirely on tips and tricks for making the most of Docker: best practices from Docker Captains that make common operations easier, address common misunderstandings, and help avoid common pitfalls. Topics revolve around build processes, security, orchestration, maintenance and more. The session will not only make life easier for new and intermediate Docker users, but will also provide new and valuable information to advanced users.

DockerCon is considered the number one container conference for IT professionals interested in learning and creating scalable solutions with innovative technologies. So, what are you waiting for? Start planning for DockerCon 2018 now, and if you haven't yet, register for DockerCon 2018 and get your container journey started.

Related Links
What's new in Docker Enterprise Edition 2.0?
Google Kubernetes Engine 1.10 is now generally available and ready for enterprise use
Microsoft's Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)

Savia Lobo
04 Jun 2018
3 min read

Google Kubernetes Engine 1.10 is now generally available and ready for enterprise use

Google recently announced that Google Kubernetes Engine 1.10 is generally available and ready for enterprise use. Enterprises have long faced challenges around security, networking, logging, and monitoring. With Kubernetes Engine 1.10, Google has introduced new features with robust security built in for enterprise use:

Shared Virtual Private Cloud (VPC): enables better control of network resources.
Regional Persistent Disks and Regional Clusters: ensure higher availability and stronger SLAs.
Node Auto-Repair GA and Custom Horizontal Pod Autoscaler: enable greater automation.

New features in Google Kubernetes Engine 1.10

Networking

One can deploy workloads in Google's global Virtual Private Cloud (VPC) using a Shared VPC model. This gives you the flexibility to manage access to shared network resources using IAM permissions while still isolating departments. Shared VPC lets organization administrators delegate administrative responsibilities, such as creating and managing instances and clusters, to service project admins while maintaining centralized control over network resources like subnets, routers, and firewalls.

[Image: Shared VPC network in Kubernetes Engine 1.10]

Storage

Kubernetes Engine now supports the new Regional Persistent Disk (Regional PD), which makes it easy to build highly available solutions. A Regional PD provides persistent network-attached block storage with synchronous replication of data between two zones within a region. With Regional PDs, one no longer has to worry about application-level replication and can instead take advantage of replication at the storage layer. This replication offers a convenient building block for implementing highly available solutions on Kubernetes Engine.
Reliability

Regional clusters, which will be available soon, allow you to create a Kubernetes Engine cluster with a multi-master, highly available control plane that spreads the masters across three zones in a region, which is important for clusters with higher uptime requirements. Regional clusters also offer a zero-downtime upgrade experience when upgrading Kubernetes Engine masters. The node auto-repair feature is now generally available; it monitors the health of the nodes in your cluster and repairs any that are unhealthy.

Auto-scaling

In Kubernetes Engine 1.10, the Horizontal Pod Autoscaler supports three custom metric types in beta:

External: for scaling based on Cloud Pub/Sub queue length
Pods: for scaling based on the average number of open connections per pod
Object: for scaling based on Kafka running in the cluster

To know more about these features in detail, visit the Google Blog.

Kublr 1.9.2 for Kubernetes cluster deployment in isolated environments released!
Kubernetes Containerd 1.1 Integration is now generally available
Rackspace now supports Kubernetes-as-a-Service
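The External metric type above can be sketched as a Horizontal Pod Autoscaler manifest that scales a deployment on Cloud Pub/Sub backlog. This is an illustrative sketch; the deployment and subscription names are assumptions, not from Google's announcement.

```yaml
# Illustrative sketch: HPA scaling on a Cloud Pub/Sub External metric.
# Deployment and subscription names are assumptions.
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: pubsub-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pubsub-worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: pubsub.googleapis.com|subscription|num_undelivered_messages
      metricSelector:
        matchLabels:
          resource.labels.subscription_id: my-subscription
      targetAverageValue: "2"   # aim for ~2 undelivered messages per replica
```

The cluster must have a metrics adapter (such as the Stackdriver custom metrics adapter) installed for External metrics to resolve.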

Natasha Mathur
01 Jun 2018
3 min read

Microsoft releases Windows 10 Insider build 17682!

Microsoft announced today that it is releasing Windows 10 Insider build 17682 from the RS5 branch. The new release includes Sets improvements, a better wireless projection experience, and Microsoft Edge and RSAT improvements, along with other updates and fixes.

Major improvements and updates

Sets improvements

The new tab page has been updated to make it easier to launch apps. On clicking the plus button in a Sets window, apps are visible in the frequent destinations list. The all-apps list has been integrated into the new tab page so you can browse apps instead of using the search box. Apps that support Sets will launch into a new tab when clicked. If you see the News Feed, select the "Apps" link next to "News Feed" to switch to the all-apps list.

Managing the wireless projection experience

Earlier, users experienced disturbances during wireless projection when a session was started through File Explorer or an app. This has been fixed in Windows 10 Insider build 17682: a control banner now appears at the top of the screen during a session. The control banner shows your connection state, lets you tune the connection, and enables quick disconnect or reconnect to the same sink. Tuning is done with the settings gear. Screen-to-screen latency is optimized for the following scenarios:

Game mode makes gaming over a wireless connection possible by minimizing screen-to-screen latency.
Video mode ensures smooth, glitch-free video playback on the big screen by increasing screen-to-screen latency.
Productivity mode balances game mode and video mode: screen-to-screen latency is responsive enough that typing feels natural while limiting glitches in video.

All connections start off in productivity mode.
Improvements in Microsoft Edge for developers

With Windows 10 Insider build 17682, Microsoft Edge gains unprefixed support for the new Web Authentication API (WebAuthn). Web Authentication provides a scalable, interoperable solution for replacing passwords with stronger hardware-bound credentials. Microsoft Edge users can use Windows Hello (via PIN or biometrics) as well as external authenticators, namely FIDO2 Security Keys or FIDO U2F Security Keys, to authenticate to websites securely.

RSAT available on demand

There is no longer any need to manually download RSAT on every upgrade. Select "Manage optional features" in Settings, then click "Add a feature" to see all the listed RSAT components. Pick the components you want, and on the next upgrade Windows will ensure that those components automatically persist through the upgrade.

More information about other known issues and improvements is on the Windows Blog.

Microsoft Cloud Services get GDPR Enhancements
Microsoft releases Windows 10 SDK Preview Build 17115 with Machine Learning APIs
Microsoft introduces SharePoint Spaces, adds virtual reality support to SharePoint
Gebin George
01 Jun 2018
2 min read

AT&T combines with Google cloud to deliver cloud networking at scale

AT&T has partnered with Google Cloud to deliver cloud networking solutions for enterprise customers using Google's Partner Interconnect solution. The new offering enables customers to use AT&T NetBond to connect to Google Cloud Platform in a secure way. Businesses can also connect to Google Cloud via Cloud VPN.

Roman Pacewicz, Chief Product Officer at AT&T, said: "We're committed to helping businesses transform through our edge-to-edge capabilities. This collaboration with Google Cloud gives businesses access to a full suite of productivity tools and a highly secure, private network connection to the Google Cloud Platform."

Paul Ferrand, President of Global Customer Operations at Google Cloud, said: "AT&T provides organizations globally with secure, smart solutions, and our work to bring Google Cloud's portfolio of products, services and tools to every layer of its customers' business helps serve this mission. Our alliance allows businesses to seamlessly communicate and collaborate from virtually anywhere and connect their networks to our highly-scalable and reliable infrastructure."

AT&T is also offering access to G Suite, Google's cloud-based productivity suite, which includes Gmail, Docs, and Drive, available via AT&T Collaborate. Partner Interconnect gives businesses private connectivity to Google Cloud and helps them run multiple workloads across different cloud environments. It also allows data centers located far from a Google Cloud region or point of presence to connect at up to 10 Gbps. Additionally, since G Suite is available with AT&T Collaborate, enterprises have a single source for chat, voice, video, and desktop sharing. Businesses can also get carrier-grade voice reliability and security from within the G Suite applications, across practically any device from any location.
Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
How to Run Hadoop on Google Cloud – Part 1
Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)

Savia Lobo
31 May 2018
2 min read

Amazon Neptune, AWS’ cloud graph database, is now generally available

Last year, Amazon Web Services (AWS) announced its fast, reliable, fully managed cloud graph database, Amazon Neptune, at AWS re:Invent 2017. AWS has now announced that Neptune is generally available. Graph databases store the relationships between connected data as graphs, which enables applications to access that data in a single operation rather than through a bunch of individual queries. Neptune likewise makes it easy for developers to build and run applications that work with highly connected datasets, and as an AWS-managed graph database service it offers high scalability, security, durability, and availability.

With general availability come a large number of performance enhancements and updates, including:

AWS CloudFormation support
AWS Command Line Interface (CLI)/SDK support
An update to Apache TinkerPop 3.3.2
Support for IAM roles with bulk loading from Amazon Simple Storage Service (S3)

Some of the benefits of Amazon Neptune include:

Neptune's query processing engine is highly optimized for two of the leading graph models, Property Graph and the W3C's RDF, and their associated query languages, Apache TinkerPop Gremlin and SPARQL, giving customers the flexibility to choose the right approach for their specific graph use case.
Neptune storage scales automatically, without downtime or performance degradation, as customer data grows. It allows developers to design sophisticated, interactive graph applications that can query billions of relationships with millisecond latency.
There are no upfront costs, licenses, or commitments required; customers pay only for the Neptune resources they use.

To know more interesting facts about Amazon Neptune in detail, visit its official blog.

2018 is the year of graph databases. Here's why.
From Graph Database to Graph Company: Neo4j's Native Graph Platform addresses evolving needs of customers
When, why and how to use Graph analytics for your big data
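To illustrate the core idea behind graph databases, that connected-data questions are answered by traversing relationships rather than issuing many individual lookups, here is a small, self-contained Python sketch. This is not Neptune's API (Neptune is queried via Gremlin or SPARQL); the vertex names and the "knows" edge label are made up for the example.

```python
# Toy property graph: vertices with labeled edges, the kind of
# structure a graph database such as Neptune is optimized to traverse.
# All names and labels here are illustrative, not from Neptune.
edges = {
    "alice": [("knows", "bob"), ("knows", "carol")],
    "bob":   [("knows", "dave")],
    "carol": [("knows", "dave")],
    "dave":  [],
}

def friends_of_friends(start):
    """Vertices exactly two 'knows' hops from start, found in one traversal."""
    first_hop = {v for label, v in edges[start] if label == "knows"}
    second_hop = set()
    for u in first_hop:
        second_hop.update(v for label, v in edges[u] if label == "knows")
    return second_hop - first_hop - {start}

print(friends_of_friends("alice"))  # {'dave'}
```

In a relational store the same question typically needs a self-join per hop; a graph engine follows the edges directly, which is why query latency stays low even as the number of relationships grows.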