
Tech News - Cloud & Networking

376 Articles

Bodhi Linux 5.0.0 released with updated Ubuntu core 18.04 and a modern look

Sugandha Lahoti
24 Aug 2018
2 min read
The Bodhi team have announced the fifth major release of their Linux distribution. Bodhi Linux 5.0.0 comes with an updated Ubuntu 18.04 core and an overall modern look for its Moksha window manager. Bodhi Linux saw its first stable release seven years ago as a lightweight Linux distribution based on Ubuntu and the Moksha window manager. It uses a minimal base system, allowing users to populate it with the software of their choice.

Bodhi Linux 5.0.0 ships disc images with a fresh new look: a modified version of the popular 'Arc Dark' theme colorized in Bodhi Green. The release also includes a fresh default wallpaper, login screen, and splash screens as your system boots. (Images: the Bodhi Linux default desktop, busy and clean.)

The Bodhi team have not provided a change log, because the move from an Ubuntu 16.04 base to 18.04 is the only major difference. Ubuntu 18.04 brings changes such as:

- Better metric collection in Ubuntu Report
- Support for installing on NVMe with RAID1
- Fix for a typo that made update-manager report crash
- Miscellaneous unattended-upgrade fixes
- The Ubuntu welcome tool now mentions the dock and notifications
- Patches to make audio work on Lenovo machines with dual audio codecs
- Restored New Tab menu item in GNOME Terminal
- New "Thunderbolt" panel in the Settings app

If you installed a pre-release of Bodhi 5.0.0, you simply need to run your system updates to match the latest ISO images; however, the updates will not adjust the look of your desktop automatically. If you have a previous Bodhi release installed, you will need to do a clean install to upgrade to Bodhi 5.0.0. Bodhi 4.5.0 will be supported until Ubuntu 16.04 support runs out in April 2021.

You can read more about the Bodhi 5.0.0 release on the Bodhi Linux blog.

- What to expect from upcoming Ubuntu 18.04 release
- Is Linux hard to learn?
- Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
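Since Bodhi is Ubuntu-based, "running your system updates" on a 5.0.0 pre-release is the usual apt workflow; a hedged sketch (exact package sources depend on your install):

```shell
# Refresh package lists and apply all pending updates on an apt-based
# system such as Bodhi 5.0.0 (requires sudo privileges).
sudo apt-get update
sudo apt-get dist-upgrade -y
```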


Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login

Melisha Dsouza
23 Aug 2018
4 min read
Five years ago, Docker was the talk of the town: it made it possible to run a number of apps on the same old servers, and it made packaging and shipping programs easy. The same cannot be said about Docker now, as the company faces public disapproval over its decision to allow Docker for Mac and Windows to be downloaded only by users logged into the Docker Store. Its quest for "improving the user experience" is clearly facing major roadblocks.

Two years ago, every bug report and reasonable feature request was "hard" or "something you don't want" and resulted in endless back and forth with users. On 02 June 2016, new repository keys were pushed to the Docker public repository. As a direct consequence, any run of "apt-get update" (or equivalent) on a system configured with the broken repo failed with the error "Error https://apt.dockerproject.org/ Hash Sum mismatch". The issue affected all systems worldwide configured with the Docker repository, across all Debian and Ubuntu versions, independent of OS and Docker versions: it became impossible to run a system update or upgrade on an existing system. This seven-hour worldwide outage received little tech news coverage; all that was done was a few messages on a GitHub issue.

You would have expected Docker to be a little more careful after that controversy, but lo and behold, here comes yet another badly managed change.

The current matter in question

On June 20th 2018, GitHub and Reddit were abuzz with comments from confused Docker users who could not download Docker for Mac or Windows without logging into the Docker Store. The following URLs were affected: Install Docker for Mac and Install Docker for Windows. To this, a Docker spokesperson responded that the change was incorporated to improve the Docker for Mac and Windows experience for users moving forward.
This led to a string of accusations from dedicated Docker users (source: github.com). The issue is still ongoing and, with no further statements from the Docker team, users are left in the dark.

In spite of all the hullabaloo, why choose Docker?

A report by DZone indicates that Docker adoption by companies was up 30% in the last year. Its annual revenue is expected to increase fourfold, growing from $749 million in 2016 to more than $3.4 billion by 2021, a compound annual growth rate (CAGR) of 35 percent. So what is this company doing differently?

It's no secret that Docker containers are easy to deploy in a cloud. Docker can be incorporated into most DevOps workflows, including Puppet, Chef, Vagrant, and Ansible, which are some of the major tools in configuration management. Specifically, for CI/CD, Docker makes it achievable to:

- Set up local development environments that are exactly like a live server.
- Run multiple development environments from the same host, each with unique software, operating systems, and configurations.
- Test projects on new or different servers.
- Allow multiple users to work on the same project with the exact same settings, regardless of the local host environment.
- Ensure that applications running in containers are completely segregated and isolated from each other, giving you complete control over traffic flow and management.

So, what's the verdict? Most users called Docker's move manipulative, since it literally asks people to log in with their information so they can be targeted with ad campaigns and spam emails. However, there were also some in support of the move (source: github.com). One Reddit user said that while there is no direct solution to this issue, you can use https://github.com/moby/moby/releases as a workaround, or a proper package manager if you're on Linux.
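The "local development environments that are exactly like a live server" point above usually comes down to pinning everything in a Dockerfile, so every developer and CI runner builds an identical image. A minimal, illustrative sketch (base image, file names, and command are assumptions, not from the article):

```dockerfile
# Pin the base image so every environment starts from the same OS + runtime.
FROM python:3.6-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code and define how it runs.
COPY . .
CMD ["python", "app.py"]
```

Because the image is built from pinned inputs, "works on my machine" differences between laptops, CI, and production largely disappear.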
Hopefully, Docker takes this as a cue before releasing any more updates that could spark public outcry. It will be interesting to see how many companies stick with Docker despite the rollercoaster ride users are put through. You can find further opinions on the matter at reddit.com.

- Docker isn't going anywhere
- Zeit releases Serverless Docker in beta
- What's new in Docker Enterprise Edition 2.0?


Google introduces Cloud HSM beta hardware security module for crypto key security

Prasad Ramesh
23 Aug 2018
2 min read
Google has rolled out a beta of Cloud HSM, a cloud-hosted hardware security module for protecting cryptographic keys. Cloud HSM gives customers better key security without the operational overhead of running HSMs themselves: customers can store encryption keys in modules that meet Federal Information Processing Standard (FIPS) 140-2 Level 3. FIPS is a U.S. government security standard for cryptographic modules in non-military use, and certification to this standard permits use in financial and health-care institutions. The HSM is a specialized hardware component designed to encrypt small data blocks, as opposed to the larger blocks managed with the Key Management Service (KMS).

Cloud HSM is available now and is fully managed by Google, meaning all patching, scaling, cluster management, and upgrades are done automatically with no downtime. The customer has full control of the Cloud HSM service via the Cloud KMS APIs. Il-Sung Lee, Product Manager at Google, stated: "And because the Cloud HSM service is tightly integrated with Cloud KMS, you can now protect your data in customer-managed encryption key-enabled services, such as BigQuery, Google Compute Engine, Google Cloud Storage and DataProc, with a hardware-protected key."

In addition to Cloud HSM, Google has also released betas of asymmetric key support for both Cloud KMS and Cloud HSM. Users can now create a variety of asymmetric keys for decryption or signing operations, which means keys used for PKI or code signing can be stored in a Google Cloud managed keystore. "Specifically, RSA 2048, RSA 3072, RSA 4096, EC P256, and EC P384 keys will be available for signing operations, while RSA 2048, RSA 3072, and RSA 4096 keys will also have the ability to decrypt blocks of data."

For more information visit the Google Cloud blog, and for HSM pricing visit the Cloud HSM page.
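As a sketch of what "full control via the Cloud KMS APIs" looks like from the command line, the gcloud invocation below creates an HSM-protected key; the key ring and key names are illustrative, and the flags should be checked against current Cloud KMS documentation:

```shell
# Create a key ring, then an HSM-backed symmetric encryption key inside it.
# Resource names here are placeholders.
gcloud kms keyrings create demo-hsm-ring --location us-east1
gcloud kms keys create demo-hsm-key \
    --location us-east1 \
    --keyring demo-hsm-ring \
    --purpose encryption \
    --protection-level hsm
```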
- Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
- Machine learning APIs for Google Cloud Platform
- Top 5 cloud security threats to look out for in 2018


What’s new in Google Cloud Functions serverless platform

Melisha Dsouza
17 Aug 2018
5 min read
The Google Cloud Next conference in San Francisco in July 2018 saw some exciting new developments in serverless technology: Google is giving development teams the ability to build apps without worrying about managing servers. Bringing together the best of both worlds, serverless and containers, Google announced that Cloud Functions is now generally available and ready for production use. Here is a list of the all-new features that developers can watch out for.

#1 Write Cloud Functions using Node 8 or Python 3.7

With support for async/await and a new function signature, you can now write Cloud Functions using Node 8. Dealing with multiple asynchronous operations is easier because Cloud Functions provides both data and context, and you can use the await keyword to await the results of asynchronous operations.

Python 3.7 can also be used to write Cloud Functions. As with Node, background functions receive data and context, and HTTP functions receive a request. Python HTTP functions are based on the popular Flask microframework, which lets you get set up really fast: requests are based on flask.Request, and responses just need to be compatible with flask.make_response. Python background functions receive data (a dict) and context (google.cloud.functions.Context); to signal completion, you simply return from your function, or raise an exception and Stackdriver error handling will kick in. And, similarly to Node (package.json), Cloud Functions will automatically install all of your Python dependencies (requirements.txt) and build in the cloud. You can have a look at the code differences between Node 6 and Node 8 behavior, and at a Flask request, on the Google Cloud website.

#2 Cloud Functions is now out for Firebase

Cloud Functions for Firebase is also generally available, with full support for Node 8, including ECMAScript 2017 and async/await.
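The background-function signature described above, where the runtime passes event data plus a context object, can be sketched in Python like this; the handler name and payload are illustrative, not Google's sample code:

```python
# Hedged sketch of a Python 3.7 background-function handler: the runtime
# calls it with the event payload (a dict) and a context object carrying
# metadata such as event_id and timestamp. Names here are illustrative.
def handle_event(data, context):
    name = data.get("name", "world")
    # Returning normally signals successful completion to the runtime.
    return f"processed {name}"
```

Raising an exception instead of returning would mark the invocation as failed and surface it to Stackdriver error reporting.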
The additional granular controls include support for runtime configuration options, including region, memory, and timeout, allowing you to refine the behavior of your applications; you can find more details in the Firebase documentation. Flexibility for the application stack is also improved: Firebase events (Analytics, Firestore, Realtime Database, Authentication) are directly available in the Cloud Functions console on GCP, so you can trigger your functions in response to Firebase events directly from your GCP project.

#3 Run headless Chrome by accessing system libraries

Google Cloud Functions has broadened the scope of available libraries by rebasing the underlying Cloud Functions operating system onto Ubuntu 18.04 LTS. Access to system libraries such as ffmpeg and libcairo2 is now available, in addition to imagemagick, as well as everything required to run headless Chrome. For example, you can now process videos and take web page screenshots in Chrome from within Cloud Functions.

#4 Set environment variables

You can now pass configuration to your functions by specifying key-value pairs that are bound to a function; the pairs don't have to exist in your source code. Environment variables are set at deploy time using the --set-env-vars argument and injected into the environment at execution time. You can find more details on the Google Cloud webpage.

#5 Cloud SQL direct connect

You can now connect Cloud Functions to Cloud SQL instances through a fully managed, secure, direct connection. Explore more in the official documentation.

What to expect next in Google Cloud Functions?

Apart from these, Google also promises a range of features to be released in the future:

1. Scaling controls: these will limit the number of instances on a per-function basis, thus limiting traffic.
This will rein in sudden traffic surges, where Cloud Functions rapidly scales up and overloads a database, and will support general prioritization based on the importance of various parts of your system.

2. Serverless scheduling: you'll be able to schedule Cloud Functions down to one-minute intervals, invoked via HTTP(S) or Pub/Sub, letting you execute Cloud Functions on a repeating schedule. Tasks like daily report generation or regularly processing dead-letter queues will now pick up speed.

3. Compute Engine VM access: connect to Compute Engine VMs running on a private network using the --connected-vpc option, which provides a direct connection to compute resources on an internal IP address range.

4. IAM security controls: the new Cloud Functions Invoker IAM role lets you add IAM security to a function's URL, so you can control who can invoke the function using the same security controls as elsewhere in Cloud Platform.

5. Serverless containers: with serverless containers, Google provides the same infrastructure that powers Cloud Functions, but users can simply provide a Docker image as input. This lets them deploy arbitrary runtimes and arbitrary system libraries on arbitrary Linux distributions, while still retaining the same serverless characteristics as Cloud Functions.

You can find detailed information about the updated services on Google Cloud's official page.

- Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
- Google Cloud Launches Blockchain Toolkit to help developers build apps easily
- Zeit releases Serverless Docker in beta
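Tying back to the environment-variable support above: values passed via --set-env-vars land in the process environment, so function code reads them with os.environ rather than hard-coding them. A small sketch (the variable name DEMO_GREETING is an assumption for illustration):

```python
import os

# Simulate what `--set-env-vars DEMO_GREETING=hello` would do at deploy
# time; in Cloud Functions the runtime injects the value for you.
os.environ.setdefault("DEMO_GREETING", "hello")

def greet(request=None):
    # Configuration comes from the environment, not from source code.
    return os.environ["DEMO_GREETING"]
```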


Zeit releases Serverless Docker in beta

Richard Gall
15 Aug 2018
3 min read
Zeit, the organization behind the cloud deployment software Now, yesterday launched Serverless Docker in beta. The concept was first discussed by the Zeit team at Zeit Day 2018 back in April, but it's now available to use and promises to radically speed up deployments for engineers. In a post published on the Zeit website yesterday, the team listed some of the key features of this new capability, including:

- An impressive 10x-20x improvement in cold boot performance (in practice, cold boots can happen in less than a second)
- A new slot configuration property that defines resource allocation in terms of CPU and memory, allowing you to fit an application within the set of constraints most appropriate for it
- Support for HTTP/2.0 and WebSocket connections to deployments, which means you no longer need to rewrite applications as functions

The key point to remember with this release, according to Zeit, is that "Serverless can be a very general computing model. One that does not require new protocols, new APIs and can support every programming language and framework without large rewrites."

Read next: Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1

What's so great about Serverless Docker?

Clearly, speed is one of the most exciting things about Serverless Docker. But there's more to it than that: it also offers a great developer experience. Johannes Schickling, co-founder and CEO of Prisma (a GraphQL data abstraction layer), said that with Serverless Docker, Zeit "is making compute more accessible. Serverless Docker is exactly the abstraction I want for applications." https://twitter.com/schickling/status/1029372602178039810

Others on Twitter were also complimentary about Serverless Docker's developer experience, with one person comparing it favourably with AWS: "their developer experience just makes me SO MAD at AWS in comparison."
https://twitter.com/simonw/status/1029452011236777985

Combining serverless and containers

One of the reasons people are excited about Zeit's release is that it provides the next step in serverless, but it also brings containers into the picture. Typically, much of the conversation around software infrastructure over the last year or so has treated serverless and containers as two options to choose between, rather than two things that can be used together. It's worth remembering that Zeit's product has largely been developed alongside the customers that use Now: "This beta contains the lessons and the experiences of a massively distributed and diverse user base, that has completed millions of deployments, over the past two years." Eager to demonstrate how Serverless Docker works for a wide range of use cases, Zeit has put together a long list of examples of Serverless Docker in action on GitHub.

Read next:
- A serverless online store on AWS could save you money. Build one.
- Serverless computing wars: AWS Lambdas vs Azure Functions
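As a sketch of the new slot property described above, a Now deployment's configuration might look roughly like this; the field names and the slot identifier are assumptions based on the feature description, not Zeit's documented schema:

```json
{
  "type": "docker",
  "slot": "c.125-m512"
}
```

Here a slot value like "c.125-m512" would denote roughly 0.125 CPU and 512 MB of memory, i.e. the resource constraints the application is fitted into.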


CNCF Sandbox, the home for evolving cloud native projects, accepts Google’s OpenMetrics Project

Fatema Patrawala
14 Aug 2018
3 min read
The Cloud Native Computing Foundation (CNCF) has accepted OpenMetrics, an open source specification for metrics exposition, into the CNCF Sandbox, a home for early-stage and evolving cloud native projects. Google cloud engineers and other vendors had been working on this persistently for the past several months, and it has finally been accepted by the CNCF. Engineers are also working on ways to support OpenMetrics in OpenCensus, a set of uniform tracing and stats libraries that work with multi-vendor services.

OpenMetrics will bring together the maturity and adoption of Prometheus and Google's background in working with stats at extreme scale. It will also bring in the experience and needs of a variety of projects, vendors, and end users who aim to move away from hierarchical monitoring and enable users to transmit metrics at scale. The open source initiative, focused on creating a neutral metrics exposition format, will provide a sound data model for users' current and future needs, embedded in a standard that is an evolution of the widely adopted Prometheus exposition format. While there are numerous monitoring solutions available today, many do not focus on metrics and are based on old technologies with proprietary, hard-to-implement, hierarchical data models.

"The key benefit of OpenMetrics is that it opens up the de facto model for cloud native metric monitoring to numerous industry leading implementations and new adopters. Prometheus has changed the way the world does monitoring and OpenMetrics aims to take this organically grown ecosystem and transform it into a basis for a deliberate, industry-wide consensus, thus bridging the gap to other monitoring solutions like InfluxData, Sysdig, Weave Cortex, and OpenCensus. It goes without saying that Prometheus will be at the forefront of implementing OpenMetrics in its server and all client libraries. CNCF has been instrumental in bringing together cloud native communities.
We look forward to working with this community to further cloud native monitoring and continue building our community of users and upstream contributors," says Richard Hartmann, Technical Architect at SpaceNet, Prometheus team member, and founder of OpenMetrics.

OpenMetrics contributors include AppOptics, Cortex, Datadog, Google, InfluxData, OpenCensus, Prometheus, Sysdig and Uber, among others.

"Google has a history of innovation in the metric monitoring space, from its early success with Borgmon, which has been continued in Monarch and Stackdriver. OpenMetrics embodies our understanding of what users need for simple, reliable and scalable monitoring, and shows our commitment to offering standards-based solutions. In addition to our contributions to the spec, we'll be enabling OpenMetrics support in OpenCensus," says Sumeer Bhola, Lead Engineer on Monarch and Stackdriver at Google.

For more information about OpenMetrics, please visit openmetrics.io. To quickly enable trace and metrics collection from your application, please visit opencensus.io.

- 5 reasons why your business should adopt cloud computing
- Alibaba Cloud partners with SAP to provide a versatile, one-stop cloud computing environment
- Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1
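For readers unfamiliar with the Prometheus exposition format that OpenMetrics evolves, a minimal sample looks like this (the metric name and labels are illustrative):

```text
# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="500"} 3
```

Each line carries a metric name, optional key-value labels, and a sample value; the # HELP and # TYPE comments carry metadata that OpenMetrics formalizes.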

GitHub open sources its GitHub Load Balancer (GLB) Director

Savia Lobo
10 Aug 2018
2 min read
GitHub open sourced the GitHub Load Balancer (GLB) Director on August 8, 2018. GLB Director is a Layer 4 load balancer that scales a single IP address across a large number of physical machines while minimizing connection disruption when servers change. Apart from open sourcing the GLB Director, GitHub has also shared details of the load balancer's design.

GitHub first unveiled GLB on September 22, 2016. GLB is GitHub's scalable load balancing solution for bare metal data centers; it powers the majority of GitHub's public web and Git traffic, as well as critical internal systems such as its highly available MySQL clusters.

How GitHub Load Balancer Director works

GLB Director is designed for use in data center environments where multiple servers announce the same IP address via BGP, and network routers shard traffic amongst those servers using ECMP routing. ECMP shards connections per-flow using consistent hashing, so adding or removing nodes causes some disruption to traffic, as state isn't stored for each flow. A split L4/L7 design is typically used to let the L4 servers redistribute these flows back to a consistent server in a flow-aware manner. GLB Director implements the L4 (director) tier of a split L4/L7 load balancer design.

The GLB design

The GLB Director does not replace services like haproxy and nginx; rather, it is a layer in front of these services (or any TCP service) that allows them to scale across multiple physical machines without requiring each machine to have a unique IP address. GLB Director only processes packets on ingress: it encapsulates them inside an extended Generic UDP Encapsulation packet, while egress packets from proxy-layer servers are sent directly to clients using Direct Server Return.

Read more about the GLB Director in detail in the GitHub Engineering blog post.
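The per-flow consistent hashing described above can be illustrated with a small rendezvous-hashing sketch; this is not GitHub's implementation, just the general technique, with made-up server names:

```python
import hashlib

def flow_hash(src_ip, src_port, dst_ip, dst_port):
    # Derive a stable key for one TCP flow from its 4-tuple.
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

def pick_server(flow, servers):
    # Rendezvous (highest-random-weight) hashing: a given flow always maps
    # to the same server, and removing a server only remaps the flows that
    # were on it -- the property a flow-aware L4 tier relies on.
    return max(servers, key=lambda s: hashlib.sha256(f"{flow}|{s}".encode()).digest())
```

Because the choice is a pure function of the flow and the server set, any director node computes the same mapping without shared state.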
- Microsoft's GitHub acquisition is good for the open source community
- Snapchat source code leaked and posted to GitHub
- Why Golang is the fastest growing language on GitHub
- GitHub has added security alerts for Python


Microsoft Azure’s new governance DApp: An enterprise blockchain without mining

Prasad Ramesh
09 Aug 2018
2 min read
Microsoft Azure has just released a Blockchain-as-a-Service product that uses Ethereum, with a set of templates to deploy and configure your choice of blockchain network. This can be done with minimal Azure and blockchain knowledge.

A conventional public blockchain is based on Proof-of-Work (PoW) and requires mining because the parties do not trust each other. An enterprise blockchain does not require PoW; instead it is based on Proof-of-Authority (PoA), where approved identities, or validators, validate the transactions on the blockchain. The PoA product features a decentralized application (DApp) called the Governance DApp.

Blockchains in this new model can be deployed in 5 to 45 minutes depending on the size and complexity of the network. The PoA network comes with security features such as an identity leasing system that ensures no two nodes carry the same identity, along with other features for good performance:

- WebAssembly smart contracts: Solidity is cited as one of the pain points of developing smart contracts on Ethereum. This feature allows developers to use familiar languages such as C, C++, and Rust.
- Azure Monitor: used to track node and network statistics. Developers can view the underlying blockchain to track statistics, while network admins can detect and prevent network outages.
- Extensible governance: customers can participate in a consortium without managing the network infrastructure; this can optionally be delegated to an operator of their choosing.
- Governance DApp: provides decentralized governance in which network authority changes are administered via on-chain voting by select administrators. It also contains validator delegation, letting authorities manage the validator nodes set up in each PoA deployment. Users can audit change history; each change is recorded, providing transparency and auditability.
Along with these features, the Governance DApp will also ensure each consortium member has control over their own keys, enabling secure signing on a wallet chosen by the user. The blog mentions: "In the case of a VM or regional outage, new nodes can quickly spin up and resume the previous nodes' identities." To know more, visit the official Microsoft blog.

Read next:
- Automate tasks using Azure PowerShell and Azure CLI [Tutorial]
- Microsoft announces general availability of Azure SQL Data Sync
- Microsoft supercharges its Azure AI platform with new features


Oracle's bid protest against the U.S. Defense Department's (Pentagon) $10 billion cloud contract

Savia Lobo
09 Aug 2018
2 min read
On Monday, Oracle Corp filed a protest with the Government Accountability Office (GAO) against the Pentagon's $10 billion JEDI (Joint Enterprise Defense Infrastructure) cloud contract. Oracle believes the contract should not be awarded to a single company, but should instead allow for multiple winners. https://twitter.com/92newschannel/status/1027090662162944000

The U.S. Defense Department unveiled the competition in July and stated that there would be a single winner: the bidder enabling the most rapid adoption of cloud technology. Deborah Hellinger, Oracle's spokeswoman, said in a statement on Tuesday, "The technology industry is innovating around next-generation cloud at an unprecedented pace and JEDI virtually assures DoD will be locked into a legacy cloud for a decade or more. The single-award approach is contrary to the industry's multi-cloud strategy, which promotes constant competition, fosters innovation and lowers prices."

A bid protest is a challenge to the terms of a solicitation or the award of a federal contract. The GAO, which adjudicates and decides these challenges, will issue a ruling on the protest by November 14. This is the first bid protest filed since the competition was unveiled.

Amazon.com is seen as the top contender for the deal: Amazon Web Services (AWS) is the only company approved by the U.S. government to handle secret and top secret data. The competition has therefore attracted criticism from companies that fear AWS, Amazon's cloud unit, will win the contract, choking out the hopes of others (Microsoft Corp (MSFT.O), Oracle (ORCL.N), IBM (IBM.N) and Alphabet Inc's (GOOGL.O) Google) for the government cloud computing contract.

Read more about this news on The Register.

- Oracle makes its Blockchain cloud service generally available
- Google employees quit over company's continued Artificial Intelligence ties with the Pentagon
- Oracle reveals issues in Object Serialization. Plans to drop it from core Java.


Google’s second innings in China: Exploring cloud partnerships with Tencent and others

Bhagyashree R
07 Aug 2018
3 min read
Google, aiming to re-enter the Chinese market, is in talks with top companies in China such as Tencent Holdings Ltd. (the company that owns the popular social media app WeChat) and Inspur Group, in order to expand its cloud services into the second-largest economy. According to people familiar with the ongoing discussions, the talks began in early 2018, and Google narrowed the candidates down to three firms in late March. But because of the US-China trade war, it is uncertain whether this will materialize.

Why is Google interested in cloud partnerships with Chinese tech giants?

In many countries, Google rents computing power and storage over the internet and sells G Suite, which includes Gmail, Docs, Drive, Calendar, and more tools for business, all running on its data centers. Because China requires digital information to be stored in the country, Google wants to collaborate with domestic data center and server providers to run its internet-based services, which is why it needs to partner with local players. A tie-up with large Chinese tech firms like Tencent and Inspur would also give Google powerful allies as it attempts a second innings in China after exiting the country in 2010. A cloud partnership in China would help it compete with rivals like Amazon and Microsoft, and with Tencent by its side, Google would be able to go up against local competitors including Alibaba Group Holding Ltd.

How Google has been making inroads into China in the recent past

In December, Google launched its AI China Center, the first such center in Asia, at the Google Developer Days event in Shanghai. In January, Google agreed to a patent licensing deal with Tencent Holdings Ltd., with an understanding that the two companies would team up on developing future technologies.
Google could host services on Tencent’s data centers and the company could also promote its services to their customers. Reportedly, to expand its boundaries to China, Google has agreed upon launching a search engine which will comply with the Chinese cybersecurity regulations. A project code-named Dragonfly has been underway since spring of 2017, and accelerated after the meeting between its CEO Sundar Pichai and top Chinese government official in December 2017. It has  launched a WeChat mini program and reportedly developing an news app for China. It’s building a cloud data center region in Hong Kong this year. Joining the recently launched Mumbai, Sydney, and Singapore regions, as well as Taiwan and Tokyo, this will be the sixth GCP region in Asia Pacific. With no official announcements, we can only wait and see what happens in the future. But from the above examples, we can definitely conclude that Google is trying to expand its boundaries to China, and that too in full speed. To know more about this recent Google’s partnership with China in detail, you can refer to the full coverage on the Bloomberg’s report. Google to launch a censored search engine in China, codenamed Dragonfly Google Cloud Launches Blockchain Toolkit to help developers build apps easily

Amazon may be planning to move from Oracle by 2020

Natasha Mathur
07 Aug 2018
3 min read
Amazon is reportedly working towards shifting its business away from Oracle's database software by 2020, as per a CNBC report last week. In fact, according to the report, Amazon has already started to transfer most of its infrastructure internally to Amazon Web Services and will shift entirely by the first quarter of 2020.

Amazon and Oracle have been fierce competitors for a long time, each arguing that its products and services are superior. But Amazon has also been a major Oracle customer: it has leveraged Oracle's database software for many years to power its infrastructure for retail and cloud businesses. Oracle's database has been a market standard since the 1990s and is one of the most important products for many organizations across the globe, providing the databases they run their operations on. Despite having started off its business with Oracle, Amazon launched AWS back in 2006, taking Oracle's SQL-based database head on and stealing away many of Oracle's customers.

This is not the first time news of Amazon's shift away from Oracle has stirred up: its plans to move away from Oracle technology came to light back in January this year. As per a statement issued to CNBC on August 1, an Oracle spokesperson said that Amazon had "spent hundreds of millions of dollars on Oracle technology" over the past many years. In fact, Larry Ellison, Oracle's executive chairman and CTO, mentioned during Oracle's second-quarter fiscal 2018 earnings call: "A company you've heard of just gave us another $50 million this quarter to buy Oracle database and other Oracle technology. That company is Amazon."

The news of Amazon's migration comes at a time of substantial growth for AWS. AWS saw a 49% growth rate in Q2 2018, while Oracle's business has remained stagnant for four years, putting more pressure on the company. There has also been an increase in Amazon's "backlog revenue" (i.e. the total value of the company's future contract obligations), which has reached $16 billion, up from $12.4 billion in May. In addition, AWS has consistently appeared as a "Leader" in Gartner's Magic Quadrant for Cloud Infrastructure as a Service (IaaS) for the past six years.

There have also been regular wars of words between Larry Ellison and Andy Jassy, CEO of AWS, over each other's performance during conference keynotes and analyst calls. Jassy took a shot at Oracle last year during his keynote at AWS's big tech conference: "Oracle overnight doubled the price of its software on AWS. Who does that to their customers? Someone who doesn't care about the customer but views them as a means to their financial ends." Ellison, in turn, slammed Amazon during the Oracle OpenWorld conference last year, saying that "Oracle's services are just plain better than AWS" and that Amazon is "one of the biggest Oracle users on Planet Earth."

With other cloud providers such as AWS, Microsoft, Google, Alibaba, and IBM catching up, Oracle seems to be losing the database race. So if Amazon does decide to phase out Oracle, Oracle will have to step up its game big time to regain cloud market share.

Oracle makes its Blockchain cloud service generally available
Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer


Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices

Savia Lobo
01 Aug 2018
3 min read
Istio, an open-source platform that connects, manages, and secures microservices, has announced its version 1.0. Istio provides a service mesh for microservices from Google, IBM, Lyft, Red Hat, and other collaborators from the open-source community.

What's Istio?

Popularly known as a service mesh, Istio collects logs, traces, and telemetry, and adds security and policy without embedding client libraries. Istio also acts as a platform, providing APIs that allow integration with systems for logging, telemetry, and policy. It helps in measuring the actual traffic between services, including requests per second, error rates, and latency, and generates a dependency graph showing how services affect one another.

Istio offers a helping hand to your DevOps team by providing them with tools to run distributed apps smoothly. Here's a list of what Istio does for your team:

Performs canary rollouts, allowing the DevOps team to smoke-test any new build and ensure good build performance.
Offers fault injection, retry logic, and circuit breaking, so that DevOps teams can do more testing and change network behavior at runtime to keep applications up and running.
Adds security: it can be used to layer mTLS on every call, adding encryption in flight with the ability to authorize every single call on your cluster and mesh.

What's new in Istio 1.0?

Multi-cluster support for Kubernetes
Multiple Kubernetes clusters can now be added to a single mesh, enabling cross-cluster communication and consistent policy enforcement. Multi-cluster support is now in beta.

Networking APIs now in beta
Networking APIs that enable fine-grained control over the flow of traffic through a mesh are now in beta. Explicitly modeling ingress and egress concerns using Gateways allows operators to control the network topology and meet access security requirements at the edge.

Mutual TLS can be rolled out incrementally without updating all clients
Mutual TLS can now be rolled out incrementally, without requiring all clients of a service to be updated. This is a critical feature that unblocks in-place adoption by existing production deployments.

Mixer now supports out-of-process adapters
Mixer now has support for developing out-of-process adapters. This will become the default way to extend Mixer over the coming releases and makes building adapters much simpler.

Updated authorization policies
Authorization policies, which control access to services, are now entirely evaluated locally in Envoy, increasing their performance and reliability.

Recommended install method
Helm chart installation is now the recommended install method, offering rich customization options to adopt Istio on your terms.

Istio 1.0 also includes performance improvements driven by continuous regression testing, large-scale environment simulation, and targeted fixes. Read more about Istio 1.0 in its official release notes.

6 Ways to blow up your Microservices!
How to build Dockers with microservices
How to build and deploy Microservices using Payara Micro
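The canary rollouts and traffic-routing controls described above are expressed through Istio's networking APIs. As a rough sketch only (the `reviews` service and its `v1`/`v2` subsets are hypothetical, and a matching DestinationRule defining those subsets is assumed to exist), a weighted traffic split in the Istio 1.0-era `v1alpha3` API could be applied like this:

```shell
# Route 90% of traffic to the stable v1 subset and 10% to the v2 canary.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
EOF
```

Gradually shifting the weights (90/10, then 50/50, then 0/100) is how a canary is typically promoted; retries, circuit breaking, and mTLS settings ride along without any application changes.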


AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer

Natasha Mathur
30 Jul 2018
2 min read
AWS announced support for two new actions, redirect and fixed-response, in Elastic Load Balancing's Application Load Balancer last week.

Elastic Load Balancing automatically distributes incoming application traffic across targets such as Amazon EC2 instances, IP addresses, and containers. One of the load balancer types it offers is the Application Load Balancer, which simplifies and improves the security of your application, as it uses only the latest SSL/TLS ciphers and protocols. It is best suited for load balancing HTTP and HTTPS traffic and operates at the request level (layer 7).

Redirect and fixed-response support simplifies the deployment process while leveraging the scale, availability, and reliability of Elastic Load Balancing. Let's discuss how these latest features work.

The new redirect action enables the load balancer to redirect incoming requests from one URL to another. This includes redirecting HTTP requests to HTTPS, allowing more secure browsing, better search ranking, and a higher SSL/TLS score for your site. Redirects also help move users from an old version of an application to a new one.

The fixed-response action helps control which client requests are served by your applications. It lets you respond to incoming requests with HTTP error response codes as well as custom error messages directly from the load balancer, with no need to forward the request to the application.

If you use both redirect and fixed-response actions in your Application Load Balancer, the customer experience and the security of your user requests are improved considerably. Redirect and fixed-response actions are now available for your Application Load Balancer in all AWS regions. For more details, check out the Elastic Load Balancing documentation page.
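As a hedged sketch of how the two new actions might be wired up with the AWS CLI (the `$ALB_ARN` and `$LISTENER_ARN` variables and the `/maintenance` path are placeholders, not values from the announcement), an HTTP-to-HTTPS redirect listener and a fixed-response rule look roughly like this:

```shell
# Redirect all HTTP traffic on port 80 to HTTPS with a permanent (301) redirect.
aws elbv2 create-listener \
  --load-balancer-arn "$ALB_ARN" \
  --protocol HTTP --port 80 \
  --default-actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}'

# Serve a fixed 503 with a custom message for /maintenance, without forwarding
# the request to any target.
aws elbv2 create-rule \
  --listener-arn "$LISTENER_ARN" \
  --priority 10 \
  --conditions 'Field=path-pattern,Values=/maintenance' \
  --actions 'Type=fixed-response,FixedResponseConfig={StatusCode=503,ContentType=text/plain,MessageBody=Down for maintenance}'
```

Because both actions are resolved entirely at the load balancer, no request ever reaches your application targets for these paths.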
Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
Build an IoT application with AWS IoT [Tutorial]

Atlassian sells Hipchat IP to Slack

Richard Gall
30 Jul 2018
3 min read
Before Slack was a thing, Atlassian's HipChat was one of a number of internal messaging tools competing in a nascent marketplace. However, with Slack today dominating the messaging app landscape, Atlassian has given in. The Australian company has announced it will be selling the HipChat IP to Slack and discontinuing the service in February 2019.

The financial details of the deal haven't been disclosed. However, Slack CEO Stewart Butterfield did reveal that Atlassian will be making a "small but symbolically important investment in Slack" in a tweet on Thursday, July 26.

https://twitter.com/stewart/status/1022574806623895553

The deal is being presented as a partnership rather than a straightforward acquisition. On the Atlassian blog, for example, Joff Redfern, VP of Product Management, was keen to stress this framing: "We have always had a spirited yet friendly competition with Slack (and have even sent each other congratulatory cookies and cake!). Across our product portfolio, we have long shared many integrations, which hundreds of thousands of teams use every day. Through this new partnership, both companies will lean into building better integrations together and more sharply define the modern workplace experience for companies everywhere."

As well as HipChat, Slack is also purchasing the IP for Stride, another messaging app, released by Atlassian in September 2017. Stride was initially designed to succeed HipChat, but Redfern explained that Slack's dominance of the current market meant this step simply made sense for Atlassian: "While we've made great early progress with Stride, we believe the best way forward for our customers and for Atlassian is to enter into a strategic partnership with Slack and no longer offer our own real-time communications products."

HipChat Server and HipChat Data Center will also be discontinued. Conscious that this could lead to some real migration challenges, Atlassian has put together a detailed migration guide.

Who wins in the Slack and Atlassian deal?

The truth is that both parties have struck a good deal here (financial details notwithstanding). Atlassian, as it acknowledges, simply couldn't compete in a market that Slack dominates. For Slack, too, the deal comes at a good time: Microsoft's Teams app is set to replace Skype for Business in Microsoft's Office 365 suite, and a free version of Teams, released earlier this month, which doesn't require an Office 365 subscription, could also be cause for concern for Slack.

The one group that loses: users

Although the deal might work out well for both Slack and Atlassian, there was considerable anger on Atlassian's community forums. One user asked: "What the hell are on-premise customers supposed to do?! We just implemented and invested in this app! We're building apps in-house for our own purposes. We have zero ability to use Cloud services of ANY type. You are offering ZERO alternatives."

Another user outlined his frustrations with what the deal means for migration: "We needed a chat platform. We did research and after a long time landed on hipchat. We had to pull teeth to get users to move to it. We transitioned bots and automations over to hipchat."


Google's new Cloud Services Platform could make hybrid cloud more accessible

Richard Gall
25 Jul 2018
3 min read
Hybrid cloud is becoming an increasing reality for many businesses, something the software world is only just starting to acknowledge. At this year's Google Cloud Next, Google made a clear play for the hybrid market: its new Cloud Services Platform combines a number of tools, including Kubernetes and Istio, to support hybrid cloud solutions.

In his speech at Google Cloud Next, Urs Hölzle, Senior VP of technical infrastructure, said that although cloud computing offers many advantages, it's "still missing something... a simple way to combine the cloud with your existing on-premise infrastructure or with other clouds." That's the thinking behind Cloud Services Platform, which brings together a whole host of tools to make managing a cloud potentially much easier than ever before.

What's inside Google's Cloud Services Platform

In a blog post, Hölzle details what's inside Cloud Services Platform:

Service mesh: availability of Istio 1.0 in open source, Managed Istio, and Apigee API Management for Istio
Hybrid computing: GKE On-Prem with multi-cluster management
Policy enforcement: GKE Policy Management, to take control of Kubernetes workloads
Ops tooling: Stackdriver Service Monitoring
Serverless computing: GKE Serverless add-on and Knative, an open source serverless framework
Developer tools: Cloud Build, a fully managed CI/CD platform

A diagram from the announcement illustrates how the various components of the Cloud Services Platform fit together:

[caption id="attachment_21065" align="aligncenter" width="960"] What's inside Google's Cloud Services Platform (via cloudplatform.googleblog.com)[/caption]

Why Kubernetes and Istio are at the center of the Cloud Services Platform

Hölzle explains the development of cloud in the context of containers. "The move to software containers," he says, "has helped some [businesses] in simplifying and speeding up how we package and deliver software." Kubernetes has been an integral part of this shift. And although Hölzle has a vested interest when he says that "today it's by far the most popular way to run and manage containers," he's ultimately right: Kubernetes is one of the fastest growing open source projects on the planet.

Read next: The key differences between Kubernetes and Docker Swarm

Hölzle then introduces Istio: "Istio extends Kubernetes into these higher level services and makes service to service communications secure and reliable in a way that's very easy on developers." Istio is due to hit its first stable release in the next couple of days.

Insofar as both Istio and Kubernetes make it possible to manage and monitor containers at scale, bringing them together in a single platform makes for a compelling proposition for engineers. The ability to bring in tools like Kubernetes and Istio might make hybrid cloud solutions a much more attractive proposition for business and technology leaders, and for those already convinced, it could make life even better. According to Chen Goldberg, Google's Director of Engineering, speaking to journalists at Google Cloud Next, Cloud Services Platform "allows you to modernize wherever you are and at your own pace."

Whether businesses buy into Google's vision remains to be seen, but it could well be a game-changer that threatens AWS dominance in the cloud world.

Read next: Go Cloud is Google's bid to establish Golang as the go-to language of cloud

Google Cloud Next: Fei-Fei Li reveals new AI tools for developers
Dispelling the myths of hybrid cloud