
Tech News - Cloud Computing


Google announces the largest overhaul of their Cloud Speech-to-Text API

Vijin Boricha
20 Apr 2018
2 min read
Last month Google announced Cloud Text-to-Speech, its speech synthesis API built on DeepMind's WaveNet models. Now, the company has announced the largest overhaul of Cloud Speech-to-Text (formerly known as the Cloud Speech API) since it was introduced in 2016. The Speech-to-Text API has been enhanced for business use cases, including phone-call and video transcription. The new update gives access to the latest research from Google's machine learning teams through a simple REST API, and it is now covered by a standard service level agreement (SLA) with 99.9% availability.

Here's a quick look at the latest updates to Google's Cloud Speech-to-Text API:

- New video and phone-call transcription models: Google has added models built for specific use cases, such as transcribing phone calls and transcribing audio extracted from video.
- Readable text with automatic punctuation: Google created a new LSTM neural network to improve automatic punctuation in long-form speech transcription. This model, currently in beta, can automatically suggest commas, question marks, and periods for your text.
- Use-case description with recognition metadata: Information from transcribed audio or video, tagged with descriptions such as 'voice commands to a Google Home assistant' or 'soccer sports TV shows', is aggregated across Cloud Speech-to-Text users to help prioritize what Google builds next.

To know more about this update in detail, visit Google's blog post.
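For readers who want to try the new options, here is a minimal Python sketch of calling the Speech-to-Text REST endpoint with the phone-call model, automatic punctuation, and recognition metadata described above. The v1p1beta1 path, field names, and the placeholder API key and audio URI are assumptions based on the beta API of that period, not details taken from the announcement.

```python
import requests

# Hypothetical placeholders: supply your own API key and audio location.
API_KEY = "YOUR_API_KEY"
AUDIO_URI = "gs://your-bucket/phone-call.wav"

# The v1p1beta1 release exposed the video/phone_call models and automatic
# punctuation described in the article (field names assumed from that beta API).
request_body = {
    "config": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 8000,
        "languageCode": "en-US",
        "model": "phone_call",                 # or "video"
        "enableAutomaticPunctuation": True,    # beta punctuation feature
        "metadata": {"interactionType": "PHONE_CALL"},
    },
    "audio": {"uri": AUDIO_URI},
}

resp = requests.post(
    f"https://speech.googleapis.com/v1p1beta1/speech:recognize?key={API_KEY}",
    json=request_body,
    timeout=60,
)
resp.raise_for_status()

# Print the top transcription alternative for each result.
for result in resp.json().get("results", []):
    print(result["alternatives"][0]["transcript"])
```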


Microsoft Cloud Services get GDPR Enhancements

Vijin Boricha
25 Apr 2018
2 min read
With the GDPR deadline looming closer every day, Microsoft has started applying the General Data Protection Regulation (GDPR) to its cloud services. Microsoft recently announced enhancements to help organizations using Azure and Office 365 services meet GDPR requirements. With these improvements, it aims to ensure that both Microsoft's services and the organizations using them will be GDPR-compliant by the law's enforcement date.

The Microsoft tools supporting GDPR compliance are as follows:

- Service Trust Portal, which provides GDPR information resources
- Security and Compliance Center in the Office 365 Admin Center
- Office 365 Advanced Data Governance for classifying data
- Azure Information Protection for tracking and revoking documents
- Compliance Manager for keeping track of regulatory compliance
- Azure Active Directory Terms of Use for obtaining informed user consent

Microsoft recently released a preview of a new Data Subject Access Request interface in the Security and Compliance Center and, via a new tab, in the Azure Portal. According to the Microsoft 365 team, this interface is also available in the Service Trust Portal. A Microsoft Tech Community post also says the portal will be getting a "Data Protection Impact Assessments" section in the coming weeks.

With the new Data Subject Access Request interface preview, organizations can now search for "relevant data across Office 365 locations", covering Exchange, SharePoint, OneDrive, Groups, and Microsoft Teams. As Microsoft explains, once the search completes, the data is exported for review before being transferred to the requestor.

According to Microsoft, the Data Subject Access Request capabilities will be out of preview before the GDPR deadline of May 25th. It also says that IT professionals will be able to execute DSRs (Data Subject Requests) against system-generated logs.

To know more in detail, you can visit Microsoft's blog post.


Fastly, an edge cloud platform provider, files for IPO

Bhagyashree R
22 Apr 2019
3 min read
Last week, Fastly Inc., a provider of an edge cloud platform, announced that it has filed for its proposed initial public offering (IPO) with the US Securities and Exchange Commission. Last July, in its last round of financing before a public offering, the company raised a $40 million investment. The book-running managers for the proposed offering are BofA Merrill Lynch, Citigroup, and Credit Suisse. William Blair, Raymond James, Baird, Oppenheimer & Co., Stifel, Craig-Hallum Capital Group, and D.A. Davidson & Co. are co-managers for the proposed offering.

Founded by Artur Bergman in 2011, Fastly is an American cloud computing services provider. Its edge cloud platform provides a content delivery network, internet security services, load balancing, and video and streaming services. The platform is designed from the ground up to be programmable and to support agile software development. By streaming log data, it gives developers real-time visibility and control, so they can instantly see the impact of new code in production, troubleshoot issues as they occur, and rapidly identify suspicious traffic. Fastly counts The New York Times, Reddit, GitHub, Stripe, Ticketmaster, and Pinterest among its customers.

In the preliminary prospectus, the company shared how it has grown over the years, the risks of investing in it, its plans for the future, and more. The company shows steady revenue growth: from $104.9 million in December 2017 to $144.6 million by the end of 2018. Its loss has also declined, from $32.5 million in December 2017 to $30.9 million in December 2018. Predicting its future market value, the prospectus says, “When incorporating these additional offerings, we estimate a total market opportunity of approximately $18.0 billion in 2019, based on expected growth from 2017, to $35.8 billion in 2022, growing with an expected CAGR of 25.6%.“

Fastly has not yet determined the number of shares to be offered or the price range for the proposed offering. Currently, the company's public filing has a placeholder amount of $100 million. However, given the amount of funding the company has received, TechCrunch predicts the figure is likely to get closer to $1 billion when it finally prices its shares.

Fastly has two classes of authorized common stock: Class A and Class B. The rights of the two classes of stockholders are identical, except with respect to voting and conversion. Each Class A share is entitled to one vote per share and each Class B share is entitled to 10 votes per share. Each Class B share is convertible into one share of Class A common stock. The Class A common stock will be listed on the New York Stock Exchange under the symbol “FSLY.”

To read more in detail, check out the IPO filing by Fastly.

- Fastly open sources Lucet, a native WebAssembly compiler and runtime
- Cloudflare raises $150M with Franklin Templeton leading the latest round of funding
- Dark Web Phishing Kits: Cheap, plentiful and ready to trick you
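As a quick sanity check on the market-sizing quote from the prospectus, the numbers are internally consistent: compounding $18.0 billion at 25.6% per year for three years lands close to the stated $35.8 billion. A small Python sketch of the arithmetic (the formula only, nothing taken from the filing itself):

```python
# Compound annual growth rate: future = present * (1 + rate) ** years
present = 18.0   # $ billions, 2019 estimate quoted in the prospectus
rate = 0.256     # 25.6% CAGR quoted in the prospectus
years = 3        # 2019 -> 2022

future = present * (1 + rate) ** years
print(f"Projected 2022 market: ${future:.1f}B")  # ~$35.7B, close to the stated $35.8B
```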


Introducing Platform9 Managed Kubernetes Service

Amrata Joshi
04 Feb 2019
3 min read
Today, the team at Platform9, a company known for its SaaS-managed hybrid cloud, introduced a fully managed, enterprise-grade Kubernetes service that works on VMware with a full SLA guarantee. It enables enterprises to deploy and run Kubernetes easily, without management overhead or advanced Kubernetes expertise. It features enterprise-grade capabilities including multi-cluster operations, zero-touch upgrades, high availability, monitoring, and more, all handled automatically and backed by an SLA.

Platform9 Managed Kubernetes (PMK) is part of Platform9's hybrid cloud solution, which helps organizations centrally manage VMs, containers, and serverless functions in any environment. Enterprises can support Kubernetes at scale alongside their traditional VMs, legacy applications, and serverless functions.

Features of Platform9 Managed Kubernetes

Self-service, cloud experience
IT operations and VMware administrators can now give developers a simple, self-service provisioning and automated management experience. Multiple Kubernetes clusters can be deployed at the click of a button and operated under the strictest SLAs.

Run Kubernetes anywhere
PMK allows organizations to run Kubernetes instantly, anywhere. It also delivers centralized visibility and management across all Kubernetes environments, whether on-premises, in the public cloud, or at the edge. This helps organizations curb shadow IT and VM/container sprawl, ensure compliance, improve utilization, and reduce costs across all infrastructure.

Speed
PMK lets enterprises get Kubernetes running on VMware in less than an hour and eliminates the operational complexity of Kubernetes at scale. It helps enterprises modernize their VMware environments without any hardware or configuration changes.

Open ecosystem
By delivering open source Kubernetes on VMware without code forks, enterprises can benefit from the open source community and the full range of Kubernetes-related services and applications, while ensuring portability across environments.

Sirish Raghuram, co-founder and CEO of Platform9, said, “Kubernetes is the #1 enabler for cloud-native applications and is critical to the competitive advantage for software-driven organizations today. VMware was never designed to run containerized workloads, and integrated offerings in the market today are extremely clunky, hard to implement and even harder to manage. We're proud to take the pain out of Kubernetes on VMware, delivering a pure open source-based, Kubernetes-as-a-Service solution that is fully managed, just works out of the box, and with an SLA guarantee in your own environment.”

To learn more about delivering Kubernetes on VMware, check out the demo video.

- Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
- CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure
- GitLab 11.7 releases with multi-level child epics, API integration with Kubernetes, search filter box and more


Amazon Neptune, AWS’ cloud graph database, is now generally available

Savia Lobo
31 May 2018
2 min read
Last year, Amazon Web Services (AWS) announced the launch of its fast, reliable, and fully managed cloud graph database, Amazon Neptune, at re:Invent 2017. Recently, AWS announced that Neptune is now generally available for everyone to explore.

Graph databases store the relationships between connected data as graphs. This enables applications to access the data in a single operation, rather than through a bunch of individual queries. In the same way, Neptune makes it easy for developers to build and run applications that work with highly connected datasets. Because it is a fully managed AWS graph database service, developers also get high scalability, security, durability, and availability.

Along with general availability, Neptune ships a large number of performance enhancements and updates, including:

- AWS CloudFormation support
- AWS Command Line Interface (CLI)/SDK support
- An update to Apache TinkerPop 3.3.2
- Support for IAM roles with bulk loading from Amazon Simple Storage Service (S3)

Some of the benefits of Amazon Neptune include:

- Neptune's query processing engine is highly optimized for two of the leading graph models, Property Graph and the W3C's RDF, along with their query languages, Apache TinkerPop Gremlin and SPARQL, giving customers the flexibility to choose the right approach for their specific graph use case.
- Neptune storage scales automatically, without downtime or performance degradation, as customer data grows.
- It allows developers to design sophisticated, interactive graph applications that can query billions of relationships with millisecond latency.
- There are no upfront costs, licenses, or commitments required; customers pay only for the Neptune resources they use.

To know more interesting facts about Amazon Neptune in detail, visit its official blog.

- 2018 is the year of graph databases. Here's why.
- From Graph Database to Graph Company: Neo4j's Native Graph Platform addresses evolving needs of customers
- When, why and how to use Graph analytics for your big data
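Since Gremlin support is one of the highlights above, here is a minimal sketch (not from the AWS announcement) of querying a Neptune cluster with the gremlinpython driver. The endpoint hostname is a hypothetical placeholder, and IAM authentication is not shown.

```python
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

# Hypothetical Neptune endpoint; replace with your cluster's writer endpoint.
NEPTUNE_ENDPOINT = "wss://your-neptune-cluster.us-east-1.neptune.amazonaws.com:8182/gremlin"

connection = DriverRemoteConnection(NEPTUNE_ENDPOINT, "g")
g = traversal().withRemote(connection)

# Add two vertices and a relationship, then query the vertices back.
g.addV("person").property("name", "alice").as_("a") \
 .addV("person").property("name", "bob") \
 .addE("knows").from_("a").iterate()

names = g.V().hasLabel("person").values("name").toList()
print(names)  # e.g. ['alice', 'bob']

connection.close()
```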


Google announces the Beta version of Cloud Source Repositories

Melisha Dsouza
21 Sep 2018
3 min read
Yesterday, Google launched the beta version of its Cloud Source Repositories. Claiming to provide users with a better search experience, Google Cloud Source Repositories is a Git-based source code repository built on Google Cloud. It introduces a powerful code search feature, which uses document indexing and retrieval methods similar to Google Search.

Cloud Source Repositories could mark a major comeback for Google after Google Code began shutting down in 2015. It could also be a very strategic move, as many coders have been looking for an alternative to GitHub after its acquisition by Microsoft.

How does Google code search work?

Code search in Cloud Source Repositories optimizes the indexing, algorithms, and result types for searching code. When a query is submitted, it is sent to a root machine and sharded across hundreds of secondary machines. The machines look for matches by file names, classes, functions, and other symbols, and match the context and namespace of the symbols. A single query can search across thousands of different repositories.

Cloud Source Repositories also has a semantic understanding of the code. For Java, JavaScript, Go, C++, Python, TypeScript, and Proto files, the tool will also return information on whether the match is a class, method, enum, or field.

Solutions to common code search challenges

#1 Executing searches across all the code at one's company
If a company has repositories storing different versions of the code, searching across all of it is slow and time-consuming. With Cloud Source Repositories, the default branches of all repositories are always indexed and up to date, so searching across all the code is faster.

#2 Searching for code that performs a common operation
Cloud Source Repositories enables quick searches, so users can save time by discovering and reusing an existing solution while avoiding bugs in their own code.

#3 A developer cannot remember the right way to use a common code component
Developers can enter a query and search across all of their company's code for examples of how a common piece of code has been used successfully by other developers.

#4 Issues with a production application
If a developer encounters a specific error message in the server logs that reads 'User ID 123 not found in PaymentDatabase', they can perform a regular expression search for 'User ID .* not found in PaymentDatabase' and instantly find the location in the code where this error was triggered.

All repositories that are either mirrored or added to Cloud Source Repositories can be searched in a single query. Cloud Source Repositories has a limited free tier that supports projects up to 50GB with a maximum of 5 users.

You can read more about Cloud Source Repositories in the official documentation.

- Google announces Flutter Release Preview 2 with extended support for Cupertino themed controls and more!
- Google to allegedly launch a new Smart home device
- Google's prototype Chinese search engine 'Dragonfly' reportedly links searches to phone numbers
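To make the regular-expression example in #4 concrete, the following tiny Python sketch (purely illustrative, not part of Cloud Source Repositories) shows the kind of pattern match such a search performs against candidate source lines:

```python
import re

# The pattern from the article: match the error message with any user ID.
pattern = re.compile(r"User ID .* not found in PaymentDatabase")

# Hypothetical lines of source code that a repository-wide search might scan.
candidate_lines = [
    'raise LookupError(f"User ID {user_id} not found in PaymentDatabase")',
    'log.info("Payment recorded for user %s", user_id)',
]

for line in candidate_lines:
    if pattern.search(line):
        print("match:", line)  # only the first line matches
```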

Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!

Melisha Dsouza
30 Nov 2018
4 min read
The second-to-last day of Amazon re:Invent 2018 ended on a high note. AWS announced two new features, Lambda Layers and the Lambda Runtime API, that claim to “make serverless development even easier”. In addition, AWS announced that Application Load Balancers can now invoke Lambda functions to serve HTTP(S) requests, and that Lambda now supports the Ruby language.

#1 Lambda Layers
Lambda Layers allow developers to centrally manage code and data that is shared across multiple functions. Instead of packaging and deploying this shared code together with every function that uses it, developers can put common components in a ZIP file and upload it as a Lambda Layer. Layers can be used within an AWS account, shared between accounts, or shared publicly with the developer community.

AWS is also publishing a public layer that includes NumPy and SciPy. This layer is prebuilt and optimized to help users get data processing and machine learning applications up and running quickly. Developers can include additional files or data for their functions, including binaries such as FFmpeg or ImageMagick, or dependencies such as NumPy for Python. These layers are added to the function's ZIP file when published. Layers can also be versioned to manage updates, and each version is immutable. When a version is deleted or its permissions are revoked, a developer can no longer create new functions with it; however, functions that already use it will continue to work.

Lambda Layers help keep function code smaller and more focused on what the application has to do. Deployments are faster because less code must be packaged and uploaded, and code dependencies can be reused.

#2 Lambda Runtime API
This is a simple interface for using any programming language, or a specific language version, to develop functions. Runtimes can be shared as layers, which lets developers author Lambda functions in a programming language of their choice. Developers using the Runtime API have to bundle it with their application artifact or as a Lambda layer that the application uses. When creating or updating a function, users can select a custom runtime. The function must include (in its code or in a layer) an executable file called bootstrap, which is responsible for the communication between the code and the Lambda environment.

As of now, AWS has made the C++ and Rust runtimes available as open source. Other open source runtimes that will likely be available soon include:

- Erlang (Alert Logic)
- Elixir (Alert Logic)
- Cobol (Blu Age)
- Node.js (NodeSource N|Solid)
- PHP (Stackery)

The Runtime API is also how AWS will support new languages in Lambda going forward. A notable feature of the C++ runtime is that it offers the simplicity and expressiveness of interpreted languages while maintaining good performance and a low memory footprint. The Rust runtime makes it easy to write highly performant Lambda functions in Rust.

#3 Application Load Balancers can invoke Lambda functions to serve HTTP(S) requests
This new functionality enables users to access serverless applications from any HTTP client, including web browsers. Users can also route requests to different Lambda functions based on the requested content. An Application Load Balancer can be used as a common HTTP endpoint to simplify operations and monitoring for applications that combine servers and serverless computing.

#4 Ruby is now a supported language for AWS Lambda
Developers can write Lambda functions as idiomatic Ruby code and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default, making it easy and quick for functions to interact directly with AWS resources. Ruby on Lambda can be used either through the AWS Management Console or the AWS SAM CLI. Developers benefit from the reduced operational overhead, scalability, availability, and pay-per-use pricing of Lambda.

Head over to What's new with AWS to stay updated on upcoming AWS announcements.

- Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
- Amazon introduces Firecracker: Lightweight Virtualization for Running Multi-Tenant Container Workloads
- AWS introduces 'AWS DataSync' for automated, simplified, and accelerated data transfer
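As a concrete illustration of the Lambda Layers workflow from #1 above, here is a minimal Python sketch that uses boto3 to publish a ZIP of shared dependencies as a layer and attach it to an existing function. The bucket, key, layer, and function names are hypothetical placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Publish shared dependencies (already zipped and uploaded to S3) as a layer.
layer = lambda_client.publish_layer_version(
    LayerName="shared-numpy-scipy",                  # hypothetical layer name
    Content={"S3Bucket": "my-artifacts", "S3Key": "layers/numpy-scipy.zip"},
    CompatibleRuntimes=["python3.7"],
    Description="Common scientific dependencies shared across functions",
)

# Attach the new layer version to an existing function.
lambda_client.update_function_configuration(
    FunctionName="my-data-processor",                # hypothetical function name
    Layers=[layer["LayerVersionArn"]],
)
print("Attached layer:", layer["LayerVersionArn"])
```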


StackRox Kubernetes Security Platform 3.0 releases with advanced configuration and vulnerability management capabilities

Bhagyashree R
13 Nov 2019
3 min read
Today, StackRox, a Kubernetes-native container security platform provider, announced StackRox Kubernetes Security Platform 3.0. This release includes industry-first features for configuration and vulnerability management that enable businesses to achieve stronger protection of cloud-native, containerized applications.

In a press release, Wei Lien Dang, StackRox's vice president of product and co-founder, said, “When it comes to Kubernetes security, new challenges related to vulnerabilities and misconfigurations continue to emerge.” He added, “DevOps and Security teams need solutions that quickly and easily solve these issues. StackRox 3.0 is the first container security platform with the capabilities orgs need to effectively deal with Kubernetes configurations and vulnerabilities, so they can reduce risk to what matters most – their applications and their customer's data.”

What's new in StackRox Kubernetes Security Platform 3.0

Features for configuration management

- Interactive dashboards: Users can view risk-prioritized misconfigurations, easily drill down to critical information about a misconfiguration, and determine the context required for effective remediation.
- Kubernetes role-based access control (RBAC) assessment: StackRox continuously monitors permissions for users and service accounts to help mitigate excessive privileges being granted.
- Kubernetes secrets access monitoring: The platform discovers secrets in Kubernetes and monitors which deployments can use them, to limit unnecessary access.
- Kubernetes-specific policy enforcement: StackRox identifies configurations in Kubernetes related to network exposure, privileged containers, root processes, and other factors to determine policy violations.

Advanced vulnerability management capabilities

- Interactive dashboards: StackRox Kubernetes Security Platform 3.0 has interactive views that provide risk-prioritized snapshots across your environment, highlighting vulnerabilities in both images and Kubernetes.
- Discovery of Kubernetes vulnerabilities: The platform gives visibility into critical vulnerabilities in the Kubernetes platform itself, including those related to the Kubernetes API server disclosed by the Kubernetes product security team.
- Language-specific vulnerabilities: StackRox scans container images for additional language-dependent vulnerabilities, providing greater coverage across containerized applications.

Along with the features above, StackRox Kubernetes Security Platform 3.0 adds support for various ecosystem platforms. These include CRI-O (the Open Container Initiative (OCI)-compliant implementation of the Kubernetes Container Runtime Interface), Google Anthos, a Microsoft Teams integration, and more.

These are a few of the latest capabilities shipped in StackRox Kubernetes Security Platform 3.0. To know more, you can check out live demos and Q&A by the StackRox team at KubeCon 2019, happening November 18-21 in San Diego, California, which brings together adopters and technologists from leading open source and cloud-native communities.

- Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
- StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security
- Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices
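The RBAC assessment feature above is, at its core, about spotting overly broad permissions. As a rough illustration of the idea (not StackRox's implementation), this Python sketch uses the official Kubernetes client to list every subject bound to the cluster-admin role:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes kubectl access to the cluster).
config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# Flag every subject bound to cluster-admin, the broadest built-in role.
for binding in rbac.list_cluster_role_binding().items:
    if binding.role_ref.name == "cluster-admin":
        for subject in binding.subjects or []:
            print(f"{subject.kind} '{subject.name}' has cluster-admin "
                  f"via binding '{binding.metadata.name}'")
```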


Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!

Melisha Dsouza
25 Feb 2019
4 min read
Mobile World Congress 2019, under way in Barcelona, has an interesting line-up of announcements, keynote speakers, summits, seminars, and more. It is the largest mobile event in the world, bringing together the latest innovations and leading-edge technology from more than two thousand leading companies. The theme of this year's conference is 'Intelligent Connectivity', which combines flexible, high-speed 5G networks, the Internet of Things (IoT), artificial intelligence (AI), and big data. Microsoft unveiled a host of new products along the same theme on the first day of the conference. Let's have a look at some of them.

#1 Microsoft HoloLens 2 AR headset announced
Microsoft unveiled the HoloLens 2 AR device at Mobile World Congress (MWC). This $3,500 AR device is aimed at businesses, not the average person, yet. It is designed primarily for situations where field workers need to work hands-free, such as manufacturing workers, industrial designers, and those in the military. The device is a clear upgrade from Microsoft's very first HoloLens, which recognized basic tap and click gestures. The new headset recognizes 21 points of articulation per hand and supports improved, more realistic hand motions. The device is less bulky, and its eye tracking can measure eye movement and use it to interact with virtual objects. It is built to be a cloud- and edge-connected device. The HoloLens 2 field of view more than doubles the area covered by HoloLens 1.

Microsoft said it plans to announce a follow-up to HoloLens 2 in the next year or two. According to Microsoft, that device will be even more comfortable and easier to use, and it will do more than the HoloLens 2. HoloLens 2 is available for preorder and will ship later this year. The device has already found itself in the midst of a controversy after the US Army invested $480 million in more than 100,000 headsets, a contract that has stirred dissent among Microsoft workers.

#2 Azure-powered Kinect camera for the enterprise
The Azure-powered Kinect camera is an “intelligent edge device that doesn't just see and hear but understands the people, the environment, the objects, and their actions,” according to Azure VP Julia White. This AI-powered smart enterprise camera leverages Microsoft's 3D imaging technology and could serve as a companion hardware piece for HoloLens in the enterprise. The system has a 1-megapixel depth camera, a 12-megapixel camera, and a seven-microphone array on board to help it work "with a range of compute types, and leverage Microsoft's Azure solutions to collect that data." The system, priced at $399, is available for pre-order.

#3 Azure Spatial Anchors
Azure Spatial Anchors launched as part of the Azure mixed reality services. These services will help developers and businesses build cross-platform, contextual, enterprise-grade mixed reality applications. According to the Azure blog, these mixed reality apps can map, designate, and recall precise points of interest that are accessible across HoloLens, iOS, and Android devices. Developers can integrate their solutions with IoT services and artificial intelligence, and protect their sensitive data using security from Azure. Users can easily infuse artificial intelligence (AI) and integrate IoT services to visualize data from IoT sensors as holograms.

Spatial Anchors will allow users to map their space and connect points of interest “to create wayfinding experiences, and place shareable, location-based holograms without any need for environmental setup or QR codes”. Users will also be able to manage identity, storage, security, and analytics with pre-built cloud integrations to accelerate their mixed reality projects.

#4 Unreal Engine 4 support for Microsoft HoloLens 2
During Mobile World Congress, Epic Games founder and CEO Tim Sweeney announced that support for Microsoft HoloLens 2 will be coming to Unreal Engine 4 in May 2019. Unreal Engine will fully support HoloLens 2 with streaming and native platform integration. Sweeney says that “AR is the platform of the future for work and entertainment, and Epic will continue to champion all efforts to advance open platforms for the hardware and software that will power our daily lives.” Unreal Engine 4 support for Microsoft HoloLens 2 will allow for "photorealistic" 3D in AR apps.

Head over to Microsoft's official blog for an in-depth look at all the products announced.

- Unreal Engine 4.22 update: support added for Microsoft's DirectX Raytracing (DXR)
- Microsoft acquires Citus Data with plans to create a 'Best Postgres Experience'
- Microsoft joins the OpenChain Project to help define standards for open source software compliance


Epicor partners with Microsoft Azure to adopt Cloud ERP

Savia Lobo
29 May 2018
2 min read
Epicor Software Corporation recently announced a partnership with Microsoft Azure to accelerate its Cloud ERP adoption. The partnership aims to deliver Epicor's enterprise solutions on the Microsoft Azure platform. The company plans to deploy its Epicor Prophet 21 enterprise resource planning (ERP) suite on Microsoft Azure. This will enable customers to grow and innovate faster as they look to digitally transform their businesses with the reliable, secure, and scalable features of Microsoft Azure. With the Epicor and Microsoft collaboration, customers can now access the power of Epicor ERP and Prophet 21 running on Microsoft Azure.

With Microsoft as a partner, Epicor:

- Leverages a range of technologies such as the Internet of Things (IoT), artificial intelligence (AI), and machine learning (ML) to deliver ready-to-use, accurate solutions for mid-market manufacturers and distributors.
- Plans to explore Microsoft technologies for advanced search, speech-to-text, and other use cases to deliver modern human/machine interfaces that improve productivity for customers.

Steve Murphy, CEO of Epicor, said, “Microsoft's focus on the 'Intelligent Cloud' and 'Intelligent Edge' complements our customer-centric focus.” He further stated that after looking at several cloud options, they felt Microsoft Azure offers the best foundation for building and deploying enterprise business applications that let customers' businesses adapt and grow. As most prospects these days ask about Cloud ERP, Epicor says that by deploying such a model it will be ready to offer customers the ability to move to the cloud with the confidence that Microsoft Azure provides.

Read more about this in detail on Epicor's official blog.

- Rackspace now supports Kubernetes-as-a-Service
- How to secure an Azure Virtual Network
- What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018

Kubernetes 1.13 released with new features and fixes to a major security flaw

Prasad Ramesh
04 Dec 2018
3 min read
A privilege escalation flaw in Kubernetes was discussed on GitHub last week, and Red Hat has since released patches for it. Yesterday, Kubernetes 1.13 was also released.

The security flaw
A recent GitHub issue outlines the problem. Tracked as CVE-2018-1002105, the flaw allowed unauthorized users to craft special requests that establish a connection to a backend server through the Kubernetes API, and then send arbitrary requests over that same connection directly to the backend. Red Hat, which IBM has agreed to acquire, released patches for this vulnerability yesterday. All Kubernetes-based products are affected. The flaw has now been patched, and as Red Hat classifies the impact as critical, a version upgrade is strongly recommended if you're running an affected product. You can find more details on the Red Hat website.

Let's now look at the new features in Kubernetes 1.13 beyond the security patch.

kubeadm is GA in Kubernetes 1.13
kubeadm is an essential tool for managing the lifecycle of a cluster, from creation to configuration to upgrade, and it is now officially GA. The tool handles bootstrapping of production clusters on current hardware and configuration of core Kubernetes components. With the GA release, advanced features are available around pluggability and configurability. kubeadm aims to be a toolbox for both admins and automated, higher-level systems.

Container Storage Interface (CSI) is also GA
The Container Storage Interface (CSI) is generally available in Kubernetes 1.13. It was introduced as alpha in Kubernetes 1.9 and beta in Kubernetes 1.10. CSI makes the Kubernetes volume layer truly extensible: it allows third-party storage providers to write plugins that interoperate with Kubernetes without having to modify the core code.

CoreDNS replaces kube-dns as the default DNS server
CoreDNS is replacing kube-dns as the default DNS server for Kubernetes. CoreDNS is a general-purpose, authoritative DNS server that provides an extensible, backwards-compatible integration with Kubernetes. CoreDNS is a single executable and a single process, supports flexible use cases through custom DNS entries, and is written in Go, making it memory-safe. kube-dns will be supported for at least one more release.

Beyond these, there are other feature updates, such as support for third-party monitoring and more features graduating to stable and beta. For more details on the Kubernetes release, visit the Kubernetes website.

- Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
- Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
- Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
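Because the practical response to CVE-2018-1002105 is to confirm you are on a patched release, here is a small sketch (an illustration, not an official check) that uses the Python Kubernetes client to print the API server version, which you can then compare against the patched versions listed in the advisory:

```python
from kubernetes import client, config

# Assumes a working kubeconfig for the cluster you want to inspect.
config.load_kube_config()

# Ask the API server for its build information.
version = client.VersionApi().get_code()
print("API server reports:", version.git_version)  # e.g. 'v1.12.3'

# Compare this against the patched releases listed in the CVE-2018-1002105
# advisory (and against 1.13.x) before deciding whether to upgrade.
```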


Amazon announces AWS Lambda Support for PowerShell Core 6.0

Melisha Dsouza
12 Sep 2018
2 min read
In a post yesterday, the AWS Developer team announced that AWS Lambda now supports PowerShell Core 6.0. Users can execute PowerShell scripts and functions in response to Lambda events.

Why should developers look forward to this upgrade?
The AWS Tools for PowerShell allow developers and administrators to manage their AWS services and resources from the PowerShell scripting environment. Users can manage their AWS resources with the same PowerShell tools they use to manage Windows, Linux, and macOS environments. These tools let them perform many of the same actions available in the AWS SDK for .NET, and they can also be used from the command line for quick tasks, for example, controlling Amazon EC2 instances.

The PowerShell scripting language can compose scripts to automate AWS service management. With direct access to AWS services from PowerShell, management scripts can take advantage of everything the AWS cloud has to offer. The AWS Tools for Windows PowerShell and AWS Tools for PowerShell Core are flexible in handling credentials, including support for the AWS Identity and Access Management (IAM) infrastructure.

Set up the development environment
To use the new support, you first need to set up the appropriate development environment. This takes a few simple steps:

1. Set up the correct version of PowerShell.
2. Ensure Visual Studio Code is configured for PowerShell Core 6.0.
3. PowerShell Core is built on top of .NET Core, so install the .NET Core 2.1 SDK.
4. Head over to the PowerShell Gallery and install the AWSLambdaPSCore module.

The module provides cmdlets for authoring and publishing PowerShell-based Lambda functions (the AWS blog post lists them in full). You can head over to the AWS blog for detailed steps on how to use the Lambda support for PowerShell. The post walks readers through a simple example that executes a PowerShell script to ensure the Remote Desktop (RDP) port is not left open on any of the EC2 security groups.

- How to Run Code in the Cloud with AWS Lambda
- Amazon hits $1 trillion market value milestone yesterday, joining Apple Inc
- Getting started with Amazon Machine Learning workflow [Tutorial]


HashiCorp announces Consul 1.2 to ease Service segmentation with the Connect feature

Savia Lobo
28 Jun 2018
3 min read
HashiCorp recently announced the release of a new version of its distributed service mesh, Consul 1.2. This release introduces a new feature known as Connect, which automatically turns any existing Consul cluster into a service mesh solution. It works on any platform: physical machines, cloud, containers, schedulers, and more.

HashiCorp is a San Francisco-based organization that helps businesses resolve development, operations, and security challenges in infrastructure so they can focus on other business-critical tasks. Consul is one such HashiCorp product: a distributed service mesh for connecting, securing, and configuring services across any runtime platform and any public or private cloud. The Connect feature in Consul 1.2 enables secure service-to-service communication with automatic TLS encryption and identity-based authorization. HashiCorp has stated that Connect is free and open source.

New functionality in Consul 1.2

Encrypted traffic in transit
All connections established with Connect use mutual TLS. This ensures traffic is encrypted in transit and allows services to be safely deployed in low-trust environments.

Connection authorization
Connect allows or denies service communication through a service access graph built from intentions. Connect uses the logical name of the service, unlike a firewall, which uses IP addresses. This means rules are scale-independent: it doesn't matter whether there is one web server or 100. Intentions can be configured using the UI, CLI, API, or HashiCorp Terraform.

Proxy sidecars
Applications can use a lightweight proxy sidecar process to automatically establish inbound and outbound TLS connections. With this, existing applications can work with Connect without any modification. Consul ships with a built-in proxy that doesn't require external dependencies, and also supports third-party proxies such as Envoy.

Native integration
Performance-sensitive applications can natively integrate with the Consul Connect APIs to establish and accept connections without a proxy, for optimal performance and security.

Certificate management
Consul creates and distributes certificates using a certificate authority (CA) provider. Consul has a built-in CA system that requires no external dependencies. This CA system integrates with HashiCorp Vault and can also be extended to support any other PKI (Public Key Infrastructure) system.

Network and cloud independent
Connect uses standard TLS over TCP/IP, which allows it to work on any network configuration, as long as the IP advertised by the destination service is reachable by the underlying operating system. Services can also communicate cross-cloud without complex overlays.

To know more about these functionalities in detail, visit the official HashiCorp Consul 1.2 blog post.

- SDLC puts process at the center of software engineering
- Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner
- What is a multi layered software architecture?
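To make intentions concrete, the sketch below creates a deny rule from a 'web' service to a 'db' service through Consul's HTTP API. The agent address and service names are placeholders, and the endpoint and field names reflect the 1.2-era intentions API as the author understands it, not code taken from the announcement.

```python
import requests

# Local Consul agent HTTP API (default address; adjust for your environment).
CONSUL_ADDR = "http://127.0.0.1:8500"

# Deny Connect traffic from the 'web' service to the 'db' service.
# Field names follow the Consul 1.2-era intentions API; verify against your version.
intention = {
    "SourceName": "web",
    "DestinationName": "db",
    "Action": "deny",
    "Description": "web should never talk to db directly",
}

resp = requests.post(f"{CONSUL_ADDR}/v1/connect/intentions", json=intention, timeout=10)
resp.raise_for_status()
print("Created intention:", resp.json())  # response includes the new intention's ID
```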

Hortonworks partners with Google Cloud to enhance their Big Data strategy

Gebin George
22 Jun 2018
2 min read
Hortonworks, a leader in global data management solutions, has partnered with Google Cloud to enhance the Hortonworks Data Platform (HDP) and Hortonworks DataFlow (HDF). The partnership promises to deliver next-generation data analytics for hybrid and multi-cloud deployments, enabling customers to leverage new innovations from the open source community via HDP and HDF on GCP for faster business innovation.

HDP's integration with Google Cloud brings the following features:

- Flexibility for ephemeral workloads: On-demand analytical workloads can be spun up within minutes, with no added cost and unlimited elastic scale.
- Faster analytics: Take advantage of Apache Hive and Apache Spark for interactive query, machine learning, and analytics.
- Automated cloud provisioning: Simplifies the deployment of HDP and HDF on GCP, making it easier to configure and secure workloads and to make optimal use of cloud resources.

In addition, HDF has gone through the following enhancements:

- Hybrid data architecture: Smooth and secure flow of data from any source, from on-premises to cloud.
- Real-time streaming analytics: Build streaming applications with ease, capturing real-time insights without writing a single line of code.

With the combination of HDP, HDF, and the Hortonworks DataPlane Service, Hortonworks can uniquely deliver consistent metadata, security, and data governance across hybrid-cloud and multi-cloud architectures.

Arun Murthy, co-founder and chief product officer at Hortonworks, said, “Partnering with Google Cloud lets our joint customers take advantage of the scalability, flexibility and agility of the cloud when running analytic and IoT workloads at scale with HDP and HDF. Together with Google Cloud, we offer enterprises an easy path to adopt cloud and, ultimately, a modern data architecture.”

Similarly, Google Cloud's product management director, Sudhir Hasbe, said, “Enterprises want to be able to get smarter about both their business and their customers through advanced analytics and machine learning. Our partnership with Hortonworks will give customers the ability to quickly run data analytics, machine learning and streaming analytics workloads in GCP while enabling a bridge to hybrid or cloud-native data architectures.”

Refer to the Hortonworks blog and the Google Cloud blog for more information on the services and enhancements.

- Google cloud collaborates with Unity 3D; a connected gaming experience is here
- How to Run Hadoop on Google Cloud – Part 1
- AT&T combines with Google cloud to deliver cloud networking at scale
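The 'faster analytics' bullet above refers to running interactive Hive and Spark queries against data in HDP. Here is a generic PySpark sketch of that pattern; the table and column names are hypothetical placeholders, and nothing in it is specific to the Hortonworks/Google Cloud integration.

```python
from pyspark.sql import SparkSession

# Start a Spark session with Hive support so tables registered in the
# cluster's Hive metastore are queryable directly from Spark SQL.
spark = (
    SparkSession.builder
    .appName("hdp-interactive-query")
    .enableHiveSupport()
    .getOrCreate()
)

# Hypothetical Hive table; replace with a table from your own metastore.
top_devices = spark.sql("""
    SELECT device_id, SUM(bytes_sent) AS total_bytes
    FROM telemetry.events
    GROUP BY device_id
    ORDER BY total_bytes DESC
    LIMIT 10
""")

top_devices.show()
spark.stop()
```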


The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project

Melisha Dsouza
13 Nov 2018
3 min read
At Ceph Day Berlin yesterday (November 12), the Linux Foundation announced the launch of the Ceph Foundation. A total of 31 organizations have come together to launch the foundation, including Arm, Intel, Harvard, and many more. The foundation aims to bring industry members together to support the Ceph open source community.

What is Ceph?
Ceph is an open source distributed storage technology that provides storage services for many of the world's largest container and OpenStack deployments. The range of organizations using Ceph is vast. They include financial institutions like Bloomberg and Fidelity, cloud service providers like Rackspace and Linode, car manufacturers like BMW, and software firms like SAP and Salesforce.

The main aim of the Ceph Foundation
The main focus of the foundation is to raise money through annual membership fees from industry members. The combined pool of funds will then be spent in support of the Ceph community. The team has already raised around half a million dollars for its first year, which will be used to support the Ceph project infrastructure, cloud infrastructure services, internships, and community events.

The new foundation will provide a forum for community members and industry stakeholders to meet and discuss project status, development and promotional activities, community events, and strategic direction. The Ceph Foundation replaces the Ceph Advisory Board formed back in 2015. According to a Linux Foundation statement, the Ceph Foundation will “organize and distribute financial contributions in a coordinated, vendor-neutral fashion for immediate community benefit”.

Ceph has ambitious plans for new initiatives once the foundation is fully up and running. Some of these include:

- Expansion of and improvements to the hardware lab used to develop and test Ceph
- An events team to help plan various programs and targeted regional or local events
- Investment in strategic integrations with other projects and ecosystems
- Programs around interoperability between Ceph-based products and services
- Internships, training materials, and much more

The Ceph Foundation will provide an open, collaborative, and neutral home for project stakeholders to coordinate their development and community investments in the Ceph ecosystem. You can head over to the foundation's blog to know more about this news.

- Facebook's GraphQL moved to a new GraphQL Foundation, backed by The Linux Foundation
- NIPS Foundation decides against name change as poll finds it an unpopular superficial move; instead increases 'focus on diversity and inclusivity initiatives'
- Node.js and JS Foundation announce intent to merge; developers have mixed feelings