
Tech News - Cloud Computing

175 Articles

Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform

Savia Lobo
15 May 2018
2 min read
Google recently announced the Google Compute Engine Plugin for Jenkins, which helps provision, configure, and scale Jenkins build environments on Google Cloud Platform (GCP).

Jenkins is one of the most popular tools for Continuous Integration (CI), a standard practice in many software organizations. CI automatically detects changes committed to a software repository and runs them through unit, integration, and functional tests to finally create an artifact (a JAR, Docker image, or binary). Jenkins lets you define a build and test process, then run it continuously against the latest software changes. However, as a continuous integration practice scales up, builds may need to run across fleets of machines rather than on a single server.

With the Google Compute Engine Plugin, DevOps teams can intuitively manage instance templates and launch build instances that automatically register themselves with Jenkins. The plugin automatically deletes unused instances once work in the build system slows down, so you only pay for the instances you need. You can also configure the plugin to create build instances as Preemptible VMs, which can save up to 80% on the per-second pricing of builds, and you can attach accelerators like GPUs and Local SSDs to instances to run builds faster.

Build instances can be configured as you see fit, including their networking. For instance:

- Disable external IPs so that worker VMs are not publicly accessible
- Use Shared VPC networks for greater isolation in your GCP projects
- Apply custom network tags for improved placement in firewall rules

The Compute Engine Plugin can also reduce security risks in CI, as it uses the latest and most secure version of the Jenkins Java Network Launch Protocol (JNLP) remoting protocol.
When running Jenkins on-premises, you can create an ephemeral build farm in Compute Engine while keeping the Jenkins master and other necessary build dependencies behind your firewall. Read more about the Compute Engine Plugin in detail on Google's blog.
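The networking and pricing options above can be expressed in a Compute Engine instance configuration. Here is a minimal, hypothetical sketch of the request body a build-agent template might produce (field names follow the GCE `instances.insert` API; the zone, image, and tag values are placeholders, not taken from the plugin itself):

```python
# Sketch of a Compute Engine instance config matching the options the
# article describes. Zone, image, and tag values are placeholders.
def build_agent_config(name, machine_type="n1-standard-4"):
    return {
        "name": name,
        "machineType": f"zones/us-central1-a/machineTypes/{machine_type}",
        # Preemptible VMs: up to 80% cheaper, but may be reclaimed by GCP.
        "scheduling": {"preemptible": True},
        # No "accessConfigs" entry means no external IP, so the build
        # agent is not publicly reachable.
        "networkInterfaces": [{"network": "global/networks/default"}],
        # Custom network tags let firewall rules target build agents.
        "tags": {"items": ["jenkins-agent"]},
        "disks": [{
            "boot": True,
            "initializeParams": {
                "sourceImage": "projects/debian-cloud/global/images/family/debian-11"
            },
        }],
    }

config = build_agent_config("jenkins-agent-1")
print(config["scheduling"])
```

Omitting `accessConfigs` from a network interface is what leaves the VM without an external IP, and the `tags` items are what firewall rules can match on.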

Introducing Azure Sphere - A secure way of running your Internet of Things devices

Gebin George
02 May 2018
2 min read
Infrastructure made of connected things is trending as organizations adopt the Internet of Things. At the same time, security concerns around these connected devices continue to be a bottleneck for IoT adoption. In an effort to improve IoT security, earlier this month Microsoft released Azure Sphere, a cost-effective way of securing connected devices. Gartner claims that worldwide spending on IoT security will reach $1.5 billion in 2018.

Azure Sphere is a suite of services used to enhance IoT security. The suite includes the following:

Azure Sphere MCUs
These are a certified class of microcontrollers specially designed for IoT security. They follow a crossover design that combines real-time and application processors with built-in Microsoft security technology and connectivity. The MCU chips use custom silicon security technology created by Microsoft. Highlights include:

- A Pluton security subsystem to execute complex cryptographic operations
- A crossover MCU combining both Cortex-A and Cortex-M class processors
- Built-in network connectivity to keep devices up to date

Azure Sphere OS
Azure Sphere OS is a Linux distribution used to securely run the Internet of Things. This highly scalable and secure operating system runs on the specialized MCUs, adding an extra layer of security. Highlights include:

- Secured application containers focusing on agility and robustness
- A custom Linux kernel enabling silicon diversity and innovation
- A security monitor to manage access and integrity

The Azure Sphere Security Service
An end-to-end security service dedicated to securing Azure Sphere devices: enhancing security, identifying threats, and managing trust between cloud and device endpoints. Highlights include:

- Protects devices with a certificate-based authentication system
- Ensures device authenticity by verifying that devices are running genuine software
- Manages automated updates to Azure Sphere OS for threat and incident response
- Eases deployment of software updates to Azure Sphere connected devices

For more information, refer to the official Microsoft blog.

Microsoft Cloud Services get GDPR Enhancements

Vijin Boricha
25 Apr 2018
2 min read
With the GDPR deadline looming closer every day, Microsoft has started to apply the General Data Protection Regulation (GDPR) to its cloud services. Microsoft recently announced enhancements to help organizations using Azure and Office 365 services meet GDPR requirements. With these improvements, it aims to ensure that both Microsoft's services and the organizations benefiting from them will be GDPR-compliant by the law's enforcement date.

Microsoft tools supporting GDPR compliance include:

- Service Trust Portal, which provides GDPR information resources
- Security and Compliance Center in the Office 365 Admin Center
- Office 365 Advanced Data Governance, for classifying data
- Azure Information Protection, for tracking and revoking documents
- Compliance Manager, for keeping track of regulatory compliance
- Azure Active Directory Terms of Use, for obtaining informed user consent

Microsoft recently released a preview of a new Data Subject Access Request interface in the Security and Compliance Center and, via a new tab, in the Azure Portal. According to the Microsoft 365 team, this interface is also available in the Service Trust Portal. A Microsoft Tech Community post also claims that the portal will be getting a "Data Protection Impact Assessments" section in the coming weeks.

With the new Data Subject Access Request interface preview, organizations can now search for "relevant data across Office 365 locations," spanning Exchange, SharePoint, OneDrive, Groups, and Microsoft Teams. As Microsoft explains, the searched data is then exported for review prior to being transferred to the requestor.

According to Microsoft, the Data Subject Access Request capabilities will be out of preview before the GDPR deadline of May 25th. It also claims that IT professionals will be able to execute DSRs (Data Subject Requests) against system-generated logs. To know more in detail, you can visit Microsoft's blog post.

AWS SAM (AWS Serverless Application Model) is now open source!

Savia Lobo
24 Apr 2018
2 min read
AWS recently announced that SAM (Serverless Application Model) is now open source. With AWS SAM, you can define serverless applications in a simple and clean syntax. The AWS Serverless Application Model extends AWS CloudFormation and provides a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.

AWS SAM comprises:

- The SAM specification
- Code that translates SAM templates into AWS CloudFormation stacks
- General information about the model
- Examples of common applications

The SAM specification and implementation are open sourced under the Apache 2.0 license for AWS partners and customers to adopt and extend within their own toolsets. The current version of the SAM specification is available at AWS SAM 2016-10-31.

Basic steps to create a serverless application with AWS SAM:

Step 1: Create a SAM template, a JSON or YAML configuration file that describes the Lambda functions, API endpoints, and other resources in your application.

Step 2: Test, upload, and deploy the application using the SAM Local CLI. During deployment, SAM automatically translates the application's specification into CloudFormation syntax, filling in default values for any unspecified properties and determining the appropriate mappings and invocation permissions to set up for any Lambda functions.

To learn more about how to define and deploy serverless applications, read the How-To Guide and see the examples. You can build serverless applications faster and further simplify development by defining new event sources, new resource types, and new parameters within SAM. You can also modify SAM to integrate it with other frameworks and deployment providers from the community.

For more in-depth knowledge, read the AWS SAM development guide on GitHub.
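As a rough illustration of Step 1, here is a minimal SAM template built as a Python dict and emitted as JSON (SAM accepts JSON or YAML). The function name, handler, runtime, and paths are placeholder values, not from the announcement:

```python
import json

# Minimal SAM template: one Lambda function with an API Gateway endpoint.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    # The Transform line is what marks this as a SAM template that
    # CloudFormation expands into full resources at deploy time.
    "Transform": "AWS::Serverless-2016-10-31",
    "Resources": {
        "HelloFunction": {
            "Type": "AWS::Serverless::Function",
            "Properties": {
                "Handler": "app.handler",      # placeholder module.function
                "Runtime": "python3.9",        # placeholder runtime
                "CodeUri": "./src",            # placeholder code location
                # An Api event wires the function to an API Gateway route.
                "Events": {
                    "HelloApi": {
                        "Type": "Api",
                        "Properties": {"Path": "/hello", "Method": "get"},
                    }
                },
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

During deployment, SAM expands this short template into the underlying CloudFormation resources (the function, the API, and the invocation permissions between them).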

Google announces the largest overhaul of their Cloud Speech-to-Text

Vijin Boricha
20 Apr 2018
2 min read
Last month Google announced Cloud Text-to-Speech, their speech synthesis API featuring DeepMind's WaveNet models. Now, they have announced the largest overhaul of Cloud Speech-to-Text (formerly known as Cloud Speech API) since it was introduced in 2016. Google's Speech-to-Text API has been enhanced for business use cases, including phone-call and video transcription. With this new Cloud Speech-to-Text update, you get access to the latest research from Google's machine learning team, all via a simple REST API. It also comes with a standard service level agreement (SLA) of 99.9% availability.

Here's a sneak peek into the latest updates to Google's Cloud Speech-to-Text API:

- New video and phone call transcription models: Google has added models created for specific use cases, such as transcription of phone calls and of audio from video.
- Readable text with automatic punctuation: Google created a new LSTM neural network to improve automatic punctuation in long-form speech transcription. This model, currently in beta, can automatically suggest commas, question marks, and periods for your text.
- Use case description with recognition metadata: Information about transcribed audio or video, tagged with descriptions such as 'voice commands to a Google Home assistant' or 'soccer sport TV shows', is aggregated across Cloud Speech-to-Text users to prioritize upcoming work.

To know more about this update in detail, visit Google's blog post.
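As a sketch of how these features surface in the REST API, the request body below selects the phone-call model and enables automatic punctuation. The field and model names follow the API's `RecognitionConfig` schema as an assumption; this is illustrative, not a verified example from the announcement:

```python
# Build a Speech-to-Text recognize request body using the features the
# article mentions. The bucket URI is a placeholder.
def build_request(audio_uri, use_case="phone_call"):
    return {
        "config": {
            "languageCode": "en-US",
            # Use-case-specific models, e.g. "phone_call" or "video".
            "model": use_case,
            # Beta feature: insert commas, periods, and question marks.
            "enableAutomaticPunctuation": True,
        },
        "audio": {"uri": audio_uri},
    }

req = build_request("gs://my-bucket/call.wav")
print(req["config"]["model"])
```

The same body would be POSTed to the service's `recognize` endpoint; swapping `use_case` to `"video"` selects the video transcription model instead.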

Couchbase Mobile 2.0 is released

Richard Gall
13 Apr 2018
2 min read
Couchbase has just released Couchbase Mobile 2.0. And the organization is pretty excited; it claims the release is going to revolutionize the way businesses process and handle edge analytics. In many ways, Couchbase Mobile 2.0 extends many of the features of the main Couchbase Server to its mobile version. Ultimately, it demonstrates Couchbase responding to one of the core demands of business: minimizing the friction between cloud solutions and mobile devices at the edge of networks.

The challenges Couchbase Mobile 2.0 is trying to solve

According to the Couchbase website, Couchbase Mobile 2.0 is being marketed as solving three key challenges:

- Deployment flexibility
- Performance at scale
- Security

The combination of these three is really the holy grail for many software solutions companies. It's an attempt to resolve the tension between the need for security and stability and the need to remain adaptable and responsive to change. Learn more about Couchbase Mobile 2.0 here.

Ravi Mayuram, Senior VP of Engineering and CTO of Couchbase, said: "With Couchbase Mobile 2.0, we are bringing some very exciting new capabilities to the edge that parallels what we have on Couchbase Server. For the first time, SQL queries and Full-Text Search are available on a NoSQL database running on the edge. Additionally, we’ve made programming much easier through thread and type safe database APIs, as well as automatic conflict resolution."

Key features of Couchbase Mobile 2.0

Here are some of the key features of Couchbase Mobile 2.0:

- Full-text query and SQL search.
- Data change events will allow developers to build applications that respond more quickly. That's only going to be good for user experience.
- Using WebSockets for replication will make replication more efficient, because "it eliminates continuously polling servers".
- Data conflicts can now be resolved much more quickly.

This new release will help cement Couchbase's position as a data platform. And with an impressive list of customers, including Wells Fargo, Tommy Hilfiger, eBay, and DreamWorks, it will be interesting to see to what extent it can grow that list.

Source: Globe Newswire

AWS Greengrass brings machine learning to the edge

Richard Gall
09 Apr 2018
3 min read
AWS already has solutions for machine learning, edge computing, and IoT. But a recent update to AWS Greengrass has combined all of these facets so you can deploy machine learning models to the edge of networks. That's an important step forward in the IoT space for AWS. With Microsoft also recently announcing a $5 billion investment in IoT projects over the next 4 years, the AWS team is making sure it sets the pace in the industry by extending the capability of AWS Greengrass.

Jeff Barr, AWS evangelist, explained the idea in a post on the AWS blog:

"...You can now perform Machine Learning inference at the edge using AWS Greengrass. This allows you to use the power of the AWS cloud (including fast, powerful instances equipped with GPUs) to build, train, and test your ML models before deploying them to small, low-powered, intermittently-connected IoT devices running in those factories, vehicles, mines, fields..."

Industrial applications of machine learning inference

Machine learning inference is bringing lots of advantages to industry and agriculture. For example:

- In farming, edge-enabled machine learning systems will be able to monitor crops using image recognition; in turn this will enable corrective action to be taken, allowing farmers to optimize yields.
- In manufacturing, machine learning inference at the edge should improve operational efficiency by making it easier to spot faults before they occur. For example, by monitoring vibrations or noise levels, Barr explains, you'll be able to identify faulty or failing machines before they actually break.

Running this on AWS Greengrass offers a number of advantages over running machine learning models and processing data locally: it means you can run complex models without draining your computing resources. Read more in the AWS Greengrass Developer Guide.
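As a toy illustration of the vibration-monitoring idea (not the actual Greengrass inference API), the sketch below stands in for a cloud-trained model with a simple mean/standard-deviation baseline that the device uses to flag outlier readings; all numbers and the threshold are made up:

```python
import statistics

# "Training in the cloud" reduced to fitting a mean/stddev baseline.
def fit_baseline(training_readings):
    return statistics.mean(training_readings), statistics.stdev(training_readings)

# "Inference at the edge": flag readings far outside the baseline.
def is_anomalous(reading, baseline, threshold=3.0):
    mean, stdev = baseline
    return abs(reading - mean) > threshold * stdev

# Healthy-machine vibration levels collected during training.
baseline = fit_baseline([1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98])

# New readings from the device; 5.2 is well outside 3 standard deviations.
alerts = [r for r in [1.0, 1.04, 5.2, 0.97] if is_anomalous(r, baseline)]
print(alerts)
```

In a real deployment, the baseline would be a trained ML model built in the cloud and shipped to the device, but the shape of the loop is the same: read a sensor locally, score it locally, and only act (or phone home) on anomalies.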
AWS Greengrass should simplify machine learning inference

One of the fundamental benefits of AWS Greengrass should be that it simplifies machine learning inference at every stage of the typical machine learning workflow. From building and deploying machine learning models to developing inference applications that can be launched locally within an IoT network, it should, in theory, make the advantages of machine learning inference accessible to more people.

It will be interesting to see how this new feature is applied by IoT engineers over the next year or so. But it will also be interesting to see whether it has any impact on the wider battle for the future of Industrial IoT.

Further reading:

- What is edge computing?
- AWS IoT Analytics: The easiest way to run analytics on IoT data, Amazon says
- What you need to know about IoT product development

Microsoft commits $5 billion to IoT projects

Richard Gall
06 Apr 2018
2 min read
Microsoft has announced that it will pour $5 billion into IoT over the next 4 years. To date, Microsoft has spent $1.5 billion, so this move could be viewed as a step change in the organization's commitment to IoT.

This makes sense for Microsoft. The company has fallen behind in the consumer technology race and appears to be moving towards cloud and infrastructure projects instead. Azure has given it a strong position, but with AWS setting the pace in the cloud field, Microsoft needs to move quickly if it is to position itself as the frontrunner in the future of IoT.

Julia White, CVP of Azure, said this:

"With our IoT platform spanning cloud, OS and devices, we are uniquely positioned to simplify the IoT journey so any customer—regardless of size, technical expertise, budget, industry or other factors—can create trusted, connected solutions that improve business and customer experiences, as well as the daily lives of people all over the world. The investment we’re announcing today will ensure we continue to meet all our customers’ needs both now and in the future."

The timing of this huge investment has not gone unnoticed. At the end of March, Microsoft revealed that it was reorganizing to place greater strategic attention on the 'intelligent cloud and intelligent edge'. It's no coincidence that the senior leader set to leave is Terry Myerson, the man who has led the Windows side of the business since 2013.

However, the extent to which this announcement is really that much of a pivot is questionable. In The Register, Simon Sharwood writes:

"Five billion bucks is a lot of money. But not quite so impressive once you realise that Microsoft spent $13.0bn on R&D in FY 2017 and $12bn in each of FY 16 and 15. Five billion spread across the next four years may well be less than ten per cent of all R&D spend."
The analysis from many quarters in the tech media is that this is a move that marks what many have been thinking - managing Windows' decline in favour of Microsoft's move into the cloud and infrastructure space. It's pretty hard to see past that - but it will be interesting to see how Microsoft continues to respond to competition from the likes of Amazon.

Polaris GPS: Rubrik's new SaaS platform for data management applications

Savia Lobo
06 Apr 2018
2 min read
Rubrik, a cloud data management company, has launched Polaris GPS, a new SaaS platform for data management applications. The new platform helps businesses manage information spread across the cloud. Polaris GPS delivers a single control and policy management console across globally distributed business applications that are locally managed by Rubrik's Cloud Data Management instances.

Polaris GPS SaaS Platform

The new SaaS platform forms a unified system of record for business information across all enterprise applications running in data centers and clouds. The system of record includes native search, workflow orchestration, and a global content catalog, all exposed through an open API architecture. Developers can leverage these APIs to deliver high-value data management applications for data policy, control, security, and deep intelligence. Such applications can further address challenges of risk mitigation, compliance, and governance within the enterprise.

Some key features of Polaris GPS:

- Connects all applications and data across data center and cloud with a uniform framework.
- Requires no infrastructure or upgrades; you can leverage the latest features immediately.
- Applies the same logic to any kind of data, letting you focus on business outcomes rather than technical processes.
- Provides faster, on-demand broker services through API-driven connectivity.
- Helps mitigate risk with automated compliance: you define policies and Polaris applies them globally to all your business applications.

Read more about Polaris GPS on Rubrik's official website.

Netflix releases FlameScope

Richard Gall
06 Apr 2018
2 min read
Netflix has released FlameScope, a visualization tool that allows software engineering teams to investigate performance issues. From application startup to single-threaded execution, FlameScope provides insight into the time-based metrics crucial to software performance. The team at Netflix has made FlameScope open source, encouraging engineers to contribute to the project and help develop it further; many development teams could derive a lot of value from the tool, and we're likely to see many customisations as its community grows.

How does FlameScope work?

Watch the video to learn more about FlameScope: https://youtu.be/cFuI8SAAvJg

Essentially, FlameScope allows you to build something a bit like a flame graph, but with an extra dimension. One of the challenges Netflix identified with flame graphs is that while they let you analyze steady and consistent workloads, "often there are small perturbations or variation during that minute that you want to know about, which become a needle-in-a-haystack search when shown with the full profile". With FlameScope you still get the flame graph, but by using a subsecond-offset heat map you're also able to see the "small perturbations" you might otherwise have missed. As Netflix explains: "You can select an arbitrary continuous time-slice of the captured profile, and visualize it as a flame graph."

Why Netflix built FlameScope

FlameScope was built by the Netflix cloud engineering team, and the motivations for building it are actually pretty interesting. The team had a microservice that was suffering from strange spikes in latency, the cause a mystery. A member of the team found that these spikes, which occurred roughly every fifteen minutes, appeared to correlate with "an increase in CPU utilization that lasted only a few seconds." CPU flame graphs, of course, didn't help, for the reasons outlined above.

To tackle this, the team effectively sliced a flame graph into smaller chunks. Slicing it down into one-second snapshots was, as you might expect, a pretty arduous task, so by using subsecond heat maps the team was able to create flame graphs on a really small scale. This made it much easier to visualize those variations.

The team is planning to continue developing FlameScope. It will be interesting to see where they take it and how the community responds. To learn more, read the post on the Netflix Tech Blog.
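The subsecond-offset heat map idea can be sketched in a few lines: each profile sample is bucketed into a cell keyed by its whole second (x axis) and its offset within that second (y axis), so a brief CPU spike shows up as a hot column rather than vanishing into a minute-long average. The timestamps below are made-up sample data, not FlameScope's actual internals:

```python
from collections import Counter

# Bucket profile-sample timestamps into subsecond-offset heat map cells.
def heatmap_cells(timestamps, rows=50):
    cells = Counter()
    for t in timestamps:
        second = int(t)                 # x axis: wall-clock second
        row = int((t - second) * rows)  # y axis: offset within that second
        cells[(second, row)] += 1
    return cells

# Made-up sample timestamps in seconds; note the burst early in second 0.
samples = [0.01, 0.02, 0.03, 0.51, 1.02, 1.03]
cells = heatmap_cells(samples, rows=2)
print(cells)
```

Selecting a time-slice then just means collecting the samples whose cells fall in the chosen rectangle and rendering a flame graph from those samples alone.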