Tech News - Cloud & Networking

376 Articles

Introducing Azure Sphere - A secure way of running your Internet of Things devices

Gebin George
02 May 2018
2 min read
Infrastructure made of connected things is trending strongly as organizations adopt the Internet of Things. At the same time, security concerns around these connected devices continue to be a bottleneck for IoT adoption. In an effort to improve IoT security, earlier this month Microsoft released Azure Sphere, a cost-effective way of securing connected devices. Gartner claims that worldwide spending on IoT security will reach $1.5 billion in 2018.

Azure Sphere is essentially a suite of services used to enhance IoT security. The suite includes the following:

Azure Sphere MCUs

These are a certified class of microcontrollers specially designed for securing the Internet of Things. They use a crossover design that combines real-time and application processors with built-in Microsoft security technology and connectivity. The MCU chips are designed using custom silicon security technology made by Microsoft. Some of the highlights are:

- A Pluton security subsystem to execute complex cryptographic operations
- A crossover MCU combining both Cortex-A and Cortex-M class processors
- Built-in network connectivity to ensure devices stay up to date

Azure Sphere OS

Azure Sphere OS is a Linux distribution used to securely run the Internet of Things. This highly scalable and secure operating system runs the specialized MCUs while adding an extra layer of security. Some of the highlights are:

- Secured application containers focusing on agility and robustness
- A custom Linux kernel enabling silicon diversity and innovation
- A security monitor to manage access and integrity

The Azure Sphere Security Service

An end-to-end security service solely dedicated to securing Azure Sphere devices: enhancing security, identifying threats, and managing trust between cloud and device endpoints. The highlights are:

- Protects your devices using a certificate-based authentication system
- Ensures device authenticity by verifying that devices are running genuine software
- Manages automated updates to Azure Sphere OS for threat and incident response
- Enables easy deployment of software updates to Azure Sphere connected devices

For more information, refer to the official Microsoft blog.

Serverless computing wars: AWS Lambdas vs Azure Functions
How to call an Azure function from an ASP.NET Core MVC application

Google’s kaniko - An open-source build tool for Docker images in Kubernetes, without root access

Savia Lobo
27 Apr 2018
2 min read
Google recently introduced kaniko, an open-source tool for building container images from a Dockerfile even without privileged root access. Prior to kaniko, building images from a standard Dockerfile typically depended on interactive access to a Docker daemon, which requires root access on the machine to run. Such a process makes it difficult to build container images in environments that can’t easily or securely expose their Docker daemons, such as Kubernetes clusters. kaniko was created to combat these challenges.

With kaniko, one can build an image from a Dockerfile and push it to a registry. Since it doesn’t require any special privileges or permissions, kaniko can run even in a standard Kubernetes cluster, in Google Kubernetes Engine, or in any environment that can’t expose privileges or a Docker daemon.

How does the kaniko build tool work?

kaniko runs as a container image that takes in three arguments: a Dockerfile, a build context, and the name of the registry to which it should push the final image. The image is built from scratch and contains only a static Go binary plus the configuration files needed for pushing and pulling images.

The kaniko executor takes care of extracting the base image file system into the root. It executes each command in order and takes a snapshot of the file system after each command. The snapshot is created in the user area where the file system is running and compared to the previous state held in memory. All changes in the file system are appended to the base image, with the relevant changes made to the image metadata. After successful execution of each command in the Dockerfile, the executor pushes the newly built image to the desired registry.

In short, kaniko unpacks the filesystem, executes commands, and takes snapshots of the filesystem entirely in user space within the executor image. This is how it avoids requiring privileged access on your machine.
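The snapshot-and-diff loop described above can be sketched in Python. This is a simplified toy model (not kaniko's actual Go implementation): after each Dockerfile command, hash every file in the extracted root, diff against the previous snapshot, and record the changed paths as a new layer.

```python
import hashlib

def snapshot(fs: dict) -> dict:
    """Hash every file in a toy (path -> bytes) filesystem model."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in fs.items()}

def diff(prev: dict, curr: dict) -> set:
    """Paths added or modified since the previous snapshot."""
    return {p for p, h in curr.items() if prev.get(p) != h}

def build(base_fs: dict, commands) -> list:
    """Run each 'command' (here, a function mutating the filesystem model),
    snapshotting after each one and collecting the changed paths as layers,
    the way kaniko does in user space."""
    layers = []
    prev = snapshot(base_fs)
    for cmd in commands:
        cmd(base_fs)                      # execute the Dockerfile command
        curr = snapshot(base_fs)
        layers.append(diff(prev, curr))   # append only what changed
        prev = curr
    return layers

# Two toy "commands": add a binary, then modify it and add a config file
fs = {"/etc/os-release": b"base"}
cmds = [lambda f: f.update({"/app/bin": b"v1"}),
        lambda f: f.update({"/app/bin": b"v2", "/app/cfg": b"x"})]
print(build(fs, cmds))  # layer 1 adds /app/bin; layer 2 changes /app/bin and adds /app/cfg
```

The real executor additionally packages each diff as an image layer and pushes the result to the registry named in its third argument.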
Neither the Docker daemon nor the CLI is involved here. To know more about how to run kaniko in a Kubernetes cluster and in the Google Cloud Container Builder, read the documentation on the GitHub repo.

The key differences between Kubernetes and Docker Swarm
Building Docker images using Dockerfiles
What’s new in Docker Enterprise Edition 2.0?

Microsoft Cloud Services get GDPR Enhancements

Vijin Boricha
25 Apr 2018
2 min read
With the GDPR deadline looming closer every day, Microsoft has started to apply the General Data Protection Regulation (GDPR) to its cloud services. Microsoft recently announced enhancements to help organizations using Azure and Office 365 services meet GDPR requirements. With these improvements it aims to ensure that both Microsoft's services and the organizations benefiting from them will be GDPR-compliant by the law's enforcement date.

Microsoft tools supporting GDPR compliance are as follows:

- Service Trust Portal, which provides GDPR information resources
- Security and Compliance Center in the Office 365 Admin Center
- Office 365 Advanced Data Governance for classifying data
- Azure Information Protection for tracking and revoking documents
- Compliance Manager for keeping track of regulatory compliance
- Azure Active Directory Terms of Use for obtaining user informed consent

Microsoft recently released a preview of a new Data Subject Access Request interface in the Security and Compliance Center and, via a new tab, in the Azure Portal. According to the Microsoft 365 team, this interface is also available in the Service Trust Portal. A Microsoft Tech Community post also says that the portal will be getting a "Data Protection Impacts Assessments" section in the coming weeks.

Organizations can now search for "relevant data across Office 365 locations" with the new Data Subject Access Request interface preview. This helps organizations search across Exchange, SharePoint, OneDrive, Groups, and Microsoft Teams. As explained by Microsoft, once searched, the data is exported for review prior to being transferred to the requestor.

According to Microsoft, the Data Subject Access Request capabilities will be out of preview before the GDPR deadline of May 25th. It also claims that IT professionals will be able to execute DSRs (Data Subject Requests) against system-generated logs.

To know more in detail, you can visit Microsoft’s blog post.

AWS SAM (AWS Serverless Application Model) is now open source!

Savia Lobo
24 Apr 2018
2 min read
AWS recently announced that SAM (Serverless Application Model) is now open source. With AWS SAM, one can define serverless applications in a simple and clean syntax. The AWS Serverless Application Model extends AWS CloudFormation and provides a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.

AWS SAM comprises:

- The SAM specification
- Code translating the SAM templates into AWS CloudFormation stacks
- General information about the model
- Examples of common applications

The SAM specification and implementation are open sourced under the Apache 2.0 license for AWS partners and customers to adopt and extend within their own toolsets. The current version of the SAM specification is available at AWS SAM 2016-10-31.

Basic steps to create a serverless application with AWS SAM:

Step 1: Create a SAM template, a JSON or YAML configuration file that describes the Lambda functions, API endpoints, and other resources in your application.

Step 2: Test, upload, and deploy the application using the SAM Local CLI. During deployment, SAM automatically translates the application’s specification into CloudFormation syntax, filling in default values for any unspecified properties and determining the appropriate mappings and invocation permissions to set up for any Lambda functions.

To learn more about how to define and deploy serverless applications, read the How-To Guide and see the examples. One can build serverless applications faster and further simplify development by defining new event sources, new resource types, and new parameters within SAM. One can also modify SAM to integrate it with other frameworks and deployment providers from the community for building serverless applications.

For more in-depth knowledge, read the AWS SAM development guide on GitHub.
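The SAM template from Step 1 can be illustrated with a minimal sketch. It is modeled here as a Python dict mirroring the YAML/JSON structure; the `Transform` value is the SAM 2016-10-31 specification named above, while the function name, handler, and code location are hypothetical placeholders.

```python
import json

# A minimal SAM template: the Transform line tells CloudFormation to expand
# the simplified serverless resource types into full CloudFormation resources.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Transform": "AWS::Serverless-2016-10-31",
    "Resources": {
        "HelloFunction": {                          # hypothetical function name
            "Type": "AWS::Serverless::Function",
            "Properties": {
                "Handler": "app.handler",            # hypothetical handler
                "Runtime": "python3.6",
                "CodeUri": "s3://my-bucket/app.zip", # hypothetical code location
                "Events": {
                    "Api": {
                        "Type": "Api",               # implicit API Gateway endpoint
                        "Properties": {"Path": "/hello", "Method": "get"},
                    }
                },
            },
        }
    },
}
print(json.dumps(template, indent=2))
```

During deployment, SAM would fill in defaults (for example, the API Gateway stage) and generate the invocation permissions for the Lambda function.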

Everything you need to know about Jenkins X, the new cloud native CI/CD solution on Kubernetes

Savia Lobo
24 Apr 2018
3 min read
Jenkins is loved by many as the open source automation server that provides different plugins to support building, deploying, and automating any project. However, Jenkins is not a cloud-native tool: it lacks the OOTB (out-of-the-box) capabilities to survive an outage and to scale seamlessly, among other shortcomings. In order to make Jenkins cloud native, the team has come up with a brand new platform, Jenkins X, an open source CI/CD solution for modern cloud applications, deployed on Kubernetes.

Jenkins X is currently a sub-project within the Jenkins Foundation. It focuses fully on Kubernetes, CI/CD, and cloud-native use cases to provide great developer productivity. With the Kubernetes plugin, one does not have to worry about provisioning VMs or physical servers for slaves. The target audience for Jenkins X includes both existing and new Jenkins users. It is specifically designed for those who:

- Are already using Kubernetes and want to adopt CI/CD, or
- Want to adopt CI/CD and increasingly move to the public cloud, even if they don’t know anything about Kubernetes.

Key features of Jenkins X

An automated Continuous Integration (CI) and Continuous Delivery (CD) tool: Jenkins X does not require one to have deep knowledge of the internals of a Jenkins pipeline. It provides default settings and the best-fit pipelines for one’s projects, implementing CI and CD fully.

Automated management of environments: Jenkins X automates the management of the environments each team gets, and the promotion of new versions of applications between environments, via GitOps.

Automated preview environments: Jenkins X automatically provides preview environments for one’s pull requests. With this, one can get faster feedback before changes are merged to master.

Feedback on issues and pull requests: Jenkins X automatically comments on commits, issues, and pull requests with feedback when code is ready to be previewed, when code is promoted to environments, or when pull requests are generated automatically to upgrade versions.

Some other notable features of Jenkins X are:

- Jenkins X uses a distribution of Jenkins as the core CI/CD engine. It also promotes a particular Git branching and repository model and includes tools and services, present within the distribution, to fit this model.
- The Jenkins X development model represents "best practice of developing Kubernetes applications", based in part on the experience of developing Fabric8, a project with a similar mission, and on the results of the State of DevOps report.
- The advantage of Jenkins X is that if one follows the best practices, Jenkins X assembles all the pieces by itself (for instance Jenkins, Kubernetes, Git, and CI/CD) such that developers can be instantly productive.
- Jenkins X ships with Kubernetes pipelines, agents, and integrations, which makes migrations to Kubernetes and microservices much simpler.

jx: the Jenkins X CLI tool

Jenkins X also defines a command-line tool, jx, which encapsulates tasks as high-level operations. The CLI is used not only by developers from their computers but also by the Jenkins Pipeline. It is a central user interface which allows you to:

- Easily install Jenkins X on any Kubernetes cluster
- Create new Kubernetes clusters from scratch on the public cloud
- Set up environments for each team
- Import existing projects or create new Spring Boot applications, and later automatically set up the CI/CD pipeline and webhooks, create new releases and promote them through the environments on merge to master, and support preview environments on pull requests

Read more on Jenkins X on its official website.

What to expect from upcoming Ubuntu 18.04 release

Gebin George
20 Apr 2018
2 min read
Ubuntu 18.04's official release is scheduled for April 26th, 2018. Ubuntu 17.10 was released in October 2017, and within a span of six months the next big update arrives in 18.04. Ubuntu version numbers have an interesting trait, wherein 18.04 will be released in the 4th month of 2018, similar to 17.10, which was released in the 10th month of 2017. Ubuntu 18.04 comes with some exciting new features:

Extending support to color emojis

All previous versions of Ubuntu supported only monochrome black-and-white emojis, which definitely lacked aesthetic appeal. This update might not be at the top of anyone's wishlist, but emojis form an integral part of modern communication, and other distros like Fedora gained color emoji support long ago. With the 18.04 release, you can add and view color emojis anytime, anywhere. The release uses the Noto Color Emoji font, which can be downloaded from its GitHub page.

Shipping with Linux kernel 4.15

Ubuntu 18.04 ships with Linux kernel 4.15, which brings the much-needed Spectre and Meltdown patch fixes to Ubuntu 18.04. Furthermore, it also adds native support for the Raspberry Pi touchscreen and a significant performance boost for AMD GPUs.

GNOME 3.28

The Unity desktop environment is no longer the default, since the release of a customized GNOME in Ubuntu 17.10. The team is planning to continue with it and ship the latest version of GNOME (3.28) along with 18.04.

Xorg display server

Wayland was introduced as the default display server for Ubuntu in the 17.10 release. But it has turned out to be an issue, as a decent number of applications were not supported on Wayland. Hence, in the new release Ubuntu is switching back to the Xorg display server as the default option, and Wayland will be provided as an option to users.

Increase in boot speed

Canonical, the company behind Ubuntu, has claimed that Ubuntu 18.04 will have better boot speed, as systemd's features will help identify bottlenecks and solve them as quickly as possible.

New installer for the server edition

Ubuntu had been using the Debian text-based installer for its server edition, but with the 18.04 release the server edition will use the all-new Subiquity installer. Check out the GitHub page for more about the Subiquity installer.

For minor bug fixes, features, and enhancements, refer to the FOSSBYTES blog.

Google announces the largest overhaul of its Cloud Speech-to-Text

Vijin Boricha
20 Apr 2018
2 min read
Last month Google announced Cloud Text-to-Speech, its speech synthesis API featuring DeepMind and WaveNet models. Now, it has announced the largest overhaul of Cloud Speech-to-Text (formerly known as the Cloud Speech API) since it was introduced in 2016.

Google’s Speech-to-Text API has been enhanced for business use cases, including phone-call and video transcription. With this new Cloud Speech-to-Text update, one can get access to the latest research from Google’s machine learning expert team, all via a simple REST API. It also comes with a standard service level agreement (SLA) with 99.9% availability.

Here’s a sneak peek into the latest updates to Google’s Cloud Speech-to-Text API:

New video and phone call transcription models: Google has added models created for specific use cases, such as transcriptions of phone calls and of audio from video.

Readable text with automatic punctuation: Google created a new LSTM neural network to improve automatic punctuation in long-form speech transcription. This Cloud Speech-to-Text model, currently in beta, can automatically suggest commas, question marks, and periods for your text.

Use case description with recognition metadata: Information from transcribed audio or video, with tags such as ‘voice commands to a Google Home assistant’ or ‘soccer sport TV shows’, is aggregated across Cloud Speech-to-Text users to prioritize upcoming activities.

To know more about this update in detail, visit Google’s blog post.
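A request to the overhauled REST API combining these features might look like the sketch below. The payload is only constructed, not sent; the field names follow the Speech-to-Text recognition config as announced (use-case model selection, automatic punctuation, recognition metadata), and the `gs://` audio URI is a hypothetical placeholder.

```python
import json

# Request body for the speech:recognize REST method. The bucket/object in
# the gs:// URI is a made-up placeholder, not a real resource.
request_body = {
    "config": {
        "encoding": "LINEAR16",
        "languageCode": "en-US",
        "model": "video",                    # new use-case-specific model
        "enableAutomaticPunctuation": True,  # beta: commas, periods, question marks
        "metadata": {                        # recognition metadata describing the use case
            "interactionType": "DISCUSSION",
            "originalMediaType": "VIDEO",
        },
    },
    "audio": {"uri": "gs://my-bucket/talk.wav"},
}
print(json.dumps(request_body, indent=2))
```

In practice this body would be POSTed to the Speech-to-Text endpoint with an API key or OAuth credentials attached.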

Cloud and networking news bulletin - Friday 20 April

Richard Gall
20 Apr 2018
2 min read
Welcome to the cloud and networking news bulletin. Every Friday you'll find the latest updates and software releases from the world of cloud. End your week with an informative dose of tech news.

Cloud and networking news on the Packt Hub

Couchbase Mobile 2.0 is released.

Cloud and networking news from across the web

The U.S. Defense Department is taking big steps towards cloud computing. The team behind the Joint Enterprise Defense Infrastructure (JEDI) project is looking for IaaS and PaaS solutions, with a view to using cloud as the foundation for improved artificial intelligence projects. There has been concern from some quarters that the DoD was planning to commit to a 10-year contract with a single vendor. However, Pentagon spokesperson Dana White suggested otherwise, saying "multiple vendors may form a partnership to offer us a competitive solution."

Huawei adds a Blockchain Service platform to its cloud computing services. The Chinese telecoms giant has revealed a new Blockchain Service platform. It should allow developers and businesses to build and scale blockchain applications on Huawei's cloud. The organization suggests there could be a number of ways the service could be used, from improving financial transparency and security to managing digital assets.

VMware reveals new releases of vSphere and vSAN. The virtualization giant claims the updates are 'elevating' the way users experience hybrid cloud.

Google overhauls its cloud speech-to-text engine. It's been around for a couple of years now, but the new features look like they're going to make the tool more useful for businesses. They include 'pre-built models for improved transcription accuracy from phone calls and video' and 'automatic punctuation to improve readability of transcribed long-form audio.'

Esri and Alibaba Cloud are working together to bring enhanced location intelligence technology to cloud users.

Oracle's customer experience cloud suite expands its offerings.

FireEye and Oracle collaborate on cloud transformation.

What's new in Docker Enterprise Edition 2.0?

Gebin George
18 Apr 2018
3 min read
Docker Enterprise Edition 2.0 was released yesterday. The major focus of this new release (and the platform as a whole) is speeding up multi-cloud initiatives and automating the application delivery model, which goes hand-in-hand with the DevOps and Agile philosophies. Docker has become an important tool for businesses in a very short space of time. With Docker EE 2.0, it looks like Docker will consolidate its position as the go-to containerization tool for enterprise organizations.

Key features of Docker Enterprise Edition 2.0

Let's look at some of the key capabilities included in the Docker EE 2.0 release.

Docker EE 2.0 is incredibly flexible

Flexibility is one of the biggest assets of Docker Enterprise Edition, as today's software delivery ecosystem demands freedom of choice. Organizations that build applications on different platforms, using varied sets of tools, deploying on different infrastructures and running them on different platforms, require a huge amount of flexibility. Docker EE addresses this concern with the following capabilities:

Multi-Linux, multi-OS, multi-cloud: Many organizations have adopted a hybrid-cloud or multi-cloud strategy and build applications on different operating systems. Docker EE is supported across the popular operating systems, including Windows Server and all the popular Linux distributions, and also on popular public clouds, enabling users to deploy applications flexibly, wherever required.

Docker EE 2.0 is interoperable with Docker Swarm and Kubernetes: Container orchestration forms the core of DevOps, and the entire container ecosystem revolves around Swarm or Kubernetes. Docker EE allows flexibility in switching between both these tools for application deployment and orchestration. Applications deployed on Swarm today can be easily migrated to Kubernetes using the same compose file, making the life of developers simpler.
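The "same compose file" portability can be pictured with a minimal version-3 compose definition, modeled here as a Python dict mirroring the YAML. The service name and image are hypothetical; the idea is that one stack definition like this could be deployed by Docker EE to either a Swarm or a Kubernetes cluster.

```python
import json

# Minimal docker-compose v3-style stack. "web" and "myorg/web:1.0" are
# made-up placeholders for illustration.
compose = {
    "version": "3.3",
    "services": {
        "web": {
            "image": "myorg/web:1.0",
            "ports": ["8080:80"],
            "deploy": {"replicas": 3},  # scaling hint for the orchestrator
        }
    },
}
print(json.dumps(compose, indent=2))
```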
Accelerating Agile with Docker Enterprise Edition 2.0

Docker EE focuses on monitoring and managing containers to a much greater extent than the open source version of Docker. The Enterprise Edition has a specialized management and monitoring platform for looking after Kubernetes clusters, and also has access to the Kubernetes API, CLI, and interfaces.

Cluster management made simple: Easy-to-use cluster management services, including simple single-line commands for adding clusters, high availability of the management plane, access to consoles and logs, and secured configurations.

Secure application zones: With swift integration with corporate LDAP and Active Directory systems, a single cluster can be divided logically and physically between different teams. This seems to be the most convenient way to assign new namespaces to Kubernetes clusters.

Layer 7 routing for Swarm: The new Interlock 2.0 architecture provides new and optimized enhancements for network routing in Swarm. For more information on the Interlock architecture, refer to the official Docker blog.

Kubernetes: All the core components of the Kubernetes environment, such as the APIs and CLIs, are available to users in a CNCF-conformant Kubernetes stack.

There are a few more enhancements related to the supply chain and security domains. For the complete set of improvements to Docker, check out the official Docker EE documentation.

Couchbase Mobile 2.0 is released

Richard Gall
13 Apr 2018
2 min read
Couchbase has just released Couchbase Mobile 2.0. And the organization is pretty excited; it claims that it's going to revolutionize the way businesses process and handle edge analytics. In many ways, Couchbase Mobile 2.0 extends many of the features of the main Couchbase Server to its mobile version. Ultimately, it demonstrates Couchbase responding to some of the core demands of business: minimizing the friction between cloud solutions and mobile devices at the edge of networks.

The challenges Couchbase Mobile 2.0 is trying to solve

According to the Couchbase website, Couchbase Mobile 2.0 is being marketed as solving three key challenges:

- Deployment flexibility
- Performance at scale
- Security

The combination of these three is really the holy grail for many software solutions companies. It's an attempt to resolve the tension between the need for security and stability while remaining adaptable and responsive to change. Learn more about Couchbase Mobile 2.0 here.

Ravi Mayuram, Senior VP of Engineering and CTO of Couchbase, said: "With Couchbase Mobile 2.0, we are bringing some very exciting new capabilities to the edge that parallels what we have on Couchbase Server. For the first time, SQL queries and Full-Text Search are available on a NoSQL database running on the edge. Additionally, we’ve made programming much easier through thread and type safe database APIs, as well as automatic conflict resolution."

Key features of Couchbase Mobile 2.0

Here are some of the key features of Couchbase Mobile 2.0:

- Full-text query and SQL search.
- Data change events will allow developers to build applications that respond more quickly. That's only going to be good for user experience.
- Using WebSocket for replication will make replication more efficient, because "it eliminates continuously polling servers".
- Data conflicts can now be resolved much more quickly.

This new release will help to cement Couchbase's position as a data platform. And with an impressive list of customers, including Wells Fargo, Tommy Hilfiger, eBay, and DreamWorks, it will be interesting to see to what extent it can grow that list.

Source: Globe Newswire

Kubernetes 1.10 released

Vijin Boricha
09 Apr 2018
2 min read
Kubernetes has announced its first release of 2018: Kubernetes 1.10. This release focuses mainly on stabilizing three key areas: storage, security, and networking. Kubernetes is an open-source system, initially designed by Google and at present maintained by the Cloud Native Computing Foundation, which helps in automating the deployment, scaling, and management of containerized applications.

Storage - CSI and local storage move to beta

In this version, you will find:

- The Container Storage Interface (CSI) moves to beta. One can install new volume plugins in the same way as deploying a pod. This helps third-party storage providers develop independent solutions outside the core Kubernetes codebase.
- Local storage management has also progressed to beta, making locally attached storage available as a persistent volume source. This promises lower cost and higher performance for distributed file systems and databases.

Security - external credential providers (alpha)

Complementing the Cloud Controller Manager feature added in 1.9, Kubernetes 1.10 adds external credential providers. This enables cloud providers and other platform developers to release binary plugins to handle authentication for specific cloud-provider Identity and Access Management services.

Networking - CoreDNS as a DNS provider (beta)

Kubernetes now provides the ability to switch the DNS service to CoreDNS during installation. CoreDNS is a single process that supports more use cases.

To get a complete list of the additional features of this release, visit the changelog. Check out other related posts:

The key differences between Kubernetes and Docker Swarm
Apache Spark 2.3 now has native Kubernetes support!
OpenShift 3.9 released ahead of planned schedule
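The local-storage feature that moved to beta can be sketched as a manifest, modeled here as a Python dict mirroring the YAML: a PersistentVolume with a `local` source, pinned to one node via a node-affinity requirement. The volume name, disk path, and node name are hypothetical placeholders.

```python
import json

# A local PersistentVolume (beta in 1.10): locally attached storage exposed
# as a PV source. The path and node name below are made up for illustration.
local_pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "fast-local-pv"},
    "spec": {
        "capacity": {"storage": "100Gi"},
        "accessModes": ["ReadWriteOnce"],
        "persistentVolumeReclaimPolicy": "Retain",
        "storageClassName": "local-storage",
        "local": {"path": "/mnt/disks/ssd1"},   # the locally attached disk
        "nodeAffinity": {                        # pin the PV to the node owning the disk
            "required": {
                "nodeSelectorTerms": [{
                    "matchExpressions": [{
                        "key": "kubernetes.io/hostname",
                        "operator": "In",
                        "values": ["node-1"],
                    }]
                }]
            }
        },
    },
}
print(json.dumps(local_pv, indent=2))
```

Unlike hostPath volumes, a local PV like this participates in the normal claim/binding flow, which is what makes it usable by distributed databases.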

AWS Greengrass brings machine learning to the edge

Richard Gall
09 Apr 2018
3 min read
AWS already has solutions for machine learning, edge computing, and IoT. But a recent update to AWS Greengrass has combined all of these facets so you can deploy machine learning models to the edge of networks. That's an important step forward in the IoT space for AWS. With Microsoft also recently announcing a $5 billion investment in IoT projects over the next 4 years, by extending the capability of AWS Greengrass the AWS team is making sure it sets the pace in the industry.

Jeff Barr, AWS evangelist, explained the idea in a post on the AWS blog: "...You can now perform Machine Learning inference at the edge using AWS Greengrass. This allows you to use the power of the AWS cloud (including fast, powerful instances equipped with GPUs) to build, train, and test your ML models before deploying them to small, low-powered, intermittently-connected IoT devices running in those factories, vehicles, mines, fields..."

Industrial applications of machine learning inference

Machine learning inference is bringing lots of advantages to industry and agriculture. For example:

- In farming, edge-enabled machine learning systems will be able to monitor crops using image recognition; in turn this will enable corrective action to be taken, allowing farmers to optimize yields.
- In manufacturing, machine learning inference at the edge should improve operational efficiency by making it easier to spot faults before they occur. For example, by monitoring vibrations or noise levels, Barr explains, you'll be able to identify faulty or failing machines before they actually break.

Running this on AWS Greengrass offers a number of advantages over running machine learning models and processing data locally: you can run complex models without draining your computing resources. Read more in detail in the AWS Greengrass Developer Guide.

AWS Greengrass should simplify machine learning inference

One of the fundamental benefits of using AWS Greengrass should be that it simplifies machine learning inference at every stage of the typical machine learning workflow. From building and deploying machine learning models to developing inference applications that can be launched locally within an IoT network, it should, in theory, make the advantages of machine learning inference more accessible to more people.

It will be interesting to see how this new feature is applied by IoT engineers over the next year or so. But it will also be interesting to see if this has any impact on the wider battle for the future of industrial IoT.

Further reading:

What is edge computing?
AWS IoT Analytics: The easiest way to run analytics on IoT data, Amazon says
What you need to know about IoT product development
Read more
  • 0
  • 0
  • 2612

article-image-polaris-gps-rubriks-new-saas-platform-for-data-management-applications
Savia Lobo
06 Apr 2018
2 min read

Polaris GPS: Rubrik's new SaaS platform for data management applications

Rubrik, a cloud data management company, has launched Polaris GPS, a new SaaS platform for data management applications. The platform helps businesses manage information spread across clouds. Polaris GPS delivers a single control and policy management console across globally distributed business applications that are locally managed by Rubrik's Cloud Data Management instances.

Polaris GPS SaaS Platform

The new SaaS platform forms a unified system of record for business information across all enterprise applications running in data centers and clouds. The system of record includes native search, workflow orchestration, and a global content catalog, which are exposed through an open API architecture. Developers can leverage these APIs to deliver high-value data management applications for data policy, control, security, and deep intelligence. These applications can further address challenges of risk mitigation, compliance, and governance within the enterprise.

Some key features of Polaris GPS:

Connects all applications and data across data center and cloud with a uniform framework.

No infrastructure or upgrades required: you can leverage the latest features immediately.

Lets you apply the same logic to any kind of data and focus on business outcomes rather than technical processes.

Provides faster on-demand broker services through API-driven connectivity.

Helps mitigate risk with automated compliance: you define policies once and Polaris applies them globally to all your business applications.

Read more about Polaris GPS on Rubrik's official website.
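The "define policies once, apply them globally" idea can be illustrated with a short sketch. To be clear, Rubrik has not published this API: the `Policy` and `ManagedApp` shapes below are invented purely to show the concept of one centrally defined policy being applied across applications wherever they run.

```python
# Illustrative sketch only: the policy and application objects below are
# invented to show one globally defined policy applied uniformly across
# applications running in data centers and clouds.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Policy:
    name: str
    retention_days: int
    encrypt: bool

@dataclass
class ManagedApp:
    name: str
    location: str                      # "datacenter" or "cloud"
    policy: Optional[Policy] = field(default=None)

def apply_globally(policy, apps):
    """Attach one policy to every managed app, regardless of location."""
    for app in apps:
        app.policy = policy
    return apps

apps = [ManagedApp("billing", "datacenter"), ManagedApp("crm", "cloud")]
apply_globally(Policy("gold", retention_days=90, encrypt=True), apps)
print([(a.name, a.policy.name) for a in apps])
```

The point of the sketch is the separation of concerns: the policy is defined once, and the platform, not the operator, is responsible for fanning it out to every application.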
Richard Gall
06 Apr 2018
2 min read

Netflix releases FlameScope

Netflix has released FlameScope, a visualization tool that allows software engineering teams to monitor performance issues. From application startup to single-threaded execution, FlameScope provides real-time insight into the time-based metrics crucial to software performance. The team at Netflix has made FlameScope open source, encouraging engineers to contribute to the project and help develop it further. Many development teams could derive a lot of value from the tool, and we're likely to see many customisations as its community grows.

How does FlameScope work?

Watch the video below to learn more about FlameScope.

https://youtu.be/cFuI8SAAvJg

Essentially, FlameScope allows you to build something like a flame graph, but with an extra dimension. One challenge Netflix identified is that while flame graphs allow you to analyze steady and consistent workloads, "often there are small perturbations or variation during that minute that you want to know about, which become a needle-in-a-haystack search when shown with the full profile". With FlameScope you still get the flame graph, but by using a subsecond-offset heat map you're also able to see the "small perturbations" you might otherwise have missed. As Netflix explains: "You can select an arbitrary continuous time-slice of the captured profile, and visualize it as a flame graph."

Why Netflix built FlameScope

FlameScope was built by the Netflix cloud engineering team, and the motivations for building it are actually pretty interesting. The team had a microservice that was suffering from strange spikes in latency, their cause a mystery. One member of the team found that these spikes, which occurred roughly every fifteen minutes, appeared to correlate with "an increase in CPU utilization that lasted only a few seconds." CPU flame graphs, of course, didn't help, for the reasons outlined above.
To tackle this, the team effectively sliced a flame graph into smaller chunks. Slicing it manually into one-second snapshots was, as you might expect, a pretty arduous task, so by using subsecond heat maps the team was able to create flame graphs at a much smaller scale. This made it much easier to visualize those variations.

The team plans to continue developing the FlameScope project. It will be interesting to see where they decide to take it and how the community responds. To learn more, read the post on the Netflix Tech Blog.
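The subsecond-offset heat map idea described above can be sketched in a few lines: bucket each profile sample by its whole second (the x-axis) and by its fractional offset within that second (the y-axis), so sub-second bursts that a whole-minute flame graph would average away show up as dense cells. This is an illustrative reconstruction of the concept, not FlameScope's actual code.

```python
# Sketch of a subsecond-offset heat map: each sample timestamp lands in a
# grid cell indexed by (fractional offset within the second, which second).
# Dense cells reveal sub-second bursts of activity.
def subsecond_heatmap(timestamps, rows=50):
    start = int(min(timestamps))
    cols = int(max(timestamps)) - start + 1
    grid = [[0] * cols for _ in range(rows)]
    for t in timestamps:
        col = int(t) - start                 # which second
        row = int((t % 1.0) * rows)          # offset within that second
        grid[min(row, rows - 1)][col] += 1
    return grid

# A burst of samples early in second 2, sparse samples elsewhere:
samples = [0.1, 0.9, 1.5, 2.10, 2.12, 2.14, 2.16, 2.18, 3.3]
grid = subsecond_heatmap(samples, rows=10)
print([grid[r][2] for r in range(10)])  # column for second 2
```

In FlameScope itself, clicking a range of cells in the heat map then renders a flame graph for just that slice of the profile, which is what makes the needle-in-a-haystack perturbations findable.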
Savia Lobo
05 Apr 2018
2 min read

AWS Sydney Summit 2018 is all about IoT

AWS is all set to spill its IoT beans at the Australian AWS Summit in Sydney, held on 11 and 12 April 2018 at Sydney's International Convention Centre. AWS looks forward to shedding light on cloud technologies and how they can help businesses lower costs, improve efficiency, and innovate at scale. Customers will also see the potential for IoT in the real world and in industrial use cases, says AWS.

Highlights of AWS Sydney Summit 2018

The AWS Sydney Summit will have one session dedicated to IoT (Intelligence of Things: IoT, AWS DeepLens, and Amazon SageMaker).

The summit will showcase the capabilities of AWS Greengrass in delivering IoT edge intelligence, with integration to other services such as Amazon Rekognition and AWS machine learning solutions.

The summit will also highlight how customers can leverage the power of Amazon SageMaker, a fully managed end-to-end machine learning tool that enables users to quickly build, train, and deploy machine learning models. The team will demonstrate how to deploy different machine learning models to an AWS DeepLens device, a custom-built HD video camera designed to run complex machine learning models for video and object recognition, in just a few clicks.

The summit will also cover the latest on AWS IoT Core and the AWS IoT Button. AWS IoT Core is a platform that enables you to connect devices to AWS services and other devices. It secures data and interactions, and lets applications interact with devices even when they are offline. The AWS IoT Button is a programmable button based on the Amazon Dash Button hardware. It is a simple Wi-Fi device, easy to configure and designed for developers to get started with AWS IoT Core, AWS Lambda, Amazon DynamoDB, Amazon SNS, and many other Amazon Web Services without writing device-specific code.

For further highlights and the complete agenda of the summit, visit the AWS website.
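To give a flavour of how little device-specific code the AWS IoT Button requires, here is a minimal Lambda handler sketch. The `clickType` values (SINGLE, DOUBLE, LONG) follow the button's documented event payload, but the serial number and the actions mapped to each click type are invented for illustration.

```python
# Sketch of an AWS Lambda handler for AWS IoT Button click events.
# The clickType values mirror the button's documented payload; the
# serial number and per-click actions below are invented examples.
def lambda_handler(event, context):
    actions = {
        "SINGLE": "toggle-light",
        "DOUBLE": "send-notification",
        "LONG": "order-supplies",
    }
    click = event.get("clickType", "SINGLE")
    return {
        "device": event.get("serialNumber", "unknown"),
        "action": actions.get(click, "no-op"),
    }

result = lambda_handler({"serialNumber": "EXAMPLE-0001", "clickType": "DOUBLE"}, None)
print(result)
```

All of the Wi-Fi handling, authentication, and message delivery happens in the button firmware and AWS IoT Core; the developer only writes the handler that reacts to the click.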