
Tech News - IoT & Hardware

119 Articles

Infosys and Siemens collaborate to build IoT solutions on MindSphere

Savia Lobo
06 Jul 2018
2 min read
Infosys recently announced a partnership with Siemens to build applications for MindSphere, Siemens' open, cloud-based IoT operating system. MindSphere connects real-world objects (industrial machinery, systems, equipment, and so on) to the digital world and applies advanced analytics to the data they generate, providing industry applications and services that help businesses put that data to work. Through this collaboration, Infosys and Siemens will enable customers to leverage the true power of the data generated by their devices. The initial focus will be on customers in the manufacturing, energy, utilities, healthcare, pharmaceutical, transportation, and logistics industries.

How Infosys plans to support Siemens' MindSphere:

- Offer end-to-end implementation services and post-implementation support for MindSphere
- Use its repository of Industry 4.0 accelerators, platform tools, and similar assets to help customers get on board quickly
- Give customers a more efficient experience through data analytics features such as predictive maintenance and end-to-end factory visibility
- Help customers monetize new data-driven services

Ravi Kumar S, President and Deputy COO, Infosys, says, "There is an increasing need for enterprises to accelerate their digital journey and to deliver new and innovative services. This partnership will help us bring exciting solutions to our customers that will combine strategic insights and execution excellence."

Infosys' expertise in industrial engineering, industrial analytics, AR, and VR, combined with Siemens' strength in manufacturing industrial assets, brings valuable digital services to customers across sectors. Know more about the partnership alliance on the Infosys blog post.

Read Next:
- 5 DIY IoT projects you can build under $50
- Build an IoT application with Google Cloud [Tutorial]
- Google releases Android Things library for Google Cloud IoT Core


Google becomes new platinum member of the Linux foundation

Savia Lobo
29 Jun 2018
2 min read
Google is the newest platinum member of the Linux Foundation. Google gains platinum-member rights, and the Linux community gains a substantial financial boost: the annual platinum membership will cost Google around $500,000. The Linux Foundation, for its part, is thrilled to have Google on board, as Google is one of the biggest contributors to and supporters of open source in the tech world. Google also builds on one of the most important open source projects for its operating systems, the Linux kernel: both Android and Chrome OS are Linux-based. The membership also secures a seat on the Linux Foundation's Board of Directors for Sarah Novotny, Google's head of open source strategy for the Google Cloud Platform.

On the appointment, Sarah said, "Open source is an essential part of Google's culture, and we've long recognized the potential of open ecosystems to grow quickly, be more resilient and adaptable in the face of change, and create better software. The Linux Foundation is a fixture in the open source community. By working closely with the organization, we can better engage with the community-at-large and continue to build a more inclusive ecosystem where everyone can benefit."

Google joins the Linux Foundation's other platinum members, including Microsoft, Intel, Huawei, Samsung, and Facebook. Read more about this at the Linux Foundation's official announcement.

Read Next:
- Tencent becomes a platinum member of the Linux Foundation
- Machine learning APIs for Google Cloud Platform
- Google introduces Machine Learning courses for AI beginners


Microsoft Azure IoT Edge is open source and generally available!

Savia Lobo
29 Jun 2018
3 min read
Microsoft recently announced that Azure IoT Edge is generally available and open source. Its preview was announced at Microsoft Build 2017, where the company described how the service extends cloud intelligence to edge devices. Azure IoT Edge is a fully managed cloud service that helps enterprises generate useful insights from the data collected by Internet of Things (IoT) devices. It lets one deploy and run artificial intelligence services, Azure services, and custom logic directly on cross-platform IoT devices, delivering cloud intelligence locally.

Additional features in Azure IoT Edge include:

- Support for the Moby container management system: Docker is built on Moby, an open-source platform, which allows Microsoft to extend the concepts of containerization, isolation, and management from the cloud to devices at the edge.
- Azure IoT Device Provisioning Service: lets customers securely provision huge numbers of devices, making edge deployments more scalable.
- Tooling for VS Code: enables easy module development through coding, testing, debugging, and deploying.
- Azure IoT Edge security manager: acts as a hardened security core for protecting the IoT Edge device and all its components by abstracting the secure silicon hardware.
- Automatic Device Management (ADM): allows scaled deployment of IoT Edge modules to a fleet of devices based on device metadata. When a device with the right metadata (tags) joins the fleet, ADM pulls down the right modules and puts the edge device in the correct state.
- CI/CD pipeline with VSTS: manages the complete lifecycle of Azure IoT Edge modules across development, testing, staging, and final deployment.
- Broad language support for module SDKs: Azure IoT Edge supports more languages than other edge offerings on the market, including C#, C, Node.js, Python, and Java, so one can program edge modules in the language of one's choice.

An Azure IoT Edge deployment requires three components: the Azure IoT Edge runtime, an Azure IoT Hub, and Edge modules. The Azure IoT Edge runtime is free and available as open source code. Customers need an Azure IoT Hub instance for edge device management and deployment if they are not already using one for their IoT solution. Read the full news coverage at the Microsoft Azure IoT blog post.

Read Next:
- Microsoft commits $5 billion to IoT projects
- Epicor partners with Microsoft Azure to adopt Cloud ERP
- Introduction to IoT
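The ADM behavior described above — matching module deployments to devices by metadata tags — can be sketched in a few lines. This is an illustrative simulation only, not the Azure IoT SDK; the device records, tag names, and module names are hypothetical.

```python
# Illustrative sketch of Automatic Device Management (ADM) style targeting:
# a deployment declares a target condition over device tags, and any device
# whose metadata matches receives that deployment's modules.
# Hypothetical data and names; not the Azure IoT SDK.

def matches(device_tags, target_condition):
    """True if every tag in the target condition matches the device's tags."""
    return all(device_tags.get(k) == v for k, v in target_condition.items())

def assign_modules(devices, deployments):
    """Map each device ID to the modules of every deployment it matches."""
    assignments = {}
    for device in devices:
        modules = []
        for dep in deployments:
            if matches(device["tags"], dep["target"]):
                modules.extend(dep["modules"])
        assignments[device["id"]] = modules
    return assignments

devices = [
    {"id": "edge-01", "tags": {"site": "factory-a", "type": "camera"}},
    {"id": "edge-02", "tags": {"site": "factory-b", "type": "sensor"}},
]
deployments = [
    {"target": {"site": "factory-a"}, "modules": ["objectDetection"]},
    {"target": {"type": "sensor"}, "modules": ["telemetryFilter"]},
]

print(assign_modules(devices, deployments))
# {'edge-01': ['objectDetection'], 'edge-02': ['telemetryFilter']}
```

In the real service the target condition is an expression over device twin tags, but the tag-matching idea is the same: adding a correctly tagged device to the fleet is enough to bring it to the desired state.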


Amazon Alexa and AWS helping NASA improve their efficiency

Gebin George
22 Jun 2018
2 min read
While most people are busy playing songs and issuing voice commands to Amazon Alexa, the US space agency NASA is using Amazon's voice assistant to organize its data-centric tasks efficiently.

Tom Soderstrom, Chief Technology and Innovation Officer at NASA, said: "If you have an Alexa-controlled Amazon Echo smart speaker at home, tell her to enable the 'NASA Mars' app. Once done, ask Alexa anything about the Red Planet and she will come back with all the right answers. This enables serverless computing where we don't need to build for scale but for real-life work cases and get the desired results in a much cheaper way. Remember that voice as a platform is poised to give 10 times faster results. It is kind of a virtual helpdesk. Alexa doesn't need to know where the data is stored or what the passwords are to access that data. She scans and quickly provides us what we need. The only challenge now is to figure out how to communicate better with digital assistants and chatbots to make voice a more powerful medium."

Serverless computing gives developers the flexibility to deploy and run applications and services without thinking about scale or server management, and AWS is the market leader in fully managed infrastructure services, helping organizations focus more on product development. Alexa, for example, can help employees at JPL (a federally funded research and development center managed for NASA) scan through 400,000 subcontracts and pull the requested copy of a contract from the data set right onto the desktop in a jiffy. JPL has also integrated its conference rooms with Alexa and IoT sensors, which helps staff resolve queries quickly.

One JPL executive also stressed that AI is not going to take away human jobs: "AI will transform industries ranging from healthcare to retail and e-commerce and auto and transportation. Sectors that won't embrace AI will be left behind. Humans are 80 percent effective and machines are also 80 percent effective. When you bring them together, they're nearly 95 percent effective." Voice-controlled, AI-powered digital assistants, it seems, are here to stay, empowering digital transformation.

Read Next:
- How to add an intent to your Amazon Echo skills
- Microsoft commits $5 billion to IoT projects
- Building Voice technology on IoT projects


ROS Melodic Morenia released

Gebin George
28 May 2018
2 min read
ROS is a middleware: a set of tools and software frameworks for building and simulating robots. ROS follows a stable release cycle, with a new version arriving every year on May 23rd. This year, on that date, ROS released its Melodic Morenia version with a decent number of enhancements and upgrades. Highlights from the release notes:

class_loader header deprecation
class_loader's headers have been renamed and the previous ones deprecated, in an effort to bring the package closer to multi-platform support and its ROS 2 counterpart. A migration script is provided for the header replacements, and PRs will be filed for all the packages in the previous ROS distribution.

kdl_parser package enhancement
kdl_parser has deprecated a method that depended on tinyxml (which was already deprecated itself). The tinyxml2 replacement is:

    bool treeFromXml(const tinyxml2::XMLDocument * xml_doc, KDL::Tree & tree)

The deprecated API will be removed in N-turtle.

OpenCV version update
For standardization reasons, OpenCV usage is restricted to version 3.2.

Enhancements in pluginlib
As with class_loader, headers were deprecated here as well to bring the package closer to multi-platform support. plugin_tool, which had been deprecated for years, has finally been removed in this version.

For more updates on ROS packages, refer to the ROS wiki page.


What we learned at the ICRA 2018 conference for robotics & automation

Savia Lobo
25 May 2018
5 min read
This year's ICRA 2018 conference featured interactive sessions, keynotes, exhibitions, workshops, and much more. Below are some of the most interesting keynotes on machine learning, robotics, and related topics.

Note: The International Conference on Robotics and Automation (ICRA) is an international forum where robotics researchers present their work, and a flagship conference of the IEEE Robotics and Automation Society. Held at the Brisbane Convention and Exhibition Centre from the 21st to the 25th of May, 2018, the conference brought together experts at the frontier of science and technology in robotics and automation.

Implementing machine learning for safe, high-performance control of mobile robots
Traditional control algorithms are designed around a priori knowledge of the system and its environment, including the system dynamics and an environment map. That approach lets a system work well in a predictable environment, but when the system lacks details about its environment, it can suffer heavy performance losses. To build systems that work efficiently in unknown and uncertain situations, the speaker, Prof. Angela Schoellig, introduced systems that learn during operation and adapt their behavior accordingly. Angela presented several approaches for online, data-efficient, safety-guaranteed learning for robot control. These algorithms can:

- leverage insights from control theory;
- make use of neural networks and Gaussian processes, state-of-the-art probabilistic learning methods; and
- take into account any prior knowledge about system dynamics.

She also demonstrated how such robot control and learning algorithms can be safe and effective in real-world scenarios; you can watch her video demonstrating these algorithms on self-flying and self-driving vehicles and mobile manipulators.

Meta-learning and the art of learning to learn
In his talk on meta-learning (learning to learn), Pieter Abbeel explained how reinforcement learning and imitation learning have succeeded in domains such as Atari and Go. (You can also check out the 6 key challenges in deep learning for robotics that Pieter Abbeel presented at the NIPS 2017 conference.) Humans can draw on past experience by default and so learn new skills far more quickly than machines. Pieter described some of his recent experiments in meta-learning, in which agents learn the imitation or reinforcement learning algorithms themselves and, using those algorithms as a base, learn from past instances much as humans do. Thanks to meta-learning, machines can now acquire a skill from a single demonstration or a few trials. He noted that meta-learning applies to standard few-shot classification benchmarks such as Omniglot and mini-ImageNet. To learn about meta-learning from the ground up, check out our article "What is Meta Learning?", and our coverage of Pieter Abbeel's accepted paper at ICLR 2018.

Robo-peers: robust interaction in human-robot teams
Richard Vaughan explained how robots behave in natural surroundings, i.e., among humans, animals, and peer robots. His team has worked on behavior strategies for mobile robots that give them sensing capabilities, let them behave in sophisticated, human-like ways, and support robust interactions with the world and the agents around them. Richard described a series of vision-mediated human-robot interactions conducted within groups of driving and flying robots; the mechanisms used were simple but highly effective.

From building robots to bridging the gap between robotics and AI
Robots possess smart, reactive, user-centered programming systems through which they physically interact with the world. Today even a layman can use cutting-edge robotics technology for complex tasks such as force-sensitive assembly and safe physical human-robot interaction; Franka Emika's Panda, the first commercial robot system of its kind, is an example of a robot with such abilities. In this talk, Sami Haddadin set out to bridge the gap between model-based nonlinear control algorithms and data-driven machine learning via a holistic approach. He argued that neither pure control-based nor end-to-end learning algorithms come close to human-level general-purpose machine intelligence, and backed the claim with two recent results:

i.) learning exact articulated robot dynamics using first-order-principle networks; and
ii.) learning human-like manipulation skills by combining adaptive impedance control and meta-learning.

Panda was, right from the beginning, released with consistent research interfaces and modules so the robotics and AI community could build on the developments in the field and push the boundaries of manipulation, interaction, and general AI-enhanced robotics. Sami believes this step will enable the community to address the immense challenges in robotics and AI research.

Socially assistive robots: the next-gen healthcare helpers
Goldie Nejat observed that the world's elderly population is rising, and with it dementia, a disease with hardly any cure. Robots, she argued, can become a unique strategic technology and a crucial part of society by helping the aged population with their day-to-day activities. In this talk she presented intelligent assistive robots that can improve the lives of older people, including those suffering from dementia. She discussed how the socially assistive robots Brian, Casper, and Tangy have been designed to autonomously provide cognitive and social interventions, help with activities of daily living, and lead group recreational activities in human-centered environments. These robots can serve individuals as well as groups of users, personalize their interactions to users' needs, and be integrated into the everyday lives of people outside the aged bracket as well.

Read more about the other keynotes and robotics highlights on ICRA's official website.

Read Next:
- How to build an Arduino based 'follow me' drone
- AI powered Robotics: Autonomous machines in the making
- Tips and tricks for troubleshooting and flying drones safely

Partnership alliances of Kontakt.io and IOTA Foundation for IoT and Blockchain

Savia Lobo
23 May 2018
2 min read
Kontakt.io, a leading IoT location platform provider, recently announced a partnership with the IOTA Foundation, the non-profit open-source foundation behind IOTA. The partnership aims to integrate IOTA's next-generation distributed ledger technology into Kontakt.io's location platform, which is designed specifically for condition monitoring and asset tracking. The integration will allow tamper-proof, chargeable readings of smart sensor data. That is valuable for healthcare operators and supply chain firms that monitor environmental conditions for compliance reasons: they can explore fully transparent ways of storing and reporting telemetry data. The partnership between Kontakt.io's IoT platform and IOTA's distributed ledger will encrypt device-to-device and device-to-cloud telemetry so the data remains intact. Customers, including manufacturers, carriers, inspectors, technology providers, and others, can leverage the new technology to:

- increase trust and transparency;
- ease dispute resolution;
- improve detection of compliance breaches; and
- prevent delivery of faulty products.

How Kontakt.io and IOTA benefit each other
IOTA eliminates the cost barrier and needs less computing power to confirm transactions. Unlike conventional blockchains such as Ethereum, IOTA can process many operations in real time, scaling faster as the number of queued transactions grows. That makes proof of work (PoW) possible and efficient in the IoT environment, and IOTA is likely to become the next security standard for IoT. IOTA, in turn, has partnered with Kontakt.io to build the blocks of a smart supply chain on a powerful IoT platform.

Read more about this partnership on Kontakt.io's official website.

Read Next:
- How to run and configure an IoT Gateway
- Build your first Raspberry Pi project
- 5 reasons to choose AWS IoT Core for your next IoT project


Five developer centric sessions at IoT World 2018

Savia Lobo
22 May 2018
6 min read
The Internet of Things has improved remarkably over the years: basic IoT embedded devices with sensors have advanced to the point where AI can be deployed on them to make them smarter. The IoT World 2018 conference was held from May 14th to 17th at the Santa Clara Convention Center, CA, USA. A special developer-centric conference designed specifically for technologists was part of the larger event. Its agenda was to bring together the technical leaders whose innovations have shaped the IoT market and the tech enthusiasts looking to build their careers in this domain; it also included sessions such as SAP learning and interesting keynotes on the intelligent Internet of Things. Here are five sessions that caught our eye at the developer conference at IoT World 2018.

How to develop embedded systems using modern software practices, Kimberly Clavin
Kimberly Clavin highlighted that major challenges in developing autonomous vehicles include system integration and the validation techniques used to ensure quality within the code. Plenty of companies with software at their core use modern practices such as test-driven development (TDD) and continuous integration (CI) for successful development, but the same tactics cannot be applied directly in an embedded environment. Kimberly presented ways to adapt these practices for embedded development, helping developers create systems that are fast, scalable, and cheap. The highlights of this session include:

- learning to test-drive an embedded component;
- understanding how to mock out or simulate an unavailable component; and
- applying TDD, CI, and mocking to achieve a scalable software process on an embedded project.

How to use machine learning to drive intelligence at the edge, Dave Shuman and Vito De Gaetano
Edge IoT has been gaining a lot of traction lately. One way to make the edge intelligent is to build ML models in the cloud and push the learning and the models onto the edge. This session by Dave Shuman and Vito De Gaetano showed how organizations can push intelligence to the edge via an end-to-end open source architecture for IoT, based on Eclipse Kura, an open source stack for gateways and the edge, and Eclipse Kapua, an open source IoT cloud platform. The architecture can enable:

- securely connecting and managing millions of distributed IoT devices and gateways;
- machine learning and analytics capabilities, with intelligence and analytics at the edge;
- a centralized data management and analytics platform with the ability to build or refine machine learning models and push them out to the edge; and
- application development, deployment, and integration services.

The presentation also showcased an Industry 4.0 demo highlighting how to ingest, process, and analyze data coming from factory floors, i.e., from the equipment, and how to enable machine learning at the edge using this data.

How to build complex things in a simplified manner, Ming Zhang
Ming Zhang put forth a simple question: "Why is making hardware so hard?" Some reasons:

- The total time and cost to launch a differentiated product is prohibitively high because of expensive, iterative design, manufacturing, and testing.
- System form factors aren't flexible; connected things require richer features and/or smaller sizes.
- There's unnecessary complexity in the manufacturing and component supply chain.

Designing hardware is a time-consuming, cumbersome process, and not the fun task for designers that software development can be. Ming Zhang showcased a solution, the zGlue ZiPlet Store, a unique platform on which users can build complex things with ease. The zGlue Integrated Platform (ZiP) simplifies the process of designing and manufacturing devices for IoT systems and provides seamless integration of both hardware and software on a modular platform.

Building IoT cloud applications at scale with microservices, Dave Chen
Connectivity, big data, and analytics are transforming several types of business. A major challenge in the IIoT sector is the accumulation of enormous volumes of data generated by machinery and industrial equipment such as wind turbines and sensors; valuable information has to be extracted from this data securely, efficiently, and quickly. Dave Chen's presentation focused on how to leverage microservice design principles and other open source platforms to build an effective IoT device management solution in a microservice-oriented architecture, making it easy and scalable to manage a large population of IoT devices securely. Design patterns are the building blocks of architecture, and they let developers and architects reuse solutions to common problems; the presentation also showed how common design patterns for connected things, common use cases, and infrastructure can accelerate the development of connected devices.

Extending security to low-complexity IoT endpoint devices, Francois Le
Millions of low-compute, low-power IoT sensors and devices are deployed today, and they are predicted to multiply to billions within a decade. Yet these devices have no security at all, even though they hold crucial, real-time information. Low-complexity devices have very limited onboard processing power, little memory and battery capacity, and typically very low cost. They cannot work like IoT edge devices, which can easily handle validation and encryption and have ample processing power for the multiple message exchanges used in authentication. The presentation argued that a new security scheme needs to be designed from the ground up, one that takes less space on the processor and has a low impact on battery life and cost. The solution should be:

- IoT platform agnostic and easy for IoT vendors to implement;
- able to operate seamlessly over any wireless technology (e.g., Zigbee, BLE, LoRa);
- transparent to the existing network implementation;
- automated and scalable to very high volumes;
- able to evolve as new security and encryption techniques are released; and
- long-lived in the field, with no need to update the edge devices with security patches.

Apart from these, many other presentations were showcased for developers at IoT World 2018, including "Minimize Cybersecurity Risks in the Development of IoT Solutions" and "Internet of Things (IoT) Edge Analytics: Do's and Don'ts". Read more about the keynotes presented at this exciting IoT World 2018 conference on the official website.

Read Next:
- Cognitive IoT: How Artificial Intelligence is remoulding Industrial and Consumer IoT
- How IoT is going to change tech teams
- AWS Greengrass brings machine learning to the edge
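The TDD-with-mocking idea from Kimberly Clavin's session can be illustrated with a small sketch: the control logic is test-driven while the unavailable hardware (here, a temperature sensor) is mocked out. The sensor interface and the FanController class are hypothetical examples, not from the talk.

```python
# Sketch of test-driving an embedded component against mocked hardware.
# The TemperatureSensor interface and FanController logic are hypothetical.
from unittest import mock

class FanController:
    """Turns a fan on when the sensed temperature crosses a threshold."""
    def __init__(self, sensor, threshold_c=40.0):
        self.sensor = sensor
        self.threshold_c = threshold_c
        self.fan_on = False

    def update(self):
        # Poll the sensor and switch the fan accordingly.
        self.fan_on = self.sensor.read_celsius() > self.threshold_c
        return self.fan_on

# The real sensor hardware may not exist yet; mock it so the logic
# can be developed test-first on a normal workstation.
hot_sensor = mock.Mock()
hot_sensor.read_celsius.return_value = 55.0
cool_sensor = mock.Mock()
cool_sensor.read_celsius.return_value = 21.5

assert FanController(hot_sensor).update() is True
assert FanController(cool_sensor).update() is False
print("controller logic verified against mocked hardware")
```

The same tests later run unchanged against a thin driver for the real sensor, which is the point of mocking out the unavailable component.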


Windows 10 IoT Core: What you need to know

Vijin Boricha
07 May 2018
4 min read
Microsoft initially offered Windows IoT under the name Windows Embedded. It was rebranded with the release of Windows 10, when Microsoft introduced twelve versions of Windows 10 that varied in the features delivered, use cases, and the devices they supported. With that, Microsoft gained a fighting place in the world of IoT with Windows 10 IoT, which consists of two products catering to different customer bases: Windows 10 IoT Core and Windows 10 IoT Enterprise. Since IoT is still evolving among major enterprises, we will focus on Windows 10 IoT Core today.

Windows 10 IoT Core is an optimized version of Windows 10 designed for smaller devices, with or without a display, that run on both ARM and x86/x64 hardware. It is built to work on devices such as the Raspberry Pi, Arduino, and other popular single-board computers, and it uses the extensible Universal Windows Platform (UWP) API to build great solutions.

The IoT domain has always favored traditional open source operating systems, especially Linux distributions. Over the past couple of years Windows has started to find its way into this domain and has proven an advantageous alternative in many ways. Initially, setting up Windows 10 IoT Core and installing the image was a chore, but Microsoft has recently focused on alleviating these small pain points and has things sorted for Windows users. When it comes to developing IoT applications, open source distros make building beautiful user interfaces difficult, but with Windows this can be achieved thanks to Visual Studio. Visual Studio has always been a great environment to code in, and if you are strong with C#, it can definitely be your go-to platform. I emphasize Windows users because if you want to use or develop on Windows 10 IoT Core, you strictly need Windows 10, which isn't open source. That may never change: no doubt Microsoft wants to sell its software while keeping its existing users happy, which is only possible when Microsoft services work best in Microsoft's own environment.

Wondering what you could build with Windows 10 IoT Core and a Raspberry Pi or Arduino? Here are some project ideas you might be interested in:

- Obstacle-avoiding robot: a basic project to help you get used to the new ecosystem you have adopted.
- Room light and temperature manager: home automation tweaks that help you automate your room environment.
- Personal car data monitor: an intermediate project where your IoT application reveals the health of your vehicle before you start your ride.
- Pet feeder: a project involving cloud platforms, letting you feed your pet while you're at the office or at your neighbour's instead of letting them starve.

IoT has now reached the stage where the virtual world of information technology connects to the real world. Initially this was possible only through Linux-based ecosystems, but with Windows 10 IoT in the picture there has been quite a shift in the IoT market: users have observed that, despite running on smaller devices, Windows 10 IoT manages to offer most of the essential features of its parent, Windows 10. The world may still look like a Linux base, and deploying Python programs may look easier, but it's best to keep your options open, and in this case you have a trusted platform in Windows.

Read Next:
- 5 reasons to choose AWS IoT Core for your next IoT project
- Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?
- Splunk Industrial Asset Intelligence (Splunk IAI) targets Industrial IoT marketplace

Savia Lobo
02 May 2018
3 min read

Nvidia Tesla V100 GPUs publicly available in beta on Google Compute Engine and Kubernetes Engine

Nvidia Tesla V100 GPUs are now publicly available in beta on Google Compute Engine and Kubernetes Engine, and Nvidia Tesla P100 GPUs are now generally available.

Nvidia pitches a single Tesla V100 as offering the performance of up to 100 CPUs, giving customers more power to handle computationally demanding applications such as machine learning, analytics, and video processing. One can select as many as eight Nvidia Tesla V100 GPUs, 96 vCPUs, and 624 GB of system memory in a single VM, receiving up to 1 petaflop of mixed-precision hardware acceleration performance.

Nvidia V100s are available immediately in the following regions: us-west1, us-central1, and europe-west4. Each V100 GPU is priced as low as $2.48 per hour for on-demand VMs and $1.24 per hour for preemptible VMs. Making the Tesla V100 available on Compute Engine is part of Google's GPU expansion strategy. Like Google's other GPUs, the V100 is billed by the second, and Sustained Use Discounts apply.

The Nvidia Tesla P100 GPU, on the other hand, is a good fit if you want a balance between price and performance. One can select up to four P100 GPUs, 96 vCPUs, and 624 GB of memory per virtual machine. The P100 is also now available in europe-west4 (Netherlands), in addition to us-west1, us-central1, us-east1, europe-west1, and asia-east1.

Notes:
* The maximum vCPU count and system memory limit on the instance might be smaller depending on the zone or the number of GPUs selected.
** GPU prices are listed as an hourly rate per GPU attached to a VM and are billed by the second. Pricing for attaching GPUs to preemptible VMs differs from pricing for non-preemptible VMs. Prices listed are for US regions; other regions may differ. Additional Sustained Use Discounts of up to 30% apply to GPU on-demand usage only.
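Because GPUs are billed by the second, estimating the GPU portion of a job's cost from the hourly rates quoted above is simple multiplication. A quick sketch, not an official billing formula: it ignores the VM's own vCPU and memory charges and any applicable discounts.

```python
# Rough cost estimate for V100 GPU time on Compute Engine, using the
# published per-GPU hourly rates ($2.48 on-demand, $1.24 preemptible).
# Per-second billing means cost scales linearly with attached time.

ON_DEMAND_PER_HOUR = 2.48
PREEMPTIBLE_PER_HOUR = 1.24

def gpu_cost(hours, gpus, rate_per_hour):
    """Total GPU cost for `gpus` V100s attached for `hours` (fractions OK)."""
    return round(hours * gpus * rate_per_hour, 2)

# An 8-GPU VM (the per-VM maximum) running for 10 hours:
on_demand = gpu_cost(10, 8, ON_DEMAND_PER_HOUR)      # 198.4
preemptible = gpu_cost(10, 8, PREEMPTIBLE_PER_HOUR)  # 99.2
```

The factor-of-two gap between the two rates is why preemptible VMs are attractive for fault-tolerant batch workloads such as model training with checkpointing.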
Google Cloud makes managing GPU workloads easy for both VMs and containers:

- Compute Engine: customers can use instance templates and managed instance groups to easily create and scale GPU infrastructure.
- Kubernetes Engine: Nvidia V100s and other GPU offerings are available, and the Cluster Autoscaler provides flexibility by automatically creating nodes with GPUs and scaling them down to zero when they are no longer in use.
- Preemptible GPUs: available for both Compute Engine managed instance groups and Kubernetes Engine's autoscaler, these optimize costs while simplifying infrastructure operations.

Read more about both GPUs in detail on the Google Research Blog, and about the benefits of each in the Nvidia V100 and Nvidia P100 blog posts.

Google announce the largest overhaul of their Cloud Speech-to-Text
Google's kaniko – An open-source build tool for Docker Images in Kubernetes, without a root access
How machine learning as a service is transforming cloud
Savia Lobo
17 Apr 2018
2 min read

Splunk Industrial Asset Intelligence (Splunk IAI) targets Industrial IoT marketplace

Splunk has announced Splunk Industrial Asset Intelligence (Splunk IAI), now available in limited release. General availability is expected in the fall of this year.

What is Splunk Industrial Asset Intelligence?

Splunk IAI makes it easy for customers to apply Splunk's data analytics capabilities to data from industrial systems and devices. Critical industrial systems often lack real-time visibility, which leads to a reactive approach to managing industrial operations, with problems solved by intuition rather than by data. Splunk IAI has been introduced to combat these challenges for companies in manufacturing, energy, transportation, oil and gas, and other industrial verticals. It is built on top of the Splunk Enterprise machine data platform.

The benefits of Splunk Industrial Asset Intelligence

Benefits of Splunk IAI include:

- It correlates data from Industrial Control Systems (ICS), sensors, SCADA systems, and applications, making it easy to monitor and diagnose equipment and operational issues in real time.
- It enables customers to respond to issues faster without affecting production, where unplanned downtime can equate to millions of dollars in lost revenue.
- Its packaged set of capabilities integrates easily with the existing Splunk platform.
- It offers a single solution that keeps industrial systems running at full capacity, enabling organisations to save significant resources and money otherwise lost to unplanned downtime.

To learn more about Splunk Industrial Asset Intelligence, visit Splunk's website.

Savia Lobo
16 Apr 2018
7 min read

AI-powered Robotics: Autonomous machines in the making

Say "robot" to someone today, and Sophia the humanoid is probably the first thing that comes to mind. That is how far robotics has come, supercharged by artificial intelligence. Robotics and artificial intelligence are often confused, but there is a clear distinction between the two. Traditional robots are pre-programmed humanoids or machines meant to do specific tasks irrespective of the environment they are placed in; they therefore show no intelligent behaviour. With a sprinkle of artificial intelligence, these robots are transformed into artificially intelligent robots, controlled by AI programs that make them capable of taking decisions when confronted with real-world situations.

How has AI helped robotics?

You can loosely classify artificial intelligence as general or narrow, based on the level of task specificity. General AI would be the kind seen in Terminator or The Matrix: it imparts knowledge and capabilities to machines that are almost on par with humans. However, general AI is far in the future and does not exist yet. Current robots are designed to assist humans in day-to-day tasks within specific domains. For instance, the Roomba vacuum cleaner is largely automated, with very little human intervention. The cleaner can make decisions when confronted with choices, such as the way ahead being blocked by a couch: it might decide to turn left because it has already vacuumed the carpet to the right.

Let's look at some basic capabilities that artificial intelligence has brought to robotics, using a self-driving car as the running example:

- Adding the power of perception and reasoning: novel sensors, including sonar, infrared, and Kinect sensors, give robots good perception skills with which they can adapt to new situations. With the help of these sensors, our self-driving car takes input data from the environment (identifying roadblocks, signals, objects such as people, and other cars), labels it, transforms it into knowledge, and interprets it. It then modifies its behaviour based on this perception and takes the necessary actions.
- Learning process: with new experiences such as heavy traffic or a detour, the self-driving car must perceive and reason in order to reach conclusions. The AI builds a learning process as similar experiences are repeated, storing knowledge and speeding up intelligent responses.
- Making correct decisions: with AI, the driverless car gains the ability to prioritize actions, such as taking another route in case of an accident or detour, or braking suddenly when a pedestrian or object appears, so that the decisions it makes are safe and effective.
- Effective human interaction: this is the most prominent capability, enabled by natural language processing (NLP). The driverless car accepts and understands passenger commands through in-car voice commands based on NLP: the AI in the car understands the meaning of natural human language and readily responds to the query thrown at it. For instance, given a destination address from the passenger, it will drive along the fastest route to get there. NLP also helps in understanding human emotions and sentiments.

Real-world applications of AI in robotics

Sophia the humanoid is by far the best real-world amalgamation of robotics and artificial intelligence. There are, however, other real-world use cases of AI in robotics with practical applications:

Self-supervised learning: this allows robots to create their own training examples to improve performance.
For instance, when a robot has to interpret ambiguous long-range sensor data, it uses a priori training and data captured at close range. This knowledge is later incorporated into the robots, and into optical devices that can detect and reject objects (dust and snow, for example). The robot is then capable of detecting obstacles and objects in rough terrain, and of 3D scene analysis and modeling vehicle dynamics. One example of a self-supervised learning algorithm is a road-detection algorithm designed at MIT for autonomous vehicles and other mobile on-road robots, in which a front-view monocular camera uses a road probabilistic distribution model (RPDM) and fuzzy support vector machines (FSVMs).

Medical field: in the medical sphere, a collaboration through Cal-MR (the Center for Automation and Learning for Medical Robotics) between researchers at multiple universities and a network of physicians created the Smart Tissue Autonomous Robot (STAR). Using innovations in autonomous learning and 3D sensing, STAR is able to stitch together pig intestines (used in place of human tissue) with better precision and reliability than the best human surgeons. STAR is not a replacement for surgeons, but in the future it could remain on standby to handle emergencies and assist surgeons in complex surgical procedures, offering major benefits in similarly delicate surgeries.

Assistive robots: these are robots that sense, process sensory information, and perform actions that benefit not only the general public but also people with disabilities and senior citizens. For instance, Bosch's driving assistance systems are equipped with radar sensors and video cameras, allowing them to detect road users even in complex traffic situations. Another example is the MICO robotic arm, which uses a Kinect sensor.

Challenges in adopting AI in robotics

An AI-powered robot means less pre-programming, the replacement of manpower, and so on.
There is always a fear that robots may outperform humans in decision making and other intellectual tasks. However, one has to take risks to explore what this partnership could lead to. Building an AI environment into robotics is obviously no cakewalk, and there are challenges that experts will face. Some of them include:

- Legal aspects: robots are, after all, machines. What if something goes wrong? Who would be liable? One way to mitigate bad outcomes is to develop extensive testing protocols for the design of AI algorithms, improved cybersecurity protections, and input validation standards. This requires not only AI experts with a deep understanding of the technologies, but also experts from other disciplines such as law, the social sciences, and economics.
- Getting used to an automated environment: while traditional robots had to be fully pre-programmed, with AI this changes to a certain extent: experts feed in the initial algorithms, and further changes are adapted by the robot through self-learning. AI is feared for its capacity to take over jobs and automate many processes, so broad acceptance of the new technology is required, along with a careful, managed transition for workers.
- Quick learning with fewer samples: the AI systems within robots should help them learn quickly even when the supply of data is limited, unlike deep learning, which requires vast amounts of data to produce an output.

The AI-robotics fortune

The future of this partnership is bright, as robots become more self-dependent and may well assist humans in their decision making. For now, though, much of this remains the stuff of fiction: at present we mostly have semi-supervised learning, which requires a human touch for the essential functioning of AI systems. Unsupervised learning, one-shot learning, and meta-learning techniques are also creeping in, promising machines that will no longer require human intervention or guidance. Robotics manufacturers such as Silicon Valley Robotics and Mayfield Robotics, together with auto manufacturers such as Toyota and BMW, are on a path to create autonomous vehicles, which implies that AI is becoming a priority investment for many.
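The self-supervised road-detection idea described earlier, in which trusted close-range data labels ambiguous long-range data with no human annotation, can be caricatured in a few lines of Python. Everything here (the readings, the "road"/"obstacle" labels, the midpoint-threshold rule) is an invented toy, not the MIT algorithm:

```python
# Toy self-supervised labelling: readings taken at close range come with
# reliable labels (from a trusted short-range sensor), so they are used
# to learn a decision threshold that then labels ambiguous long-range
# readings. No human-provided labels are involved.

def learn_threshold(close_range):
    """From (value, label) pairs captured at close range, derive a
    midpoint threshold separating 'road' from 'obstacle' readings."""
    road = [v for v, label in close_range if label == "road"]
    obstacle = [v for v, label in close_range if label == "obstacle"]
    return (max(road) + min(obstacle)) / 2

def classify_far(readings, threshold):
    """Label long-range readings with the self-learned threshold."""
    return ["obstacle" if v >= threshold else "road" for v in readings]
```

For example, close-range pairs `[(0.1, "road"), (0.2, "road"), (0.8, "obstacle"), (0.9, "obstacle")]` yield a threshold of 0.5, which then labels the ambiguous far readings `[0.3, 0.7]` as `["road", "obstacle"]`.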

Richard Gall
09 Apr 2018
3 min read

AWS Greengrass brings machine learning to the edge

AWS already has solutions for machine learning, edge computing, and IoT, but a recent update to AWS Greengrass combines all of these facets so you can deploy machine learning models to the edge of networks. That's an important step forward in the IoT space for AWS. With Microsoft also recently announcing a $5 billion investment in IoT projects over the next four years, by extending the capability of AWS Greengrass the AWS team is making sure it sets the pace in the industry.

Jeff Barr, AWS evangelist, explained the idea in a post on the AWS blog:

"...You can now perform Machine Learning inference at the edge using AWS Greengrass. This allows you to use the power of the AWS cloud (including fast, powerful instances equipped with GPUs) to build, train, and test your ML models before deploying them to small, low-powered, intermittently-connected IoT devices running in those factories, vehicles, mines, fields..."

Industrial applications of machine learning inference

Machine learning inference brings many advantages to industry and agriculture. For example:

- In farming, edge-enabled machine learning systems will be able to monitor crops using image recognition; this in turn will enable corrective action to be taken, allowing farmers to optimize yields.
- In manufacturing, machine learning inference at the edge should improve operational efficiency by making it easier to spot faults before they occur. For example, by monitoring vibrations or noise levels, Barr explains, you'll be able to identify faulty or failing machines before they actually break.

Running this on AWS Greengrass offers a number of advantages over running machine learning models and processing data locally: it means you can run complex models without draining your computing resources. Read more in detail in the AWS Greengrass Developer Guide.
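Barr's vibration-monitoring example reduces, in its simplest form, to comparing recent readings against a learned baseline on the device itself. A toy Python sketch follows; it is a stand-in for a real trained model deployed through Greengrass, and all thresholds and numbers are illustrative:

```python
# Edge-style fault detection on vibration readings: flag a machine when
# its recent readings drift well above the learned baseline. A toy
# stand-in for a trained model running locally on an IoT device.

from statistics import mean, stdev

def is_faulty(baseline, recent, sigma=3.0):
    """Flag a fault when the recent mean exceeds the baseline mean
    by more than `sigma` baseline standard deviations."""
    return mean(recent) > mean(baseline) + sigma * stdev(baseline)
```

The point of running this at the edge is that only the verdict (or an alert) needs to cross the intermittent network connection, while the heavy lifting of training the baseline model can happen in the cloud.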
AWS Greengrass should simplify machine learning inference

One of the fundamental benefits of AWS Greengrass is that it simplifies machine learning inference at every stage of the typical machine learning workflow. From building and deploying machine learning models to developing inference applications that can be launched locally within an IoT network, it should, in theory, make the advantages of machine learning inference accessible to more people.

It will be interesting to see how this new feature is applied by IoT engineers over the next year or so, and whether it has any impact on the wider battle for the future of industrial IoT.

Further reading:
What is edge computing?
AWS IoT Analytics: The easiest way to run analytics on IoT data, Amazon says
What you need to know about IoT product development
Richard Gall
06 Apr 2018
2 min read

Microsoft commits $5 billion to IoT projects

Microsoft has announced that it will pour $5 billion into IoT over the next four years. To date, Microsoft has spent $1.5 billion, so this move could be viewed as a step change in the organization's commitment to IoT.

This makes sense for Microsoft. The company has fallen behind in the consumer technology race and appears to be moving towards cloud and infrastructure projects instead. Azure has given it a strong position, but with AWS setting the pace in the cloud field, Microsoft needs to move quickly if it is to position itself as the frontrunner in the future of IoT. Julia White, CVP of Azure, said this:

"With our IoT platform spanning cloud, OS and devices, we are uniquely positioned to simplify the IoT journey so any customer—regardless of size, technical expertise, budget, industry or other factors—can create trusted, connected solutions that improve business and customer experiences, as well as the daily lives of people all over the world. The investment we’re announcing today will ensure we continue to meet all our customers’ needs both now and in the future."

The timing of this huge investment has not gone unnoticed. At the end of March, Microsoft revealed that it was reorganizing to place greater strategic attention on the 'intelligent cloud and intelligent edge'. It is no coincidence that the senior executive set to leave is Terry Myerson, the man who has led the Windows side of the business since 2013.

However, the extent to which this announcement is really that much of a pivot is questionable. In The Register, Simon Sharwood writes:

"Five billion bucks is a lot of money. But not quite so impressive once you realise that Microsoft spent $13.0bn on R&D in FY 2017 and $12bn in each of FY 16 and 15. Five billion spread across the next four years may well be less than ten per cent of all R&D spend."

The analysis from many quarters of the tech media is that this marks what many have been thinking: a managed decline of Windows in favour of Microsoft's move into the cloud and infrastructure space. It is hard to see past that, but it will be interesting to see how Microsoft continues to respond to competition from the likes of Amazon.
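Sharwood's back-of-the-envelope claim is easy to check: $5bn spread over four years is $1.25bn a year, which against the FY 2017 R&D figure of $13.0bn quoted above comes to under ten per cent. In Python, using only the figures from the article:

```python
# Sanity check on The Register's arithmetic: the annualized IoT
# commitment measured against Microsoft's FY 2017 R&D spend.
# All figures in $bn, taken from the article.

iot_commitment, years = 5.0, 4
annual_iot = iot_commitment / years   # 1.25 per year
rd_fy2017 = 13.0

share = annual_iot / rd_fy2017        # about 0.096, i.e. under 10%
print(f"Annualized IoT commitment is {share:.1%} of FY 2017 R&D spend")
```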