Tech News - IoT & Hardware

119 Articles

Google releases two new hardware products, Coral dev board and a USB accelerator built around its Edge TPU chip

Sugandha Lahoti
06 Mar 2019
2 min read
Google teased its new hardware products built around its Edge TPU at the Google Next conference last summer. Yesterday, it officially launched the Coral Dev Board, a Raspberry Pi look-alike designed to run machine learning algorithms 'at the edge', and a USB Accelerator.

Coral Development Board

The Coral Dev Board is a 40-pin-header board that runs Linux on an i.MX 8M SoC with an Edge TPU chip for accelerating TensorFlow Lite. The board also features 8GB eMMC storage, 1GB LPDDR4 RAM, Wi-Fi, and Bluetooth 4.1. It has USB 2.0/3.0 ports, a 3.5mm audio jack, a DSI display interface, a MIPI-CSI camera interface, an HDMI 2.0a connector, and two digital PDM microphones.

[Image source: Google]

The Coral Dev Board can be used as a single-board computer when you need accelerated ML processing in a small form factor. It can also serve as an evaluation kit for the SoM and for prototyping IoT devices and other embedded systems. The board is available for $149.00. Google has also announced a $25 MIPI-CSI 5-megapixel camera for the dev board.

USB Accelerator

The USB Accelerator is essentially a plug-in USB 3.0 stick that adds machine learning capabilities to existing Linux machines. This 65 x 30 mm accelerator connects to Linux-based systems via a USB Type-C port, and it can also work with a Raspberry Pi board at USB 2.0 speeds. The accelerator is built around a 32-bit, 32MHz Cortex-M0+ chip with 16KB of flash and 2KB of RAM.

[Image source: Google]

The USB Accelerator is available for $75. Developers can build machine learning models for both devices in TensorFlow Lite (a rough sketch follows below); more information is available on Google's Coral Beta website. Coming soon is a PCI-E Accelerator for integrating the Edge TPU into legacy systems using a PCI-E interface, as well as a fully integrated System-on-Module with CPU, GPU, Edge TPU, Wi-Fi, Bluetooth, and a Secure Element in a 40mm x 40mm pluggable module.

Related reading:
Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha)
Intel acquires eASIC, a custom chip (FPGA) maker for IoT, cloud and 5G environments
Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25
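The TensorFlow Lite workflow mentioned above can be sketched in a few lines of Python. This is a hedged illustration rather than official Coral sample code: it assumes the Edge TPU runtime (libedgetpu) and the tflite_runtime package are installed, and the model filename is hypothetical.

```python
# Minimal sketch: run an Edge TPU-compiled TensorFlow Lite model.
# Assumes the Edge TPU runtime (libedgetpu) and tflite_runtime are installed,
# and that 'model_edgetpu.tflite' was compiled for the Edge TPU.
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="model_edgetpu.tflite",  # hypothetical model file
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy uint8 image of the expected shape and run inference.
dummy = np.zeros(input_details[0]["shape"], dtype=np.uint8)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```

The only Edge TPU-specific part is the delegate; without it, the same Interpreter API runs the model on the CPU.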

Home Assistant: an open source Python home automation hub to rule all things smart

Prasad Ramesh
25 Aug 2018
2 min read
We have Amazon Alexa, Google Home, and Philips Hue for smart actions in the home, but they are individual products with individual controls. What if all of your smart devices could work together under a master hub? That is Home Assistant.

Home Assistant is an automation platform that can run on a Raspberry Pi. It acts as a central hub for connecting and automating all your smart devices, and it supports services like IFTTT, Pushbullet, Google Cast, and many others. Currently, over a thousand components are supported. It tracks the state of all the installed smart devices in your home, and all of them can be controlled from a single, mobile-friendly interface. For security and privacy, all operations via Home Assistant are done locally, meaning no data is stored in the cloud.

The Home Assistant website advertises functions like having lights turn on at sunset, or dimming the lights when you watch a movie on Chromecast. There is an image called Hass.io, an all-in-one solution for getting started with Home Assistant, along with a guide to installing Hass.io on a Raspberry Pi. The requirements for running Home Assistant are:

- Raspberry Pi 3 Model B+ with a power supply (at least 2.5A)
- A Class 10 or higher microSD card, 32 GB or bigger
- An SD card reader
- An Ethernet cable (optional; Hass.io can work with WiFi)
- Optionally, a USB stick for unattended configuration

Home Assistant is a hub; it cannot control anything on its own. Think of it as a master device that passes instructions and communicates with other devices for home automation. Home Assistant can't do anything if there are no smart devices to work with. Since it is open source, there are dozens of contributions from tinkerers and DIY enthusiasts worldwide. You can check out the automation examples to learn more and use them (a rough scripting sketch follows below). The installation is very simple and there is a friendly UI to control your automation tasks. There is plenty of information on the Home Assistant website to get you started, and they also have a GitHub repository.

Related reading:
Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration
Apple joins the Thread Group, signalling its Smart Home ambitions with HomeKit, Siri and other IoT products
Amazon Echo vs Google Home: Next-gen IoT war
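As a footnote on scripting: besides its UI-driven automations, Home Assistant exposes a REST API that outside programs can call. A minimal hedged sketch follows; the host, access token, and entity_id are assumptions for illustration, not details from the article.

```python
# Minimal sketch: call a Home Assistant service over its REST API.
# Assumes a Home Assistant instance at homeassistant.local:8123 and a
# long-lived access token; the entity_id 'light.living_room' is hypothetical.
import requests

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

resp = requests.post(
    f"{HA_URL}/api/services/light/turn_on",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"entity_id": "light.living_room"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # states of the entities changed by the call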

Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25

Prasad Ramesh
16 Nov 2018
2 min read
Yesterday, Raspberry Pi launched the Raspberry Pi 3 Model A+ board, a smaller and cheaper version of the Raspberry Pi 3B+. In 2014, the first-generation Raspberry Pi 1 Model B+ was followed by a lighter Model A+ with half the RAM and fewer ports, small enough to fit the Hardware Attached on Top (HAT) footprint. Until now, there were no such small form factor boards for the Raspberry Pi 2 and 3.

The size is cut down, but not (most of) the features

The Raspberry Pi 3 Model A+ retains most of the features and enhancements of the bigger board in this series. This includes a 1.4GHz 64-bit quad-core ARM Cortex-A53 CPU, 512MB LPDDR2 SDRAM, and dual-band 802.11ac wireless LAN and Bluetooth 4.2/BLE. The retained enhancements include improved USB mass-storage booting and improved thermal management. The entire Raspberry Pi 3 Model A+ board is an FCC-certified radio module, which will significantly reduce the cost of conformance testing for Raspberry Pi-based products. What has shrunk is the price, now down to $25, and the board size, 65x56mm, the size of a HAT.

[Image source: Raspberry Pi website]

The Raspberry Pi 3 Model A+ will likely be the last product for now

In March this year, Raspberry Pi said that the 3+ platform is the final iteration of the "classic" Raspberry Pi boards. The next products will come out of necessity rather than evolution, because an evolution would require new core silicon, on a new process node, with new memory technology. So this new board, the 3A+, is about closing things out; we won't see any more products in this line in the foreseeable future. The board does answer one of their most frequent customer requests for 'missing products', and it clears the pipeline to focus on building the next generation of Raspberry Pi boards.

For more details visit the Raspberry Pi website.

Related reading:
Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV
Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?

You can now install Windows 10 on a Raspberry Pi 3

Prasad Ramesh
14 Feb 2019
2 min read
The WoA Installer for Raspberry Pi 3 enables installing Windows 10 on the credit-card-sized computer. It is made by the same members who brought Windows 10 ARM to the Lumia 950 and 950 XL.

Where to start?

To get started, you need a Raspberry Pi 3 Model B or B+, a microSD card of at least Class 1, and a Windows 10 ARM64 image, which you can get from GitHub. You also need a recent version of Windows 10 and .NET Framework 4.6.1. The WoA Installer is just a tool which helps you deploy Windows 10 on the Raspberry Pi 3; it needs the Core Package in order to run, which you can find listed on the GitHub page.

Specification comparison

The minimum requirements for Windows 10 are:

- Processor: 1 gigahertz (GHz) or faster processor or SoC
- RAM: 1 gigabyte (GB) for 32-bit or 2 GB for 64-bit
- Hard disk space: 16 GB for the 32-bit OS, 20 GB for the 64-bit OS

The Raspberry Pi 3B+ has specifications just good enough to run Windows 10:

- SoC: Broadcom BCM2837B0 quad-core A53 (ARMv8) 64-bit @ 1.4GHz
- RAM: 1GB LPDDR2 SDRAM

While this sounds good, a Hacker News user points out: "Caution: To do this you need to run a rat's nest of a batch file that runs a bunch of different code obtained from the web. If you're going to try this, try on devices you don't care about. Or spend innumerable hours auditing code. Pass -- for now."

You can check out the GitHub page for more instructions.

Related reading:
Raspberry Pi opens its first offline store in England
Introducing Strato Pi: An industrial Raspberry Pi
Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+ available now at $25

6 powerful microbots developed by researchers around the world

Prasad Ramesh
01 Sep 2018
4 min read
When we hear the word robot, we may think of large industrial robots assembling cars, or humanoid ones. However, some robots are so tiny that you may not even be able to see them with the naked eye. This article covers six such microbots, all in early stages of development.

Harvard's Ambulatory Microrobot (HAMR): a robotic cockroach

[Image source: Harvard]

HAMR is a versatile, 1.8-inch-long robotic platform that resembles a cockroach. It weighs under an ounce and can run, jump, and carry small items about twice its own weight. It is fast, moving at almost 19 inches per second. HAMR has given the researchers a useful base from which to build other ideas. For example, the HAMR-F, an enhanced version of HAMR, doesn't have any restraining wires: it can move around independently and is only slightly heavier (2.8g) and slower than the HAMR. It is powered by a micro 8mA lithium polymer battery. Scientists at Harvard's School of Engineering and Applied Sciences also recently added footpads that allow the microbot to swim on the water surface, sink, and walk underwater.

Robotic bees: RoboBees

[Image source: Harvard]

Like the HAMR, the RoboBee by Harvard has improved over time; it can both fly and swim. Its first successful flight was in 2013, and in 2015 it was able to swim. More recently, in 2016, it gained the ability to "perch" on surfaces using static electricity, which allows the RoboBee to save power for longer flights. The 80-milligram robot can take a swim, leap up from the water, and then land. The RoboBee can flap its wings at 220 to 300 hertz in air and 9 to 13 hertz in water.

μRobotex: microbots from France

[Image source: ScienceAlert]

Scientists from the Femto-ST Institute in France have built the μRobotex platform, a new, extremely small microrobot system that has built the smallest house in the world inside a vacuum chamber. The robot used an ion beam to cut a silica membrane into tiny pieces for assembly. The micro house is 0.015 mm high and 0.020 mm broad; in comparison, a grain of sand is anywhere from 0.05 mm to 2 mm in diameter. The completed house was placed on the tip of a piece of optical fiber.

Salto: a one-legged jumper

[Image source: Wired]

Saltatorial Locomotion on Terrain Obstacles (Salto), developed at the University of California, is a one-legged jumping robot that is 10.2 inches tall when fully extended. It weighs about 100 grams and can jump up to 1 meter in the air. Salto can do more than a single jump: it can bounce off walls and perform several jumps in a row while avoiding obstacles. Salto was inspired by the galago, a small mammal expert at jumping. The idea behind Salto was robots that can leap over rubble to provide emergency services. The newer model is the Salto-1P.

Rolls-Royce's SWARM robots

[Image source: Rolls-Royce]

Rolls-Royce teamed up with scholars from the University of Nottingham and Harvard University to develop independent tiny mobile robots called SWARM. They are about 0.4 inches in diameter and are part of Rolls-Royce's IntelligentEngine program. The SWARM robots are put into position by a robotic snake and use tiny cameras to capture parts of an engine which are hard to access otherwise, giving mechanics greater accessibility when figuring out what is wrong with an engine. The future plan for SWARM is to perform inspections of aircraft engines without removing them from the airplanes.

Short-Range Independent Microrobotic Platforms (SHRIMP)

[Image source: DARPA]

The Defense Advanced Research Projects Agency (DARPA) wants to develop insect-scale robots with "untethered mobility, maneuverability, and dexterity". In other words, it wants microbots that can move around independently. DARPA is planning to sponsor these robots as part of the SHRIMP program for search and rescue, disaster relief, and hazardous environment inspection. It is also looking for robots that might work as prosthetics, or as eyes to see in places that are hard to reach.

These microbots are in early development stages, but on entering production they will be very resourceful. From medical assistance to guided inspection in small areas, these microbots will prove useful in a variety of fields.

Related reading:
Intelligent mobile projects with TensorFlow: Build a basic Raspberry Pi robot that listens, moves, sees, and speaks [Tutorial]
15 million jobs in Britain at stake with AI robots set to replace humans at workforce
What Should We Watch Tonight? Ask a Robot, says Matt Jones from OVO Mobile [Interview]

A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer

Prasad Ramesh
02 Oct 2018
2 min read
An open-source libre GPU project is in the works by Luke Kenneth Casson Leighton, the hardware engineer who developed the EOMA68, an earth-friendly computer. The project already has access to $250k USD in funding.

The basic idea for this "libre GPU" is to use a RISC-V processor and make the GPU mostly software-based. It will leverage the LLVM compiler infrastructure and utilize a software-based Vulkan renderer to emit code and run on the RISC-V processor. The Vulkan implementation will be written in the Rust programming language.

The project's current roadmap covers only the software side: figuring out the state of the RISC-V LLVM back-end, writing a user-space graphics driver, and implementing the necessary bits for proposed RISC-V extensions like "Simple-V". While doing this, they will start figuring out the hardware design and the rest of the project. The roadmap is quite simplified for the arduous task at hand. The website notes: "Once you've been through the 'Extension Proposal Process' with Simple-V, it need never be done again, not for one single parallel / vector / SIMD instruction, ever again."

This process will include creating a fixed-function 3D "FP to ARGB" custom instruction and a custom extension with special 3D pipelines. With Simple-V, there is no need to worry about how those operations would be parallelised. This is not a new concept; it's borrowed directly from VideoCore IV, which calls it "virtual parallelism".

It's an enormous effort on both the software and hardware ends to come up with a combined open-source RISC-V, Rust, LLVM, and Vulkan project, and it will be difficult even with the funding, considering it is a software-based GPU. It is worth noting that the EOMA68 project, which Luke started in 2016, raised over $227k USD from crowdfunding participants and hasn't shipped yet.

To know more about this project, visit the libre risc-v website.

Related reading:
NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
AMD ROCm GPUs now support TensorFlow v1.8, a major milestone for AMD's deep learning plans
PyTorch-based HyperLearn Statsmodels aims to implement a faster and leaner GPU Sklearn

Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop

Sugandha Lahoti
08 May 2019
4 min read
At the ongoing 2019 Google I/O, Google made a major overhaul to its Flutter UI framework: Flutter is now expanding from mobile to multi-platform. The company released the first technical preview of Flutter for web, and the core framework for mobile devices was upgraded to Flutter 1.5. On desktop, Flutter remains an experimental project; it is not production-ready, but the team has published early instructions for developing apps to run on Mac, Windows, and Linux. An embedding API for Flutter is also available that allows it to be used in home and automotive scenarios.

Google notes, "The core Flutter project has been making progress to enable desktop-class apps, with input paradigms such as keyboard and mouse, window resizing, and tooling for Chrome OS app development. The exploratory work that we did for embedding Flutter into desktop-class apps running on Windows, Mac and Linux has also graduated into the core Flutter engine."

Flutter for web

Flutter for web allows web-based applications to be built using the Flutter framework. Per Google, with Flutter for web you can create "highly interactive, graphically rich content," though it plans to continue evolving this version with a "focus on performance and harmonizing the codebase." It allows developers to compile existing Flutter code written in Dart into a client experience that can be embedded in the browser and deployed to any web server. Google teamed up with the New York Times to build a small puzzle game called KenKen as an early example of what can be built using Flutter for web; the game uses the same code across Android, iOS, the web, and Chrome OS.

[Image source: Google Blog]

Flutter 1.5

Flutter 1.5 brings a variety of new features, including updates to its iOS and Material widgets and engine support for new mobile device types. The release also adds support for Dart 2.3 with extensive UI-as-code functionality, plus an in-app payment library which will make monetizing Flutter-based apps easier.

Google also showcased an ML Kit Custom Image Classifier, built using Flutter and Firebase, at Google I/O 2019. The kit offers an easy-to-use, app-based workflow for creating custom image classification models: you can collect training data using the phone's camera, invite others to contribute to your datasets, trigger model training, and use trained models, all from the same app.

Google has also released a comprehensive new training course for Flutter, built by The App Brewery. The new course is available at a time-limited discount, down from $199 to just $10.

Netizens had trouble making sense of Google's move and were left wondering whether Google wants people to invest in learning Dart or Kotlin. For reference, Flutter is built entirely in Dart, and Google made two major announcements for Kotlin at Google I/O: Android development will become increasingly Kotlin-first, and Google announced the first preview of Jetpack Compose, a new open-source UI toolkit for Kotlin developers.

A comment on Hacker News reads, "This is massively confusing. Do we invest in Kotlin ...or do we invest in Dart? Where will Android be in 2 years: Dart or Kotlin?" In response, another comment reads, "I don't think anyone has a definite answer, not even Google itself. Google placed several bets on different technologies and community will ultimately decide which of them is the winning one. Personally, I think native Android (Kotlin) and iOS (Swift) development is here to stay. I have tried many cross-platform frameworks and on any non-trivial mobile app, all of them cause more problem than they solve." Another said, "If you want to do android development, Kotlin. If you want to do multi-platform development, flutter." "Invest in Kotlin. Kotlin is useful for Android NOW. Whenever Dart starts becoming more mainstream, you'll know and have enough time to react to it", was another user's opinion.

Read the entire conversation on Hacker News.

Related reading:
Google launches Flutter 1.2, its first feature update, at Mobile World Congress 2019
You can now permanently delete your location history and web and app activity data on Google
Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with a focus on AI and developer productivity

Alibaba’s chipmaker launches open source RISC-V based ‘XuanTie 910 processor’ for 5G, AI, IoT and self-driving applications

Vincy Davis
26 Jul 2019
4 min read
Alibaba's chip subsidiary Pingtouge, launched in 2018, made a major announcement yesterday: it is launching its first product, the XuanTie 910 processor, built on the open-source RISC-V instruction set architecture. The XuanTie 910 is expected to reduce the costs of related chip production by more than 50%, reports Caixin Global.

The XuanTie 910, also known as T-Head, will soon be available in the market for commercial use. Pingtouge will also release some of the XuanTie 910's code on GitHub for free to help the global developer community create innovative applications; no release dates have been revealed yet.

What are the properties of the XuanTie 910 processor?

The XuanTie 910 is a 16-core processor that achieves 7.1 CoreMark/MHz, with a main frequency of up to 2.5GHz. It can be used to manufacture high-end edge-based microcontrollers (MCUs), CPUs, and systems-on-chip (SoC), for applications like 5G telecommunication, artificial intelligence (AI), and autonomous driving. The XuanTie 910 delivers a 40% performance increase over mainstream RISC-V implementations, along with a 20% increase in terms of instructions. According to Synced, the XuanTie 910 has two unconventional properties:

- It is a 2-stage pipelined, out-of-order, triple-issue processor with two memory accesses per cycle.
- Its computing, storage, and multi-core capabilities are superior due to an extended instruction set: the XuanTie 910 extends RISC-V with more than 50 instructions.

Last month, The Verge reported that an internal ARM memo instructed its staff to stop working with Huawei. With the US blacklisting China's telecom giant Huawei and banning any American company from doing business with it, it seems that ARM is following the American strategy. Although ARM is based in the U.K. and owned by the Japanese SoftBank group, it does have "US origin technology", as claimed in the internal memo. This may be one of the reasons why Alibaba is increasing its efforts in developing RISC-V, so that Chinese tech companies can become independent of Western technologies. A XuanTie 910 processor can assure Chinese companies of a stable future, with no fear of being banned by Western governments.

Other than being cost-effective, RISC-V also has advantages like more flexibility compared to ARM. With complex licence policies and high power requirements, it is going to be a challenge for ARM to compete against RISC-V and MIPS (Microprocessor without Interlocked Pipeline Stages) processors.

A Hacker News user comments, "I feel like we (USA) are forcing China on a path that will make them more competitive long term." Another user says, "China is going to be key here. It's not just a normal market - China may see this as essential to its ability to develop its technology. It's Made in China 2025 policy. That's taken on new urgency as the west has started cutting China off from western tech - so it may be normal companies wanting some insurance in case intel / arm cut them off (trade disputes etc) AND the govt itself wanting to product its industrial base from cutoff during trade disputes"

Some users also feel that technology wins when two big economies keep bringing out innovative technologies. A comment on Hacker News reads, "Good to see development from any country. Obviously they have enough reason to do it. Just consider sanctions. They also have to protect their own market. Anyone that can afford it, should do it. Ultimately it is a good thing from technology perspective."

Not all US tech companies are wary of partnering with Chinese counterparts. Two days ago, Salesforce, an American cloud-based software company, announced a strategic partnership with Alibaba. The partnership aims to help Salesforce localize its products in mainland China, Hong Kong, Macau, and Taiwan, enabling Salesforce customers to market, sell, and operate through services like Alibaba Cloud and Tmall.

Related reading:
Winnti Malware: Chinese hacker group attacks major German corporations for years, German public media investigation reveals
The US Justice Department opens a broad antitrust review case against tech giants
Salesforce is buying Tableau in a $15.7 billion all-stock deal

Espressif IoT devices are susceptible to WiFi vulnerabilities that can allow hijackers to crash devices connected to enterprise networks

Savia Lobo
05 Sep 2019
4 min read
Matheus Eduardo Garbelini, a member of the ASSET (Automated Systems SEcuriTy) Research Group at the Singapore University of Technology and Design, released a proof of concept for three WiFi vulnerabilities in the Espressif IoT devices ESP32 and ESP8266.

Three WiFi vulnerabilities on the ESP32/ESP8266 IoT devices

Zero PMK Installation (CVE-2019-12587)

This WiFi vulnerability allows hijacking ESP32 and ESP8266 clients connected to enterprise networks. It allows an attacker to take control of the WiFi device's EAP session by sending an EAP-Fail message in the final step of the connection between the device and the access point. The researcher discovered that both IoT devices update their Pairwise Master Key (PMK) only when they receive an EAP-Success message. If an EAP-Fail message is received before the EAP-Success, the device skips updating the PMK received during a normal EAP exchange (EAP-PEAP, EAP-TTLS or EAP-TLS), yet still accepts the EAPoL 4-way handshake. Each time the ESP32/ESP8266 starts, the PMK is initialized to zero; thus, if an EAP-Fail message is sent before the EAP-Success, the device uses a zero PMK, allowing the attacker to hijack the connection between the AP and the device (see the sketch below).

ESP32/ESP8266 EAP client crash (CVE-2019-12586)

This WiFi vulnerability is found in the SDKs of the ESP32 and ESP8266 and allows an attacker in radio range to precisely trigger a crash in any ESP32/ESP8266 connected to an enterprise network. In combination with the zero PMK installation vulnerability, it could increase the damage to any unpatched device. Espressif has fixed the problem and committed patches for the ESP32 SDK; however, the SDK and Arduino board support for the ESP8266 are still unpatched.

ESP8266 beacon frame crash (CVE-2019-12588)

In this WiFi vulnerability, the client 802.11 MAC implementation in Espressif ESP8266 NONOS SDK 3.0 and earlier does not correctly validate the RSN AuthKey suite list count in beacon frames, probe responses, and association responses, which allows attackers in radio range to cause a denial of service (crash) via a crafted message. Two kinds of malformed beacon frame can trigger a crash:

- Sending crafted 802.11 frames in which the Auth Key Management (AKM) Suite Count field in the RSN tag is too large or incorrect crashes an ESP8266 in station mode.
- Sending crafted 802.11 frames in which the Pairwise Cipher Suite Count field in the RSN tag is too large or incorrect crashes an ESP8266 in station mode.

"The attacker sends a malformed beacon or probe response to an ESP8266 which is already connected to an access point. However, it was found that ESP8266 can crash even when there's no connection to an AP, that is even when ESP8266 is just scanning for the AP," the researcher says.

A user on Hacker News writes, "Due to cheap price ($2—$5 depending on the model) and very low barrier to entry technically, these devices are both very popular as well as very widespread in those two categories. These chips are the first hits for searches such as 'Arduino wifi module', 'breadboard wifi', 'IoT wifi module', and many, many more as they're the downright easiest way to add wifi to something that doesn't have it out of the box. I'm not sure how applicable these attack vectors are in the real world, but they affect a very large number of devices for sure."

To know more about this news in detail, read the proof of concept on GitHub.

Other interesting news in IoT security:
Cisco Talos researchers disclose eight vulnerabilities in Google's Nest Cam IQ indoor camera
Microsoft reveals Russian hackers "Fancy Bear" are the culprit for IoT network breach in the U.S.
Researchers reveal vulnerability that can bypass payment limits in contactless Visa card
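To make the zero-PMK flaw concrete, here is a toy Python model of the flawed client behavior described above. This is our own illustration, not the researcher's proof of concept or Espressif's actual SDK code:

```python
# Toy model of the zero-PMK flaw (CVE-2019-12587) described above.
# Illustration only; not Espressif's SDK code.

ZERO_PMK = bytes(32)  # the PMK is initialized to all zeros at boot

class EapClient:
    def __init__(self):
        self.pmk = ZERO_PMK

    def on_eap_message(self, msg, negotiated_pmk):
        if msg == "EAP-Success":
            # Correct path: adopt the PMK derived during EAP (PEAP/TTLS/TLS).
            self.pmk = negotiated_pmk
        elif msg == "EAP-Fail":
            # Flaw: the client still proceeds to the 4-way handshake,
            # but self.pmk was never updated and remains all zeros.
            pass
        return self.pmk

client = EapClient()
pmk = client.on_eap_message("EAP-Fail", negotiated_pmk=b"\x42" * 32)
assert pmk == ZERO_PMK  # an attacker can complete the handshake with a known zero PMK
```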

Google to kill another product, the 'Works with Nest' API in the wake of bringing all smart home products under "Google Nest"

Bhagyashree R
09 May 2019
5 min read
Update: Included Google's recent plan of action after facing backlash from Nest users.

At this year's Google I/O developer conference, Google announced that it is bringing all the Nest and Google Home products under one brand, "Google Nest". As part of this effort, Nest announced on Tuesday that it will discontinue the Works with Nest API by August 31, 2019, in favor of Works with Google Assistant. "We want to unify our efforts around third-party connected home devices under a single developer platform – a one-stop shop for both our developers and our customers to build a more helpful home. To accomplish this, we'll be winding down Works with Nest on August 31, 2019, and delivering a single unified experience through the Works with Google Assistant program," wrote Nest in a post.

With this change, Google aims to make the whole smart home experience more secure and unified for users. Over the next few months, users with Nest accounts will need to migrate to Google accounts, which will serve as a single front-end for using products across Nest and Google. Along with providing a unified experience, Google also promises to be transparent about the data it collects, as laid out in an extensive document published on Tuesday. The document, titled "Google Nest commitment to privacy in the home", describes how its connected smart home devices work and sets out Google's approach to managing user data.

Though Google is promising improved security and privacy with this change, it will also end up breaking some existing third-party integrations. One of them is IFTTT (If This, Then That), a software platform for writing "applets" that allow devices from different manufacturers to talk to each other. IFTTT can be used for things like automatically adjusting the thermostat based on your phone's location as you approach the house, turning on Philips Hue smart lights when a Nest Cam security camera detects motion, and more. Developers who work with the Works with Nest API are advised to visit the Actions on Google Smart Home developer site to learn how to integrate smart home devices or services with the Google Assistant.

What do Nest users think about this decision?

Though Google is known for its search engine and other online services, it is also known for abandoning and killing its products in a trice. The decision to phase out Works with Nest has left many users who have bought Nest products infuriated.

https://twitter.com/IFTTT/status/1125930219305615360

"The big problem here is that there are a lot of people that have spent a lot of money on buying quality hardware that isn't just for leisure, it's for protection. I'll cite my 4 Nest Protects and an outdoor camera as an example. If somehow they get 'sunsetted' due to some Google whim, fad or Because They Can, then I'm going to be pretty p*ssed, to say the least. Based on past experience I don't trust Google to act in the users' interest," said one Hacker News user.

Other users think the change could be for the better, but that the timeline Google has set is pretty stringent. A Hacker News user commented on a discussion triggered by this news, "Reading thru it, it is not as brutal as it sounds, more than they merged it into the Google Assistant API, removing direct access permission to the NEST device (remember microphone-gate with NEST) and consolidating those permissions into Assistant. Whilst they are killing it off, they have a transition. However, as far as timelines go - August 2019 kill off date for the NEST API is brutal and not exactly the grace period users of connected devices/software will appreciate or in many cases with tech designed for non-technical people - know nothing until suddenly in August find what was working yesterday is now not working."

Google's reaction to the feedback from Nest users

In response to the backlash, Google published a blog post last week sharing its plan of action. According to this plan, users' existing devices and integrations will continue to work with their Nest accounts; however, they will not have access to new features that will be available through a Google account. Google further clarified that it will stop taking new Works with Nest connection requests from August 31, 2019. "Once your WWN functionality is available on the WWGA platform you can migrate with minimal disruption from a Nest Account to a Google Account," the blog post reads.

Though Google shared its plans regarding third-party integrations, it was vague about the timelines. It wrote, "One of the most popular WWN features is to automatically trigger routines based on Home/Away status. Later this year, we'll bring that same functionality to the Google Assistant and provide more device options for you to choose from. For example, you'll be able to have your smart light bulbs automatically turn off when you leave your home." It further shared that it has teamed up with Amazon and other partners to bring custom integrations to Google Nest.

Read the official announcement on Nest's website.

Related reading:
Google employees join hands with Amnesty International urging Google to drop Project Dragonfly
What if buildings of the future could compute? European researchers make a proposal.
Google to allegedly launch a new Smart home device

Introducing Raspberry Pi TV HAT, a new addon that lets you stream live TV

Prasad Ramesh
19 Oct 2018
2 min read
Yesterday the Raspberry Pi Foundation launched a new device called the Raspberry Pi TV HAT, a small board that, paired with a TV antenna, lets you decode and stream live TV. The TV HAT is roughly the size of a Raspberry Pi Zero board. It connects to the Raspberry Pi via a GPIO connector and has a port for a TV antenna connector. The new addon follows a new HAT (Hardware Attached on Top) form factor: the addon itself is a half-sized HAT matching the outline of Raspberry Pi Zero boards.

[Image source: Raspberry Pi website]

TV HAT specifications and requirements

The addon has a Sony CXD2880 TV tuner. It supports TV standards like DVB-T2 (1.7MHz, 5MHz, 6MHz, 7MHz, 8MHz channel bandwidth) and DVB-T (5MHz, 6MHz, 7MHz, 8MHz channel bandwidth). The frequencies it can receive are VHF III, UHF IV, and UHF V. Raspbian Stretch (or later) is required for using the Raspberry Pi TV HAT, and TVHeadend is the recommended software to start with TV streams. There is a 'Getting Started' guide on the Raspberry Pi website.

Watch on the Raspberry Pi

With the TV HAT, you can receive and view television on a Raspberry Pi board. The Pi can also be used as a server to stream television over a network to other devices. When running as a server, the TV HAT works with all 40-pin GPIO Raspberry Pi boards; watching TV on the Pi itself needs more processing, so a Pi 2, 3, or 3B+ is recommended.

The TV HAT connected to a Raspberry Pi board: [Image source: Raspberry Pi website]

Streaming over a network

Connecting a TV HAT to your network allows viewing streams on any device connected to the network, including computers, smartphones, and tablets (a rough client sketch follows below). Initially, the TV HAT will be available only in Europe. It is now on sale for $21.50; visit the Raspberry Pi website for more details.

Related reading:
Tensorflow 1.9 now officially supports Raspberry Pi bringing machine learning to DIY enthusiasts
How to secure your Raspberry Pi board [Tutorial]
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?
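As a rough illustration of network streaming, here is a hedged Python sketch that lists channels from a TVHeadend server's JSON API. The host, port, credentials, and endpoint follow TVHeadend's commonly documented HTTP API, but treat them as assumptions that may vary between versions.

```python
# Sketch: list channels exposed by a TVHeadend server on the local network.
# Assumes TVHeadend at raspberrypi.local:9981 (its default HTTP port) with
# basic-auth credentials; the /api/channel/grid endpoint is part of
# TVHeadend's JSON API, though details can vary by version.
import requests

TVH = "http://raspberrypi.local:9981"
AUTH = ("user", "password")  # hypothetical credentials

resp = requests.get(
    f"{TVH}/api/channel/grid", auth=AUTH, params={"limit": 50}, timeout=10
)
resp.raise_for_status()
for ch in resp.json().get("entries", []):
    # Each entry carries a UUID that can be used to open the live stream,
    # e.g. {TVH}/stream/channel/<uuid> in a player such as VLC.
    print(ch.get("name"), ch.get("uuid"))
```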

Yubico reveals Biometric YubiKey at Microsoft Ignite

Fatema Patrawala
07 Nov 2019
4 min read
On Tuesday, at the ongoing Microsoft Ignite, Yubico, the leading provider of authentication and encryption hardware, announced the long-awaited YubiKey Bio, the first YubiKey to support fingerprint recognition for secure and seamless passwordless logins. Per the team, this has been one of the most requested features from YubiKey users.

Key features of the YubiKey Bio

The YubiKey Bio delivers the convenience of biometric login with the added benefits of Yubico's hallmark security, reliability, and durability assurances. Biometric fingerprint credentials are stored in the secure element, which helps protect them against physical attacks. As a result, a single, trusted, hardware-backed root of trust delivers a seamless login experience across different devices, operating systems, and applications. With support for both biometric- and PIN-based login, the YubiKey Bio leverages the full range of multi-factor authentication (MFA) capabilities outlined in the FIDO2 and WebAuthn standard specifications (a rough code sketch follows below).

In keeping with Yubico's design philosophy, the YubiKey Bio will not require any batteries, drivers, or associated software. The key integrates with the native biometric enrollment and management features supported in the latest versions of Windows 10 and Azure Active Directory, making it quick and convenient for users to adopt a phishing-resistant passwordless login flow.

"As a result of close collaboration between our engineering teams, Yubico is bringing strong hardware-backed biometric authentication to market to provide a seamless experience for our customers," said Joy Chik, Corporate VP of Identity, Microsoft. "This new innovation will help drive adoption of safer passwordless sign-in so everyone can be more secure and productive."

The Yubico team has worked with Microsoft over the past few years to help drive the future of passwordless authentication through the creation of the FIDO2 and WebAuthn open authentication standards. Additionally, they have built YubiKey integrations with the full suite of Microsoft products, including Windows 10 with Azure Active Directory and Microsoft Edge with Microsoft accounts. Microsoft Ignite attendees saw a live demo of passwordless sign-in to Microsoft Azure Active Directory accounts using the YubiKey Bio. The team also promises that by early next year, enterprise users will be able to authenticate to on-premises Active Directory integrated applications and resources, with seamless Single Sign-On (SSO) to cloud- and SAML-based applications. To take advantage of strong YubiKey authentication in Azure Active Directory environments, users can refer to this page for more information.

On Hacker News, the news received mixed reactions: some are in favor of biometric authentication, while others believe that keeping stronger passwords is still the better choice. One user commented, "1) This is an upgrade to the touch sensitive button that's on all YubiKeys today. The reason you have to touch the key is so that if an attacker gains access to your computer with an attached Yubikey, they will not be able to use it (it requires physical presence). Now that touch sensitive button becomes a fingerprint reader, so it can't be activated by just anyone. 2) The computer/OS doesn't have to support anything for this added feature."

Another user responded, "A fingerprint is only going to stop a very opportunistic attacker. Someone who already has your desktop and app password and physical access to your desktop can probably get a fingerprint off a glass, cup or something else. I don't think this product is as useful as it seems at first glance. Using stronger passwords is probably just as safe."

Related reading:
Google updates biometric authentication for Android P, introduces BiometricPrompt API
GitHub now supports two-factor authentication with security keys using the WebAuthn API
You can now use fingerprint or screen lock instead of passwords when visiting certain Google services thanks to FIDO2 based authentication
Microsoft and Cisco propose ideas for a Biometric privacy law after the state of Illinois passed one
SafeMessage: An AI-based biometric authentication solution for messaging platforms
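Since the YubiKey Bio builds on FIDO2/WebAuthn, a small sketch may help show what a registration ceremony looks like from code. This roughly follows the documented examples of Yubico's python-fido2 library (around the 0.9-era API, which has since changed in places), so treat the exact names and signatures as assumptions rather than a definitive implementation.

```python
# Hedged sketch of a FIDO2/WebAuthn registration ceremony with a YubiKey,
# loosely following python-fido2's examples (0.9-era API; names and
# signatures may differ in newer releases).
from fido2.hid import CtapHidDevice
from fido2.client import Fido2Client
from fido2.server import Fido2Server

device = next(CtapHidDevice.list_devices(), None)  # first connected key
assert device is not None, "no FIDO2 authenticator found"

client = Fido2Client(device, "https://example.com")  # hypothetical origin
server = Fido2Server({"id": "example.com", "name": "Example RP"})

# The relying party starts the ceremony; the key (a touch, or on the Bio
# a fingerprint match) completes user verification.
create_options, state = server.register_begin(
    {"id": b"alice-id", "name": "alice", "displayName": "Alice"},
    user_verification="preferred",
)
result = client.make_credential(create_options["publicKey"])
auth_data = server.register_complete(
    state, result.client_data, result.attestation_object
)
print("registered credential:", auth_data.credential_data)
```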

Researchers reveal Light Commands: laser-based audio injection attacks on voice-control devices like Alexa, Siri and Google Assistant

Fatema Patrawala
06 Nov 2019
5 min read
Researchers from the University of Electro-Communications in Tokyo and the University of Michigan released a paper on Monday that gives alarming cues about the security of voice-control devices. In the paper, the researchers present ways in which they were able to manipulate Siri, Alexa, and other devices using "Light Commands", a vulnerability in MEMS (micro-electro-mechanical systems) microphones.

Light Commands was discovered in May this year. It allows attackers to remotely inject inaudible and invisible commands into voice assistants such as Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri using light. The vulnerability can become more dangerous as voice-control devices gain popularity.

How Light Commands work

Consumers use voice-control devices for many applications, for example to unlock doors or make online purchases with simple voice commands. The research team tested a handful of such devices and found that Light Commands can work on any smart speaker or phone that uses MEMS microphones. These systems contain tiny components that convert audio signals into electrical signals. By shining a laser through a window at the microphones inside smart speakers, tablets, or phones, a faraway attacker can remotely send inaudible and potentially invisible commands which are then acted upon by Alexa, Portal, Google Assistant, or Siri.

Many users do not enable voice authentication or passwords to protect their devices from unauthorized use. Hence, an attacker can use light-injected voice commands to unlock a victim's smart-lock-protected home doors, or even locate, unlock, and start various vehicles. The researchers also showed that Light Commands can be executed at long distances: they demonstrated the attack in a 110-meter hallway, the longest hallway available during the research, and captured several videos of the demonstration.

[Image: experimental setup for exploring attack range in the 110 m long corridor. Source: Light Commands research paper]

The Light Commands attack can be executed using a simple laser pointer, a laser driver, and a sound amplifier; a telephoto lens can be used to focus the laser for long-range attacks.

Detecting Light Commands attacks

The researchers also describe how one can detect such an attack. While command injection via light makes no sound, an attentive user can notice the attacker's light beam reflected on the target device. Alternatively, one can attempt to monitor the device's verbal response and light-pattern changes, both of which serve as command confirmation. The researchers add that, so far, they have not seen any cases where the Light Commands attack has been maliciously exploited.

Limitations in executing the attack

Light Commands does have some limitations in execution:

- Lasers must point directly at a specific component within the microphone to transmit audio information.
- Attackers need a direct line of sight and a clear pathway for lasers to travel.
- Most light signals are visible to the naked eye and would expose attackers. Also, voice-control devices respond out loud when activated, which could alert nearby people of foul play.
- Controlling advanced lasers with precision requires a certain degree of experience and equipment, creating a high barrier to entry for long-range attacks.

How to mitigate such attacks

The researchers suggest adding an additional layer of authentication in voice assistants to mitigate the attack. They also suggest that manufacturers use sensor-fusion techniques, such as acquiring audio from multiple microphones: when the attacker uses a single laser, only a single microphone receives a signal while the others receive nothing, so manufacturers can attempt to detect such anomalies and ignore the injected commands. Another proposed approach is reducing the amount of light reaching the microphone's diaphragm, either with a barrier that physically blocks straight light beams to eliminate the line of sight to the diaphragm, or with a non-transparent cover on top of the microphone hole. However, the researchers concede that such physical barriers are only effective up to a point, as an attacker can always increase the laser power in an attempt to pass through them or create a new light path. (A toy sketch of the underlying modulation principle follows below.)

Users discuss the photoacoustic effect at play

On Hacker News, the research has gained much attention, with users applauding the demonstration and discussing the price and features of the laser pointers and drivers needed to attack voice assistants. Others discuss how such techniques come into play; one of them says, "I think the photoacoustic effect is at play here. Discovered by Alexander Graham Bell has a variety of applications. It can be used to detect trace gases in gas mixtures at the parts-per-trillion level among other things. An optical beam chopped at an audio frequency goes through a gas cell. If it is absorbed, there's a pressure wave at the chopping frequency proportional to the absorption. If not, there isn't. Synchronous detection (e.g. lock in amplifiers) knock out any signal not at the chopping frequency. You can see even tiny signals when there is no background. Hearing aid microphones make excellent and inexpensive detectors so I think that the mics in modern phones would be comparable. Contrast this with standard methods where one passes a light beam through a cell into a detector, looking for a small change in a large signal. https://chem.libretexts.org/Bookshelves/Physical_and_Theoret... Hats off to the Michigan team for this very clever (and unnerving) demonstration."

Related reading:
Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
How Chaos Engineering can help predict and prevent cyber-attacks preemptively
An unpatched security issue in the Kubernetes API is vulnerable to a "billion laughs" attack
Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
Wikipedia hit by massive DDoS (Distributed Denial of Service) attack; goes offline in many countries
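For intuition on the injection mechanism: the attack amplitude-modulates a laser's intensity with the audio waveform of a voice command, and the MEMS microphone transduces the fluctuating light as if it were sound. Below is a toy numpy sketch of that modulation step; it is our own illustration, since the paper's actual setup uses analog laser-driver hardware rather than a script.

```python
# Toy illustration of the modulation principle behind Light Commands:
# a laser's intensity is amplitude-modulated with an audio command so a
# MEMS microphone "hears" it. Illustration only; the real attack uses a
# laser driver and amplifier, not this script.
import numpy as np

SAMPLE_RATE = 44_100  # Hz

def am_modulate(audio: np.ndarray, bias: float = 0.5, depth: float = 0.4) -> np.ndarray:
    """Map an audio signal in [-1, 1] onto a laser intensity in [0, 1].

    bias is the laser's DC operating point; depth scales how strongly the
    audio modulates the intensity around that point.
    """
    audio = np.clip(audio, -1.0, 1.0)
    return np.clip(bias + depth * audio, 0.0, 1.0)

# Example: a 440 Hz tone standing in for a recorded voice command.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
command = np.sin(2 * np.pi * 440 * t)
intensity = am_modulate(command)
print(intensity.min(), intensity.max())  # stays within the laser's [0, 1] range
```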

Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap

Amrata Joshi
20 Jun 2019
7 min read
Last month, Manuel A. Fernandez Montecelo, a Debian contributor and developer talked about the Debian GNU/Linux riscv64 port at the RISC-V workshop. Debian, a Unix-like operating system consists of free software supported by the Debian community that comprises of individuals who basically care about free and open-source software. The goal of the Debian GNU/Linux riscv64 port project has been to have Debian ready for installation and running on systems that implement a variant of the RISC-V (an open-source hardware instruction set architecture) based systems. The feedback from the people regarding his presentation at the workshop was positive. Earlier this week,  Manuel A. Fernandez Montecelo announced an update on the status of Debian GNU/Linux riscv64 port. The announcement comes weeks before the release of buster which will come with another set of changes to benefit the port. What is RISC-V used for and why is Debian interested in building this port? According to the Debian wiki page, “RISC-V (pronounced "risk-five") is an open source instruction set architecture (ISA) based on established reduced instruction set computing (RISC) principles. In contrast to most ISAs, RISC-V is freely available for all types of use, permitting anyone to design, manufacture and sell RISC-V chips and software. While not the first open ISA, it is significant because it is designed to be useful in modern computerized devices such as warehouse-scale cloud computers, high-end mobile phones and the smallest embedded systems. Such uses demand that the designers consider both performance and power efficiency. The instruction set also has a substantial body of supporting software, which fixes the usual weakness of new instruction sets. In this project the goal is to have Debian ready to install and run on systems implementing a variant of the RISC-V ISA: Software-wise, this port will target the Linux kernel Hardware-wise, the port will target the 64-bit variant, little-endian This ISA variant is the "default flavour" recommended by the designers, and the one that seems to attract more interest for planned implementations that might become available in the next few years (development boards, possible consumer hardware or servers).” Update on Debian GNU/Linux riscv64 port Image source: Debian Let’s have a look at the graph where the percent of arch-dependent packages that are built for riscv64 (grey line) has been around or higher than 80% since mid-2018. The arch-dependent packages are almost half of Debian's [main, unstable] archive. It means that the arch-independent packages can be used by all the ports, provided that the software is present on which they rely on. The update also highlights that around 90% of packages from the whole archive has been made available for this architecture. Image source: Debian The graph above highlights that the percentages are very stable for all architectures. Montecelo writes, “This is in part due to the freeze for buster, but it usually happens at other times as well (except in the initial bring-up or in the face of severe problems).” Even the second-class ports appear to be stable. Montecelo writes, “Together, both graphs are also testament that there are people working on ports at all times, keeping things working behind the scenes, and that's why from a high level view it seems that things just work.” According to him, apart from the work of porters themselves, there are people working on bootstrapping issues that make it easier to bring up ports, better than in the past. 
They also make coping better when toolchain support or other issues related to ports, blow up. He further added, “And, of course, all other contributors of Debian help by keeping good tools and building rules that work across architectures, patching the upstream software for the needs of several architectures at the same time (endianness, width of basic types), many upstream projects are generic enough that they don't need specific porting, etc.” Future scope and improvements yet to come To get Debian running on RISC-V will not be easy because of various reasons including limited availability of hardware being able to run Debian port and limited options for using bootloaders. According to Montecelo, this is an area of improvement from them. He further added, “Additionally, it would be nice to have images publicly available and ready to use, for both Qemu and hardware available like the HiFive Unleashed (or others that might show up in time), but although there's been some progress on that, it's still not ready and available for end users.” Presently, they are beyond 500 packages from the Rust ecosystem in the archive (which is about 4%) which can’t be built and used until Rust gets support for the architecture. Rust requires LLVM and there’s no Rust compiler based on GCC or other toolchains. Montecelo writes, “Firefox is the main high-level package that depends on Rust, but many packages also depend on librsvg2 to render SVG images, and this library has been converted to Rust. We're still using the C version for that, but it cannot be sustained in the long term." Apart from Rust, other packages use LLVM to some extent, but currently, it is not fully working for riscv64. The support of LLVM for riscv64 is expected to be completed this year. While talking about other programming languages, he writes, “There are other programming language ecosystems that need attention, but they represent a really low percentage (only dozens of packages, of more than 12 thousand; and with no dependencies outside that set). And then, of course, there is a long tail of packages that cannot be built due to a missing dependency, lack of support for the architecture or random failures -- together they make a substantial number of the total, but they need to be looked at and solved almost on a case-by-case basis.” Why are people excited about this? Many users seem to be excited about the news, one of the reasons being that there won’t be a need to bootstrap from scratch as Rust now will be able to cross-compile easily because of the Riscv64 support. A user commented on HackerNews, “Debian Rust maintainer here. We don't need to bootstrap from scratch, Rust (via LLVM) can cross-compile very easily once riscv64 support is added.” Also, this appears to be a good news for Debian, as cross-compiling has really come a long way on Debian. Rest are awaiting for more to get incorporated with riscv. Another user commented, “I am waiting until the Bitmanip extension lands to get excited about RISC-V: https://github.com/riscv/riscv-bitmanip” Few others think that there is a need for LLVM support for riscv64. A user commented, “The lack of LLVM backend surprises me. How much work is it to add a backend with 60 instructions (and few addressing modes)? It's clearly far more than I would have guessed.” Another comment reads, “Basically LLVM is now a dependency of equal importance to GCC for Debian. 
Others are waiting for further RISC-V extensions before getting excited. One user commented, "I am waiting until the Bitmanip extension lands to get excited about RISC-V: https://github.com/riscv/riscv-bitmanip"

A few others point to the need for LLVM support for riscv64. A user commented, "The lack of LLVM backend surprises me. How much work is it to add a backend with 60 instructions (and few addressing modes)? It's clearly far more than I would have guessed." Another comment reads, "Basically LLVM is now a dependency of equal importance to GCC for Debian. Hopefully this will help motivate expanding architecture-support for LLVM, and by proxy Rust."

According to users, the port currently misses out on two major fronts: LLVM support for riscv64 and a Rust compiler based on GCC. If LLVM gains riscv64 support this year, any language with an LLVM front end will be able to target the architecture, since LLVM pairs language front ends with backends for each supported instruction set. A Rust compiler based on GCC, for its part, would give developers access to the broad architecture coverage and language extensions that GCC provides.

A user commented on Reddit, "The main blocker to finish the port is having a working Rust toolchain. This is blocked on LLVM support, which only supports RISCV32 right now, and RISCV64 LLVM support is expected to be finished during 2019." Another comment reads, "It appears that enough people in academia are working on RISCV for LLVM to accept it as a mainstream backend, but I wish more stakeholders in LLVM would make them reconsider their policy."

To know more about this news, check out Debian's official post.

Debian maintainer points out difficulties in Deep Learning Framework Packaging
Debian project leader elections goes without nominations. What now?
Are Debian and Docker slowly losing popularity?
article-image-ieee-computer-society-predicts-top-ten-tech-trends-for-2019-assisted-transportation-chatbots-and-deep-learning-accelerators-among-others
Natasha Mathur
21 Dec 2018
5 min read
Save for later

IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others

Natasha Mathur
21 Dec 2018
5 min read
IEEE Computer Society (IEEE-CS) released its annual tech predictions earlier this week, unveiling the ten technology trends most likely to be adopted in 2019. "The Computer Society's predictions, based on an in-depth analysis by a team of leading technology experts, identify top technologies that have substantial potential to disrupt the market in the year 2019," mentions Hironori Kasahara, IEEE Computer Society President. Let's have a look at the top 10 technology trends predicted to reach wide adoption in 2019.

Top ten trends for 2019

Deep learning accelerators

According to the IEEE Computer Society, 2019 will see wide-scale adoption of deep learning accelerators such as GPUs, FPGAs, and TPUs, with companies designing their own for use in data centers. The development of these accelerators will further allow machine learning to be used in different IoT devices and appliances.

Assisted transportation

Another trend predicted for 2019 is the adoption of assisted transportation, which is already paving the way for fully autonomous vehicles. Although fully autonomous vehicles have not yet arrived, self-driving tech saw a booming year in 2018: AWS introduced DeepRacer, a self-driving race car; Tesla is building its own AI hardware for self-driving cars; Alphabet's Waymo will be launching the world's first commercial self-driving car service in the upcoming months; and so on. Beyond self-driving itself, assisted transportation is also highly dependent on deep learning accelerators for video recognition.

The Internet of Bodies (IoB)

As per the IEEE Computer Society, consumers have become very comfortable with self-monitoring using external devices like fitness trackers and smart glasses. With digital pills now entering mainstream medicine, body-attached, implantable, and embedded IoB devices provide richer data that enables the development of unique applications. However, IEEE mentions that this tech also brings concerns related to security, privacy, physical harm, and abuse.

Social credit algorithms

Facial recognition tech was in the spotlight in 2018. For instance, Microsoft President Brad Smith requested governments to regulate the evolution of facial recognition technology this month, Google patented a new facial recognition system that uses your social network to identify you, and so on. According to the IEEE, social credit algorithms will see a rise in adoption in 2019. Social credit algorithms use facial recognition and other advanced biometrics to identify a person and retrieve data about them from digital platforms, which is then used to approve or deny access to consumer products and services.

Advanced (smart) materials and devices

The IEEE Computer Society predicts that in 2019, advanced materials and devices for sensors, actuators, and wireless communications will see widespread adoption. These materials, which include tunable glass, smart paper, and ingestible transmitters, will lead to the development of applications in healthcare, packaging, and other appliances. "These technologies will also advance pervasive, ubiquitous, and immersive computing, such as the recent announcement of a cellular phone with a foldable screen. The use of such technologies will have a large impact on the way we perceive IoT devices and will lead to new usage models," mentions the IEEE Computer Society.
Active security protection

From data breaches (Facebook, Google, Quora, Cathay Pacific, etc.) to cyber attacks, 2018 saw many security-related incidents. 2019 will now see a new generation of security mechanisms that take an active approach to fighting such incidents. These will involve hooks that can be activated when new types of attacks are exposed, and machine-learning mechanisms that can help identify sophisticated attacks.

Virtual reality (VR) and augmented reality (AR)

Packt's 2018 Skill Up report highlighted what game developers feel about the VR world: a whopping 86% of respondents replied with 'Yes, VR is here to stay'. The IEEE Computer Society echoes that thought, believing that VR and AR technologies will see even greater wide-scale adoption and will prove very useful for education, engineering, and other fields in 2019. IEEE notes that now that advertisements for VR headsets appear during prime-time television programs, VR/AR is poised for wide-scale adoption in 2019.

Chatbots

2019 will also see an expansion in the development of chatbot applications. Chatbots are used quite frequently for basic customer service on social networking hubs and as intelligent virtual assistants in operating systems. Chatbots will also find applications in interaction with cognitively impaired children for therapeutic support. "We have recently witnessed the use of chatbots as personal assistants capable of machine-to-machine communications as well. In fact, chatbots mimic humans so well that some countries are considering requiring chatbots to disclose that they are not human," mentions IEEE.

Automated voice spam (robocall) prevention

IEEE predicts that automated voice spam prevention technology will see widespread adoption in 2019. It will be able to block calls with a spoofed caller ID and, in turn, screen questionable calls by having the computer ask the caller questions to determine whether the caller is legitimate.

Technology for humanity (specifically machine learning)

IEEE predicts an increase in the adoption rate of technology for humanity. Advances in IoT and edge computing are the leading factors driving the adoption of this technology. Events such as fires and bridge collapses further add urgency to adopting monitoring technologies in forests and on smart roads.

"The technical community depends on the Computer Society as the source of technology IP, trends, and information. IEEE-CS predictions represent our commitment to keeping our community prepared for the technological landscape of the future," says the IEEE Computer Society. For more information, check out the official IEEE Computer Society announcement.

Key trends in software development in 2019: cloud native and the shrinking stack
Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity
Quantum computing, edge analytics, and meta learning: key trends in data science and big data in 2019