
Tech News - IoT & Hardware

119 Articles
Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration

Savia Lobo
17 Aug 2018
3 min read
Yesterday, Microsoft and Amazon announced a public preview of the integration of their intelligent digital assistants, Cortana and Alexa, for US users. The integration lets each assistant summon the other and access additional apps and services on Windows 10 PCs and Harman Kardon Invoke speakers. The digital assistant integration was first announced on 30th August last year and was demonstrated at the Microsoft Build 2018 developer conference.

https://www.youtube.com/watch?v=KxwjnuhNVIY

Why did Microsoft and Amazon connect Cortana and Alexa?

"I want them to have access to as many of those A.I.s as possible," said Jeff Bezos in an interview with The New York Times, putting forth his vision of users communicating with AIs the way they do with friends: asking for a good restaurant recommendation, a famous hiking spot nearby, and so on. He further stated, "The world is big and so multifaceted. There are going to be multiple successful intelligent agents, each with access to different sets of data and with different specialized skill areas. Together, their strengths will complement each other and provide customers with a richer and even more helpful experience." Satya Nadella, CEO of Microsoft, added, "Bringing Cortana's knowledge, Office 365 integration, commitments, and reminders to Alexa is a great step toward that goal."

The integration also gives Cortana users another way to make their lives easier with a better shopping experience. For instance, if you are at work on your Windows 10 PC and remember you need soft drinks for a dinner party that evening, you can simply ask Alexa to order them using the preferred payment method on your Amazon account.

"Alexa, open Cortana" / "Hey Cortana, open Alexa"

To try this update, simply say "Hey Cortana, open Alexa" on a Windows 10 PC, or "Alexa, open Cortana" on an Echo device.

https://twitter.com/tomwarren/status/1029722099789832200

As Amazon explained in a recent post, "The goal is to have two integrated digital assistants who can carry out tasks across different dimensions of daily life — at home or work, and on whatever device is most convenient. Currently, Cortana and Alexa can each be enabled as a skill on the other."

Microsoft Office 365 users can ask Cortana to summon Alexa through a PC at work, then use Alexa to order groceries or adjust the thermostat before heading home for the day. Before heading to work, one could enlist Cortana through an Echo device to preview the day's calendar, add an item to a to-do list, or check for new email while making breakfast in the kitchen.

As part of this public preview, users can freely offer feedback to help both companies improve the Alexa+Cortana experience: what users like, what they don't, and which features they use most. With that feedback, the experience should keep getting better and more precise as more people use it and as the underlying algorithms are updated. "Engineers will use feedback from the public preview to deepen the collaboration between Cortana and Alexa," stated Jennifer Langston in the official post.

Read more about this collaboration in detail on the Amazon blog and Microsoft blog.

Amazon Alexa and AWS helping NASA improve their efficiency
Amazon Echo vs Google Home: Next-gen IoT war
Microsoft Azure's new governance DApp: An enterprise blockchain without mining

It’s Day 1 for Amazon Devices: Amazon expands its Echo device lineup, previews Alexa Presentation Language and more

Sugandha Lahoti
21 Sep 2018
4 min read
Amazon unveiled a range of Echo devices at the Amazon Devices Event hosted at its Seattle headquarters yesterday. The products announced included a revamped selection of Amazon's smart speakers (Echo Sub, Echo Dot, and Echo Plus), smart displays (the Echo Show and Echo Spot), and other smart devices, among them a smart microwave (AmazonBasics Microwave), the Echo Wall Clock, Fire TV Recast, and the Amazon Smart Plug. The event marks the largest number of devices and features (over 30) that Amazon has ever launched in a single day.

Alexa Presentation Language

For developers, Amazon introduced the Alexa Presentation Language (APL) to make it easy to create Alexa skills for devices with screens. APL is in preview and allows developers to build voice experiences with graphics, images, slideshows, and video. Developers will be able to control how graphics flow with voice, customize visuals, and adapt them to Alexa devices and skills. Supported devices will include the Echo Show, Echo Spot, Fire TV, and select Fire Tablet devices.

Now let's take a broad look at the key device announcements.

Amazon Smart Speakers

Echo Dot: The new version of the smart speaker offers 70 percent louder sound than its predecessor. It is a voice-controlled smart speaker with Alexa integration that can play music, deliver news and information, and more. The driver has grown from 1.1" to 1.6" for better sound clarity and improved bass. It is Bluetooth enabled, so you can connect it to another speaker or use it all by itself.

Echo Input: If you already have speakers, this device adds Alexa voice control to them via a 3.5mm audio cable or Bluetooth. It has a four-microphone array, and at just 12.5mm tall it is thin enough to disappear into the room. It will be available later this year for $34.99.

Echo Plus: Echo Plus combines Amazon's cloud-based Natural Language Understanding and Automatic Speech Recognition with a built-in Zigbee hub to make it one of the premier smart speakers. It also has a new fabric casing and a built-in temperature sensor. Pre-orders for this model begin today at $149.99.

Echo Link: The Echo Link connects to a receiver or amplifier, with multiple digital and analog inputs and outputs for compatibility with your existing stereo equipment. It can control music selection, volume, and multi-room playback on your stereo through your Echo or the Alexa app. Echo Link will be available to customers soon.

Echo Sub: This 100-watt subwoofer can connect to other speakers to create a 2.1 sound setup. The $129.99 Echo Sub will launch later this month, with pre-orders beginning today.

Amazon Smart Displays

Echo Show: The new Echo Show is completely redesigned, with a larger screen, a smart home hub, and improved sound quality. Amazon is also introducing Doorbell Chime Announcements, so users will hear a chime on all Echo devices when someone presses their smart doorbell. The Echo Show includes a high-resolution 10-inch HD display and an 8-mic array. It will be available for $229.99, with shipping starting next month.

Other Smart Devices

Echo Wall Clock: A $30 Echo companion device, this 10-inch, battery-powered analog clock features Alexa-powered voice recognition and a ring of 60 LEDs around the rim that show ongoing Alexa timers. It also syncs time automatically, including Daylight Saving Time adjustments.

AmazonBasics Microwave: A $59.99 voice-activated microwave. It features Dash Replenishment and an array of Alexa features, including integration with connected ovens, door locks, and other smart fixtures, reminders, and access to more than 50,000 third-party skills.

Fire TV Recast: A companion DVR that lets users watch, record, and replay free over-the-air programming on any Fire TV or Echo Show, and on compatible Fire tablets and mobile devices. Users can record up to two or four shows at once and stream on any two devices at a time. It can also be paired with Alexa.

Amazon Smart Plug: The Amazon Smart Plug works with Alexa to add voice control to any outlet. You can schedule lights, fans, and appliances to turn on and off automatically, or control them remotely when you're away.

Follow along with the live blog of the event for a minute-by-minute update.

Google to allegedly launch a new Smart home device
Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration
The iRobot Roomba i7+ is a cleaning robot that maps and stores your house and also empties the trash automatically
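The Alexa Presentation Language previewed above is a JSON-based document format for pairing visuals with a voice experience. A minimal sketch of what such a document might look like, assuming the general shape of early APL documents (the component types and fields here are illustrative, not a definitive reference):

```python
import json

# Rough sketch of an APL document: a JSON structure whose mainTemplate
# describes the visual components (text, images, containers) a skill
# renders on screen-equipped devices. Field names are illustrative.
apl_document = {
    "type": "APL",
    "version": "1.0",
    "mainTemplate": {
        "items": [
            {
                "type": "Container",
                "items": [
                    {"type": "Image", "source": "https://example.com/echo.png"},
                    {"type": "Text", "text": "Welcome to the Echo lineup"},
                ],
            }
        ]
    },
}

# A skill would return this document alongside its voice response.
print(json.dumps(apl_document, indent=2))
```

The key idea is that the visual layer is declarative data, so Alexa can adapt it per device rather than the skill hard-coding pixels.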

Intel releases patches to add Linux Kernel support for upcoming dedicated GPU releases

Melisha Dsouza
18 Feb 2019
2 min read
Last week, Intel released a big patch series introducing the concept of memory regions to the Intel "i915" Linux kernel DRM graphics driver. Intel stated that the patches were in "preparation for upcoming devices with device local memory," without giving any specific details of those "upcoming devices."

In December 2018, Intel made clear that it is working on everything from integrated GPUs and discrete graphics for gaming to GPUs for data centers. Fast forward to 2019, and Intel is now testing the drivers required to make them run. Phoronix was the first to speculate that this device-local memory is for Intel's discrete graphics cards with dedicated vRAM, expected to debut in 2020. Specifying its motivation behind the new patches, Intel tweeted:

https://twitter.com/IntelGraphics/status/1096537915222642689

Once implemented, among other features, the patches will allow a system to:

- Have different "regions" of memory for system memory and for any device local memory (LMEM).
- Introduce a simple allocator and allow the existing GEM memory management code to allocate memory to different memory regions.
- Provide fake LMEM (local memory) regions to exercise the new code path.

These patches lay the groundwork for Linux support for the upcoming dedicated GPUs. According to Phoronix's Michael Larabel, "With past generations of Intel graphics, we generally see the first Linux kernel patches roughly a year or so out from the actual hardware debut."

Twitter users have expressed enthusiasm about the announcement:

https://twitter.com/benjamimgois/status/1096544747597037571
https://twitter.com/ebound/status/1096498313392783360

You can head over to Freedesktop.org to have a look at these patches.

Researchers prove that Intel SGX and TSX can hide malware from antivirus software
Uber releases AresDB, a new GPU-powered real-time Analytics Engine
TensorFlow team releases a developer preview of TensorFlow Lite with new mobile GPU backend support
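The memory-region idea in the patch series, distinct regions for system memory and device-local memory with a simple allocator on top, can be modeled loosely as follows. This is an illustrative sketch of the concept only, not the actual i915 kernel interfaces:

```python
# Illustrative model of the "memory regions" concept from the patch
# series: distinct regions for system memory and device-local memory
# (LMEM), plus a trivial allocator that places buffers in a chosen
# region. This sketches the idea only; it is not the i915 kernel API.

class MemoryRegion:
    def __init__(self, name, size):
        self.name = name      # e.g. "SMEM" or "LMEM"
        self.size = size      # capacity in bytes
        self.used = 0         # simple bump-pointer high-water mark

    def alloc(self, nbytes):
        """Hand out a fake buffer handle from this region."""
        if self.used + nbytes > self.size:
            raise MemoryError(f"{self.name}: region exhausted")
        offset = self.used
        self.used += nbytes
        return (self.name, offset, nbytes)

# One region for system memory and one "fake LMEM" region, mirroring
# how the patches exercise the new code path without real hardware.
smem = MemoryRegion("SMEM", 8 * 1024**3)
lmem = MemoryRegion("LMEM", 4 * 1024**3)

handle = lmem.alloc(64 * 1024**2)
print(handle)  # ('LMEM', 0, 67108864)
```

The real driver must additionally handle eviction, migration between regions, and fencing, which is what makes the kernel-side groundwork substantial.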

Researchers at UC Berkeley's Robot Learning Lab introduce Blue, a new low-cost force-controlled robot arm

Bhagyashree R
18 Apr 2019
2 min read
Yesterday, a team of researchers from UC Berkeley's Robot Learning Lab announced the completion of their three-year project, Blue: a low-cost, high-performance robot arm built to work in real-world environments such as warehouses, homes, hospitals, and urban landscapes.

https://www.youtube.com/watch?v=KZ88hPgrZzs&feature=youtu.be

With Blue, the researchers aim to significantly accelerate research toward useful home robots. Blue is capable of mimicking human motions in real-world environments and enables more intuitive teleoperation. Pieter Abbeel, director of the Berkeley Robot Learning Lab and co-founder and chief scientist of AI startup Covariant, shared the vision behind the project: "AI has been moving very fast, and existing robots are getting smarter in some ways on the software side, but the hardware's not changing. Everybody's using the same hardware that they've been using for many years . . . We figured there must be an opportunity to come up with a new design that is better for the AI era."

Blue design details

Blue's dynamic properties meet or exceed the needs of a human operator: the robot has a nominal position-control bandwidth of 7.5 Hz and repeatability within 4mm. It is a kinematically anthropomorphic robot arm with a 2 kg payload that can cost less than $5,000. It has 7 degrees of freedom: 3 in the shoulder, 1 in the elbow, and 3 in the wrist. Blue uses quasi-direct drive (QDD) actuators, which offer better force control and selectable impedance and are highly backdrivable. These actuators make Blue resilient to damage and safer for humans to be around.

The team is first distributing early-release arms to developers and industry partners, and a product release could follow within the next six months. The team is also planning a production edition of the Blue robot arm, to be available by 2020. To read more on Blue, check out the Berkeley Open Arms site.

Walmart to deploy thousands of robots in its 5000 stores across US
Boston Dynamics' latest version of Handle, robot designed for logistics
Setting up a Raspberry Pi for a robot – Headless by Default [Tutorial]

Mozilla re-launches Project Things as WebThings, an open platform for monitoring and controlling devices

Bhagyashree R
19 Apr 2019
3 min read
Yesterday, the Mozilla IoT team announced that 'Project Things' has graduated from its early experimental phase under a new name: 'WebThings'. Mozilla WebThings is an open platform for monitoring and controlling devices over the web. The project is an open-source implementation of the Web of Things, which defines software architectural styles and programming patterns that allow real-world objects to be part of the World Wide Web. The idea is to create a decentralized Internet of Things by giving "things" URLs on the web, making them linkable and discoverable.

Mozilla WebThings comprises two components:

WebThings Gateway

WebThings Gateway is a privacy- and security-focused software distribution built for smart home gateways. It enables you to directly monitor and control your smart home over the web, without relying on a middleman. Mozilla also announced that WebThings Gateway 0.8 is now available for download. This release comes with a feature that allows users to privately log data from their smart home devices; the logged data can also be visualized with interactive graphs. "This feature is still experimental, but viewing these logs will help you understand the kinds of data your smart home devices are collecting and think about how much of that data you are comfortable sharing with others via third-party services," said Ben Francis, a Software Engineer at Mozilla.

The release also brings new alarm capabilities for devices like smoke, carbon monoxide, and motion detectors. Users can configure rules to alert them when an alarm is triggered while they are away, or check whether an alarm is currently active. The team has also started working on a new version of WebThings Gateway for OpenWrt, a Linux operating system targeting embedded devices. This version will be designed to act as a WiFi access point itself, instead of just connecting to an existing wireless network as a client.

WebThings Framework

WebThings Framework is a suite of reusable software components for building your own web things, which directly expose the Web Thing API. This makes them easily discoverable by a Web of Things gateway or client, which can then automatically detect each device's capabilities and monitor and control it over the web. The components are implemented in a range of languages, including Node.js, Python, Java, Rust, and C++ (for Arduino).

To know more, check out the official announcement by Mozilla.

Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
Mozilla developers have built BugBug which uses machine learning to triage Firefox bugs
Mozilla adds protection against fingerprinting and Cryptomining scripts in Firefox Nightly and Beta
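Under the Web of Things approach described above, a web thing exposes a JSON description of itself at a URL so that a gateway or client can discover its capabilities. A rough sketch of such a description, assuming the general shape of the Web Thing API draft (the field names and paths here are illustrative):

```python
import json

# Sketch of a "thing description" for a web-connected lamp: the device
# advertises its type and properties as JSON, and each property gets a
# URL (href) through which it can be read or written over HTTP.
# Field names follow the general shape of the Web Thing API draft and
# should be treated as illustrative.
lamp_description = {
    "name": "My Lamp",
    "type": "onOffSwitch",
    "description": "A web-connected lamp",
    "properties": {
        "on": {
            "type": "boolean",
            "description": "Whether the lamp is switched on",
            "href": "/things/lamp/properties/on",
        }
    },
}

print(json.dumps(lamp_description, indent=2))
```

Because the description is plain JSON over HTTP, any client that can fetch a URL can discover and control the device, which is what makes the approach decentralized.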

XHCI (USB 3.0+) issues have finally been resolved!

Amrata Joshi
11 Mar 2019
2 min read
Users have been facing issues with the XHCI (USB 3 host controller) bus driver for quite some time now. Last month, Waddlesplash, a Haiku team member, worked on fixing the XHCI bus driver, and a few users contributed small fixes that helped the driver boot Haiku within QEMU. Still, some issues remained that caused device lockups, such as USB mouse and keyboard stalls.

The kernel-related issues have now been resolved: devices no longer lock up, and performance has improved greatly, reaching 120MB/s on some USB3 flash drives and XHCI chipsets. Users can now try the improved, more efficient driver. The only remaining issue is a hard stall on boot with certain USB3 flash drives on NEC/Renesas controllers.

Work on supporting USB2 flash drives in USB3 ports, and on mounting flash drives, has finished. Most of the controller-initialization issues were fixed in hrev52772, and the broken transfer-finalization logic and random device stalls have been fixed as well. The race condition in request submission has been fixed, dead code has been removed, the style has been cleaned up, and the device structure has been improved. Haiku suggests this driver may now be more useful as a reference for other OS developers than FreeBSD's, OpenBSD's, or Linux's.

To know more about this news, check out Haiku's official blog post.

USB 4 will integrate Thunderbolt 3 to increase the speed to 40Gbps
USB-IF launches 'Type-C Authentication Program' for better security
Google releases two new hardware products, Coral dev board and a USB accelerator built around its Edge TPU chip
ROS Melodic Morenia released

Gebin George
28 May 2018
2 min read
ROS is a middleware with a set of tools and software frameworks for building and simulating robots. ROS follows a stable release cycle, with a new version every year on the 23rd of May. This year, on that date, ROS released its Melodic Morenia version with a decent number of enhancements and upgrades. Following are the release notes:

class_loader header deprecation

class_loader's headers have been renamed and the previous ones deprecated, in an effort to bring them closer to multi-platform support and the ROS 2 counterpart. You can refer to the provided migration script for the header replacements, and PRs will be released for the packages in previous ROS distributions.

kdl_parser package enhancement

kdl_parser has deprecated a method that was tied to tinyxml (itself already deprecated). The tinyxml2 replacement is:

bool treeFromXml(const tinyxml2::XMLDocument * xml_doc, KDL::Tree & tree)

The deprecated API will be removed in N-turtle.

OpenCV version update

For standardization reasons, the supported OpenCV version is restricted to 3.2.

Enhancements in pluginlib

As with class_loader, the headers were deprecated here as well, to bring them closer to multi-platform support. plugin_tool, which had been deprecated for years, has finally been removed in this version.

For more updates on the packages of ROS, refer to the ROS Wiki page.

Raspberry Pi 4 has a USB-C design flaw, some power cables don't work

Vincy Davis
10 Jul 2019
5 min read
Raspberry Pi 4 was released last month with much hype and promotion. It has a 1.5GHz quad-core 64-bit ARM Cortex-A72 CPU, memory options of up to 4GB, full-throughput gigabit Ethernet, and a USB-C port as a power connector, the first USB-C addition on a Pi board. However, four days after its release, Tyler Ward, an electronics and product engineer, disclosed that the new Pi 4 does not charge when used with electronically marked (e-marked) USB-C cables, the type used by Apple MacBooks and other laptops. Two days ago, Pi co-creator Eben Upton confirmed the problem: "A smart charger with an e-marked cable will incorrectly identify the Raspberry Pi 4 as an audio adapter accessory, and refuse to provide power."

Upton adds that Tyler Ward's technical breakdown of the underlying issue in the Pi 4's circuitry offers a detailed overview of why e-marked USB-C cables won't power the Pi. According to Ward's blog, "The root cause of the problem is the shared cc pull down resistor on the USB Type-C connector. By looking at the reduced pi schematics, we can see it as R79 which connects to both the CC lines in the connector." "With most chargers this won't be an issue as basic cables only use one CC line which is connected through the cable and as a result the pi will be detected correctly and receive power. The problem comes in with e-marked cables which use both CC connections," he adds.

Ward suggests some workarounds. First, use a non-e-marked cable, which most USB-C phone charger cables are likely to be, rather than an e-marked one. Older chargers with A-to-C cables or micro-B-to-C adaptors will also work if they provide enough power, as these don't require CC detection. The complete solution would be for Raspberry Pi, in a future board revision, to add a second CC resistor and fix the problem.

Another option is to buy the $8/£8 official Raspberry Pi 4 power supply. In a statement to TechRepublic, Upton added, "It's surprising this didn't show up in our (quite extensive) field testing program."

Benson Leung, a Google Chrome OS engineer, also criticized Raspberry Pi in a Medium blog post sarcastically titled "How to design a proper USB-C™ power sink (hint, not the way Raspberry Pi 4 did it)". Leung identifies two critical mistakes on Raspberry Pi's part. First, Raspberry Pi should have copied the figure from the USB-C spec exactly instead of designing a new circuit; he says Raspberry Pi "designed this circuit themselves, perhaps trying to do something clever with current level detection, but failing to do it right." The second mistake, he says, is that they didn't actually test the Pi 4 design with advanced cables: "The fact that no QA team inside of Raspberry Pi's organization caught this bug indicates they only tested with one kind (the simplest) of USB-C cables," he adds.

Many users agreed with Leung and expressed their own views on the faulty USB-C design, finding it hard to believe that Raspberry Pi shipped these boards without trying a MacBook charger. A user on Hacker News comments, "I find it incredible that presumably no one tried using a MacBook charger before this shipped. If they did and didn't document the shortcoming that's arguably just as bad. Surely a not insignificant number of customers have MacBooks? If I was writing some test specs this use case would almost certainly feature, given the MacBook Pro's USB C adapter must be one of the most widespread high power USB C charger designs in existence. Especially when the stock device does not ship with a power supply, not like it was unforeseeable some customers would just use the chargers they already have." Some are glad they have not ordered their Raspberry Pi 4 yet.

https://twitter.com/kb2ysi/status/1148631629088342017

However, some users believe it's not that big a deal.

https://twitter.com/kb2ysi/status/1148635750210183175

A user on Hacker News comments, "Eh, it's not too bad. I found a cable that works and I'll stick to it. Even with previous-gen Pis there was always a bit of futzing with cables to find one that has small enough voltage drop to not get power warnings (even some otherwise "good" cables really cheap out on copper). The USB C thing is still an issue, and I'm glad it'll be fixed, but it's really not that big of a deal."

Neither Upton nor Raspberry Pi has disclosed a schedule for the board revision so far.

10+ reasons to love Raspberry Pi
You can now install Windows 10 on a Raspberry Pi 3
Raspberry Pi opens its first offline store in England
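Ward's explanation of the shared CC pull-down can be reduced to a toy model: a USB-C source decides what is attached by the terminations it sees on the two CC lines, and tying both Pi CC pins to one resistor makes an e-marked cable present identical terminations, which reads as an audio adapter accessory. This is a deliberate simplification of the USB-C detection rules, not the full spec logic:

```python
# Toy model of the Pi 4 USB-C detection bug. A source classifies the
# far end by the CC-line terminations: a sink termination on exactly
# one CC line means "sink, provide power", while matching terminations
# on both lines look like an audio adapter accessory. The Pi 4 ties
# both CC pins to a single shared pull-down (R79), so any cable that
# connects both CC lines (an e-marked cable) triggers the bad case.
# Illustrative simplification only, not the full USB-C state machine.

def source_sees(board_shares_cc_pulldown, cable_uses_both_cc):
    if cable_uses_both_cc and board_shares_cc_pulldown:
        return "audio adapter accessory (no power)"
    return "sink (provide power)"

# Basic charger cable: only one CC line wired through, Pi powers fine.
print(source_sees(board_shares_cc_pulldown=True, cable_uses_both_cc=False))
# E-marked (MacBook-style) cable with the Pi 4's shared pull-down:
print(source_sees(board_shares_cc_pulldown=True, cable_uses_both_cc=True))
# Proposed board revision: a second CC resistor removes the sharing.
print(source_sees(board_shares_cc_pulldown=False, cable_uses_both_cc=True))
```

The third call shows why the suggested fix of adding a second CC resistor resolves the issue for every cable type.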

Walmart to deploy thousands of robots in its 5000 stores across US

Fatema Patrawala
12 Apr 2019
4 min read
Walmart, the world's largest retailer, is following the latest tech trend and going all in on robots. It plans to deploy thousands of robots for lower-level jobs in 5,000 of its 11,348 US stores. In a statement released on its blog on Tuesday, the retail giant said that it is unleashing a number of technological innovations, including autonomous floor cleaners, shelf scanners, conveyor belts, and "pickup towers," on stores across the United States.

Elizabeth Walker from Walmart Corporate Affairs says, "Every hero needs a sidekick, and some of the best have been automated. Smart assistants have huge potential to make busy stores run more smoothly, so Walmart has been pioneering new technologies to minimize the time an associate spends on the more mundane and repetitive tasks like cleaning floors or checking inventory on a shelf. This gives associates more of an opportunity to do what they're uniquely qualified for: serve customers face-to-face on the sales floor."

Walmart announced that it would be adding 1,500 new floor cleaners, 300 more shelf scanners, 1,200 conveyor belts, and 900 new pickup towers, after testing in dozens of markets and hundreds of stores to prove the robots' effectiveness. Replacing people with machines for certain job roles will also reduce costs for Walmart: if you are not hiring people, they can't quit, demand a living wage, or take sick days, resulting in better margins and efficiencies. According to Walmart CEO Doug McMillon, "Automating certain tasks gives associates more time to do work they find fulfilling and to interact with customers."

Continuing this logic, the retailer points to robots as a source of greater efficiency, increased sales, and reduced employee turnover. "Our associates immediately understood the opportunity for the new technology to free them up from focusing on tasks that are repeatable, predictable and manual," John Crecelius, senior vice president of central operations for Walmart US, said in an interview with Business Insider. "It allows them time to focus more on selling merchandise and serving customers, which they tell us have always been the most exciting parts of working in retail."

With the war for talent raging in the world of retail and demands for minimum-wage hikes a frequent occurrence, Walmart's expanding robot army is a signal that the company is committed to keeping labor costs down. Does that mean cutting jobs or restructuring its workforce? Walmart has not specified how many jobs it will cut as a result of this move, but when automation takes place at the largest retailer in the US, significant job losses can be expected.

https://twitter.com/NoelSharkey/status/1116241378600730626

Early last year, Bloomberg reported that Walmart was removing around 3,500 store co-managers, a salaried role that acts as a lieutenant under each store manager. The US in particular has an inordinately high proportion of employees performing routine functions that could be easily automated; as such, retail automation is bound to hit them the hardest. With costs on the rise, and Amazon a constant looming threat that has contributed to the closing of thousands of mom-and-pop stores across the US, it was inevitable that Walmart would turn to automation to stay competitive. As the largest retail employer in the US transitions to an automated retailing model, it could leave a good proportion of the 704,000-strong US retail workforce unemployed, underemployed, or unready to transition into other jobs.

How much Walmart assists its redundant workforce in transitioning to other livelihoods will be a litmus test of its widely held image as a caring employer, in contrast to Amazon's ruthless image.

How Rolls Royce is applying AI and robotics for smart engine maintenance
AI powered Robotics: Autonomous machines in the making
Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics

Intel unveils the first 3D Logic Chip packaging technology, ‘Foveros’, powering its new 10nm chips, ‘Sunny Cove’

Savia Lobo
13 Dec 2018
3 min read
Yesterday, the chip manufacturing giant unveiled Foveros, its new 3D packaging technology, which makes it possible to stack logic chips on top of one another. Intel says the first products to use Foveros will appear in the second half of next year. Talking about the stacking approach, Raja Koduri, Intel’s chief architect, said, “You can pack more transistors in a given space. And also you can pack different kinds of transistors; if you want to put a 5G radio right on top of a CPU, solving the stacking problem would be great, because you have all of your functionality but also a small form factor.”

With the Foveros technology, Intel will allow for smaller "chiplets": fast logic chips sitting atop a base die that handles power delivery and I/O. The project will also help Intel overcome one of its biggest challenges, i.e., building full chips at the 10nm scale. The first Foveros-backed product will be a 10-nanometer compute element on a base die, typically used in low-power devices.

Source: Intel

Sunny Cove: Intel’s codename for the new 10nm chips

Sunny Cove will be at the heart of Intel’s next-generation Core and Xeon processors, which will be available in the latter half of next year. According to Intel, Sunny Cove will provide improved latency and allow more operations to be executed in parallel (thus acting more like a GPU). On the graphics front, Intel also has new Gen11 integrated graphics “designed to break the 1 TFLOPS barrier,” which will be part of these Sunny Cove chips. Intel also promises improved speeds in AI-related tasks, cryptography, and machine learning, among other new features, with the CPUs.

According to a detailed report by Ars Technica, “Sunny Cove makes the first major change to x64 virtual memory support since AMD introduced its x86-64 64-bit extension to x86 in 2003. Bits 0 through 47 are used, with the top 16 bits, 48 through 63, all copies of bit 47. This limits virtual address space to 256TB. These systems can also support a maximum of 256TB of physical memory.”

Starting from the second half of next year, everything from mobile devices to data centers may feature Foveros processors over time. “The company wouldn't say where, exactly, the first Foveros-equipped chip will end up, but it sounds like it'll be ideal for incredibly thin and light machines,” Engadget reports.

To know more about this news in detail, visit the Intel Newsroom.

Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
Apple T2 security chip has Touch ID, Security Enclave, hardware to prevent microphone eavesdropping, amongst many other features!
How the Titan M chip will improve Android security
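The 48-bit addressing scheme Ars Technica describes is easy to verify with a little arithmetic: with bits 0–47 significant and bits 48–63 required to be copies of bit 47 (a "canonical" address), the virtual address space is 2^48 bytes = 256TB. A minimal Python sketch of that check (illustrative only; the function names are ours, not Intel's):

```python
# 48-bit canonical addressing, as used by x86-64 before 5-level paging:
# bits 0-47 are significant, bits 48-63 must all be copies of bit 47.

VA_BITS = 48

def virtual_address_space_bytes(bits: int) -> int:
    """Total addressable bytes for a given number of virtual address bits."""
    return 2 ** bits

def is_canonical(addr: int, bits: int = VA_BITS) -> bool:
    """An address is canonical if bits 48-63 all replicate bit 47."""
    top = addr >> (bits - 1)  # bit 47 plus everything above it
    return top == 0 or top == (1 << (64 - bits + 1)) - 1

print(virtual_address_space_bytes(VA_BITS) // 2 ** 40)  # 256 (TB)
print(is_canonical(0x0000_7FFF_FFFF_FFFF))  # True: top of the lower half
print(is_canonical(0xFFFF_8000_0000_0000))  # True: bottom of the upper half
print(is_canonical(0x0001_0000_0000_0000))  # False: non-canonical hole
```

The gap of non-canonical addresses in the middle is what lets future processors (such as ones with 5-level paging) widen the usable range without breaking existing software.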
Prasad Ramesh
12 Sep 2018
2 min read

The new Bolt robot from Sphero wants to teach kids programming

Sphero, a robotic toy company, has announced its latest Bolt robotic ball, aimed at teaching kids basic programming. The robot is 73mm in diameter and packs advanced sensors, an 8x8 LED matrix inside a transparent casing, and infrared sensors for communicating with other Bolt robots. The matrix displays helpful prompts, such as a lightning bolt while the Bolt is charging, and users can fully program it to display a wide variety of icons tied to certain actions: a smiley face when a program completes, a sad face on failure, or arrows for direction changes.

The new Bolt has a longer battery life of around two hours and charges back up in six hours. It connects to the Sphero Edu app, where you can use community-created activities, build your own, or analyze sensor data. The casing is now transparent instead of the opaque colored shells of previous Sphero balls. The sphere weighs 200g in all, and its infrared sensors let it detect other nearby Bolts to interact with; users can program specific interactions between multiple Bolts.

The Edu app supports coding by drawing on the screen or via Scratch blocks. You can also use JavaScript to program the robot and create custom games and drawings. Sensors track speed, acceleration, and direction, or can be used to drive the Bolt; this can be done without having to aim, since the Bolt has a compass. There is also an ambient light sensor that allows programming the Bolt based on the room’s brightness. Beyond education, you can simply drive the Bolt and play games with the Sphero Play app.

Source: Sphero website

It sounds like a useful little robot and is available now to consumers for $149.99. Educators can also buy the Bolt in 15-packs for classroom learning. For more details, visit the Sphero website.

Is ROS 2.0 good enough to build real-time robotic applications? Spanish researchers find out.
How to assemble a DIY selfie drone with Arduino and ESP8266
ROS Melodic Morenia released
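Sphero's own tools program the Bolt in Scratch blocks or JavaScript; the class below is not Sphero's API, just an illustrative Python sketch of the idea behind a fully programmable 8x8 LED matrix, with icons expressed as bitmaps:

```python
# Illustrative sketch only (not the Sphero Edu API): icons for an 8x8
# LED matrix can be described as bitmaps of '.' (off) and '#' (on).

SMILEY = [
    "........",
    ".#....#.",
    ".#....#.",
    "........",
    "#......#",
    ".#....#.",
    "..####..",
    "........",
]

class Matrix8x8:
    def __init__(self):
        self.pixels = [[False] * 8 for _ in range(8)]

    def draw(self, bitmap):
        """Load an 8x8 icon given as eight strings of '.' and '#'."""
        for y, row in enumerate(bitmap):
            for x, ch in enumerate(row):
                self.pixels[y][x] = ch == "#"

    def lit_count(self):
        """Number of LEDs currently switched on."""
        return sum(cell for row in self.pixels for cell in row)

m = Matrix8x8()
m.draw(SMILEY)
print(m.lit_count())  # → 12 lit LEDs for the smiley icon
```

A real program would swap bitmaps in response to events (program completed, failure, direction change), exactly as the article describes.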

Guest Contributor
20 Sep 2018
2 min read

Google to allegedly launch a new Smart home device

In the midst of all the leaks about the Pixel 3 and Pixel 3 XL, and whether Google will embrace an iPhone-like notch or offer wireless charging, reports have surfaced that Google has even more to showcase at its big “Made by Google” hardware event on October 9. According to a report from MySmartPrice, Google might launch a new smart speaker called the “Google Home Hub,” sporting a 7-inch display with large, squarish speakers, in two variants: Chalk White and Charcoal.

Image source: MySmartPrice

Google has been quite successful with smart home devices like the Google Home series, but after Amazon teased its screen-equipped smart home device, the Amazon Echo Show, the tech giant was keen to build a product to compete with its rival. If the leaked news from MySmartPrice is to be believed, the Google Home Hub, powered by Google Assistant, will let users watch YouTube, HBO, and videos from other content providers. Additionally, the device will display time, weather, daily commute information, and other regular Google Assistant features. However, it will not run a full-fledged Android OS.

While the device comes packed with Google software, what seems to be missing, based on the leaks, is a camera. It would have been ideal if the device sported a camera for video calling, as Google is aggressively marketing its video calling app, Google Duo. The device will, however, feature WiFi and Bluetooth.

Image source: MySmartPrice

With the new device, Google might also introduce new features for Google Assistant. There is no confirmation from Google regarding the product yet, but the timing makes perfect sense: Google's upcoming event on October 9 would be the perfect place to announce a Google Home Hub along with its much-awaited Pixel smartphone series.

Read the full article on MySmartPrice.

Author Bio: Full-time Linux admin, part-time reader, always up for the latest technology and a cup of tea; interested in cloud services, machine learning, and artificial intelligence.

Amazon Echo vs Google Home: Next-gen IoT war.
Home Assistant: an open source Python home automation hub to rule all things smart.
Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration.

Savia Lobo
29 Jun 2018
3 min read

Microsoft Azure IoT Edge is open source and generally available!

Microsoft recently announced that Azure IoT Edge is generally available and open source. Its preview was announced at Microsoft Build 2017, where the company explained how the service extends cloud intelligence to edge devices.

Microsoft Azure IoT Edge is a fully-managed cloud service that helps enterprises generate useful insights from the data collected by Internet of Things (IoT) devices. It enables one to deploy and run Artificial Intelligence services, Azure services, and custom logic directly on cross-platform IoT devices, delivering cloud intelligence locally.

Additional features in Azure IoT Edge include:

Support for the Moby container management system: Docker is built on Moby, an open-source platform, and this support allows Microsoft Azure to extend the concepts of containerization, isolation, and management from the cloud to devices at the edge.

Azure IoT Device Provisioning Service: allows customers to securely provision large numbers of devices, making edge deployments more scalable.

Tooling for VS Code: enables easy module development through coding, testing, debugging, and deploying.

Azure IoT Edge security manager: acts as a hardened security core for protecting the IoT Edge device and all its components by abstracting the secure silicon hardware.

Automatic Device Management (ADM): allows scaled deployment of IoT Edge modules to a fleet of devices based on device metadata. When a device with the right metadata (tags) joins the fleet, ADM brings down the right modules and puts the edge device in the correct state.

CI/CD pipeline with VSTS: manages the complete lifecycle of Azure IoT Edge modules, from development and testing to staging and final deployment.

Broad language support for module SDKs: Azure IoT Edge supports more languages than other edge offerings in the market, including C#, C, Node.js, Python, and Java, allowing one to program edge modules in the language of their choice.

Three components are required for an Azure IoT Edge deployment: the Azure IoT Edge runtime, Azure IoT Hub, and Edge modules. The Azure IoT Edge runtime is free and available as open source code. Customers will need an Azure IoT Hub instance for edge device management and deployment if they are not already using one for their IoT solution.

Read the full news coverage at the Microsoft Azure IoT blog post.

Read Next
Microsoft commits $5 billion to IoT projects
Epicor partners with Microsoft Azure to adopt Cloud ERP
Introduction to IOT
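The ADM behavior described above, matching deployments to devices by metadata tags, can be sketched in a few lines. This is not the Azure IoT Hub API, just an illustrative Python toy with made-up module and tag names:

```python
# Illustrative sketch, NOT the Azure IoT Hub API: Automatic Device
# Management targets a deployment at every device whose metadata tags
# satisfy that deployment's target condition. Names below are made up.

deployments = {
    "temperature-module": {"environment": "factory", "floor": "2"},
    "camera-module": {"environment": "warehouse"},
}

def modules_for_device(tags, deployments):
    """Return modules whose required tags are all present on the device."""
    return sorted(
        module
        for module, wanted in deployments.items()
        if all(tags.get(k) == v for k, v in wanted.items())
    )

device_tags = {"environment": "factory", "floor": "2", "model": "edge-v1"}
print(modules_for_device(device_tags, deployments))
# → ['temperature-module']
```

In the real service the "tags" live in the device twin and the target condition is a query expression, but the matching idea is the same: a device that joins the fleet with the right metadata automatically receives the right modules.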
Sugandha Lahoti
12 Mar 2019
2 min read

The Linux Foundation announces the CHIPS Alliance project for deeper open source hardware integration

In order to advance open source hardware, the Linux Foundation announced the new CHIPS Alliance project yesterday. Backed by Esperanto, Google, SiFive, and Western Digital, the CHIPS Alliance “will foster a collaborative environment that will enable accelerated creation and deployment of more efficient and flexible chip designs for use in mobile, computing, consumer electronics, and IoT applications.”

The project will help make open source CPU chip and system-on-a-chip (SoC) design more accessible to the market by creating an independent entity where companies and individuals can collaborate and contribute resources. It will provide the chip community with access to high-quality, enterprise-grade hardware, and will include a Board of Directors, a Technical Steering Committee, and community contributors who will work collectively to manage the project.

To initiate the process, Google will contribute a Universal Verification Methodology (UVM)-based instruction stream generator environment for RISC-V cores. The environment provides configurable, highly stressful instruction sequences that can verify architectural and micro-architectural corner cases of designs.

SiFive will improve the RocketChip SoC generator and the TileLink interconnect fabric in open source as a member of the CHIPS Alliance. It will also contribute to Chisel (a new open source hardware description language) and the FIRRTL intermediate representation specification, and will maintain Diplomacy, the SoC parameter negotiation framework.

Western Digital, another contributor, will provide its high-performance, 9-stage, dual-issue, 32-bit SweRV core, together with a test bench and a high-performance SweRV instruction set simulator. It will also contribute implementations of the OmniXtend cache coherence protocol.

Looking ahead, Dr. Yunsup Lee, co-founder and CTO of SiFive, said in a statement, “A healthy, vibrant semiconductor industry needs a significant number of design starts, and the CHIPS Alliance will fill this need.”

More information is available at the CHIPS Alliance website.

Mapzen, an open-source mapping platform, joins the Linux Foundation project
Uber becomes a Gold member of the Linux Foundation
Intel unveils the first 3D Logic Chip packaging technology, ‘Foveros’, powering its new 10nm chips, ‘Sunny Cove’.
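Google's actual contribution is a UVM (SystemVerilog) environment, but the core idea of an instruction stream generator, emitting randomized yet well-formed instruction sequences to stress a core, can be sketched in Python. This toy covers only a handful of RV32I R-type instructions and is purely illustrative:

```python
import random

# Toy sketch of what an instruction stream generator does. The real
# environment is UVM/SystemVerilog; this covers only a tiny, assumed
# subset of RV32I R-type instructions for illustration.

OPS = ["add", "sub", "and", "or", "xor"]
REGS = [f"x{i}" for i in range(1, 8)]  # skip x0, the hard-wired zero

def random_instruction(rng):
    """One well-formed R-type instruction: op rd, rs1, rs2."""
    op = rng.choice(OPS)
    rd, rs1, rs2 = (rng.choice(REGS) for _ in range(3))
    return f"{op} {rd}, {rs1}, {rs2}"

def random_stream(n, seed=0):
    """Generate n reproducible instructions (seeding makes a failing
    sequence replayable against the design under test)."""
    rng = random.Random(seed)
    return [random_instruction(rng) for _ in range(n)]

for line in random_stream(4):
    print(line)
```

A production generator adds constraints (configurable instruction mixes, branches, exceptions, privilege-level changes) precisely to hit the architectural corner cases the article mentions.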

Fatema Patrawala
14 Aug 2018
4 min read

Nvidia unveils a new Turing architecture: “The world’s first ray tracing GPU”

At the SIGGRAPH 2018 conference, Nvidia made its biggest announcements in years: a new Turing architecture and three new pro-oriented workstation graphics cards in its Quadro family. The company calls this its greatest leap since the introduction of the CUDA GPU in 2006. The Turing architecture features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing, together enabling real-time ray tracing. The two engines, along with more powerful compute for simulation and enhanced rasterization, will usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, amazing new effects powered by neural networks, and fluid interactivity on highly complex models.

The company also unveiled its initial Turing-based products: the NVIDIA Quadro RTX 8000, Quadro RTX 6000, and Quadro RTX 5000 GPUs, which are expected to revolutionize the work of approximately 50 million designers and artists across multiple industries.

At the conference, Jensen Huang, founder and CEO of Nvidia, said, “Turing is NVIDIA’s most important innovation in computer graphics in more than a decade. Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry.”

Here is the list of Turing architecture features in detail.

Real-Time Ray Tracing Accelerated by RT Cores

The Turing architecture is armed with dedicated ray-tracing processors called RT Cores. They accelerate the computation of how light and sound travel in 3D environments, at up to 10 GigaRays per second. Turing accelerates real-time ray tracing operations by up to 25x compared with the previous Pascal generation, and GPU nodes can be used for final-frame rendering of film effects at more than 30x the speed of CPU nodes.

AI Accelerated by Powerful Tensor Cores

The Turing architecture also features Tensor Cores, processors that accelerate deep learning training and inferencing, providing up to 500 trillion tensor operations per second. They will power AI-enhanced features for creating applications with new capabilities, including DLAA (deep learning anti-aliasing). DLAA is a breakthrough in high-quality motion image generation for denoising, resolution scaling, and video re-timing. These features are part of the NVIDIA NGX software development kit, a new deep learning-powered technology stack that enables developers to easily integrate accelerated, enhanced graphics, photo imaging, and video processing into applications with pre-trained networks.

Faster Simulation and Rasterization with the New Turing Streaming Multiprocessor

The new Turing-based GPUs feature a new streaming multiprocessor (SM) architecture that adds an integer execution unit executing in parallel with the floating point datapath, plus a new unified cache architecture with double the bandwidth of the previous generation. Combined with new graphics technologies such as variable rate shading, the Turing SM achieves unprecedented levels of performance per core. With up to 4,608 CUDA cores, Turing supports up to 16 trillion floating point operations in parallel with 16 trillion integer operations per second. Developers will be able to take advantage of NVIDIA’s CUDA 10, FleX, and PhysX SDKs to create complex simulations, such as particles or fluid dynamics for scientific visualization, virtual environments, and special effects.

The new Turing architecture has already received support from companies including Adobe, Pixar, Siemens, Blackmagic, Weta Digital, Epic Games, and Autodesk. The new Quadro RTX is priced at $2,300 for a 16GB version and $6,300 for the 24GB version; double the memory to 48GB and Nvidia expects you to pay about $10,000 for the high-end card.

For more information, visit the Nvidia official blog.

IoT project: Design a Multi-Robot Cooperation model with Swarm Intelligence [Tutorial]
Amazon Echo vs Google Home: Next-gen IoT war
5 DIY IoT projects you can build under $50
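The headline numbers above lend themselves to a quick sanity check: 16 trillion floating point operations per second spread over 4,608 CUDA cores is roughly 3.5 billion operations per core per second, and 10 GigaRays per second gives a per-frame ray budget at a given frame rate. A back-of-the-envelope sketch (the 60 fps figure is our assumption, not Nvidia's):

```python
# Back-of-the-envelope check of the Turing figures quoted in the article.
CUDA_CORES = 4_608
FLOPS_TOTAL = 16e12   # 16 trillion floating point ops per second
RAYS_PER_SEC = 10e9   # 10 GigaRays per second from the RT Cores
FPS = 60              # assumed target frame rate, for illustration

flops_per_core = FLOPS_TOTAL / CUDA_CORES
print(f"{flops_per_core:.2e} float ops per CUDA core per second")

rays_per_frame = RAYS_PER_SEC / FPS
print(f"{rays_per_frame:.1e} rays available per frame at {FPS} fps")
```

That per-frame ray budget, on the order of a hundred million rays, is why Nvidia pairs the RT Cores with Tensor Core denoising in hybrid rendering: far fewer rays per pixel are traced than offline renderers use, and AI cleans up the result.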