
Tech News - IoT & Hardware

119 Articles

Introducing Strato Pi: An industrial Raspberry Pi

Prasad Ramesh
26 Nov 2018
4 min read
Italian company Sfera Labs has designed Strato Pi, a Raspberry Pi based board intended for industrial applications, in areas where a higher level of reliability is required.

[Image source: Sfera Labs website]

Strato Pi features

The board is roughly the same size as a regular Raspberry Pi 2/3 and is engineered to work in industrial environments that demand more rugged devices.

A power supply that can handle harsh environments

The Strato Pi accepts a wide range of supply voltages and can handle substantial amounts of ripple, noise, and voltage fluctuation. The power supply circuit is heavily protected and filtered with oversized electrolytic capacitors, diodes, inductors, and a high-efficiency voltage regulator. The power converter is based on PWM converter integrated circuits, which provide up to 95% power efficiency and up to 3A of continuous output current. Overcurrent limiting, overvoltage protection, and thermal shutdown are also built in. The board is protected against reverse polarity with resettable fuses, and surge protection up to ±500V/2 ohms 1.2/50μs ensures reliability even in harsh environments.

A UPS to safeguard against power failure

In database and data-collection applications, a sudden power interruption may cause data loss. To tackle this, Strato Pi has an integrated UPS that gives the system enough time to save data and shut down when there is a power failure. The battery power supply stage of the board supplies power to the Strato Pi circuits without any interruption even when the main power supply fails. This stage also charges the battery via a high-efficiency step-up converter, generating the optimal charging voltage independently of the main power supply voltage.

Built-in real-time clock

The Strato Pi has a built-in, battery-backed real-time clock/calendar, connected directly to the Raspberry Pi via the I2C bus. It keeps the correct time even when there is no internet connection. The clock is based on the MCP79410, a general-purpose Microchip RTCC chip. A replaceable CR1025 battery acts as the backup power source when main power is unavailable; in an always-powered-on state, the battery can last over 10 years.

Serial ports

Strato Pi's RS-232 and RS-485 serial port interface circuits are insulated from the main and battery power supply voltages, which avoids failures due to ground loops. A microcontroller running a proprietary algorithm automatically manages the data direction of the RS-485 line, taking the baud rate and the number of bits into account without any special configuration. The Raspberry Pi board can therefore communicate through its TX/RX lines without any additional signals.

CAN bus

The Controller Area Network (CAN) bus is widely used and is based on a multi-master architecture. The board implements an easy-to-use CAN bus controller supporting CAN specification version 2.0B at up to 1 Mbps. The RS-485 and CAN bus ports can be used at the same time.

A hardware watchdog

A hardware watchdog is an electronic circuit that can automatically reset the processor if the software hangs. It is implemented with the help of the onboard microcontroller and is independent of the Raspberry Pi's internal CPU watchdog.

The base variant starts at roughly $88; Sfera Labs also offers a Mini variant and prebuilt products such as a server. For more details on Strato Pi, see the Sfera Labs website.
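Since the real-time clock is exposed to the Raspberry Pi over the I2C bus, it can be polled directly from Python. The following is a minimal sketch, not vendor code: the smbus2 library, bus number 1, the 0x6F slave address, and the BCD register layout are assumptions taken from Microchip's MCP79410 datasheet, so verify them against your board before relying on them.

```python
# Minimal sketch: reading the time from the MCP79410 RTCC over I2C.
# Assumptions: smbus2 is installed, the RTC sits on Raspberry Pi I2C
# bus 1 at the MCP79410's documented RTCC address (0x6F), and the
# time registers use the datasheet's BCD layout.
from smbus2 import SMBus

RTCC_ADDR = 0x6F  # MCP79410 RTCC/SRAM I2C address (per datasheet)

def bcd_to_int(value):
    """The MCP79410 time registers store each digit in BCD."""
    return (value >> 4) * 10 + (value & 0x0F)

with SMBus(1) as bus:
    # Registers 0x00-0x02 hold seconds, minutes, hours (BCD encoded).
    raw = bus.read_i2c_block_data(RTCC_ADDR, 0x00, 3)
    seconds = bcd_to_int(raw[0] & 0x7F)  # bit 7 is the oscillator-start bit
    minutes = bcd_to_int(raw[1] & 0x7F)
    hours = bcd_to_int(raw[2] & 0x3F)    # assumes 24-hour mode
    print(f"RTC time: {hours:02d}:{minutes:02d}:{seconds:02d}")
```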
Raspberry Pi launches its last board for the foreseeable future: the Raspberry Pi 3 Model A+, available now at $25
Introducing Raspberry Pi TV HAT, a new add-on that lets you stream live TV
Intelligent mobile projects with TensorFlow: Build your first Reinforcement Learning model on Raspberry Pi [Tutorial]


Intel’s 10th gen 10nm ‘Ice Lake’ processor offers AI apps, new graphics and best connectivity

Vincy Davis
02 Aug 2019
4 min read
After a long wait, Intel has officially launched its first 10th generation Core processors, code-named 'Ice Lake'. The first batch contains 11 highly integrated 10nm processors showcasing high-performance artificial intelligence (AI) features, designed for sleek 2-in-1s and laptops. The 'Ice Lake' processors are manufactured on Intel's 10nm process and pair with a 14nm chipset in the same carrier. Each includes two or four Sunny Cove cores along with Intel's Gen11 graphics processing unit (GPU). The 10nm figure indicates the size of the transistors used; as a rule of thumb, the smaller the transistor, the better its power consumption.

Read More: Intel unveils the first 3D Logic Chip packaging technology, 'Foveros', powering its new 10nm chips, 'Sunny Cove'

Chris Walker, Intel corporate vice president and general manager of Mobility Client Platforms in the Client Computing Group, says, "With broad-scale AI for the first time on PCs, an all-new graphics architecture, best-in-class Wi-Fi 6 (Gig+) and Thunderbolt 3 – all integrated onto the SoC, thanks to Intel's 10nm process technology and architecture design – we're opening the door to an entirely new range of experiences and innovations for the laptop."

Intel was originally supposed to ship 10nm processors back in 2016. Intel CEO Bob Swan says the delay was due to the "company's overly aggressive strategy for moving to its next node." Intel has also introduced a new processor numbering structure for the 10th generation 'Ice Lake' processors, which indicates the generation and the level of graphics performance of each processor.

[Image source: Intel]

What's new in the 10th generation Intel Core processors?

Intelligent performance

The 10th generation Core processors are the first purpose-built processors for AI on laptops and 2-in-1s. They are built for modern AI-infused applications and contain features such as:

- Intel Deep Learning Boost, used for boosting flexibility to run complex AI workloads. It has a dedicated instruction set that accelerates neural networks on the CPU for maximum responsiveness.
- Up to 1 teraflop of GPU engine compute for sustained high-throughput inference applications.
- Intel's Gaussian & Neural Accelerator (GNA), an exclusive engine for background workloads such as voice processing and noise suppression at ultra-low power, for maximum battery life.

New graphics

With Iris Plus graphics, the 10th generation Core processors deliver double the graphics performance at 1080p and higher-level content creation in 4K video editing, application of video filters, and high-resolution photo processing. This is the first time an Intel GPU supports VESA's Adaptive Sync display standard, enabling a smoother gaming experience in games like Dirt Rally 2.0 and Fortnite. According to Intel, this is the industry's first integrated GPU to incorporate variable rate shading for better rendering performance, thanks to the Gen11 graphics architecture. The 10th generation Core processors also support the BT.2020 specification, so it is possible to view 4K HDR video in a billion colors.

Best connectivity

With improved board integration, PC manufacturers can innovate on form factor for sleeker designs with Wi-Fi 6 (Gig+) connectivity and up to four Thunderbolt 3 ports.
Intel claims this is the "fastest and most versatile USB-C connector available."

In the first batch of 11 'Ice Lake' processors, there are 6 Ice Lake U-series and 5 Ice Lake Y-series processors. [Complete Ice Lake processor list; image source: Intel]

Intel has revealed that laptops with the 10th generation Core processors can be expected in the holiday season this year. The post also states that Intel will soon release additional products in the 10th generation Intel Core mobile processor family due to increased needs in computing. The upcoming processors will "deliver increased productivity and performance scaling for demanding, multithreaded workloads."

Users love the new 10th generation Core processor features and are especially excited about the Gen11 graphics.

https://twitter.com/Tribesigns/status/1133284822548279296
https://twitter.com/Isaacraft123/status/1156982456408596481

Many users are also expecting to see the new processors in the upcoming Mac notebooks.

https://twitter.com/ChernSchwinn1/status/1157297037336928256
https://twitter.com/matthewmspace/status/1157295582844575744

Head over to the Intel newsroom page for more details.
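Intel Deep Learning Boost corresponds to the AVX-512 VNNI instruction set extension. As a rough, hedged way to see whether a Linux machine exposes it, the sketch below scans /proc/cpuinfo for the kernel's avx512_vnni flag; optimized builds of deep learning frameworks can use the instructions automatically when the hardware supports them.

```python
# Quick Linux check for the AVX-512 VNNI flag behind Intel's Deep
# Learning Boost. A minimal sketch: the flag name follows the Linux
# kernel's /proc/cpuinfo conventions.
def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of CPU feature flags reported by the kernel."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("AVX-512 VNNI (DL Boost):", "yes" if "avx512_vnni" in flags else "no")
```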
Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion, reports WSJ
Why Intel is betting on BFLOAT16 to be a game changer for deep learning training? Hint: Range trumps Precision.
Intel's new brain inspired neuromorphic AI chip contains 8 million neurons, processes data 1K times faster


Is ROS 2.0 good enough to build real-time robotic applications? Spanish researchers find out.

Prasad Ramesh
11 Sep 2018
4 min read
Last Friday, a group of Spanish researchers published a research paper titled 'Towards a distributed and real-time framework for robots: evaluation of ROS 2.0 communications for real-time robotic applications'. The paper describes an experimental setup exploring the suitability of ROS 2.0 for real-time robotic applications. In it, ROS 2.0 communications are evaluated for robotic inter-component communication on real hardware running on top of Linux. The researchers benchmarked and studied worst-case latencies and characterized ROS 2.0 communications for real-time applications. The results indicate that a proper real-time configuration of the ROS 2.0 framework reduces jitter, making soft real-time communications possible, but there were also some limitations that prevented hard real-time communications.

What is ROS?

ROS is a popular framework that provides services for the development of robotic applications. It has utilities like a communication infrastructure, drivers for a variety of software and hardware components, and libraries for diagnostics, navigation, manipulation, and other tasks. ROS simplifies the process of creating complex and robust robot behavior across many robotic platforms. ROS 2.0 is the new version, which extends the concepts of the first. Data Distribution Service (DDS) middleware is used in ROS 2.0 due to its characteristics and benefits compared with other solutions.

The need for real-time capabilities in robotic systems

In all robotic systems, tasks need to be time-responsive. While moving at a certain speed, a robot must be able to detect an obstacle and stop to avoid a collision. These systems often have timing requirements for executing tasks or exchanging data; if the timing requirements are not met, system behavior degrades or the system fails. With ROS being the standard software infrastructure for robotic application development, demand rose in the ROS community for real-time capabilities, and ROS 2.0 was created to deliver real-time performance. But to deliver a complete, distributed, and real-time solution for robots, ROS 2.0 needs to be surrounded with appropriate elements, which are described in the papers Time-sensitive networking for robotics and Real-time Linux communications: an evaluation of the Linux communication stack for real-time robotic applications. ROS 2 uses DDS as its communication middleware, and DDS exposes Quality of Service (QoS) parameters that can be configured and tuned for real-time applications.

The results of the experiment

In the research paper, a setup was built to measure the real-time performance of ROS 2.0 communications over Ethernet on a PREEMPT-RT patched kernel. The end-to-end latencies between two ROS 2.0 nodes on different machines were measured, using a Linux PC and an embedded device representing a robot controller (RC) and a robot component (C). [Setup overview image; source: LinkedIn]

[Results image; source: LinkedIn] The results show the impact of RT settings under different system loads: a) system without additional load, without RT settings; b) system under load, without RT settings; c) system without additional load, with RT settings; d) system under load, with RT settings. The experiment showed that a proper real-time configuration of the ROS 2.0 framework and DDS threads greatly reduces jitter and worst-case latencies.
This means smooth, fast communication. However, there were also some limitations when non-critical traffic in the Linux network stack was in the picture. By configuring the network interrupt threads and using Linux traffic control QoS methods, some of these problems could be avoided. The researchers conclude that it is possible to achieve soft real-time communications with mixed-critical traffic using the Linux network stack, but that hard real-time is not possible due to the aforementioned limitations. For a more detailed understanding of the experiments and results, you can read the research paper.
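The QoS tuning the paper relies on is exposed directly in ROS 2's client libraries. Below is a minimal sketch using the standard rclpy API; the profile values are illustrative choices for latency-sensitive topics, not the authors' exact configuration.

```python
# A minimal sketch of tuning ROS 2 (rclpy) QoS for a latency-sensitive
# topic. The specific policy values are illustrative assumptions, not
# the paper's benchmark configuration.
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, HistoryPolicy
from std_msgs.msg import String

class RTPublisher(Node):
    def __init__(self):
        super().__init__("rt_publisher")
        qos = QoSProfile(
            reliability=ReliabilityPolicy.BEST_EFFORT,  # avoid retransmit stalls
            history=HistoryPolicy.KEEP_LAST,
            depth=1,  # a shallow queue keeps worst-case latency bounded
        )
        self.pub = self.create_publisher(String, "sensor_state", qos)

def main():
    rclpy.init()
    rclpy.spin(RTPublisher())

if __name__ == "__main__":
    main()
```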
Shadow Robot joins Avatar X program to bring real-world avatars into space
6 powerful microbots developed by researchers around the world
Boston Dynamics' 'Android of robots' vision starts with launching 1000 robot dogs in 2019


Boston Dynamics’ ‘Android of robots’ vision starts with launching 1000 robot dogs in 2019

Sugandha Lahoti
23 Jul 2018
2 min read
A video went viral in February showcasing a dog-like robot opening a door for another robot. These agile robots are the brainchild of Boston Dynamics, an American robotics company. Fast forward to this month: Boston Dynamics is all geared up to produce thousands of these robot dogs. According to a report by Inverse, the company has set a target date of July 2019 to manufacture 1,000 of its SpotMini robot dogs annually.

SpotMini is a smaller variant among Boston Dynamics' many robots. This four-legged robot weighs around 30 kg and can comfortably fit in an office or home, and it is one of the quietest robots the company has built. SpotMini is completely mobile, with a 5 degree-of-freedom arm and multiple perception sensors for navigation and mobile manipulation.

Spot, SpotMini's elder sibling, stands close to four feet tall and weighs about 75 kg. This four-legged robot is made specifically for rough-terrain mobility and superhuman stability; its video has been streamed on YouTube nearly 19 million times.

According to founder Marc Raibert, SpotMini is currently being tested for use in construction, delivery, security, and home assistance applications. The company has already announced plans to launch it in 2019 as a short-term goal. It has so far built almost ten robot dogs by hand, and it plans to build 100 models with contract manufacturers by the end of this year. In the long run, the company intends SpotMini to become a multi-use platform. At TechCrunch's TC Sessions: Robotics event 2018, Raibert stated that "the goal for us is to become what the Android operating system is for phones: a versatile foundation for limitless applications."

Sony resurrects robotic pet Aibo with advanced AI
AI powered Robotics: Autonomous machines in the making
How to assemble a DIY selfie drone with Arduino and ESP8266
What we learned at the ICRA 2018 conference for robotics & automation


Windows 10 IoT Core: What you need to know

Vijin Boricha
07 May 2018
4 min read
Microsoft initially came up with Windows IoT, formerly known as Windows Embedded. It was rebranded with the release of Windows 10, when Microsoft introduced twelve versions of Windows 10 that varied in the features delivered, use cases, and devices supported. With that, Microsoft gained a fighting place in the world of IoT with Windows 10 IoT, which consists of two products catering to different customer bases: Windows 10 IoT Core and Windows 10 IoT Enterprise. Since IoT has yet to evolve among major enterprises, we will focus on Windows 10 IoT Core today.

Windows 10 IoT Core is an optimized version of Windows 10 designed for smaller devices, with or without a display, that runs on both ARM and x86/x64 hardware. It is created to work on devices such as the Raspberry Pi, Arduino, and other popular single-board computers, and it utilizes the extensible Universal Windows Platform (UWP) API for building solutions.

The IoT domain has traditionally been dominated by open source operating systems such as Linux distributions. Over the past couple of years, Windows has started to find its way into this domain and has proven to be an advantageous alternative in many ways. Initially, setting up Windows 10 IoT Core and installing the image was a chore; recently Microsoft has focused on alleviating these small pain points and has sorted things out for Windows users. When it comes to developing IoT applications, building polished user interfaces is hard on open source distros, but with Windows this can be achieved thanks to Visual Studio. Visual Studio has always been a great environment to code in, and if you are strong with C#, it can definitely be your go-to platform. I emphasize Windows users because if you are looking at using or developing on Windows 10 IoT Core, you would strictly need Windows 10, which isn't open source. This might never change: no doubt Microsoft wants to sell its software while keeping its existing users happy, and that is only possible when Microsoft services work best in Microsoft's own environment.

I'm sure you are wondering what you could possibly build with Windows 10 IoT Core and a Raspberry Pi or Arduino. Here are some project ideas you might be interested in building:

- Obstacle-avoiding robot: a basic project that can help you get used to the new ecosystem you have adopted.
- Room light and temperature manager: home automation tweaks that help you automate your room environment.
- Personal car data monitor: an intermediate project in which your IoT application reveals the health of your vehicle before you start your ride.
- Pet feeder: a project that involves cloud platforms, letting you feed your pet while you're at the office or at your neighbour's instead of letting them starve.

IoT is at a stage where the virtual world of information technology is connected to the real world. Initially this was possible only through the Linux-based ecosystem, but with Windows 10 IoT coming into the picture there has been quite a shift in the IoT market. Users have observed that, in spite of running on smaller devices, Windows 10 IoT manages to offer most of the essential features of its parent, Windows 10. The world may still seem like a Linux base, and deploying Python programs may look easier, but it's best to keep your options open, and in this case you have a trusted platform in Windows.
5 reasons to choose AWS IoT Core for your next IoT project
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?
Splunk Industrial Asset Intelligence (Splunk IAI) targets Industrial IoT marketplace


Amazon FreeRTOS adds a new ‘Bluetooth low energy support’ feature

Natasha Mathur
27 Nov 2018
2 min read
The Amazon team announced a newly added Bluetooth Low Energy (BLE) support feature for Amazon FreeRTOS. Amazon FreeRTOS is an open source, free-to-download-and-use IoT operating system for microcontrollers that makes it easy to program, deploy, secure, connect, and manage small, low-powered devices. It extends the FreeRTOS kernel (a popular open source operating system for microcontrollers) with software libraries that make it easy to connect small, low-power devices to AWS cloud services or to more powerful devices running AWS IoT Greengrass, software that helps extend cloud capabilities to local devices.

With the help of Amazon FreeRTOS, you can collect data from these devices for IoT applications. Earlier, it was only possible to connect devices to a local network using common connection options such as Wi-Fi and Ethernet. Now, with the addition of the new BLE feature, you can securely build a connection between Amazon FreeRTOS devices that use BLE and AWS IoT via Android and iOS devices. BLE support in Amazon FreeRTOS is currently available in beta.

Amazon FreeRTOS is widely used in industrial applications, B2B solutions, and consumer products by companies such as appliance, wearable technology, and smart lighting manufacturers.

For more information, check out the official Amazon FreeRTOS update post.

FreeRTOS affected by 13 vulnerabilities in its TCP/IP stack
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store
Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018

TensorFlow 1.9 now officially supports Raspberry Pi, bringing machine learning to DIY enthusiasts

Savia Lobo
06 Aug 2018
2 min read
Raspberry Pi board developers can now make use of the latest TensorFlow 1.9 features to build their board projects. Most developers use the Raspberry Pi for shaping innovative DIY projects; the Pi also acts as a pathway to introduce people to programming, with the added benefit of coding in Python. The main objective of blending TensorFlow with the Raspberry Pi board is to let people explore the capabilities of machine learning on cost-effective and flexible devices.

Eben Upton, the founder of the Raspberry Pi project, says, "It is vital that a modern computing education covers both fundamentals and forward-looking topics. With this in mind, we're very excited to be working with Google to bring TensorFlow machine learning to the Raspberry Pi platform. We're looking forward to seeing what fun applications kids (of all ages) create with it."

With TensorFlow's features available, existing and new users alike can try their hand at live machine learning projects. Here are a few real-life examples of TensorFlow on the Raspberry Pi:

- DonkeyCar platform: DonkeyCar, a platform to build DIY Robocars, uses TensorFlow and the Raspberry Pi to create self-driving toy cars.
- Object recognition robot: The TensorFlow framework is useful for recognizing objects. This robot uses a library, a camera, and a Raspberry Pi, with which one can detect up to 20,000 different objects.
- Waste sorting robot: This robot is capable of sorting garbage with the same precision as a human and can recognize at least four types of waste. To identify the category a piece belongs to, the system uses TensorFlow and OpenCV.

One can easily install TensorFlow from the pre-built binaries using Python's pip package system, by simply running these commands in a Raspbian 9 (Stretch) terminal:

sudo apt install libatlas-base-dev
pip3 install tensorflow

Read more about this project on its GitHub page.
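Once installed, a quick sanity check confirms the wheel works on the Pi. This is a minimal sketch against the TensorFlow 1.x graph-and-session API, which is what the 1.9 release ships.

```python
# Minimal post-install sanity check for TensorFlow 1.9 on Raspbian.
# Sketch assumes the TF 1.x API (tf.Session), which matches this release.
import tensorflow as tf

a = tf.constant(2)
b = tf.constant(3)
total = a + b  # builds a graph op; nothing runs yet

with tf.Session() as sess:
    # Executing the graph proves the native wheel loads and runs.
    print("TensorFlow", tf.__version__, "-> 2 + 3 =", sess.run(total))
```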
5 DIY IoT projects you can build under $50
Build your first Raspberry Pi project
How to mine bitcoin with your Raspberry Pi


Western Digital RISC-V SweRV Core is now on GitHub

Prasad Ramesh
28 Jan 2019
2 min read
Last week, Western Digital made the Verilog sources for its open source RISC-V core publicly available on GitHub under the Apache 2.0 license.

What is SweRV Core?

'SweRV Core' was made by Western Digital for internal use, and the company decided to contribute it to the open source community. The SweRV Core is a 32-bit, two-way superscalar core with a nine-stage pipeline. It is small in size and has a simulation performance of up to 4.9 CoreMarks/MHz. SweRV Core suits data-intensive applications like storage controllers, industrial IoT devices, and real-time analytics in surveillance systems. Implemented in a 28nm CMOS process, the power-efficient design reaches clock speeds of up to 1.8GHz. This core will be seen in future and upcoming WD products.

Martin Fink, CTO of Western Digital, told Business Wire: "As Big Data and Fast Data continues to proliferate, purpose-built technologies are essential for unlocking the true value of data across today's wide-ranging data-centric applications. Our SweRV Core and the new cache coherency fabric initiative demonstrate the significant possibilities that can be realized by bringing data closer to processing power."

What do you need for SweRV Core?

- Verilator 3.926 or newer
- Espresso, if you want to add or remove instructions

To start using it, clone the GitHub repo, set RV_ROOT to point to the repository path on your system, and run make with tools/Makefile; optionally, you can set your configuration before running make. Last year, Western Digital also open-sourced the SweRV Instruction Set Simulator (ISS), a program for designers to simulate code on the SweRV core. To get started, you can check out the GitHub repository.

The Linux and RISC-V foundations team up to drive open source development and adoption of RISC-V instruction set architecture (ISA)
A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer
LLVM officially migrating to GitHub from Apache SVN


Rigetti plans to deploy a 128-qubit quantum computer

Fatema Patrawala
16 Aug 2018
3 min read
Rigetti Computing is committed to building the world's most powerful computers, and the company believes the true value of quantum will be unlocked by practical applications. Rigetti CEO Chad Rigetti recently posted on Medium about plans to deploy a 128-qubit quantum computing system, challenging Google, IBM, and Intel for leadership in this emerging technology. The system is planned for deployment within the next 12 months, alongside investment in resources at the application layer to encourage experimentation on quantum computers.

Over the past year, Rigetti has built 8-qubit and 19-qubit superconducting quantum processors, which are accessible to users over the cloud through the company's open source software platform, Forest. These chips have been useful in helping researchers around the globe carry out and test programs on Rigetti's quantum-classical hybrid computers.

However, to drive practical use of quantum computing today, Rigetti must be able to scale and improve the performance of the chips and connect them to the electronics on which they run. The next phase of quantum computing will require more power at the hardware level to drive better results, and Rigetti believes it is in a unique position to solve this problem and build systems that scale.

Chad Rigetti adds, "Our 128-qubit chip is developed on a new form factor that lends itself to rapid scaling. Because our in-house design, fab, software, and applications teams work closely together, we're able to iterate and deploy new systems quickly. Our custom control electronics are designed specifically for hybrid quantum-classical computers, and we have begun integrating a 3D signaling architecture that will allow for truly scalable quantum chips. Over the next year, we'll put these pieces together to bring more power to researchers and developers."

While focused on building the 128-qubit chip, the Rigetti team is also looking to enhance the application layer by pursuing quantum advantage in three areas: quantum simulation, optimization, and machine learning. The team believes quantum advantage will be achieved by creating a solution that is faster, cheaper, and of better quality, and it has posed an open question as to which industry will build the first commercially useful application that adds tremendous value to researchers and businesses around the world.

Read the full coverage in the Rigetti Medium post.
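Forest's Python library, pyQuil, is how programs are written against Rigetti's chips and simulators. Below is a minimal sketch of a two-qubit Bell-state program; it assumes the pyQuil 2.x API and a locally running quantum virtual machine behind the "2q-qvm" endpoint, so treat those details as assumptions rather than Rigetti's exact workflow.

```python
# A minimal sketch of a two-qubit Bell-state program with pyQuil, the
# Python library of Rigetti's Forest platform. Assumes pyQuil 2.x and
# local qvm/quilc servers; real-chip access goes through Rigetti's cloud.
from pyquil import Program, get_qc
from pyquil.gates import H, CNOT, MEASURE

program = Program()
ro = program.declare("ro", "BIT", 2)  # classical readout register
program += H(0)                       # put qubit 0 in superposition
program += CNOT(0, 1)                 # entangle qubits 0 and 1
program += MEASURE(0, ro[0])
program += MEASURE(1, ro[1])
program.wrap_in_numshots_loop(100)    # repeat the circuit 100 times

qc = get_qc("2q-qvm")                 # two-qubit quantum virtual machine
results = qc.run(qc.compile(program))
print(results)                        # mostly correlated [0, 0] / [1, 1] pairs
```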
Quantum Computing is poised to take a quantum leap with industries and governments on its side
Q# 101: Getting to know the basics of Microsoft's new quantum computing language
PyCon US 2018 Highlights: Quantum computing, blockchains and serverless rule!


NSA researchers present security improvements for Zephyr and Fuchsia at Linux Security Summit 2018

Bhagyashree R
04 Sep 2018
5 min read
Last week, James Carter and Stephen Smalley presented the architecture and security mechanisms of two operating systems, Zephyr and Fuchsia, at the Linux Security Summit 2018. James and Stephen are computer security researchers in the Information Assurance Research organization of the US National Security Agency (NSA). They discussed current concerns in the two operating systems and the contributions they and others have made to advance the security of these emerging open source operating systems. They also compared the security features of Zephyr and Fuchsia to Linux and Linux-based systems such as Android.

Zephyr

Zephyr is a scalable real-time operating system (RTOS) for IoT devices, supporting multiple architectures with security as its main focus. It targets resource-constrained devices, seeking to be a new "Linux" for little devices.

Protection mechanisms in Zephyr

Zephyr introduced basic hardware-enforced memory protections in the v1.8 release, and these were officially supported in v1.9. To support them, the microcontroller must have either a memory protection unit (MPU) or a memory management unit (MMU). These mechanisms provide protection in the following ways:

- They enforce Read-Only/No-Execute (RO/NX) restrictions to protect read-only data from tampering.
- They provide runtime support for stack depth overflow protections.

The researchers' contribution was to review the basic memory protections and develop a set of kernel memory protection tests, modeled after a subset of the lkdtm tests in Linux from the KSPP. These tests were able to detect bugs and regressions in Zephyr MPU drivers and are now part of the standard regression testing that Zephyr performs on all future changes.

Userspace support in Zephyr

In previous versions, everything ran in supervisor mode, so Zephyr introduced userspace support in v1.10 and v1.11. It requires the basic memory protection support and an MPU/MMU, and it provides basic support for user-mode threads with isolated memory. The researchers' contribution here was to develop userspace tests that verify some of the security-relevant properties of user-mode threads, confirm the correctness of the x86 implementation, and validate the initial ARM and ARC userspace implementations.

App Shared Memory: a new feature contributed by the researchers

Originally, Zephyr gave all user threads access to the global variables of all applications. This imposed a high burden on application developers, who had to:

- Manually organize application global variable memory layout to meet (MPU-specific) size/alignment restrictions.
- Manually define and assign memory partitions and domains.

To solve this problem, the researchers developed a new feature, App Shared Memory, expected in the v1.13 release, with these features:

- It is a more developer-friendly way of grouping application globals based on desired protections.
- It automatically generates the linker script, section markings, and memory partition/domain structures.
- It provides helpers to ease application coding.

Fuchsia

Fuchsia is an open source microkernel-based operating system, primarily developed by Google. It is based on a new microkernel called Zircon and targets modern hardware such as phones and laptops.

Security mechanisms in Fuchsia

Microkernel security primitives

Regular handles: Userspace accesses kernel objects through handles, which identify both the object and a set of access rights to the object. With the proper rights, one can duplicate objects, pass them across IPC, and obtain handles to child objects. Some of the concerns pointed out for regular handles are:

- If you have a handle to a job, you can get a handle to anything in the job using object_get_child().
- Leaks of the root job handle.
- Refining default rights down to least privilege.
- Not all operations check access rights.
- Some rights are currently unimplemented.

Resource handles: These are a variant of handles for platform resources such as memory-mapped I/O, I/O ports, IRQs, and hypervisor guests. Some of the concerns pointed out for resource handles are:

- The coarse granularity of root resource checks.
- Leaks of the root resource handle.
- Refining the root resource down to least privilege.

Job policy: In Fuchsia, every process is part of a job, and jobs can have child jobs. Job policy is applied to all processes within a job; policies cover error handling behavior, object creation, and the mapping of writable-executable (WX) memory. Some of the concerns pointed out for job policies are:

- Write-execute (WX) policy is not yet implemented.
- The mechanism is inflexible.
- Refining job policies down to least privilege.

vDSO (virtual dynamic shared object) enforcement: The vDSO is the only way to invoke system calls and is fully read-only. Some of the concerns pointed out for vDSO enforcement are:

- The potential for tampering with or bypassing the vDSO; for example, process_write_memory() allows you to overwrite the vDSO.
- Limited flexibility, for example compared with seccomp.

Userspace mechanisms

Namespaces: A namespace is a collection of objects that a process can enumerate and access.

Sandboxing: A sandbox is the configuration of a process's namespace, created based on its manifest. Some of the concerns pointed out for namespaces and sandboxing are:

- Sandboxes exist only for application packages, not system services.
- Namespace and sandbox granularity.
- No independent validation of sandbox configuration.
- Current use of global /data and /tmp.

To address these concerns, the researchers suggested a MAC framework. It could help in the following ways:

- Support finer-grained resource checks.
- Validate namespace/sandbox configurations.
- Help control propagation, support revocation, and apply least privilege.
- Just as in Android, provide a unified framework for defining, enforcing, and validating security goals for Fuchsia.

This was a sneak peek of the talk. To know more about the architecture, hardware limitations, and security features of Zephyr and Fuchsia in detail, watch the presentation on YouTube: Security in Zephyr and Fuchsia - Stephen Smalley & James Carter, National Security Agency.

Cryptojacking is a growing cybersecurity threat, report warns
Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation

These robot jellyfish are on a mission to explore and guard the oceans

Bhagyashree R
24 Sep 2018
3 min read
Earlier last week, a team of US scientists from Florida Atlantic University (FAU) and the US Office of Naval Research published a paper on five jellyfish robots they have manufactured. The paper is titled Thrust force characterization of free-swimming soft robotic jellyfish. The scientists' prime motive in building such robotic jellyfish is to track and monitor fragile marine ecosystems without causing unintentional damage. These soft robots can swim through openings narrower than their bodies and are powered by hydraulic silicone tentacles; the so-called 'jelly-bots' demonstrated this ability using circular holes cut in a plexiglass plate.

The design structure of the 'jelly-bots'

Jelly-bots have a design similar to that of a moon jellyfish (Aurelia aurita) during the ephyra stage of its life cycle, before it becomes a fully grown medusa. To avoid damage to fragile biological systems, soft hydraulic network actuators were chosen, and to allow the jellyfish to steer, the team uses two impeller pumps to inflate the eight tentacles. The mold models for the jellyfish robot were designed in SolidWorks and subsequently 3D printed on an Ultimaker 2 out of PLA (polylactic acid). Each jellyfish has a different rubber hardness, to test the effect hardness has on propulsion efficiency.

[Image source: IOPScience]

What this study was about

The jelly robots help the scientists determine the impact of the following factors on the measured thrust force:

- Actuator material Shore hardness
- Actuation frequency
- Tentacle stroke actuation amplitude

The scientists found that all three of these factors significantly impact mean thrust force generation, which peaks with a half-stroke actuation amplitude at a frequency of 0.8 Hz.

Results

The material composition of the actuators significantly impacted the measured force produced by the jellyfish, as did the actuation frequency and stroke amplitude. The greatest forces were measured with a half-stroke amplitude at 0.8 Hz and a tentacle actuator-flap material Shore hardness composition of 30-30. In testing, the jellyfish was able to swim through openings narrower than the nominal diameter of the robot and demonstrated the ability to swim directionally. The jellyfish robots were tested in the ocean and have the potential to monitor and explore delicate ecosystems without inadvertently damaging them.

One of the scientists, Dr. Engeberg, told Tech Xplore: "In the future, we plan to incorporate environmental sensors like sonar into the robot's control algorithm, along with a navigational algorithm. This will enable it to find gaps and determine if it can swim through them."

To know more about the jelly-bots in detail, read the research paper published by the scientists. You may also watch a video showing the jelly-bots functioning in deep waters.

Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
MEPs pass a resolution to ban "Killer robots"
6 powerful microbots developed by researchers around the world


Arduino now has a command line interface (CLI)

Prasad Ramesh
27 Aug 2018
2 min read
Listening to the Arduino developer community, the Arduino team has released a command line interface (CLI) for the platform. The CLI is a single binary file that performs most of the functions present in the IDE. Until now, there was a wide gap between using the IDE and being able to do everything in Arduino entirely from the command line.

The CLI allows you to install new libraries, create new projects, and compile projects directly from the command line, giving developers a way to test their projects quickly. You can also create your own libraries and compile them directly, for your own or third-party code. Installing project dependencies is as easy as typing the following command:

arduino-cli lib install "WiFi101" "WiFi101OTA"

In addition, the CLI has a JSON interface for easy parsing by other programs. There were many requests for makefile integration, and support has been added for it. The Arduino CLI runs on both ARM and Intel (x86, x86_64) architectures, which means it can be installed on a Raspberry Pi or on any server.

Massimo Banzi, Arduino founder, stated: "I think it is very exciting for Arduino, one single binary that does all the complicated things in the Arduino IDE."

The Arduino team looks forward to people integrating this tool into various IDEs. In the team's blog post, they write, "Imagine having the Arduino IDE or Arduino Create Editor speaking directly to Arduino CLI – and you having full control of it. You will be able to compile on your machine or on our online servers, detect your board or create your own IDE on top of it!"

The CLI is an alternative to PlatformIO and works on all three major operating systems: Linux, Windows, and macOS. The code is open source, but you will need a license for commercial use. Visit the GitHub repository to get started with Arduino CLI.
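The JSON interface mentioned above is what makes the CLI scriptable from other programs. Here is a minimal sketch that shells out to arduino-cli from Python; it assumes the binary is on PATH and that the board list subcommand and --format json flag behave as in recent releases, and it deliberately avoids assuming specific JSON field names, which have varied across versions.

```python
# A minimal sketch of driving the Arduino CLI from another program via
# its JSON output. Assumptions: arduino-cli is on PATH, and `board list`
# accepts `--format json` as in recent releases; the JSON schema differs
# between versions, so inspect it rather than hard-coding field names.
import json
import subprocess

result = subprocess.run(
    ["arduino-cli", "board", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
)

# Parse and pretty-print whatever structure this CLI version emits.
ports = json.loads(result.stdout)
print(json.dumps(ports, indent=2))
```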
How to assemble a DIY selfie drone with Arduino and ESP8266
How to build an Arduino based 'follow me' drone
Should you go with Arduino Uno or Raspberry Pi 3 for your next IoT project?


Silicon-Interconnect Fabric is soon on its way to replace Printed Circuit Boards, new UCLA research claims

Sugandha Lahoti
26 Sep 2019
4 min read
Researchers from UCLA claim in a new study that the printed circuit board could be replaced with what they call silicon-interconnect fabric, or Si-IF. The fabric allows bare chips to be connected directly to wiring on a separate piece of silicon. The researchers are Puneet Gupta and Subramanian Iyer, members of the electrical engineering department at the University of California, Los Angeles.

How Silicon-Interconnect Fabric can be useful

In a report published on IEEE Spectrum on Tuesday, the researchers suggest that printed circuit boards can be replaced with silicon, which will especially help in building smaller, lighter-weight systems for wearables and other size-constrained gadgets. They write, "Unlike connections on a printed circuit board, the wiring between chips on our fabric is just as small as wiring within a chip. Many more chip-to-chip connections are thus possible, and those connections are able to transmit data faster while using less energy."

Si-IF could also be useful for building "powerful high-performance computers that would pack dozens of servers' worth of computing capability onto a dinner-plate-size wafer of silicon."

The silicon-interconnect fabric could also dissolve the system-on-chip (SoC) into integrated collections of dielets, or chiplets. The researchers say, "It's an excellent path toward the dissolution of the (relatively) big, complicated, and difficult-to-manufacture systems-on-chips that currently run everything from smartphones to supercomputers. In place of SoCs, system designers could use a conglomeration of smaller, simpler-to-design, and easier-to-manufacture chiplets tightly interconnected on an Si-IF."

The researchers linked up chiplets on a silicon-interconnect fabric built on a 100-millimeter-wide wafer. Unlike chips on a printed circuit board, they can be placed a mere 100 micrometers apart, speeding signals and reducing energy consumption. To evaluate the size benefit, the researchers compared an Internet of Things system based on an Arm microcontroller: using Si-IF not only shrinks the board by 70 percent but also reduces its weight from 20 grams to 8 grams.

Challenges associated with Silicon-Interconnect Fabric

Even though significant progress has been made on Si-IF integration over the past few years, the researchers point out that much remains to be done. For instance, a commercially viable, high-yield Si-IF manufacturing process is needed, along with mechanisms to test bare chiplets as well as unpopulated Si-IFs. New heat sinks or other thermal-dissipation strategies will also be required to take advantage of silicon's good thermal conductivity. In addition, the chassis, mounts, connectors, and cabling for silicon wafers need to be engineered to enable complete systems, several changes to design methodology are required, and system reliability must be considered.

People agreed that the research looked promising; however, some felt that replacing PCBs with Si-IF sounded overambitious to begin with. A comment on Hacker News reads, "I agree this looks promising, though I'm not an expert in this field. But the title is a bit, well, overpromising or broad. I don't think we'll replace traditional motherboards anytime soon (except maybe in smartphones?). Rather, it will be an incremental progress."

Others were also not convinced. A Hacker News user pointed out several benefits of PCBs:

- "PCBs are cheaper to manufacture than silicon wafers.
- PCBs can be arbitrarily created and adjusted with little overhead cost (time and money).
- PCBs can be re-worked if a small hardware fault(s) is found.
- PCBs can carry large amounts of power.
- PCBs can help absorb heat away from some components.
- PCBs have a small amount of flexibility, allowing them to absorb shock much easier.
- PCBs can be cut in such a way as to allow for mounting holes or be in relatively arbitrary shapes.
- PCBs can be designed to protect some components from static damage."

You can read the full research on IEEE.

Hot Chips 31: IBM Power10, AMD's AI ambitions, Intel NNP-T, Cerebras largest chip with 1.2 trillion transistors and more
IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation
Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
Samsung develops Key-value SSD prototype, paving the way for optimizing network storage efficiency and extending server CPU processing power
MIT researchers built a 16-bit RISC-V compliant microprocessor from carbon nanotubes

AMD releases AMD Open-Source Driver for Vulkan v-2019.Q1.2

Bhagyashree R
23 Jan 2019
2 min read
Last week, the AMD team released version v-2019.Q1.2 of the AMD Open Source Driver for Vulkan (AMDVLK). This release comes with fairly small updates, including a DXVK fix, one new Vulkan extension, and a few more changes.

What's new in v-2019.Q1.2

- The XGL code exposes YUV planes directly, to allow applications to implement their own color conversion.
- Symbols are no longer included when building the driver in its release configuration, which could help with performance.
- The default WgpMode is changed from wgp to cu.
- The performance regression introduced by the updates that added support for the LOAD_INDEX path for handling pipeline binds is now fixed.

AMDVLK architecture: [Architecture diagram; source: GitHub]

AMD open-sourced AMDVLK in 2017; it was earlier part of the AMDGPU-PRO driver. It is a Vulkan driver for Radeon graphics adapters on Linux, built on top of AMD's Platform Abstraction Library (PAL). PAL provides hardware and OS abstractions for Radeon (GCN+) user-mode 3D graphics drivers. It also gives users a consistent experience across platforms, including support for recently released GPUs and compatibility with AMD developer tools.

As PAL does not come with a shader compiler, clients are expected to use an external compiler library that targets PAL's pipeline ABI to produce compatible shader binaries. The shaders of a VkPipeline object are compiled as a single entity using the LLVM-Based Pipeline Compiler (LLPC) library. LLPC is built on LLVM's existing shader compilation infrastructure for AMD GPUs to generate code objects compatible with PAL's pipeline ABI.

To know more about AMDVLK in detail, you can check out its GitHub repository.

AMD ROCm GPUs now support TensorFlow v1.8, a major milestone for AMD's deep learning plans
AMD open sources V-EZ, the Vulkan wrapper library
AMD's $293 million JV with Chinese chipmaker Hygon starts production of x86 CPUs


Intel introduces cryogenic control chip, ‘Horse Ridge’ for commercially viable quantum computing

Fatema Patrawala
11 Dec 2019
4 min read
On Monday, Intel Labs introduced a first-of-its-kind cryogenic control chip code-named Horse Ridge. According to Intel, Horse Ridge will enable commercially viable quantum computers and speed up the development of full-stack quantum computing systems. Intel announced that Horse Ridge will enable control of multiple quantum bits (qubits) and set a clear path toward scaling larger systems. This is a major milestone on the path to quantum practicality: the challenge for quantum computing right now is that it only works at temperatures near absolute zero, and Intel is trying to change that with this control chip. Horse Ridge enables control at very low temperatures and eliminates the hundreds of wires that would otherwise run into the refrigerated case housing the quantum computer.

Horse Ridge was developed in partnership with Intel's research collaborators at QuTech at Delft University of Technology and is fabricated using Intel's 22-nanometer FinFET manufacturing technology. The in-house fabrication of these control chips will dramatically accelerate the company's ability to design, test, and optimize a commercially viable quantum computer, the company said.

"A lot of research has gone into qubits, which can do simultaneous calculations. But Intel saw that controlling the qubits created another big challenge to developing large-scale commercial quantum systems," states Jim Clarke, director of quantum hardware at Intel, in the official press release. "It's pretty unique in the community, as we're going to take all these racks of electronics you see in a university lab and miniaturize that with our 22-nanometer technology and put it inside of a fridge," added Clarke. "And so we're starting to control our qubits very locally without having a lot of complex wires for cooling."

The name "Horse Ridge" is inspired by one of the coldest regions in Oregon. The chip is designed to operate at cryogenic temperatures of approximately 4 kelvins, around minus 452 degrees Fahrenheit or minus 269 degrees Celsius.

What is the innovation behind Horse Ridge?

Quantum computers promise the potential to tackle problems that conventional computers can't handle by themselves. They leverage a phenomenon of quantum physics that allows qubits to exist in multiple states simultaneously; as a result, qubits can conduct a large number of calculations at the same time, dramatically speeding up complex problem-solving.

But Intel acknowledges that the quantum research community still lags behind in demonstrating quantum practicality, a benchmark for determining whether a quantum system can deliver game-changing performance on real-world problems. To date, researchers have focused on building small-scale quantum systems to demonstrate the potential of quantum devices. In these efforts, they have relied on existing electronic tools and high-performance-computing rack-scale instruments to connect the quantum system, inside its cryogenic refrigerator, to the traditional computational devices that regulate qubit performance and program the system. These devices are often custom designed to control individual qubits, requiring hundreds of connecting wires in and out of the refrigerator. This extensive control cabling for each qubit hinders the ability to scale the quantum system to the hundreds or thousands of qubits required to demonstrate quantum practicality, not to mention the millions of qubits required for a commercially viable quantum solution.
With Horse Ridge, Intel radically simplifies the control electronics required to operate a quantum system. Replacing these bulky instruments with a highly integrated system-on-chip (SoC) will simplify system design and allow sophisticated signal processing techniques to accelerate setup time, improve qubit performance, and enable the system to efficiently scale to larger qubit counts.

"One option is to run the control electronics at room temperature and run coax cables down to configure the qubits. But you can immediately see that you're going to run into a scaling problem because you get to hundreds or thousands of cables and it's not going to work," said Richard Uhlig, managing director of Intel Labs. "What we've done with Horse Ridge is that it's able to run at temperatures that are much closer to the qubits themselves. It runs at about 4 degrees Kelvin. The innovation is that we solved the challenges around getting CMOS to run at those temperatures and still have a lot of flexibility in how the qubits are controlled and configured."

To know more about this exciting news, check out the official announcement from Intel.

Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim
The US to invest over $1B in quantum computing, President Trump signs a law
Quantum computing, edge analytics, and meta learning: key trends in data science and big data in 2019