Tech News


The Golang team has started working on Go 2 proposals

Prasad Ramesh
30 Nov 2018
4 min read
Yesterday, Google engineer Robert Griesemer published a blog post outlining the next steps for Golang on the road to the Go 2 release. Google developer Russ Cox started the thought process behind Go 2 in his talk at GopherCon 2017. The talk was about the future of Go, and because of the changes it discussed, it was informally called Go 2. A major difference between the two versions lies in how design and changes are influenced: the first version involved only a small team, while the second will see much more participation from the community. The proposal process started in 2015, and the Go core team will now work on the proposals for the second version of the programming language.

The current status of Go 2 proposals
As of November 2018, there are about 120 open issues on GitHub labeled "Go 2 proposal". Most of them revolve around significant language or library changes that are often not compatible with Go 1. The ideas from these proposals will probably influence the language and libraries of the second version. There are now millions of Go programmers and a large body of Go code that needs to move forward together without an ecosystem split. Hence the changes need to be few and carefully selected. To do this, the Go core team is implementing a proposal evaluation process for significant potential changes.

The proposal evaluation process
The purpose of the evaluation process is to collect feedback on a small number of select proposals in order to make a final decision. The process runs in parallel with a release cycle and has five steps.

- Proposal selection: The Go core team selects a few Go 2 proposals that seem to be good candidates for acceptance.
- Proposal feedback: The Go team announces the selected proposals and collects feedback from the community. This gives the larger community an opportunity to make suggestions or express concerns.
- Implementation: The proposals are implemented based on the feedback received. The goal is to have significant changes ready to submit on the first day of an upcoming release cycle.
- Implementation feedback: The Go team and community can experiment with the new features during the development cycle, which yields further feedback.
- Final launch decision: The Go team makes the final decision on shipping each change at the end of the three-month development cycle. At this point there is an opportunity to consider whether the change delivers the expected benefits or has created unexpected costs. Once shipped, the changes become part of the Go language.

Proposal selection process and the selected proposals
For a proposal to be selected, the minimum criteria are that it should:

- address an important issue for a large number of users
- have minimal impact on other users
- come with a clear and well-understood solution

For the first trial, a select few backward-compatible proposals will be implemented, which makes them less likely to break existing functionality. The proposals are:

- General Unicode identifiers based on Unicode TR31, which will allow identifiers in non-Western alphabets.
- Binary integer literals and support for _ (underscore) as a digit separator in number literals. Not a change that solves a major problem, but it brings Go up to par with other languages in this respect.
- Permitting signed integers as shift counts. This cleans up code and brings shift expressions better in sync with index expressions and built-in functions like cap and len.

The Go team has now started the proposal evaluation process, and the community can provide feedback. Proposals with clear, positive feedback will be taken forward, with the aim of landing implementations by February 1, 2019. The development cycle runs from February to May 2019, and the chosen features will be implemented as per the outlined process. For more details, you can visit the Go Blog.

Golang just celebrated its ninth anniversary
GoCity: Turn your Golang program into a 3D city
Golang plans to add a core implementation of an internal language server protocol
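For illustration, here is a minimal sketch of what two of the selected proposals look like in source code. It is written against Go 1.13, which later adopted binary literals, underscore digit separators, and signed shift counts; the identifiers are made up for the example.

```go
package main

import "fmt"

func main() {
	// Binary integer literal and underscore digit separators,
	// two of the selected Go 2 proposals (adopted in Go 1.13).
	const mask = 0b1010_1010
	const budget = 1_000_000

	// Signed integers as shift counts: i is a plain int, which
	// older versions of Go rejected without a uint conversion.
	for i := 0; i < 4; i++ {
		fmt.Println(mask<<i, budget>>i)
	}
}
```

Before this change, `mask << i` had to be written as `mask << uint(i)`, which is exactly the kind of cleanup the shift-count proposal targets.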


Laravel 6.0 releases with Laravel Vapor compatibility, LazyCollection, improved authorization responses and more

Fatema Patrawala
04 Sep 2019
2 min read
Laravel 6.0 ships with improvements over Laravel 5.8, including the introduction of semantic versioning, compatibility with Laravel Vapor, improved authorization responses, job middleware, lazy collections, subquery improvements, the extraction of frontend scaffolding to the laravel/ui Composer package, and a variety of other bug fixes and usability improvements.

Key features in Laravel 6.0

Semantic versioning
The Laravel framework package now follows the semantic versioning standard. This makes the framework consistent with the other first-party Laravel packages, which already followed this versioning standard.

Laravel Vapor compatibility
Laravel 6.0 provides compatibility with Laravel Vapor, an auto-scaling serverless deployment platform for Laravel. Vapor abstracts away the complexity of managing Laravel applications on AWS Lambda, as well as interfacing those applications with SQS queues, databases, Redis clusters, networks, the CloudFront CDN, and more.

Improved exceptions via Ignition
Laravel 6.0 ships with Ignition, a new open source exception detail page. Ignition offers many benefits over previous releases, such as improved Blade error file and line number handling, runnable solutions for common problems, code editing, exception sharing, and an improved UX.

Improved authorization responses
In previous releases of Laravel, it was difficult to retrieve and expose custom authorization messages to end users, which made it hard to explain exactly why a particular request was denied. In Laravel 6.0, this is now easier using authorization response messages and the new Gate::inspect method.

Job middleware
Job middleware allows developers to wrap custom logic around the execution of queued jobs, reducing boilerplate in the jobs themselves.

Lazy collections
Many developers already enjoy Laravel's powerful Collection methods. To supplement the already powerful Collection class, Laravel 6.0 introduces a LazyCollection, which leverages PHP's generators to let users work with very large datasets while keeping memory usage low.

Eloquent subquery enhancements
Laravel 6.0 introduces several new enhancements and improvements to database subquery support.

To know more about this release, check out the official Laravel blog page.

What’s new in web development this week?
Wasmer’s first Postgres extension to run WebAssembly is here!
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3
Google Chrome 76 now supports native lazy-loading


Google announces new Artificial Intelligence features for Google Search on its 20th birthday

Sugandha Lahoti
25 Sep 2018
5 min read
At the “Future of Search” event held in San Francisco yesterday, Google celebrated its 20th anniversary by announcing a variety of new features for its Search engine. The proprietary search engine uses sophisticated machine learning, computer vision, and data science. The focus of this event was Artificial Intelligence and making new features available on smartphones. Let’s look at what was announced.

Activity cards on Google Discover
Perhaps the most significant feature is Google Discover, a completely revamped version of Google Feed. Google Feed is the content discovery news feed available in the dedicated Google App and on Google’s homepage. It now has a new look, brand, and feel, and it features more granular controls over the content that appears. A new feature called activity cards will show up in a user’s search results if they have searched for one topic repeatedly. An activity card helps users pick up where they left off in Google Search: they can retrace their steps to find useful information they found earlier without having to remember which sites held it. Google Discover starts with English and Spanish in the U.S. and will expand to more languages and countries soon.

Collections in Google Search
Collections in Google Search help users keep track of content they have visited, such as a website, article, or image, and quickly get back to it later. Users can now add content from an activity card directly to Collections, making it easy to keep track of and organize the content they want to revisit.

Dynamic organization with the Knowledge Graph
Users will see more videos and fresh visual content, as well as evergreen content: articles and videos that aren’t new to the web, but are new to them. This feature uses the Topic Layer in the Knowledge Graph to predict a user’s level of expertise on a topic and help them develop those interests further. The Knowledge Graph can intelligently surface relevant content rather than prioritizing chronological order; content appears based on user engagement and browsing history. The Topic Layer is built by analyzing all the content that exists on the web for a given topic and developing hundreds or even thousands of subtopics. It then looks for patterns to understand how these subtopics relate to each other, in order to suggest the next content a user may want to view.

AMP Stories
Google will now use Artificial Intelligence to create AMP Stories, which will appear in both Google Search and image search results. AMP Stories is Google’s open source library that enables publishers to build web-based flipbooks with smooth graphics, animations, videos, and streaming audio.

Featured Videos
The next enhancement is Featured Videos, which will semantically link to subtopics of searches in addition to top-level content. Google will automatically generate preview clips for relevant videos, using AI to find the most relevant parts of the clip.

Google Lens
Google has also improved its image search algorithm: images will now be sorted by the relevance of the web results they correspond to, and image search results will contain more information about the pages they come from. Google also announced Google Lens, its visual search tool, for the web. Lens in Google Images will analyze and detect objects in snapshots and show relevant images.

Better SOS Alerts
Google is also updating its SOS Alerts on Google Search and Maps with AI. It will use AI and significant computational power to create better forecasting models that predict when and where floods will occur. This information is also intelligently incorporated into Google Public Alerts.

Improved job search with Pathways
Google is also improving its job search with AI by introducing a new feature called Pathways. When someone searches for jobs on Google, they will be shown jobs available right now in their area, and will also be given information about effective local training and education programs.

To learn in detail about where Google is headed next, read their blog post, Google at 20.

Policy changes in Google Chrome sign-in: an unexpected surprise from Google
The team also announced a policy change in Google’s popular Chrome browser, which was not well received. Following this change, the browser automatically logs users into Chrome when they sign in to other Google services. This has people worried about their privacy, as it could lead to Google tracking their browsing history and collecting data to target them with ads. Prior to this unexpected change, it was possible to sign in to a Google service, such as Gmail, via the Chrome browser without actually logging in to the browser itself. Adrienne Porter Felt, engineer and manager at Google Chrome, has however clarified the issue. She said that the Chrome browser sign-in does not mean that Chrome automatically sends your browsing history to your Google account. She further added, “My teammates made this change to prevent surprises in a shared device scenario. In the past, people would sometimes sign out of the content area and think that meant they were no longer signed into Chrome, which could cause problems on a shared device.” Read the Google clarification report on the Reader app.

The AMPed up web by Google
Google announces Flutter Release Preview 2 with extended support for Cupertino themed controls and more!
Pay your respects to Inbox, Google’s email innovation is getting discontinued.


RSA Conference 2019 Highlights: Top 5 cybersecurity products announced

Melisha Dsouza
08 Mar 2019
4 min read
The theme at the ongoing RSA 2019 conference is “Better”. As the official RSA page explains, “This means working hard to find better solutions. Making better connections with peers from around the world. And keeping the digital world safe so everyone can get on with making the real world a better place.” Keeping with the theme of the year, the conference saw some exciting announcements, keynotes, and seminars presented by some of the top security experts and organizations. Here is our list of the top 5 new cybersecurity products announced at RSA Conference 2019.

#1 X-Force Red Blockchain Testing service
IBM announced the X-Force Red Blockchain Testing service to test vulnerabilities in enterprise blockchain platforms. The service will be run by IBM’s in-house X-Force Red security team and will test the security of back-end processes for blockchain-powered networks. It will evaluate the whole implementation of enterprise blockchain platforms, including chain code, public key infrastructure, and hyperledgers. Alongside this, the service will also assess hardware and software applications that are commonly used to control access and manage blockchain networks.

#2 Microsoft Azure Sentinel
Azure Sentinel will help developers “build next-generation security operations with cloud and AI”. It gives developers a holistic view of security across the enterprise. The service helps them collect data across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds. It can then detect previously uncovered threats and minimize false positives using analytics and threat intelligence. Azure Sentinel also helps investigate threats with AI and hunt for suspicious activities at scale, while responding to incidents rapidly with built-in orchestration and automation of common tasks.

#3 Polaris Software Integrity Platform
The Polaris Software Integrity Platform is an integrated, easy-to-use solution that enables security and development teams to quickly build secure, high-quality software. The service lets developers integrate and automate static, dynamic, and software composition analysis with the tools they are familiar with. The platform also provides security teams with a holistic view of application security risk across their portfolio and the SDLC, and it enables developers to address security flaws in their code as they write it, without switching tools, using the Polaris Code Sight IDE plugin.

#4 CyberArk Privileged Access Security Solution v10.8
The CyberArk Privileged Access Security Solution v10.8 automates detection, alerting, and response for unmanaged and potentially risky Amazon Web Services (AWS) accounts. This version also features Just-in-Time capabilities to deliver flexible user access to cloud-based or on-premises Windows systems. Just-in-Time provisional access to Windows servers lets administrators configure the amount of access time granted to Windows systems, whether they are cloud-based or on-premises, reducing operational friction. The solution can now identify privileged accounts in AWS, unmanaged Identity and Access Management (IAM) users (such as Shadow Admins), and EC2 instances and accounts. This helps track AWS credentials and accelerates the onboarding process for these accounts.

#5 Cyxtera AppGate SDP IoT Connector
Cyxtera’s IoT Connector, a feature within AppGate SDP, secures unmanaged and under-managed IoT devices with 360-degree perimeter protection. It isolates IoT resources using Cyxtera’s Zero Trust model. Each AppGate IoT Connector instance scales for both volume and throughput and handles a wide array of IoT devices. AppGate operates in-line and limits access to prevent lateral attacks while allowing devices to seamlessly perform their functions. It can be deployed easily without replacing existing hardware or software.

Apart from these, other products launched at the conference include CylancePERSONA, CrowdStrike Falcon for Mobile, Twistlock 19.03, and more. To stay updated with all the events, keynotes, seminars, and releases happening at the RSA 2019 conference, head over to the official blog.

The Erlang Ecosystem Foundation launched at the Code BEAM SF conference
NSA releases Ghidra, a free software reverse engineering (SRE) framework, at the RSA security conference
Google teases a game streaming service set for Game Developers Conference


Google to be a founding member of CDF (Continuous Delivery Foundation)

Bhagyashree R
15 Mar 2019
3 min read
On Tuesday, Google announced that it is one of the founding members of the newly formed Continuous Delivery Foundation (CDF). As a part of its membership, Google will be contributing to two projects, namely Spinnaker and Tekton.

About the Continuous Delivery Foundation
The formation of CDF was announced at the Linux Foundation Open Source Leadership Summit on Tuesday. CDF will act as a “vendor-neutral home” for some of the most important open source projects for continuous delivery and for specifications that speed up the release pipeline process.

https://twitter.com/linuxfoundation/status/1105515314899492864

The existing CI/CD ecosystem is heavily fragmented, which makes it difficult for developers and companies to decide on particular tooling for their projects. DevOps practitioners also often find it challenging to gather guidance on software delivery best practices. CDF was formed to make CI/CD tooling easier and to define the best practices and guidelines that will enable application developers to deliver better and more secure software at speed. CDF currently hosts some of the most popular CI/CD tools, including Jenkins, Jenkins X, Spinnaker, and Tekton. The foundation is backed by 20+ founding members, which include Alauda, Alibaba, Anchore, Armory.io, Atos, Autodesk, Capital One, CircleCI, CloudBees, DeployHub, GitLab, Google, HSBC, Huawei, IBM, JFrog, Netflix, Puppet, Rancher, Red Hat, SAP, Snyk, and SumoLogic.

Why Google joined CDF
As a part of the foundation, Google will be working on Spinnaker and Tekton. Originally created by Netflix and jointly led by Netflix and Google, Spinnaker is an open source, multi-cloud delivery platform. It comes with various features for making continuous delivery reliable, including support for advanced deployment strategies, an open source canary analysis service named Kayenta, and more. Spinnaker’s user community has deep experience in the continuous delivery domain, and by joining CDF Google aims to share that expertise with the broader community.

Tekton is a set of shared, open source components for building CI/CD systems. It allows you to build, test, and deploy applications across multiple environments such as virtual machines, serverless, Kubernetes, or Firebase. In the next few months, we can expect to see support for results and event triggering in Tekton. Google is also planning to work with CI/CD vendors to build an ecosystem of components that will allow users to use Tekton with existing tools like Jenkins X, Kubernetes-native tooling, and others.

Dan Lorenc, Staff Software Engineer at Google Cloud, sharing Google’s motivation behind joining CDF, said, “Continuous Delivery is a critical part of modern software development, but today space is heavily fragmented. The Tekton project addresses this problem by working with the open source community and other leading vendors to collaborate on the modernization of CI/CD infrastructure.”

Kim Lewandowski, Product Manager at Google Cloud, said, “The ability to deploy code securely and as fast as possible is top of mind for developers across the industry. Only through best practices and industry-led specifications will developers realize a reliable and portable way to take advantage of continuous delivery solutions. Google is excited to be a founding member of the CDF and to work with the community to foster innovation for deploying software anywhere.”

To know more, check out the official announcement on the Google Open Source blog.

Google Cloud Console Incident Resolved!
Cloudflare takes a step towards transparency by expanding its government warrant canaries
Google to acquire cloud data migration start-up ‘Alooma’


Google Cloud’s Titan and Android Pie come together to secure users’ data on mobile devices

Sunith Shetty
15 Oct 2018
3 min read
Barring the Google+ security incident, Google has had an excellent track record of providing security services that protect different levels of users’ data with ease. Android 9 now aims to give users more options to protect their data. To enhance user data security, Android will combine Android’s Backup Service and Google Cloud’s Titan technology to protect data backups while also maintaining the required privacy.

Completely backed-up user data is essential for a rich user experience
A lot of time and effort goes into creating an identity, adding new data, and customizing settings to a user’s preferences in an individual app. Whenever a user upgrades to a new device or re-installs an application, preserving that data is a must for a smooth user experience. A huge amount of data is generated when using mobile apps, so proper techniques are needed to back up the required data. Having only a small amount of data backed up can be frustrating for users, especially when they open an app on a new device.

Android Backup Service + Titan technology = secured data backups
With Android Pie, devices can take advantage of a new technique where backed-up application data can only be decrypted using a key that is randomly generated at the client. The key is encrypted using the user’s lock screen PIN, pattern, or passcode, which isn’t known to Google. The passcode-protected key is then encrypted to a Titan security chip on Google Cloud’s data centre floor. The Titan chip is configured so that it will release the backup encryption key only when presented with a correct claim derived from the user’s passcode. Because the Titan chip must authorize every access to the decryption key, it can permanently block access after too many incorrect attempts at guessing the user’s passcode, which mitigates brute force attacks. The permitted number of attempts is strictly set by custom Titan firmware and cannot be updated or changed without erasing the contents of the chip. This means that no one can access a user’s backed-up data without knowing the passcode.

Android team hired an external agency for a security audit
The Android Security & Privacy team hired global cybersecurity and risk mitigation expert NCC Group to complete a security audit, in order to ensure this new technique prevents anyone (including Google) from accessing users’ application data. The audit returned positive findings around Google’s security design processes, code quality validations, and handling of known attack vectors, all of which were taken into account prior to launching the service. The engineers quickly corrected some issues that were discovered during the audit. For complete details on how the service fared, you can check the detailed report of the NCC Group findings. These external reviews allow Google and Android to maintain the transparency and openness that let users feel safe about their data, says the Android team. For a complete list of details, you can refer to the official Google blog.

Google takes steps towards better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices
Facebook says only 29 million and not 50 million users were affected by last month’s security breach
Facebook finds ‘no evidence that hackers accessed third party Apps via user logins’, from last week’s security breach
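To make the key-wrapping idea above concrete, here is a small, conceptual Go sketch: a randomly generated backup key is wrapped with a key derived from the lockscreen passcode before it would ever leave the device. This is only an illustration of the general pattern, not Google's actual implementation; the function name, KDF parameters, and passcode value are invented for the example.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"

	"golang.org/x/crypto/pbkdf2"
)

// wrapBackupKey derives a wrapping key from the lockscreen passcode and
// uses it to encrypt (wrap) the randomly generated backup key. Only the
// wrapped blob would leave the device; the passcode never does.
func wrapBackupKey(passcode string, salt, backupKey []byte) (wrapped, nonce []byte, err error) {
	kek := pbkdf2.Key([]byte(passcode), salt, 100_000, 32, sha256.New)
	block, err := aes.NewCipher(kek)
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	return gcm.Seal(nil, nonce, backupKey, nil), nonce, nil
}

func main() {
	backupKey := make([]byte, 32) // random data-encryption key generated on the client
	salt := make([]byte, 16)
	if _, err := rand.Read(backupKey); err != nil {
		panic(err)
	}
	if _, err := rand.Read(salt); err != nil {
		panic(err)
	}

	wrapped, _, err := wrapBackupKey("1234", salt, backupKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("wrapped backup key: %x\n", wrapped)
}
```

In the scheme described in the article, the server-side Titan chip then gates release of this wrapped key behind a claim derived from the same passcode and enforces the attempt limit in firmware.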

Tesla v9 to incorporate neural networks for autopilot

Prasad Ramesh
16 Oct 2018
3 min read
Tesla, the car maker led by Elon Musk, is incorporating larger neural networks for Autopilot in the new Tesla v9. Based on the new Autopilot capabilities of version 9, the new neural net is a significant upgrade over v8: it can now track vehicles and other objects around the car by making better use of the eight cameras around the car. Tesla Motors Club member Jimmy_d, a deep learning expert, has shared his thoughts on v9 and the neural network used in it.

Tesla has deployed a new camera network to handle all 8 cameras. Like V8, the V9 neural network system consists of a set of ‘camera networks’ which process camera output directly, plus a separate set of ‘post processing’ networks that take output from the camera networks and turn it into higher-level actionable abstractions. V9 is a pretty big change from V8. The other major changes from V8 to V9, as stated by Jimmy, are:

- The same weight file is used for all cameras (this has pretty interesting implications; previously, V8 main/narrow seems to have had separate weights for each camera).
- Processed resolution of the 3 front cameras and the back camera: 1280×960 (full camera resolution).
- Processed resolution of the pillar and repeater cameras: 640×480 (half of the camera’s true resolution in each dimension).
- All cameras: 3 color channels, 2 frames (2 frames also has very interesting implications). In V8 this was 640×416, 2 color channels, 1 frame, for the main and narrow cameras only.

These camera changes mean a much larger neural network that requires more processing power. The V9 network takes images at a resolution of 1280×960 with 3 color channels and 2 frames per camera. That is a 1280×960×3×2 input, which is about 7.3MB. The V8 main camera processing frame was 640×416×2, that is, about 0.5MB. The V9 network thus has access to far more detail.

About the network size, Jimmy said: “This V9 network is a monster, and that’s not the half of it. When you increase the number of parameters (weights) in an NN by a factor of 5 you don’t just get 5 times the capacity and need 5 times as much training data. In terms of expressive capacity increase it’s more akin to a number with 5 times as many digits. So if V8’s expressive capacity was 10, V9’s capacity is more like 100,000.”

Tesla CEO Elon Musk had something to say about the estimates made by Jimmy:
https://twitter.com/elonmusk/status/1052101050465808384

The amount of training data doesn’t go up by a mere 5x. It takes at least thousands and even millions of times more data to fully utilize a network that has 5x as many parameters. We should see this new neural network implementation on the road in new cars about six months down the line. For more details, you can view the discussion on the Tesla Motors Club website.

Tesla is building its own AI hardware for self-driving cars
Elon Musk reveals big plans with Neuralink
DeepMind, Elon Musk, and others pledge not to build lethal AI
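As a quick sanity check of the input-size arithmetic quoted above, the snippet below recomputes the per-camera input volumes, assuming one byte per colour channel value (an assumption the article does not state explicitly).

```go
package main

import "fmt"

func main() {
	// width x height x color channels x frames, one byte per channel value.
	v9 := 1280 * 960 * 3 * 2 // front/back cameras in V9
	v8 := 640 * 416 * 2 * 1  // main camera in V8

	fmt.Printf("V9 input per camera: %d bytes (~%.1f MB)\n", v9, float64(v9)/1e6)
	fmt.Printf("V8 input per camera: %d bytes (~%.1f MB)\n", v8, float64(v8)/1e6)
}
```

This yields 7,372,800 bytes and 532,480 bytes, in line with the roughly 7.3MB and 0.5MB figures quoted by Jimmy_d, allowing for rounding.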


PrimeNG 8.0.0 releases with Angular 8 support, FocusTrap, and more

Bhagyashree R
14 Jun 2019
2 min read
Yesterday, the team behind PrimeNG, a collection of rich UI components for Angular, announced the release of PrimeNG 8.0.0. This release comes with Angular 8.0 support, a new feature called FocusTrap, and other quality improvements. Here are some of the updates in PrimeNG 8.0.0.

Compatibility with Angular 8
The main focus of this release was supporting Angular 8. PrimeNG 8.0.0 does not yet come with Ivy support, as there are various breaking changes for the team to tackle in 8.x. “It is easier to use Ivy although initially there are no significant gains, for library authors such as ourselves there are challenges ahead to fully support Ivy,” the team wrote in the announcement. The Ivy compiler is opt-in right now, but in a future release, probably v9, we can expect it to become the default. Currently there are “no real gains” in using it; however, you can give it a whirl to check whether your app works right with Ivy. You can enable it by adding "enableIvy": true to your angularCompilerOptions and restarting your application. Another issue to keep in mind is Angular 8’s web animations regression, which breaks your application if you add import 'web-animations-js'; to polyfills.ts. PrimeNG 8.0.0 users are advised to use a fork of web-animations until the issue is fixed.

Other new features and enhancements
- A new feature called FocusTrap is introduced: a directive that keeps focus within a certain DOM element while tabbing.
- Spinner now has the decimalSeperator and thousandSeperator props.
- A formatInput prop is added to Spinner that formats input numbers according to localSeperators.
- The FileUpload component now uses HttpClient, which works with interceptors. This is why the team has removed onBeforeSend and added onSend.
- A headers prop for FileUpload is introduced to define HttpHeaders for the post request.
- The ‘rows’ property of Table now supports two-way binding.

Read more about PrimeNG 8.0 on its official website.

Angular 8.0 releases with major updates to framework, Angular Material, and the CLI
5 useful Visual Studio Code extensions for Angular developers
Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular


Red Hat released RHEL 7.6

Amrata Joshi
01 Nov 2018
4 min read
On Tuesday, Red Hat announced the general availability of RHEL (Red Hat Enterprise Linux) 7.6. RHEL 7.6 is a consistent hybrid cloud foundation for enterprise IT. It is built on open source innovation and designed to enable organizations to keep pace with emerging cloud-native technologies, while supporting IT operations across enterprise IT’s four footprints. The beta version of RHEL 7.6 was released just three months ago. Red Hat Enterprise Linux 7.6 addresses a range of IT challenges, with an emphasis on security and compliance, management and automation, and Linux container innovations.

Features in RHEL 7.6

RHEL 7.6 addresses security concerns
IT security has always been a key challenge for many IT departments, and it does not get easier in complex hybrid and multi-cloud environments. Red Hat Enterprise Linux 7.6 answers this by introducing Trusted Platform Module (TPM) 2.0 hardware modules as part of Network Bound Disk Encryption (NBDE). NBDE provides security across networked environments, whereas TPM works on-premises to add an additional layer of security, tying disks to specific physical systems. Together, these two layers of security for hybrid cloud operations help keep information on disks physically more secure. RHEL 7.6 also makes it easier to manage firewalls, with improvements to nftables, a packet filtering framework, and it simplifies the configuration of counter-intrusion measures. Updated cryptographic algorithms for RSA and elliptic-curve cryptography (ECC) are enabled by default in RHEL 7.6. This helps organizations handling sensitive information keep pace with Federal Information Processing Standards (FIPS) compliance and standards bodies like the National Institute of Standards and Technology (NIST).

Management and automation get better
Red Hat Enterprise Linux 7.6 makes Linux adoption easier by bringing enhancements to the Red Hat Enterprise Linux Web Console, which provides a graphical overview of Red Hat system health and status. RHEL 7.6 makes it easier to find updates on the system summary page, and it provides automated configuration of single sign-on for identity management as well as a firewall control interface, which makes life easier for security administrators. RHEL 7.6 also comes with the extended Berkeley Packet Filter (eBPF), which provides a safer, more efficient mechanism for monitoring activity within the kernel and will soon enable additional performance monitoring and network tracing tools. Red Hat Enterprise Linux 7.6 also provides support for Red Hat Enterprise Linux System Roles, a collection of Ansible modules designed to provide a consistent way to automate and remotely manage Red Hat Enterprise Linux deployments. Each of these modules provides a ready-made automated workflow for handling common and complex tasks involved in Linux environments. This automation helps remove the possibility of human error from these tasks, which in turn frees up IT teams to focus more on adding business value.

Red Hat’s lightweight container toolkit
Red Hat Enterprise Linux 7.6 supports the rise of cloud-native technologies by introducing Red Hat’s lightweight container toolkit, which comprises CRI-O, Buildah, Skopeo, and now Podman. Each of these tools is built on fully open source, community-backed technologies and based on open standards like the Open Container Initiative (OCI) format. Podman complements Buildah and Skopeo and shares the same foundations as CRI-O. It enables users to run containers and groups of containers (pods) from a familiar command-line interface without the need for a daemon. This in turn helps reduce the complexity of container creation while making it easier for developers to build containers on workstations, in continuous integration/continuous development (CI/CD) systems, and within high-performance computing (HPC) or big data scheduling systems.

For more information on this release, check out Red Hat’s official website.

Red Hat Enterprise Linux 7.6 Beta released with focus on security, cloud, and automation
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
4 reasons IBM bought Red Hat for $34 billion


Game developers say Virtual Reality is here to stay

Natasha Mathur
08 Jun 2018
5 min read
“I don’t want to spend $600 to have a box on my head when I play video games” or “Too many cables, not enough games” are statements you’ll hear quite often if you come across a Virtual Reality non-believer. Despite all the criticism the Virtual Reality gaming industry receives, game developers across the world think differently. This year’s Skill Up report highlights what game developers feel about the VR world: a whopping 86% of respondents said ‘Yes, VR is here to stay’. With issues like heavy hardware and motion sickness getting fixed as VR technology advances, let’s look at the other reasons VR commands such a high level of confidence among game developers.

Why is VR here to stay?

VR hardware manufacturing is rising
The future of Virtual Reality is already set in motion. Google kickstarted the consumer VR industry by releasing Google Cardboard back in 2014; just look at the number of VR headsets released in the past six months and you can do the math yourself. With the likes of the Lenovo Mirage Solo, Oculus Go, HTC Vive Pro, Oculus Rift, Sony PlayStation VR, and Samsung Odyssey entering the market, it’s quite evident that there is growing demand for VR headsets. In addition to headsets being produced extensively, dedicated chipsets such as Qualcomm’s latest XR1 are being built to support these headsets and address an ever-present concern in the VR world: high prices. HTC (Vive), Vuzix, Meta, and Pico are among the others working towards dedicated chipsets for standalone headsets.

Prices are falling
Virtual Reality manufacturers across the globe have a common goal in mind: to make VR hardware cheaper. Oculus was the first to drop the price of the Oculus Rift permanently, to $399. Later, Sony joined in by bringing the price of its PlayStation VR down to as low as $200. Another common complaint about the price of VR headsets was that they all required additional computing power and hardware to operate, putting them out of reach of the average Jane or Joe. That problem seems to be fast disappearing with the release of standalone headsets. Qualcomm recently announced a new chipset for standalone AR/VR headsets at Augmented World Expo.

More games to hit the market
“There aren’t enough AAA games for the VR world.” Listen closely, and you’ll notice a VR non-believer expressing their disbelief in the VR world. But the Virtual Reality industry is smart: it keeps coming up with ways to pull even hardcore console gamers into the enticing VR space. The immersive nature of Virtual Reality provides the potential to build fascinating games that catch people’s interest. Events such as the annual Game Developers Conference, the VR & AR Developer Conference, and PAX East further ignite developers’ interest in creating these innovative games. With already popular VR games such as Doom, Fallout, and Perception in existence, more games are on their way to market. For instance, the creators of Titanfall, Respawn Entertainment, have announced that a brand new AAA VR title, in partnership with Oculus, will be released in 2019. Respawn is working hard in this new field and will be challenging other studios to define what the future of VR gaming looks like.

VR isn’t just limited to headsets
People assume VR is limited to headsets and games, but it is so much more than that. Many different fields are leveraging the potential of Virtual Reality. For instance, NASA uses its Virtual Reality Lab to train astronauts, and is also looking into using headsets like the HTC Vive, VR gloves from third-party developers, and assets from games like Mars 2030 and Earthlight to build VR training simulations at a fraction of the cost. Other industries that can immediately benefit from Virtual Reality include healthcare, education, museums, and entertainment. Doctors use VR to treat anxiety disorders and phobias, while researchers at Stanford University have used it to set up practice spaces for surgeons. Testing autonomous vehicles for safety also leverages Virtual Reality for simulation, which will help speed up the development of autonomous vehicles. Similarly, in education, Virtual Reality can be used in the classroom to visualize physics concepts for students, and museum visitors can be transported back to the Bronze Age with the help of VR. The entertainment industry has been making VR movies such as Walking New York, From Nothing, and Surge, which create an altogether different experience by making viewers feel like they are actually present in the scene. Imagine watching Jurassic Park or Avatar in VR!

It takes time for any new technology to find user adoption, and Virtual Reality has been no exception. But recently it seems to have broken through the barriers; one proxy for this claim is the news of VR headset sales crossing 1 million last year. The ball is rolling more and more in favor of the VR world. A whole industry is being formed, new technology is being made, and VR is no longer just hype.

Top 7 modern Virtual Reality hardware systems
SteamVR introduces new controllers for game developers, the SteamVR Input system
Build a Virtual Reality Solar System in Unity for Google Cardboard

Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games

Sugandha Lahoti
07 Dec 2018
3 min read
Google’s DeepMind introduced AlphaZero last year as a reinforcement learning program that masters three different board games, chess, shogi, and Go, beating the strongest existing programs in each case. Yesterday, DeepMind announced that a full evaluation of AlphaZero has been published in the journal Science, which confirms and updates the preliminary results. The research paper describes how DeepMind’s AlphaZero learns each game from scratch, without any human intervention and with no inbuilt domain knowledge beyond the basic rules of the game. Unlike traditional game-playing programs, AlphaZero uses deep neural networks, a general-purpose reinforcement learning algorithm, and a general-purpose tree search algorithm. The program’s first plays are completely random. Over time, the system uses reinforcement learning to learn from wins, losses, and draws and adjust the parameters of the neural network. The amount of training varies, taking approximately 9 hours for chess, 12 hours for shogi, and 13 days for Go. For searching, it uses Monte Carlo Tree Search (MCTS) to select the most promising moves in games.

Testing and evaluation
DeepMind’s AlphaZero was tested against the best engines for chess (Stockfish), shogi (Elmo), and Go (AlphaGo Zero). All matches were played at three hours per game, plus an additional 15 seconds for each move. AlphaZero was able to beat all its opponents in each evaluation. Per DeepMind’s blog:

- In chess, AlphaZero defeated the 2016 TCEC (Season 9) world champion Stockfish, winning 155 games and losing just six games out of 1,000. To verify AlphaZero’s robustness, it also played a series of matches that started from common human openings; in each opening, AlphaZero defeated Stockfish. It also played a match starting from the set of opening positions used in the 2016 TCEC world championship, along with a series of additional matches against the most recent development version of Stockfish and a variant of Stockfish that uses a strong opening book. AlphaZero won all these matches.
- In shogi, AlphaZero defeated the 2017 CSA world champion version of Elmo, winning 91.2% of games.
- In Go, AlphaZero defeated AlphaGo Zero, winning 61% of games.

AlphaZero’s ability to master three different complex games is an important step towards building a single AI system that can solve a wide range of real-world problems and generalize to new situations. People on the internet are highly excited about this new achievement.

https://twitter.com/DanielKingChess/status/1070755986636488704
https://twitter.com/demishassabis/status/1070786070806192129
https://twitter.com/TrevorABranch/status/1070765877669187584
https://twitter.com/LeonWatson/status/1070777729015013376
https://twitter.com/Kasparov63/status/1070775097970094082

DeepMind’s AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare
Google makes major inroads into healthcare tech by absorbing DeepMind Health
AlphaZero: The genesis of machine intuition


Google’s Sidewalk Lab smart city project threatens privacy and human rights: Amnesty Intl, CA says

Fatema Patrawala
30 Apr 2019
6 min read
Sidewalk Toronto, a joint venture between Sidewalk Labs, which is owned by Google parent company Alphabet Inc., and Waterfront Toronto, is proposing a high-tech neighbourhood called Quayside for the city’s eastern waterfront. In March 2017, Waterfront Toronto shared a request for proposals for this project with the Sidewalk Labs team. It was approved by October 2017, and the project is currently led by Eric Schmidt of Alphabet Inc. and Daniel Doctoroff, Sidewalk Labs CEO. According to Daniella Barreto, a digital activism coordinator for Amnesty International Canada, the project will normalize mass surveillance and is a direct threat to human rights.

https://twitter.com/AmnestyNow/status/1122932137513164801

The 12-acre smart city, which will be located between East Bayfront and the Port Lands, promises to tackle the social and policy challenges affecting Toronto: affordable housing, traffic congestion, and the impacts of climate change. Imagine self-driving vehicles shuttling you around a 24/7 neighbourhood featuring low-cost, modular buildings that easily switch uses based on market demand. Picture buildings heated or cooled by a thermal grid that doesn’t rely on fossil fuels, or garbage collection by industrial robots. Underpinning all of this is a network of sensors and other connected technology that will monitor and track environmental and human behavioural data.

That last part, about tracking human data, has sparked concerns. Much ink has been spilled in the press about privacy protections, and the issue has been raised repeatedly by citizens in two of four recent community consultations held by Sidewalk Toronto. The project proposes to build the waterfront neighbourhood from scratch, embed sensors and cameras throughout, and effectively create a “digital layer”. This digital layer may result in the monitoring of individuals’ actions and the collection of their data. In the Responsible Data Use Policy Framework released last year, the Sidewalk Toronto team made a number of commitments with regard to privacy, such as not selling personal information to third parties or using it for advertising purposes.

Barreto further argues that privacy was declared a human right and is protected under the Universal Declaration of Human Rights adopted by the United Nations in 1948. In the Sidewalk Labs conversation, however, privacy has been framed as a purely digital tech issue. Debates have focused on questions of data access: who owns it, how it will be used, where it should all be stored, and what should be collected. In other words, the project will collect the minutest details of an individual’s everyday life. For example, it could track which medical offices they enter, which locations they frequent, and who their visitors are, in turn giving away clues to physical or mental health conditions, immigration status, whether an individual is involved in any kind of sex work, their sexual orientation or gender identity, or the kind of political views they might hold. Further down the line, this could affect their health care, employment, where they are allowed to live, or where they can travel. All of this raises a question: do citizens want their data to be collected at this scale at all? That conversation remains long overdue. Not all communities have agreed to participate in this initiative, as marginalized and racialized communities will be affected most by surveillance.

The Canadian Civil Liberties Association (CCLA) has threatened to sue the Sidewalk Toronto project, arguing that privacy protections should be spelled out before the project proceeds. Toronto’s Mayor John Tory showed little interest in addressing these concerns during a panel on tech investment in Canada at South by Southwest (SXSW) on March 10. Tory was at the event to promote the city as a go-to tech hub to the international audience at SXSW and other industry events. Last October, Saadia Muzaffar announced her resignation from Waterfront Toronto’s Digital Strategy Advisory Panel. “Waterfront Toronto’s apathy and utter lack of leadership regarding shaky public trust and social license has been astounding,” the author and founder of TechGirls Canada said in her resignation letter. Later that month, Dr. Ann Cavoukian, a privacy expert and consultant for Sidewalk Labs, resigned too, as she wanted all data collection to be anonymized or “de-identified” at the source to protect the privacy of citizens.

Why big tech really wants your data
Data can be described as a rich resource, the “new oil”. Like oil, it can be mined in a number of ways, from licensing it for commercial purposes to making it open to the public and freely shareable, and it has the power to create class warfare, permitting those who own it to control the agenda and leaving those who don’t at their mercy. With the flow of data now contributing more to world GDP than the flow of physical goods, there’s a lot at stake for the different players. Corporations are the primary beneficiaries of personal data, monetizing it through advertising, marketing, and sales. Facebook, for example, has repeatedly come under scrutiny over the past two to three years for violating user privacy and mishandling data. For governments, data may serve the public good, improving quality of life for citizens via data-driven design and policies. But in some cases, minorities and the poor are the most impacted by the privacy harms caused by mass surveillance, discriminatory algorithms, and other data-driven technological applications. Public and private dissent can also be discouraged via mass surveillance, curtailing freedom of speech and expression. As per a New York Times report, low-income Americans have experienced a long history of disproportionate surveillance; the poor bear the burden of both ends of the spectrum of privacy harms, being subject to greater suspicion and monitoring while applying for government benefits and living in heavily policed neighborhoods. In some cases they also lose out on education and job opportunities.

https://twitter.com/JulieSBrill/status/1122954958544916480

In more promising news, the Oakland Privacy Advisory Commission today released two key documents, one on the Oakland privacy principles and the other on a ban on facial recognition tech.

https://twitter.com/cfarivar/status/1123081921498636288

The framework places strong emphasis on privacy, stating: “Privacy is a fundamental human right, a California state right, and instrumental to Oaklanders’ safety, health, security, and access to city services. We seek to safeguard the privacy of every Oakland resident in order to promote fairness and protect civil liberties across all of Oakland’s diverse communities.”

Safety will be paramount for smart city initiatives such as Sidewalk Toronto. But we need more Oakland-like laws and policies that protect and support privacy and human rights, under which we can use technology safely and nothing happens that we didn’t consent to.

#NotOkGoogle: Employee-led town hall reveals hundreds of stories of retaliation at Google
Google announces new policy changes for employees to report misconduct amid complaints of retaliation and harassment
#GoogleWalkout organizers face backlash at work, tech workers show solidarity


Low Carbon Kubernetes Scheduler: A demand side management solution that consumes electricity in low grid carbon intensity areas

Savia Lobo
27 Jun 2019
7 min read
Machine learning experts are increasingly becoming interested in researching on how machine learning can be used to reduce greenhouse gas emissions and help society adapt to a changing climate. For example, Machine Learning can be used to regulate cloud data centres that manage an important asset, ‘Data’ as these data centres typically comprise tens to thousands of interconnected servers and consume a substantial amount of electrical energy. Researchers from Huawei published a paper in April 2015, estimating that by 2030 data centres will use anywhere between 3% and 13% of global electricity At the ICT4S 2019 conference held in Lappeenranta, Finland, from June 10-15, researchers from the University of Bristol, UK, introduced their research on a low carbon scheduling policy for the open-source Kubernetes container orchestrator. “Low Carbon Kubernetes Scheduler” can provide demand-side management (DSM) by migrating consumption of electric energy in cloud data centres to countries with the lowest carbon intensity of electricity. In their paper the researchers highlight, “All major cloud computing companies acknowledge the need to run their data centres as efficiently as possible in order to address economic and environmental concerns, and recognize that ICT consumes an increasing amount of energy”. Since the end of 2017, Google Cloud Platform runs its data centres entirely on renewable energy. Also, Microsoft has announced that its global operations have been carbon neutral since 2012. However, not all cloud providers have been able to make such an extensive commitment. For example, Oracle Cloud is currently 100% carbon neutral in Europe, but not in other regions. The Kubernetes Scheduler selects compute nodes based on the real-time carbon intensity of the electric grid in the region they are in. Real-time APIs that report grid carbon intensity is available for an increasing number of regions, but not exhaustively around the planet. In order to effectively demonstrate the schedulers ability to perform global load balancing, the researchers have evaluated the scheduler based on its ability to the metric of solar irradiation. “While much of the research on DSM focusses on domestic energy consumption there has also been work investigating DSM by cloud data centres”, the paper mentions. Demand side management (DSM) refers to any initiatives that affect how and when electricity is being required by consumers. Source: CEUR-WS.org Existing schedulers work with consideration to singular data centres rather than taking a more global view. On the other hand, the Low Carbon Scheduler considers carbon intensity across regions as scaling up and down of a large number of containers that can be done in a matter of seconds. Each national electric grid contains electricity generated from a variable mix of alternative sources. The carbon intensity of the electricity provided by the grid anywhere in the world is a measure of the amount of greenhouse gas released into the atmosphere from the combustion of fossil fuels for the generation of electricity. Significant generation sites report the volume of electricity input to the grid in regular intervals to the organizations operating the grid (for example the National Grid in the UK) in real-time via APIs. These APIs typically provide the retrieval of the production volumes and thus allow to calculate the carbon intensity in real-time. 
The Low carbon scheduler collects the carbon intensity from the available APIs and ranks them to identify the region with the lowest carbon intensity. [box type="shadow" align="" class="" width=""]For the European Union, such an API is provided by the European Network of Transmission System Operators for Electricity (www.entsoe.eu) and for the UK this is the Balancing Mechanism Reporting Service (www.elexon.co.uk).[/box] Why Kubernetes for building a low carbon scheduler Kubernetes can make use of GPUs4 and has also been ported to run on ARM architecture 5. Researchers have also said that Kubernetes has to a large extent won the container orchestration war. It also has support for extendability and plugins which makes it the “most suitable for which to develop a global scheduler and bring about the widest adoption, thereby producing the greatest impact on carbon emission reduction”. Kubernetes allows schedulers to run in parallel, which means the scheduler will not need to re-implement the pre-existing, and sophisticated, bin-packing strategies present in Kubernetes. It need only to apply a scheduling layer to complement the existing capabilities proffered by Kubernetes. According to the researchers, “Our design, as it operates at a higher level of abstraction, assures that Kubernetes continues to deal with bin-packing at the node level, while the scheduler performs global-level scheduling between data centres”. The official Kubernetes documentation describes three possible ways of extending the default scheduler (kube-scheduler): adding these rules to the scheduler source code and recompiling, implementing one’s own scheduler process that runs instead of, or alongside kube-scheduler, or implementing a scheduler extender. Evaluating the performance of the low carbon Kubernetes scheduler The researchers recorded the carbon intensities for the countries that the major cloud providers operate data centers between 18.2.2019 13:00 UTC and 21.4.2019 9:00 UTC. Following is a table showing countries where the largest public cloud providers operate data centers, as of April 2019. Source: CEUR-WS.org They further ranked all countries by the carbon intensity of their electricity in 30-minute intervals. Among the total set of 30-minute values, Switzerland had the lowest carbon intensity (ranked first) in 0.57% of the 30-minute intervals, Norway 0.31%, France 0.11% and Sweden in 0.01%. However, the list of the least carbon intense countries only contains countries in central Europe locations. To justify Kubernetes’ ability or globally distributed deployments the researchers chose to optimize placement to regions with the greatest degree of solar irradiance termed a Heliotropic Scheduler. This scheduler is termed ‘heliotropic’ in order to differentiate it from a ‘follow-the-sun’ application management policy that relates to meeting customer demand around the world by placing staff and resources in proximity to those locations (thereby making them available to clients at lower latency and at a suitable time of day). A ‘heliotropic’ policy, on the other hand, goes to where sunlight, and by extension solar irradiance, is abundant. They further evaluated the Heliotropic Scheduler implementation by running BOINC jobs on Kubernetes. BOINC (Berkeley Open Infrastructure for Network Computing) is a software platform for volunteer computing that allows users to contribute computational capacity from their home PCs towards scientific research. 
Among BOINC projects, Einstein@Home, SETI@home and IBM World Community Grid are some of the most widely supported. The researchers say: "Even though many cloud providers are contracting for renewable energy with their energy providers, the electricity these data centres take from the grid is generated with release of a varying amount of greenhouse gas emissions into the atmosphere. Our scheduler can contribute to moving demand for more carbon intense electricity to less carbon intense electricity".

While the paper concludes that a wind-dominant, solar-complementary strategy is superior for the integration of renewable energy sources into cloud data centres' infrastructure, the Low Carbon Scheduler provides a proof of concept demonstrating how to reduce carbon intensity in cloud computing. To know more about this implementation for lowering carbon emissions, read the research paper.

Machine learning experts on how we can use machine learning to mitigate and adapt to the changing climate
VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision
Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
article-image-california-passes-the-u-s-first-iot-security-bill
Prasad Ramesh
25 Sep 2018
3 min read
Save for later

California passes the U.S.' first IoT security bill

California likes to lead the way when it comes to digital regulation. Just a few weeks ago it passed legislation that looks like it could restore net neutrality. Now a bill designed to tighten IoT security is with the governor, awaiting the signature that will carry it into California state law. The bill, SB-327 Information privacy: connected devices, was initially introduced in February 2017 by Senator Jackson and was the first legislation of its kind in the US. Approved at the end of August, it will come into effect at the start of 2020 once signed by Governor Jerry Brown.

Read next: IoT Forensics: Security in an always connected world where things talk

What California's IoT bill states

The new IoT security bill covers a number of important areas. For manufacturers, IoT devices will need to contain certain safety and security features: security should be appropriate to the nature and function of the device and to the information the device may collect, contain, or transmit, and it should be designed to protect the device and the information within it from unauthorized access, destruction, use, modification, or disclosure. If an IoT device requires authentication over the internet, further conditions need to be met: the preset password must be unique to each device that is manufactured, and the device must ask the user to generate a new means of authentication before it can be used for the first time (a toy sketch of what these two requirements might look like in device firmware appears below).

It's worth noting that these requirements do not apply to IoT devices that are subject to security requirements under federal law. Likewise, a covered entity such as a health care provider, business associate, contractor, or employer subject to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) or the Confidentiality of Medical Information Act is exempt.

The IoT is a network of devices that connect to the internet, typically via Wi-Fi. They are not openly visible, as most of them sit on a local network, but they often have few security measures. The bill doesn't give an exact definition of a 'reasonable security feature' but provides a few guiding points in the interest of a user's security. The legislation states:

"This bill, beginning on January 1, 2020, would require a manufacturer of a connected device, as those terms are defined, to equip the device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure, as specified."

Criticisms of the IoT bill

Some cybersecurity experts have criticised the legislation. For example, Robert Graham writes on his Errata Security blog that the bill is "based on a superficial understanding of cybersecurity/hacking that will do little to improve security, while doing a lot to impose costs and harm innovation." He explains that "the point [of good cybersecurity practice] is not to add 'security features' but to remove 'insecure features'."

Graham's criticisms underline that while the legislation might be well-intentioned, whether it will be impactful remains another matter. It is, at the very least, a step in the right direction by a state that is keen to take digital security and freedom into its own hands.
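As a rough, hypothetical illustration of what the two authentication clauses could mean in practice (this is not taken from the bill or from any real firmware), the Go sketch below generates a unique preset password per manufactured unit and refuses to operate until the owner has replaced it with a credential of their own.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"errors"
	"fmt"
)

// Device is a minimal, hypothetical model of an IoT device's credential state;
// a real product would hash and store credentials securely rather than keep
// them in plain fields, and compare them in constant time.
type Device struct {
	SerialNumber   string
	PresetPassword string // unique per manufactured unit, set at the factory
	UserPassword   string // empty until the owner sets their own credential
}

// NewDevice generates a unique preset password for each unit instead of
// shipping a shared default such as "admin/admin".
func NewDevice(serial string) (*Device, error) {
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		return nil, err
	}
	return &Device{SerialNumber: serial, PresetPassword: hex.EncodeToString(buf)}, nil
}

// Authenticate refuses to operate on the preset password alone: its first
// successful use must immediately establish a new user credential.
func (d *Device) Authenticate(password, newPassword string) error {
	if d.UserPassword != "" {
		if password == d.UserPassword {
			return nil
		}
		return errors.New("invalid credentials")
	}
	if password != d.PresetPassword {
		return errors.New("invalid preset password")
	}
	if newPassword == "" {
		return errors.New("a new password must be set before first use")
	}
	d.UserPassword = newPassword
	return nil
}

func main() {
	d, _ := NewDevice("SN-0001")
	fmt.Println(d.Authenticate(d.PresetPassword, ""))           // rejected: a new password must be set
	fmt.Println(d.Authenticate(d.PresetPassword, "s3cret-new")) // accepted, credential rotated
}
```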
You can read the bill at the California Legislative Information website.
How Blockchain can level up IoT Security
Defending your business from the next wave of cyberwar: IoT Threats

article-image-introducing-zero-server-a-zero-configuration-server-for-react-node-js-html-and-markdown
Bhagyashree R
27 Feb 2019
2 min read
Save for later

Introducing Zero Server, a zero-configuration server for React, Node.js, HTML, and Markdown

Developers behind the CodeInterview.io and RemoteInterview.io websites have come up with Zero, a web framework to simplify modern web development. Zero takes on the overhead of the usual project configuration for routing, bundling, and transpiling, making it easier to get started. Zero applications consist of static files and code files. Static files are all non-code files like images, documents, and media files. Code files are parsed, bundled, and served by a particular builder for that file type. Zero supports Node.js, React, HTML, and Markdown/MDX.

Features in Zero server

Autoconfiguration
Zero eliminates the need for any configuration files in your project folder. Developers just have to place their code and it will be automatically compiled, bundled, and served.

File-system based routing
Routing is based on the file system: for example, if your code is placed in './api/login.js', it will be exposed at 'http://domain.com/api/login' (a small sketch illustrating this convention appears at the end of this article).

Auto-dependency resolution
Dependencies are automatically installed and resolved. To install a specific version of a package, developers just have to create their own package.json.

Support for multiple languages
Zero supports code written in multiple languages. So, with Zero, you can do things like exposing your TensorFlow model as a Python API and writing user login code in Node.js, all under a single project folder.

Better error handling
Zero isolates endpoints from each other by running each of them in its own process. This ensures that if one endpoint crashes, there is no effect on any other component of the application. For instance, if /api/login crashes, there is no effect on the /chatroom page or the /api/chat API. Zero will also automatically restart crashed endpoints when the next user visits them.

To know more about the Zero server, check out its official website.

Introducing Mint, a new HTTP client for Elixir
Symfony leaves PHP-FIG, the framework interoperability group
Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument
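To illustrate the file-system routing convention in general terms, here is a small Go sketch. It is emphatically not Zero's implementation (Zero builds and serves Node.js, React, HTML, and MDX files); it only shows the idea of deriving a URL route from a file's path, with index files assumed to map to their directory, as is common in frameworks of this kind.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// routeFor derives a URL path from a file's location, dropping the extension,
// mirroring the convention described above (./api/login.js -> /api/login).
func routeFor(file string) string {
	p := filepath.ToSlash(file)
	p = strings.TrimPrefix(p, "./")
	p = strings.TrimSuffix(p, filepath.Ext(p))
	// Index files map to their directory, another common convention.
	p = strings.TrimSuffix(p, "index")
	if p == "" {
		return "/"
	}
	return "/" + strings.TrimSuffix(p, "/")
}

func main() {
	for _, f := range []string{"./api/login.js", "./about.md", "./index.jsx"} {
		fmt.Printf("%s -> %s\n", f, routeFor(f))
	}
}
```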