
Tech News


Hackers steal bitcoins worth $41M from Binance exchange in a single go!

Savia Lobo
09 May 2019
3 min read
On Tuesday, Binance, one of the most popular cryptocurrency exchanges, reported a major security breach in which hackers stole around 7,000 bitcoins, worth $41 million, in a single transaction. The hackers were able to obtain a large number of user API keys, 2FA codes, and other information. Binance said the attackers used a variety of techniques, including phishing and viruses. “We are still concluding all possible methods used. There may also be additional affected accounts that have not been identified yet,” Binance said in its official statement.

Binance confirmed that only its BTC hot wallet, which contained about 2% of the company’s total BTC holdings, was affected; all other wallets are secure and unharmed. The firm also said the hackers were extremely patient and carried out well-orchestrated actions through multiple seemingly independent accounts at the most opportune time. “The transaction is structured in a way that passed our existing security checks. It was unfortunate that we were not able to block this withdrawal before it was executed. Once executed, the withdrawal triggered various alarms in our system. We stopped all withdrawals immediately after that,” the statement continues.

Binance said that no user funds will be affected and that it will use its SAFU fund to cover the incident in full. The company estimated it would take about a week to conduct a thorough security review, during which all deposits and withdrawals will remain suspended. The review will cover all parts of its systems and data, with frequent updates posted along the way. “We beg for your understanding in this difficult situation,” Binance urged its users, adding, “Please also understand that the hackers may still control certain user accounts and may use those to influence prices in the meantime. We will monitor the situation closely. But we believe with withdrawals disabled, there isn’t much incentive for hackers to influence markets.”

Larry Cermak, Head Analyst at The Block and former researcher at Diar, analyzed the Binance hack and concluded that it was the sixth-largest exchange hack in history. He also said the $41 million is “peanuts” for Binance, which would need barely 47 days of profits to make back the money lost in the breach.

https://twitter.com/lawmaster/status/1126090906908676096

In a live video chat, Binance’s chief executive Changpeng Zhao sought to answer questions about the hack.

https://twitter.com/CharlieShrem/status/1126166334121881601

To know more about this news, read the complete official statement.

Symantec says NSA’s Equation group tools were hacked by Buckeye in 2016 way before they were leaked by Shadow Brokers in 2017
Listen: We discuss what it means to be a hacker with Adrian Pruteanu [Podcast]
Hacker destroys Iranian cyber-espionage data; leaks source code of APT34’s hacking tools on Telegram
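The reported figures can be sanity-checked with some quick back-of-the-envelope arithmetic (a sketch only; the derived values are implications of the numbers reported above, not figures from Binance or Cermak):

```python
# Back-of-the-envelope check of the figures reported in the article.
stolen_btc = 7_000          # bitcoins reported stolen
stolen_usd = 41_000_000     # reported dollar value of the theft
days_to_recover = 47        # Cermak's estimate of days to earn the loss back

# Implied BTC price at the time of the hack (~$5,857, consistent with May 2019)
implied_btc_price = stolen_usd / stolen_btc

# Cermak's 47-day claim implies Binance profits of roughly $872k per day
implied_daily_profit = stolen_usd / days_to_recover

print(f"Implied BTC price: ${implied_btc_price:,.0f}")
print(f"Implied daily profit: ${implied_daily_profit:,.0f}")
```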


Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!

Vincy Davis
09 May 2019
3 min read
The three-day Red Hat Summit 2019 has kicked off at the Boston Convention and Exhibition Center in the United States. Yesterday, Red Hat announced a major release, Red Hat OpenShift 4, the next generation of its trusted enterprise Kubernetes platform, re-engineered to address the complex realities of container orchestration in production systems. Red Hat OpenShift 4 simplifies hybrid and multicloud deployments to help IT organizations roll out new applications and help businesses thrive and gain momentum in an ever-growing set of competitive markets.

Features of Red Hat OpenShift 4

Simplify and automate the cloud everywhere

Red Hat OpenShift 4 helps automate and operationalize best practices for modern application platforms. It operates as a unified cloud experience for the hybrid world and enables an automation-first approach, including:

Self-managing platform for hybrid cloud: Provides a cloud-like experience via automatic software updates and lifecycle management across the hybrid cloud, enabling greater security, auditability, repeatability, ease of management and a better user experience.

Adaptability and heterogeneous support: Will be available in the coming months across major public cloud vendors, including Alibaba, Amazon Web Services (AWS), Google Cloud, IBM Cloud and Microsoft Azure, as well as private cloud technologies like OpenStack, virtualization platforms and bare-metal servers.

Streamlined full-stack installation: An automated installation process makes it easier to get started with enterprise Kubernetes.

Simplified application deployments and lifecycle management: Red Hat brings Operators to stateful and complex applications on Kubernetes, enabling self-operating application maintenance, scaling and failover.

Trusted enterprise Kubernetes

The Cloud Native Computing Foundation (CNCF) has certified the Red Hat OpenShift Container Platform as a conformant Kubernetes distribution. OpenShift 4 is built on the backbone of the world’s leading enterprise Linux platform, backed by Red Hat’s open source expertise, compatible ecosystem and leadership. It also provides a codebase that helps secure key innovations coming from upstream communities.

Empowering developers to innovate

OpenShift 4 supports the evolving needs of application development as a consistent platform to optimize developer productivity with:

Self-service, automation and application services that help developers extend their applications through on-demand provisioning of application services.

Red Hat CodeReady Workspaces, which lets developers harness the power of containers and Kubernetes while working with the familiar Integrated Development Environment (IDE) tools they use day-to-day.

OpenShift Service Mesh, which combines the Istio, Jaeger and Kiali projects into a single capability that encodes communication logic for microservices-based application architectures.

Knative for building serverless applications, in Developer Preview, which makes Kubernetes an ideal platform for building, deploying and managing serverless and function-as-a-service (FaaS) workloads.

KEDA (Kubernetes-based Event-Driven Autoscaling), a collaboration between Microsoft and Red Hat, in Developer Preview, which supports deployment of serverless event-driven containers on Kubernetes and enables Azure Functions in OpenShift. This allows accelerated development of event-driven, serverless functions across hybrid cloud and on-premises environments with Red Hat OpenShift.

Red Hat mentioned that OpenShift 4 will be available in the coming months. To read more details about OpenShift 4, head over to the official press release on Red Hat. To know about the other major announcements at Red Hat Summit 2019, like the Microsoft collaboration and Red Hat Enterprise Linux 8 (RHEL 8), visit our coverage of Red Hat Summit 2019 Highlights.

Red Hat rebrands logo after 20 years; drops Shadowman
Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program
Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend

Linux forms Urban Computing Foundation: Set of open source tools to build autonomous vehicles and smart infrastructure

Fatema Patrawala
09 May 2019
3 min read
The Linux Foundation, the nonprofit organization enabling mass innovation through open source, on Tuesday announced the formation of the Urban Computing Foundation (UCF). UCF will accelerate open source software to improve mobility, safety, road infrastructure, traffic congestion and energy consumption in connected cities. Its mission is to enable developers, data scientists, visualization specialists and engineers to improve urban environments, quality of life and city operation systems, and to build connected urban infrastructure. The founding members of UCF include Facebook, Google, IBM, UC San Diego, Interline Technologies and Uber, among others.

The executive director of the Linux Foundation, Jim Zemlin, told VentureBeat that the Foundation will adopt an open governance model developed by the Technical Advisory Council (TAC), which will include technical and IP stakeholders in urban computing who will guide its work by reviewing and curating projects. The intent, Zemlin added, is to provide platforms to developers who seek to address traffic congestion, pollution and other problems plaguing modern metros.

Here’s the list of TAC members:

Drew Dara-Abrams, principal, Interline Technologies
Oliver Fink, director Here XYZ, Here Technologies
Travis Gorkin, engineering manager of data visualization, Uber
Shan He, project leader of Kepler.gl, Uber
Randy Meech, CEO, StreetCred Labs
Michal Migurski, engineering manager of spatial computing, Facebook
Drishtie Patel, product manager of maps, Facebook
Paolo Santi, senior researcher, MIT
Max Sills, attorney, Google

On Tuesday, Facebook announced its participation as a founding member of the Urban Computing Foundation.

https://twitter.com/fb_engineering/status/1125783991452180481

Facebook mentions in its post, “We are using our expertise — including a predictive model for mapping electrical grids, disaster maps, and more accurate population density maps — to improve access to this type of technology”. Facebook further mentions that UCF will establish a neutral space for this critical work, which will include adapting geospatial and temporal machine learning techniques to urban environments and developing simulation methodologies for modeling and predicting citywide phenomena.

Uber also announced that it is joining and contributing Kepler.gl as the initiative’s first official project. Kepler.gl is Uber’s open source, no-code geospatial analysis tool for visualizing large-scale data sets. Released in 2018, it is currently used by Airbnb, Atkins Global, Cityswifter, Lime, Mapbox, Sidewalk Labs and UBILabs, among others, to generate visualizations of location data.

While all of this sets a path toward smarter cities, it also raises an alarm about yet another avenue for violating privacy and mishandling user data, given the industry’s track record. Amnesty International in Canada recently described Google’s Sidewalk Labs project in Toronto as normalizing mass surveillance and a direct threat to human rights. Questions are being raised about tech companies forming a foundation to address traffic congestion while doing nothing comparable to address privacy violations or online extremism.

https://twitter.com/shannoncoulter/status/1126199285530238976

The Linux Foundation announces the CHIPS Alliance project for deeper open source hardware integration
Mapzen, an open-source mapping platform, joins the Linux Foundation project
Uber becomes a Gold member of the Linux Foundation

Google employees lay down actionable demands after staging a sit-in to protest retaliation

Sugandha Lahoti
09 May 2019
4 min read
After organizing a sit-in on May Day to protest the ongoing retaliation against two Google Walkout organizers, Google employees yesterday published a post on Medium laying out clear and actionable demands. The sit-in was organized to protest alleged retaliation toward employees at the hands of their managers. “From being told to go on sick leave when you’re not sick, to having your reports taken away, we’re sick of retaliation,” Google employees tweeted via @GoogleWalkout. “Six months ago, we walked out. This time, we’re sitting in.”

https://twitter.com/GoogleWalkout/status/1123670797225164801

Google has had several falling-outs with its staff in the past. In April, two Google Walkout organizers accused the company of retaliating against them over last year’s Google Walkout protest and hosted a Retaliation Town Hall to share their stories and strategize. Following the statements of Whittaker and Stapleton, the two organizers in the town hall session, several current and former Googlers took to Twitter to register complaints and share their experiences of retaliation from the company. The protest quickly became a forum where over 350 current and former employees shared their experiences with retaliation under the hashtag #NotOkGoogle. Last year, 20,000 Google employees, along with temps, vendors and contractors, walked out to protest the discrimination, racism and sexual harassment they encountered at Google’s workplace. They laid out an original set of demands, of which Google delivered on only one, and only partially.

Read More: Google employees ‘Walkout for Real Change’ today. These are their demands.

Now, walkout organizers have reiterated their demands, stating, “We issued a clear, articulate, and actionable set of demands. Google has had six months to meet these demands: in that time, they’ve partially met only one of them.”

https://twitter.com/GoogleWalkout/status/1126133460224172033

“Google seems to have lost its mooring, and trust between workers and the company is deeply broken. As the company progresses from crisis to crisis, it is clear Google management is failing, along with HR. It’s time to put HR on a PIP (Performance Improvement Plan) and bring in someone we trust to supervise it. It’s time to escalate,” the Medium post highlights.

The most important demand is to fix Google’s HR department. The organizers say, “We call for a transparent, open investigation of HR and its abysmal handling of employee complaints related to working conditions, discrimination, harassment, and retaliation.” They want third-party investigators who will not prioritize the company and the reputation of abusers and harassers over their victims, something Google’s internal team tends to do. They cited Uber, which brought in Eric Holder and Arianna Huffington to lead the investigation into a former Uber employee’s claims of sexism and sexual harassment in the workplace. “These investigators need to be selected by Googlers and have no financial relationship with Google or Alphabet. They will need to respect the wishes of any worker they speak to as to whether they want to make their stories public and then publish their findings publicly,” Google Walkout for Real Change mentions in the blog post.

Googlers also urge the company to meet the original Walkout demands: “Google must meet the Walkout demands, already.” The earlier demands to put an employee representative on the company’s board of directors and to have the chief diversity officer report directly to the CEO have received no response from Google.

After the retaliation faced by Whittaker and Stapleton, they also demand that Google “unblock Meredith’s transfer, and allow her to continue her work as before, fully funded and supported, and to allow Claire to transfer to a new team without continued retaliation and interference.” Employees also want Alphabet CEO Larry Page to intervene, address the demands of the walkout and recommit Google to meeting them. “Larry controls Alphabet’s board and has the individual authority to make changes, where others do not,” the organizers wrote.

Google declined to comment but pointed to its previous statement regarding retaliation: “We prohibit retaliation in the workplace and publicly share our very clear policy. To make sure that no complaint raised goes unheard at Google, we give employees multiple channels to report concerns, including anonymously, and investigate all allegations of retaliation.”

#NotOkGoogle: Employee-led town hall reveals hundreds of stories of retaliation at Google
Google announces new policy changes for employees to report misconduct amid complaints of retaliation
#GoogleWalkout organizers face a backlash at work, tech workers show solidarity

Google to kill another product, the 'Works with Nest' API in the wake of bringing all smart home products under "Google Nest"

Bhagyashree R
09 May 2019
5 min read
Update: Included Google’s recent plan of action after facing backlash from Nest users.

At this year’s Google I/O developer conference, Google announced that it is bringing all Nest and Google Home products under one brand, “Google Nest”. As part of this effort, Nest announced on Tuesday that it will discontinue the Works with Nest API by August 31, 2019, in favor of Works with Google Assistant. “We want to unify our efforts around third-party connected home devices under a single developer platform – a one-stop shop for both our developers and our customers to build a more helpful home. To accomplish this, we’ll be winding down Works with Nest on August 31, 2019, and delivering a single unified experience through the Works with Google Assistant program,” wrote Nest in a post.

With this change, Google aims to make the whole smart home experience more secure and unified for users. Over the next few months, users with Nest accounts will need to migrate to Google Accounts, which will serve as a single front end for using products across Nest and Google. Along with providing a unified experience, Google also promises to be transparent about the data it collects, as laid out in an extensive document published on Tuesday. The document, titled “Google Nest commitment to privacy in the home”, describes how its connected smart home devices work and lays out Google’s approach to managing user data.

Though Google is promising improved security and privacy with this change, it will also break some existing third-party integrations. One of them is IFTTT (If This, Then That), a software platform with which you can write “applets” that allow devices from different manufacturers to talk to each other. IFTTT can be used for things like automatically adjusting the thermostat when users approach their house based on their phone’s location, or turning Philips Hue smart lights on when a Nest Cam security camera detects motion.
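The trigger-action pattern behind such applets can be sketched in a few lines of Python (a hypothetical illustration of the concept only; the function names and event fields are invented and not part of IFTTT’s or Nest’s actual APIs):

```python
# Hypothetical sketch of an IFTTT-style "if this, then that" rule.
# Triggers and actions are plain callables; all names here are invented.

def make_applet(trigger, action):
    """Run `action` whenever `trigger` matches an incoming event."""
    def applet(event):
        if trigger(event):
            return action(event)
        return None  # event didn't match; no action fires
    return applet

# "If a Nest Cam detects motion, then turn the Hue lights on."
motion_applet = make_applet(
    trigger=lambda e: e.get("device") == "nest_cam" and e.get("motion"),
    action=lambda e: f"hue_lights_on in {e['room']}",
)

print(motion_applet({"device": "nest_cam", "motion": True, "room": "porch"}))
print(motion_applet({"device": "thermostat", "motion": False, "room": "hall"}))
```

Killing the Works with Nest API removes the device-level hooks such third-party triggers depend on, which is why integrations like IFTTT break under the migration.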
Developers who work with the Works with Nest API are recommended to visit the Actions on Google Smart Home developer site to learn how to integrate smart home devices or services with the Google Assistant.

What do Nest users think about this decision?

Though Google is known for its search engine and other online services, it is also known for abandoning and killing its products abruptly. The decision to phase out Works with Nest has left many users who have bought Nest products infuriated.

https://twitter.com/IFTTT/status/1125930219305615360

“The big problem here is that there are a lot of people that have spent a lot of money on buying quality hardware that isn't just for leisure, it's for protection. I'll cite my 4 Nest Protects and an outdoor camera as an example. If somehow they get "sunsetted" due to some Google whim, fad or Because They Can, then I'm going to be pretty p*ssed, to say the least. Based on past experience I don't trust Google to act in the users' interest,” said one Hacker News user.

Some other users think the change could be for the better, but that the timeline Google has set is stringent. A Hacker News user commented on a discussion triggered by this news, “Reading thru it, it is not as brutal as it sounds, more than they merged it into the Google Assistant API, removing direct access permission to the NEST device (remember microphone-gate with NEST) and consolidating those permissions into Assistant. Whilst they are killing it off, they have a transition. However, as far as timelines go - August 2019 kill off date for the NEST API is brutal and not exactly the grace period users of connected devices/software will appreciate or in many cases with tech designed for non-technical people - know nothing until suddenly in August find what was working yesterday is now not working.”

Google’s reaction to the feedback from Nest users

In response to the backlash from Nest users, Google published a blog post last week sharing its plan of action. According to this plan, users’ existing devices and integrations will continue to work with their Nest accounts; however, they will not have access to any new features that will be available through a Google account. Google further clarified that it will stop taking new Works with Nest connection requests from August 31, 2019. “Once your WWN functionality is available on the WWGA platform you can migrate with minimal disruption from a Nest Account to a Google Account,” the blog post reads.

Though Google did share its plans regarding third-party integrations, it was vague about the timelines. It wrote, “One of the most popular WWN features is to automatically trigger routines based on Home/Away status. Later this year, we'll bring that same functionality to the Google Assistant and provide more device options for you to choose from. For example, you’ll be able to have your smart light bulbs automatically turn off when you leave your home.” It further shared that it has teamed up with Amazon and other partners to bring custom integrations to Google Nest.

Read the official announcement on Nest’s website.

Google employees join hands with Amnesty International urging Google to drop Project Dragonfly
What if buildings of the future could compute? European researchers make a proposal.
Google to allegedly launch a new Smart home device

Introducing Open Eye MSA Consortium by industry leaders for targeting high-speed optical connectivity applications

Amrata Joshi
09 May 2019
3 min read
Yesterday, the Open Eye Consortium announced the establishment of its Multi-Source Agreement (MSA) to standardize advanced specifications for optical modules. The specifications target lower-latency, more efficient and lower-cost optical modules at 50Gbps, 100Gbps, 200Gbps and up to 400Gbps for datacenter interconnects over single-mode and multimode fiber. The formation of the Open Eye MSA was initiated by MACOM and Semtech Corporation, with 19 current members in the Promoter and Contributing membership classes. The initial specification release is planned for Fall 2019, followed by product availability later in the year.

The Open Eye MSA aims to drive adoption of PAM-4 optical interconnects scaling to 50Gbps, 100Gbps, 200Gbps and 400Gbps by expanding on existing standards. This will let optical module implementations use less complex, lower-cost, lower-power and optimized clock and data recovery (CDR). The consortium is investing in the development of an industry-standard optical interconnect that would bring interoperability among a broad group of industry-leading technology providers, including providers of lasers, electronics and optical components. Its approach enables users to scale to next-generation baud rates.

Dale Murray, Principal Analyst at LightCounting, said, “LightCounting forecasts that sales of next-generation Ethernet products will exceed $500 million in 2020. However, this is only possible if suppliers can meet customer requirements for cost and power consumption. The new Open Eye MSA addresses both of these critical requirements. Having low latency is an extra bonus that HPC and AI applications will benefit from.”

The initial Open Eye MSA specification will focus on 53Gbps-per-lane PAM-4 solutions for 50G SFP, 100G DSFP, 100G SFP-DD, 200G QSFP, and 400G QSFP-DD and OSFP single-mode modules. Subsequent specifications will target multimode and 100Gbps-per-lane applications.

David (Chan Chih) Chen, AVP, Strategic Marketing for Transceiver, AOI, said, “Through its participation in the Open Eye MSA, AOI is leveraging our laser and optical module technology to deliver benefits of low cost, high-speed connectivity to next-generation data centers.”

Jeffery Maki, Distinguished Engineer II, Juniper Networks, said, “As a leader in switching, routing and optical interconnects, Juniper Networks has a unique perspective into the technology and market dynamics affecting enterprise, cloud and service provider data centers, and the Open Eye MSA provides a forum to apply our insight and expertise on the pathway to 200G and faster connectivity speeds.”

To know more about this news, check out the Open Eye MSA’s page.

Understanding network port numbers, TCP, UDP, and ICMP on an operating system
The FTC issues orders to 7 broadband companies to analyze ISP privacy practices given they are also ad-support content platforms
Using statistical tools in Wireshark for packet analysis [Tutorial]
Red Hat Summit 2019 Highlights: Microsoft collaboration, Red Hat Enterprise Linux 8 (RHEL 8), IDC study predicts support for software worth $10 trillion

Savia Lobo
09 May 2019
11 min read
The three-day Red Hat Summit 2019 kicked off yesterday at the Boston Convention and Exhibition Center in the United States. Since then, there have been many exciting announcements, including a Red Hat collaboration with Microsoft, presented in person by Microsoft CEO Satya Nadella, the release of Red Hat Enterprise Linux 8, an IDC study predicting that software running on RHEL will contribute more than $10 trillion to global business revenues in 2019, and much more. Let us look at each of these announcements in brief.

Azure Red Hat OpenShift: a Red Hat and Microsoft collaboration

The latest Red Hat and Microsoft collaboration, Azure Red Hat OpenShift, is clearly important: Microsoft’s CEO himself came over from Seattle to present it. Red Hat had already brought OpenShift to Azure last year; this is the next step in its partnership with Microsoft. Azure Red Hat OpenShift combines Red Hat’s enterprise Kubernetes platform OpenShift, running on Red Hat Enterprise Linux (RHEL), with Microsoft’s Azure cloud. With Azure Red Hat OpenShift, customers can combine Kubernetes-managed, containerized applications into Azure workflows. In particular, the two companies see this pairing as a road forward for hybrid cloud computing.

Paul Cormier, President of Products and Technologies at Red Hat, said, “Azure Red Hat OpenShift provides a consistent Kubernetes foundation for enterprises to realize the benefits of this hybrid cloud model. This enables IT leaders to innovate with a platform that offers a common fabric for both app developers and operations.”

Some features of Azure Red Hat OpenShift include:

Fully managed clusters with master, infrastructure and application nodes managed by Microsoft and Red Hat; plus, no VMs to operate and no patching required.

Regulatory compliance provided through compliance certifications similar to other Azure services.

Enhanced flexibility to more freely move applications from on-premises environments to the Azure public cloud via the consistent foundation of OpenShift.

Greater speed to connect to Azure services from on-premises OpenShift deployments.

Extended productivity with easier access to Azure public cloud services such as Azure Cosmos DB, Azure Machine Learning and Azure SQL DB for building the next generation of cloud-native enterprise applications.

According to the official press release, “Microsoft and Red Hat are also collaborating to bring customers containerized solutions with Red Hat Enterprise Linux 8 on Azure, Red Hat Ansible Engine 2.8 and Ansible Certified modules. In addition, the two companies are working to deliver SQL Server 2019 with Red Hat Enterprise Linux 8 support and performance enhancements.”

Red Hat Enterprise Linux 8 (RHEL 8) is now generally available

Red Hat Enterprise Linux 8 (RHEL 8) provides a consistent OS across public, private and hybrid cloud environments. It also gives users version choice, long life-cycle commitments, a robust ecosystem of certified hardware, software and cloud partners, and now comes with built-in management and predictive analytics.

Features of RHEL 8

Support for the latest and emerging technologies

RHEL 8 is supported across different architectures and environments so that users get a consistent and stable OS experience, helping them adapt to emerging tech trends such as machine learning, predictive analytics, the Internet of Things (IoT), edge computing and big data workloads. This is enabled largely by hardware innovations like GPUs, which can accelerate machine learning workloads. RHEL 8 supports deploying and managing GPU-accelerated NGC containers on Red Hat OpenShift. These AI containers deliver integrated software stacks and drivers to run GPU-optimized machine learning frameworks such as TensorFlow, Caffe2, PyTorch, MXNet and others.
Also, NVIDIA’s DGX-1 and DGX-2 servers are RHEL certified and are designed to deliver powerful solutions for complex AI challenges.

Introduction of Application Streams

RHEL 8 introduces Application Streams, through which fast-moving languages, frameworks and developer tools are updated frequently without impacting the core resources. This melds faster developer innovation with production stability in a single, enterprise-class operating system.

Abstracting the complexity of granular sysadmin tasks with the RHEL web console

RHEL 8 abstracts away many of the deep complexities of granular sysadmin tasks behind the Red Hat Enterprise Linux web console. The console provides an intuitive, consistent graphical interface for managing and monitoring the system, from the health of virtual machines to overall system performance. To further improve ease of use, RHEL supports in-place upgrades, providing a more streamlined, efficient and timely path for users to convert Red Hat Enterprise Linux 7 instances to Red Hat Enterprise Linux 8 systems.

Red Hat Enterprise Linux System Roles for managing and configuring Linux in production

Red Hat Enterprise Linux System Roles automate many of the more complex tasks around managing and configuring Linux in production. Powered by Red Hat Ansible Automation, System Roles are pre-configured Ansible modules that enable ready-made automated workflows for handling common, complex sysadmin tasks. This automation makes it easier for new systems administrators to adopt Linux protocols and helps eliminate human error as the cause of common configuration issues.

Support for the OpenSSL 1.1.1 and TLS 1.3 cryptographic standards

To enhance security, RHEL 8 supports the OpenSSL 1.1.1 and TLS 1.3 cryptographic standards. This provides access to the strongest, latest standards in cryptographic protection, which can be implemented system-wide via a single command, limiting the need for application-specific policies and tuning.
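On RHEL 8, that single command is `update-crypto-policies`, from the crypto-policies package (a brief sketch of typical usage on a RHEL 8 host; policy names follow Red Hat’s documentation, and the exact output depends on the system):

```shell
# Display the currently active system-wide cryptographic policy
update-crypto-policies --show

# Switch the whole system to the stricter FUTURE policy
# (longer keys, weaker algorithms disabled system-wide)
sudo update-crypto-policies --set FUTURE

# Restart services (or reboot) so running applications pick up the new policy
```

Because the policy is applied system-wide, individual applications that use the system’s crypto back ends inherit it without per-application tuning, which is the point the paragraph above makes.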
Support for the Red Hat container toolkit

With cloud-native applications and services frequently driving digital transformation, RHEL 8 delivers full support for the Red Hat container toolkit. Based on open standards, the toolkit provides technologies for creating, running and sharing containerized applications. It helps to streamline container development and eliminates the need for bulky, less secure container daemons.

Other additions in RHEL 8 include:

- It drives added value for specific hardware configurations and workloads, including the Arm and POWER architectures as well as real-time applications and SAP solutions.
- It forms the foundation for Red Hat’s entire hybrid cloud portfolio, starting with Red Hat OpenShift 4 and the upcoming Red Hat OpenStack Platform 15.
- Red Hat Enterprise Linux CoreOS, a minimal-footprint operating system designed to host Red Hat OpenShift deployments, is built on RHEL 8 and will be released soon.
- Red Hat Enterprise Linux 8 is also broadly supported as a guest operating system on Red Hat hybrid cloud infrastructure, including Red Hat OpenShift 4, Red Hat OpenStack Platform 15 and Red Hat Virtualization 4.3.

Red Hat Universal Base Image becomes generally available

Red Hat Universal Base Image, a userspace image derived from Red Hat Enterprise Linux for building Red Hat certified Linux containers, is now generally available. The Red Hat Universal Base Image is available to all developers with or without a Red Hat Enterprise Linux subscription, providing a more secure and reliable foundation for building enterprise-ready containerized applications. Applications built with the Universal Base Image can be run anywhere and will experience the benefits of the Red Hat Enterprise Linux life cycle and support from Red Hat when run on Red Hat Enterprise Linux or Red Hat OpenShift Container Platform.
Red Hat reveals results of a commissioned IDC study

Yesterday, at its summit, Red Hat also announced new research from IDC that examines the contributions of Red Hat Enterprise Linux to the global economy. The press release says: “According to the IDC study, commissioned by Red Hat, software and applications running on Red Hat Enterprise Linux are expected to contribute to more than $10 trillion worth of global business revenues in 2019, powering roughly 5% of the worldwide economy as a cross-industry technology foundation.” According to IDC’s research, IT organizations using Red Hat Enterprise Linux can expect to save $6.7 billion in 2019 alone. IT executives responding to the global survey point to Red Hat Enterprise Linux:

- Reducing the annual cost of software by 52%
- Reducing the amount of time IT staff spend doing standard IT tasks by 25%
- Reducing costs associated with unplanned downtime by 5%

To know more about this announcement in detail, read the official press release.

Red Hat infrastructure migration solution helped CorpFlex to reduce the IT infrastructure complexity and costs by 87%

Using Red Hat’s infrastructure migration solution, CorpFlex, a managed service provider in Brazil, successfully reduced the complexity of its IT infrastructure and lowered costs by 87% through its savings on licensing fees. Delivered as a set of integrated technologies and services, including Red Hat Virtualization and Red Hat CloudForms, the Red Hat infrastructure migration solution has provided CorpFlex with the framework for a complete hybrid cloud solution while helping the business improve its bottom line. Red Hat Virtualization is built on Red Hat Enterprise Linux and the Kernel-based Virtual Machine (KVM) project. This gave CorpFlex the ability to virtualize resources, processes, and applications with a stable foundation for a cloud-native and containerized future.
Migrating to Red Hat Virtualization can provide more affordable virtualization, improved performance, enhanced collaboration, extended automation, and more. Diogo Santos, CTO of CorpFlex, said, “With Red Hat Virtualization, we’ve not only seen cost-saving in terms of licensing per virtual machine but we’ve also been able to enhance our own team’s performance through Red Hat’s extensive expertise and training.” To know more about this news in detail, head over to the official press release.

Deutsche Bank increases operational efficiency by using Red Hat’s open hybrid cloud technologies to power its ‘Fabric’ application platform

Fabric is a key component of Deutsche Bank's digital transformation strategy and serves as an automated, self-service hybrid cloud platform that enables application teams to develop, deliver and scale applications faster and more efficiently. Red Hat Enterprise Linux has served as a core operating platform for the bank for a number of years, supporting a common foundation for workloads both on-premises and in the bank's public cloud environment. For Fabric, the bank continues using Red Hat’s cloud-native stack, built on the backbone of the world’s leading enterprise Linux platform, with Red Hat OpenShift Container Platform. The bank deployed Red Hat OpenShift Container Platform on Microsoft Azure as part of a security-focused new hybrid cloud platform for building, hosting and managing banking applications. By deploying Red Hat OpenShift Container Platform, the industry's most comprehensive enterprise Kubernetes platform, on Microsoft Azure, IT teams can take advantage of massive-scale cloud resources with a standard and consistent PaaS-first model.
According to the press release, “The bank has achieved its objective of increasing its operational efficiency, now running over 40% of its workloads on 5% of its total infrastructure, and has reduced the time it takes to move ideas from proof-of-concept to production from months to weeks.” To know more about this news in detail, head over to the official press release.

Lockheed Martin and Red Hat team up to accelerate upgrades to the U.S. Air Force’s F-22 Raptor fighter jets

Red Hat announced that Lockheed Martin was working with it to modernize the application development process used to bring new capabilities to the U.S. Air Force’s fleet of F-22 Raptor fighter jets. Lockheed Martin Aeronautics replaced the waterfall development process it previously used for the F-22 Raptor with an agile methodology and DevSecOps practices that are more adaptive to the needs of the U.S. Air Force. Together, Lockheed Martin and Red Hat created an open architecture based on Red Hat OpenShift Container Platform that has enabled the F-22 team to accelerate application development and delivery. “The Lockheed Martin F-22 Raptor is one of the world’s premier fighter jets, with a combination of stealth, speed, agility, and situational awareness. Lockheed Martin is working with the U.S. Air Force on innovative, agile new ways to deliver the Raptor’s critical capabilities to warfighters faster and more affordable”, the press release mentions. Lockheed Martin chose Red Hat Open Innovation Labs to lead it through the agile transformation process, help it implement an open source architecture onboard the F-22, and simultaneously disentangle its web of embedded systems to create something more agile and adaptive to the needs of the U.S. Air Force.
Red Hat Open Innovation Labs’ dual-track approach to digital transformation combined enterprise IT infrastructure modernization with hands-on instruction, helping Lockheed’s team adopt agile development methodologies and DevSecOps practices. During the Open Innovation Labs engagement, a cross-functional team of five developers, two operators, and a product owner worked together to develop a new application for the F-22 on OpenShift. After seeing an early impact with the initial project team, within six months Lockheed Martin had scaled its OpenShift deployment and use of agile methodologies and DevSecOps practices to a 100-person F-22 development team. During a recent enablement session, the F-22 Raptor scrum team improved its ability to forecast future sprints by 40%, the press release states. To know more about this news in detail, head over to the official press release on Red Hat. This story will be updated as the Summit progresses and new announcements come in. Visit the official Red Hat Summit website to know more about the sessions conducted, the list of keynote speakers, and more.

Red Hat rebrands logo after 20 years; drops Shadowman
Red Hat team announces updates to the Red Hat Certified Engineer (RHCE) program
Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows

Packt Editorial Staff
09 May 2019
8 min read
A typical deep learning workflow starts with ideation and research around a problem statement, where architectural design and model decisions come into play. Following this, the theoretical model is experimented with using prototypes. This includes trying out different models or techniques, such as skip connections, or making decisions on what not to try out. PyTorch started as a research framework built by a Facebook intern, and it has since grown into a framework used both for research and prototyping and for writing efficient models with serving modules. The PyTorch deep learning workflow is essentially the same workflow implemented by almost everyone in the industry, even for highly sophisticated implementations, with slight variations. In this article, we explain the core of the ideation and planning, and design and experimentation, phases of the PyTorch deep learning workflow. This article is an excerpt from the book PyTorch Deep Learning Hands-On by Sherin Thomas and Sudhanshu Passi. The book attempts to provide an entirely practical introduction to PyTorch, with numerous examples and dynamic AI applications demonstrating the simplicity and efficiency of the PyTorch approach to machine intelligence and deep learning.

Ideation and planning

Usually, in an organization, the product team comes up with a problem statement for the engineering team, to know whether they can solve it or not. This is the start of the ideation phase. However, in academia, this could be the decision phase where candidates have to find a problem for their thesis. In the ideation phase, engineers brainstorm and find the theoretical implementations that could potentially solve the problem. In addition to converting the problem statement to a theoretical solution, the ideation phase is where we decide what the data types are and what dataset we should use to build the proof of concept (POC) of the minimum viable product (MVP).
Also, this is the stage where the team decides which framework to go with by analyzing the behavior of the problem statement, available implementations, available pretrained models, and so on. This stage is very common in the industry, and I have come across numerous examples where a well-planned ideation phase helped the team roll out a reliable product on time, while an unplanned ideation phase destroyed the whole product.

Design and experimentation

The crucial part of design and experimentation lies in the dataset and its preprocessing. For any data science project, the major share of time is spent on data cleaning and preprocessing, and deep learning is no exception. Data preprocessing is one of the vital parts of building a deep learning pipeline. Usually, real-world datasets are not cleaned or formatted for a neural network to process; conversion to floats or integers, normalization, and so on is required before further processing. Building a data processing pipeline is also a non-trivial task, which usually means writing a lot of boilerplate code. To make this much easier, dataset builders and DataLoader pipeline packages are built into the core of PyTorch.

The dataset and DataLoader classes

Different types of deep learning problems require different types of datasets, and each of them might require different types of preprocessing depending on the neural network architecture we use. This is one of the core problems in deep learning pipeline building. Although the community has made the datasets for different tasks available for free, writing a preprocessing script is almost always painful. PyTorch solves this problem by giving abstract classes to write custom datasets and data loaders. The example given here is a simple dataset class to load the fizzbuzz dataset, but extending this to handle any type of dataset is fairly straightforward.
PyTorch's official documentation uses a similar approach to preprocess an image dataset before passing it to a complex convolutional neural network (CNN) architecture. A dataset class in PyTorch is a high-level abstraction that handles almost everything required by the data loaders. The custom dataset class defined by the user needs to override the __len__ and __getitem__ functions of the parent class, where __len__ is used by the data loaders to determine the length of the dataset and __getitem__ is used by the data loaders to get the item. The __getitem__ function expects the user to pass the index as an argument and returns the item that resides at that index:

```python
from dataclasses import dataclass
from torch.utils.data import Dataset, DataLoader

@dataclass(eq=False)
class FizBuzDataset(Dataset):
    input_size: int
    start: int = 0
    end: int = 1000

    def encoder(self, num):
        ret = [int(i) for i in '{0:b}'.format(num)]
        return [0] * (self.input_size - len(ret)) + ret

    def __getitem__(self, idx):
        x = self.encoder(idx)
        if idx % 15 == 0:
            y = [1, 0, 0, 0]
        elif idx % 5 == 0:
            y = [0, 1, 0, 0]
        elif idx % 3 == 0:
            y = [0, 0, 1, 0]
        else:
            y = [0, 0, 0, 1]
        return x, y

    def __len__(self):
        return self.end - self.start
```

The implementation of the custom dataset uses brand new dataclasses from Python 3.7. dataclasses help to eliminate boilerplate code for Python magic functions, such as __init__, using dynamic code generation. This needs the code to be type-hinted, and that's what the first three lines inside the class are for. You can read more about dataclasses in the official documentation of Python (https://docs.python.org/3/library/dataclasses.html). The __len__ function returns the difference between the end and start values passed to the class. In the fizzbuzz dataset, the data is generated by the program.
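The generation logic itself (binary encoding of the index plus a four-state one-hot label) can be tried standalone in plain Python, without PyTorch. The helper names encode and label below are illustrative, mirroring the methods of the class above:

```python
# Standalone mirror of FizBuzDataset's encoder and labeling logic (no PyTorch).
def encode(num, input_size=10):
    # binary representation of num, left-padded with zeros to input_size digits
    bits = [int(c) for c in '{0:b}'.format(num)]
    return [0] * (input_size - len(bits)) + bits

def label(idx):
    # one-hot over the four states: [fizzbuzz, buzz, fizz, neither]
    if idx % 15 == 0:
        return [1, 0, 0, 0]
    if idx % 5 == 0:
        return [0, 1, 0, 0]
    if idx % 3 == 0:
        return [0, 0, 1, 0]
    return [0, 0, 0, 1]

print(encode(7))    # 7 is 111 in binary, left-padded to ten digits
print(label(15))    # 15 is a multiple of both three and five: fizzbuzz
```

Running this shows exactly the (x, y) pair the dataset would hand to a data loader for index 7 or 15.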
The implementation of data generation is inside the __getitem__ function, where the class instance generates the data based on the index passed by DataLoader. PyTorch made the class abstraction as generic as possible, such that the user can define what the data loader should return for each id. In this particular case, the class instance returns the input and output for each index, where the input x is the binary-encoded version of the index itself and the output is the one-hot encoded output with four states. The four states represent whether the next number is a multiple of three (fizz), a multiple of five (buzz), a multiple of both three and five (fizzbuzz), or not a multiple of either three or five.

Note: For Python newbies, the way the dataset works can be understood by looking first at the loop that iterates over the integers from zero to the length of the dataset (the length is returned by the __len__ function when len(object) is called). The following snippet shows the simple loop:

```python
dataset = FizBuzDataset(input_size=10)  # input_size has no default, so it must be given
for i in range(len(dataset)):
    x, y = dataset[i]

dataloader = DataLoader(dataset, batch_size=10, shuffle=True,
                        num_workers=4)
for batch in dataloader:
    print(batch)
```

The DataLoader class accepts a dataset object that inherits from torch.utils.data.Dataset and performs non-trivial operations such as mini-batching, multithreading, shuffling, and so on, to fetch the data from the dataset. It accepts a dataset instance from the user and uses a sampler strategy to sample data as mini-batches. The num_workers argument decides how many parallel workers should be operating to fetch the data. This helps to avoid a CPU bottleneck so that the CPU can catch up with the GPU's parallel operations. Data loaders allow users to specify whether to use pinned CUDA memory or not, which copies the data tensors to CUDA's pinned memory before returning them to the user.
Using pinned memory is the key to fast data transfers between devices, since the data is loaded into the pinned memory by the data loader itself, which is done by multiple cores of the CPU anyway. Most often, especially while prototyping, custom datasets might not be available for developers and in such cases, they have to rely on existing open datasets. The good thing about working on open datasets is that most of them are free from licensing burdens, and thousands of people have already tried preprocessing them, so the community will help out. PyTorch came up with utility packages for all three types of datasets with pretrained models, preprocessed datasets, and utility functions to work with these datasets. This article is about how to build a basic pipeline for deep learning development. The system we defined here is a very common/general approach that is followed by different sorts of companies, with slight changes. The benefit of starting with a generic workflow like this is that you can build a really complex workflow as your team/project grows on top of it. Build deep learning workflows and take deep learning models from prototyping to production with PyTorch Deep Learning Hands-On written by Sherin Thomas and Sudhanshu Passi. F8 PyTorch announcements: PyTorch 1.1 releases with new AI tools, open sourcing BoTorch and Ax, and more Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs Top 10 deep learning frameworks
Google I/O 2019 D1 highlights: smarter display, search feature with AR capabilities, Android Q, linguistically advanced Google lens and more

Fatema Patrawala
09 May 2019
11 min read
This year’s Google I/O 2019 was meant to be big, and it didn't disappoint at all. There's a lot of big news to talk about, as Google introduced and showcased exciting new products, updates, features and functionalities for a much better user experience. Google I/O kicked off yesterday and will run through Thursday, May 9, at the Shoreline Amphitheater in Mountain View, California, with approximately 7,000 attendees from all around the world. “To organize the world’s information and make it universally accessible and useful. We are moving from a company that helps you find answers to a company that helps you get things done. Our goal is to build a more helpful Google for everyone.” Google CEO Sundar Pichai commenced his keynote session with these strong statements. He went on to list a few recent tech advances and said, “We continue to believe that the biggest breakthroughs happen at the intersection of AI.” He then discussed how Google is confident that it can do more AI without private data leaving your devices, and that the heart of the solution will be federated learning. Basically, federated learning is a distributed machine learning approach which enables model training on a large corpus of decentralized data. It enables mobile phones at different geographical locations to collaboratively train a machine learning model without transferring any data that may contain personal information from the devices. While the keynote lasted nearly two hours, it introduced some breakthrough innovations, which are covered in detail below.

Google Search at Google I/O 2019

Google remains a search giant, and that's something it has not forgotten at Google I/O 2019. However, search is about to become far more visually rich, thanks to AR camera capabilities now introduced directly into search results.
They held an on-stage demonstration at Google I/O which showed how a medical student could search for a muscle group and be presented, within mobile search results, with a 3D representation of the body part. Not only could it be played with within the search results, it could also be placed on the user’s desk and seen at real scale through the smartphone’s screen.

Source: Google

Even larger things, like an AR shark, can be put into your AR screen straight from the app. The Google team showcased this feature as the shark virtually appeared live in front of the audience.

Google Lens bill splitting and food recommendations

Google Lens was something that caught the audience’s interest in Google's app arsenal. Lens uses image recognition to deliver information based on what your camera is looking at. A demo showed how a combination of mapping data and image recognition will let Google Lens make recommendations from a restaurant’s menu, just by pointing your camera at it. And when the bill arrives, point your camera at the receipt and it'll show you tipping info and bill-splitting help. Google also announced a partnership with recipe providers to allow Lens to produce video tutorials when your phone is pointed at a written recipe.

Source: Google

Debut of Android Q beta 3

At Google I/O, Android Q beta 3 was introduced; Android Q is the 10th generation of the Android operating system, and it comes with new features for phone and tablet users. Google announced that there are over 2.5 billion active Android devices, as the software extends to televisions, in-car systems and smart screens like the Google Home Hub. It was further discussed how Android will work with foldable devices, and how it will be able to seamlessly tweak its UI depending on the format and ratio of the folding device. Another new feature, the live caption system in Android Q, will instantly turn audio into text to be read.
It's a system function triggered within the volume rocker menu. Captions can be tweaked for legibility, don't require an internet connection, and appear even on videos that have never been manually closed-captioned. It works at the OS level, letting it function across all your apps.

Source: Google

The smart reply feature will now work across all messaging apps in Android Q, with the OS smartly predicting your text. A Dark Theme, activated by battery saver or a quick tile, was introduced; lighting up fewer pixels on your phone will save battery life. Android Q will also double down on security and privacy features, such as a Maps incognito mode, reminders for location usage and sharing (such as only when a delivery app is in use), and TLS 1.3 encryption for low-end devices. Security updates will roll out faster too, updating over the air without the device needing a reboot. Along with Android Q Beta 3, which launches today on 21 new devices, Google also announced that Kotlin, a statically typed programming language, is now its preferred language for writing Android apps.

Chrome to be more transparent in terms of cookie control

Google announced that it will update Chrome to provide users with more transparency about how sites are using cookies, as well as simpler controls for cross-site cookies. A number of changes will be made to Chrome, like modifying how cookies work so that developers need to explicitly specify which cookies are allowed to work across websites and could therefore be used to track users. The mechanism is built on the web's SameSite cookie attribute, and you can find the technical details on web.dev. In the coming months, Chrome will require developers to use this mechanism to access their cookies across sites. This change will enable users to clear all such cookies while leaving single-domain cookies unaffected, preserving user logins and settings.
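The attribute at the center of this change can be seen from Python's standard http.cookies module (Python 3.8+, where the samesite attribute is supported). This is a minimal sketch; the enforcement itself happens in the browser:

```python
from http.cookies import SimpleCookie

# A cookie intended for cross-site use must opt in explicitly with
# SameSite=None and must also be marked Secure; cookies that omit the
# attribute are the ones Chrome will treat as same-site only.
cookie = SimpleCookie()
cookie['session'] = 'abc123'
cookie['session']['samesite'] = 'None'
cookie['session']['secure'] = True

header = cookie.output()  # the Set-Cookie header a server would emit
print(header)
```

Servers in any language emit the equivalent Set-Cookie header; Chrome's change concerns how such headers are honored.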
It will also enable browsers to provide clear information about which sites are setting these cookies, so users can make informed choices about how their data is used. This change also has a significant security benefit for users, protecting cookies from cross-site injection and data disclosure attacks like Spectre and CSRF by default. Google further announced that it will eventually limit cross-site cookies to HTTPS connections, providing additional important privacy protections for users. Developers can start to test their sites and see how these changes will affect behavior in the latest developer build of Chrome. “We believe these changes will help improve user privacy and security on the web — but we know that it will take time. We’re committed to working with the web ecosystem to understand how Chrome can continue to support these positive use cases and to build a better web,” say Ben Galbraith, Director, Chrome Product Management, and Justin Schuh, Director, Chrome Engineering. Google has also announced Flutter for web, mobile and desktop: Flutter for web will allow web-based applications to be built using the Flutter framework, the core framework for mobile devices will be upgraded to Flutter 1.5, and on the desktop Flutter will be available as an experimental project.

Next-generation Google Assistant

Google has been working hard to compress and streamline the AI that Google Assistant taps into from the cloud when it is processing voice commands. Currently, every voice request has to be run through three separate processing models to land on the correctly understood voice command, models that until now have taken up 100GB of storage on many Google servers. But that's about to change: Google has figured out how to shrink that down to just 500MB of storage space and to put it on your device. This will lower the latency between your voice request and the triggering of the task you wished to carry out. It's 10x faster - 'real time', according to Google.
A demo was also shown where a Google rep fired off a string of voice commands that required Google Assistant to access multiple apps, execute specific actions, and understand not only what the rep was saying, but what she actually meant. For example, she said, “Hey Google, what’s the weather today? What about tomorrow? Show me John Legend on Twitter; Get a Lyft ride to my hotel; turn the flashlight on; turn it off; take a selfie.” Assistant executed the whole sequence flawlessly, in a span of about 15 seconds.

Source: Google

Further demos showed off its ability to compose texts and emails that drew on information about the user’s travel plans, traffic conditions, and photos. And last but not least, it can silence your alarms and timers when you simply say 'Stop', to help you go back to your slumber.

Google Duplex gets smarter

Google Duplex is a Google Assistant service which could already make calls and bookings on your behalf based on your requests. Now it's getting smarter with the new 'Duplex on the web' feature. You can ask Google Duplex to plan a trip, and it'll begin filling in website forms such as reservation details, hire car bookings and more on your behalf, only waiting for you to confirm the details it has entered.

Google Home Hub is dead, long live the Nest Hub Max

At Google I/O, the company announced it was dropping the Google Home moniker, instead rebranding its devices with the Nest name, bringing them in line with its security systems. The Nest Hub Max was introduced, with a camera and a larger 10-inch display. With a built-in Nest Cam wide-angle (127-degree) security camera, which the original Home Hub omitted due to privacy concerns, it's now a far more security-focused device. It also lets you make video calls using a wide range of video calling apps. For the privacy-conscious, cameras and mics can be physically switched off with a slider that cuts off the electronics.
Source: Google

Voice and Face Match features, allowing families to create voice and face models, will let the Hub Max know to show only an individual's information or recommendations. It'll also double up as a kitchen TV if you have access to YouTube TV plans, and lowering the volume is as simple as raising your hand in front of the display. It'll launch this summer for $229 in the US and AU$349 in Australia. The original Hub also gets a price cut to $129 / AU$199.

Other honorable mentions

Google Stadia: Google had introduced its new game-streaming service, called Stadia, in March. The service uses Google’s own servers to store and run games, which you can then connect to and play whenever you’d like on literally any screen in your house, including your desktop, laptop, TV, phone and tablet. Basically, if it’s internet-connected and has access to Chrome, it can run Stadia. At I/O, Google announced that Stadia will stream games from the cloud not only to the Chrome browser but also to the Chromecast and other Pixel and Android devices. It plans to launch later this year in the US, Canada, UK, and Europe.

A cheaper Pixel phone: While other smartphones are getting more competitive in terms of pricing, Google introduced its new Pixel 3a, which is less powerful than the existing Pixel 3 and, at a base price of $399, half as expensive. In 2017, Forbes had done an analysis of why Google Pixel failed in the market, and one of the reasons was its exorbitantly high price. It stated that the tech giant needed to realize that its brand in the phone hardware business is just not worth as much as Samsung's or Apple's, so it cannot command the same price premium.

Source: Google

“Focus mode”: A new feature coming to Android P and Q devices this summer will let you turn off your most distracting apps to focus on a task, while still allowing text messages, calls, and other important notifications through.
Augmented reality in Google Maps: AR is one of those technologies that always seems to impress the tech companies that make it more than it impresses their actual users. But Google may finally be finding some practical uses for it, like overlaying walking directions when you hold up your phone’s camera to the street in front of you.

Incognito mode for Google Maps: It also announced a new “incognito” mode for Google Maps, which will stop keeping records of your whereabouts while it’s enabled. And they will further roll out this feature in Google Search and YouTube.

Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop
You can now permanently delete your location history, and web and app activity data on Google
Google’s Sidewalk Lab smart city project threatens privacy and human rights: Amnesty Intl, CA says
Linux 5.1 out with Io_uring IO interface, persistent memory, new patching improvements and more!

Vincy Davis
08 May 2019
3 min read
Yesterday, Linus Torvalds, the principal developer of the Linux kernel, announced the release of Linux 5.1 in a mailing list announcement. This release provides users with an open source operating system with lots of great additions, as well as improvements to existing features. The previous version, Linux 5.0, was released two months ago. “On the whole, 5.1 looks very normal with just over 13k commits (plus another 1k+ if you count merges). Which is pretty much our normal size these days. No way to boil that down to a sane shortlog, with work all over.”, said Linus Torvalds in the official announcement.

What’s new in Linux 5.1?

io_uring: new Linux IO interface

Linux 5.1 introduces a new high-performance interface called io_uring, designed to be easy to use and hard to misuse as a user/application interface. io_uring offers efficient buffered asynchronous I/O support, the ability to do I/O without even performing a system call via polled I/O, and other efficiency enhancements. This will help deliver fast and efficient I/O for Linux. Linux 5.1 also permits safe signal delivery in the presence of PID reuse, along with power management improvements. liburing, the accompanying user-space library, makes usage simpler, and Jens Axboe's FIO benchmark tool has already been adapted to support io_uring.

Security

In Linux 5.1, the SafeSetID LSM module has been added, which provides administrators with security and policy controls. It restricts UID/GID transitions from a given UID/GID to only those approved by system-wide acceptance lists, while preventing processes from obtaining the auxiliary privileges associated with CAP_SET{U/G}ID, such as allowing a user to set up user namespace UID mappings.

Storage

Along with physical RAM, users can now use persistent memory as RAM (system memory). Linux 5.1 also allows booting the system to a device-mapper device without using initramfs, and adds support for cumulative patches for the live kernel patching feature.
This persistent memory can also be used as a cost-effective RAM replacement.

Live patching improvements

Linux 5.1 adds a new live-patching capability called atomic replace: a single cumulative patch includes all wanted changes from older live patches and can completely replace them in one transition. Live patching enables a running system to be patched without the need for a full system reboot.

Users are quite happy with this update. A user on Reddit commented, “Finally! I think this one fixes problems with Elantech's touchpads spamming the dmesg log. Can't wait to install it!” Another user added, “Thank you and congratulations for the developers!”

To download the Linux kernel 5.1 sources, head over to kernel.org. To know more about the release, check out the official mailing list announcement.

Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32
Announcing Linux 5.0!
Bodhi Linux 5.0.0 released with updated Ubuntu core 18.04 and a modern look

Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop

Sugandha Lahoti
08 May 2019
4 min read
At the ongoing 2019 Google I/O, Google made a major overhaul to its Flutter UI framework: Flutter is now expanding from mobile to multi-platform. The company released the first technical preview of Flutter for web, and the core framework for mobile devices was upgraded to Flutter 1.5. On desktop, Flutter is still an experimental project; it is not production-ready, but the team has published early instructions for developing apps that run on Mac, Windows, and Linux. An embedding API for Flutter is also available, which allows it to be used in scenarios such as home and automotive.

Google notes, “The core Flutter project has been making progress to enable desktop-class apps, with input paradigms such as keyboard and mouse, window resizing, and tooling for Chrome OS app development. The exploratory work that we did for embedding Flutter into desktop-class apps running on Windows, Mac and Linux has also graduated into the core Flutter engine.”

Flutter for Web

Flutter for web allows web-based applications to be built using the Flutter framework. Per Google, with Flutter for web you can create “highly interactive, graphically rich content,” though it plans to continue evolving this version with a “focus on performance and harmonizing the codebase.” It allows developers to compile existing Flutter code written in Dart into a client experience that can be embedded in the browser and deployed to any web server. Google teamed up with The New York Times to build a small puzzle game called KenKen as an early example of what can be built using Flutter for web. The game uses the same code across Android, iOS, the web, and Chrome OS.

Source: Google Blog

Flutter 1.5

Flutter 1.5 hosts a variety of new features, including updates to its iOS and Material widgets and engine support for new mobile device types. The release also brings support for Dart 2.3, with extensive UI-as-code functionality.
It also includes an in-app payment library, which will make monetizing Flutter-based apps easier. Google also showcased an ML Kit Custom Image Classifier, built using Flutter and Firebase, at Google I/O 2019. The kit offers an easy-to-use, app-based workflow for creating custom image classification models: you can collect training data using the phone’s camera, invite others to contribute to your datasets, trigger model training, and use trained models, all from the same app.

Google has also released a comprehensive new training course for Flutter, built by The App Brewery. The new course is available at a time-limited discount, down from $199 to just $10.

Netizens were left wondering whether Google wants people to invest in learning Dart or Kotlin. For reference, Flutter is built entirely in Dart, and Google made two major announcements for Kotlin at Google I/O: Android development will become increasingly Kotlin-first, and the first preview of Jetpack Compose, a new open-source UI toolkit for Kotlin developers, is out. A comment on Hacker News reads, “This is massively confusing. Do we invest in Kotlin ...or do we invest in Dart? Where will Android be in 2 years: Dart or Kotlin?” In response, another comment reads, “I don't think anyone has a definite answer, not even Google itself. Google placed several bets on different technologies and community will ultimately decide which of them is the winning one. Personally, I think native Android (Kotlin) and iOS (Swift) development is here to stay. I have tried many cross-platform frameworks and on any non-trivial mobile app, all of them cause more problem than they solve.” Another said, “If you want to do android development, Kotlin. If you want to do multi-platform development, flutter.” “Invest in Kotlin. Kotlin is useful for Android NOW. Whenever Dart starts becoming more mainstream, you'll know and have enough time to react to it”, was another user’s opinion. Read the entire conversation on Hacker News.

Google launches Flutter 1.2, its first feature update, at Mobile World Congress 2019
You can now permanently delete your location history and web and app activity data on Google
Microsoft Build 2019: Microsoft showcases new updates to MS 365 platform with a focus on AI and developer productivity


All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night

Bhagyashree R
08 May 2019
4 min read
Last week, researchers published a paper titled Browser Fingerprinting: A Survey, which gives a detailed insight into what browser fingerprinting is and how it is being used in the research field and the industry. The paper further discusses the current state of browser fingerprinting and the challenges surrounding it.

What is browser fingerprinting?

Browser fingerprinting refers to the technique of collecting various device-specific pieces of information through a web browser to build a device fingerprint for identification. The device-specific information may include details like your operating system, active plugins, timezone, language, screen resolution, and various other active settings. This information can be collected by a simple script running inside a browser; a server can also collect a wide variety of information from public interfaces and HTTP headers. The technique is completely stateless, as it does not require storing any collected information inside the browser. The following table shows an example of a browser fingerprint:

Source: arXiv.org

The history of browser fingerprinting

Back in 2009, Jonathan Mayer, who works as an Assistant Professor in the Computer Science Department at Princeton University, investigated whether the differences in browsing environments can be exploited to deanonymize users. In his experiment, he collected the content of browsers' navigator, screen, navigator.plugins, and navigator.mimeTypes objects. The results showed that of 1,328 clients, 1,278 (96.23%) could be uniquely identified.

Following this experiment, in 2010, Peter Eckersley of the Electronic Frontier Foundation (EFF) performed the Panopticlick experiment, which investigated the real-world effectiveness of browser fingerprinting. He collected 470,161 fingerprints in the span of two weeks, drawn from HTTP headers, JavaScript, and plugins like Flash or Java.
He concluded that browser fingerprinting could uniquely identify 83.6% of the device fingerprints he collected. That percentage rose to 94.2% when users had Flash or Java enabled, as these plugins provided additional device information. This is the study that proved individuals can really be identified through these details, and the one in which the term “browser fingerprinting” was coined.

Applications of browser fingerprinting

As with any technology, browser fingerprinting has both negative and positive applications. By collecting browser fingerprints, one can track users without their consent, or attack their device after identifying a vulnerability. Since these tracking scripts are silent and executed in the background, users have no clue that they are being tracked.

On the positive side, browser fingerprinting can warn users beforehand that their device is out of date by recommending specific updates. It can be used to fight online fraud by verifying the actual content of a fingerprint: “As there are many dependencies between collected attributes, it is possible to check if a fingerprint has been tampered with or if it matches the device it is supposedly belonging to,” reads the paper. It can also be used for web authentication, by verifying whether the device is genuine.

Preventing unwanted tracking by browser fingerprinting

Modifying the content of fingerprints: To prevent third parties from identifying individuals through fingerprints, a browser can send random or pre-defined values instead of the real ones. Since third parties rely on fingerprint stability to link fingerprints to a single device, unstable fingerprints make it difficult for them to identify devices on the web.

Switching browsers: A device fingerprint is mainly composed of browser-specific information, so using two different browsers results in two different device fingerprints. This makes it difficult for a third party to track a user's browsing pattern.

Presenting the same fingerprint for all users: If every device on the web presents the same fingerprint, there is no advantage in tracking devices. This is the approach taken by the Tor Browser, distributed as the Tor Browser Bundle (TBB).

Reducing the surface of browser APIs: Another defense mechanism is to shrink the surface of browser APIs and reduce the quantity of information a tracking script can collect, for example by disabling plugins so that there are no additional fingerprinting vectors like Flash or Silverlight to leak extra device information.

Read the full paper to know more in detail.

DuckDuckGo proposes “Do-Not-Track Act of 2019” to require sites to respect DNT browser setting
Mozilla Firefox will soon support ‘letterboxing’, an anti-fingerprinting technique of the Tor Browser
Mozilla engineer shares the implications of rewriting browser internals in Rust
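The mechanism the paper describes can be sketched in a few lines: collect attribute values, canonicalize them, and hash the result into a stable identifier. The attribute names and values below are purely illustrative (this is not an actual browser API), and the randomisation at the end sketches the "modify the fingerprint content" defense:

```python
import hashlib
import random

# Hypothetical attributes a fingerprinting script might collect.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/66.0",
    "timezone": "UTC+2",
    "screen_resolution": "1920x1080",
    "language": "en-US",
    "plugins": "PDF Viewer;Widevine",
}

def fingerprint(attrs):
    """Concatenate attribute values in a fixed key order and hash them."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()

stable_id = fingerprint(attributes)

# Defense sketch: randomising even one attribute per session breaks the
# stability that trackers rely on to link visits to a single device.
noisy = dict(attributes, screen_resolution=random.choice(["1366x768", "1440x900"]))
assert fingerprint(noisy) != stable_id
```

Because the hash is deterministic over the collected attributes, the same device yields the same identifier on every visit, which is exactly the stability the defenses above try to destroy.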


Symantec says NSA’s Equation group tools were hacked by Buckeye in 2016 way before they were leaked by Shadow Brokers in 2017

Savia Lobo
07 May 2019
5 min read
In a report released yesterday, Symantec, the popular cybersecurity software and services company, revealed that the Buckeye group used the Equation Group's tools well before they were leaked by the Shadow Brokers in 2017. With the help of these tools, Buckeye exploited a Windows zero-day in 2016.

According to The New York Times: “Based on the timing of the attacks and clues in the computer code, researchers with the firm Symantec believe the Chinese did not steal the code but captured it from an N.S.A. attack on their own computers — like a gunslinger who grabs an enemy’s rifle and starts blasting away.”

In 2017, a mysterious cyber group known as the Shadow Brokers leaked a set of tools belonging to the Equation Group, one of the most technically adept espionage groups, tied to the Tailored Access Operations (TAO) unit of the U.S. NSA. The leak had a major impact, as many attackers rushed to get their hands on the disclosed tools. One of them, the EternalBlue exploit, was used in the WannaCry ransomware outbreak of May 2017.

Symantec’s report highlights that the Buckeye cyber-espionage group (aka APT3, Gothic Panda) began using the Equation Group tools in various attacks at least a year before the Shadow Brokers leaked them. The evidence traces back to March 2016 in Hong Kong, where Buckeye began using a variant of the DoublePulsar backdoor (Backdoor.Doublepulsar) that was later disclosed in the Shadow Brokers leak. DoublePulsar was delivered to victims using a custom exploit tool (Trojan.Bemstour) specifically designed to install it. Bemstour exploited two Windows vulnerabilities to achieve remote kernel code execution on targeted computers. One was a Windows zero-day vulnerability (CVE-2019-0703), reported by Symantec to Microsoft in September 2018 and patched on March 12, 2019.
The other Windows vulnerability (CVE-2017-0143) was patched in March 2017, after it was discovered to have been used by two exploit tools, EternalRomance and EternalSynergy, both also released in the Shadow Brokers leak.

“How Buckeye obtained Equation Group tools at least a year prior to the Shadow Brokers leak remains unknown,” Symantec's report says. Per the report, Buckeye had been active since at least 2009, when it began mounting a string of espionage attacks, mainly against organizations based in the U.S. The group disappeared in mid-2017, and three alleged members were indicted in the U.S. in November 2017. However, the Bemstour exploit tool and the DoublePulsar variant used by Buckeye continued to be used until at least September 2018, with different malware.

In 2011, the N.S.A. used sophisticated malware, Stuxnet, to destroy Iran’s nuclear centrifuges. It later saw the same code proliferate around the world, doing damage to random targets, including American business giants like Chevron. According to The New York Times, “Details of secret American cybersecurity programs were disclosed to journalists by Edward J. Snowden, a former N.S.A. contractor now living in exile in Moscow. A collection of C.I.A. cyber weapons, allegedly leaked by an insider, was posted on WikiLeaks.”

To this, Eric Chien, a security director at Symantec, said, “We’ve learned that you cannot guarantee your tools will not get leaked and used against you and your allies.” “This is the first time we’ve seen a case — that people have long referenced in theory — of a group recovering unknown vulnerabilities and exploits used against them, and then using these exploits to attack others,” Mr. Chien said.

The New York Times post mentions, “The Chinese appear not to have turned the weapons back against the United States, for two possible reasons, Symantec researchers said. They might assume Americans have developed defenses against their own weapons, and they might not want to reveal to the United States that they had stolen American tools.”

Two NSA employees told The New York Times that after the Shadow Brokers leaked some of its most highly coveted hacking tools in 2016 and 2017, the NSA turned its arsenal of software vulnerabilities over to Microsoft for patching and shut down some of its most sensitive counterterrorism operations. “The N.S.A.’s tools were picked up by North Korean and Russian hackers and used for attacks that crippled the British health care system, shut down operations at the shipping corporation Maersk and cut short critical supplies of a vaccine manufactured by Merck. In Ukraine, the Russian attacks paralyzed critical Ukrainian services, including the airport, Postal Service, gas stations and A.T.M.s.”, The New York Times reported.

Michael Daniel, president of the Cyber Threat Alliance and previously cybersecurity coordinator for the Obama administration, said, “None of the decisions that go into the process are risk-free. That’s just not the nature of how these things work. But this clearly reinforces the need to have a thoughtful process that involves lots of different equities and is updated frequently.”

Chien added that, in the future, American officials will need to factor in the real likelihood that their own tools will boomerang back on American targets or allies. Several security reporters and experts, however, feel the report has loopholes and lacks backing from intelligence sources.

https://twitter.com/RidT/status/1125747510625091585
https://twitter.com/ericgeller/status/1125551150567129089
https://twitter.com/jfersec/status/1125746228195622912
https://twitter.com/GossiTheDog/status/1125754423245004800
https://twitter.com/RidT/status/1125746008577724416

To know more about this news in detail, head over to Symantec’s complete report.
DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
PostgreSQL security: a quick look at authentication best practices [Tutorial]
Facebook accepts exposing millions of user passwords in a plain text to its employees after security researcher publishes findings

Microsoft Build 2019: Introducing Windows Terminal, application packed with multiple tab opening, improved text and more

Amrata Joshi
07 May 2019
2 min read
Yesterday, at Microsoft Build 2019, the team at Microsoft announced Windows Terminal, a new terminal application for users of command-line tools and shells like PowerShell, Command Prompt, and WSL. The terminal will be delivered via the Microsoft Store in Windows 10 and will be updated regularly.

Key features of Windows Terminal

Multiple tabs

Windows Terminal comes with multiple-tab support, so users can open any number of tabs, each connected to a command-line shell or app of their choice: PowerShell, Ubuntu on WSL, Command Prompt, a Raspberry Pi via SSH, and so on.

Text

Windows Terminal uses a GPU-accelerated, DirectWrite/DirectX-based text rendering engine that displays the text characters, glyphs, and symbols present in fonts on the PC, including emoji, powerline symbols, CJK ideograms, icons, and programming ligatures. It renders text much faster than the previously used engines, and users now have the option of using their own fonts.

Settings and configurability

Windows Terminal comes with many settings and configuration options that control the Terminal’s appearance and each of the shells/profiles that users open as new tabs. The settings are stored in a structured text file, which makes them easy for users and/or tools to configure. With the terminal’s configuration mechanism, users can create multiple “profiles” for each shell/app/tool, and each profile can have its own combination of color theme, font style and size, background blur/transparency level, and so on, letting users create their own custom-styled Terminal.

Windows Console

The team further announced that it is open sourcing Windows Console, which hosts the command-line infrastructure in Windows and provides the traditional console UX. The console's primary goal is preserving backward compatibility with existing command-line tools, scripts, and so on.

To know more about this news, check out Microsoft’s blog post.
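A profile entry in that structured settings file might look roughly like the sketch below. The key names reflect the early Windows Terminal preview and are illustrative only; verify them against the shipped settings schema before relying on them:

```json
{
  "profiles": [
    {
      "name": "PowerShell",
      "commandline": "powershell.exe",
      "colorScheme": "Campbell",
      "fontFace": "Cascadia Code",
      "fontSize": 11,
      "useAcrylic": true,
      "acrylicOpacity": 0.8
    },
    {
      "name": "Ubuntu (WSL)",
      "commandline": "wsl.exe -d Ubuntu",
      "colorScheme": "One Half Dark",
      "fontSize": 10
    }
  ]
}
```

Each object in the `profiles` array becomes a selectable tab type, with its own shell command line and appearance settings.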
Microsoft introduces Remote Development extensions to make remote development easier on VS Code
Docker announces collaboration with Microsoft’s .NET at DockerCon 2019
Microsoft and GitHub employees come together to stand with the 996.ICU repository


#MSBuild2019: Microsoft launches new products to secure elections and political campaigns

Sugandha Lahoti
07 May 2019
2 min read
It seems the big tech giants are getting serious about protecting election integrity and adopting data-protection measures. At the ongoing Microsoft Build 2019 developer conference, CEO Satya Nadella announced ElectionGuard, a free open-source software development kit (SDK), as an extension of Microsoft’s Defending Democracy Program.

ElectionGuard SDK

ElectionGuard is an open-source SDK and voting-system reference implementation developed in partnership with Galois. The SDK will give voting-system vendors the ability to enable end-to-end verifiability and improved risk-limiting audit capabilities for elections in their systems. It will be offered free to vendors, either to integrate into their existing systems or to use to build all-new election systems.

“One of the things we want to ensure is real transparency and verifiability in election systems. And so this is an open source project that will be alive on GitHub by the end of this month, which will even bring some new technology from Microsoft Research around homomorphic encryption, so that you can have the software stack that can modernize all of the election infrastructure everywhere in the world,” Nadella said onstage at Microsoft’s annual Build developer conference in Seattle.

The ElectionGuard SDK and reference implementation will be available on GitHub in June, just ahead of the EU elections.

Microsoft 365 for Campaigns

Microsoft 365 for Campaigns provides the security capabilities of Microsoft 365 Business to political parties and individual candidates. M365 for Campaigns will roll out to customers this summer for $5 per user per month, and any campaign using it will have free access to Microsoft’s AccountGuard service. Microsoft claims it will be affordable and “preconfigured to optimize for the unique operating environments campaigns face.”
Starting next month, M365 for Campaigns will be available to all federal election campaign candidates, federal candidate committees, and national party committees in the United States.

Microsoft Build is in its sixth year and continues until 8th May. The conference hosts over 6,000 attendees, including nearly 500 student-age developers and over 2,600 customers and partners. Watch this space for more coverage of Microsoft Build 2019.

Microsoft introduces Remote Development extensions to make remote development easier on VS Code
Docker announces a collaboration with Microsoft’s .NET at DockerCon 2019
How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]
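The homomorphic encryption Nadella mentions is what lets encrypted ballots be tallied without decrypting individual votes: operating on ciphertexts corresponds to adding the underlying plaintexts. As a hedged illustration only (this is a toy Paillier cryptosystem with tiny hard-coded primes, not ElectionGuard's actual construction, and utterly insecure at this key size), the additive homomorphism looks like this in Python:

```python
from math import gcd
import random

# Toy Paillier parameters: demo primes only, never use sizes like this.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Two encrypted "ballots": multiplying the ciphertexts adds the plaintext
# votes, so the tally is computed without decrypting either individual vote.
tally = decrypt(encrypt(1) * encrypt(0) % n2)
assert tally == 1
```

This additive property is the core idea behind verifiable encrypted tallying: observers can check the arithmetic on ciphertexts without ever seeing a single voter's choice.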