
Tech News


GitHub now supports two-factor authentication with security keys using the WebAuthn API

Bhagyashree R
22 Aug 2019
4 min read
Yesterday, GitHub announced that it now supports Web Authentication (WebAuthn) for security keys. In addition to time-based one-time password (TOTP) applications and text messages, you can now also configure two-factor authentication using a security key.

https://twitter.com/github/status/1164240757278027779

WebAuthn is a W3C standard that uses public-key cryptography instead of passwords or SMS texts for registration and authentication. It leverages strong authenticators that come built into devices, like Windows Hello or Apple's Touch ID. The purpose behind WebAuthn is not only to address security problems like phishing and data breaches but also to significantly increase ease of use.

Citing the reason behind bringing this support, Lucas Garron, a security engineer at GitHub, wrote in the announcement, "Account security is critical for GitHub. Although we support strong authentication options, many people still don't use a password manager or two-factor authentication because individual passwords have always been the easiest choice."

You will be able to use physical security keys on GitHub if you are using one of the following:

- Firefox and Chrome-based browsers on Windows, macOS, Linux, and Android
- Microsoft Edge on Windows
- Brave on iOS, using the new YubiKey 5Ci
- Safari Technology Preview on macOS

GitHub also allows using your laptop or phone as a security key if you do not want to carry an actual physical key. For this, you are required to register your device first. People using Microsoft Edge on Windows can register their device using Windows Hello with facial recognition, a fingerprint reader, or a PIN. Chrome users on macOS can use Touch ID, while on Android they can use the fingerprint reader to register their device.

Currently, security keys are secondary to authentication with a TOTP application or a text message. As more platforms start supporting security keys, GitHub plans to eventually make them the primary second factor. "Because platform support is not yet ubiquitous, GitHub currently supports security keys as a supplemental second factor. But we're evaluating security keys as a primary second factor as more platforms support them. In addition, WebAuthn can make it possible to support login using your device as a "single-factor" security key with biometric authentication instead of a password," Garron said.

This announcement got mixed reactions from users. While some think that security keys are the future of online authentication, others believe that we are better off with plain username-and-password authentication. The concern users have with fingerprints and other biometric means of authentication is that they are not really a secret, and if they are compromised there is no way to reset them.

https://twitter.com/probonopd/status/1164241777089548289

Those supportive of this step are excited about the ease of use WebAuthn brings. A user on Hacker News commented, "This is fantastic. I look forward to finally having much easier authentication on the web. Imagine browsers syncing between devices a single encryption key that will authenticate you to all sites, which you can easily back up to a piece of paper." Another user suggested, "In a somewhat related vein: it would be really fantastic if Github allowed the same SSH key (in my case: a Yubikey-resident SSH key) on multiple accounts; we use separate accounts for different clients, and Github's refusal to allow an SSH key to be used on multiple accounts means I can't use Yubikey SSH keys for those."

If you'd like to add support for security keys as an authentication option for your web service, the sketch after this article gives a rough idea of what a WebAuthn registration request looks like; check out the official announcement by GitHub for the details.

GitHub deprecates and then restores Network Graph after GitHub users share their disapproval
DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Apache Software Foundation finally joins the GitHub open source community
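As a rough illustration of what "adding support for security keys" involves on the server side (this is not taken from GitHub's announcement): the sketch below builds the kind of registration options a server might send to the browser, using field names from the W3C WebAuthn specification. The relying-party name/ID, user record, and algorithm choice are hypothetical placeholders, and a real deployment would persist the challenge and verify the authenticator's response with a proper WebAuthn library.

```python
import base64
import json
import os


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as WebAuthn payloads conventionally do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_registration_options(user_id: bytes, username: str, display_name: str) -> dict:
    """Build a PublicKeyCredentialCreationOptions-style payload (field names per the W3C spec)."""
    challenge = os.urandom(32)  # random challenge; a real server stores it to verify the response
    return {
        "challenge": b64url(challenge),
        "rp": {"name": "Example Service", "id": "example.com"},   # hypothetical relying party
        "user": {"id": b64url(user_id), "name": username, "displayName": display_name},
        "pubKeyCredParams": [{"type": "public-key", "alg": -7}],  # -7 = ES256
        "timeout": 60000,                                         # milliseconds
        "attestation": "none",
        "authenticatorSelection": {"userVerification": "preferred"},
    }


if __name__ == "__main__":
    print(json.dumps(make_registration_options(b"user-1234", "octocat", "Octocat"), indent=2))
```

The browser hands a payload of this shape to navigator.credentials.create(), and the resulting attestation is sent back to the server for verification; the security-key or built-in authenticator handles the private key.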


TimescaleDB goes distributed; implements ‘Chunking’ over ‘Sharding’ for scaling-out

Sugandha Lahoti
22 Aug 2019
5 min read
TimescaleDB announced yesterday that they are going distributed; this version is currently in private beta, with the public version slated for later this year. TimescaleDB remains built on PostgreSQL. However, a major problem with PostgreSQL is scaling out. To address this, TimescaleDB does not implement traditional sharding; instead, it uses 'chunking'.

What is TimescaleDB's chunking?

In TimescaleDB, chunking is the mechanism that scales PostgreSQL for time-series workloads. Chunks are created by automatically partitioning data by multiple dimensions (one of which is time). In a blog post, TimescaleDB specifies, "this is done in a fine-grain way such that one dataset may be comprised of 1000s of chunks, even on a single node."

Unlike sharding, which only offers the option to scale out, chunking offers a wide set of capabilities. These include scaling up (on the same node) and scaling out (across multiple nodes). It also offers elasticity, partitioning flexibility, data retention policies, data tiering, and data reordering. TimescaleDB also automatically partitions a table across multiple chunks on the same instance, whether on the same or different disks. TimescaleDB's multi-dimensional chunking auto-creates chunks, keeps recent data chunks in memory, and provides time-oriented data lifecycle management (e.g., for data retention, reordering, or tiering policies).

However, one issue is the management of the number of chunks (i.e., "sub-problems"). For this, the team has come up with the hypertable abstraction to make partitioned tables easy to use and manage.

Hypertable abstraction makes chunking manageable

Hypertables are typically used to handle a large amount of data by breaking it up into chunks, allowing operations to execute efficiently. When the number of chunks is large, these chunks can be distributed over several machines by using distributed hypertables. Distributed hypertables are similar to normal hypertables, but they add an additional layer of partitioning by distributing chunks across data nodes. They are designed for multi-dimensional chunking with a large number of chunks (from 100s to 10,000s), offering more flexibility in how chunks are distributed across a cluster. Users interact with a distributed hypertable just as they would with a regular hypertable, which itself looks just like a regular Postgres table (a minimal sketch follows at the end of this article).

Chunking does not put an additional burden on applications and developers, because applications do not interact directly with chunks (and thus do not need to be aware of this partition mapping themselves, unlike in some sharded systems). The system also does not expose different capabilities for chunks than for the entire hypertable.

TimescaleDB goes distributed

TimescaleDB's distributed version is already available for testing in private beta for selected users and customers. The initial licensed version is expected to be widely available. This version will support features such as high write rates, query parallelism, predicate push down for lower latency, elastically growing a cluster to scale storage and compute, and fault tolerance via physical replicas.

Developers were quite intrigued by the new chunking process. A number of questions were asked on Hacker News and duly answered by the TimescaleDB creators. One of the questions put forth relates to the hot partition problem. A user asks, "The biggest limit is that their "chunking" of data by time-slices may lead directly to the hot partition problem -- in their case, a "hot chunk." Most time series is 'dull time' -- uninteresting time samples of normal stuff. Then, out of nowhere, some 'interesting' stuff happens. It'll all be in that one chunk, which will get hammered during reads."

To which Erik Nordström, Timescale engineer, replied, "TimescaleDB supports multi-dimensional partitioning, so a specific "hot" time interval is actually typically split across many chunks, and thus server instances. We are also working on native chunk replication, which allows serving copies of the same chunk out of different server instances. Apart from these things to mitigate the hot partition problem, it's usually a good thing to be able to serve the same data to many requests using a warm cache compared to having many random reads that thrashes the cache."

Another question asked was, "In this vision, would this cluster of servers be reserved exclusively for time series data or do you imagine it containing other ordinary tables as well?" To which Mike Freedman, CTO of Timescale, answered, "We commonly see hypertables (time-series tables) deployed alongside relational tables, often because there exists a relation between them: the relational metadata provides information about the user, sensor, server, security instrument that is referenced by id/name in the hypertable. So joins between these time-series and relational tables are often common, and together these serve the applications one often builds on top of your data. Now, TimescaleDB can be installed on a PG server that is also handling tables that have nothing to do with its workload, in which case one does get performance interference between the two workloads. We generally wouldn't recommend this for more production deployments, but the decision here is always a tradeoff between resource isolation and cost."

Some thought sharding remains the better choice even if chunking improves performance.

https://twitter.com/methu/status/1164381453800525824

Read the official announcement for more information. You can also view the documentation.

TimescaleDB 1.0 officially released
Introducing TimescaleDB 1.0 RC, the first OS time-series database with full SQL support
Zabbix 4.2 release packed with modern monitoring system for data collection, processing and visualization
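Coming back to the hypertable abstraction described above, here is a minimal sketch of the workflow from Python with psycopg2: create an ordinary PostgreSQL table and turn it into a (single-node) hypertable. The connection string, table name, and column layout are hypothetical, and the distributed-hypertable variant was still in private beta at the time of writing.

```python
import psycopg2

# Hypothetical connection string; a real deployment would point at a
# PostgreSQL instance with the TimescaleDB extension installed.
DSN = "postgresql://tsdb_user:secret@localhost:5432/metrics"

conn = psycopg2.connect(DSN)
conn.autocommit = True

with conn.cursor() as cur:
    # Enable the extension (no-op if it is already installed).
    cur.execute("CREATE EXTENSION IF NOT EXISTS timescaledb;")

    # A plain Postgres table for time-series readings.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS conditions (
            time        TIMESTAMPTZ NOT NULL,
            device_id   TEXT        NOT NULL,
            temperature DOUBLE PRECISION
        );
    """)

    # Turn it into a hypertable partitioned by time; TimescaleDB then
    # creates and manages the underlying chunks automatically.
    cur.execute(
        "SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);"
    )

conn.close()
```

Queries against the conditions table then look like ordinary SQL; the chunking stays invisible to the application, which is exactly the point of the hypertable abstraction discussed in the article.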


IBM open-sources Power ISA and other chips; brings OpenPOWER foundation under the Linux Foundation

Vincy Davis
22 Aug 2019
3 min read
Yesterday, IBM made a huge announcement to reinforce its commitment to the open hardware movement. At the ongoing Linux Foundation Open Source Summit 2019, Ken King, the general manager for OpenPOWER at IBM, disclosed that the Power series chipmaker is open-sourcing its Power Instruction Set Architecture (ISA) and other chips for developers to build new hardware.

IBM wants the open community members to take advantage of "POWER's enterprise-leading capabilities to process data-intensive workloads and create new software applications for AI and hybrid cloud built to take advantage of the hardware's unique capabilities," says IBM.

At the Summit, King also announced that the OpenPOWER Foundation will be integrated with the Linux Foundation. Launched in 2013, IBM's OpenPOWER Foundation is a collaboration around Power ISA-based products and has the support of 350 members, including IBM, Google, Hitachi, and Red Hat.

By moving the OpenPOWER Foundation under the Linux Foundation, IBM wants the developer community to try Power-based systems without paying any fee. It will motivate developers to customize their OpenPOWER chips for applications like AI and hybrid cloud by taking advantage of POWER's rich feature set. "With our recent Red Hat acquisition and today's news, POWER is now the only architecture—and IBM the only processor vendor—that can boast of a completely open systems stack, from the foundation of the processor instruction set and firmware all the way through the software," King adds.

Read More: Red Hat joins the RISC-V foundation as a Silver level member

The Linux Foundation supports open source projects by providing financial and intellectual resources, infrastructure, services, events, and training. Hugh Blemings, the Executive Director of the OpenPOWER Foundation, said in a blog post, "The OpenPOWER Foundation will now join projects and organizations like OpenBMC, CHIPS Alliance, OpenHPC and so many others within the Linux Foundation." He concludes, "The Linux Foundation is the premier open-source group, and we're excited to be working more closely with them."

Many developers are of the opinion that IBM open-sourcing the ISA is a decision taken too late. A user on Hacker News comments, "28 years after introduction. A bit late." Another user says, "I'm afraid they are doing it for at least 10 years too late." Another comment reads, "might be too little too late. I used to be powerpc developer myself, now nearly all the communities, the ecosystem, the core developers are gone, it's beyond repair, sigh"

Many users also think that IBM's announcements are a direct challenge to the RISC-V community. A Redditor comments, "I think the most interesting thing about this is that now RISC-V has a direct competitor, and I wonder how they'll react to IBM's change." Another user says, "Symbolic. Risc-V, is more open, and has a lot of implementations already, many of them open. Sure, power is more about high performance computing, but it doesn't change that much. Still, nice addition. It doesn't really change substantially anything about Power or it's future adoption"

You can visit the IBM newsroom for more information on the announcements.

Black Hat USA 2019 conference Highlights: IBM's 'warshipping', OS threat intelligence bots, Apple's $1M bug bounty programs and much more!
IBM continues to layoff older employees solely to attract Millennials to be at par with Amazon and Google
IBM halt sales of Watson AI tool for drug discovery amid tepid growth: STAT report


Julia v1.2 releases with support for argument splatting, Unicode 12, new star unary operator, and more

Vincy Davis
21 Aug 2019
3 min read
Yesterday, the team behind Julia announced the release of Julia v1.2. It is the second minor release in the 1.x series and has new features such as argument splatting, support for Unicode 12, and a new ⋆ (star) unary operator. Julia v1.2 also has many performance improvements with marginal and undisruptive changes.

The post states that Julia v1.2 will not have long-term support: "As of this release, 1.1 has been effectively superseded by 1.2, which means there will not likely be any further 1.1.x releases. Our good friend 1.0 is still currently the only long-term support version."

What's new in Julia v1.2

- Argument splatting (x...) can now be used in calls to the new pseudo-function in constructors.
- Support for Unicode 12 has been added.
- A new unary operator ⋆ (star) has been added.

New library functions

- New methods !=(x), >(x), >=(x), <(x), <=(x) have been added to return partially-applied versions of the corresponding comparison functions.
- A new getipaddrs() function has been added to return the IP addresses of the local machine, with the IPv4 addresses listed first.
- New library functions Base.hasproperty and Base.hasfield have been added.

Other improvements in Julia v1.2

Multi-threading changes

- It is now possible to schedule and switch tasks during @threads loops, and perform limited I/O.
- A new thread-safe replacement for the Condition type has been added. It can be accessed as Threads.Condition.

Standard library changes

- The extrema function now accepts a function argument in the same way as minimum and maximum.
- The hasmethod method can now check for matching keyword argument names.
- The mapreduce function now accepts multiple iterators.
- Functions that invoke commands, such as run(::Cmd), now throw a ProcessFailedException rather than an ErrorException when the command fails.
- A new no-argument constructor for Ptr{T} has been added to construct a null pointer.

Jeff Bezanson, Julia co-creator, says, "If you maintain any packages, this is a good time to add CI for 1.2, check compatibility, and tag new versions as needed."

Users are happy with the Julia v1.2 release and are full of praise for the Julia language. A user on Hacker News comments, "Julia has very well thought syntax and runtime I hope to see it succeed in the server-side web development area." Another user says, "I've recently switched to Julia for all my side projects and I'm loving it so far! For me the killer feature is the seamless GPUs integration."

For more information on Julia v1.2, head over to its release notes.

Julia co-creator, Jeff Bezanson, on what's wrong with Julialang and how to tackle issues like modularity and extension
Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0
Mozilla is funding a project for bringing Julia to Firefox and the general browser environment


Microsoft Edge Beta is now ready for you to try

Bhagyashree R
21 Aug 2019
3 min read
Yesterday, Microsoft announced the launch of the first beta builds of its Chromium-based Edge browser. Developers using the supported versions of Windows and macOS can now test it and share their feedback with the Microsoft Edge team.

https://twitter.com/MicrosoftEdge/status/1163874455690645504

The preview builds are made available to developers and early adopters through three channels on the Microsoft Edge Insider site: Beta, Canary, and Developer. Earlier this year, Microsoft opened the Canary and Developer channels. Canary builds receive updates every night, while Developer builds are updated weekly. Beta is the third and final preview channel and the most stable of the three. It receives updates every six weeks, along with periodic minor updates for bug fixes and security.

Source: Microsoft Edge Insider

What's new in Microsoft Edge Beta

Microsoft Edge Beta comes with several options for personalizing your browsing experience. It supports the dark theme and offers 14 different languages to choose from. If you are not a fan of the default new tab page, you can customize it with its tab page customizations. There are currently three preset styles that you can switch between: Focused, Inspirational, and Informational. You can further customize and personalize Microsoft Edge through the different add-ons available on the Microsoft Edge Insider Addons store or other Chromium-based web stores.

On the user privacy front, this release brings support for tracking prevention. Enabling this feature will protect you from being tracked by websites that you don't visit. You can choose from three levels of privacy: Basic, Balanced, and Strict.

Microsoft Edge Beta also comes with some of the commercial features that were announced at Build this year. Microsoft Search is now integrated into Bing, which lets you search for OneDrive files directly from Bing search. There is support for Internet Explorer mode, which brings Internet Explorer 11 compatibility directly into Microsoft Edge. It also supports Windows Defender Application Guard for isolating enterprise-defined untrusted sites.

Along with this release, Microsoft also launched the Microsoft Edge Insider Bounty Program. Under this program, researchers who find high-impact vulnerabilities in the Dev and Beta channels can receive rewards of up to US$30,000.

Read Microsoft's official announcement to know more in detail.

Microsoft officially releases Microsoft Edge canary builds for macOS users
Microsoft Edge mobile browser now shows warnings against fake news using NewsGuard
Microsoft Edge Beta available on iOS with a breaking news alert, developer options and more


Cerebras Systems unveils Wafer Scale Engine, an AI chip with 1.2 trillion transistors that is 56 times larger than the largest Nvidia GPU

Savia Lobo
21 Aug 2019
5 min read
Cerebras Systems, a California-based AI startup, has unveiled the largest semiconductor chip ever built, named the 'Wafer Scale Engine' and built to quickly train deep learning models.

The Cerebras Wafer Scale Engine (WSE) measures 46,225 square millimeters and contains more than 1.2 trillion transistors. It is "more than 56X larger than the largest graphics processing unit, containing 3,000X more on-chip memory and more than 10,000X the memory bandwidth," the whitepaper reads.

Most chips available today are actually a collection of chips built on top of a 12-inch silicon wafer and processed in a chip factory in a batch. The WSE, however, is interconnected on a single wafer. "The interconnections are designed to keep it all functioning at high speeds so the trillion transistors all work together as one," VentureBeat reports.

Andrew Feldman, co-founder and CEO of Cerebras Systems, said, "Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state-of-the-art by solving decades-old technical challenges that limited chip size — such as cross-reticle connectivity, yield, power delivery, and packaging." He further adds, "Every architectural decision was made to optimize performance for AI work. The result is that the Cerebras WSE delivers, depending on workload, hundreds or thousands of times the performance of existing solutions at a tiny fraction of the power draw and space."

According to Wired, "Cerebras' chip covers more than 56 times the area of Nvidia's most powerful server GPU, claimed at launch in 2017 to be the most complex chip ever. Cerebras founder and CEO Andrew Feldman says the giant processor can do the work of a cluster of hundreds of GPUs, depending on the task at hand, while consuming much less energy and space."

In the whitepaper, Feldman explains that for maximum performance, the entire model should fit in the fastest memory, which is the memory closest to the computation cores. This is not the case in CPUs, TPUs, and GPUs, where main memory is not integrated with compute. Instead, the vast majority of memory is based off-chip, far away on separate DRAM chips or a stack of these chips in a high bandwidth memory (HBM) device. As a result, the main memory is excruciatingly slow.

The dawn of AI brought an added demand for processing power, which gave rise to the demand for GPUs. However, even if a machine is filled with dozens of Nvidia's graphics chips, "it can take weeks to 'train' a neural network, the process of tuning the code so that it finds a solution to a given problem," according to Fortune. Linley Gwennap, a chip observer who publishes the distinguished chip newsletter Microprocessor Report, told Fortune that bundling together multiple GPUs in a computer starts to show diminishing returns once more than eight of the chips are combined.

Feldman further adds, "The hard part is moving data." While training a neural network, thousands of operations happen in parallel, and chips must constantly share data as they crunch those parallel operations. However, computers with multiple chips may face performance issues while trying to pass data back and forth between the chips over the slower wires that link them on a circuit board. The solution, Feldman said, was to "take the biggest wafer you can find and cut the biggest chip out of it that you can."

Per Fortune, "the chip won't be sold on its own but will be packaged into a computer "appliance" that Cerebras has designed. One reason is the need for a complex system of water-cooling, a kind of irrigation network to counteract the extreme heat generated by a chip running at 15 kilowatts of power." "The wafers were produced in partnership with Taiwan Semiconductor Manufacturing, the world's largest chip manufacturer, but Cerebras has exclusive rights to the intellectual property that makes the process possible."

J.K. Wang, TSMC's senior vice president of operations, said, "We are very pleased with the result of our collaboration with Cerebras Systems in manufacturing the Cerebras Wafer Scale Engine, an industry milestone for wafer-scale development. TSMC's manufacturing excellence and rigorous attention to quality enable us to meet the stringent defect density requirements to support the unprecedented die size of Cerebras' innovative design."

The whitepaper explains that the 400,000 cores on the Cerebras WSE are connected via a Swarm communication fabric in a 2D mesh with 100 petabits per second of bandwidth. Swarm provides a hardware routing engine to each of the compute cores and connects them with short wires optimized for latency and bandwidth.

Feldman said that "a handful" of customers are trying the chip, including on drug design problems. He plans to sell complete servers built around the chip, rather than chips on their own, but declined to discuss price or availability.

Many find this announcement interesting given the number of transistors at work on the wafer engine. A few are skeptical whether the chip will live up to expectations. A user on Reddit commented, "I think this is fascinating. If things go well with node scaling and on-chip non-volatile memory, by mid 2030 we could be approaching human brain densities on a single 'chip' without even going 3D. It's incredible." A user on Hacker News writes, "In their whitepaper, they claim "with all model parameters in on-chip memory, all of the time," yet that entire 15 kW monster has only 18 GB of memory. Given the memory vs compute numbers that you see in Nvidia cards, this seems strangely low."

https://twitter.com/jwangARK/status/1163928272134168581
https://twitter.com/jwangARK/status/1163928655145426945

To know more about the Cerebras WSE chip in detail, read the complete whitepaper.

Why DeepMind AlphaGo Zero is a game changer for AI research
Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling?
Alibaba's chipmaker launches open source RISC-V based 'XuanTie 910 processor' for 5G, AI, IoT and self-driving applications

Introducing OpenDrop, an open-source implementation of Apple AirDrop written in Python

Vincy Davis
21 Aug 2019
3 min read
A group of German researchers recently presented the paper "A Billion Open Interfaces for Eve and Mallory: MitM, DoS, and Tracking Attacks on iOS and macOS Through Apple Wireless Direct Link" at the 28th USENIX Security Symposium (August 14–16) in the USA. The paper reveals security and privacy vulnerabilities in Apple's AirDrop file-sharing service, as well as denial-of-service (DoS) attacks that lead to privacy leaks or simultaneous crashing of all neighboring devices.

As part of the research, two of the researchers, Milan Stute and Alexander Heinrich, have developed OpenDrop, an open-source implementation of Apple AirDrop written in Python. OpenDrop is a FOSS implementation of AirDrop. It is experimental software and is the result of reverse-engineering efforts by the Open Wireless Link project (OWL). It is compatible with Apple AirDrop and is used for sharing files among Apple devices such as iOS and macOS devices, or on Linux systems running an open re-implementation of Apple Wireless Direct Link (AWDL).

The OWL project consists of researchers from the Secure Mobile Networking Lab at TU Darmstadt looking into Apple's wireless ecosystem. It aims to assess security and privacy and to enable cross-platform compatibility for next-generation wireless applications.

Currently, OpenDrop only supports Apple devices. It does not support all features of AirDrop and may be incompatible with future AirDrop versions. It uses the current versions of OpenSSL and libarchive and requires Python 3.6+. OpenDrop is licensed under the GNU General Public License v3.0. It is not affiliated with or endorsed by Apple Inc.

Limitations in OpenDrop

- Triggering macOS/iOS receivers via Bluetooth Low Energy: Since Apple devices start their AWDL interface and AirDrop server only after receiving a custom advertisement via Bluetooth LE, it is possible that Apple AirDrop receivers may not be discovered.
- Sender/receiver authentication and connection state: Currently, OpenDrop does not conduct peer authentication. It does not verify whether the TLS certificate is signed by Apple's root, and it automatically accepts any file that it receives.
- Sending multiple files: OpenDrop does not support sending multiple files in one share, a feature supported by Apple's AirDrop.

Users are excited to try the new OpenDrop implementation. A Redditor comments, "Yesssss! Will try this out soon on Ubuntu." Another comment reads, "This is neat. I did not realize that enough was known about AirDrop to reverse engineer it. Keep up the good work." Another user says, "Wow, I can't wait to try this out! I've been in the Apple ecosystem for years and AirDrop was the one thing I was really going to miss."

A few Android users wish to see such an implementation in an Android app. A user on Hacker News says, "Would be interesting to see an implementation of this in the form of an Android app, but it looks like that might require root access." A Redditor comments, "It'd be cool if they were able to port this over to android as well."

To know how to send and receive files using OpenDrop, check out its GitHub page (a hedged command sketch follows after this article).

Apple announces expanded security bug bounty program up to $1 million; plans to release iOS Security Research Device program in 2020
Google Project Zero reveals six "interactionless" bugs that can affect iOS via Apple's iMessage
'FaceTime Attention Correction' in iOS 13 Beta 3 uses ARKit to fake eye contact
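As a rough idea of day-to-day usage, the helper below simply shells out to the opendrop command-line tool from Python. The subcommand names and flags (find, send -r/-f, receive) are assumptions based on the project's README as recalled and should be verified against the OpenDrop GitHub page; running them also requires an AWDL-capable wireless interface.

```python
import subprocess

# The opendrop CLI ships with the Python package (pip install opendrop).
# Subcommands and flags below are assumptions; check the project's README.


def find_receivers() -> None:
    """Discover AirDrop receivers announcing themselves over AWDL."""
    subprocess.run(["opendrop", "find"], check=True)


def send_file(path: str, receiver_index: int = 0) -> None:
    """Send a file to a receiver previously listed by `opendrop find`."""
    subprocess.run(["opendrop", "send", "-r", str(receiver_index), "-f", path], check=True)


def receive() -> None:
    """Listen for incoming transfers (note: OpenDrop accepts any file automatically)."""
    subprocess.run(["opendrop", "receive"], check=True)


if __name__ == "__main__":
    find_receivers()
```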


After PostgreSQL, DigitalOcean now adds MySQL and Redis to its managed databases offering

Savia Lobo
20 Aug 2019
2 min read
Today, DigitalOcean, the cloud for developing modern apps, announced that it has introduced Managed Databases for MySQL and Redis, the popular open-source relational and in-memory databases, respectively. These offerings eliminate the complexity involved in managing, scaling, and securing database infrastructure, and instead allow developers to focus on building apps.

DigitalOcean's Managed Databases service was launched in February, with PostgreSQL as its first offering, and allows developers to create fully-managed database instances in the cloud. Managed Databases provides features such as worry-free setup and maintenance, free daily backups with point-in-time recovery, standby nodes with automated failovers, end-to-end security, and scalable performance. The new offerings build upon the existing support for PostgreSQL, providing worry-free maintenance for three of the most popular database engines.

DigitalOcean's Senior Vice President of Product, Shiven Ramji, said, "With the additions of MySQL and Redis, DigitalOcean now supports three of the most requested database offerings, making it easier for developers to build and run applications, rather than spending time on complex management." "The developer is not just the DNA of DigitalOcean, but the reason for much of the company's success. We must continue to build on this success and support developers with the services they need most on their journey towards simple app development," he further added.

DigitalOcean selected MySQL and Redis as the next offerings for its Managed Databases service due to overwhelming demand from its customer base and the developer community at large. The Managed Databases offerings for MySQL and Redis are available in the New York, Frankfurt, and San Francisco data center regions, with support for additional regions being added over the next few weeks (a brief connection sketch follows after this article).

To know more about this news in detail, head over to DigitalOcean's official website.

Digital Ocean announces 'Managed Databases for PostgreSQL'
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps
Limited Availability of DigitalOcean Kubernetes announced!
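Not part of DigitalOcean's announcement, but as a rough idea of the developer-facing side: a minimal sketch of connecting to a managed MySQL and a managed Redis instance from Python, assuming the pymysql and redis packages. The hostnames, ports, credentials, and CA-certificate path are placeholders; managed instances are typically reached over TLS, which is why the SSL options are set.

```python
import pymysql
import redis

# Placeholder connection details; real values come from the control panel
# of the managed database cluster.
MYSQL_HOST = "db-mysql-example.db.ondigitalocean.com"
REDIS_HOST = "db-redis-example.db.ondigitalocean.com"

# Managed MySQL: an ordinary MySQL client connection over TLS.
mysql_conn = pymysql.connect(
    host=MYSQL_HOST,
    port=25060,                                   # hypothetical port
    user="doadmin",
    password="change-me",
    database="defaultdb",
    ssl={"ca": "/path/to/ca-certificate.crt"},    # CA bundle downloaded from the provider
)
with mysql_conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print("MySQL version:", cur.fetchone()[0])
mysql_conn.close()

# Managed Redis: redis-py client with TLS enabled.
r = redis.Redis(
    host=REDIS_HOST,
    port=25061,                                   # hypothetical port
    password="change-me",
    ssl=True,
)
r.set("greeting", "hello from a managed Redis instance")
print(r.get("greeting"))
```

Beyond the connection details, the client code is the same as for a self-managed MySQL or Redis server, which is the point of the managed offering.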


Apple accidentally unpatches a fixed bug in iOS 12.4 that enables its jailbreaking

Bhagyashree R
20 Aug 2019
4 min read
The internet was all ablaze when several security researchers reported that Apple had accidentally reintroduced a bug in iOS 12.4 that was patched in iOS 12.3. Many iOS users are already exploiting this vulnerability to jailbreak their devices running iOS 12.4.

https://twitter.com/lorenzofb/status/1163480993707253761

iOS 12.4 jailbreaking

As the name suggests, jailbreaking allows you to bypass the rules and regulations imposed by Apple on the iOS, tvOS, and watchOS operating systems. After getting root access, you are able to install software that is unavailable in the Apple App Store, run unsigned code, read and write to the root filesystem, and more.

Many researchers shared steps and jailbreaking tools to help Apple users perform jailbreaking on their devices. A security researcher who goes by the name Pwn20wnd on Twitter released unc0ver v3.5.2, a jailbreaking tool, yesterday. With iOS 12.4 and unc0ver, you will be able to jailbreak A7-A11 devices. However, it does not currently fully support the A12 processor found in the iPhone XS, XS Max, and XR on iOS 12.1.3 and up.

https://twitter.com/Pwn20wnd/status/1163537425211150336

Here's a video by GeoSn0w showing how you can jailbreak your pre-A12 devices (iPhone 5S up to iPhone X) using unc0ver on iOS 12.4, which is currently the latest signed version from Apple:

https://www.youtube.com/watch?v=qSItdLEr8WI

Security implications of jailbreaking your iOS device

Though there haven't been any reports of malicious activity yet, this misstep does put millions of iOS users at risk, as jailbreaking your devices can make them less secure. Security researchers are warning users to be careful about what apps they download. A hacker with malicious intentions can target jailbroken iPhones to easily install malware.

Pwn20wnd told Motherboard that an attacker could "make perfect spyware" by exploiting this vulnerability. Giving an example, he said, "a malicious app could include an exploit for this bug that allows it to escape the usual iOS sandbox—a mechanism that prevents apps from reaching data of other apps or the system—and steal user data." He adds, "It is very likely that someone is already exploiting this bug for bad purposes."

Patrick Wardle, a principal security researcher at the Mac management firm Jamf, told Wired, "This is rather inexcusable, as it puts millions of iOS users at risk. And the irony, as others have already noted, is that since Apple doesn't allow us to downgrade to old versions, we're really kind of sitting ducks."

Apple and the security research community

Earlier this month, Apple sued Corellium, a Florida-based virtualization company, for copyright infringement. Corellium offers "perfect replicas", or virtual iOS builds, that can be used for security research and other purposes. Many security researchers felt that such tools could be really helpful in identifying mistakenly reintroduced vulnerabilities such as this one. "This shows that Apple continues to struggle with security—even on iOS which is clearly their priority. And this was uncovered by an independent security researcher, which illustrates the value such researchers add. Apple's more communicative approach with their new bug bounty program is good, but their attempts to shut down researcher tools like Corellium are bad," said Wardle in a Wired report.

This month, Apple did take a few steps towards making its restrictive OS more open to security researchers. It shared its plan to offer special iPhones to security researchers next year that will help them find security flaws and vulnerabilities in iOS. These devices will be given to researchers who report bugs through Apple's bug bounty program for iOS, which was launched in 2016. At this year's Black Hat conference, the company extended the program to cover macOS, Apple Watch, Apple TV, and more.

Read Motherboard's full story on the iOS 12.4 jailbreak to know more in detail.

Apple announces 'WebKit Tracking Prevention Policy' that considers web tracking as a security vulnerability
MacStadium announces 'Orka' (Orchestration with Kubernetes on Apple)
Microsoft contractors also listen to Skype and Cortana audio recordings, joining Amazon, Google and Apple in privacy violation scandals


New Bluetooth vulnerability, KNOB attack can manipulate the data transferred between two paired devices

Vincy Davis
20 Aug 2019
6 min read
Recently, a group of researchers exposed a severe vulnerability called Key Negotiation Of Bluetooth (KNOB) that allows an attacker to break Bluetooth Basic Rate/Enhanced Data Rate (BR/EDR) security. The vulnerability allows the attacker to intercept, monitor, or manipulate encrypted Bluetooth traffic between two paired devices without being detected.

The vulnerability was identified by researchers at the Center for IT-Security, Privacy and Accountability (CISPA), who shared their findings in the paper "The KNOB is Broken: Exploiting Low Entropy in the Encryption Key Negotiation Of Bluetooth BR/EDR". The paper was included in the proceedings of the 28th USENIX Security Symposium (August 14–16), USA. In November 2018, the researchers shared the details of the attack with the Bluetooth SIG, the CERT Coordination Center, and the International Consortium for the Advancement of Cybersecurity on the Internet (ICASI), an industry-led coordination body founded by Intel, Microsoft, Cisco, Juniper, and IBM. The vulnerability has been assigned CVE ID CVE-2019-9506.

Bluetooth BR/EDR is a popular wireless technology used for low-power, short-range communications and is maintained by the Bluetooth Special Interest Group (SIG).

How does the KNOB attack the victim's devices?

The researchers specify that such an attack would "allow a third party, without knowledge of any secret material (such as link and encryption keys), to make two (or more) victims agree on an encryption key—enabling the attacker to easily brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)." They add that the attack is "standard-compliant because all Bluetooth BR/EDR versions require to support encryption keys with entropy between 1 and 16 bytes and do not secure the key negotiation protocol. As a result, the attacker completely breaks Bluetooth BR/EDR security without being detected."

In some cases, the attack can also allow an attacker to reduce the length of an encryption key to a single octet. "In addition, since not all Bluetooth specifications mandate a minimum encryption key length, it is possible that some vendors may have developed Bluetooth products where the length of the encryption key used on a BR/EDR connection could be set by an attacking device down to a single octet," according to an advisory released by Bluetooth. This in turn would make it much easier for an attacker to brute force the encryption key used by the paired devices to communicate with each other.

The KNOB attack is effective, stealthy, and cheap

The KNOB attack is a serious threat to the security and privacy of all Bluetooth device users. It exploits the vulnerable encryption key negotiation protocol, putting all standard-compliant Bluetooth devices at risk irrespective of their Bluetooth version number and implementation details. The attack is highly effective and severe, as it can even target secure Bluetooth connections. The KNOB attack is considered stealthy because neither users nor Bluetooth application developers become aware of the attack, since the Bluetooth link-layer encryption is generally treated as a trusted service and the key negotiation protocol is transparent to the Bluetooth host (OS) and the Bluetooth applications used by the victims. The KNOB attack is also cheap, because the attacker does not need expensive resources or a strong attacker model to conduct the attack.
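To get a feel for why a key with only one octet of negotiated entropy is trivial to brute force, here is a small, purely illustrative Python sketch. It does not use Bluetooth's actual E0/AES-CCM cipher; a hash-based stand-in stream cipher is used so the search loop is self-contained, and the plaintext and nonce are made up.

```python
import hashlib
import os


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy stand-in for a stream cipher: NOT Bluetooth's real E0/AES-CCM."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, nonce, len(data))))


# A victim link whose negotiated key entropy was forced down to 1 octet:
# only 2**8 = 256 candidate keys exist, versus 2**128 for 16 octets.
real_key = os.urandom(1)
nonce = os.urandom(8)
known_plaintext = b"L2CAP keep-alive"          # attacker guesses or knows some traffic
ciphertext = encrypt(real_key, nonce, known_plaintext)

for candidate in range(256):
    key = bytes([candidate])
    if encrypt(key, nonce, ciphertext) == known_plaintext:
        print(f"key recovered after {candidate + 1} tries: {key.hex()}")
        break
```

With 16 octets of entropy the same loop would need on the order of 2^128 attempts, which is why the countermeasures described next insist on a minimum amount of negotiated entropy.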
The researchers say, "We were surprised to discover such fundamental issues in a widely used and 20 years old standard. We urge the Bluetooth SIG to update the specification of Bluetooth according to our findings. Until the specification is not fixed, we do not recommend to trust any link-layer encrypted Bluetooth BR/EDR link."

Proposed countermeasures to the KNOB attack

The researchers have proposed two classes of countermeasures to the KNOB attack. The first class, legacy-compliant countermeasures, requires a standard amount of negotiated entropy that cannot be easily brute-forced, e.g., 16 bytes of entropy. It also includes automated checks by the Bluetooth host to confirm the amount of negotiated entropy each time link-layer encryption is activated, enabling the host to abort the connection if the entropy does not meet the minimum requirement.

The second class, non-legacy-compliant countermeasures, modifies the encryption key negotiation protocol by securing it using the link key. The link key should be a shared, authenticated secret that is always available before starting the entropy negotiation protocol, and the protocol should provide message integrity and confidentiality.

Devices vulnerable to the KNOB attack

The researchers have conducted the attack on more than 17 unique Bluetooth chips from manufacturers including Broadcom, Qualcomm, Apple, Intel, and Chicony, and all the devices were found to be vulnerable to the KNOB attack.

On August 13th, Bluetooth released a security notice stating that the Bluetooth SIG has updated the Bluetooth Core Specification to recommend a minimum encryption key length of 7 octets for BR/EDR connections. However, the Bluetooth SIG says, "There is no evidence that the vulnerability has been exploited maliciously and the Bluetooth SIG is not aware of any devices implementing the attack having been developed, including by the researchers who identified the vulnerability."

The researchers also disclosed the KNOB attack to Bluetooth chip vendors in late 2018, following which some vendors have implemented workarounds for the vulnerability on their devices. These vendors include Apple (macOS, iOS, and watchOS), Google, Cisco (IP phones and Webex), and BlackBerry phones powered by Android, which have added fixes for this vulnerability in their latest updates. Last week, the CERT Coordination Center also released an advisory about this attack, and Microsoft released an update titled "CVE-2019-9506 | Encryption Key Negotiation of Bluetooth Vulnerability", proposing "a default 7-octet minimum key length to ensure that the key negotiation does not trivialize the encryption."

The researchers have also notified users that if their device has not been updated since late 2018, it is likely to be vulnerable. Many people were surprised to learn about the KNOB attack and are advising others to update their devices.

https://twitter.com/aiacobelli_sec/status/1162348463402684416
https://twitter.com/4jorge/status/1162983043969236992
https://twitter.com/lgrangeia/status/1162170365541605377
https://twitter.com/jurajsomorovsky/status/1162119755475537926

To know more details about the KNOB attack, check out the "The KNOB is Broken" paper.

Google to provide a free replacement key for its compromised Bluetooth Low Energy (BLE) Titan Security Keys
Amazon FreeRTOS adds a new 'Bluetooth low energy support' feature
Security flaws in Boeing 787 CIS/MS code can be misused by hackers, security researcher says at Black Hat 2019

Google services, ProtonMail and ProtonVPN suffered an outage yesterday

Amrata Joshi
20 Aug 2019
3 min read
Yesterday, Google reported that it was facing an issue with authentication to Google App Engine sites, Identity-Aware Proxy, the Google Cloud Console, and Google OAuth 2.0 endpoints. The incident began at 11:30 PT yesterday and ended at 1:27 PT.

Along with Google, ProtonMail and ProtonVPN also experienced an outage yesterday. Due to a server failure, a few accounts were temporarily unavailable. Proton's infrastructure team identified and fixed the issue, and the systems are now functioning normally. The official post reads, "We apologize for the inconvenience. No emails or data were lost, but some incoming emails may be delayed."

The Google Cloud Status Dashboard reads, "We are currently experiencing an issue with authentication to Google App Engine sites, the Google Cloud Console, Identity Aware Proxy, and Google OAuth 2.0 endpoints."

https://twitter.com/crash_signal/status/1163526809847324675

DownDetector confirmed that there was an outage in some parts of the world, and a few users on Twitter confirmed the news. Services like Google, Google Drive, and Gmail experienced issues. Some users weren't able to access the services, while others faced slow loading times.

Google's OAuth, which is used for Google Sign-In services, failed yesterday, so a lot of users were unable to log in. OAuth is used for signing in to Google services such as Gmail, Google Calendar, and Chromebooks. It is also used by other applications when users use Google to log into them. If any issue comes up in OAuth, users will not be able to log into any of the above-mentioned services.

At 12:18 PT, a few customers reported that they could successfully log in by using an incognito window in the Chrome browser. At 12:43 PT, the team reported that mitigation work was underway and error rates had fallen.

This outage is the fourth in recent months. Last month, Google suffered an outage when an issue was reported with Cloud Networking and Load Balancing within us-east1. Just two months ago, Google Calendar was down for almost three hours around the world. In the same month, Google Cloud suffered a major outage which took down many Google services including YouTube, G Suite, Gmail, etc.

Yesterday at 1:27 PT, the team updated, "The issue with authentication to Google App Engine sites, the Google Cloud Console, Identity Aware Proxy, and Google OAuth 2.0 endpoints has been resolved for all affected customers as of Monday, 2019-08-19 12:30 US/Pacific. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence."

Some users are sceptical about the company's recent outages. A user commented on Hacker News, "Google often has a outage or two around this time of the year when all the US schools come back and millions of students log in at the same time." Another user commented, "It sure feels like there have been quite a few big outages this summer (Google in particular). I wonder if they are getting sloppy or this is just bad luck?"

https://twitter.com/FXE4008/status/1163522257119043590
https://twitter.com/gabe565/status/1163519934095343616

Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70
OpenTracing and OpenCensus merge into OpenTelemetry project; Google introduces OpenCensus Web
Google suffers another Outage as Google Cloud servers in the us-east1 region are cut off


Announcing ‘async-std’ beta release, an async port of Rust's standard library

Bhagyashree R
20 Aug 2019
3 min read
Last week, Stjepan Glavina, a Rust programmer at Ferrous Systems, announced that the 'async-std' library has reached its beta phase. This library has the look and feel of Rust's standard library but replaces its components with their async counterparts.

The current state of asynchronous programming in Rust

Asynchronous code facilitates the execution of multiple tasks concurrently on the same OS thread. It can potentially make your application much faster while using fewer resources compared to a corresponding threaded implementation. Rust's asynchronous ecosystem is still in its early days: the standard library's Future trait was recently stabilized, and the async/await feature will soon be landing in a future version.

Why async-std was introduced

Rust's Future trait is often considered difficult to understand, not because it is complex but because it is something that people are not used to. Stating what makes futures confusing, the book accompanying the 'async-std' library says, "Futures have three concepts at their base that seem to be a constant source of confusion: deferred computation, asynchronicity and independence of execution strategy."

The 'async-std' library, together with its supporting libraries, aims to make asynchronous programming easier in Rust. It is based on Future and supports a set of traits from the futures library. It is also designed to support the new async programming model that is slated to be stabilized in Rust 1.39.

The async-std library serves as an interface to all important primitives, including filesystem operations, network operations, and concurrency basics like timers. In addition to the async variations of the I/O primitives found in std, it comes with async versions of concurrency primitives like Mutex and RwLock. It also ships with a task module that performs a single allocation per spawned task and awaits the result of the task without the need for an extra channel.

Speaking about the learning curve of async-std, Glavina said, "By mimicking standard library's well-understood APIs as closely as possible, we hope users will have an easy time learning how to use async-std and switching from thread-based blocking APIs to asynchronous ones. If you're familiar with Rust's standard library, very little should come as a surprise."

The library received a mixed reaction from the community. One user said, "In fact, Rust does have a great solution for non-blocking code: just use threads! Threads work great, they are very fast on Linux, and solutions such as goroutines are just implementations of threads in userland anyway... People tell me that Rust services scale up to thousands of requests per second on Linux by just using 1:1 threads."

A Rust developer on Reddit commented, "Looks good. I'm hoping we can soon see this project, the futures crate, async WGs crates and Tokio converge to build unified async foundations, reduce duplicated efforts (and avoid seeing dependencies explode when using several crates using async together). It's unclear to me why apparently similar crates are popping up, but I hope this is just temporary explorations of async that will merge together."

Check out the official announcement to know more about the async-std library. Also, check out its book: Async programming in Rust with async-std.

Rust 1.37.0 releases with support for profile-guided optimization, built-in cargo vendor, and more
Introducing Abscissa, a security-oriented Rust application framework by iqlusion
Introducing Ballista, a distributed compute platform based on Kubernetes and Rust


Introducing kdevops, a modern DevOps framework for Linux kernel development

Fatema Patrawala
20 Aug 2019
3 min read
Last Friday, Luis Chamberlain announced the release of kdevops, a DevOps framework for Linux kernel development. Chamberlain wrote in his email, "the goal behind this project is to provide a modern devops framework for Linux kernel development. It is not a test suite, it is designed to use any test suites, and more importantly, it allows us to let us easily set up test environments in a jiffie. It supports different virtualization environments, and different cloud environments, and supports different Operating Systems."

kdevops is a sample framework which lets you easily set up a testing environment for a number of different use cases.

How does kdevops work?

kdevops relies on Vagrant, Terraform, and Ansible to get you going with your virtualization, bare metal, or cloud provisioning environment. It relies heavily on public Ansible Galaxy roles and Terraform modules. This lets the kdevops team share code with the community and allows them to use the project as a demo framework which uses these Ansible roles and Terraform modules.

There are three parts to the long-term ideals for kdevops:

- Provisioning the required virtual hosts/cloud environment
- Provisioning your requirements
- Running whatever you want

Ansible is used to fetch all the required Ansible roles. Then Vagrant or Terraform can be used to provision hosts. Vagrant makes use of two Ansible roles to update ~/.ssh/config and to update the systems with basic development preference files, things like .git config or bashrc hacks; this last part is handled by the devconfig Ansible role. Since ~/.ssh/config is updated, you can then run further Ansible roles manually when using Vagrant. If Terraform is used for cloud environments, it updates ~/.ssh/config directly without Ansible; however, since access to hosts in cloud environments can vary in time, running all Ansible roles is expected to be done manually.

What you can do with kdevops

- Full Vagrant provisioning, including updating your ~/.ssh/config
- Terraform provisioning on different cloud providers
- Running Ansible to install dependencies on Debian
- Using Ansible to clone, compile, and boot into any random kernel git tree with a supplied config
- Updating ~/.ssh/config for Terraform, first tested with the OpenStack provider, with both generic and special minicloud support
- Other Terraform providers just require making use of the newly published Terraform module add-host-ssh-config

On Hacker News, the release has gained positive reviews, but the main concern for users is whether it has anything to do with DevOps, as it appears to be automated test environment provisioning. One of them comments, "This looks cool, but I'm not sure what it has to do with devops? It just seems to be automated test environment provisioning, am I missing something?"

On Reddit as well, Linux users are happy with this setup and find it really promising. One of the comments reads, "I have so much hacky scriptwork around kvm, have always been looking for a cleaner setup; this looks super promising. thank you."

To know more about this release, check out the official announcement as well as the GitHub page.

Why do IT teams need to transition from DevOps to DevSecOps?
Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]
Azure DevOps report: How a bug caused 'sqlite3 for Python' to go missing from Linux images
article-image-security-flaws-in-boeing-787-cis-ms-code-can-be-misused-by-hackers-security-researcher-says-at-black-hat-2019
Security flaws in Boeing 787 CIS/MS code can be misused by hackers, security researcher says at Black Hat 2019

Savia Lobo
19 Aug 2019
7 min read
At the Black Hat 2019 security conference in Las Vegas, Ruben Santamarta, a Principal Security Consultant at IOActive, said in his presentation that there are vulnerabilities in the Boeing 787 Dreamliner’s components which could be misused by hackers. The security flaws are in the code for a component known as the Crew Information Service/Maintenance System (CIS/MS). “The CIS/MS is responsible for applications like maintenance systems and the so-called electronic flight bag, a collection of navigation documents and manuals used by pilots,” according to the blog of Bruce Schneier, a public-interest technologist.

Boeing, however, strongly disagreed with Santamarta’s findings, saying that such an attack is not possible, and rejected Santamarta’s “claim of having discovered a potential path to pull it off.”

Santamarta says, “An attacker could potentially pivot from the in-flight entertainment system to the CIS/MS to send commands to far more sensitive components that control the plane's safety-critical systems, including its engine, brakes, and sensors.” According to Wired, “Santamarta himself admits that he doesn't have a full enough picture of the aircraft—or access to a $250 million jet—to confirm his claims.”

In a whitepaper Santamarta released earlier this month, he points out that in September 2018 a publicly accessible Boeing server was identified using a simple Google search, exposing multiple files. On further analysis, the exposed files turned out to contain parts of the firmware running on the Crew Information System/Maintenance System (CIS/MS) and the Onboard Networking System (ONS) for the Boeing 787 and 737 models respectively. These included documents, binaries, and configuration files. A Linux-based virtual machine used to allow engineers to access part of Boeing’s network was also available.

“The research presented in this paper is based on the analysis of information from public sources, collected documents, and the reverse engineering work performed on the 787’s CIS/MS firmware, which has been developed by Honeywell, based on a regular (nonavionics, non-certified, and non-ARINC-653-compliant) VxWorks 6.2 RTOS (x86) running on a Commercial Off The Shelf (COTS) CPU board (Pentium M),” the whitepaper states.

Santamarta identified three networks in the 787: the Open Data Network (ODN), the Isolated Data Network (IDN), and the Common Data Network (CDN). The ODN talks with the outside, handling communication with potentially dangerous devices. The IDN handles secure devices, but not necessarily ones that are connected to aircraft safety systems; a flight data recorder is an example. Santamarta described the CDN as the "backbone communication of the entire network," connecting to electronics that could impact the safety of the aircraft.

According to PCMag, Santamarta was clear that there are serious limitations to his research, since he did not have access to a 787 aircraft. Still, IOActive is confident in its findings. "We have been doing this for many years, we know how to do this kind of research," Santamarta said. "We're not saying it's doomsday, or that we can take a plane down. But we can say: This shouldn't happen."

Boeing, on the other hand, denies the claims put forward by Santamarta and says that they do not represent any real threat of a cyberattack. In a statement to Wired, Boeing writes, "IOActive's scenarios cannot affect any critical or essential airplane system and do not describe a way for remote attackers to access important 787 systems like the avionics system."
The statement further reads, "IOActive reviewed only one part of the 787 network using rudimentary tools, and had no access to the larger system or working environments. IOActive chose to ignore our verified results and limitations in its research, and instead made provocative statements as if they had access to and analyzed the working system. While we appreciate responsible engagement from independent cybersecurity researchers, we're disappointed in IOActive's irresponsible presentation."

"Although we do not provide details about our cybersecurity measures and protections for security reasons, Boeing is confident that its airplanes are safe from cyberattack," the company's statement concludes.

In a follow-up call with Wired, a Boeing spokesperson said that, in investigating IOActive's claims, Boeing had gone so far as to put an actual Boeing 787 in "flight mode" for testing, and then had its security engineers attempt to exploit the vulnerabilities that Santamarta had exposed. They found that they couldn't carry out a successful attack.

Further, according to Wired, Boeing also consulted with the Federal Aviation Administration and the Department of Homeland Security about Santamarta's attack hypothesis. The DHS didn't respond to a request for comment, but an FAA spokesperson wrote in a statement to Wired that it is "satisfied with the manufacturer’s assessment of the issue."

The Boeing fleet has been in the news for quite some time, ever since two fatal crashes of the now-grounded 737 MAX 8 aircraft killed a total of 346 people in October last year and in March this year.

Stefan Savage, a computer science professor at the University of California at San Diego, said, "The claim that one shouldn't worry about a vulnerability because other protections prevent it from being exploited has a very bad history in computer security." Savage is currently working with other academic researchers on an avionics cybersecurity testing platform. "Typically, where there's smoke there's fire," he further adds.

Per Wired, the Aviation Industry Sharing and Analysis Center shot back in a press release that Santamarta's findings were based on "technical errors." Santamarta countered that the A-ISAC was "killing the messenger," attempting to discredit him rather than address his research.

PCMag writes that Santamarta is skeptical. He conceded that it is possible Boeing added mitigations later on, but says there was no evidence of such protections in the code he analyzed.

A reader on Schneier’s blog post writes that Boeing should allow Santamarta’s team to conduct a test, for the betterment of passengers: “I really wish Boeing would just let them test against an actual 787 instead of immediately dismissing it. In the long run, it would work out way better for them, and even the short term PR would probably be a better look.”

Another reader commented on Schneier’s blog post about lax FAA standards: “Reading between the lines, this would infer that FAA/EASA certification requires no penetration testing of an aircrafts systems before approving a new type. That sounds like “straight to the scene of the accident” to me…”

A user who is responsible for maintaining 787s wrote on Hacker News, “Unlike the security researcher, I do have access to multiple 787s as I am one of many people responsible for maintaining them.
I'm obviously not going to attempt to exploit the firmware on an aircraft for obvious reasons, but the security researcher's notion that you can ‘pivot’ from the in flight entertainment to anything to do with aircraft operation is pure fantasy.” He further added, “These systems are entirely separate, including the electricity that controls the systems. This guy is preying on individuals' lack of knowledge about aircraft mechanics in order to promote himself.”

Another user on Hacker News shared, “I was flying about a year ago and was messing with the in flight entertainment in a 787. It was pretty easy to figure out how to get to a boot menu in the in flight entertainment. I was thinking ‘huh, this seems like maybe a way in’. Seeing how the in-flight displays navigational data it must be on the network as the flight systems. I'm sure there is some kind of segregation but it’s probably not ultimately secure.”

Savage tells Wired, "This is a reminder that planes, like cars, depend on increasingly complex networked computer systems. They don't get to escape the vulnerabilities that come with this."

To know more about this news, read the whitepaper by the IOActive team. You can also head over to Wired’s detailed analysis.

- “Deep learning is not an optimum solution for every problem faced”: An interview with Valentino Zocca
- 4 common challenges in Web Scraping and how to handle them
- Microsoft workers protest the lethal use of Hololens2 in the $480m deal with US military

article-image-git-2-23-released-with-two-new-commands-git-switch-and-git-restore-a-new-tutorial-and-much-more
Git 2.23 released with two new commands ‘git switch’ and ‘git restore’, a new tutorial, and much more!

Amrata Joshi
19 Aug 2019
4 min read
Last week, the team behind Git released Git 2.23, which comes with a pair of experimental commands, backward-compatibility improvements, and much more. This release received contributions from over 77 contributors, 26 of whom are new.

What’s new in Git 2.23?

Experimental commands

This release comes with a new pair of experimental commands, git switch and git restore, which provide a better interface for what git checkout does today. “Two new commands "git switch" and "git restore" are introduced to split "checking out a branch to work on advancing its history" and "checking out paths out of the index and/or a tree-ish to work on advancing the current history" out of the single "git checkout" command,” the official mail thread reads.

git checkout can be used to change branches with git checkout <branch>. If the user doesn’t want to switch branches, git checkout can be used to change individual files, too. The new commands aim to separate the responsibilities of git checkout into two narrower categories: operations that change branches and operations that change files. (A short sketch of the new commands appears at the end of this article.)

Backward compatibility

The "--base" option of "format-patch" is now compatible with "git patch-id --stable".

Git fast-export/import pair

The "git fast-export/import" pair can now handle commits with log messages in an encoding other than UTF-8.

git clone --recurse-submodules

"git clone --recurse-submodules" has learned to set up the submodules to ignore the commit object names recorded in the superproject gitlink.

git diff/grep

A pattern used by "git diff" and "git grep" for extracting funcnames and word boundaries for Rust has been added.

git fetch and git pull

The commands "git fetch" and "git pull" now report when a fetch results in non-fast-forward updates, letting the user notice the unusual situation.

git status

With this release, the extra blank lines in "git status" output have been reduced.

Developer support

This release comes with developer support for emulating unsatisfied prerequisites in tests, to ensure that the remainder of the tests still succeeds when tests with prerequisites are skipped.

A new tutorial for git-core developers

This release comes with a new tutorial that targets aspiring git-core developers. It demonstrates the end-to-end workflow of creating a change to the Git tree, sending it for review, and making changes based on review comments.

Bug fixes in Git 2.23

- In earlier versions, "git worktree add" used to fail when another worktree connected to the same repository was corrupt. This has been corrected in this release.
- An issue with file descriptor handling has been fixed.
- This release comes with updated parameter validation.
- The code for parsing scaled numbers out of configuration files has been made more robust and easier to follow.

A few users seem happy about the new changes; a user commented on Hacker News, “It's nice to hear that there appears to be progress being made in making git's tooling nicer and more consistent. Git's model itself is pretty simple, but the command line tools for working with it aren't and I feel that this fuels most of the "Git is hard" complaints.”

A few others are still skeptical about the new commands; another user commented, “On the one hand I'm happy on the new "switch" and "restore" commands.
On the other hand, I wonder if they truly add any value other than the semantic distinction of functions otherwise present in checkout.”

To know more about this news in detail, read the official blog post on GitHub.

- GitHub has blocked an Iranian software developer’s account
- GitHub services experienced a 41-minute disruption yesterday
- iPhone can be hacked via a legit-looking malicious lightning USB cable worth $200, DefCon 27 demo shows
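To make the split concrete, here is a minimal sketch of the new experimental commands next to the git checkout invocations they correspond to. The branch and file names are placeholders, and since git switch and git restore are marked experimental in 2.23, their behaviour may still change.

```bash
# Switching branches: previously done with "git checkout"
git switch maint             # switch to an existing branch (was: git checkout maint)
git switch -c topic/fix      # create and switch to a new branch (was: git checkout -b topic/fix)

# Restoring files: previously also done with "git checkout"
git restore README.md              # discard unstaged changes in the working tree
git restore --staged README.md     # unstage the file, keeping the working tree changes
git restore --source=HEAD~2 foo.c  # take a file's content from an older commit
```

git checkout itself keeps working as before; the new commands simply separate its two roles.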