
Tech News


Rails 6 releases with Action Mailbox, Parallel Testing, Action Text, and more!

Vincy Davis
19 Aug 2019
4 min read
After a long wait, the stable version of Rails 6 is finally available. Five days ago, David Heinemeier Hansson, the creator of Ruby on Rails, released the final version, which brings major new features such as Action Mailbox, Action Text, Parallel Testing, and Action Cable Testing. Rails 6 also carries many minor changes, fixes, and upgrades in Railties, Action Pack, Action View, and more. This version requires Ruby 2.5.0 or newer. Hansson says, "While we took a little while longer with the final version than expected, the time was spent vetting that Rails 6 is solid." He also notes that GitHub, Shopify, Basecamp, and other companies and applications have already been running the pre-release version of Rails 6 in production.

https://twitter.com/dhh/status/1162426045405921282

Read More: The first release candidate of Rails 6.0.0 is now out!

Major new features in Rails 6

Action Mailbox

This new framework routes incoming emails to controller-like mailboxes for processing in Rails. Action Mailbox ships with ingresses for Amazon SES, Mailgun, Mandrill, Postmark, and SendGrid, and inbound mail can also be handled via the built-in Exim, Postfix, and Qmail ingresses. Inbound emails are transformed into InboundEmail records using Active Record and can be routed asynchronously, using Active Job, to one or several dedicated mailboxes. To know more about the basics of Action Mailbox, head over to the Action Mailbox basics guide.

Action Text

Action Text includes the Trix editor, which handles formatting, links, quotes, lists, embedded images, and galleries. The rich text content it produces is saved in a RichText model associated with an existing Active Record model in the application. To get an overview of Action Text, read the Action Text overview page.

Parallel Testing

Parallel Testing allows users to parallelize their test suite, reducing the time required to run the entire suite. Forking processes is the default method used for parallel testing. To learn how to do parallel testing with processes, check out the parallel testing guide.

Action Cable Testing

Action Cable testing tools allow users to test their Action Cable functionality at the connection, channel, and broadcast levels. For information on connection and channel test cases, head over to the Testing Action Cable guide.

Other changes in Rails 6

Railties

Railties handles the bootstrapping process in a Rails application and also provides the core of the Rails generators. Multiple database support has been added for the rails db:migrate:status command, and a new guard has been introduced to protect against DNS rebinding attacks.

Action Pack

The Action Pack framework is used for handling and responding to web requests, and provides mechanisms for routing, controllers, and more. Rails 6 allows the use of #rescue_from for handling parameter parsing errors, and a new middleware, ActionDispatch::HostAuthorization, has been added to guard against DNS rebinding attacks.

Developers are excited to use the new features introduced in Rails 6, especially parallel testing. A user on Hacker News comments, "Wow, Multiple DB and Parallel Testing is super productive. I hope framework maintainers of other language community should also get inspired by these features." Another comment reads, "The multiple database support is really exciting. Anything that makes it easier to push database reads to replicas is huge." Another user says, "Congrats to the Rails team! I can't praise Rails enough. Such a huge boost in productivity for prototyping or full production app. I use it for both work or side project. I can't imagine a world without it. Long live Rails!"

Twitterati are also praising the Rails 6 release.

https://twitter.com/tenderlove/status/1162566272271339521
https://twitter.com/AviShastry/status/1162755780229107713
https://twitter.com/excid3/status/1162426797046284288

To know about the minor changes, fixes, and upgrades in Rails 6, check out the Ruby on Rails 6.0 Release Notes, and head over to the Ruby on Rails blog for more details about the release.

Read next:
GitLab considers moving to a single Rails codebase by combining the two existing repositories
Rails 6 will be shipping source maps by default in production
Ruby on Rails 6.0 Beta 1 brings new frameworks, multiple DBs, and parallel testing


Cloudflare plans to go public; files S-1 with the SEC

Savia Lobo
19 Aug 2019
3 min read
Cloudflare announced its plans to go public and filed an S-1 with the SEC (Securities and Exchange Commission) last week. The move comes after Cloudflare received a wave of negative publicity over the use of its network by the 8chan online forum, which is known to have inspired the mass shootings in El Paso, Texas, and Christchurch, New Zealand.

"We are aware of some potential customers that have indicated their decision to not subscribe to our products was impacted, at least in part, by the actions of certain of our paying and free customers," the filing says.

After the El Paso mass shooting, Cloudflare at first continued to defend hosting 8chan, calling it their "moral obligation" to provide 8chan their services. However, after an intense public and media backlash, Cloudflare reversed its stance and announced that it would completely stop providing support for 8chan. To this, Jim Watkins, the owner of 8chan, said in a video statement, "It is clearly a political move to remove 8chan from CloudFlare; it has dispersed a peacefully assembled group of people."

Cloudflare said it avoids cutting off websites for objectionable content as doing so can also "harm our brand and reputation"; however, it banned the neo-Nazi website Daily Stormer in 2017 after the site claimed that Cloudflare was protecting it and secretly agreed with its neo-Nazi articles. "We received significant adverse feedback for these decisions from those concerned about our ability to pass judgment on our customers and the users of our platform, or to censor them by limiting their access to our products, and we are aware of potential customers who decided not to subscribe to our products because of this," says the filing.

Cloudflare plans to list its shares on the New York Stock Exchange under the ticker symbol "NET," the filing mentions. It has also raised just over $400 million from investors including Franklin Templeton Investments, Fidelity Investments, Microsoft, and Baidu, Forbes states.

"Activities of our paying and free customers or the content of their websites or other Internet properties, as well as our response to those activities, could cause us to experience significant adverse political, business, and reputational consequences with customers, employees, suppliers, government entities, and others," the company said in the filing.

According to Forbes, "The filing reveals that Prince owns 16.6% of the company, which (after factoring in a private company discount) is worth about $270 million based on the 2015 valuation. Zatlyn (co-founder) owns 5.6% of the company, worth about $90 million. Holloway (co-founder) owns a 3.2% stake. Cloudflare has not yet indicated the price range for selling its shares."

Earlier this year, Fastly, another cloud provider, also went public. "After pricing its IPO at $16 per share, Fastly's equity skated higher in early trading. Today Fastly is worth $23.19 per share, up about 45 percent," Crunchbase reported in July.

To know more about this news in detail, head over to the S-1 filing report.

Read next:
Cloudflare RCA: Major outage was a lot more than "a regular expression went bad"
Cloudflare suffers 2nd major internet outage in a week. This time due to globally deploying a rogue regex rule
After refusing to sign the Christchurch Call to fight online extremism, Trump admin launches tool to defend "free speech" on social media platforms


Japanese Anime studio Khara is switching its primary 3D CG tools to Blender

Sugandha Lahoti
19 Aug 2019
4 min read
Popular Japanese animation studio Khara announced on Friday that it will be moving to the open source 3D software Blender as its primary 3D CG tool. Khara is a motion picture planning and production company currently working on "EVANGELION:3.0+1.0", a film to be released in June 2020. Initially, it will use Blender only partially for 'EVANGELION:3.0+1.0', but it will make the full switch once that project is finished. Khara is also supporting the Blender Foundation by joining the Development Fund as a corporate member. Last month, Epic Games granted Blender $1.2 million in cash; following Epic Games, Ubisoft also joined the Blender Development Fund and adopted Blender as its main DCC tool.

Why Khara opted for Blender

Khara had been using Autodesk's 3ds Max as its primary 3D CG tool so far. However, its projects grew bigger than what was practical with 3ds Max, which is also quite expensive: according to Autodesk's website, the annual fee for a single user is $2,396, and Khara also has to reach out to small and medium-sized businesses for its projects. Another complaint was that Autodesk took time to release improvements to its proprietary software, something that happens at a much faster rate in an open source environment. The studio had also considered Maya as an alternative but dropped the idea, as it would have duplicated work and resources. Finally, it switched to Blender, as it is open source and free.

The team was also intrigued by the new Blender 2.8 release, which provided a 3D creation tool that works like "paper and pencil". Blender's Grease Pencil feature enables you to combine 2D and 3D worlds together right in the viewport. It comes with a new multi-frame edit mode, with which you can change and edit several frames at the same time, and a Build modifier to animate drawings, similar to the Build modifier for 3D objects.

"I feel the latest Blender 2.8 is intentionally 'filling the gap' with 3ds Max to make those users feel at home when coming to Blender. I think the learning curve should be no problem," said Mr. Takumi Shigyo of the Project Studio Q Production Department. Khara founded Project Studio Q, Inc. in 2017, a company focusing mainly on movie production and the training of anime artists.

Providing more information on the studio's use of Blender, Hiroyasu Kobayashi, General Manager of the Digital Department and Director of the Board at Khara, said in the announcement, "Preliminary testing has been done already. We are now at the stage to create some cuts actually with Blender as 'on live testing'. However, not all the cuts can be done by Blender yet. But we think we can move out from our current stressful situation if we place Blender into our work flows. It has enough potential 'to replace existing cuts'."

While Blender will be used for the bulk of the work, Khara does have a backup plan for anything Blender struggles with. Kobayashi added, "There are currently some areas where Blender cannot take care of our needs, but we can solve it with the combination with Unity. Unity is usually enough to cover 3ds Max and Maya as well. Unity can be a bridge among environments." Khara is also speaking with its partner companies about adopting Blender together.

Khara's transition was well received.

https://twitter.com/docky/status/1162279830785646593
https://twitter.com/eoinoneillPDX/status/1154161101895950337
https://twitter.com/BesuBaru/status/1154015669110710273

Read next:
Blender 2.80 released with a new UI interface, Eevee real-time renderer, grease pencil, and more
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects


Mapbox introduces MARTINI, a client-side terrain mesh generation code

Vincy Davis
16 Aug 2019
3 min read
Two days ago, Vladimir Agafonkin, an engineer at Mapbox, introduced a client-side terrain mesh generation library called MARTINI, short for 'Mapbox's Awesome Right-Triangulated Irregular Networks Improved'. It uses a Right-Triangulated Irregular Network (RTIN) mesh, which consists of big right-angle triangles, to render smooth and detailed terrain in 3D.

RTIN has two advantages:

- The algorithm generates a hierarchy of approximations of varying precision, enabling quick retrieval.
- It is very fast, making it feasible for client-side meshing from raster terrain tiles.

In a blog post, Agafonkin demonstrates a drag-and-zoom terrain visualization that lets users adjust mesh precision in real time. The visualization also displays the number of triangles generated for a given error rate.

Image Source: Observable

How the RTIN hierarchy works

Mapbox's MARTINI uses the RTIN algorithm on grids of size (2^k + 1) x (2^k + 1); "that's why we add 1-pixel borders on the right and bottom", says Agafonkin. The algorithm starts by building an error map, a grid of error values that guides the subsequent mesh retrieval: the height error value stored for each point tells the mesher whether a given triangle has to be split or not. The algorithm first calculates the error approximation of the smallest triangles and then propagates it to the parent triangles, repeating the process until the errors of the top two triangles are calculated and a full error map is produced. This approach results in zero T-junctions, and thus no gaps in the mesh.

Image Source: Observable

For retrieving a mesh, the RTIN hierarchy starts with two big triangles, which are then subdivided to approximate the terrain according to the error map. Agafonkin says, "This is essentially a depth-first search in an implicit binary tree of triangles, and takes O(numTriangles) steps, resulting in nearly instant mesh generation."

Users have appreciated the MARTINI demo and animation presented by Agafonkin in the blog post. A user on Hacker News says, "This is a wonderful presentation of results and code, well done! Very nice to read." Another user comments, "Fantastic. Love the demo and the animation." Another comment on Hacker News reads, "This was a pleasure to play with on an iPad. Excellent work."

For more details on the code and algorithm used in Mapbox's MARTINI, check out Agafonkin's blog post.

Read next:
Introducing Qwant Maps: an open source and privacy-preserving maps, with exclusive control over geolocated data
Top 7 libraries for geospatial analysis
Using R to implement Kriging – A Spatial Interpolation technique for Geostatistics data
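To make the subdivision step described above concrete, here is a deliberately simplified Python sketch of RTIN-style refinement. It is not MARTINI's implementation: the real library precomputes the bottom-up error map (which is what guarantees zero T-junctions), whereas this naive top-down version tests the midpoint error while splitting and can therefore leave cracks. The grid and error threshold are arbitrary test values.

```python
import numpy as np

def refine(z, ax, ay, bx, by, cx, cy, max_error, out):
    """Split the right triangle with hypotenuse a-b and right-angle corner c
    until the height error at the hypotenuse midpoint is small enough."""
    if (ax + bx) % 2 == 0 and (ay + by) % 2 == 0:  # midpoint is a grid point
        mx, my = (ax + bx) // 2, (ay + by) // 2
        # error = true elevation at the midpoint vs. linear interpolation
        interpolated = (z[ay, ax] + z[by, bx]) / 2.0
        if abs(z[my, mx] - interpolated) > max_error:
            # recurse into the two child right triangles sharing the midpoint
            refine(z, cx, cy, ax, ay, mx, my, max_error, out)
            refine(z, bx, by, cx, cy, mx, my, max_error, out)
            return
    out.append(((ax, ay), (bx, by), (cx, cy)))

def build_mesh(z, max_error=1.0):
    """Approximate a (2^k + 1) x (2^k + 1) heightmap, seeding two triangles."""
    n = z.shape[0] - 1
    out = []
    refine(z, 0, 0, n, n, 0, n, max_error, out)  # lower-left half of the square
    refine(z, n, n, 0, 0, n, 0, max_error, out)  # upper-right half
    return out

if __name__ == "__main__":
    grid = np.random.rand(257, 257).astype(np.float32) * 100.0  # 2^8 + 1 per side
    print(len(build_mesh(grid, max_error=25.0)), "triangles")
```

Raising max_error yields fewer, coarser triangles; lowering it approaches a full 2 * n^2 triangulation, which is the trade-off the interactive demo lets you explore.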


Ionic React RC is now out!

Bhagyashree R
16 Aug 2019
3 min read
Earlier this year, the Ionic team released the beta version of Ionic React. After incorporating developer feedback and contributions from the community, the team launched the Ionic React release candidate on Wednesday.

The team says this first major release of Ionic React was made possible by Ionic 4.0. Previously, Ionic was built using Angular components, but Ionic 4.0 was rewritten to use Web Components. This change turned Ionic into an app development framework that can be used alongside any front-end framework, not just Angular.

Why Ionic React is needed

Explaining the motivation behind Ionic React, Ely Lucas, Software Engineer & Dev Advocate at Ionic, wrote in the announcement, "Ionic React RC marks the first major release of our vision to bring Ionic development to more developers on other frameworks."

Though it is possible to import the core Ionic components directly into React projects, this method does not provide a good developer experience, and when working with web components in React, you need to write some boilerplate code to properly communicate with them. Ionic React essentially works as a "thin wrapper" around the core components of Ionic, exporting them as native React components and handling the boilerplate code for you. You still need to implement a few features in the native framework, such as page lifetime management and lifecycle methods, which you can do by extending the react-router package with @ionic/react-router. (A small sketch of the wrapper idea follows at the end of this article.)

Considering this is a release candidate, the team is not expecting many major changes. Sharing the team's next steps, Lucas said, "We will be looking closely at any issues that pop up during the RC phase and working on some final code stabilization and minor bug fixes...We also plan on creating some more content and guides in the docs to help with some best practices we've found when working with Ionic React."

The team is now seeking developer feedback ahead of the final release. If you encounter any issues, you can report them on the GitHub repo and tag the issue with "package react". To get further updates on Ionic React, you can also chat with the team at React Rally, held August 22-23 in Salt Lake City, UT, a community conference that brings together developers of all backgrounds using React.js, React Native, and related tools.

Many developers are excited about this update. Here's what a few Twitter users are saying:

https://twitter.com/clandestoapp/status/1161893695194636289
https://twitter.com/miniallaghi/status/1161964880913719297

Check out the official announcement by the Ionic team to know more in detail.

Read next:
The Ionic team announces the release of Ionic React Beta
Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
Ionic v4 RC released with improved performance, UI Library distribution and more
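Here is the promised sketch of the "thin wrapper" idea: a minimal, hedged Ionic React component. The component names are real @ionic/react exports, but the app itself is invented and assumes the RC packages are installed in a React project.

```tsx
import React, { useState } from 'react';
// Core Ionic web components re-exported as native React components
import { IonApp, IonContent, IonButton, IonText } from '@ionic/react';

const App: React.FC = () => {
  const [taps, setTaps] = useState(0);

  return (
    <IonApp>
      <IonContent>
        {/* No web-component boilerplate: props and events work the React way */}
        <IonButton onClick={() => setTaps(taps + 1)}>Tap me</IonButton>
        <IonText>
          <p>Tapped {taps} times</p>
        </IonText>
      </IonContent>
    </IonApp>
  );
};

export default App;
```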


VMware's plan to acquire Pivotal Software reflects a rise in Pivotal's shares

Amrata Joshi
16 Aug 2019
3 min read
Pivotal Software Inc., a software and services company, and VMware Inc., its sister company under Dell Technologies, are negotiating a deal for VMware to acquire Pivotal, according to a recent regulatory filing from Dell Technologies Inc. VMware, Pivotal, and Dell Technologies jointly filed the document on Wednesday, informing government regulators about the potential transaction. The filing states that representatives of the companies are "proceeding to negotiate definitive agreements with respect to a transaction to acquire all of the outstanding shares of Class A common stock of Pivotal for cash at a per share price equal to $15.00." According to Reuters, "The VMware Special Committee has requested that Dell exchange Pivotal Class B stock for VMWare Class A stock."

The deal would value Pivotal at around $4 billion, and with the news doing the rounds, Pivotal's shares rose 63% to $13.60 on Wednesday. Meanwhile, VMware's shares fell 3% to $148.25 in extended trading, and Dell's shares fell 1.65% to $47.80 in after-market trading. Dell is the controlling stockholder of both Pivotal and VMware, as both are majority owned by Dell. On Thursday, Pivotal's shares jumped as much as 72% premarket.

However, the companies are still in talks and nothing has been finalized as of now. As per Business Wire's post, "A definitive agreement between Pivotal and VMware has not been executed. There can be no assurances that a definitive agreement will be executed between the parties."

Pivotal launched its initial public offering last year in April. Initially, the company performed well, but its shares dropped 28% in June after disappointing first-quarter earnings results. The talks of this deal seem to have already benefited the company, as evidenced by the rise in its shares.

While VMware has always had a decent track record with respect to go-to-market execution, with Pivotal, VMware will now be able to experiment with PaaS (Platform-as-a-Service). Holger Mueller, a Constellation Research Inc. analyst, said in a statement to CNBC, "With VMware buying Pivotal, it will be able to diversify into platform-as-a-service." Mueller further added, "It also means Pivotal won't become an embarrassing episode in the Dell saga."

Wikibon analyst Dave Vellante said, "VMware is the software 'mother ship' for Dell with a proven track record of go-to-market execution, engineering excellence and the ability to consistently create shareholder value." Vellante further added, "Pivotal, on the other hand, has consistently struggled to achieve industry leadership and relevance."

Some users think this was a strategic move by Dell for its own benefit. A user commented on Hacker News, "Dell owns Pivotal. Dell owns VMware. Dell has debt. Pivotal goes public at $15/share. Pivotal stock tanks. VMWare buys Pivotal at $15/share. Dell has less debt. The difference between Pivotal's internal share price, prior to going public, and their IPO price is pure profit for Dell. This was entirely a tactical move to reduce debt for Dell. Thanks for footing the bill shareholders!"

To know more about this news, check out the post by Reuters.

Read next:
AI chipmaking startup 'Graphcore' raises $200m from BMW, Microsoft, Bosch, Dell
Dell reveals details on its recent security breach
Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes

Rust 1.37.0 releases with support for profile-guided optimization, built-in cargo vendor, and more

Bhagyashree R
16 Aug 2019
4 min read
After releasing version 1.36.0 last month, the team behind Rust announced the release of Rust 1.37.0 yesterday. Among the highlights of this version are support for referring to enum variants via type aliases, a built-in cargo vendor command, unnamed const items, profile-guided optimization, and more.

Key updates in Rust 1.37.0

Referring to enum variants through type aliases

Starting with this release, you can refer to enum variants through type aliases in expression and pattern contexts. Since Self behaves like a type alias in implementations, you can also refer to enum variants with Self::Variant.

Built-in Cargo support for vendored dependencies

Until now, the cargo vendor command was available as a separate crate. Starting with Rust 1.37.0, it is integrated directly into Cargo, the Rust package manager and crate host. This subcommand fetches all the crates.io and git dependencies for a project into the vendor/ directory and shows the configuration necessary to use the vendored code during builds.

Using unnamed const items for macros

Rust 1.37.0 allows you to create unnamed const items: instead of giving an explicit name to a constant, you can name it '_'. This makes it easier to create ergonomic and reusable declarative and procedural macros for static analysis purposes.

Support for profile-guided optimization

Rust's compiler, rustc, now supports profile-guided optimization (PGO) through the -C profile-generate and -C profile-use flags. PGO allows the compiler to optimize code based on feedback from real workloads. It optimizes a program in two steps:

- The program is first built with instrumentation inserted by the compiler, by passing the -C profile-generate flag to rustc. The instrumented program is then run on sample data, and the profiling data is written to a file.
- The program is built again, this time feeding the collected profiling data into rustc with the -C profile-use flag. This build uses the collected data to let the compiler make better decisions about code placement, inlining, and other optimizations.

Choosing a default binary in Cargo projects

The cargo run command runs a binary or example of the local package, letting you quickly test CLI applications. When multiple binaries are present in the same package, developers have to explicitly state the name of the binary they want to run with the --bin flag, which makes cargo run less ergonomic, especially when one binary is called more often than the others. To solve this, Rust 1.37.0 introduces a new key in Cargo.toml called default-run. Declaring it in the [package] section makes cargo run default to the chosen binary when the --bin flag is not passed.

Developers have already started testing out this new release. A developer who used profile-guided optimization shared his experience on Hacker News: "The effect is very dependent on program structure and actual code running, but for a suitable application it's reasonable to expect anything from 5-15%, and sometimes much more (see e.g. Firefox reporting 18% here)."

Others noted that async/await is now expected in Rust 1.39: "Seems like async/await is going to slip into Rust 1.39 instead." Another user said, "Congrats! Like many I was looking forward to async/await in this release but I'm happy they've taken some extra time to work through any existing issues before releasing it."

Check out the official announcement by the Rust team to know more in detail.

Read next:
Rust 1.36.0 releases with a stabilized 'Future' trait, NLL for Rust 2015, and more
Introducing Vector, a high-performance data router, written in Rust
"Why was Rust chosen for Libra?", US Congressman questions Facebook on Libra security design choices
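As a quick, hedged illustration of the first feature above, the snippet below refers to enum variants through a type alias; the alias name and match arms are invented for illustration, not taken from the release notes.

```rust
// New in Rust 1.37.0: enum variants can be reached through a type alias
// in both expression and pattern contexts.
type ByteOption = Option<u8>;

fn describe(value: ByteOption) -> &'static str {
    match value {
        ByteOption::Some(_) => "got a byte", // variant named via the alias
        ByteOption::None => "empty",
    }
}

fn main() {
    let x: ByteOption = ByteOption::Some(3);
    println!("{}", describe(x));
}
```

For the default-run change, the announcement describes declaring something like default-run = "my-binary" under the [package] section of Cargo.toml (the binary name here is hypothetical).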


Stripe's ‘Negative Emissions Commitment’ to pay for removal and sequestration of CO2 to mitigate global warming

Vincy Davis
16 Aug 2019
4 min read
Yesterday, Stripe, the online payments platform provider, announced a notable initiative: a 'Negative Emissions Commitment'. Under the commitment, Stripe will pay for the removal of carbon dioxide directly from the atmosphere and its sequestration in secure, long-term storage, to mitigate or delay global warming.

https://twitter.com/patrickc/status/1162120064302059520

Note: Carbon sequestration is the process of capturing and storing atmospheric carbon dioxide, or other forms of carbon, for the long term in secure storage.

Besides Stripe, growing startups such as Carbon Engineering, Climeworks, and Global Thermostat are actively working in this space. Stripe seeks to purchase negative carbon dioxide (CO2) emissions at any price per tCO2 (tonne of CO2). The official blog post adds, "And so we commit to spending at least twice as much on sequestration as we do on offsets, with a floor of at least $1M per year."

This initiative comes after the IPCC, in its recent summary report, stated that scenarios in which warming stays below 2°C involve "substantial net negative emissions by 2100, on average around 2 gigatons of CO2 per year."

Image Source: IPCC

Stripe plans to work with experts to select carbon capture solutions based on cost-effectiveness, as sequestration is expected to cost more than $100 per tCO2, compared to the $8 per tCO2 the company pays for offsets.

What are Stripe's current efforts in the technology landscape?

There are three kinds of ongoing projects the company expects to fund. First is land management, which aims to improve natural carbon sinks through forestation initiatives, soil management reform, and agricultural techniques; scientists and entrepreneurs can try to increase the duration of CO2 storage by hacking plant roots so that more CO2 can be stored for an extended period of time. The second is enhanced weathering, in which CO2 as a gas or liquid reacts with silicate minerals and rocks rich in calcium and magnesium to form carbonate minerals; the collected carbon is then sequestered in the mineral for centuries. The third is direct-air capture, an industrial installation that uses energy to force air into contact with a CO2 sorbent; the CO2 is later separated from the sorbent and transported to long-term storage sites.

Stripe believes humanity will need more such techniques in the coming decades to achieve the collective goal of removing carbon emissions from the atmosphere. The company expects that if a scalable and verifiable negative emissions technology becomes available at $100 per tonne of captured CO2 (tCO2), it could turn into a trillion-dollar industry by the end of the century. Projects of this kind would not only reduce emissions but could also help counter anthropogenic climate change, and Stripe has announced that it is open to funding such projects over the coming decade.

People all over the world are admiring Stripe's commitment.

https://twitter.com/onwards2020/status/1162141027139932160
https://twitter.com/noeltoolan/status/1162160662086242307
https://twitter.com/RayWalshe/status/1162122719678390274

Many also expect other companies to follow Stripe in this initiative.

https://twitter.com/Ruh_abhi/status/1162181740158279685
https://twitter.com/hukl/status/1162285694385041408

For more details about Stripe's Negative Emissions Commitment, head over to Stripe's official blog.

Read next:
Stripe's API degradation RCA found unforeseen interaction of database bugs and a config change led to cascading failure across critical services
Stripe's API suffered two consecutive outages yesterday causing elevated error rates and response times
Stripe updates its product stack to prepare European businesses for SCA-compliance


React DevTools 4.0 releases with support for Hooks, experimental Suspense API, and more!

Bhagyashree R
16 Aug 2019
3 min read
Yesterday, the React team announced the release of React DevTools 4.0 for Chrome, Firefox, and Edge. In addition to better performance and a smoother navigation experience, this release fully supports React Hooks and provides a way to test the experimental Suspense API.

Key updates in React DevTools 4.0

Better performance by reducing the "bridge traffic"

The React DevTools extension is made up of two parts: the frontend, which includes the components tree, the Profiler, and everything else visible to you, and the backend, which is invisible and notifies the frontend by sending messages through a "bridge". In previous versions of React DevTools, the traffic caused by this notification process was one of the biggest performance bottlenecks. Starting with React DevTools 4.0, the team has reduced this bridge traffic by minimizing the number of messages the backend sends to render the Components tree; the frontend can request more information whenever required.

Automatic logging of React component stack warnings

React DevTools 4.0 provides an option to automatically append component stack information to console warnings during development, enabling developers to identify where exactly in the component tree a failure happened. To disable this feature, navigate to the General settings panel and uncheck "Append component stacks to warnings and errors."

Source: React

Components tree updates

- Improved Hooks support: Hooks allow you to use state and other React features without writing a class. In React DevTools 4.0, hooks have the same level of support as props and state.
- Component filters: Navigating through large component trees can be tiresome. Now you can quickly find the component you are looking for by applying component filters.
- "Rendered by" list and an owners tree: React DevTools 4.0 has a new "rendered by" list in the right-hand pane that helps you quickly step through the list of owners. There is also an owners tree, the inverse of the "rendered by" list, which lists everything rendered by a particular component.
- Suspense toggle: The experimental Suspense API allows you to "suspend" the rendering of a component until a condition is met. In <Suspense> components you can specify the loading states shown while components below them are waiting to render. This DevTools release comes with a toggle that lets you test these loading states.

Source: React

Profiler changes

- Import and export profiler data: Profiler data can now be exported and shared among developers for better collaboration.

Source: React

- Reload and profile: The profiler collects performance information each time the application renders, helping you identify and fix performance bottlenecks. Previous versions of DevTools only allowed profiling a "profiling-capable version of React," so there was no way to profile the initial mount of an application. This is now supported with a "reload and profile" action.
- Component renders list: The profiler in React DevTools 4.0 displays a list of each time a selected component was rendered during a profiling session. You can use this list to quickly jump between commits when analyzing a component's performance.

You can check out the release notes of React DevTools 4.0 to see what other features have landed in this release.
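For context on the Suspense toggle, here is a minimal sketch of the kind of component whose loading state the new toggle exercises; the lazily imported ./Profile module is a placeholder path, not part of the DevTools release.

```tsx
import React, { Suspense, lazy } from 'react';

// Lazily loaded component -- './Profile' is a hypothetical module with a
// default-exported React component.
const Profile = lazy(() => import('./Profile'));

// While Profile is loading, the fallback renders; the DevTools Suspense
// toggle lets you force this state to check how the fallback looks.
export const App: React.FC = () => (
  <Suspense fallback={<p>Loading profile…</p>}>
    <Profile />
  </Suspense>
);
```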
Read next:
React 16.9 releases with an asynchronous testing utility, programmatic Profiler, and more
React Native 0.60 releases with accessibility improvements, AndroidX support, and more
React Native VS Xamarin: Which is the better cross-platform mobile development framework?


Nmap 7.80 releases with a new Npcap Windows packet capture driver and other 80+ improvements!

Vincy Davis
14 Aug 2019
3 min read
On August 10, Gordon Lyon, the creator of Nmap, announced the release of Nmap 7.80 at the recently concluded DefCon 27 in Las Vegas. This is a major release, containing more than 80 enhancements, and the first stable release in over a year. Its highlight is the newly built Npcap Windows packet capture library, which uses modern APIs to deliver better performance and features, and is more secure.

What's new in Nmap 7.80?

- Npcap Windows packet capture driver: Npcap is based on the discontinued WinPcap library, but with improved speed, portability, and efficiency. It exposes the libpcap API, giving Windows applications the same portable packet capture interface supported on Linux and macOS. Npcap can optionally be restricted so that only administrators may sniff packets, providing increased security.
- 11 new NSE scripts: Scripts from 8 authors have been added, taking the total number of NSE scripts to 598. The new scripts focus on HID devices, Jenkins servers, HTTP servers, Logical Units (LU) of TN3270E servers, and more.
- pcap_live_open replaced with pcap_create: pcap_create solves packet loss problems on Linux and brings performance improvements on other platforms.
- rand.lua library: The new rand.lua library uses the best sources of randomness available on the system to generate random strings.
- oops.lua library: This new library helps scripts report errors easily, including plenty of debugging detail.
- TLS support added: TLS support has been added to rdp-enum-encryption, enabling protocol version negotiation against servers that require TLS.
- New service probe and match lines: New service probe and match lines have been added for adb, the Android Debug Bridge, which can allow remote code execution.
- Two new common error strings: These have been added to improve MySQL detection by the http-sql-injection script.
- New script-arg http.host: This allows users to force a particular value for the Host header in all HTTP requests.

Users love the new improvements in Nmap 7.80.

https://twitter.com/ExtremePaperC/status/1160388567098515456
https://twitter.com/Jiab77/status/1160555015041363968
https://twitter.com/h4knet/status/1161367177708093442

For the full list of changes in Nmap 7.80, head over to the Nmap announcement.

Read next:
Amazon adds UDP load balancing support for Network Load Balancer
Brute forcing HTTP applications and web applications using Nmap [Tutorial]
Discovering network hosts with 'TCP SYN' and 'TCP ACK' ping scans in Nmap [Tutorial]

NVIDIA’s latest breakthroughs in conversational AI: Trains BERT in under an hour, launches Project Megatron to train transformer based models at scale

Bhagyashree R
14 Aug 2019
4 min read
Researchers have been constantly putting effort into improving conversational AI to make it better understand human language and its nuances. One such advancement in the conversational AI field is the introduction of Transformer-based models such as OpenAI's GPT-2 and Google's BERT. In a quest to make the training and deployment of these vastly large language models efficient, NVIDIA researchers recently conducted a study, the details of which they shared yesterday.

https://twitter.com/ctnzr/status/1161277599793860618

NVIDIA's Tensor Core GPUs took less than an hour to train the BERT model

BERT, short for Bidirectional Encoder Representations from Transformers, was introduced by a team of researchers at Google Language AI. It is capable of performing a wide variety of state-of-the-art NLP tasks, including Q&A, sentiment analysis, and sentence classification. What makes BERT different from other language models is that it applies the bidirectional training of the Transformer, an attention mechanism that learns contextual relations between words in a text, to language modeling. It is designed to pre-train deep bidirectional representations from unlabeled text by using both left and right context in all layers.

NVIDIA researchers chose BERT-Large, a version of BERT with 340 million parameters, for the study. NVIDIA's DGX SuperPOD was able to train the model in a record-breaking 53 minutes. The SuperPOD was made up of 92 DGX-2H nodes and 1,472 GPUs running PyTorch with Automatic Mixed Precision. The following table shows the time taken to train BERT-Large for various numbers of GPUs:

Source: NVIDIA

Looking at these results, the team concluded, "The combination of GPUs with plenty of computing power and high-bandwidth access to lots of DRAM, and fast interconnect technologies, makes the NVIDIA data center platform optimal for dramatically accelerating complex networks like BERT." In a conversation with reporters and analysts, Bryan Catanzaro, Vice President of Applied Deep Learning Research at NVIDIA, said, "Without this kind of technology, it can take weeks to train one of these large language models." NVIDIA further said that it has achieved the fastest BERT inference time of 2.2 milliseconds by running it on a Tesla T4 GPU with TensorRT 5.1 optimized for datacenter inference.

NVIDIA launches Project Megatron to research training transformer language models at scale

Earlier this year, OpenAI introduced the 1.5 billion parameter GPT-2 language model, which generates nearly coherent and meaningful texts. The NVIDIA Research team has built a scaled-up version of this model, called GPT-2 8B. As its name suggests, it is made up of 8.3 billion parameters, which makes it 24x the size of BERT-Large. To train this huge model, the team used PyTorch with 8-way model parallelism and 64-way data parallelism on 512 GPUs. The experiment is part of a bigger effort called Project Megatron, under which the team is trying to create a platform that facilitates the training of such "enormous billion-plus Transformer-based networks." Here's a graph showing the compute performance and scaling efficiency achieved:

Source: NVIDIA

With the increase in the number of parameters, there was also a noticeable improvement in accuracy compared to smaller models. The model achieved a wikitext perplexity of 17.41, surpassing previous results on the wikitext test dataset set by Transformer-XL. However, it does start to overfit after about six epochs of training, which can be mitigated by moving to even larger-scale problems and datasets.

NVIDIA has open-sourced the code for reproducing the single-node training performance in its BERT GitHub repository. The NLP code for Project Megatron is also openly available in the Megatron Language Model GitHub repository.

To know more in detail, check out the official announcement by NVIDIA. Also, check out the following YouTube video:

https://www.youtube.com/watch?v=Wxi_fbQxCM0

Read next:
Baidu open sources ERNIE 2.0, a continual pre-training NLP model that outperforms BERT and XLNet on 16 NLP tasks
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
ACLU (American Civil Liberties Union) file a complaint against the border control officers for violating the constitutional rights of an Apple employee
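As a rough illustration of the Automatic Mixed Precision technique mentioned above, here is a sketch using PyTorch's built-in torch.cuda.amp API (available in PyTorch 1.6 and later; NVIDIA's 2019 runs used its separate Apex library). A CUDA device is assumed, and the tiny model, random data, and hyperparameters are placeholders, not BERT's training code.

```python
import torch
from torch.cuda.amp import GradScaler, autocast

# Stand-ins for the real model and data -- NOT the BERT training setup.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = GradScaler()  # loss scaling keeps fp16 gradients from underflowing

for step in range(10):
    inputs = torch.randn(8, 1024, device="cuda")
    targets = torch.randn(8, 1024, device="cuda")
    optimizer.zero_grad()
    with autocast():  # forward pass runs in mixed fp16/fp32 precision
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()  # backward pass on the scaled loss
    scaler.step(optimizer)         # unscales gradients, then optimizer step
    scaler.update()                # adapts the loss scale for the next step
```

The point of the scaler is that fp16 halves memory traffic and exploits Tensor Cores, while dynamic loss scaling preserves small gradient values that fp16 would otherwise round to zero.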


iPhone can be hacked via a legit-looking malicious lightning USB cable worth $200, DefCon 27 demo shows

Savia Lobo
14 Aug 2019
5 min read
When our phones are running low on battery, we do not think twice before plugging in a USB cable to charge them, and when transferring files between devices, we treat the simple wire as benign. Recently, in a demonstration at DefCon 27, a hacker who goes by the online handle MG showed an iPhone USB lightning cable infected with "a small Wi-Fi-enabled implant, which, when plugged into a computer, lets a nearby hacker run commands as if they were sitting in front of the screen", TechCrunch reports.

Per Motherboard, MG made these cables by hand, painstakingly modifying real Apple cables to include the implant. MG told Motherboard, "It looks like a legitimate cable and works just like one. Not even your computer will notice a difference. Until I, as an attacker, wirelessly take control of the cable."

These dummy cables, dubbed "O.MG cables", are visually indistinguishable from the originals. They also work like the real thing, allowing users to charge their devices over USB or transfer files from their iOS devices. The hacker not only showcased the infected cable at DefCon but has also put similar cables on sale for $200. "There has been a lot of interest and support behind this project," MG says on his blog, "and lots of requests on how to acquire a cable. That's a great feeling!"

Once the cable is plugged into a device, it enables an attacker to mount a wireless hijack of the computer. "Once plugged in, an attacker can remotely control the affected computer to send realistic-looking phishing pages to a victim's screen, or remotely lock a computer screen to collect the user's password when they log back in," TechCrunch writes. "In the test with Motherboard, MG connected his phone to a wifi hotspot emanating out of the malicious cable in order to start messing with the target Mac itself. MG typed in the IP address of the fake cable on his own phone's browser and was presented with a list of options, such as opening a terminal on my Mac. From here, a hacker can run all sorts of tools on the victim's computer", Motherboard's Joseph Cox writes.

Asked how close an attacker needs to be to the plugged-in device, MG said, "I'm currently seeing up to 300 feet with a smartphone when connecting directly." He added, "A hacker could use a stronger antenna to reach further if necessary. But the cable can be configured to act as a client to a nearby wireless network. And if that wireless network has an internet connection, the distance basically becomes unlimited."

MG now wants to get the cables produced as a legitimate security tool, and says the company Hak5 is on board with making that happen. These cables would be made from scratch rather than from modified Apple ones, according to Motherboard. MG said, "Apple cables are simply the most difficult to do this to, so if I can successfully implant one of these, then I can usually do it to other cables."

How can one avoid getting tricked by dummy USB lightning cables?

- Do not judge a random cable lying around by its external packaging.
- Avoid accepting unsolicited chargers, USB dongles, or similar components as gifts from people you do not trust, and avoid borrowing chargers from people you do not know.
- When purchasing any tech component, buy from legitimate online sources or physical locations where the packaging has not been tampered with.
- In public places, always keep your devices, cables, USB dongles, and other components nearby and secure.

A user on Hacker News is frustrated that major operating system vendors have not implemented basic precautions against this class of attack: "It's a severe discredit to the major operating system vendors that plugging in a USB stick can still compromise a system." The user further adds, "If a USB device identifies itself as a keyboard, the system shouldn't accept its keystrokes until either that keyboard has typed the user's login password, or the user uses a different input device to authorize it. If it identifies itself as a storage device, the filesystem driver should be hardened. If it identifies itself as an obscure 90s printer with a buggy driver written in C, it should prompt the user to confirm the device type before it loads the driver."

Another user on Hacker News wondered how one could even tell whether cables sold online are legitimate: "Even more frightening, people selling them as seemingly legitimate cables on Amazon? People will pay you and you get a new botnet. How many could you sell before it's discovered? How can I, as a consumer, even tell? Amazon will even allow you to sell your malcable under the Apple brand."

To know more about this news in detail, head over to Motherboard's complete report.

Read next:
Google Project Zero reveals six "interactionless" bugs that can affect iOS via Apple's iMessage
Google's Project Zero reveals several serious zero-day vulnerabilities in a fully remote attack surface of the iPhone
Apple Card, iPhone's new payment system, is now available for select users


Poetry, a Python dependency management and packaging tool, releases v1 beta 1 with URL dependency

Vincy Davis
14 Aug 2019
4 min read
Last week, Poetry, a dependency management and packaging tool for Python, released the first beta of version 1. Before getting into the details of this release, let's briefly review Python's dependency management problem and where Pipenv and Poetry fit in.

There's no doubt that Python is loved by many developers. It is rated among the top programming languages, with benefits like an extensive support library, less complex syntax, high productivity, and excellent integration features. Though it has been rated one of the fastest growing programming languages of 2019, Python has some problems which, if rectified, could make it even more powerful and accessible. Poor dependency management is one such issue.

Dependency management is about managing all the libraries required to make an application work, and it becomes essential when working on a complex project or across multiple environments. An ideal dependency management tool helps track and update libraries easily and quickly, and resolves package dependency conflicts. Python's built-in workflow, by contrast, requires users to create a virtual environment by hand to isolate dependencies, to pin version numbers manually, and offers no way to parallelize dependency installation, among other shortcomings.

To combat these issues, Python now has two maturing dependency management tools, Pipenv and Poetry, each of which simplifies creating a virtual environment and sorting dependencies. The PyPA-endorsed Pipenv automatically creates and manages a virtualenv for user projects and adds or removes packages from the Pipfile as a user installs or uninstalls them. Its main features include automatically generating a Pipfile and a Pipfile.lock if one doesn't exist, creating a virtualenv, and adding packages to a Pipfile when installed. Poetry, on the other hand, uses a single pyproject.toml file to manage all dependencies: users declare the libraries their project depends on, and Poetry installs and updates them automatically. It also allows projects to be published directly to PyPI, makes it easy to track the state of dependencies, and more.

New features in Poetry v1 beta 1

The major highlight of this release is new support for URL dependencies. This is a significant feature for Python users: a URL dependency can be added to a project via the add command or by modifying the pyproject.toml file directly.

Other features in Poetry v1 beta 1:

- Support for publishing to PyPI using API tokens
- Licenses can be identified by their full name
- Settings can be specified with environment variables
- Settings no longer need to be prefixed by "settings." when using the config command

Users are, in general, quite happy with Poetry, as can be seen in the reactions below from Hacker News. One comment reads, "I like how transparent poetry is about what's happening when you run it, and how well presented that information is. I've come to loathe pipenv's progress bar. Running it in verbose mode isn't much better. I can't be too mad at pipenv, but all in all poetry is a better experience." Another user says, "Poetry is very good. I think projects should use it. I hope the rest of the ecosystem can catch up quickly. Tox and pip and pex need full support for PEP 517/518." Another user comments, "When you run poetry, it activates the virtualenv before it runs whatever you wanted. So `poetry add` (it's version of pip install) doesn't require you to have the virtualenv active. It will activate it, run the install, and update your dependency specifications in pyproject.toml. You can also do `poetry run` and it will activate the virtualenv before it runs whatever shell command comes after. Or you can do `poetry shell` to run a shell inside the virtualenv. I like the seamless integration, personally."

Read next:
Łukasz Langa at PyLondinium19: "If Python stays synonymous with CPython for too long, we'll be in big trouble"
PyTorch 1.2 is here with a new TorchScript API, expanded ONNX export, and more
NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption
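To illustrate the new URL dependency support, a pyproject.toml fragment might look like the sketch below; the package name and URL are placeholders invented for this example, not taken from the release notes.

```toml
[tool.poetry.dependencies]
python = "^3.7"
# a regular versioned dependency resolved from PyPI
requests = "^2.22"
# the new URL dependency form: install a package straight from an archive URL
my-package = { url = "https://example.com/my-package-0.1.0.tar.gz" }
```

The same dependency could presumably also be added from the command line with `poetry add https://example.com/my-package-0.1.0.tar.gz`, per the add-command route described above.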

PyPy will continue to support Python 2.7, even as major Python projects migrate to Python 3

Fatema Patrawala
14 Aug 2019
4 min read
The switch from Python 2 to Python 3 has been rocky, and all signs point to Python 3 pulling firmly into the lead. Python 3 is broadly compatible with the major libraries, and there's an encouraging rate of adoption by cloud providers for application support too, as Python 2 reaches its end of life in 2020. But there are still plenty of efforts to keep Python 2 alive in one form or another. The default implementation of Python is open source, so it can easily be forked and maintained separately. Currently, all major open source Python packages support both Python 3.x and Python 2.7.

Last year, the Python team reminded users that Python 2.7 maintenance will stop in 2020. Originally there was no official date, but in March 2018 the team announced it would be January 1, 2020.

https://twitter.com/ThePSF/status/1160839590967685121

This means that the maintainers of Python 2 will stop supporting it, even with security patches. Many institutions and codebases have not yet ported their code from Python 2 to Python 3. Python volunteers have created resources to help publicize and educate, but there's still more work to be done. To that end, the Python Software Foundation has contracted Changeset Consulting to help communicate about the sunsetting of Python 2. The high-level goal of Changeset's involvement is to help users through the end of the transition, to assist with communication so volunteers are not overwhelmed, and to help update public-facing assets so core developers are not overwhelmed. This will also require all major Python projects to migrate to Python 3 and above.

However, PyPy confirmed last week that it does not plan to deprecate Python 2.7 support as long as PyPy exists, according to an official Twitter statement.

https://twitter.com/pypyproject/status/1160209907079176192

The PyPy runtime is popular among developers thanks to its built-in JIT, which provides major speed boosts to Python code, and PyPy has long favored Python 2 over Python 3. This favoritism isn't solely because the first versions of PyPy were Python 2 implementations and Python 3 only recently entered the picture. It is also due to a key part of PyPy's ecosystem: RPython, the dynamic language implementation framework underneath PyPy, has its foundation in Python 2. This is not likely to change, according to PyPy's official FAQ, which states that "the Python 2 version of PyPy will be around 'forever', i.e. as long as PyPy itself is around." According to PyPy's official announcement, it will support Python 3 while continuing to support Python 2.7.

Last year, when the announcement rolled out that Python 2 would officially end in 2020, users on Hacker News discussed how the most popular packages are compatible with Python 3 while millions of people in the industry still work with Python 2.7. One comment reads, "most popular packages are now compatible with Python 3 I often see this but I think it's a perception from the Internet/web world. I work for CGI, all (I'm not kidding) our software (we have many) are 2.7. You will never see them used "on the web/Internet/forum/network" place but the day-to-day job of millions of people in the industry is 2.7. And we are a tiny focused industry. So I'm sure there are many other industries like us which are 2.7 that you never heard of. That's why "most popular" mean nothing once you take how Python is used as a whole. We don't use any of this web/Internet/network "popular" packages. I'm not saying Python shouldn't move on. I'm just trying to argue against this "most popular packages" while millions of us, even if you don't know it, use none of those."

Read next:
GNU Radio 3.8.0.0 releases with new dependencies, Python 2 and 3 compatibility, and much more!
NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption
Python 3.8 new features: the walrus operator, positional-only parameters, and much more
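For codebases straddling the two versions, like the ones described in the comment above, the usual bridge is code that runs unchanged on both interpreters. A minimal hedged sketch of that style (the helper names are made up for illustration):

```python
# Runs on both Python 2.7 and Python 3.x.
from __future__ import absolute_import, division, print_function

import sys

PY2 = sys.version_info[0] == 2

if PY2:
    text_type = unicode  # noqa: F821 -- `unicode` only exists on Python 2
else:
    text_type = str

def halve(value):
    # __future__ division gives 2.5 on both interpreters, not 2 on Python 2
    return value / 2

if __name__ == "__main__":
    print("half of 5 is", halve(5))
    print("is the string text?", isinstance(u"caf\xe9", text_type))
```

Libraries such as six package up these shims, which is one reason large 2.7 codebases can defer porting for so long.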


MacStadium announces ‘Orka’ (Orchestration with Kubernetes on Apple)

Savia Lobo
13 Aug 2019
2 min read
Today, MacStadium, an enterprise-class cloud provider for Apple Mac infrastructure, announced Orka (Orchestration with Kubernetes on Apple). Orka is a new virtualization layer for Mac build infrastructure based on Docker and Kubernetes technology. It offers a solution for orchestrating macOS in a cloud environment using Kubernetes on genuine Apple Mac hardware: with Orka, users can apply native Kubernetes commands to macOS virtual machines (VMs) running on that hardware.

"While Kubernetes and Docker are not new to full-stack developers, a solution like this has not existed in the Apple ecosystem before," MacStadium wrote in an email statement to us.

"The reality is that most enterprises need to develop applications for Apple platforms, but these enterprises prefer to use nimble, software-defined build environments," said Greg McGraw, Chief Executive Officer, MacStadium. "With Orka, MacStadium's flagship orchestration platform, developers and DevOps teams now have access to a software-defined Mac cloud experience that treats infrastructure-as-code, similar to what they are accustomed to using everywhere else."

Developers creating apps for Mac or iOS must build on genuine Apple hardware, but until now, popular orchestration and container technologies like Kubernetes and Docker have been unable to leverage Mac operating systems. With Orka, Apple OS development teams can use container technology in a Mac cloud, the same way they build on other cloud platforms like AWS, Azure, or GCP.

As part of its initial release, Orka will ship with a plugin for Jenkins, the open source automation tool that enables developers to build, test, and deploy their software using continuous integration techniques. MacStadium will also present a session at DevOps World | Jenkins World in San Francisco (August 12-15), demonstrating how Orka integrates with Jenkins build pipelines and how it leverages the capability and power of Docker/Kubernetes in a Mac development environment.

To know more about Orka in detail, visit MacStadium's official website.

Read next:
CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed
Introducing Ballista, a distributed compute platform based on Kubernetes and Rust
Implementing Horizontal Pod Autoscaling in Kubernetes [Tutorial]