Tech News

3709 Articles

Blender celebrates its 25th birthday!

Natasha Mathur
03 Jan 2019
3 min read
Blender, the free and open source 3D computer graphics software, celebrated its 25th birthday yesterday. The Blender team marked the occasion by publishing a post that takes a trip down memory lane, recounting Blender's journey from 1993 to 2018.

Blender's Journey (1994 - 2018)

The Blender team states that during Christmas 1993, Ton Roosendaal, the creator of Blender, started working on the software, making use of the designs that he had made during his 1993 course.

Original design doc from 1993

The first Blender version came to life on January 2nd, 1994, with the subdivision-based windowing system working. This date has since been marked as Blender's official birthday, and Roosendaal even has an old backup of this version on his SGI Indigo2 workstation. Blender was first released publicly online on January 1st, 1998 as SGI freeware, with the Linux and Windows versions following shortly after.

In May 2002, Roosendaal started the non-profit Blender Foundation. Its first goal was to find a way to continue the development and promotion of Blender as a community-based open source project.

https://www.youtube.com/watch?time_continue=164&v=8A-LldprfiE

Blender's 25th birthday

With the rise of the internet in the early 2000s, the source code for Blender became available under the GNU General Public License (GPL) on October 13th, 2002. This day marked Blender's transformation into the free and open source 3D creation software that we use to date.

The Blender team started "Project Orange" in 2005, which resulted in the world's first and widely recognized Open Movie, "Elephants Dream". The success of the open movie project led Roosendaal to establish the "Blender Institute" in the summer of 2007. The Blender Institute has since become the permanent office and studio where the team organizes the Blender Foundation's goals and facilitates the Open Projects related to 3D movies, games, and visual effects.

In early 2008, Roosendaal started the Blender 2.5 project, a major overhaul of the UI, tool definitions, data access system, event handling, and animation system. The main goal of the project was to bring the core of Blender up to contemporary interface standards and input methods. The first alpha version of Blender 2.5 was presented at SIGGRAPH 2009, with the final 2.5 release published in 2011.

In 2012, the Blender team put its focus on further developing and exploring a visual effects creation pipeline, with features such as motion tracking, camera solving, masking, grading, and a good color pipeline.

Coming back to 2018, it was just last week that the Blender team released Blender 2.8 with a revamped user interface, a high-end viewport, and other great features.

Mozilla partners with Khronos Group to bring glTF format to Blender
Building VR objects in React V2 2.0: Getting started with polygons in Blender
Blender 2.5: Detailed Render of the Earth from Space

OpenSSH 8.0 released; addresses SCP vulnerability and new SSH additions

Fatema Patrawala
19 Apr 2019
2 min read
Theo de Raadt and the OpenBSD developers who maintain OpenSSH today released the latest version, OpenSSH 8.0. OpenSSH 8.0 contains an important security fix for a weakness in the scp(1) tool used for copying files to and from remote systems.

Until now, when copying files from a remote system to a local directory, scp did not verify the filenames of what was being sent from the server to the client. This allowed a hostile server to create or clobber unexpected local files with attacker-controlled data, regardless of which file(s) were actually requested from the remote server. OpenSSH 8.0 adds client-side checking that the filenames sent from the server match the command-line request.

While this client-side checking has been added to scp, the OpenSSH developers recommend against using scp at all, in favor of sftp, rsync, or other alternatives. "The scp protocol is outdated, inflexible and not readily fixed. We recommend the use of more modern protocols like sftp and rsync for file transfer instead," mention the OpenSSH developers.

New to OpenSSH 8.0, meanwhile, is support for ECDSA keys in PKCS#11 tokens and an experimental quantum-computing-resistant key exchange method. The default RSA key size generated by ssh-keygen has also been increased to 3072 bits, and more SSH utilities now support a "-v" flag for greater verbosity. The release also comes with a wide range of fixes throughout, including a number of portability fixes. More details on OpenSSH 8.0 are available on OpenSSH.com.

OpenSSH, now a part of the Windows Server 2019
OpenSSH 7.8 released!
OpenSSH 7.9 released
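To make the scp fix concrete, here is a minimal Python sketch of the idea behind the new client-side check: only accept a server-sent filename if it matches the pattern the client actually asked for. This is an illustration of the concept only, not OpenSSH's actual C implementation.

```python
import fnmatch

def accept_filename(requested_pattern: str, server_sent_name: str) -> bool:
    # Idea behind OpenSSH 8.0's scp fix: the client only accepts files
    # whose names match the command-line request, so a hostile server
    # cannot smuggle in extra files (e.g. overwrite ~/.bashrc).
    return fnmatch.fnmatch(server_sent_name, requested_pattern)

print(accept_filename("*.log", "system.log"))  # True  -> accept
print(accept_filename("*.log", ".bashrc"))     # False -> reject unexpected file
```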

Google’s global coding competitions, Code Jam, HashCode and Kick Start come together on a single website

Amrata Joshi
26 Nov 2018
3 min read
Last week, Google brought its popular coding competitions Code Jam, HashCode, and Kick Start together on a single website. The brand-new UI improves navigation and makes the site more user-friendly, and user profiles now show notifications for a better user experience.

Code Jam

Google's global coding competition Code Jam gives programmers around the world an opportunity to solve tricky algorithmic puzzles. The first round consists of three sub-rounds. The top 1,500 participants from each sub-round get a chance to compete for a spot in round 2. From those, the top 1,000 contestants move on to the third round, and the top 25 contestants from the third round compete in the finals. The winner gets the championship title and $15,000.

HashCode

HashCode is a team-based programming challenge organized by Google for students and professionals around the world. After registering for the contest, participants get access to the Judge System, an online platform where one can form a team, join a hub, practice, and compete during the rounds. Teams choose their own members and programming language, and the HashCode team assigns an engineering problem via a YouTube livestream. Teams can compete either from a local hub or from any other location of their choice. The selected teams then compete in the final round at Google's office.

Kick Start

Kick Start, also a global online coding competition, consists of a variety of algorithmic challenges designed by Google engineers. Participants can take part in one of the online rounds or in all of them, and the top participants get a chance to be interviewed at Google. The best part about Kick Start is that it is open to all participants with no pre-qualification needed, which makes it a good option for anyone competing in a coding competition for the first time.

What can you expect from this unified interface?

Good competition and amazing insights from each of the rounds.
A personalized certificate of completion.
A chance to practice coding and experience new challenges.
A lot of opportunities.

To stay updated with the registration dates and details, sign up on Google's coding competitions official page. To know more about the competitions, check out Google's blog.

Google hints shutting down Google News over EU's implementation of Article 11 or the "link tax"
Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company
Google Dart 2.1 released with improved performance and usability

Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users

Sugandha Lahoti
03 Jun 2019
6 min read
Update: Opera, Brave, and Vivaldi have ignored Chrome's anti-ad-blocker changes, despite the shared codebase. On June 12, Google published a blog post clarifying its intentions with the ad-blocking extension changes, saying it isn't trying to kill ad blockers: "This has been a controversial change since the Web Request API is used by many popular extensions, including ad blockers. We are not preventing the development of ad blockers or stopping users from blocking ads. Instead, we want to help developers, including content blockers, write extensions in a way that protects users' privacy."

In January, Chrome proposed Manifest V3, an update to its extension system that could cripple all ad blockers. Even though the proposal received overwhelmingly negative feedback, Google is standing firm on Chrome's ad-blocking changes. Last week, the company shared a statement on Google Groups saying that the current ad-blocking capabilities will not be removed entirely: Chrome will still be able to block unwanted content, but this capability will be restricted to paid, enterprise users of Chrome. "Chrome is deprecating the blocking capabilities of the webRequest API in Manifest V3, not the entire webRequest API (though blocking will still be available to enterprise deployments)."

What is this Manifest V3 controversy?

Google developers have introduced an alternative to the blocking webRequest API named the declarativeNetRequest API, a less effective, rules-based system. Chrome currently imposes a limit of 30,000 rules, while the most popular ad-blocking rule lists use almost 75,000 rules. Although Google claimed to be looking at raising this number, it made no promises: "We are planning to raise these values but we won't have updated numbers until we can run performance tests to find a good upper bound that will work across all supported devices."

According to Manifest V3, the declarativeNetRequest API will be treated as the primary content-blocking API in extensions. Chrome developers listed two reasons behind this update: performance and a better privacy guarantee for users. The API lets extensions tell Chrome ahead of time what to do with a given request, rather than having Chrome forward each request to the extension; this allows Chrome to handle requests synchronously.

Regarding the performance claim, however, a study published on WhoTracks.me analyzed the network performance of the most commonly used ad blockers: uBlock Origin, Adblock Plus, Brave, DuckDuckGo, and Cliqz's Ghostery. The study revealed that these content blockers, with the exception of DuckDuckGo's, have only sub-millisecond median decision time per request, an overhead that users will not notice. Additionally, the efficiency of content blockers is continuously being improved with innovative approaches or with the help of technologies like WebAssembly.
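For a sense of what the rules-based model looks like, the sketch below shows the approximate shape of one declarativeNetRequest block rule, written as a Python dict purely for illustration; real extensions declare such rules in JSON inside the extension package, and the exact field set is defined by Chrome's extensions documentation. The ad host here is hypothetical.

```python
# Approximate shape of one declarativeNetRequest rule (illustrative only).
# The extension ships a static list of such rules; Chrome itself matches
# requests against them, instead of forwarding each request to the extension.
block_rule = {
    "id": 1,
    "priority": 1,
    "action": {"type": "block"},
    "condition": {
        "urlFilter": "||ads.example.com^",   # hypothetical ad host
        "resourceTypes": ["script", "image"],
    },
}
```

With a hard cap of 30,000 such rules, a list on the scale of the popular ~75,000-rule sets cannot be expressed in full, which is the core of the complaint.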
A uBlock maintainer had earlier reported an issue on the Chromium bug tracker for this feature: "If this (quite limited) declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two content blockers I have maintained for years, uBlock Origin ("uBO") and uMatrix, can no longer exist." In its update, Google wrote that appropriately permissioned extensions will still be able to observe network requests using the webRequest API, which it insisted is "foundational for extensions that modify their behavior based on the patterns they observe at runtime."

Now, the lead developer of uBlock Origin, Raymond Hill, has commented on the situation. Losing the ability to block content with the webRequest API is his main concern. "This breaks uBlock Origin and uMatrix, [which] are incompatible with the basic matching algorithm [Google] picked, ostensibly designed to enforce EasyList-like filter lists," he explained in an email to The Register. "A blocking webRequest API allows open-ended content blocker designs, not restricted to a specific design and limits dictated by the same company which states that content blockers are a threat to its business."

He also called out Google's business model on uBlock Origin's GitHub: "The blocking ability of the webRequest API caused Google to yield control of content blocking to content blockers. Now that Google Chrome is the dominant browser, it is in a better position to shift the optimal point between the two goals which benefits Google's primary business. The deprecation of the blocking ability of the webRequest API is to gain back this control, and to further now instrument and report how web pages are filtered since now the exact filters which are applied to web page is information which will be collectable by Google Chrome."

For a number of web users, this was the last straw. Many said they would be moving on from Chrome to other privacy-friendly browsers. One comment reads, "If you use an iOS device, Safari is awesome. The integration between all your hardware devices syncing passwords, tabs, bookmarks, reading list, etc. kicks ass. That's all not to mention its excellent built-in privacy features and that it's really really fast." Another comment reads, "I used to have Firefox. When I heard that even Microsoft was going to use Chromium I realized, Firefox is literally the last front! I installed Firefox and started using it as my main browser." Another says, "Genuinely, most people are choosing between privacy and convenience. And with Firefox you don't need to choose."

Mozilla's Firefox has taken this opportunity to attract Chrome users with a new page detailing how to switch from Chrome to Firefox: "Switching to Firefox is fast, easy and risk-free. Firefox imports your bookmarks, autofill, passwords and preferences from Chrome." The latest Firefox release also comes with a new feature that helps users block fingerprinting by ad trackers. The Brave browser also tweeted about Chrome's plans, stating it will block ads regardless of Chrome's decisions.

https://twitter.com/brave/status/1134182650615173120

Users also appreciated Brave's privacy features.

https://twitter.com/jenzhuscott/status/1134035348240109568

Chrome software security engineer Chris Palmer took to Twitter to claim the move was intended to help improve the end-user browsing experience, and that paid enterprise users would be exempt from the changes.
https://twitter.com/fugueish/status/1133851275794059265

Chrome security lead Justin Schuh also said the changes were driven by privacy and security concerns.

https://twitter.com/justinschuh/status/1134092257190064128

Top browsers Opera, Brave, and Vivaldi have ignored Chrome's anti-ad-blocker changes, despite having a shared codebase.

https://twitter.com/opera/status/1137717494733508609
https://twitter.com/brave/status/1134182650615173120
https://twitter.com/vivaldibrowser/status/1136204715786719232

Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument
Flutter gets new set of lint rules to build better Chrome OS apps
Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end

Liz Fong-Jones reveals she is leaving Google in February

Richard Gall
03 Jan 2019
2 min read
Liz Fong-Jones has been a key figure in the politicization of Silicon Valley over the last 18 months. But the Developer Advocate at Google Cloud Platform revealed today (3rd January 2019) that she is to leave the company in February, citing Google's lack of leadership in response to the demands made by employees during the Google walkout in November 2018.

Fong-Jones hinted before Christmas that she had found another role, writing on Twitter that she had a new job lined up:

https://twitter.com/lizthegrey/status/1075837650433646593

That was confirmed today when Fong-Jones tweeted "Resignation letter is in. February 25 is my last day." Her new role hasn't yet been revealed, but it appears that she will remain within SRE. She told one follower that she will likely be at SRECon in Dublin later in the year.

https://twitter.com/lizthegrey/status/1080837397347221505

She made it clear that she had no issue with her team, stating that her decision to leave was instead "a reflection on what Google's become over the 11 years I've worked there."

Why Liz Fong-Jones' exit from Google is important

Fong-Jones' exit from Google doesn't reflect well on the company. If anything, it only serves to highlight the company's stubbornness. Despite having months to respond to serious allegations of sexual harassment and systemic discrimination, there appears to be a refusal to acknowledge problems, let alone find a way forward to tackle them.

From Fong-Jones' perspective, the move is probably as much pragmatic as it is symbolic. She spoke on Twitter of "burnout" from "doing what has to be done, as second shift work."

https://twitter.com/lizthegrey/status/1080848586135560192

While there are clearly personal reasons for Fong-Jones to leave Google, her importance as a figure in conversations around tech worker rights and diversity means her exit will carry significant symbolic power. It's likely that she'll continue to play an important part in helping tech workers - in Silicon Valley and elsewhere - organize for a better future, even as she aims to do "more of what you want to do".

React 16.x roadmap released with expected timeline for features like “Hooks”, “Suspense”, and “Concurrent Rendering”

Sugandha Lahoti
28 Nov 2018
3 min read
Yesterday, the React team published a roadmap for the React 16.x releases, splitting the rollout of new features into different milestones. The team has made it clear that they have a single vision for how all of these features fit together, but are releasing each part as soon as it is ready so users can start testing them sooner.

The expected milestones

React 16.6: Suspense for Code Splitting (already shipped)

Suspense lets components "suspend" rendering while they wait for something, and display a loading indicator in the meantime. It is a convenient programming model that provides a better user experience in Concurrent Mode. In React 16.6, Suspense for code splitting supports only one use case: lazy loading components with React.lazy() and <React.Suspense>.

React 16.7: React Hooks (~Q1 2019)

React Hooks give function components access to features like state and lifecycle. They also let developers reuse stateful logic between components without introducing extra nesting into the tree. Hooks are only available in the 16.7 alpha versions of React, and some of their API is expected to change in the final 16.7 version. In future releases, class support might move to a separate package, reducing the default bundle size of React.

React 16.8: Concurrent Mode (~Q2 2019)

Concurrent Mode lets React apps be more responsive by rendering component trees without blocking the main thread. It is opt-in, and allows React to interrupt a long-running render to handle a high-priority event. Concurrent Mode was previously referred to as "async mode"; the name change highlights React's ability to perform work at different priority levels, which sets it apart from other approaches to async rendering. As of now, the team doesn't expect many bugs in Concurrent Mode, but notes that components producing warnings in <React.StrictMode> may not work correctly. They plan to publish more guidance about diagnosing and fixing issues as part of the 16.8 release documentation.

React 16.9: Suspense for Data Fetching (~mid 2019)

In the already shipped React 16.6, the only supported use case for Suspense is code splitting. In the future 16.9 release, React will officially support ways to use Suspense for data fetching. The team will provide a reference implementation of a basic "React Cache" that's compatible with Suspense, and data-fetching libraries like Apollo and Relay will be able to integrate with Suspense by following a simple specification. The team expects this feature to be adopted incrementally, through layers like Apollo or Relay, rather than directly.

They also plan to complete two more projects, Modernizing React DOM and Suspense for Server Rendering, in 2019. As these projects require more exploration, they aren't tied to a particular release as of now. For more information, visit the React blog.

React Conf 2018 highlights: Hooks, Concurrent React, and more
React introduces Hooks, a JavaScript function to allow using React without classes
React 16.6.0 releases with a new way of code splitting, and more!

PyPy will continue to support Python 2.7, even as major Python projects migrate to Python 3

Fatema Patrawala
14 Aug 2019
4 min read
The switch from Python 2 to Python 3 has been rocky, and all signs point to Python 3 pulling firmly into the lead. Python 3 is broadly compatible with the major libraries, and there's an encouraging rate of adoption by cloud providers for application support too, as Python 2 approaches its EOL in 2020. But there are still plenty of efforts to keep Python 2 alive in one form or another. The default implementation of Python is open source, so it can easily be forked and maintained separately. Currently, all major open source Python packages support both Python 3.x and Python 2.7.

Last year, the Python team reminded users that Python 2.7 maintenance will stop in 2020. Originally there was no official date, but in March 2018 the team announced it to be January 1, 2020.

https://twitter.com/ThePSF/status/1160839590967685121

This means that the maintainers of Python 2 will stop supporting it, even with security patches. There are many institutions and codebases which have not yet ported their code from Python 2 to Python 3. Python volunteers have created resources to help publicize and educate, but there's still more work to be done. To that end, the Python Software Foundation has contracted Changeset Consulting to help communicate the sunsetting of Python 2. The high-level goal of Changeset's involvement is to help users through the end of the transition, help with communication so volunteers are not overwhelmed, and help update public-facing assets so core developers are not overwhelmed. This will also require all the major Python projects to migrate to Python 3 and above.

However, PyPy confirmed last week that it does not plan to deprecate Python 2.7 support as long as PyPy exists, according to its official Twitter statement.

https://twitter.com/pypyproject/status/1160209907079176192

Apart from this, the PyPy runtime is popular among developers due to its built-in JIT, which provides major speed boosts to Python code. PyPy has long favored Python 2 over Python 3. This favoritism isn't solely because the first versions of PyPy were Python 2 implementations and Python 3 only recently entered the picture. It's also due to a key part of PyPy's ecosystem: RPython, a dynamic-language implementation framework, has its foundation in Python 2. This is not likely to change, according to PyPy's official FAQ. The page states, "the Python 2 version of PyPy will be around 'forever', i.e. as long as PyPy itself is around." According to PyPy's official announcement, it will support Python 3 while continuing to support the Python 2.7 version.

Last year, when Python rolled out the announcement that Python 2 will officially end in 2020, users on Hacker News discussed the most popular packages being compatible with Python 3 while millions of people in the industry still work on Python 2.7. One user's comment read, "most popular packages are now compatible with Python 3 I often see this but I think it's a perception from the Internet/web world. I work for CGI, all (I'm not kidding) our software (we have many) are 2.7. You will never see them used "on the web/Internet/forum/network" place but the day-to-day job of millions of people in the industry is 2.7. And we are a tiny focused industry. So I'm sure there are many other industries like us which are 2.7 that you never heard of. That's why "most popular" mean nothing once you take how Python is used as a whole. We don't use any of this web/Internet/network "popular" packages. I'm not saying Python shouldn't move on.
I'm just trying to argue against this "most popular packages" while millions of us, even if you don't know it, use none of those. GNU Radio 3.8.0.0 releases with new dependencies, Python 2 and 3 compatibility, and much more! NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption Python 3.8 new features: the walrus operator, positional-only parameters, and much more

Core Python team confirms sunsetting Python 2 on January 1, 2020

Vincy Davis
10 Sep 2019
3 min read
Yesterday, the team behind Python posted details about the sunsetting of Python 2. As announced before, after January 1, 2020, Python 2 will not be maintained by the Python team. This means that it will no longer receive new features and will not be fixed even if a security problem is found in it.

https://twitter.com/gvanrossum/status/1170949978036084736

Why is Python 2 retiring?

In the detailed post, the Python team explains that the major alterations needed in Python 2 led to the birth of Python 3 in 2006. To keep users happy, the team kept improving and publishing both versions together. However, some necessary changes could not be handled by Python 2, and maintaining it took time away from improving Python 3 faster, so the team decided to sunset the older version. The team says, "So, in 2008, we announced that we would sunset Python 2 in 2015, and asked people to upgrade before then. Some did, but many did not. So, in 2014, we extended that sunset till 2020."

The Python team has clearly stated that from January 1, 2020 onwards, they will not upgrade or improve the second version of Python even if a fatal security problem crops up in it. Their advice to Python 2 users is to switch to Python 3 using the official porting guide, as many tools will stop supporting Python 2 in the future. The Python 3 readiness graph tracks Python 3 support across the 360 most popular Python packages, and users can also check out the 'Can I Use Python 3?' tool to find out which of their dependencies block an upgrade.

Python 3 adoption has begun

As the end date was decided well in advance, many implementations of Python have already dropped support for Python 2 or are supporting both Python 2 and 3 for now. Two months ago, NumPy, the numerical computing library for Python, officially dropped support for Python 2.7 in its latest version, NumPy 1.17.0, which supports only Python versions 3.5 - 3.7. Earlier this year, pandas 0.24 stopped supporting Python 2; pandas maintainer Jeff Reback had said, "It's 2019 and Python 2 is slowly trickling out of the PyData stack." However, not all projects are fully on board yet, and there have also been efforts to keep Python 2 alive. In August this year, PyPy announced that it does not plan to deprecate Python 2.7 support as long as PyPy exists.

https://twitter.com/pypyproject/status/1160209907079176192

Many users are happy to say goodbye to the second version of Python in favor of building towards a long-term vision.

https://twitter.com/mkennedy/status/1171132063220502528
https://twitter.com/MeskinDaniel/status/1171244860386480129

A user on Hacker News comments, "In 2015, there was no way I could have moved to Python 3. There were too many libraries I depended on that hadn't ported yet. In 2019, I feel pretty confident about using Python 3, having used it exclusively for about 18 months now. For my personal use case at least, this timeline worked out well for me. Hopefully it works out for most everyone. I can't imagine they made this decision without at least some data backing it up."

Head over to the Python website for more details about this news.

Latest news in Python
Introducing OpenDrop, an open-source implementation of Apple AirDrop written in Python
Poetry, a Python dependency management and packaging tool, releases v1 beta 1 with URL dependency
Łukasz Langa at PyLondinium19: "If Python stays synonymous with CPython for too long, we'll be in big trouble"
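A few of the incompatibilities that make porting non-trivial fit in one short snippet. These are standard Python 2/3 differences, shown here for illustration; the official porting guide covers them in depth.

```python
# Valid Python 3; each line marks a behavior that changed from Python 2.
print("hello")                  # print is a function, not a statement
assert 7 / 2 == 3.5             # "/" is true division (Python 2: 7 / 2 == 3)
assert 7 // 2 == 3              # floor division must now be explicit
assert isinstance("x", str)     # text...
assert isinstance(b"x", bytes)  # ...and bytes are distinct types in Python 3
```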

A kernel vulnerability in Apple devices gives access to remote code execution

Prasad Ramesh
01 Nov 2018
2 min read
A heap buffer overflow vulnerability was found in Apple's XNU OS kernels by Kevin Backhouse. An exploit can potentially cause any iOS or macOS device on the same network to reboot without any user interaction. Apple has classified this as a remote code execution (RCE) vulnerability in the kernel, since it may be possible to exploit the buffer overflow to execute arbitrary code in the kernel. The vulnerability is fixed in iOS 12 and macOS Mojave.

The vulnerability is caused by a heap buffer overflow in the networking code within the XNU kernel. XNU is the kernel developed by Apple and used in both iOS and macOS, so most iPhones, iPads, and MacBooks are affected. An attacker merely needs to send a malicious IP packet to the target device's IP address to trigger the bug. The attack only works if the attacker is on the same network as the target, which becomes easy if you're using a free WiFi network in, say, a coffee shop. Because the vulnerability sits in the kernel, antivirus software cannot protect your device. The attacker can control the size and content of the heap buffer, creating the potential for remote code execution on the device.

There are two known mitigations against this kernel vulnerability:

Enabling stealth mode in the macOS firewall prevents the attack from taking place.
Avoiding public WiFi networks, where the risk of being attacked is high.

These OS versions and devices are vulnerable:

All devices running Apple iOS 11 and earlier
All Apple macOS High Sierra devices up to 10.13.6 (patched in security update 2018-001)
Devices using Apple macOS Sierra up to 10.12.6 (patched in security update 2018-005)
Apple OS X El Capitan and earlier devices

The kernel vulnerability was reported by Kevin Backhouse to Apple in time for the fix to roll out with iOS 12 and macOS Mojave. The vulnerabilities were publicly announced on October 30. For more details, visit the LGTM website.

Final release for macOS Mojave is here with new features, security changes and a privacy flaw
The kernel community attempting to make Linux more secure
Apple has introduced Shortcuts for iOS 12 to automate your everyday tasks

Google Brain’s Universal Transformers: an extension to its standard translation system

Fatema Patrawala
22 Aug 2018
4 min read
In August last year, Google released the Transformer, a novel neural network architecture based on a self-attention mechanism particularly well suited for language understanding. Before the Transformer, most neural-network-based approaches to machine translation relied on recurrent neural networks (RNNs), which operate sequentially using recurrence. In contrast to RNN-based approaches, the Transformer uses no recurrence: it processes all words or symbols in the sequence at once and lets each word attend to every other word over multiple processing steps, using self-attention to incorporate context from words farther away. This approach allowed the Transformer to train much faster than recurrent models and to yield better translation results than RNNs.

"However, on smaller and more structured language understanding tasks, or even simple algorithmic tasks such as copying a string (e.g. to transform an input of 'abc' to 'abcabc'), the Transformer does not perform very well," say Stephan Gouws and Mostafa Dehghani from the Google Brain team. Hence, this year the team has come up with Universal Transformers, an extension of the standard Transformer that is computationally universal, using a novel and efficient flavor of parallel-in-time recurrence. The Universal Transformer is built to yield stronger results across a wider range of tasks.

How does the Universal Transformer function?

The Universal Transformer keeps the parallel structure of the Transformer to retain its fast training speed, but replaces the Transformer's fixed stack of different transformation functions with several applications of a single, parallel-in-time recurrent transformation function. Crucially, where an RNN processes a sequence symbol by symbol (left to right), the Universal Transformer processes all symbols at the same time (like the Transformer), and then refines its interpretation of every symbol in parallel over a variable number of recurrent processing steps using self-attention. This parallel-in-time recurrence mechanism is faster than the serial recurrence used in RNNs, and it makes the Universal Transformer more powerful than the standard feedforward Transformer.

Source: Google AI Blog

At each step, information is communicated from each symbol (e.g. a word in the sentence) to all other symbols using self-attention, just as in the original Transformer. However, the number of times this transformation is applied to each symbol (i.e. the number of recurrent steps) can either be set manually ahead of time (e.g. to some fixed number or to the input length), or be decided dynamically by the Universal Transformer itself. To achieve the latter, the team added an adaptive computation mechanism at each position, which allocates more processing steps to symbols that are ambiguous or require more computation.

Furthermore, on a diverse set of challenging language understanding tasks, the Universal Transformer generalizes significantly better and achieves a new state of the art on the bAbI linguistic reasoning task and the challenging LAMBADA language modeling task. Perhaps the larger feat is that the Universal Transformer also improves translation quality by 0.9 BLEU over a base Transformer with the same number of parameters, trained in the same way on the same training data.
"Putting things in perspective, this almost adds another 50% relative improvement on top of the previous 2.0 BLEU improvement that the original Transformer showed over earlier models when it was released last year," says the Google Brain team. The code to train and evaluate Universal Transformers can be found in the open-source Tensor2Tensor repository. Read in detail about the Universal Transformers on the Google AI blog.

Create an RNN based Python machine translation system [Tutorial]
FAE (Fast Adaptation Engine): iOlite's tool to write Smart Contracts using machine translation
Setting up the Basics for a Drupal Multilingual site: Languages and UI Translation
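To make the "parallel-in-time recurrence" concrete, here is a toy Python/NumPy sketch of the core loop: one shared self-attention update applied to all positions at once, repeated for a number of steps. It deliberately omits the learned projections, the transition function, and the adaptive computation mechanism of the real model.

```python
import numpy as np

def self_attention(X):
    # X: (seq_len, d) matrix of symbol representations.
    # Every symbol attends to every other symbol in parallel.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ X

def universal_transformer(X, steps=3):
    # Apply the SAME transformation repeatedly (recurrence in depth),
    # rather than a fixed stack of distinct layers.
    for _ in range(steps):
        X = self_attention(X)
    return X

X = np.random.randn(5, 8)                 # 5 symbols, 8-dim representations
print(universal_transformer(X).shape)     # (5, 8)
```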

Virality of fake news on social media: Are weaponized AI bots to blame, questions Destin Sandlin

Savia Lobo
04 Feb 2019
4 min read
A lot of fake news has been spreading in recent times via social media channels such as Facebook, Twitter, and WhatsApp. A group of researchers from the University of Southern California came up with a paper titled "Combating Fake News: A Survey on Identification and Mitigation Techniques" that discusses existing methods and techniques for identifying and mitigating fake news. The Microsoft Edge mobile browser also flags untrustworthy news sites with the help of a plugin named NewsGuard. But how far have we actually come in combating fake news?

This weekend, Destin Sandlin, an engineer who runs the educational YouTube series Smarter Every Day, tweeted about how fake news gains popularity on YouTube by being quite literally engineered into our daily feeds, using sophisticated AI, destructive bots, and so on.

https://twitter.com/smartereveryday/status/1091833011262423040

He started off by tweeting about "weaponized bots, algorithm exploitation, countermeasures, and counter-countermeasures!" He mentioned seeing a YouTube video thumbnail with a picture of Donald Trump and Ruth Bader Ginsburg side by side. What caught his eye was that the video had received 135,000 views, making it look legitimate. He then explained that the video was simply a bot reading a script, and realized that these bots have come up with ways to auto-generate YouTube videos and upload them. "I recognize that this video is meant to manipulate me so I go to close the video."

https://twitter.com/smartereveryday/status/1091833831206866944

Sandlin highlighted another fact: these videos had a 2,400-to-143 like-to-dislike ratio, which he believes points to some sort of weaponized algorithm exploitation.

Source: Twitter

He said that to get maximum views on YouTube, all a video has to do is land in the sidebar or the suggested videos list. He also mentioned the example of a channel that appeared in his suggestions, "The Small Workshop", which managed to get 13 million views.

https://twitter.com/smartereveryday/status/1091835106149453826

The Trump - Ginsburg video

Sandlin searched YouTube for "After trump sends note to Ginsburg", following which he got tons of different videos with the same content. He said, "They all use the exact same script, but the computerized voices are different to not trip YouTube's audio detectors, the videos all use different footage to avoid any visual content ID match". "This is an offensive AI at work, and it's built to avoid every countermeasure", he added.

Sandlin tweeted, "I think the strategy is simple… if you bot-create enough videos on the same topic and generate traffic to those artificially…many will fail, but eventually, the algorithm will suggest one of them above the others, and it will be promoted as 'THE ONE'."

He further said that tech company engineers are tasked with developing countermeasures to these kinds of attacks, though he is unsure who the attackers are: "Is there a building in a foreign country where soldiers go to work/battle every day to 'comment, like, and subscribe?'" Or are these clever software developers building bots to automatically create videos and accounts to promote those videos? "I would assume they're using AI to see what types of videos and comments are amplified the most." He also wonders how often the TTPs (Techniques, Tactics, and Procedures) change:
"When the small groups of engineers at YouTube, Facebook, Instagram or Twitter develop a countermeasure, how long until counter-countermeasures are developed and deployed?"

According to a post at Resurgent, "Perhaps Sandlin's suggestion, responding with an active unity, a countermeasure of forgiveness and grace, is the best answer. There's no AI or algorithm that can defeat those weapons." Read Destin Sandlin's complete tweet thread to know more.

WhatsApp limits users to five text forwards to fight against fake news and misinformation
Is Anti-trust regulation coming to Facebook following fake news inquiry made by a global panel in the House of Commons, UK?
Fake news is a danger to democracy. These researchers are using deep learning to model fake news to understand its impact on elections

Linux 5.1 out with Io_uring IO interface, persistent memory, new patching improvements and more!

Vincy Davis
08 May 2019
3 min read
Yesterday, Linus Torvalds, the principal developer of the Linux kernel, announced the release of Linux 5.1 on the kernel mailing list. This release brings lots of great additions as well as improvements to existing features. The previous version, Linux 5.0, was released two months ago.

"On the whole, 5.1 looks very normal with just over 13k commits (plus another 1k+ if you count merges). Which is pretty much our normal size these days. No way to boil that down to a sane shortlog, with work all over," said Linus Torvalds in the official announcement.

What's new in Linux 5.1?

io_uring: a new Linux I/O interface

Linux 5.1 introduces a new high-performance interface called io_uring, designed as an easy-to-use, hard-to-misuse user/application interface. io_uring offers efficient buffered asynchronous I/O, the ability to do I/O without performing a system call per operation via polled I/O, and other efficiency enhancements, helping deliver fast and efficient I/O on Linux. The liburing user-space library makes it simpler to use, and Jens Axboe's fio benchmark tool has already been adapted to support io_uring. The release also permits safe signal delivery in the presence of PID reuse, alongside assorted power-management improvements.

Security

Linux 5.1 adds the SafeSetID LSM module, which gives administrators security and policy controls: it restricts UID/GID transitions from a given UID/GID to only those approved by a system-wide allow list, without granting the full auxiliary privileges associated with CAP_SET{U/G}ID, such as the ability to set up user-namespace UID mappings.

Storage

Along with physical RAM, users can now use persistent memory as RAM (system memory), making it a cost-effective RAM replacement. The release also allows booting the system to a device-mapper device without using an initramfs, and adds support for cumulative patches in the live kernel patching feature.

Live patching improvements

Linux 5.1 adds a new live-patching capability called Atomic Replace: a single cumulative patch can include all wanted changes from all older live patches and completely replace them in one transition. Live patching enables a running system to be patched without the need for a full system reboot.

Users are quite happy with this update. A user on Reddit commented, "Finally! I think this one fixes problems with Elantech's touchpads spamming the dmesg log. Can't wait to install it!" Another user added, "Thank you and congratulations to the developers!"

To download the Linux kernel 5.1 sources, head over to kernel.org. To know more about the release, check out the official mailing list announcement.

Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32
Announcing Linux 5.0!
Bodhi Linux 5.0.0 released with updated Ubuntu core 18.04 and a modern look

Netflix open sources Zuul 2 cloud gateway

Pavan Ramchandani
28 May 2018
2 min read
Netflix announced on its tech blog that its popular cloud gateway, Zuul 2, is now open source. Zuul 2, announced back in 2016, is Netflix's Java-based API gateway that handles the requests of Netflix's entire user base. Zuul 2 is the front door, acting as a filter for every request that comes into Netflix's servers: it monitors each request and routes it to the appropriate service to act on. Zuul, in a way, is responsible for keeping Netflix standing strong and fulfilling your streaming requests.

Netflix is known for open sourcing many of the tools it develops in-house for the community. Zuul 2 is a battle-tested tool, having handled Netflix's massive infrastructure, so developers now have the option of a more resilient gateway for their own infrastructure architecture. Netflix promises to keep the security aspect intact for the open source Zuul 2.

Netflix also announced some upcoming features for Zuul 2:

Server protocols: Zuul 2 has full support for HTTP/2 connections, and Mutual TLS will enhance Zuul's operation in secure infrastructure.
Resiliency features: To increase availability, Netflix will add a feature called Adaptive Retries, already used inside Netflix, as well as configurable concurrency limits to protect origins from getting overloaded and to isolate the other origins running behind Zuul.
Request Passport: This feature enables the Zuul server to track all events that occur for each request, which helps in debugging asynchronous requests and improving the availability of your services.
Status Categories: This feature helps you categorize requests by extending the success and failure states beyond the HTTP status code.
Request attempts: This tracks each proxy attempt and its status, which helps identify retries and debug routing.

Zuul also has enhanced self-service routing, load balancing, anomaly detection, and other primary features that Netflix uses to keep its infrastructure secure and running. Netflix has released several other tools, including Titus (container management), Conductor (microservice orchestration), Hystrix (latency and fault tolerance), and Vizceral (traffic visualization), among other tools suited to large infrastructures. You can read Netflix's announcement blog to get more insights into the future development of Zuul 2.

What software stack does Netflix use?

FreeRTOS affected by 13 vulnerabilities in its TCP/IP stack

Savia Lobo
23 Oct 2018
2 min read
FreeRTOS, a popular real-time operating system kernel for embedded devices, has been found to contain 13 vulnerabilities, as reported by Bleeping Computer yesterday. Several of these vulnerabilities allow remote code execution.

FreeRTOS supports more than 40 hardware platforms and powers microcontrollers in a diverse range of products, including temperature monitors, appliances, sensors, fitness trackers, and other microcontroller-based devices. Although it works at a smaller component scale and lacks the complexity that comes with more elaborate hardware, it allows data to be processed as it comes in.

A researcher at Zimperium, Ori Karliner, analyzed the operating system and found that all of its varieties are vulnerable to:

4 remote code execution bugs,
1 denial of service,
7 information leaks, and
another security problem which is yet undisclosed.

Here is the full list of the vulnerabilities and their identifiers that affect FreeRTOS:

CVE-2018-16522 - Remote Code Execution
CVE-2018-16525 - Remote Code Execution
CVE-2018-16526 - Remote Code Execution
CVE-2018-16528 - Remote Code Execution
CVE-2018-16523 - Denial of Service
CVE-2018-16524 - Information Leak
CVE-2018-16527 - Information Leak
CVE-2018-16599 - Information Leak
CVE-2018-16600 - Information Leak
CVE-2018-16601 - Information Leak
CVE-2018-16602 - Information Leak
CVE-2018-16603 - Information Leak
CVE-2018-16598 - Other

FreeRTOS versions affected by the vulnerabilities

FreeRTOS versions up to V10.0.1, AWS FreeRTOS up to V1.3.1, and OpenRTOS and SafeRTOS (with WHIS Connect middleware TCP/IP components) are affected. Amazon has been notified of the situation and, in response, has released patches to mitigate the problems. Per the report, "Amazon decided to become involved in the development of the product for the Internet-of-Things segment. The company extended the kernel by adding libraries to support cloud connectivity, security and over-the-air updates." According to Bleeping Computer, "Zimperium is not releasing any technical details at the moment. This is to allow smaller vendors to patch the vulnerabilities. The wait time expires in 30 days."

To know more about these vulnerabilities in detail, visit the full coverage by Bleeping Computer.

NSA researchers present security improvements for Zephyr and Fuchsia at Linux Security Summit 2018
How the Titan M chip will improve Android security
EFF kicks off its Coder's Rights project with a paper on protecting security researchers' rights

Developers of Firefox Focus set to replace Android’s WebView with GeckoView

Bhagyashree R
14 Sep 2018
2 min read
Yesterday, Mozilla announced that it will release a new version of Firefox Focus for Android next week. This version will be powered by Gecko, the browser engine developed by Mozilla that also powers Firefox Quantum.

Firefox Focus enables you to stay "focused" by automatically blocking ads and trackers. Once you are done browsing, you can delete your search history completely using its erase button. It provides a faster browsing experience without the worry of retargeted ads.

Why does Firefox Focus need Gecko?

Since the beginning, Focus has used Android's built-in WebView, but WebView has limitations: it is not designed for building browsers, and it only supports a subset of web standards, as Google expects developers to use native Android APIs, not the Web, for advanced capabilities. To add next-generation privacy features to Focus, its developers need deep access to the browser internals. This is why they decided to use Mozilla's own engine, Gecko.

Firefox for Android already uses Gecko, but not in a way that is easy to reuse in other applications. To make Gecko reusable, Mozilla built GeckoView, decoupling the engine from its user interface and packaging it as a reusable Android library. In a nutshell, GeckoView lets Mozilla apply all of its Firefox expertise to building more compelling, safe, and robust online experiences. Mozilla is also planning to use GeckoView in entirely new products like Firefox Reality, a browser designed for virtual and augmented reality headsets. You will hear more about Firefox Reality later this year, promises the Mozilla blog.

You can currently download Focus Beta and report issues, if any. If you are an Android developer, you can give the library a try or contribute directly on GitHub. To read more about GeckoView, check out the announcement on Mozilla's official website.

Mozilla releases Firefox 62.0 with better scrolling on Android, a dark theme on macOS, and more
Upcoming Firefox update will, by default, protect users privacy by blocking ad tracking
Firefox Nightly's Secure DNS Experimental Results out