
Tech News


Unity Learn Premium, a learning platform for professionals to master real-time 3D development

Sugandha Lahoti
27 Jun 2019
3 min read
Unity has announced a new learning platform for professionals and hobbyists to advance their Unity knowledge and skills within their industry. Unity Learn Premium builds on the launch of the free Unity Learn platform, which hosts hundreds of free projects and tutorials, including two new beginner projects. Users can search learning materials by topic, content type, and level of expertise. Tutorials come with how-to instructions, video clips, and code snippets, making it easier to switch between Unity Learn and the Unity Editor.

The Unity Learn Premium service allows creators to get immediate answers, feedback, and guidance directly from experts with Learn Live, biweekly interactive sessions with Unity-certified instructors. Learners can also track progress on guided learning paths, work through shared challenges with peers, and access an exclusive library of resources updated every month with the latest Unity releases. The premium version will offer live access to Unity experts and learning content across industries, including architecture, engineering, and construction; automotive, transportation, and manufacturing; media and entertainment; and gaming.

The Unity Learn Premium announcement comes on the heels of the launch of the Unity Academic Alliance, a membership program through which educators and institutions can incorporate Unity into their curriculum.

Jessica Lindl, VP and Global Head of Education at Unity Technologies, wrote to us in a statement, “Until now, there wasn’t a definitive learning resource for learning intermediate to advanced Unity skills, particularly for professionals in industries beyond gaming. The workplace of today and tomorrow is fast-paced and driven by innovation, meaning workers need to become lifelong learners, using new technologies to upskill and ultimately advance their careers. We hope that Unity Learn Premium will be the perfect tool for professionals to continue on this learning path.”

She further wrote to us, "Through our work to enable the success of creators around the world, we discovered there is no definitive source for advancing from beginner to expert across all industries, which is why we're excited to launch the Unity Learn Platform. The workplace of today and tomorrow is fast-paced and driven by innovation, forcing professionals to constantly be reskilling and upskilling in order to succeed. We hope the Unity Learn Platform enables these professionals to excel in their respective industries."

Unity Learn Premium will be available at no additional cost for Plus and Pro subscribers and offered as a standalone subscription for $15/month. You can access more information here.


Brave ad-blocker gives 69x better performance with its new engine written in Rust

Bhagyashree R
27 Jun 2019
3 min read
Looks like Brave has also jumped on the bandwagon of writing or rewriting its components in the Rust programming language. Yesterday, its team announced that they have reimplemented the browser's ad-blocker, previously written in C++, in Rust. As a result, the new ad-blocker is 69x faster than the current engine.

The team chose Rust as it is a memory-safe and performant language. The new ad-blocker implementation can be compiled to native code and run within the native browser core. It can also be packaged in a standalone Node.js module. This reimplemented version is available on Brave's Dev and Nightly channels.

How does this new ad-blocking algorithm work?

The previous ad-blocking algorithm relied on the observation that most requests are passed through without blocking. It used a Bloom filter data structure to track fragments of requests that may match and rule out those that do not. The new implementation is based on the ad-blocking approach of uBlock Origin and Ghostery, which uses tokenization specific to ad-block rule matching against URLs, and rule evaluation optimized for the different kinds of rules.

What makes the new algorithm faster is that it quickly eliminates from the search any rules that are not likely to match a request. “To organize filters in a way that speeds up their matching, we observe that any alphanumeric (letters and numbers) substring that is part of a filter needs to be contained in any matching URL as well,” the team explained. Each of these substrings is hashed to a single number, resulting in a set of tokens. The tokens make matching much easier and faster when a URL is tokenized in the same way. The team further wrote, “Even though by nature of hashing algorithms multiple different strings could hash to the same number (a hash collision), we use them to limit rule evaluation to only those that could possibly match.” If a rule has a specific hostname, it is tokenized too. If a rule contains a single domain option, the entire domain is hashed as another token.

Performance gains made by the reimplementation

For the performance evaluation, the team used the dataset published with the Ghostery ad-blocker performance study, which includes 242,945 requests across 500 popular websites. The new ad-blocker was tested against this dataset with different ad-block rule lists, including the biggest one: EasyList and EasyPrivacy combined. The team performed all the benchmarks on the adblock-rust 0.1.21 library, using a 2018 MacBook Pro laptop with a 2.6 GHz Intel Core i7 CPU and 32 GB RAM.

Following are the performance gains the new ad-blocker showed:
- The new algorithm with its optimized set of rules is 69x faster on average than the current engine.
- When tested with the popular filter list combination of EasyList and EasyPrivacy, it gave a “class-leading performance of spending only 5.7μs on average per request.”
- It already supports most of the filter rule syntax that has evolved beyond the original specification, which will enable the team to handle web compatibility issues better and faster.
- The browser does some of the work that can be helpful to the ad-blocker, further reducing overhead and resulting in an ad-blocker with best-in-class performance.

Head over to Brave's official website to know more in detail.
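The tokenization scheme described above can be illustrated with a small sketch. The following Python fragment is not Brave's adblock-rust code; it is a minimal, assumed illustration of the general technique: hash every alphanumeric substring of a filter into tokens, index filters by those tokens, and at request time only evaluate filters whose tokens all appear among the URL's tokens.

```python
import re
from collections import defaultdict

TOKEN_RE = re.compile(r"[a-z0-9]+")

def tokens(text):
    """Hash every alphanumeric run in the text into an integer token."""
    return {hash(t) for t in TOKEN_RE.findall(text.lower())}

class TokenIndex:
    """Toy filter index: a filter is evaluated against a URL only if all
    of the filter's tokens also occur among the URL's tokens."""

    def __init__(self):
        self._by_token = defaultdict(list)   # token -> filters containing it

    def add_filter(self, pattern):
        ts = tokens(pattern)
        for t in ts:
            self._by_token[t].append((pattern, ts))

    def matches(self, url):
        url_tokens = tokens(url)
        seen = set()
        for t in url_tokens:
            # Candidate filters share at least one token with the URL.
            for pattern, ts in self._by_token.get(t, ()):
                if pattern in seen:
                    continue
                seen.add(pattern)
                # Full evaluation happens only when every filter token is
                # present; here a plain substring check stands in for it.
                if ts <= url_tokens and pattern.lower() in url.lower():
                    return True
        return False

index = TokenIndex()
index.add_filter("doubleclick.net/ads")
index.add_filter("tracker.js")
print(index.matches("https://doubleclick.net/ads/banner.png"))  # True
print(index.matches("https://example.com/article.html"))        # False
```

In the real engine the final evaluation step handles the different rule kinds (hostname anchors, domain options, and so on); the substring check above is only a stand-in for that step.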


A study confirms that a 'pre-bunk' game reduces susceptibility to disinformation and increases resistance to fake news

Fatema Patrawala
27 Jun 2019
7 min read
On Tuesday, the University of Cambridge published research performed on thousands of online game players. The study shows how an online game can work like a “vaccine” and increase skepticism towards fake news by giving people a weak dose of the methods behind disinformation campaigns.

In February last year, University of Cambridge researchers helped launch the browser game Bad News. In this game, you take on the role of a fake-news monger. You drop all pretense of ethics and choose a path that builds your persona as an unscrupulous media magnate, but while playing you have to keep an eye on your ‘followers’ and ‘credibility’ meters. The task is to get as many followers as you can while slowly building up fake credibility as a news site, and you lose if you tell obvious lies or disappoint your supporters.

Jon Roozenbeek, study co-author from Cambridge University, and Dr Sander van der Linden, Director of the Cambridge Social Decision-Making Lab, worked with Dutch media collective DROG and design agency Gusmanson to develop Bad News. DROG develops programs and courses and also conducts research aimed at recognizing disinformation online. The game is primarily available in English, with versions in many other languages including Czech, Dutch, German, Greek, Esperanto, Polish, Romanian, Serbian, Slovenian, and Swedish. The developers have also created a special Junior version for children aged 8 to 11.

Jon Roozenbeek said: “We are shifting the target from ideas to tactics. By doing this, we are hoping to create what you might call a general ‘vaccine’ against fake news, rather than trying to counter each specific conspiracy or falsehood.” He further added, “We want to develop a simple and engaging way to establish media literacy at a relatively early age, then look at how long the effects last”.

The study says that the game increased psychological resistance to fake news

After the game became available to play, thousands of people spent fifteen minutes completing it, and many allowed their data to be used for the research. According to a study of 15,000 participants, the game has been shown to increase “psychological resistance” to fake news. Players stoke anger and fear by manipulating news and social media within the simulation: they deploy Twitter bots, photoshop evidence, and incite conspiracy theories to attract followers, all while maintaining a “credibility score” for persuasiveness.

“Research suggests that fake news spreads faster and deeper than the truth, so combating disinformation after-the-fact can be like fighting a losing battle,” said Dr Sander van der Linden. “We wanted to see if we could preemptively debunk, or ‘pre-bunk’, fake news by exposing people to a weak dose of the methods used to create and spread disinformation, so they have a better understanding of how they might be deceived. This is a version of what psychologists call ‘inoculation theory’, with our game working like a psychological vaccination.”

The study was performed by asking players to rate the reliability of content before and after gameplay

To gauge the effects of the game, players were asked to rate the reliability of a series of different headlines and tweets before and after gameplay. They were randomly allocated a mixture of real and fake news. There were six “badges” to earn in the game, each reflecting a common strategy used by creators of fake news: impersonation, conspiracy, polarisation, discrediting sources, trolling, and emotionally provocative content.

In-game questions measured the effects of Bad News for four of its featured fake news badges. For the disinformation tactic of “impersonation”, which involves mimicking trusted personalities on social media, the game reduced the perceived reliability of fake headlines and tweets by 24% from pre- to post-gameplay. It further reduced the perceived reliability of deliberately polarising headlines by about 10%, and of “discrediting sources” (attacking a legitimate source with accusations of bias) by 19%. For “conspiracy”, the spreading of false narratives blaming secretive groups for world events, perceived reliability was reduced by 20%. The researchers also found that those who registered as most susceptible to fake news headlines at the beginning benefited most from the “inoculation”.

“We find that just fifteen minutes of gameplay has a moderate effect, but a practically meaningful one when scaled across thousands of people worldwide, if we think in terms of building societal resistance to fake news,” said van der Linden.

The sample for the study was skewed towards younger males

The sample was self-selecting (those who came across the game online and opted to play), and as such was skewed toward younger, male, liberal, and more educated demographics. Hence, the first set of results from Bad News has its limitations, the researchers say. However, the study found the game to be almost equally effective across age, education, gender, and political persuasion. The researchers did not mention whether they plan to do a follow-up study that addresses the limitations of this research.

“Our platform offers early evidence of a way to start building blanket protection against deception, by training people to be more attuned to the techniques that underpin most fake news,” added Roozenbeek.

Community discussion revolves around various fake news reporting techniques

The news has attracted much attention on Hacker News, where users have commented on the various reporting techniques that journalists use to promote different stories. One of the user comments reads, “The "best" fake news these days is the stuff that doesn't register even to people are read-in on the usual anti-patterns. Subtle framing, selective quotation, anonymous sources, "repeat the lie" techniques, and so on, are the ones that I see happening today that are hard to immunize yourself from. Ironically, the people who fall for these are more likely to self-identify as being aware and clued in on how to avoid fake news.”

Another user says, “Second best. The best is selective reporting. Even if every story is reported 100% accurately and objectively, by choosing which stories are promoted, and which buried, you can set any agenda you want.”

One user also commented that the discussion dilutes the term “fake news” into general influence and propaganda: “This discussion is falling into a trap where "Fake News" is diluted to synonym for all influencing news and propaganda. Fake News is propaganda that consists of deliberate disinformation or hoaxes. Nothing mentioned here falls into a category of Fake News. Fake News creates cognitive dissonance and distrust. More subtler methods work differently. But mainstream media also does Fake News" arguments are whataboutism.”

To this, another user responds, “I've upvoted you because you make a good point, but I disagree. IMO, Fake News, in your restrictive definition, is to modern propaganda what Bootstrap is to modern frontend dev. It's an easy shortcut, widely known, and even talented operators are going to use it because it's the easiest way to control a (domestic or foreign) population. But resources are there, funding is there, to build much more subtle/complex systems if needed. Cut away Bootstrap, and you don't particularly dent the startup ecosystem. Cut away fake news, and you don't particularly dent the ability of troll farms to get work done. We're in a new era, fake news or not.”


Mozilla introduces Track THIS, a new tool that will create fake browsing history and fool advertisers

Amrata Joshi
27 Jun 2019
4 min read
Most of us worry about our activities being tracked on the internet. Remember the last time you saw ads based on your interests or your browsing history and started wondering whether you were being tracked? Most of our activity is tracked on the web through cookies, which make a note of things such as language preferences, websites visited by the user, and much more. The problem is compounded when data brokers and advertising networks use these cookies to collect user information without consent. Users therefore need control over what advertisers know about them.

This month, the team at Mozilla announced Enhanced Tracking Protection, which blocks third-party cookies by default in the flagship Firefox Quantum browser. In addition, two days ago the team announced the launch of a project called Track THIS, a tool that can help you fool advertisers.

Track THIS opens up 100 tabs that are crafted to fit a specific character: a hypebeast, a filthy rich person, a doomsday prepper, or an influencer. The user's browsing history is depersonalized in a way that confuses advertisers and makes it hard for them to target ads. Track THIS will show users ads for products they might not be interested in at all; users will still see ads, just not targeted ones.

The official blog post reads, “Let’s be clear, though. This will show you ads for products you might not be interested in at all, so it’s really just throwing off brands who want to advertise to a very specific type of person. You’ll still be seeing ads. And eventually, if you just use the internet as you typically would day to day, you’ll start seeing ads again that align more closely to your normal browsing habits. If you’d rather straight-up block third-party tracking cookies, go ahead and get Enhanced Tracking Protection in Firefox.”

Let’s now understand how Track THIS works:
- Before trying Track THIS, users need to manage their tabs and save their work, or open a new window or browser to start the process.
- Track THIS will itself open 100 tabs.
- Users then need to choose a profile to trick advertisers into thinking that the user is someone else.
- Users need to confirm that they are ready to open 100 tabs based on that profile.
- Users then need to close all 100 tabs and open a new window.
- The ads will only be impacted for a few days, and ad trackers will soon start reflecting users' normal browsing habits again.
- Once done experimenting, users can get Firefox with Enhanced Tracking Protection to block third-party tracking cookies by default.

It seems users are excited about this news, as they will be able to get rid of targeted advertisements.
https://twitter.com/minnakank/status/1143863045447458816
https://twitter.com/inthecompanyof/status/1143842275476299776

A few users are wary of using the tool on their phones and are a little skeptical about the 100 tabs. A user commented on Hacker News, “I'm really afraid to click one of those links on mobile. Does it just spawn 100 new tabs?” Another user commented, “Not really sure that a browser should allow a site to open 100 tabs programmatically, if anything this is telling me that Firefox is open to such abuse.”

To know more about this news, check out the official blog post.


Low Carbon Kubernetes Scheduler: A demand side management solution that consumes electricity in low grid carbon intensity areas

Savia Lobo
27 Jun 2019
7 min read
Machine learning experts are increasingly interested in researching how machine learning can be used to reduce greenhouse gas emissions and help society adapt to a changing climate. For example, machine learning can be used to regulate cloud data centres, which manage an important asset, data, and typically comprise tens to thousands of interconnected servers that consume a substantial amount of electrical energy. Researchers from Huawei published a paper in April 2015 estimating that by 2030 data centres will use anywhere between 3% and 13% of global electricity.

At the ICT4S 2019 conference held in Lappeenranta, Finland, from June 10-15, researchers from the University of Bristol, UK, introduced their research on a low carbon scheduling policy for the open-source Kubernetes container orchestrator. The “Low Carbon Kubernetes Scheduler” can provide demand-side management (DSM) by migrating the consumption of electric energy in cloud data centres to countries with the lowest carbon intensity of electricity.

In their paper the researchers highlight, “All major cloud computing companies acknowledge the need to run their data centres as efficiently as possible in order to address economic and environmental concerns, and recognize that ICT consumes an increasing amount of energy”. Since the end of 2017, Google Cloud Platform has run its data centres entirely on renewable energy, and Microsoft has announced that its global operations have been carbon neutral since 2012. However, not all cloud providers have been able to make such an extensive commitment. For example, Oracle Cloud is currently 100% carbon neutral in Europe, but not in other regions.

The Kubernetes scheduler selects compute nodes based on the real-time carbon intensity of the electric grid in the region they are in. Real-time APIs that report grid carbon intensity are available for an increasing number of regions, but not exhaustively around the planet. In order to effectively demonstrate the scheduler's ability to perform global load balancing, the researchers evaluated it using the metric of solar irradiance.

“While much of the research on DSM focusses on domestic energy consumption there has also been work investigating DSM by cloud data centres”, the paper mentions. Demand-side management (DSM) refers to any initiative that affects how and when electricity is required by consumers.
(Source: CEUR-WS.org)

Existing schedulers consider individual data centres rather than taking a more global view. The Low Carbon Scheduler, on the other hand, considers carbon intensity across regions, since scaling a large number of containers up and down can be done in a matter of seconds.

Each national electric grid contains electricity generated from a variable mix of sources. The carbon intensity of the electricity provided by the grid anywhere in the world is a measure of the amount of greenhouse gas released into the atmosphere from the combustion of fossil fuels for the generation of electricity. Significant generation sites report the volume of electricity input to the grid at regular intervals to the organizations operating the grid (for example, the National Grid in the UK) in real time via APIs. These APIs typically allow retrieval of the production volumes and thus make it possible to calculate the carbon intensity in real time.
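As a rough sketch of this idea (collect per-region grid carbon intensity from real-time APIs and rank regions from least to most carbon intense), consider the following Python fragment. The endpoints and response shape are hypothetical placeholders; real sources such as the ENTSO-E and Elexon BMRS APIs mentioned below each have their own formats.

```python
import requests

# Hypothetical per-region endpoints reporting grid carbon intensity
# (gCO2eq/kWh). The URLs and the "carbon_intensity" field are assumptions
# made purely for illustration.
REGION_APIS = {
    "uk-south": "https://example.org/carbon/uk",
    "france":   "https://example.org/carbon/fr",
    "norway":   "https://example.org/carbon/no",
}

def fetch_intensity(url):
    """Fetch the current carbon intensity for one region; None on failure."""
    try:
        resp = requests.get(url, timeout=5)
        resp.raise_for_status()
        return resp.json()["carbon_intensity"]
    except (requests.RequestException, KeyError, ValueError):
        return None

def rank_regions():
    """Return regions ordered from lowest to highest carbon intensity."""
    readings = {region: fetch_intensity(url) for region, url in REGION_APIS.items()}
    known = {r: v for r, v in readings.items() if v is not None}
    return sorted(known, key=known.get)

if __name__ == "__main__":
    ranking = rank_regions()
    if ranking:
        print("Schedule new pods to:", ranking[0])
```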
The Low Carbon Scheduler collects the carbon intensity from the available APIs and ranks the regions to identify the one with the lowest carbon intensity. For the European Union, such an API is provided by the European Network of Transmission System Operators for Electricity (www.entsoe.eu), and for the UK this is the Balancing Mechanism Reporting Service (www.elexon.co.uk).

Why Kubernetes for building a low carbon scheduler

Kubernetes can make use of GPUs and has also been ported to run on the ARM architecture. The researchers also note that Kubernetes has, to a large extent, won the container orchestration war. It supports extensibility and plugins, which makes it the “most suitable for which to develop a global scheduler and bring about the widest adoption, thereby producing the greatest impact on carbon emission reduction”.

Kubernetes allows schedulers to run in parallel, which means the new scheduler does not need to re-implement the pre-existing, and sophisticated, bin-packing strategies present in Kubernetes. It needs only to apply a scheduling layer that complements the existing capabilities offered by Kubernetes. According to the researchers, “Our design, as it operates at a higher level of abstraction, assures that Kubernetes continues to deal with bin-packing at the node level, while the scheduler performs global-level scheduling between data centres”.

The official Kubernetes documentation describes three possible ways of extending the default scheduler (kube-scheduler): adding new rules to the scheduler source code and recompiling; implementing one's own scheduler process that runs instead of, or alongside, kube-scheduler; or implementing a scheduler extender.

Evaluating the performance of the low carbon Kubernetes scheduler

The researchers recorded the carbon intensities for the countries in which the major cloud providers operate data centres between 18.2.2019 13:00 UTC and 21.4.2019 9:00 UTC. (A table in the original post lists the countries where the largest public cloud providers operated data centres as of April 2019. Source: CEUR-WS.org)

They further ranked all countries by the carbon intensity of their electricity in 30-minute intervals. Among the total set of 30-minute values, Switzerland had the lowest carbon intensity (ranked first) in 0.57% of the intervals, Norway in 0.31%, France in 0.11%, and Sweden in 0.01%. However, the list of the least carbon-intense countries only contains locations in central Europe.

To demonstrate Kubernetes' ability to handle globally distributed deployments, the researchers chose to optimize placement to regions with the greatest degree of solar irradiance, in what they term a Heliotropic Scheduler. The scheduler is called ‘heliotropic’ to differentiate it from a ‘follow-the-sun’ application management policy, which relates to meeting customer demand around the world by placing staff and resources in proximity to those locations (thereby making them available to clients at lower latency and at a suitable time of day). A ‘heliotropic’ policy, on the other hand, goes to where sunlight, and by extension solar irradiance, is abundant.

They further evaluated the Heliotropic Scheduler implementation by running BOINC jobs on Kubernetes. BOINC (Berkeley Open Infrastructure for Network Computing) is a software platform for volunteer computing that allows users to contribute computational capacity from their home PCs towards scientific research. Einstein@Home, SETI@home, and IBM World Community Grid are some of the most widely supported projects.

The researchers say: “Even though many cloud providers are contracting for renewable energy with their energy providers, the electricity these data centres take from the grid is generated with release of a varying amount of greenhouse gas emissions into the atmosphere. Our scheduler can contribute to moving demand for more carbon intense electricity to less carbon intense electricity”.

While the paper concludes that a wind-dominant, solar-complementary strategy is superior for the integration of renewable energy sources into cloud data centres' infrastructure, the Low Carbon Scheduler provides a proof of concept demonstrating how to reduce carbon intensity in cloud computing. To know more about this implementation for lowering carbon emissions, read the research paper.


Do Google Ads secretly track Stack Overflow users?

Vincy Davis
27 Jun 2019
5 min read
Update: A day after a user reported the bug, Nick Craver, the Architecture Lead for Stack Overflow, updated users on the investigation. He says that the fingerprinting issue has emerged from ads relayed through third-party providers. Stack Overflow has been reaching out to experts and the Google Chrome security team and has also filed a bug in the Chrome tracker. Stack Overflow has contacted Google, their ad server, for assistance and is testing deployment of SafeFrame to all ads. The SafeFrame API configures whether all ads on the page should be forced to render inside a SafeFrame container. Stack Overflow is also trying to deploy the Feature-Policy header to block access to most browser features from all components on the page. Craver also specified in the update that Stack Overflow has decided not to turn off these ad campaigns immediately, as they need the repro to fix the issues.

A user by the name greggman discovered the bug. While working in his browser's devtools, he noticed the following message:
(Image source: Stack Overflow Meta website)

greggman then raised the query “Why is Stack Overflow trying to start audio?” on the Stack Overflow Meta website, which is intended for bugs, features, and discussion of Stack Overflow by its users. He found out that the above message appears whenever a particular ad appears on the website. The ad is from Microsoft via Google.
(Image source: Stack Overflow Meta website)

Later, another user, TylerH, did an investigation and revealed some intriguing information about the identified bug. He found that the Google ad is employing the audio API to collect information from the user's browser in an attempt to fingerprint it. He says, “This isn't general speculation, I've spent the last half hour going though the source code linked above, and it goes to considerable lengths to de-anonymize viewers. Your browser may be blocking this particular API, but it's not blocking most of the data.”

TylerH claims that this fingerprint tracking of users is definitely not done for legitimate feature detection. He adds that the technique is applied in aggregate to generate a user fingerprint, which is included along with the advertising ID while recording analytics for the publisher. This is done to detect the following:
- The user's system resolution and accessibility settings
- The audio API capabilities supported by the user's browser
- The mobile browser-specific APIs supported by the user's browser

TylerH states that this bug can detect many other details about the user, without the user's consent. Hence he issues a warning to all Stack Overflow users: “Use an Ad blocker!”

As both these findings gained momentum on the Stack Overflow Meta website, Nick Craver, the Architecture Lead for Stack Overflow, replied to greggman and TylerH, “Thanks for letting us know about this. We are aware of it. We are not okay with it.” Craver also mentioned that Stack Overflow has reached out to Google to obtain their support. He also notified users that “This is not related to ads being tested on the network and is a distinctly separate issue. Programmatic ads are not being tested on Stack Overflow at all.”

Users are annoyed at this response from Craver. Many are not ready to believe that the Architecture Lead for Stack Overflow did not have any idea about this and is only now going to work on it. A user on Hacker News comments that this response from Craver “encapsulates the entire problem with the current state of digital advertising in 1 simple sentence.”

A few users feel this is not surprising at all, as many websites use ads as tracking mechanisms. A HN user says, “Audio feature detection isn't even a novel technique. I've seen trackers look at download stream patterns to detect whether or not BBR congestion control is used, I have seen mouse latency based on the difference between mouse ups and downs in double clocks and I have seen speed-of-interaction checks in mouse movements.”

Another comment reads, “I think ad blocking is a misnomer. What people are trying to do when blocking ads is prevent marketing people from spying on them. And the performance and resource consumption that comes from that. Personal opinion: Laws are needed to make what advertisers are doing illegal. Advertisers are spying on people to the extent where if the government did it they'd need a warrant.”

There is also a user who thinks the situation is not that bad, with Stack Overflow at least taking responsibility for the bug. The user wrote on Hacker News, “Let's be adults here. This is SO, and I imagine you've used and enjoyed the use of their services just like the rest of us. Support them by letting passive ads sit on the edge of the page, and appreciate that they are actually trying to solve this issue.”
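To make the “aggregate into a fingerprint” idea described above concrete, here is a deliberately simplified Python sketch. It is not the ad script's actual code, and the probed property names are invented for illustration; it only shows how a set of detected browser characteristics can be reduced to a single stable identifier.

```python
import hashlib
import json

def fingerprint(properties):
    """Reduce a dict of probed characteristics to one stable identifier.

    Serialising with sorted keys makes the hash deterministic, so the same
    browser configuration always yields the same fingerprint.
    """
    canonical = json.dumps(properties, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical values of the kind a tracking script might probe.
probed = {
    "screen": "2560x1440",
    "audio_api": "supported",
    "reduced_motion": False,
    "touch_points": 0,
    "timezone": "Europe/Berlin",
}

print(fingerprint(probed))  # same inputs -> same identifier across visits
```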

The Go team shares new proposals planned to be implemented in Go 1.13 and 1.14

Bhagyashree R
27 Jun 2019
5 min read
Yesterday, the Go team shared the details of what is coming in Go 1.13, the first release implemented using the new proposal evaluation process. In this process, feedback is taken from the community on a small number of proposals to reach the final decision. The team also shared which proposals they have selected to implement in Go 1.14, and the next steps.

At GopherCon 2017, Russ Cox, Go programming language tech lead at Google, first disclosed the plan behind the implementation of Go 2. The plan was simple: the updates would be done in increments and would have minimal to no effect on everybody else.

Updates in Go 1.13

Go 1.13, which marks the first increment towards Go 2, is planned for release in early August this year. A number of language changes have landed in this release, shortlisted from the huge list of Go 2 proposals based on the new proposal evaluation process. These proposals are selected under the criteria that they should address a problem, have minimal disruption, and provide a clear and well-understood solution. The team selected “relatively minor and mostly uncontroversial” proposals for this version. The changes are backward-compatible, as modules, Go's new dependency management system, are not the default build mode yet. Go 1.11 and Go 1.12 include preliminary support for modules, which makes dependency version information explicit and easier to manage.

Proposals planned to be implemented in Go 1.13

The proposals that were initially planned to be implemented in Go 1.13 were:
- General Unicode identifiers based on Unicode TR31: This proposes adding support for enabling programmers using non-Western alphabets to combine characters in identifiers and export uncased identifiers.
- Binary integer literals and support for _ in number literals: Go comes with support for octal, hexadecimal, and standard decimal literals. However, unlike other mainstream languages like Java 7, Python 3, and Ruby, it does not have support for binary integer literals. This proposes adding support for binary integer literals with a new prefix for integer literals, 0b or 0B. Another minor update is adding support for a blank (_) as a separator in number literals to improve the readability of long numbers.
- Permit signed integers as shift counts: This proposes changing the language spec so that the shift count can be a signed or unsigned integer, or any non-negative constant value that can be represented as an integer.

Of these shortlisted proposals, the binary integer literals, separators for number literals, and signed integer shift counts are implemented. The general Unicode identifiers proposal was not implemented, as there was no “concrete design document in place in time.” The proposal to support binary integer literals was significantly expanded, leading to an overhauled and modernized number literal syntax for Go.
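As a quick illustration of the number-literal forms discussed above: Go 1.13 adopts syntax that Python 3, one of the languages the proposal cites as precedent, already supports, so the following Python snippet shows what the new Go literals look and behave like.

```python
# Binary integer literal with the 0b prefix.
flags = 0b1010_0110            # underscores may separate digit groups
print(flags)                   # 166

# Underscore separators also work in decimal and hexadecimal literals.
population = 7_800_000_000
colour = 0xFF_80_00
print(population, hex(colour))
```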
Updates in Go 1.14

After the relatively minor updates in Go 1.13, the team plans to take it up a notch with Go 1.14. With the new major version, Go 2, their overarching goal is to provide programmers with improved scalability. To achieve this, the team has to tackle the three biggest hurdles: package and version management, better error handling support, and generics. The first hurdle, package and version management, will be addressed by the modules feature, which is growing stronger with each release. For the other two, the team presented draft designs at last year's GopherCon in Denver.
https://youtu.be/6wIP3rO6On8

Proposals planned to be implemented in Go 1.14

Following are the proposals that are shortlisted for Go 1.14:
- A built-in Go error check function, ‘try’: This proposes a new built-in function named ‘try’ for error handling. It is designed to remove the boilerplate ‘if’ statements typically associated with error handling in Go.
- Allow embedding overlapping interfaces: This is a backward-compatible proposal to make interface embedding more tolerant.
- Diagnose ‘string(int)’ conversion in ‘go vet’: This proposes to remove the explicit type conversion string(i) where ‘i’ has an integer type other than ‘rune’. The team is making this backward-incompatible change because the conversion was introduced in the early days of Go and has become quite confusing to comprehend.
- Adopt crypto principles: This proposes implementing the design principles for cryptographic libraries outlined in the Cryptography Principles document.

The team is now seeking community feedback on these proposals. “We are especially interested in fact-based evidence illustrating why a proposal might not work well in practice or problematic aspects we might have missed in the design. Convincing examples in support of a proposal are also very helpful,” the blog post reads.

While developers are confident that Go 2 will bring a lot of exciting features and enhancements, not everyone is a fan of some of the proposed features, for instance the try function. “I dislike the try implementation, one of Go's strengths for me after working with Scala is the way it promotes error handling to a first class citizen in writing code, this feels like its heading towards pushing it back to an afterthought as tends to be the case with monadic operations,” a developer commented on Hacker News.

Some Twitter users also expressed their dislike of the proposed try function:
https://twitter.com/nicolasparada_/status/1144005409755357186
https://twitter.com/dullboy/status/1143934750702362624

These were some of the updates proposed for Go 1.13 and Go 1.14. To know more about this news, check out the Go Blog.


Elastic Stack 7.2.0 releases Elastic SIEM and general availability of Elastic App Search

Vincy Davis
27 Jun 2019
4 min read
Yesterday, the team behind Elastic Stack announced the release of Elastic Stack 7.2.0. The major highlight of this release is the free availability of Elastic SIEM (Security Information and Event Management) as part of Elastic's default distribution. The Elastic SIEM app provides interactivity, ad hoc search, and responsive drill-downs, packaged into an intuitive product experience. Elastic Stack 7.2.0 also makes Elastic App Search freely available to its users; until now it was only available as a hosted service. With this release, Elastic has also advanced its Kubernetes and container monitoring initiative to include monitoring of the NATS open source messaging system and CoreDNS, and to support CRI-O format container logs.
https://youtu.be/bmx13X87e2s

What is Elastic SIEM?

The SIEM app is an interactive UI workspace for security teams to triage events and perform initial investigations. It provides a Timeline Event Viewer that allows analysts to gather and store evidence of an attack, pin and comment on relevant events, and share their findings, all from within Kibana, an open source data visualization plugin for Elasticsearch. Elastic SIEM is being introduced as a beta in the 7.2 release of the Elastic Stack.
(Image source: Elastic blog)

The Elastic SIEM app enables analysis of host-related and network-related security events as part of alert investigations or interactive threat hunting, including the following:
- The Hosts view in the SIEM app provides key metrics regarding host-related security events, and a set of data tables that enable interaction with the Timeline Event Viewer.
- The Network view in the SIEM app informs analysts of key network activity metrics, facilitates investigation-time enrichment, and provides network event tables that enable interaction with the Timeline Event Viewer.
- Analysts can easily drag objects of interest into the Timeline Event Viewer to create the required query filter to get to the bottom of an alert. With auto-saving, the results of an investigation remain available for incident response teams.

Elastic SIEM is available on the Elasticsearch Service on Elastic Cloud, or for download. Since this is a major feature of Elastic Stack, it has got people quite excited.
https://twitter.com/cbnetsec/status/1143661272594096128
https://twitter.com/neu5ron/status/1143623893476958208
https://twitter.com/netdogca/status/1143581280837107714
https://twitter.com/tommyyyyyyyy/status/1143791589325725696

General availability of Elastic App Search on-premise

With Elastic Stack 7.2.0, the Elastic App Search product becomes freely available as a downloadable, self-managed search solution. Though Elastic App Search has been around for over a decade as a cloud-based solution, users of Elastic will now have greater flexibility to build fluid and engaging search experiences. As part of this release, the following services will be offered in downloadable form:
- Simple and focused data ingestion
- Powerful search APIs and UI frameworks
- Insightful analytics
- Intuitive relevance controls

Elastic Stack 7.2.0 also introduces the Metrics Explorer. It enables users to quickly visualize the most important infrastructure metrics and interact with them using common tags and chart groupings inside the Infrastructure app. With this feature, users can create a chart and see it on the dashboard.

Other Highlights
- Elasticsearch simplifies search-as-you-type, adds a UI around snapshot/restore, gives more control over relevance without sacrificing performance, and much more.
- Kibana makes it even easier to build a secure, multi-tenant Kibana instance with advanced RBAC for Spaces. Elastic Stack 7.2.0 also introduces kiosk mode for Canvas, and maps created in the new Maps app can now be embedded in any Kibana dashboard. There are also new easy-on-your-eyes dark-mode map tiles and much more.
- Beats improves edge-based processing with a new JavaScript processor, and more.
- Logstash gets faster with the Java execution pipeline going GA. It now fully supports JMS as an input and output, and more.

Users are very impressed with the features introduced in Elastic Stack 7.2.0.
https://twitter.com/mikhail_khusid/status/1143695869411307526
https://twitter.com/markcartertm/status/1143652867284189184

Visit the Elastic blog for more details.


Fedora Workstation 31 to come with Wayland support, improved core features of PipeWire, and more

Bhagyashree R
26 Jun 2019
3 min read
On Monday, Christian F.K. Schaller, Senior Manager for Desktop at Red Hat, shared a blog post outlining the various improvements and features coming in Fedora Workstation 31. These include Wayland improvements, more PipeWire functionality, continued improvements around Flatpak, Fleet Commander, and more. Here are some of the enhancements coming to Fedora Workstation 31:

Wayland transition to be completed soon

Wayland is a desktop server protocol that was introduced to replace the X Windowing System with a modern and simpler windowing system in Linux and other Unix-like operating systems. The team is focusing on removing the X Windowing System dependency so that the GNOME Shell will be able to run without the need for XWayland. Schaller shared that the work related to removing the X dependency is done for the shell itself; however, some work remains in regards to the GNOME Settings daemon. Once this work is complete, an X server (XWayland) will only start if an X application is run and will shut down when the application is stopped.

Another aspect the team is working on is allowing X applications to run as root under XWayland. Running desktop applications as root is generally not considered safe; however, there are a few applications that only work when run as root, which is why the team has decided to continue supporting running applications as root in XWayland. The team is also adding support for the NVIDIA binary driver to allow running a native Wayland session on top of it.

PipeWire with improved desktop sharing portal

PipeWire is a multimedia framework that aims to improve the handling of audio and video in Linux. This release will come with improved core features of PipeWire. The existing desktop sharing portal has been enhanced and will soon have Miracast support. The team's ultimate goal is to make the GNOME integration even more seamless than the standalone app.

Better infrastructure for building Flatpaks

Flatpak is a utility for software deployment and package management in Linux. The team is improving the infrastructure for building Flatpaks from RPMs. They will also be offering applications from flathub.io and quay.io out of the box, in accordance with Fedora rules for third-party software. The team will additionally make a Red Hat UBI-based runtime available; a third-party developer can use this runtime to build their applications and be sure that it will be supported by Red Hat for the lifetime of a given RHEL release.

Fedora Toolbox with improved GNOME Terminal

Fedora Toolbox is a tool that gives developers a seamless experience when using an immutable OS like Silverblue. Currently, improvements are being made to GNOME Terminal that will ensure more natural behavior inside the terminal when interacting with pet containers. The team is looking for ways to make the selection of containers more discoverable, so that developers can easily get access to, for instance, a Red Hat UBI container or a Red Hat TensorFlow container.

Along with these, the team is improving the infrastructure for Linux fingerprint reader support, securing GameMode, adding support for the Dell Totem, improving media codec support, and more. To know more in detail, check out Schaller's blog post.


Amazon launches VPC Traffic Mirroring for capturing and inspecting network traffic

Amrata Joshi
26 Jun 2019
4 min read
Yesterday, the team at AWS launched VPC Traffic Mirroring, a new feature that can be used with existing Virtual Private Clouds (VPCs) to capture and inspect network traffic at scale.
https://twitter.com/nickpowpow/status/1143550924125868033

Features of VPC Traffic Mirroring

Detecting and responding to network attacks: Users can detect network and security anomalies, extract traffic of interest from any workload in a VPC, and route it to their detection tools with VPC Traffic Mirroring. Users can detect and respond to attacks more quickly than with traditional log-based tools.

Better network visibility: Users get the network visibility and control needed to make better security decisions.

Regulatory and compliance requirements: It is now possible to meet regulatory and compliance requirements that mandate monitoring, logging, and so on.

Troubleshooting: Users can mirror application traffic internally for testing and troubleshooting and analyze traffic patterns. It is now easy to proactively locate choke points that hamper the performance of applications.

The blog post reads, “You can think of VPC Traffic Mirroring as a ‘virtual fiber tap’ that gives you direct access to the network packets flowing through your VPC.”

Mirror traffic from any EC2 instance

Users can choose to capture all traffic, or use filters to capture only the packets of particular interest, and can limit the number of bytes captured per packet. VPC Traffic Mirroring can be used in a multi-account AWS environment to capture traffic from VPCs spread across many AWS accounts. Users can mirror traffic from any EC2 instance powered by the AWS Nitro system.

It is now possible to replicate the network traffic from an EC2 instance within an Amazon Virtual Private Cloud (Amazon VPC) and forward it to security and monitoring appliances for use cases such as threat monitoring, content inspection, and troubleshooting. These appliances can be deployed on an individual Amazon EC2 instance or on a fleet of instances behind a Network Load Balancer (NLB) with a User Datagram Protocol (UDP) listener. Amazon VPC Traffic Mirroring also supports traffic filtering and packet truncation, allowing customers to extract only the traffic they are interested in monitoring.

Improved security

VPC Traffic Mirroring captures packets at the Elastic Network Interface (ENI) level, where they cannot be tampered with, thus strengthening security. Users can choose to analyze their network traffic with a wide range of monitoring solutions that are integrated with Amazon VPC Traffic Mirroring on AWS Marketplace.

Key elements of VPC Traffic Mirroring
- Mirror source: An AWS network resource within a particular VPC that is used as the source of traffic. VPC Traffic Mirroring supports Elastic Network Interfaces (ENIs) as mirror sources.
- Mirror target: An ENI or Network Load Balancer that serves as the destination for the mirrored traffic. The mirror target can be in the same AWS account as the mirror source, or in a different account for implementation of the central-VPC model.
- Mirror filter: A specification of the inbound or outbound traffic that is to be captured or skipped. It can specify a protocol, ranges for the source and destination ports, and CIDR blocks for the source and destination.
- Traffic mirror session: A connection between a mirror source and a target that uses a filter. Sessions are numbered, evaluated in order, and the first match (accept or reject) is used to determine the fate of the packet. A given packet is sent to at most one target.
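To see how these elements fit together, here is a hedged boto3 sketch of creating a target, a filter with one rule, and a session. The ENI IDs are placeholders, and the exact parameters should be checked against the AWS documentation before use.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder ENI IDs: substitute your own source and target interfaces.
SOURCE_ENI = "eni-0123456789abcdef0"   # instance whose traffic is mirrored
TARGET_ENI = "eni-0fedcba9876543210"   # appliance receiving mirrored packets

# 1. Mirror target: where mirrored packets are delivered.
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId=TARGET_ENI,
    Description="monitoring appliance",
)["TrafficMirrorTarget"]

# 2. Mirror filter plus a rule selecting the traffic of interest.
filt = ec2.create_traffic_mirror_filter(
    Description="inbound TCP only",
)["TrafficMirrorFilter"]

ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=filt["TrafficMirrorFilterId"],
    TrafficDirection="ingress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=6,                      # TCP
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# 3. Mirror session tying source, target, and filter together.
session = ec2.create_traffic_mirror_session(
    NetworkInterfaceId=SOURCE_ENI,
    TrafficMirrorTargetId=target["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=filt["TrafficMirrorFilterId"],
    SessionNumber=1,
    Description="mirror web tier to IDS",
)["TrafficMirrorSession"]

print("Created session:", session["TrafficMirrorSessionId"])
```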
VPC Traffic Mirroring is now available, and customers can start using it in all commercial AWS Regions except Asia Pacific (Sydney), China (Beijing), and China (Ningxia); support for these regions is pending and will be added soon, as per the official post. To know more about this news, check out Amazon's official blog post.

Introducing TensorWatch, a debugging and visualization tool

Amrata Joshi
26 Jun 2019
3 min read
Yesterday, the team at Microsoft introduced TensorWatch, an open source debugging and visualization tool designed for deep learning, data science, and reinforcement learning.
https://twitter.com/MSFTResearch/status/1143574610820026368

TensorWatch works in Jupyter Notebook and shows real-time visualization of machine learning training. It can also perform several key analysis tasks for models and data. It is flexible and extensible, so users can build their own custom visualizations, UIs, and dashboards. It can execute arbitrary queries against a live ML training process, return a stream as the result of each query, and display that stream using a visualizer. TensorWatch is under development and aims to provide a platform for debugging machine learning in an easy-to-use, extensible, and hackable package.

The official blog post reads, “We like to think of TensorWatch as the Swiss Army knife of debugging tools with many advanced capabilities researchers and engineers will find helpful in their work. We presented TensorWatch at the 2019 ACM SIGCHI Symposium on Engineering Interactive Computing Systems.”

Key features of TensorWatch

Easy customization and visualizations: TensorWatch uses Jupyter Notebook instead of prepackaged user interfaces that are often difficult to customize. It provides interactive debugging of real-time training processes using either a composable UI in Jupyter Notebooks or live shareable dashboards in Jupyter Lab. As TensorWatch is a Python library, users can build their own custom UIs or use TensorWatch within the vast Python data science ecosystem. It supports several standard visualization types, including histograms, bar charts, pie charts, and 3D variations.

Streams: In TensorWatch's architecture, data and other objects such as files, the console, sockets, cloud storage, and even visualizations themselves are treated as streams. TensorWatch streams can listen to other streams, which allows the creation of custom data flow graphs and lets users implement a variety of advanced scenarios. The blog post reads, “For example, you can render many streams into the same visualization, or one stream can be rendered in many visualizations simultaneously, or a stream can be persisted in many files, or not persisted at all.”

Lazy logging mode: With TensorWatch, the team introduced a lazy logging mode that does not require explicit logging of all information beforehand. TensorWatch lets users observe and track variables, including large models or entire batches, during training, and perform interactive queries that run in the context of these variables and return streams as results. The blog reads, “For example, you can write a lambda expression that computes mean weight gradients in each layer in the model at the completion of each batch and send the result as a stream of tensors that can be plotted as a bar chart.”

Users seem excited about this news, as TensorWatch will help them visualize streams of data in real time.
https://twitter.com/CSITsites/status/1143735826028908544
https://twitter.com/alxndrkalinin/status/1136386187336269834
https://twitter.com/RitchieNg/status/1133678155015704576

To know more about this news, check out Microsoft's blog post.
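The lazy-logging idea described above can be illustrated with a small, self-contained sketch. The class and method names below are invented for illustration and are not TensorWatch's actual API; the sketch only shows the core concept: nothing is logged up front, and an observer attaches an arbitrary query (a lambda) that is evaluated against live training state on demand.

```python
class LazyWatcher:
    """Toy stand-in for a lazy-logging watcher (not the TensorWatch API)."""

    def __init__(self):
        self._queries = []          # (name, callable) pairs attached later

    def attach_query(self, name, fn):
        """Register an arbitrary query to be evaluated against live state."""
        self._queries.append((name, fn))

    def observe(self, **state):
        """Called from the training loop; evaluates queries on demand."""
        return {name: fn(state) for name, fn in self._queries}


watcher = LazyWatcher()
# The query is defined by the observer, not hard-coded in the training loop.
watcher.attach_query("loss_x10", lambda s: 10 * s["loss"])

# A fake "training loop" exposing its variables each step.
for step in range(3):
    loss = 1.0 / (step + 1)
    results = watcher.observe(step=step, loss=loss)
    print(step, results)
```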

Introducing PyOxidizer, an open source utility for producing standalone Python applications, written in Rust

Bhagyashree R
26 Jun 2019
4 min read
On Monday, Gregory Szorc, a Developer Productivity Engineer at Airbnb, introduced PyOxidizer, a Python application packaging and distribution tool written in Rust. The tool is available for Windows, macOS, and Linux. Sharing his vision behind the tool, Szorc wrote in the announcement, “I want PyOxidizer to provide a Python application packaging and distribution experience that just works with a minimal cognitive effort from Python application maintainers.”

https://twitter.com/indygreg/status/1143187250743668736

PyOxidizer aims to solve complex packaging and distribution problems so that developers can put their effort into building applications instead of juggling build systems and packaging tools. According to the GitHub README, “PyOxidizer is a collection of Rust crates that facilitate building libraries and binaries containing Python interpreters.” Its most visible component is the ‘pyoxidizer’ command line tool, which you can use to create new projects, add PyOxidizer to existing projects, produce binaries containing a Python interpreter, and perform various related tasks.

How PyOxidizer is different from other Python application packaging/distribution tools

PyOxidizer provides the following benefits over other Python application packaging/distribution tools:

- It works across all popular platforms, unlike many other tools that only target Windows or macOS.
- It works even if the executing system does not have Python installed.
- It does not have special system requirements such as SquashFS or container runtimes.
- Its startup performance is comparable to traditional Python execution.
- It supports single-file executables with minimal or no system dependencies.

Here are some of the features PyOxidizer comes with:

Generates a standalone single executable file

One of the most important features of PyOxidizer is that it can produce a single executable file containing a fully featured Python interpreter, its extensions, the standard library, and your application's modules and resources. Because PyOxidizer exposes its lower-level functionality, it can also be used as a tool and software library for embedding self-contained Python interpreters.

Serves as a bridge between Rust and Python

The ‘Oxidizer’ part of PyOxidizer comes from Rust. Internally, it uses Rust to produce executables and to manage the embedded Python interpreter and its operations. Along with solving the packaging and distribution problem with Rust, PyOxidizer can also serve as a bridge between the two languages, making it possible to add a Python interpreter to any Rust project and vice versa. With PyOxidizer, you can bootstrap a new Rust project that contains an embedded version of Python and your application. “Initially, your project is a few lines of Rust that instantiates a Python interpreter and runs Python code. Over time, the functionality could be (re)written in Rust and your previously Python-only project could leverage Rust and its diverse ecosystem,” explained Szorc.

Szorc chose Rust for the run-time and build-time components because he considers it one of the superior systems programming languages and because it does not require considerable effort to solve difficult problems like cross-compiling. He believes that implementing the embedding component in Rust also opens more opportunities to embed Python in Rust programs. “This is largely an unexplored area in the Python ecosystem and the author hopes that PyOxidizer plays a part in more people embedding Python in Rust,” he added.
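Because the standard library and application modules are bundled inside the executable, imports are served from data embedded in the binary rather than from files on disk. The sketch below is a rough, pure-Python illustration of that idea using a custom meta-path finder; it is not PyOxidizer's actual machinery (which is implemented in Rust), and the `hello` module and its source are invented purely for illustration.

```python
import importlib.abc
import importlib.util
import sys

# Module sources held entirely in memory, keyed by module name.
IN_MEMORY_SOURCES = {
    "hello": "def greet():\n    return 'hello from an in-memory module'\n",
}

class InMemoryFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, fullname, path, target=None):
        if fullname in IN_MEMORY_SOURCES:
            return importlib.util.spec_from_loader(fullname, self)
        return None  # fall through to the normal file-based importers

    def create_module(self, spec):
        return None  # use Python's default module object

    def exec_module(self, module):
        source = IN_MEMORY_SOURCES[module.__name__]
        exec(compile(source, f"<memory:{module.__name__}>", "exec"), module.__dict__)

sys.meta_path.insert(0, InMemoryFinder())

import hello                 # resolved from memory, no file I/O
print(hello.greet())
```

PyOxidizer applies the same general idea at a lower level, serving module bytecode from memory inside the executable, which is what the next section's performance claims rest on.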
PyOxidizer executables are faster to start and import

During execution, binaries built with PyOxidizer do not have to do anything special, such as creating a temporary directory, to run the Python interpreter. Everything is loaded directly from memory without any explicit I/O operations. When a Python module is imported, its bytecode is loaded from a memory address in the executable using zero-copy. This makes executables produced by PyOxidizer faster to start and import.

PyOxidizer is still in its early stages. Yesterday's initial release is good at producing executables embedding Python; however, not much has been implemented yet to solve the distribution part of the problem. Some of the missing features we can expect in the future are an official build environment, support for C extensions, more robust packaging support, easy distribution, and more. The creator encourages Python developers to try the tool and share feedback with him or file an issue on GitHub. You can also contribute to the project via Patreon or PayPal.

Many users are excited to try this tool:

https://twitter.com/kevindcon/status/1143750501592211456
https://twitter.com/acemarke/status/1143389113871040517

Read the announcement made by Szorc to know more in detail.

Python 3.8 beta 1 is now ready for you to test
PyPI announces 2FA for securing Python package downloads
Matplotlib 3.1 releases with Python 3.6+ support, secondary axis support, and more

Apache Kafka 2.3 is here! 

Vincy Davis
26 Jun 2019
3 min read
Two days ago, the Apache Kafka team released the latest version of their open source distributed data streaming software, Apache Kafka 2.3. This release has several improvements to Kafka Core, Connect, and the Streams REST API. It adds a new maximum log compaction lag, improves monitoring for partitions and fairness in SocketServer processors, and much more.

What’s new in Apache Kafka 2.3?

Kafka Core

Reduced the amount of time the broker spends scanning log files

The JIRA behind this change optimizes broker startup so that Kafka only has to check its unflushed log segments. In earlier versions, log recovery time grew with the total number of log segments; with Kafka 2.3 it is proportional to the number of unflushed log segments, which has yielded roughly a 50% reduction in broker startup time.

Improved monitoring for partitions which have lost replicas

In this release, Kafka Core adds metrics showing partitions that have exactly the minimum number of in-sync replicas. By monitoring these metrics, users can see partitions that are on the verge of becoming under-replicated. A --under-min-isr command line flag has also been added to the kafka-topics command, letting users easily see which topics have fewer than the minimum number of in-sync replicas.

Added a Maximum Log Compaction Lag

In earlier versions, after the latest value for a key was written, previous values for that key would, to a first approximation, get compacted away after some time, with no upper bound on how long that could take. With this release, it is now possible to set the maximum amount of time an old value can stick around: the new parameter max.log.compaction.time.ms specifies how long an old value may possibly live in a compacted topic. This will enable Apache Kafka to comply with data retention regulations such as the GDPR.

Improved fairness in SocketServer processors

Apache Kafka 2.3 prioritizes existing connections over new ones, which improves the broker's resilience to connection storms. It also adds a max.connections per-broker setting. Kafka Core has also improved failure handling in the replica fetcher.

Incremental Cooperative Rebalancing in Kafka Connect

In Kafka Connect, worker tasks are distributed among the available worker nodes. When a connector is reconfigured or a new connector is deployed, as well as when a worker is added or removed, the tasks must be rebalanced across the Connect cluster. This helps ensure that all of the worker nodes are doing a fair share of the Connect work. With Kafka 2.3, rebalancing is incremental and cooperative, making configuration changes less disruptive. Kafka Connect has also added connector contexts to Connect worker logs.

Kafka Streams

Users are allowed to store record timestamps in RocksDB

Kafka Streams now includes timestamps in the state store. This lays the groundwork for future features such as handling out-of-order messages in KTables and implementing TTLs for KTables.

Added in-memory window store and session store

This release includes in-memory implementations of the Kafka Streams window store and session store. The in-memory implementations provide higher performance in exchange for not persisting to disk. Kafka Streams has also added KStream.flatTransform and KStream.flatTransformValues.

https://twitter.com/apachekafka/status/1138872848678653952

These are only selected updates; head over to the Apache blog for more details.
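To make the compaction-lag guarantee described above concrete, here is a toy Python model of its effect: once a value has been superseded by a newer value for the same key and has been around longer than the configured maximum lag, it must have been compacted away. This is purely illustrative; Kafka's actual log cleaner works on segments and dirty ratios rather than record by record, and the topic data below is made up.

```python
# Toy model of "an old value may not outlive the maximum compaction lag".
# Illustrative only -- not Kafka's actual cleaner logic.
from typing import List, Tuple

Record = Tuple[str, str, int]  # (key, value, append timestamp in ms)

def values_guaranteed_removed(log: List[Record], now_ms: int, max_lag_ms: int) -> List[Record]:
    """Return superseded records that must have been compacted away by now_ms."""
    latest_index = {key: i for i, (key, _, _) in enumerate(log)}
    removed = []
    for i, (key, value, ts) in enumerate(log):
        superseded = i != latest_index[key]      # a newer value exists for this key
        past_lag = (now_ms - ts) >= max_lag_ms   # old record has exceeded the lag bound
        if superseded and past_lag:
            removed.append((key, value, ts))
    return removed

log = [("user-1", "v1", 0), ("user-1", "v2", 60_000), ("user-2", "v1", 30_000)]
print(values_guaranteed_removed(log, now_ms=120_000, max_lag_ms=90_000))
# [('user-1', 'v1', 0)] -- the superseded value for user-1 is past the bound
```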
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available
Confluent, an Apache Kafka service provider adopts a new license to fight against cloud service providers
Twitter adopts Apache Kafka as their Pub/Sub System

A vulnerability discovered in Kubernetes kubectl cp command can allow malicious directory traversal attack on a targeted system

Amrata Joshi
25 Jun 2019
3 min read
Last week, the Kubernetes team announced that a security issue (CVE-2019-11246) had been discovered in the Kubernetes kubectl cp command. According to the team, this issue could lead to a directory traversal in such a way that a malicious container could replace or create files on a user’s workstation. The vulnerability affects kubectl, the command line interface used to run commands against Kubernetes clusters.

The vulnerability was discovered by Charles Holmes of Atredis Partners as part of the ongoing Kubernetes security audit sponsored by CNCF (Cloud Native Computing Foundation). This particular issue is a client-side defect and requires user interaction to exploit. According to the post, the issue is of high severity, and the Kubernetes team encourages users to upgrade kubectl to Kubernetes 1.12.9, 1.13.6, or 1.14.2 or later to fix it. To upgrade, users should follow the installation instructions in the docs. The announcement reads, “Thanks to Maciej Szulik for the fix, to Tim Allclair for the test cases and fix review, and to the patch release managers for including the fix in their releases.”

The kubectl cp command allows copying files between containers and the user's machine. To copy files from a container, Kubernetes runs tar inside the container to create an archive, copies it over the network, and kubectl then unpacks it on the user's machine. If the tar binary in the container is malicious, it could run arbitrary code and produce unexpected, malicious results. An attacker could use this to write files to any path on the user's machine when kubectl cp is called, limited only by the system permissions of the local user. A short illustrative sketch of the kind of path check that prevents this appears at the end of this article.

The current vulnerability is quite similar to CVE-2019-1002101, an earlier issue in the kubectl binary, specifically in the kubectl cp command, which an attacker could exploit to write files to any path on the user's machine.

Wei Lien Dang, co-founder and vice president of product at StackRox, said, “This vulnerability stems from incomplete fixes for a previously disclosed vulnerability (CVE-2019-1002101). This vulnerability is concerning because it would allow an attacker to overwrite sensitive file paths or add files that are malicious programs, which could then be leveraged to compromise significant portions of Kubernetes environments.”

Users are advised to run kubectl version --client; if it does not report client version 1.12.9, 1.13.6, or 1.14.2 or newer, they are running a vulnerable version and should upgrade.

To know more about this news, check out the announcement.

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!
HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
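The class of bug behind both CVEs is unpacking an attacker-controlled tar archive without validating member paths. kubectl and its fix are written in Go; the sketch below is only a generic Python illustration of the kind of check involved, rejecting archive entries that would resolve outside the destination directory. The archive and directory names used here are hypothetical.

```python
import os
import tarfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a tar archive, refusing members that would escape dest_dir.

    Illustrative only: a production-grade version also has to handle
    symlink and hard-link members, permissions, and similar edge cases.
    """
    dest_dir = os.path.realpath(dest_dir)
    with tarfile.open(archive_path) as archive:
        for member in archive.getmembers():
            target = os.path.realpath(os.path.join(dest_dir, member.name))
            if target != dest_dir and not target.startswith(dest_dir + os.sep):
                # e.g. a member named '../../.ssh/authorized_keys'
                raise ValueError(f"blocked path traversal attempt: {member.name}")
        archive.extractall(dest_dir)

# Hypothetical usage: unpack an archive copied from a container into ./copied-files
safe_extract("from-container.tar", "./copied-files")
```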

Qt and LG Electronics partner to make webOS as the platform of choice for embedded smart devices

Amrata Joshi
25 Jun 2019
3 min read
Qt and LG Electronics have partnered to provide webOS as the platform for embedded smart devices in the automotive, robotics, and smart home sectors. webOS, also known as LG webOS, is a Linux kernel-based multitasking operating system for smart devices. The webOS platform powers smart home devices including LG Smart TVs and smart home appliances, and it can also deliver greater consumer benefits in high-growth industries such as the automotive sector. The system UI of LG webOS is written mostly using Qt Quick 2 and Qt technology. In March last year, LG announced an open-source edition of webOS.

I.P. Park, president and CTO of LG Electronics, said in a statement, “Smart devices have the potential to deliver an unmatched customer experience wherever we may be – in our homes, cars, and anywhere in between.” Park further added, “Our partnership with Qt enables us to dramatically enhance webOS, providing our customers with the most advanced platform for the creation of highly immersive devices and services. We look forward to continuing our long-standing collaboration with Qt to deliver memorable experiences in the exciting areas of automotive, smart homes and robotics.”

LG selected Qt as its business and technical partner for webOS to meet challenging requirements and to navigate the market dynamics of the automotive, smart home, and robotics industries. Through this partnership, Qt will provide LG with an end-to-end, integrated, and hardware-agnostic development environment for engineers, developers, and designers to create innovative and immersive apps and devices. webOS will also officially become a reference operating system of Qt. The partnership will help customers leverage webOS’s middleware-enabled functionality, saving them time and effort in their embedded development projects. Qt’s feature-rich development tools, such as Qt Creator, Qt Design Studio, and Qt 3D Studio, will also support webOS.

Juha Varelius, CEO of Qt, said to us, “LG has been a technology leader for generations, which is one of the many reasons they’ve become such a trusted partner of Qt.” Varelius further added, “With the company’s initiative to expand the reach of webOS into rapidly growing markets, LG is underscoring the massive potential of Qt-enabled connected experiences. By collaborating with LG on this initiative, we’re able to make it easy as possible for our customers to build devices that bring a new definition to the word ‘smart’.”

Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more
Qt Creator 4.9.0 released with language support, QML support, profiling and much more
Qt installation on different platforms [Tutorial]