Tech News

DeepMind's Alphastar AI agent will soon anonymously play with European StarCraft II players

Sugandha Lahoti
11 Jul 2019
4 min read
Earlier this year, DeepMind's AI AlphaStar defeated two professional players at StarCraft II, a real-time strategy video game. Now, European StarCraft II players will get a chance to face off against experimental versions of AlphaStar as part of DeepMind's ongoing AI research.

https://twitter.com/MaxBakerTV/status/1149067938131054593

AlphaStar learns by imitating the basic micro- and macro-strategies used by players on the StarCraft ladder. A neural network was initially trained using supervised learning on anonymised human games released by Blizzard. Once the agents have been trained from human game replays, they are trained against other competitors in the "AlphaStar league". This is where a multi-agent reinforcement learning process starts: new competitors, branched from existing ones, are added to the league, and each agent then learns from games against the other competitors. This ensures that each competitor performs well against the strongest strategies and does not forget how to defeat earlier ones.
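The league setup described above is a form of population-based self-play. As a rough, hedged illustration of the idea (a toy sketch, not DeepMind's actual training code; the "game" and "training" here are random stand-ins):

```python
# Toy sketch of league-style self-play: a learner trains against a frozen
# population of past agents and is periodically branched back into it.
import random

class Agent:
    def __init__(self, skill=0.0):
        self.skill = skill

    def branch(self):
        # New competitors are branched from existing ones.
        return Agent(self.skill)

def play(a, b):
    # Toy match: higher skill wins more often (Elo-style probability).
    return random.random() < 1 / (1 + 10 ** ((b.skill - a.skill) / 400))

def train_league(rounds=1000, branch_every=100):
    league = [Agent()]   # frozen past competitors
    learner = Agent()    # the agent currently being trained
    for step in range(1, rounds + 1):
        # Play against the whole league, not just the newest agent, so that
        # strategies that beat older competitors are never "forgotten".
        opponent = random.choice(league)
        learner.skill += 1.0 if play(learner, opponent) else -0.5
        if step % branch_every == 0:
            league.append(learner)      # freeze a snapshot into the league
            learner = learner.branch()  # branch a new learner from it
    return league, learner
```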
Anyone who wants to participate in this experiment will have to opt in to the chance to play against the StarCraft II program, via an option in an in-game pop-up window. Users can change their opt-in selection at any time.

To ensure anonymity, all games will be blind test matches: European players who opt in won't know whether they've been matched up against AlphaStar. This helps ensure that all games are played under the same conditions, as players may react differently when they know they're up against an AI. A win or a loss against AlphaStar will affect a player's MMR (Matchmaking Rating) like any other game played on the ladder.

"DeepMind is currently interested in assessing AlphaStar's performance in matches where players use their usual mix of strategies," Blizzard said in its blog post. "Having AlphaStar play anonymously helps ensure that it is a controlled test, so that the experimental versions of the agent experience gameplay as close to a normal 1v1 ladder match as possible. It also helps ensure all games are played under the same conditions from match to match."

Some people have appreciated the anonymous testing feature. A Hacker News user commented, "Of course the anonymous nature of the testing is interesting as well. Big contrast to OpenAI's public play test. I guess it will prevent people from learning to exploit the bot's weaknesses, as they won't know they are playing a bot at all. I hope they eventually do a public test without the anonymity so we can see how its strategies hold up under focused attack." Others find it interesting to imagine what would happen if players knew they were playing against AlphaStar.

https://twitter.com/hardmaru/status/1149104231967842304

AlphaStar will play as each of StarCraft's three in-universe races (Terran, Zerg, and Protoss). Pairings on the ladder will be decided according to normal matchmaking rules, which depend on how many players are online while AlphaStar is playing. It will not learn from the games it plays on the ladder, having been trained from human replays and self-play. AlphaStar will also use a camera interface and more restrictive APM (actions per minute) limits. Per the blog post, "AlphaStar has built-in restrictions, which cap its effective actions per minute and per second. These caps, including the agents' peak APM, are more restrictive than DeepMind's demonstration matches back in January, and have been applied in consultation with pro players."

https://twitter.com/Eric_Wallace_/status/1148999440121749504
https://twitter.com/Liquid_MaNa/status/1148992401157054464

DeepMind will benchmark the performance of a number of experimental versions of AlphaStar in order to gather a broad set of results during the testing period. It will use players' replays and game data (skill level, MMR, the map played, race played, time/date played, and game duration) to assess and describe the performance of the AlphaStar system. However, DeepMind will remove identifying details from the replays, including usernames, user IDs, and chat histories. Other identifying details will be removed to the extent that they can be without compromising the research DeepMind is pursuing.

For now, AlphaStar agents will play only in Europe. The research results will be released in a peer-reviewed scientific paper along with replays of AlphaStar's matches.

Read next:
- Google DeepMind's AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers
- DeepMind's AlphaZero shows unprecedented growth in AI, masters 3 different games
- DeepMind's AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare

Doteveryone criticises the UK government's Online Harms White Paper and urges a more unified and systemic approach

Richard Gall
10 Jul 2019
4 min read
Earlier this year, the UK government published the Online Harms White Paper. The report, presented to Parliament by Home Secretary Sajid Javid and Secretary of State for Digital, Culture, Media & Sport Jeremy Wright, argued for an independent regulator to hold technology companies to account in order to protect users from harms ranging from cyber-bullying to hate speech.

Read the Online Harms White Paper.

Although the proposals were presented as ambitious and unique, technology ethics think tank Doteveryone has drafted a response to the white paper as part of the consultation process (which ended at the beginning of July). In it, the organization highlights what's missing from the white paper and what else should be done to build a safer and more accessible digital world.

What problems does Doteveryone identify in the Online Harms White Paper?

Doteveryone is direct in its criticism of the Online Harms White Paper. Writing on Medium, Catherine Miller, Doteveryone's Director of Policy, calls the white paper "a hodge-podge of Codes of Practice and initiatives with neither a clear articulation of what problem the proposals are supposed to solve, nor a clear vision for what alternative future they're intended to promote."

Doteveryone published a detailed response to the white paper earlier this month, but there are a couple of core issues that the charity believes the report fails to address.

Read next: Doteveryone report claims the absence of ethical frameworks and support mechanisms could lead to a 'brain drain' in the U.K. tech industry

There's still no "unifying narrative" for tackling the challenges posed by tech

The report doesn't, Doteveryone argues, offer an integrated solution to the problems it refers to. Essentially, the white paper does little more than enter a discussion that's already confusing and fragmented, and it confuses the picture further. Miller writes that it "sits alongside a proliferation of overlapping initiatives including the forthcoming Consumer Markets White Paper, ICO's age-appropriate design-code, and the Furman Review into digital competition." She argues that "without a unifying narrative these are almost impossible to navigate and often in potential conflict."

Organizations must be systemic in their approach to solving problems, not reactive

Doteveryone argues that "the government must offer carrots and not just sticks." Citing the organization's own research, which suggests that 5% of employees in tech and 16% of those working in AI have left their roles over concern about the impact of their products, Miller goes on to suggest that it's important for the industry - or rather, government - to encourage a systemic approach to tackling and minimizing online harms. This means that tech companies would have a duty of care to their users. A focus on the design and decision-making processes that are part and parcel of working in tech will not only ensure that tech companies can avoid merely reacting to the consequences of their actions; it also creates a safer, more open environment for tech workers. In theory at least, it removes engineers and product managers from the conflict of interest that can emerge between users and employers.

The focus on 'harm' fails to properly protect citizens

Miller and Doteveryone also take issue with the very notion of 'harm' as it is expressed in the report. In the white paper, 'harm' is something a regulator is needed to force organizations to respond to - it is never presented as a systemic problem. The way to tackle this, Miller suggests, is by focusing more on the fundamental rights of citizens: "We recommend the regulator uses the established UN human rights framework to set out public interest objectives for online services to meet." A further reason to shift the focus in this way is that it will ensure the regulation - and, indeed, the regulator - remains "forward-looking and anticipatory." As Miller explains, "Digital technologies move too fast for reactive and retrospective regulation to be effective." By moving towards a more structural way of understanding issues like harm and risk, we can ensure we are in a better position to tackle harm in the future.

So what next for the Online Harms White Paper?

The consultation period for the Online Harms White Paper has now closed, so the next move is ultimately with the government. However, with the Conservative government in turmoil and Brexit forcing just about every other issue far down the agenda, it seems unlikely that we're going to see much movement. However, if the digital economy really is set to be a part of a Britain that's "going it alone", thinking carefully about what this means for everyone from employees to users will remain immensely important. Doteveryone's insight and expertise shouldn't be ignored.

Amazon Aurora makes PostgreSQL Serverless generally available

Vincy Davis
10 Jul 2019
3 min read
Yesterday, Danilo Poccia, an Evangelist at Amazon Web Services, announced that the PostgreSQL-compatible edition of Aurora Serverless is now generally available. Aurora PostgreSQL Serverless lets customers create database instances that only run when needed and automatically scale up or down based on demand. If a database isn't needed, it shuts down until it is. With Aurora Serverless, users pay on a per-second basis for the database capacity they use while the database is active, plus the usual Aurora storage costs. Last year, Amazon made the MySQL-compatible edition of Aurora Serverless generally available.

How the Aurora PostgreSQL Serverless storage works

When a database is created with Aurora Serverless, users set the minimum and maximum capacity. Client applications transparently connect to a proxy fleet that routes the workload to a pool of automatically scaled resources. Scaling is quick, as the resources are 'warm' and ready to be added to serve user requests.

Image Source: Amazon blog

The storage layer is independent of the compute resources used by the database, as storage is not provisioned in advance. The minimum storage is 10 GB; based on database usage, Amazon Aurora storage automatically grows up to 64 TB, in 10 GB increments, with no impact on database performance.

How to create an Aurora Serverless PostgreSQL database

- Create a database from the Amazon RDS console, using Amazon Aurora as the engine.
- Select a PostgreSQL version compatible with Aurora Serverless; after selecting the version, the serverless option becomes available. Currently, this is version 10.5.
- Enter an identifier for the new DB cluster, choose the master username, and let Amazon RDS generate a password. This lets users retrieve their credentials during database creation.
- Select the minimum and maximum capacity for the database, in terms of Aurora Capacity Units (ACUs), and in the additional scaling configuration, choose to pause compute capacity after 5 minutes of inactivity. Based on these settings, Aurora Serverless automatically creates scaling rules with thresholds for CPU utilization, connections, and available memory. (A scripted sketch of these steps appears at the end of this article.)

Aurora Serverless PostgreSQL is now available in US East (N. Virginia and Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo). Many developers are happy with the announcement.

https://twitter.com/oxbits/status/1148840886224265218
https://twitter.com/sam_jeffress/status/1148845547110854656
https://twitter.com/maciejwalkowiak/status/1148829295948771331

Visit the Amazon blog for more details.

Read next:
- How do AWS developers manage Web apps?
- Amazon launches VPC Traffic Mirroring for capturing and inspecting network traffic
- Amazon adds UDP load balancing support for Network Load Balancer
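Returning to the creation steps above: here is a hedged boto3 sketch of the same flow. The cluster identifier, credentials, region, and capacity values are illustrative assumptions, not values from the announcement:

```python
# Minimal boto3 sketch: create an Aurora Serverless (PostgreSQL-compatible)
# cluster with auto-pause after 5 minutes of inactivity.
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-pg",       # hypothetical identifier
    Engine="aurora-postgresql",
    EngineMode="serverless",                      # Aurora Serverless mode
    MasterUsername="postgres",                    # assumed credentials
    MasterUserPassword="choose-a-strong-password",
    ScalingConfiguration={
        "MinCapacity": 2,                         # in Aurora Capacity Units
        "MaxCapacity": 16,
        "AutoPause": True,                        # pause compute when idle...
        "SecondsUntilAutoPause": 300,             # ...after 5 minutes
    },
)
```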

Firefox 68 releases with recommended extensions, strict security measures, and reimplemented URL bar

Bhagyashree R
10 Jul 2019
5 min read
Yesterday, Mozilla announced the release of Firefox 68, which brings new updates like support for BigInts, contrast checks, dark mode in reader view, and a reimplemented URL bar. Mozilla has also added Enhanced Tracking Protection, which blocks known "third-party tracking cookies" by default.

Improved extension security and discovery

Firefox 68 comes with a new reporting feature in 'about:addons' with which you can report any security and performance issues with extensions and themes. The team has also redesigned the extensions dashboard in 'about:addons', where you can find all the information about your extensions, including the data and settings access required by each extension. You can get high-quality, secure extensions from Mozilla's Recommended Extensions program in 'about:addons'. These recommended extensions are indicated by special badging on addons.mozilla.org (AMO):

Source: Mozilla

Additionally, to give users improved protection from threats and annoyances on the web, Firefox 68 adds cryptomining and fingerprinting protections to the strict content blocking settings in Privacy & Security preferences.

Read also: Mozilla adds protection against fingerprinting and Cryptomining scripts in Firefox Nightly and Beta

Support for JavaScript BigInt

Firefox 68 comes with support for JavaScript's new BigInt numeric type, which is currently at stage 3 of the ECMAScript specification process. Previously, JavaScript only had the Number numeric type. Because JavaScript represents numbers as floating-point values, they can express both integers and decimal fractions.

Source: Mozilla

However, the limitation is that 64-bit floats fail to reliably represent integers larger than 2 ** 53. To make working with large numbers easier, a new primitive, BigInt, was introduced. It provides a way to represent whole numbers larger than 2 ** 53.
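The 2 ** 53 boundary is a property of IEEE-754 double-precision floats rather than of JavaScript specifically, so the problem BigInt solves can be demonstrated from any language with 64-bit floats; a quick Python check:

```python
# 2**53 + 1 has no exact 64-bit float representation, so it rounds
# back down to 2**53; this is the precision loss BigInt avoids.
assert float(2**53) == float(2**53 + 1)

# Arbitrary-precision integers (Python int, JavaScript BigInt) stay exact.
assert 2**53 + 1 - 2**53 == 1
```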
Updates in DevTools

In addition to enhancing the already smart debugging tools, Firefox 68 brings more improvements to DevTools:

- Accessibility checks in DevTools: This release ships with a new capability for DevTools that checks for basic accessibility issues in your web pages. The Accessibility Inspector now comes with a new 'Check' that currently reports any color contrast issues with text on a page. The Firefox team plans to add more audit tools to highlight accessibility problems on your website in future releases.
- A way to emulate print media from DevTools: A button has been added to the Page Inspector with which you can enable "print media emulation". This makes it easy to see what elements of a page will be visible when printed.
- Improved CSS warnings: The Web Console now shows more information about CSS warnings and includes links to related nodes.
- A Web Console filter: You can now filter content in the Web Console using a valid regular expression. Here's a video showing how this works: https://youtu.be/E6bGOe2fvW0

Web compatibility

This release fixes a few web compatibility issues to ensure that every user is able to access a website regardless of their choice of device or browser:

- Internet Explorer's legacy rules property and addRule() and removeRule() CSS methods have been added to the CSSStyleSheet interface.
- Safari's '-webkit-line-clamp' CSS property has also been added.

Support for CSS scroll snapping

Firefox 68 comes with support for CSS scroll snapping, which gives you a standardized way to control the behavior of scrolling inside a container. It works in a very similar fashion to how native apps work on phones and tablets. Now that this update has landed in Firefox, developers have the same version of the specification as Chrome and Safari. Developers who have used the old Firefox implementation of the scroll snap specification will need to update their code; otherwise, scroll snapping will no longer work in Firefox 68 and up.

The reimplemented URL bar, QuantumBar

Firefox's URL bar, also known as the AwesomeBar, has been completely reimplemented using HTML, CSS, and JavaScript web technologies. This overhauled version is named "QuantumBar". Though not much changes appearance-wise, its updated architecture behind the scenes will make it easier to maintain and extend in the future.

Access to cameras and other media devices now requires HTTPS

Starting with Firefox 68, the camera and microphone require an HTTPS connection to work. The getUserMedia method will throw NotAllowedError if you try to access media devices from an insecure HTTP connection, similar to how Chrome works. Many developers are happy with this update. A user on Hacker News commented, "It's fantastic that it works with localhost (and I assume 127.0.0.1?), and it's fantastic that it doesn't work with anything else. This is the best middle ground."

However, some are worried that this will affect how their apps or websites currently work. "This sucks, my community[1] has a local offline-first video/audio call app that we run on a physical mesh network. This will make it impossible for people to talk to each other, without first needing to be connected online to some certificate authority, or without some extraordinarily difficult pre-installation process, which is often not even possible on a phone. HTTPS was important, but now it's being used to shoehorn dependency on a centralized online-only authority. Perfectly ripe to censor anyone.", wrote a Hacker News user.

To know more in detail, check out the official announcement by Mozilla.

Read next:
- Mozilla launches Firefox Preview, an early version of a GeckoView-based Firefox for Android
- Firefox 67 enables AV1 video decoder 'dav1d', by default on all desktop platforms
- Mozilla makes Firefox 67 "faster than ever" by deprioritizing least commonly used features

Raspberry Pi 4 has a USB-C design flaw, some power cables don't work

Vincy Davis
10 Jul 2019
5 min read
Raspberry Pi 4 was released last month with much hype and promotion. It has a 1.5GHz quad-core 64-bit ARM Cortex-A72 CPU, three memory options of up to 4GB, full-throughput gigabit Ethernet, and a USB-C port as a power connector. The USB-C power connector is a first for the Pi boards. However, four days after its release, Tyler Ward, an electronics and product engineer, disclosed that the new Pi 4 does not charge when used with electronically marked, or e-marked, USB-C cables, the type used by Apple MacBooks and other laptops. Two days ago, Pi's co-creator Eben Upton confirmed the issue. Upton says, "A smart charger with an e-marked cable will incorrectly identify the Raspberry Pi 4 as an audio adapter accessory, and refuse to provide power." Upton adds that Ward's technical breakdown of the underlying issue in the Pi 4's circuitry offers a detailed overview of why e-marked USB-C cables won't power the Pi.

According to Ward's blog, "The root cause of the problem is the shared cc pull down resistor on the USB Type-C connector. By looking at the reduced pi schematics, we can see it as R79 which connects to both the CC lines in the connector." "With most chargers this won't be an issue as basic cables only use one CC line which is connected through the cable and as a result the pi will be detected correctly and receive power. The problem comes in with e-marked cables which use both CC connections", he adds.

Ward has suggested some workarounds for this problem. First, he recommends using a non-e-marked cable, which most USB-C phone charger cables are likely to be, rather than an e-marked cable. Older chargers with A-to-C cables or micro-B-to-C adaptors will also work if they provide enough power, as these don't require CC detection to deliver a charge. The complete solution would be for Raspberry Pi to add a second CC resistor in a future board revision. Another option is to buy the $8/£8 official Raspberry Pi 4 power supply. In a statement to TechRepublic, Upton adds, "It's surprising this didn't show up in our (quite extensive) field testing program."

Benson Leung, a Google Chrome OS engineer, has also criticized Raspberry Pi in a Medium blog post that he sarcastically titled "How to design a proper USB-C™ power sink (hint, not the way Raspberry Pi 4 did it)". Leung identifies two critical mistakes on Raspberry Pi's part. First, he says, Raspberry Pi should have copied the figure from the USB-C spec exactly instead of designing a new circuit; Raspberry Pi "designed this circuit themselves, perhaps trying to do something clever with current level detection, but failing to do it right." The second mistake, he says, is that they didn't actually test their Pi 4 design with advanced cables. "The fact that no QA team inside of Raspberry Pi's organization caught this bug indicates they only tested with one kind (the simplest) of USB-C cables.", he adds.

Many users agreed with Leung and expressed their own views on the faulty USB-C design of the Raspberry Pi 4. They find it hard to believe that Raspberry Pi shipped these models without trying them with a MacBook charger. A user on Hacker News comments, "I find it incredible that presumably no one tried using a MacBook charger before this shipped. If they did and didn't document the shortcoming that's arguably just as bad. Surely a not insignificant number of customers have MacBooks? If I was writing some test specs this use case would almost certainly feature, given the MacBook Pro's USB C adapter must be one of the most widespread high power USB C charger designs in existence. Especially when the stock device does not ship with a power supply, not like it was unforeseeable some customers would just use the chargers they already have."

Some are glad that they have not ordered their Raspberry Pi 4 yet.

https://twitter.com/kb2ysi/status/1148631629088342017

However, some users believe it's not that big a deal.

https://twitter.com/kb2ysi/status/1148635750210183175

A user on Hacker News comments, "Eh, it's not too bad. I found a cable that works and I'll stick to it. Even with previous-gen Pis there was always a bit of futzing with cables to find one that has small enough voltage drop to not get power warnings (even some otherwise "good" cables really cheap out on copper). The USB C thing is still an issue, and I'm glad it'll be fixed, but it's really not that big of a deal."

Neither Upton nor Raspberry Pi has disclosed a schedule for the board revision so far.

Read next:
- 10+ reasons to love Raspberry Pi
- You can now install Windows 10 on a Raspberry Pi 3
- Raspberry Pi opens its first offline store in England

ICO to fine Marriott over $124 million for compromising 383 million users’ data last year

Savia Lobo
10 Jul 2019
4 min read
The UK's data watchdog, the Information Commissioner's Office (ICO), announced that it plans to impose a fine of more than £99 million ($124 million) under GDPR on the popular hotel chain Marriott International, over a massive data breach that occurred last year.

On November 19, 2018, Marriott revealed that a data breach had occurred in Marriott's Starwood guest database, that it had been ongoing for the past four years, and that it collected information about customers who made reservations in its Starwood subsidiary. The company initially said hackers stole the details of roughly 500 million hotel guests; after further, more thorough investigation, the number was corrected to 383 million.

This is ICO's second announcement of significant fines for companies involved in major data breaches. A few days ago, ICO declared its intention of issuing British Airways a fine of £183.39M for compromising the personal identification information of over 500,000 customers.

According to ICO's official website, "A variety of personal data contained in approximately 339 million guest records globally were exposed by the incident, of which around 30 million related to residents of 31 countries in the European Economic Area (EEA). Seven million related to UK residents."

Information Commissioner Elizabeth Denham said, "The GDPR makes it clear that organizations must be accountable for the personal data they hold. This can include carrying out proper due diligence when making a corporate acquisition, and putting in place proper accountability measures to assess not only what personal data has been acquired, but also how it is protected."

"Personal data has a real value so organizations have a legal duty to ensure its security, just like they would do with any other asset. If that doesn't happen, we will not hesitate to take strong action when necessary to protect the rights of the public," she added.

In a filing with the US Securities and Exchange Commission yesterday, Marriott International's President and CEO, Arne Sorenson, said, "We are disappointed with this notice of intent from the ICO, which we will contest. Marriott has been cooperating with the ICO throughout its investigation into the incident, which involved a criminal attack against the Starwood guest reservation database."

"We deeply regret this incident happened. We take the privacy and security of guest information very seriously and continue to work hard to meet the standard of excellence that our guests expect from Marriott", Sorenson added. He further said that the Starwood guest reservation database that was attacked is no longer used for business operations.

A few hours after Marriott revealed the data breach last year, two lawsuits were filed against it: the first by two Oregon men, Chris Harris and David Johnson, for exposing their data, and the other in the state of Maryland by the Baltimore law firm Murphy, Falcon & Murphy. The petitioners in the Oregon lawsuit claimed $12.5 billion in costs and losses; the petitioners in the Maryland lawsuit didn't specify the damages they were seeking from Marriott. According to OregonLive's post last year, "The lawsuit seeks $12.5 billion -- or $25 for each customer whose privacy may have been jeopardized after making a reservation with Starwood brand hotels, including W Hotels, St. Regis, Sheraton, and Westin". OregonLive further reported "The $25 as a minimum value for the time users will spend canceling credit cards due to the Marriott hack".

Many are happy with ICO's decision to impose fines on major companies that put customer data at risk. A user on Reddit commented, "Finally!! I am hoping this is a trend and a game changer for the companies to better protect their customer information!". Another user said, "Great news, The GDPR is working."

To know more about this news in detail, head over to ICO's official website.

Read next:
- Former Senior VP's take on the Marriott data breach; NYT reports suspects Chinese hacking ties
- Facebook fails to fend off a lawsuit over data breach of nearly 30 million users
- Experts discuss Dark Patterns and deceptive UI designs: What are they? What do they do? How do we stop them?
  • 1935
Glitch hits 2.5 million apps, secures $30M in funding, and is now available in VS Code

Sugandha Lahoti
10 Jul 2019
5 min read
Glitch, the web-app creation tool, made a series of major announcements yesterday. Glitch lets you code full-stack apps right in the browser, where they're instantly deployed. Formerly known as Fog Creek Software, Glitch is an online community where people can upload projects and let others remix them; creating web apps with Glitch is as easy as working on Google Docs.

The Glitch community reached a milestone by hitting 2.5 million free and open apps, more than the number in Apple's App Store. Many apps on Glitch are decidedly smaller, simpler, and quicker to make, on average focused on single-use things. Since all apps are open source, others can remix the projects into their own creations.

Glitch raises $30M with a vision of being a healthy, responsible company

Glitch has raised $30M in a Series A funding round from a single investor, Tiger Global. The round closed in November 2018, but Anil Dash, CEO of Glitch, said he wanted to be able to show people that the company did what it said it would do before disclosing the funding to the public; the company has doubled in size since.

Glitch is not your usual tech startup; its policies, culture, and creative freedom are unique. Its motto is to be a simple tool for creating web apps for people and teams of all skill levels, while fostering a friendly and creative community, and to be a different kind of company that sets the standard for thoughtful and ethical practices in tech. The company is on track to build one of the friendliest, most inclusive, and most welcoming social platforms on the internet. It is built with sustainability in mind, is independent and privately held, and is transparent and open in its business model and processes.

https://twitter.com/firefox/status/1148716282696601601

Glitch is building a healthy, responsible company and has shared its inclusion statistics and benefits, such as salary transparency, paid climate leave (up to 5 consecutive work days, taken at the employee's discretion, for extreme weather), full parental leave, and more in a public handbook. The handbook is open-sourced, so anyone, anytime, anywhere can see how the company runs day to day. Because the handbook is made in Glitch, users can remix it to get their own customizable copy.

https://twitter.com/Pinboard/status/1148645635173670913

As the community and the company have grown, they have also invested significantly in diversity, inclusion, and tech ethics. On gender, 47% of the company identifies as cisgender women, 40% as cisgender men, 9% as non-binary/gender non-conforming/questioning, and 4% did not disclose. On race and ethnicity, the company is 65% white, 7% Asian, 11% black, 4% Latinx, and 11% two or more races, while 2% did not disclose. Meanwhile, 29% of the company identifies as queer, and 11% of people reported having a disability.

Their social platform, Anil notes, has no wide-scale abuse, systematic misinformation, or surveillance-based advertising. The company wants to "prove that a group of people can still create a healthy community, a successful business, and have a meaningful impact on society, all while being ethically sound."

A lot of credit for Glitch and its inclusion policies goes to Anil Dash, the CEO. As pointed out by Kimberly Bryant, founder of BlackGirlsCode, "A big reason for Glitch's success and vision though is Anil. This 'inclusion mindset' starts at the top and I think that is evidenced by the companies and founders who get it right." Karla Monterroso, CEO of Code2040, says, "It becomes about operationalizing strategy. About creating actual inclusion. About how you intentionally build a diverse team and an org that is just."

https://twitter.com/karlitaliliana/status/1148641017823764480
https://twitter.com/karlitaliliana/status/1148653580842196992

Dash notes, "It's the entire team working together. Buy-in at every level of the organization, people being brave enough to be vulnerable, all doing the hard work of self-reflection & not being defensive. And knowing we're only getting started."

Other community members and tech experts have also appreciated Dash's resilience in building an open source, sustainable, inclusive platform.

https://twitter.com/TheSamhita/status/1148706941432225792
https://twitter.com/LeeTomson/status/1148655031308210176

People have also used it for activist purposes and highly recommend it.

https://twitter.com/schep_/status/1148654037518168065

Glitch now on VS Code, offering real-time code collaboration

Glitch is also available in Visual Studio Code, allowing everyone from beginners to experts to code. This integration is available in preview; users can download the Glitch VS Code extension from the Visual Studio Marketplace. Features include:

- Real-time collaboration and live previews.
- Rewind: look back through code history, roll back changes, and see files as they were in the past with a diff.
- Console: open the console and run commands directly on the Glitch container.
- Logs: see output in logs just like on Glitch.
- Debugger: make use of the built-in Node debugger to inspect full-stack code.

Source: Medium

https://twitter.com/horrorcheck/status/1148635444218933250

For now, the company is dedicated solely to building out Glitch, and it will release specialized, more powerful features for businesses later this year.

Read next:
- How do AWS developers manage Web apps?
- Introducing Voila that turns your Jupyter notebooks to standalone web applications
- PayPal replaces Flow with TypeScript as their type checker for every new web app

Microsoft Defender ATP detects Astaroth Trojan, a fileless, info-stealing backdoor

Bhagyashree R
09 Jul 2019
3 min read
Yesterday, the Microsoft Defender Advanced Threat Protection (ATP) Research Team shared details of a fileless malware campaign through which attackers were dropping the Astaroth Trojan directly into the memory of infected computers.

https://twitter.com/MsftSecIntel/status/1148262969710698498

Astaroth is a malware family known for abusing living-off-the-land binaries (LOLbins), such as the Windows Management Instrumentation Command-line (WMIC) tool, to steal sensitive information including credentials, keystrokes, and other data. It sends stolen data to a remote attacker, who can misuse it to carry out financial theft or sell victim information in the cybercriminal underground. The trojan has been publicly known since 2017 and has affected a few European and Brazilian companies. As of now, Microsoft has not disclosed whether any other users' machines were compromised.

What are fileless threats?

Fileless malware attacks either run the payload directly in memory or use already-installed applications to carry out the attack. Because these attacks use legitimate programs, they are very difficult for most security products, and even for experienced security analysts, to detect. Andrea Lelli, a member of the Microsoft Defender ATP Research Team, thinks that though these attacks are difficult to detect, they are certainly not undetectable. "There's no such thing as the perfect cybercrime: even fileless malware leaves a long trail of evidence that advanced detection technologies in Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP) can detect and stop," he wrote in the blog post.

How is the Astaroth Trojan attack implemented?

During a standard review, Lelli observed telemetry showing a sudden increase in the use of the WMIC tool to run a script, which made him suspect a fileless attack. Upon further investigation, he realized that the campaign was trying to run the Astaroth backdoor directly in memory. Here's how the initial access and execution take place using only system tools:

Source: Microsoft

The attack begins with a spear-phishing email containing a malicious link that redirects the user to an LNK file. When the user double-clicks the LNK file, it triggers the execution of the WMIC tool with the "/Format" parameter. This allows the download and execution of JavaScript code that in turn downloads payloads by abusing the Bitsadmin tool. The downloaded payloads are Base64-encoded and are decoded using the Certutil tool. Two of them are decoded to plain DLL files, while the others remain encrypted. The Regsvr32 tool loads one of the decoded DLLs, which then decrypts and loads other files until Astaroth, the final payload, is injected into the Userinit process.

How does Microsoft Defender ATP detect and stop these attacks?

Microsoft Defender ATP comes with several advanced technologies to "spot and stop a wide range of attacks." It leverages protection capabilities from the cloud, including a metadata-based ML engine, a behavior-based ML engine, an AMSI-paired ML engine, and a file classification engine, among others. On the client side, it includes protection techniques such as a memory scanning engine, an emulation engine, a network engine, and more. Here's a diagram depicting all the protection technologies Microsoft Defender ATP comes with:

Source: Microsoft

Check out the official post by the Microsoft Defender ATP Research Team to know more in detail.

Read next:
- Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities
- 12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
- 5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]

Introducing Photon Micro GUI: An open-source, lightweight UI framework with reusable declarative C++ code

Vincy Davis
09 Jul 2019
4 min read
Photon Micro is an open-source, lightweight, and modular GUI, which comprises fine-grained, flyweight 'elements'. It uses declarative C++ code, with a heavy emphasis on reuse, to form deep element hierarchies. Photon has its own HTML5-inspired canvas drawing engine and uses Cairo as a 2D graphics library. Cairo supports the X Window System, Quartz, Win32, image buffers, PostScript, PDF, and SVG file output.

Joel de Guzman, the creator of Photon Micro GUI and the main author of the Boost.Spirit parser library, the Boost.Fusion library, and the Boost.Phoenix library, says, "One of the main projects I got involved with when I was working in Japan in the 90s, was a lightweight GUI library named Pica. So I went ahead, dusted off the old code and rewrote it from the ground up using modern C++."

The original post shows a short gallery example built with the Photon Micro GUI client, which pops up a warning dialog (Image Source: Cycfi).

Some highlights of Photon Micro GUI

Modularity and reuse are two important design aspects of Photon Micro GUI, supported by the following functionality:

- Share: a Photon Micro GUI element can be shared using std::shared_ptr.
- Hold: hold can be used to share an element somewhere in the view hierarchy.
- Key_intercept: a delegate element that intercepts key presses.
- Fixed_size: elements are extremely lightweight; fixed_size fixes the size of the contained GUI element.
- margin, left_margin: two of the many margins (including right_margin, top_margin, etc.) that add padding around an element; margin adds 20 pixels all around the contained element, while left_margin adds 20 pixels of padding to separate the icon and the text box.
- vtile, htile: vertical and horizontal fluid layout elements that allocate sufficient space to contained elements. They enable stretchiness, fixed sizing, and vertical and horizontal alignment, to place elements in a grid. Stretchiness is the ability of elements to stretch within defined minimum and maximum size limits.

Guzman adds, "While it is usable, and based on very solid architecture and design, there is still a lot of work to do. First, the Windows and Linux ports are currently in an unusable state due to recent low-level refactoring."

Some developers have shown interest in the elements of Photon Micro GUI.

https://twitter.com/AlisdairMered/status/1148242189354450944

A user on Hacker News comments, "Awesome, that looks like an attempt to replace QML by native C++. Would be great if there was a SwiftUI inspired C++ UI framework (well, of course C++ might not lend itself so well for the job, but I'm just very curious what it would look like if someone makes a real attempt)."

Some users feel that more work needs to be done to make this GUI more accessible and less skeuomorphic. (Skeuomorphism is a term most often used in graphical user interface design to describe interface objects that mimic their real-world counterparts in how they appear and/or how the user can interact with them (IDF).)

A user says, "Too many skeuomorphic elements. He needs to take the controls people know and understand and replace them with cryptic methods that require new learning, and are hidden from view by default. Otherwise, no one will take it seriously as a modern UI."

Another user on Hacker News adds, "don't use a GUI toolkit like this, that draws its own widgets rather than using platform standard ones when developing a plugin for a digital audio workstation (e.g. VST or Audio Unit), as this author is apparently doing. Unless someone puts in all the extra effort to implement platform-specific accessibility APIs for said toolkit."

For details about the other highlights, head over to Joel de Guzman's post.

Read next:
- Apple showcases privacy innovations at WWDC 2019: Sign in with Apple, AdGuard Pro, new App Store guidelines and more
- Google and Facebook allegedly pressured and "arm-wrestled" EU expert group to soften European guidelines for fake news: Open Democracy Report
- Google I/O 2019 D1 highlights: smarter display, search feature with AR capabilities, Android Q, linguistically advanced Google lens and more

OpenJDK Project Valhalla’s LW2 early access builds are now available for you to test

Bhagyashree R
09 Jul 2019
3 min read
Last week, the early access builds for OpenJDK Project Valhalla's LW2 phase were released; the phase was first proposed in October last year. LW2 is the next iteration of the L-World series, bringing further language and JDK API support for inline types.

https://twitter.com/SimmsUpNorth/status/1147087960212422658

Proposed in 2014, Project Valhalla is an experimental OpenJDK project under which the team is working on major new language features and enhancements for Java 10 and beyond. The new features and enhancements fall into the following focus areas:

- Value types
- Generic specialization
- Reified generics
- Improved 'volatile' support

The LW2 specifications

Javac source support

- Starting from LW2, the prototype is based on the mainline JDK (currently version 14), which is why it requires source level >= JDK 14.
- A class is declared an inline type using the 'inline class' modifier or '@__inline__' annotation.
- Interfaces, annotation types, and enums cannot be declared as inline types; top-level, inner, and local classes may be.
- As inline types are implicitly final, they cannot be abstract. All instance fields of an inline class are also implicitly final.
- Inline types implicitly extend 'java.lang.Object', similar to enums, annotation types, and interfaces.
- "Indirect" projections of inline types are supported via the "?" operator.
- javac now allows using the '==' and '!=' operators to compare inline types.

Java APIs

- New or modified APIs include 'isInlineClass()', 'asPrimaryType()', 'asIndirectType()', 'isIndirectType()', 'asNullableType()', and 'isNullableType()'.
- The 'getName()' method now reflects the Q or L type signatures for arrays of inline types.
- Calling 'newInstance()' on an inline type throws 'NoSuchMethodException', and 'setAccessible()' throws 'InaccessibleObjectException'.
- With LW2, initial core Reflection and VarHandles support are in place.

Runtime

- 'IllegalMonitorStateException' is thrown when attempting to synchronize on, or call wait(*) or notify*() on, an inline type.
- 'ClassCircularityError' is thrown when loading an instance field of an inline type that declares its own type, directly or indirectly.
- 'NotSerializableException' is thrown when attempting to serialize an inline type.
- Casting from an indirect type to an inline type may result in a 'NullPointerException'.

Download the early access binaries to test this prototype. These were some of the specifications of the LW2 iteration; check out the full list of specifications on OpenJDK's official website, and stay tuned to the current happenings in Project Valhalla.

Read next:
- Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]
- Storm 2.0.0 releases with Java enabled architecture, new core and streams API, and more
- Firefox 67 will come with faster and reliable JavaScript debugging tools
Linux 5.2 releases with inclusion of Sound Open Firmware project, new mount API, improved pressure stall information and more

Vincy Davis
09 Jul 2019
5 min read
Two days ago, Linus Torvalds, the principal developer of the Linux kernel, announced the release of Linux 5.2, codenamed 'Bobtail Squid', in his usual humorous way. The release has new additions like the inclusion of the Sound Open Firmware (SOF) project, improved pressure stall information, a new mount API, significant performance improvements in the BFQ I/O scheduler, new GPU drivers, optional support for case-insensitive names in ext4, and more. The earlier version, Linux 5.1, was released exactly two months ago.

Torvalds says, "there really doesn't seem to be any reason for another rc, since it's been very quiet. Yes, I had a few pull requests since rc7, but they were all small, and I had many more that are for the upcoming merge window. So despite a fairly late core revert, I don't see any real reason for another week of rc, and so we have a v5.2 with the normal release timing." Linux 5.2 also kicks off the Linux 5.3 merge window.

What's new in Linux 5.2?

Inclusion of the Sound Open Firmware (SOF) project

Linux 5.2 includes the Sound Open Firmware (SOF) project, which was created to reduce firmware issues by providing an open source platform for creating open source firmware for audio DSPs. The SOF project is backed by Intel and Google. This will enable users to have open source firmware, personalize it, and use the power of the DSP processors in their sound cards in imaginative ways.

Improved pressure stall information

With this release, users can configure sensitive thresholds and use poll() and friends to be notified whenever a certain pressure threshold is breached within a user-defined time window. This allows Android, for example, to monitor and prevent mounting memory shortages before they cause problems for the user.

New mount API

With Linux 5.2, kernel developers have redesigned the entire mount API, resulting in the addition of six new syscalls: fsopen(2), fsconfig(2), fsmount(2), move_mount(2), fspick(2), and open_tree(2). With the previous mount(2) interface, it was not easy for applications and users to understand the returned errors, it was not suitable for the specification of multiple sources such as overlayfs needs, and it was not possible to mount a file system into another mount namespace.

Significant performance improvements in the BFQ I/O scheduler

BFQ is a proportional-share I/O scheduler, available for block devices since the 4.12 kernel release. It associates each process or group of processes with a weight and grants a fraction of the available I/O bandwidth proportional to that weight. In Linux 5.2, performance tweaks to the BFQ I/O scheduler have improved application start-up time under load by up to 80%, drastically increasing the scheduler's effective performance.

New GPU drivers for ARM Mali devices

In the past, the Linux community had to create open source drivers for the Mali GPUs, as ARM has never been open source friendly with its GPU drivers. Linux 5.2 includes two new community drivers for ARM Mali accelerators: lima covers the older t4xx series and panfrost the newer 6xx/7xx series. This is expected to improve support for ARM Mali hardware.

More CPU bug protection, and a "mitigations" boot option

Linux 5.2 adds more bug infrastructure to deal with the Microarchitectural Data Sampling (MDS) hardware vulnerability, which allows access to data held in various CPU internal buffers. Also, to help users deal with the ever-increasing number of CPU bugs across different architectures, the kernel boot option mitigations= has been added: a set of curated, arch-independent options to enable or disable protections regardless of the system they are running on.

clone(2) can return pidfds

Due to the design of Unix, sending signals to processes or gathering /proc information is not always safe because of the possibility of PID reuse. clone(2) can now return a pidfd at process creation time, usable with the pidfd_send_signal(2) syscall. pidfds let Linux avoid the PID-reuse problem, and the new clone(2) flag makes it even easier to obtain them, providing a safe way to signal processes and work with PID metadata.
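Python 3.9+ wraps the pidfd syscalls, which gives a feel for what the new clone(2) flag feeds into. A minimal sketch (note: os.pidfd_open() relies on the pidfd_open(2) syscall added in Linux 5.3, and pidfd_send_signal(2) arrived in 5.1; Linux 5.2's contribution is handing back the same kind of descriptor directly from clone(2)):

```python
import os
import signal
import subprocess

child = subprocess.Popen(["sleep", "30"])

# A pidfd is a stable handle to *this* process: even if the child exits
# and its numeric PID is recycled, the descriptor never points elsewhere.
pidfd = os.pidfd_open(child.pid)

# Signal via the descriptor, avoiding the classic kill(2) race
# against PID reuse.
signal.pidfd_send_signal(pidfd, signal.SIGTERM)

os.close(pidfd)
child.wait()
```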
Optional support for case-insensitive names in ext4

This release implements support for case-insensitive file name lookups in ext4, based on the feature bit and the encoding stored in the superblock. This enables users to configure directories with the chattr +F (EXT4_CASEFOLD_FL) attribute. The attribute can only be enabled on empty directories on filesystems that support the encoding feature, preventing collisions of file names that differ only by case.

Freezer controller for cgroups v2 added

A freezer controller provides the ability to stop the workload in a cgroup and temporarily free up some resources (CPU, I/O, network bandwidth and, potentially, memory) for other tasks. cgroups v2 lacked this functionality until this release. The functionality is always available and is represented by the cgroup.freeze and cgroup.events cgroup control files.

Device mapper dust target added

Linux 5.2 adds a device mapper 'dust' target to simulate a device that has failing sectors and/or read failures, along with the ability to trigger the emulated read failures at an arbitrary time. The 'dust' target aims to help storage developers and sysadmins who want to test their storage stack.

Users are quite happy with the Linux 5.2 release.

https://twitter.com/ejizhan/status/1148047044864557057
https://twitter.com/konigssohne/status/1148014299484512256
https://twitter.com/YuzuSoftMoe/status/1148419200228179968

Linux 5.2 has many other improvements to the file systems, memory management, block layer, and more. Visit the kernelnewbies page for more details.

Read next:
- "Don't break your users and create a community culture", says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019
- Canonical, the company behind the Ubuntu Linux distribution, was hacked; Ubuntu source code unaffected
- OpenWrt 18.06.4 released with updated Linux kernel, security fixes, Curl and the Linux kernel and much more!

Mozilla is funding a project for bringing Julia to Firefox and the general browser environment

Bhagyashree R
09 Jul 2019
3 min read
Last week, Mozilla disclosed the winners of the Mozilla Research Grants for the first half of 2019. Among the winning proposals was "Bringing Julia to the Browser", which aligns with Mozilla's goal of bringing data science and scientific computing tools to the browser. Mozilla had said it was specifically interested in receiving submissions about supporting R or Julia at the browser level.

Every six months, Mozilla awards grants of $25,000 to support research in emerging technologies and topics relevant to Mozilla. It started accepting proposals for the 2019 H1 funding series in April this year.

https://twitter.com/jofish/status/1121860158835978241

Proposals were expected to address one of twelve research questions under the following three categories:

- Growing the Web: WASM, Rust, digital trust for cost-conscious users, beyond online behavioral advertising (OBA)
- New Interaction Modes: Iodide: R or Julia in the browser, Fathom, Futures, Mixed Reality, Voice, Common Voice
- Privacy & Security: Data, privacy & security for Firefox

Mozilla has been working steadily to make life easier for data scientists on the web. In March, it introduced Iodide, which allows data scientists to create interactive documents using web technologies. In April, it came up with Pyodide, which brings the Python runtime to the browser via WebAssembly.

By funding this research by Valentin Churavy, an MIT Ph.D. student and a member of the official Julia team, Mozilla is taking the next step towards improving access to popular data science tools on the web. The plan is to port R or Julia, languages popular among statisticians and data miners, to WebAssembly. The ultimate goal is an Iodide plugin that will automatically convert basic data types between R/Julia and JavaScript and be able to share class instances between R/Julia and JavaScript.

Though Python and R have been developers' first choice, Julia is catching up as one of the promising languages for scientific computing. Its execution speed is comparable to that of C/C++, and its high-level abstractions are comparable to MATLAB's. It offers support for modern machine learning frameworks such as TensorFlow and MXNet, and developers can also use Flux, a Julia machine learning library, to easily write neural networks.

The full list of Mozilla Research Grants for 2019 H1 is available (Source: Mozilla Research).

Mozilla will open a new round of grants in fall this year (probably in November). To know more, check out the official announcement and join the mailing list.

Read next:
- Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly
- Mozilla introduces Iodide, a tool for data scientists to create interactive documents using web technologies
- Mozilla's Firefox Send is now publicly available as an encrypted file sharing service

A zero-day vulnerability on Mac Zoom Client allows hackers to enable users’ camera, leaving 750k companies exposed

Savia Lobo
09 Jul 2019
4 min read
A vulnerability in Mac's Zoom client allows any malicious website to activate a user's camera and forcibly join them to a Zoom call without their permission. The vulnerability was publicly disclosed today by security researcher Jonathan Leitschuh. The flaw exposes up to 750,000 companies around the world that use the video conferencing app on their Macs for day-to-day business. It also allows a website to launch a DoS (denial of service) attack on Macs by repeatedly joining a user to an invalid call. Worse, uninstalling the app doesn't remove the threat: a localhost web server left behind on any machine that has installed the app at least once can silently re-install it without the user's permission.

https://twitter.com/OldhamMade/status/1148476854837415936

"This vulnerability leverages the amazingly simple Zoom feature where you can just send anyone a meeting link (for example https://zoom.us/j/492468757) and when they open that link in their browser their Zoom client is magically opened on their local machine", Leitschuh writes.

Leitschuh said the vulnerability was responsibly disclosed on March 26 this year, which gave the company 90 days to fix the issue under his disclosure policy. He suggested a 'quick fix' that Zoom could have implemented by simply changing its server logic. However, Zoom took 10 days just to confirm the vulnerability, and held a meeting about how it would be patched only 18 days before the end of the 90-day public disclosure deadline, i.e. June 11, 2019. By the day before public disclosure, Zoom had implemented only the quick-fix solution. "An organization of this profile and with such a large user base should have been more proactive in protecting their users from attack", Leitschuh says.

Leitschuh also pointed to the Tenable remote code execution (RCE) vulnerability in Zoom, which was only patched within the last 6 months. "Had the Tenable vulnerability been combined with this vulnerability it would have allowed RCE against any computer with the Zoom Mac client installed. If a similar future vulnerability were to be found, it would allow any website on the internet to achieve RCE on the user's machine", Leitschuh adds.

According to ZDNet, "Leitschuh also pointed out to Zoom that a domain it used for sending out updates was about to expire before May 1, but the domain was renewed in late April".

In a statement to The Verge, Zoom said the local web server was developed "to save users some clicks after Apple changed its Safari web browser in a way that requires Zoom users to confirm that they want to launch Zoom each time". Zoom defended the "workaround", calling it a "legitimate solution to poor user experience, enabling our users to have seamless, one-click-to-join meetings, which is our key product differentiator." The company said it would make some minor tweaks to the app this month: "Zoom will save users' and administrators' preferences for whether the video will be turned on, or not when they first join a call."

https://twitter.com/backlon/status/1148464344876716033

For such a serious security lapse, this response puts the burden on users, who must remember to keep their cameras off, while Zoom escapes with a minor change to the app where a much bigger step was warranted. Many are unhappy with the way Zoom is handling this vulnerability.
https://twitter.com/chadloder/status/1148375915329495040
https://twitter.com/ticky/status/1148389970073096192

Users can patch the camera issue themselves by updating the app and disabling the setting that allows Zoom to turn the camera on when joining a meeting. As mentioned earlier, the leftover web server may re-install the application, so users are advised to run a few terminal commands to shut it down. Leitschuh explains these commands in detail in his blog post on Medium.
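The gist of those commands, as widely reported at the time, is below; treat this as a sketch (the port number and paths are taken from coverage of the disclosure) and verify the exact steps against Leitschuh's original post:

```
# Find the PID of Zoom's hidden local web server (reported to listen on port 19421)
lsof -i :19421
# Kill it, using the PID shown in the lsof output
kill -9 <PID>
# Remove the app's data directory and replace it with an empty file so
# the web server cannot be silently re-created
rm -rf ~/.zoomus
touch ~/.zoomus
```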
Google researcher reveals an unpatched bug in Windows' cryptographic library that can quickly "take down a windows fleet"
Apple promotes app store principles & practices as good for developers and consumers following rising antitrust worthy allegations
Google Project Zero reveals an iMessage bug that bricks iPhone causing repetitive crash and respawn operations
“Don’t break your users and create a community culture”, says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019

Sugandha Lahoti
09 Jul 2019
5 min read
At the Cloud Native Computing Foundation's flagship conference, KubeCon + CloudNativeCon + Open Source Summit China 2019, Linus Torvalds, creator of Linux and Git, was in conversation with Dirk Hohndel, VP and Chief Open Source Officer at VMware, on the past, present, and future of Linux. The conference gathers technologists from leading open source and cloud native communities; its next edition, KubeCon + CloudNativeCon North America, is scheduled for San Diego, California, from November 18-21, 2019.

When he thinks about Linux, Linus says, he worries about the technology, not the market. In a lot of areas of technology, being first is more important than being best, because once you have a huge community around you, you have already won. Linus says the Linux community and maintainers don't focus on individual features; they focus on the process of getting those features out and making releases. He doesn't believe in long-term planning; there are no plans that span more than roughly six months.

Top questions on security, gaming, and Linux's future, learnings and expectations

Is the interest in Linux from people outside of the core Linux community declining?

Linus disputes this, saying interest is still growing, albeit not at quite the rate it used to. People outside the kernel community, he says, should care about Linux's consistency and the fact that there are people making sure that when you move to a new kernel, your processes will not break.

Where is the major focus for security in IT infrastructure? Is it in the kernel, or in user space?

When it comes to security, you should not focus on one particular area alone. You need secure hardware, software, kernels, and libraries at every stage. The true path to security is to have multiple layers, so that even if one layer is compromised, another layer picks up the problem. The kernel, he says, is one of the more security-conscious projects, because a security problem in the kernel is a problem for everybody.

What learnings can other projects, like Kubernetes and the wider cloud native world, take from the kernel?

Linus acknowledges he is not sure how much the kernel development model translates to other projects. Linux has a different approach to maintenance than most projects, as well as a unified picture of where it is headed. Two lessons do carry over, though:

Don't break your users: This has been a mantra for the kernel for a long time, and it's something a lot of other projects have yet to learn. If you want your project to flourish long term, users shouldn't have to worry about upgrades and versions; they should be able to treat you as a stable platform.

Create a common culture: For a platform or project to have a long life, you should build a community with a common culture and a common goal to work towards together over the long term.

Is gaming a platform where open source is going to be relevant?

When you take up a new technology, Linus states, you want to reuse as much existing infrastructure as possible to reach your goals, and Linux has been a huge part of that in almost every setting. The only places where Linux isn't completely taking over are those where a very strong established market and code base already existed.
If you do something new, exciting and interesting, you will almost inevitably use Linux as the base, and that includes new platforms for gaming.

What can we expect from Linux over the next thirty years? Will it continue just as today, or where do you think we're going?

Realistically, if you look at what Linux does today, it's not that different from what operating systems did 50-60 years ago. What has changed is the hardware and the use, and Linux sits right between those two things. What an operating system fundamentally does is act as a resource manager and as the interface between software and hardware. Linus says, "I don't know what software and hardware will look like in 30 years but I do know we'll still have an operating system and that will probably be called Linux. I may not be around in 30 years but I will be around in 2021 for the 30 year Linux anniversary."

Go through the full conversation here.

Linus Torvalds is sorry for his 'hurtful behavior', is taking 'a break (from the Linux community) to get help'
Linux 5.1 out with Io_uring IO interface, persistent memory, new patching improvements and more!
Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities
Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more

Vincy Davis
08 Jul 2019
5 min read
Five months after the release of Next.js 8, the Next.js team has shipped Next.js 9. The major highlights of this release are built-in zero-config TypeScript support, automatic static optimization, API routes, and improved developer experience. All the features are backwards compatible with earlier versions of Next.js. Some of the major features are explained briefly below.

Built-In Zero-Config TypeScript Support

Automated Setup: Getting started with TypeScript in Next.js is easy: rename any file, page or component, from '.js' to '.tsx', then run 'next dev'. Next.js will also create a 'tsconfig.json' with sensible defaults, if one is not already present.

Integrated Type-Checking: During development, Next.js shows type errors after a file is saved. Type-checking happens in the background, so users can keep interacting with the updated application in the browser; type errors propagate to the browser as they become available. Next.js will also automatically fail the production build if type errors are present, which helps prevent shipping broken code to production.

Dynamic Route Segments

Next.js now supports creating routes with basic named parameters, a pattern popularized by 'path-to-regexp'. A page that matches the route '/post/:pid' can be created by adding a file to your pages directory named 'pages/post/[pid].js'. Next.js will automatically match requests like '/post/1', '/post/hello-nextjs', etc. and render the page defined in 'pages/post/[pid].js'. The matching URL segment is passed as a query parameter to the page, with the name specified between the '[square-brackets]'.

Automatic Static Optimization

Starting with Next.js 9, users no longer have to choose between fully server-rendering and statically exporting their application; they can do both, on a per-page basis.

Automatic Partial Static Export: A heuristic was introduced to automatically determine whether a page can be prerendered to static HTML (in essence, whether it has blocking data requirements via 'getInitialProps'). This allows Next.js to emit hybrid applications that contain both server-rendered and statically generated pages. The built-in Next.js server ('next start') and the programmatic API ('app.getRequestHandler()') both support this build output transparently; no configuration or special handling is required. Statically generated pages are still reactive: Next.js hydrates the application client-side for full interactivity, and updates the application after hydration if the page relies on query parameters in the URL.

API Routes

To start using API routes, create a directory called 'api/' inside the 'pages/' directory. All files in this directory are automatically mapped to '/api/<your route>', in the same way other page files are mapped to routes. Files inside 'pages/api/' export a request handler function instead of a React component. Besides consuming incoming data, an API endpoint can also return data: Next.js provides 'res.json()' by default, making it easy to send a response. When making changes to API endpoints in development, there is no need to restart the server, as the code is automatically reloaded.
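As a quick illustration of how dynamic segments and API routes combine with the new TypeScript support, here is a hedged sketch (the file name, the 'pid' parameter, and the NextApiRequest/NextApiResponse types are illustrative, not taken from the release notes):

```ts
// pages/api/post/[pid].ts — an API route with a dynamic segment.
// The handler receives req/res objects with Express-like helpers
// such as res.status() and res.json().
import { NextApiRequest, NextApiResponse } from 'next';

export default (req: NextApiRequest, res: NextApiResponse) => {
  const { pid } = req.query; // "1" for a request to /api/post/1
  res.status(200).json({ post: pid });
};
```

A page at 'pages/post/[pid].tsx' would receive the same 'pid' value via its query parameters, so a page and its API endpoint can share one URL shape.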
Production Optimizations

Prefetching in-Viewport <Link>s: Next.js 9 automatically prefetches <Link> components as they appear in the viewport, which improves the responsiveness of an application by making navigation to new pages quicker. Next.js uses an Intersection Observer to prefetch the necessary assets in the background. These requests have low priority and yield to 'fetch()' or XHR requests, and Next.js avoids prefetching automatically if the user has data-saver enabled.

Optimized AMP by Default: Next.js 9 renders optimized AMP by default. Optimized AMP is up to 50% faster than traditional AMP.

Dead Code Elimination for typeof window Branches: Next.js 9 replaces 'typeof window' with its appropriate value (undefined or object) during server and client builds, which allows Next.js to remove dead code from the compiled output automatically.
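A small illustration of what that replacement means in practice (a sketch; the exact emitted output will differ by bundler and minifier):

```ts
// Source: a server-only branch guarded by a typeof check.
if (typeof window === 'undefined') {
  console.log('running on the server');
}
// In the client bundle, Next.js 9 rewrites `typeof window` to the
// literal "object", turning the guard into
//   if ("object" === "undefined") { ... }
// which is constant-false, so minification drops the branch entirely.
```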
Developer Experience Improvements

Next.js 9 also brings unobtrusive, ease-of-use improvements to help its users develop in the best way possible.

Compiling Indicator: Users will see a small triangle at the bottom-right corner of the page showing that Next.js is doing compilation work. An RFC / "good first issue" has been created to discuss further solutions for indicating that work is being done.

Console Output: Starting from Next.js 9, the log output jumps less and no longer clears the screen. This makes for a better overall experience, as the user's terminal window shows more relevant information and flickers less, while Next.js integrates better.

Users are very happy with the features introduced in Next.js 9.

https://twitter.com/johnbrett_/status/1148167900840255489
https://twitter.com/chanlitohem/status/1148167705834352640

A user on Reddit says, "That API routes feature looks amazing. Will definitely check it out." Another Redditor comments, "Gonna give v9 a try today. Very stoked for the new dynamic routing!"

Head over to the Next.js official blog for more details.

Next.js 7, a framework for server-rendered React applications, releases with support for React context API and Webassembly
Meet Sapper, a military grade PWA framework inspired by Next.js
16 JavaScript frameworks developers should learn in 2019