
Tech News


Is Dark an AWS Lambda challenger?

Fatema Patrawala
01 Aug 2019
4 min read
On Monday, Ellen Chisa, CEO and co-founder of Dark, announced in a Medium post that the project had raised $3.5 million in funding. Dark is a holistic project that includes a programming language (Darklang), an editor, and an infrastructure. The value of this, according to Chisa, is simple: "developers can code without thinking about infrastructure, and have near-instant deployment, which we're calling deployless."

Along with Chisa, Dark is led by CTO Paul Biggar, who is also the founder of CircleCI, the pioneering CI/CD company. The seed round is led by Cervin Ventures, with participation from Boldstart, Data Collective, Harrison Metal, Xfactor, Backstage, Nextview, Promus, Correlation, 122 West and Yubari.

What are the key features of the Dark programming language?

One of the most interesting features of Dark is that deployments take a mere 50 milliseconds. Chisa says the best teams currently manage deployments in around 5-10 minutes, but many take considerably longer, sometimes hours. Dark was designed to change this: it is purpose-built, Chisa seems to suggest, for continuous delivery.

"In Dark, you're getting the benefit of your editor knowing how the language works. So you get really great autocomplete, and your infrastructure is set up for you as soon as you've written any code because we know exactly what is required."

She says there are three main benefits to Dark's approach:

- An automated infrastructure.
- No need to worry about a deployment pipeline ("As soon as you write any piece of backend code in Dark, it is already hosted for you," she explains).
- Tracing capabilities built into your code ("Because you're using our infrastructure, you have traces available in your editor as soon as you've written any code.")

Whatever users make of the end result, there is a clear sense that everything has been engineered with an incredibly clear vision.
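As an illustration of the "deployless" idea, here is a hypothetical backend handler sketched in TypeScript (not actual Darklang syntax; all names are invented). In Dark, writing a handler of roughly this shape is meant to be all that is required, with hosting, routing, and tracing supplied by the platform.

```typescript
// Hypothetical handler illustrating the kind of backend code Dark aims
// to make trivial to ship. In Dark itself, no infrastructure setup or
// deployment pipeline would accompany this code.

interface Request { path: string; body: Record<string, unknown>; }
interface Response { status: number; body: unknown; }

function handleSignup(req: Request): Response {
  const email = req.body["email"];
  if (typeof email !== "string" || !email.includes("@")) {
    return { status: 400, body: { error: "invalid email" } };
  }
  // In Dark, persisting this to a datastore would likewise need no setup.
  return { status: 200, body: { ok: true, email } };
}
```

The claimed win is that the platform already knows what infrastructure this handler needs, so it is hosted the moment it is written.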
Dark has already been used to ship SaaS products and project-tracking tools

Chisa highlights that some customers have already shipped entire products on Dark. She cites Chase Olivieri, who built Altitude, a subscription SaaS providing personalized flight deals, on Dark: "as a bootstrapper, Dark has allowed me to move fast and build Altitude without having to worry about infrastructure, scaling, or server management."

The downside of Dark: programmers have to learn a new language

Speaking to TechCrunch, Chisa admitted there was a downside to Dark: you have to learn a new language. "I think the biggest downside of Dark is definitely that you're learning a new language, and using a different editor when you might be used to something else, but we think you get a lot more benefit out of having the three parts working together."

Chisa acknowledged that the methodology will require evangelizing to programmers who may be used to employing a particular set of tools. But according to her, the biggest selling point is that it removes the complexity around deployment by bringing an integrated level of automation to the process.

Is Darklang basically like AWS Lambda?

The community on Hacker News compares Dark with AWS Lambda, with many pessimistic about its prospects; in particular, they are skeptical of the efficiency gains Chisa describes. "It only sounds maybe 1 step removed from where aws [sic] lambda's are now," said one user. "You fiddle with the code in the lambda IDE, and submit for deployment. Is this really that much different?"

Dark's co-founder Paul Biggar responded in the thread: "Dark founder here. Yes, completely agree with this. To a certain extent, Dark is aimed at being what lambda/serverless should have been." He continues: "The thing that frustrates me about Lambda (and really all of AWS) is that we're just dealing with a bit of code and bit of data. Even in 1999 when I had just started coding I could write something that runs every 10 minutes. But now it's super challenging. Why is it so hard to take a request, munge it, send it somewhere, and then respond to it. That should be trivial! (and in Dark, it is)"

The team plans to roll out the product publicly in September. To find out more about Dark, read the team's blog posts, including What is Dark, How Dark is a functional language, and How Dark allows deploys in 50ms.

Read next:
- The V programming language is now open source – is it too good to be true?
- "Why was Rust chosen for Libra?", US Congressman questions Facebook on Libra security design choices
- Rust's original creator, Graydon Hoare on the current state of system programming and safety


Electron 6.0 releases with improved Promise support, native Touch ID authentication support, and more

Bhagyashree R
01 Aug 2019
3 min read
On Tuesday, the team behind Electron, the web framework for building desktop apps, announced the release of Electron 6.0. It brings further improvements to Promise support, native Touch ID authentication support for macOS, native emoji and color picker methods, and more. The release upgrades to Chromium 76, Node.js 12.4.0, and V8 7.6.

https://twitter.com/electronjs/status/1156273653635407872

Promisification of functions continues

Starting with Electron 5.0, the team began a process called "promisification", in which callback-based functions are converted to return Promises. In Electron 6.0, the team has converted 26 functions to return Promises while continuing to support callback-based invocation. Among these "promisified" functions are 'contentTracing.getCategories()', 'cookies.flushStore()', 'dialog.showCertificateTrustDialog()', and more.

Three new variants of the Helper app

The hardened runtime was introduced to prevent exploits like code injection, DLL hijacking, and process memory space tampering. To serve that purpose, however, it restricts things like writable-executable memory and loading code signed by a different Team ID. If your app relies on such functionality, you can add an entitlement to disable an individual protection.

To enable the hardened runtime in an Electron app, special code-signing entitlements were granted to Electron Helper. Starting with Electron 6.0, three new variants of the Helper app keep these granted entitlements scoped to the process types that require them: 'Electron Helper (Renderer).app', 'Electron Helper (GPU).app', and 'Electron Helper (Plugin).app'. Developers using 'electron-osx-sign' to codesign their Electron app do not have to make any changes to their build logic, but if you are using custom scripts instead, you will need to ensure that all three Helper apps are correctly codesigned.
To correctly package your application with these new helpers, use '[email protected]' or higher.

Miscellaneous changes in Electron 6.0

- Native Touch ID authentication support for macOS.
- Native emoji and color picker methods for Windows and macOS.
- The 'chrome.runtime.getManifest' API for Chrome extensions, which returns details about the app or extension from the manifest.
- The '<webview>.getWebContentsId()' method, which allows getting the WebContents ID of WebViews when the remote module is disabled.
- Support for the Chrome extension content script option 'all_frames', which lets an extension specify whether its JS and CSS files should be injected into all frames or only into the topmost frame in a tab.

With Electron 6.0, the team has also laid the groundwork for a future requirement that all native Node modules loaded in the renderer process be either N-API or Context Aware. This is being done for faster performance, better security, and reduced maintenance workload.

Along with the release announcement, the team announced the end of life of Electron 3.x.y and recommended upgrading to a newer version of Electron. For all the new features in Electron 6.0, check out the official announcement.

Read next:
- Electron 5.0 ships with new versions of Chromium, V8, and Node.js
- The Electron team publicly shares the release timeline for Electron 5.0
- How to create a desktop application with Electron [Tutorial]
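The "promisification" pattern at the heart of this release cycle can be sketched as follows (an illustrative TypeScript sketch, not Electron's actual implementation; the function name and data are invented). A promisified function returns a Promise when called without a callback, while still honoring the legacy callback-based invocation:

```typescript
type Callback<T> = (result: T) => void;

// Dual-mode function: promisified, with legacy callback support.
// (Real Electron APIs resolve asynchronously; this sketch invokes the
// callback synchronously to keep the example self-contained.)
function getCategoriesDemo(callback?: Callback<string[]>): Promise<string[]> | void {
  const categories = ["audio", "gpu", "ipc"]; // stand-in data
  if (callback) {
    callback(categories); // legacy callback-based invocation
    return;
  }
  return Promise.resolve(categories); // new Promise-based invocation
}
```

Callers migrating to the new style simply drop the callback argument and use `await getCategoriesDemo()` instead.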


Data Transfer Project: Now Apple joins Google, Facebook, Microsoft and Twitter to make data sharing seamless

Vincy Davis
01 Aug 2019
2 min read
Yesterday, the Data Transfer Project (DTP) announced on its website that Apple has officially joined the project as a contributor, alongside other tech giants like Google, Facebook, Microsoft and Twitter.

Read More: Google, Microsoft, Twitter, and Facebook team up for Data Transfer Project

The Data Transfer Project, launched in 2018, is an open-source, service-to-service data portability platform that allows individuals to move their data across the web whenever they want. The seamless transfer of data aims to give users more control over their data across the web. Its tools will make it possible for users to port their music playlists, contacts or documents from one service to another without much effort.

Currently, the DTP has 18 contributors. Its partners and open-source community have contributed more than 42,000 lines of code and changed more than 1,500 files in the project. Alternative social networks like Deezer, Mastodon, and Solid have also joined. New cloud logging and monitoring framework features and new APIs from Google Photos and SmugMug have also been added.

The Data Transfer Project is still in development; its official site states, "We are continually making improvements that might cause things to break occasionally. So as you are trying things please use it with caution and expect some hiccups." Its GitHub page has seen regular updates since launch and currently has 2,480 stars, 209 forks and 187 watchers.

Many users are happy that Apple has joined the project, as this means easier transfer of data for them.

https://twitter.com/backlon/status/1156259766781394944
https://twitter.com/humancell/status/1156549440133632000
https://twitter.com/BobertHepker/status/1156352450875592704

Some users suspect that such projects will encourage unethical sharing of user data.
https://twitter.com/zananeichan/status/1156416593913667585
https://twitter.com/sarahjeong/status/1156313114788241408

Visit the Data Transfer Project website for more details.

Read next:
- Google Project Zero reveals six "interactionless" bugs that can affect iOS via Apple's iMessage
- Softbank announces a second AI-focused Vision Fund worth $108 billion with Microsoft, Apple as major investors
- Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion, reports WSJ
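The service-to-service model DTP is built around can be sketched with the adapter pattern below (a hypothetical TypeScript illustration; the real project is written in Java and its interfaces differ, so all names here are invented). Each service implements an exporter and an importer against a shared data model, so any pair of services can exchange data through it:

```typescript
// A common data model decouples the two services: neither needs to know
// the other's native format.
interface Playlist { name: string; trackIds: string[]; }

interface Exporter { export(): Playlist[]; }
interface Importer { import(playlists: Playlist[]): number; } // returns count imported

// Transfer runs export on the source and import on the destination.
function transfer(from: Exporter, to: Importer): number {
  return to.import(from.export());
}
```

With N services, each only needs one exporter and one importer rather than N-1 pairwise converters, which is what makes adding a new contributor like Apple tractable.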


U.S. Senator introduces a new Social Media Addiction Reduction Tech (SMART) Act that bans endless scrolling and autoplay

Savia Lobo
31 Jul 2019
6 min read
Yesterday, Senator Josh Hawley proposed a bill targeting the techniques tech giants use to exploit users' attention and keep them addicted to their apps. The Social Media Addiction Reduction Technology (SMART) Act would "ban certain features that are designed to be addictive, would require choice parity for consent, and would give users the power to monitor their time spent on social media", Sen. Hawley's official post states.

"Big tech has embraced a business model of addiction. Too much of the 'innovation' in this space is designed not to create better products, but to capture more attention by using psychological tricks that make it difficult to look away. This legislation will put an end to that and encourage true innovation by tech companies," Senator Hawley said.

"Deceptive design played an enormous part in last week's FTC settlement with Facebook, and Hawley's bill would make it unlawful for tech companies to use dark patterns to manipulate users into opting into services", The Verge reports.

The bill would ban in-app achievements such as Snapchat's "Snapstreak", which get users addicted and make the platform difficult to leave. Snapchat explains Snapstreaks as: "The number next to the 🔥 tells you how many days you've been on a Snapstreak. For example, if you have an 8 next to the 🔥 it means you both have Snapped (not chatted) back and forth with this friend for 8 days."

The bill, if passed, would require social media companies to implement, within six months, a feature allowing users to set a time limit on how long they can access the platform each day. The default time limit would be 30 minutes; "if the user elects to increase or remove the time limit, [it] resets the time limit to 30 minutes a day on the first day of every month," the bill text says. The bill also demands a pop-up every 30 minutes notifying users of the total time spent.
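The time-limit rule as described can be sketched as follows (my reading of the bill's text, expressed in TypeScript; the names are invented, not taken from the bill):

```typescript
// Users default to 30 minutes per day; a user-chosen limit survives until
// the first day of the month, when it resets to the default.
const DEFAULT_LIMIT_MINUTES = 30;

interface LimitState { limitMinutes: number; }

function applyMonthlyReset(state: LimitState, dayOfMonth: number): LimitState {
  // Per the bill text: the limit "resets ... to 30 minutes a day on the
  // first day of every month."
  if (dayOfMonth === 1) return { limitMinutes: DEFAULT_LIMIT_MINUTES };
  return state;
}
```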
Apple and Google already include such monitoring systems with Screen Time and Digital Wellbeing, and Instagram and Facebook also let you keep tabs on how much time you spend on them each day.

Josh Golin, Executive Director of the Campaign for a Commercial-Free Childhood, said, "Social media companies deploy a host of tactics designed to manipulate users in ways that undermines their wellbeing. We commend Senator Hawley for introducing legislation that would prohibit some of the most exploitative tactics, including those frequently deployed on children and teens."

Sen. Hawley argued that natural stopping points, like the end of a page, naturally prompt users to choose whether to continue reading, and that tech giants eliminate these mental opportunities through structures like infinite scroll for newsfeeds and autoplay for videos. If the bill passes, companies would be banned, within three months, from offering features that automatically load and display content "other than music or video content that the user has prompted to play" without the person opting in. Users who reach the end of a block of tweets would have to "specifically request (such as by pushing a button or clicking an icon, but not by simply continuing to scroll) that additional content is loaded and displayed."

In a hearing late last month on putting legislative limits on the persuasiveness of technology, Tristan Harris, a former Google design ethicist, explained how platforms design products to increase the amount of time users spend on a site. "If I take the bottom out of this glass and I keep refilling the water or the wine, you won't know when to stop drinking. That's what happens with infinitely scrolling feeds", Harris told the committee. According to Bloomberg, Google and Facebook declined to comment.
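The "specific request" requirement described above can likewise be sketched (an invented TypeScript illustration, not the bill's language): content beyond the current page loads only on an explicit user action such as a button press, never as a side effect of scrolling.

```typescript
interface Feed { items: string[]; cursor: number; }

const PAGE_SIZE = 3;

// Appends the next page only when the user explicitly requested it.
// Under the bill, merely continuing to scroll would not count as a
// request; a button press or icon click would.
function loadMore(feed: Feed, userRequested: boolean, source: string[]): Feed {
  if (!userRequested) return feed;
  const next = source.slice(feed.cursor, feed.cursor + PAGE_SIZE);
  return { items: [...feed.items, ...next], cursor: feed.cursor + next.length };
}
```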
NetChoice, a trade group that counts both companies as members, said, "The goal of this bill is to make being online a less-enjoyable experience."

Many users and app developers are not in favour of the bill and have explained why this tech was implemented in the first place. A user on Hacker News writes, "I'm the dev that built Netflix's autoplay of the next episode. We built it first on the web player because it is easy to A/B test new features there. We called it "post-play" at the time…...So yes, Netflix wants you to spend more hours watching Netflix and the product team is scientifically engineering the product to make it more addictive. But...the product team at Doritos does the same thing."

https://twitter.com/ptbrennan11/status/1156221816983248896

A user on Reddit comments, "I design user interactions for a living, and infinite scroll is used ALL the time, and often in different levels/areas of a page. Just like we can design things to be fast, easy, addicting, etc, we can design things to be slow and require more consideration….. This seems like a thoughtless proposal that needs to be a step up from where it is: ….."

https://twitter.com/petersuderman/status/1156296953069744128
https://twitter.com/JDVance1/status/1156285549638012930

Many users may not accept the conditions of this bill, feeling it would be too restrictive and that pop-ups at set intervals would break their continuity online. However, some feel social media companies could at least give users a choice over autoplay settings. One user, David Kwan, commented on The Verge article, "With the intent of establishing UX/UI design policies, instead of a "ban," the US gov could establish design guidelines (like the UK gov) that can help mitigate online addiction and other design matters that affect how people interact with digital media. For example, instead of allowing platforms like YouTube to set autoplay settings "on" by default, the default setting should be set to "off" instead."

https://twitter.com/reckless/status/1156206435887439874

In May, Sen. Hawley introduced a bill to ban loot boxes in video games, saying such microtransactions exist to exploit children. In June, he introduced a bill that would require top internet companies to undergo external audits to evaluate whether their content moderation systems are free of political bias. To know more about Sen. Hawley's SMART Act in detail, read the proposed bill.

Read next:
- Along with platforms like Facebook, now websites using embedded 'Like' buttons are jointly responsible for what happens to the collected user data, rules EU court
- Microsoft mulls replacing C and C++ code with Rust calling it a "modern safer system programming language" with great memory safety features
- Microsoft adds Telemetry files in a "security-only update" without prior notice to users


After Musk’s Neuralink, now Facebook wants to read your thoughts. UCSF researchers could help them build a “non-invasive” brain-computer interface for AR glasses

Vincy Davis
31 Jul 2019
6 min read
Yesterday, Facebook posted a detailed report on its brain-computer interface (BCI) research, which aims to build a non-invasive device that can type what a person is imagining. The device is expected to be an input solution for future augmented reality (AR) glasses. Facebook first proposed its plan to build this technology at the F8 2017 conference.

https://twitter.com/boztank/status/1156228719129665539

Two weeks ago, Elon Musk presented Neuralink, a brain-computer interface technology based on a sewing machine-like robot that can implant ultrathin threads deep into the human brain. It uses four sensors placed in a wearable device, connected under the scalp to an inductive coil behind the ear; the device will contain a Bluetooth radio and a battery and will be controlled through an iPhone app. Neuralink aims to give people the ability to control computers and smartphones using their thoughts, while Facebook aims to read the human brain as an input solution for AR glasses. Unlike Neuralink, Facebook plans to take a non-invasive route to reading minds.

Part of Facebook's research vision coincides with a paper titled "Real-time decoding of question-and-answer speech dialogue using human cortical activity", published by a team of researchers from the University of California San Francisco (UCSF) and supported by Facebook. The paper demonstrates real-time decoding of perceived and produced speech from high-density electrocorticography (ECoG) activity in humans: it detects when participants heard or said an utterance and then decodes the utterance's identity. The researchers decoded perceived and produced utterances with 76% and 61% accuracy rates respectively. The work aims to help patients who are unable to speak or move due to locked-in syndrome, paralysis or epilepsy to interact on a rapid timescale similar to human conversation.

How does real-time decoding of question-and-answer speech work?
Three human epilepsy patients undergoing treatment at the UCSF Medical Center gave written informed consent to participate in the research. ECoG arrays were surgically implanted on the cortical surface (the outer layer of the cerebrum) of each participant.

[Image: the real-time decoding approach]

In each trial, the participant heard a question and saw a set of possible answer choices on a screen. The participant was asked to choose one of the answers and say it aloud when a green response cue appeared on the screen. At the same time, the participant's cortical activity was acquired from the ECoG electrodes implanted on the temporal and frontal cortex, and filtered in real time to extract high gamma activity.

Next, a speech detection model uses the spatiotemporal pattern of high gamma activity to predict whether a question is being heard or an answer is being produced (or neither) at each time point. If a question event is detected, the time window of high gamma activity is passed to a question classifier, which uses Viterbi decoding to compute question utterance probabilities; the question with the highest probability is the decoded question.

The stimulus set is designed so that each answer is only likely for a particular set of questions; these likelihoods are called context priors. The context priors are combined with the predicted question probabilities to obtain answer priors. When the speech detection model detects an answer event, the same procedure is followed with an answer classifier. Finally, the context integration model combines the answer classifier's probabilities with the answer priors to yield answer posterior probabilities; the answer with the highest posterior probability is the output, i.e., the decoded answer.
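The final context-integration step can be sketched numerically (a simplified illustration of the combination rule described above, not the authors' code): the classifier's probability for each answer is weighted by the context-derived answer prior, and the products are normalized into posteriors.

```typescript
// Combine answer-classifier likelihoods with context priors:
// posterior(answer) ∝ likelihood(answer) * prior(answer).
function answerPosteriors(
  likelihoods: Record<string, number>, // answer classifier output
  priors: Record<string, number>       // answer priors from the decoded question
): Record<string, number> {
  const unnormalized: Record<string, number> = {};
  let total = 0;
  for (const answer of Object.keys(likelihoods)) {
    const p = likelihoods[answer] * (priors[answer] ?? 0);
    unnormalized[answer] = p;
    total += p;
  }
  // Normalize so the posteriors sum to 1 (when any mass remains).
  const posteriors: Record<string, number> = {};
  for (const answer of Object.keys(unnormalized)) {
    posteriors[answer] = total > 0 ? unnormalized[answer] / total : 0;
  }
  return posteriors;
}
```

This is why the stimulus design matters: an answer that is implausible for the decoded question gets a near-zero prior and is effectively ruled out, whatever the classifier says.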
Thus, by integrating what the participants hear and say, the researchers use "an interactive question-and-answer behavioral paradigm" to model a real-world assistive communication setting for patients. Although with only three participants they were unable to "make quantitative and definitive claims about the relationship between decoder performance and functional-anatomical coverage", they see this as a promising step towards "demonstrating that produced speech can be detected and decoded from neural activity in real-time while integrating dynamic information from the surrounding context."

Many people have found the work fascinating and consider the research to have important implications for patients who are unable to communicate.

https://twitter.com/drjkwan/status/1156251608260534272
https://twitter.com/SaberaTalukder/status/1156450592274956288

How is Facebook using a brain-computer interface for AR?

The UCSF researchers note that their algorithm is so far capable of recognizing only a small set of words and phrases, and they are working towards translating a much larger vocabulary. Facebook says that Facebook Reality Labs (FRL) researchers have limited access to de-identified data, which remains onsite at UCSF and under its control at all times.

In the post, Facebook states that FRL has built a research kit of a wearable brain-computer interface device. The team has been testing the system's ability to decode single imagined words like "home," "select," and "delete," using non-invasive technologies based on near-infrared light. Facebook says that though the system is currently bulky, slow, and unreliable, its potential is significant. "We don't expect this system to solve the problem of input for AR anytime soon. It could take a decade, but we think we can close the gap," Facebook writes in the post.

Facebook is building towards the bigger goal of implementing systems that can "interact with today's VR systems — and tomorrow's AR glasses."

Users are highly skeptical of Facebook's vision, with many doubting Facebook's intentions in exploring brain control in the name of AR wearables.

https://twitter.com/HuxleysRazor/status/1156243187251470337
https://twitter.com/Croftt/status/1156256571569033216

Many have raised ethical questions, arguing that probing the human brain in the name of research is risky and disturbing.

https://twitter.com/gjergjdollani/status/1156320052943228935
https://twitter.com/MichaelCholod/status/1156303132315402240
https://twitter.com/hubertpaulo/status/1156320970778533888

Many have also raised concerns that Facebook, a company with a poor recent record (data breaches, GDPR violations, tracking users' data), cannot be trusted.

https://twitter.com/sterlingcrispin/status/1156360116557344768
https://twitter.com/sterlingcrispin/status/1156399668403691521

Facebook did manage to find a few supporters who were excited about the technology.

https://twitter.com/FKSportsBlog/status/1156436559173955584
https://twitter.com/DrewRoberts/status/1156255780548616192

Read next:
- Along with platforms like Facebook, now websites using embedded 'Like' buttons are jointly responsible for what happens to the collected user data, rules EU court
- The US Justice Department opens a broad antitrust review case against tech giants
- Microsoft Azure VP demonstrates Holoportation, a reconstructed transmittable 3D technology


Unity 2019.2 releases with updated ProBuilder, Shader Graph, 2D Animation, Burst Compiler and more

Fatema Patrawala
31 Jul 2019
3 min read
Yesterday, the Unity team announced the release of Unity 2019.2, adding more than 170 new features and enhancements for artists, designers, and programmers. The release updates ProBuilder, Shader Graph, 2D Animation, Burst Compiler, UI Elements, and more.

Major highlights of Unity 2019.2

ProBuilder 4.0 ships as verified with 2019.2. It is a unique hybrid of 3D modeling and level design tools, optimized for building simple geometry but capable of detailed editing and UV unwrapping as needed.

Polybrush is now available via the Package Manager as a Preview package. This versatile tool lets you sculpt complex shapes from any 3D model, position detail meshes, paint in custom lighting or coloring, and blend textures across meshes directly in the Editor.

DSPGraph, the new audio rendering/mixing system built on top of Unity's C# Job System, is now available as a Preview package.

The team has improved UI Elements, Unity's new UI framework, which renders UI for graph-based tools such as Shader Graph, Visual Effect Graph, and Visual Scripting. To help you better organize complex graphs, Unity has added subgraphs to Visual Effect Graph: you can share, combine, and reuse subgraphs for blocks and operators, and also embed complete VFX within VFX. The integration between Visual Effect Graph and the High-Definition Render Pipeline (HDRP), which pulls VFX Graph in by default, has also been improved, providing additional rendering features.

With Shader Graph, you can now use Color Modes to highlight nodes on your graph based on various features, or select your own colors to improve readability. This is especially useful in large graphs.

The team has added swappable-Sprite functionality to the 2D Animation tool. With this new feature, you can change a GameObject's rendered Sprites while reusing the same skeleton rig and animation clips. This lets you quickly create multiple characters using different Sprite Libraries or customize parts of them with Sprite Resolvers.

With this release, Burst Compiler 1.1 includes several improvements to JIT compilation time and some C# improvements. Additionally, the Visual Studio Code and JetBrains Rider integrations are available as packages.

Mobile developers will benefit from improved OpenGL support: the team has added OpenGL multithreading support on iOS to improve performance on low-end iOS devices that don't support Metal.

As with all releases, 2019.2 includes a large number of improvements and bug fixes. You can find the full list of features, improvements, and fixes in the Unity 2019.2 Release Notes.

Read next:
- How to use arrays, lists, and dictionaries in Unity for 3D game development
- OpenWrt 18.06.4 released with updated Linux kernel, security fixes for Curl and the Linux kernel, and much more!
- How to manage complex applications using Kubernetes-based Helm tool [Tutorial]

“Five Eyes” call for backdoor access to end-to-end encryption to tackle ‘emerging threats’ despite warnings from cybersecurity and civil rights communities

Fatema Patrawala
31 Jul 2019
5 min read
Yesterday the Guardian reported that the “Five Eyes” nations (UK, US, Canada, Australia and New Zealand) met in London on Tuesday. The representatives discussed plans to give spy agencies and police backdoor access to encrypted social media messages on platforms like Facebook and WhatsApp in order to combat online child abuse and terrorism. According to the Home Office, it was a two-day meeting hosted by the new UK Home Secretary, Priti Patel. The agenda for the meeting was to focus on ‘emerging threats’ and how best to address the opportunities and risks posed by new technologies.

Ministers attending the event included:

- Australian Minister for Home Affairs Peter Dutton MP
- Canadian Minister of Public Safety and Emergency Preparedness Ralph Goodale MP
- Canadian Minister of Immigration, Refugees and Citizenship Ahmed Hussen MP
- Canadian Associate Deputy Minister of Justice Francois Daigle
- New Zealand’s Minister of Justice Andrew Little MP
- New Zealand’s Attorney General David Parker MP
- US Attorney General William Barr
- US Acting Deputy Secretary of Homeland Security David Pekokse

Ministers discussed challenges with end-to-end encryption

Ms Patel demanded that Facebook, along with Twitter and Google, allow access to hidden messages by intelligence agencies. She told the Daily Telegraph, “The use of end-to-end encryption in this way has the potential to have serious consequences for the vital work which companies already undertake to identify and remove child abuse and terrorist content.”

“It will also hamper our own law enforcement agencies, and those of our allies, in their ability to identify and stop criminals abusing children, trafficking drugs, weapons and people, or terrorists plotting attacks.”

US Attorney General William Barr said in the meeting, “Encryption presents a unique challenge. We must ensure that we do not stand by as advances in technology create spaces where criminal activity of the most heinous kind can go undetected and unpunished.”

The security ministers of the five nations said in a statement that online child abuse material had increased twenty-fold in the past four years, to 18 million images found last year, according to the Guardian.

Controversial ‘ghost protocol’ proposed by UK intelligence agency

GCHQ, the UK agency which monitors and breaks into communications, has suggested that Silicon Valley companies could develop technology that would silently add a police officer or intelligence agent to conversations or group chats. This so-called “ghost protocol” is opposed by companies, civil society organizations and security experts alike. In May, tech companies signed an open letter to GCHQ opposing the concept. The protocol would have tech companies allow law enforcement access to encrypted messages by including the government as a third party that secretly receives a copy of the messages without the other parties' knowledge.

Dangers of ghost protocols

The reality is that users’ privacy will come under assault on a variety of fronts. Companies crave access to personal information because it helps them target users with advertisements. Bad actors want access to private information for nefarious and criminal purposes. And governments want the ability to know who you communicate with and what you say, presumably in order to better protect you. But it begs the question of how much intrusion we are willing to accept in the name of national security, because the dangers are innumerable. Consider China’s social credit system: if you are officially designated a “discredited individual,” or laolai in Mandarin, you are banned from spending on “luxuries,” whose definition includes air travel and fast trains. This class of people, most of whom have defaulted on their debts, sit on a public database maintained by China’s Supreme Court. For them, daily life is a series of inflicted indignities, some big, some small, from not being able to rent a home in their own name to being shunned by relatives and business associates, highlights the Inkstone report. You can’t have a secure system that fully protects user privacy while also having a backdoor that lets the government in whenever it sees fit.

Community criticizes move, saying backdoors render encryption worthless

There was an immediate backlash to this news, Forbes reported. A backdoor is a vulnerability, and introducing weakness into end-to-end encryption renders that encryption worthless. On Reddit, users from Australia criticized the development: “It is not the purview of government to know what its' citizens are talking about at all times.” On Hacker News, users discussed the difficulty of spying on platforms in bulk: “In bulk, as opposed to targeted spying - you can send an agent to hide behind the bushes, or plant a microphone, or infiltrate a group. Which was possible for a long time before computers or electronics, but it's not possible to do it at scale - you can spy on a few hundred people this way, but not on a few million.”

Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion, reports WSJ
Facebook signs on more than a dozen backers for its GlobalCoin cryptocurrency including Visa, Mastercard, PayPal and Uber
Microsoft quietly deleted 10 million faces from MS Celeb, the world’s largest facial recognition database

Stack Overflow suffered an outage yesterday

Fatema Patrawala
31 Jul 2019
2 min read
Yesterday the Stack Overflow site was down, according to the status report on the site. The outage map reported 931 issues on the Stack Overflow site originating from the United States, Canada, the United Kingdom, India, Brazil and 67 more countries. The official page read, “We apologize for any inconvenience, but an unexpected error occurred while you were browsing our site. It’s not you, it’s us. This is our fault.”

Downdetector also reported the Stack Overflow outage starting at 12:30 am PDT, showing that 79% of the problems were related to the website and 20% to its log-in page.

Though there were no reports from the official Stack Overflow page or its Twitter handle of the issues being resolved and the site being functional, Nick Craver, lead engineer and site reliability engineer at Stack Overflow, tweeted yesterday at 1:30 pm PDT, “All systems are green now - a non-yielding scheduler inside our primary SQL Server caused a cascade failure of systems that depend upon it. We'll follow-up with vendors and increase our resiliency in several systems here.”

https://twitter.com/Nick_Craver/status/1156260065797726208

On Hacker News, users speculated that the issue might have been caused by the rollout of one particular branch of the Stack Overflow codebase over to netcoreapp2.2. One user commented, “2 hours earlier: https://twitter.com/Nick_Craver/status/1156220122933207041 ‘We'll be carefully rolling this out starting shortly’ .. oopsie, debug mode in production commence.” Another user commented, “The question is how are they gonna fix it without stackoverflow?” Users were also annoyed with the lack of communication from the Stack Overflow team: “What's up here? I don't see any updates on their twitter or blog. What am I gonna do the rest of the day??”

https://twitter.com/DronpesAtWork/status/1156248140544016386

Last week, Stack Overflow users took to its Meta site to express concerns regarding the communication breakdown between the site and its community. The users highlighted that Stack Overflow has repeatedly failed to consult the community before coming up with a major change, the most recent case being the removal of the “Hot Meta Posts” section.

Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]
Building a Twitter news bot using Twitter API [Tutorial]
NSA warns users of BlueKeep vulnerability; urges them to update their Windows systems

Blender 2.80 released with a new UI interface, Eevee real-time renderer, grease pencil, and more

Bhagyashree R
31 Jul 2019
3 min read
After about three long years of development, the much-awaited Blender 2.80 finally shipped yesterday. This release comes with a redesigned UI, workspaces, templates, the Eevee real-time renderer, Grease Pencil, and much more.

The user interface is revamped with a focus on usability and accessibility

Blender’s user interface is revamped with a better focus on usability and accessibility. It has a fresh look and feel with a dark theme and a modern icon set. The icons change color based on the theme you select so that they remain readable against bright or dark backgrounds. Users can easily access the most-used features via the default shortcut keys or map their own. You will be able to fully use Blender with a one-button trackpad or pen input, as it now supports the left mouse button by default for selection. It provides a new right-click context menu for quick access to important commands in the given context. There is also a Quick Favorites popup menu where you can add your favorite commands.

Get started with templates and workspaces

You can now choose from multiple application templates when starting a new file. These include templates for 3D modeling, shading, animation, rendering, Grease Pencil based 2D drawing and animation, sculpting, VFX, video editing, and more. Workspaces give you a screen layout for specific tasks like modeling, sculpting, animating, or editing. Each template you choose provides a default set of workspaces that can be customized. You can create new workspaces or copy them from the templates as well.

Completely rewritten 3D viewport

Blender 2.80’s completely rewritten 3D viewport is optimized for modern graphics and offers several new features. The new Workbench render engine helps you get work done in the viewport for tasks like scene layout, modeling, and sculpting. Viewport overlays allow you to decide which utilities are visible on top of the render. The new LookDev shading mode allows you to test multiple lighting conditions (HDRIs) without affecting the scene settings. The smoke and fire simulations have been overhauled to make them look as realistic as possible.

Eevee real-time renderer

Blender 2.80 has a new physically-based real-time renderer called Eevee. It performs two roles: a renderer for final frames, and the engine driving Blender’s real-time viewport for creating assets. Among the features it supports are volumetrics, screen-space reflections and refractions, depth of field, camera motion blur, bloom, and much more. You can create Eevee materials using the same shader nodes as Cycles, which makes it easier to render existing scenes.

2D animation with Grease Pencil

Grease Pencil enables you to combine the 2D and 3D worlds right in the viewport. With this release, it has become a “full 2D drawing and animation system.” It comes with a new multi-frame edit mode with which you can change and edit several frames at the same time, and a Build modifier to animate drawings, similar to the Build modifier for 3D objects. Many other features have been added to Grease Pencil. Watch this video to get a glimpse of what you can create with it: https://www.youtube.com/watch?v=JF3KM-Ye5_A

Check out more features of Blender 2.80 on its official website.

Blender celebrates its 25th birthday!
Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool
Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects

NumPy 1.17.0 is here, officially drops Python 2.7 support pushing forward Python 3 adoption

Vincy Davis
31 Jul 2019
5 min read
Last week, the NumPy team released NumPy version 1.17.0. This version has many new features, improvements and changes to increase the performance of NumPy. Major highlights of this release include a new extensible numpy.random module, new radix sort and timsort sorting methods, and a pocketfft-based FFT implementation for more accurate transforms and better handling of datasets of prime length. Overriding of NumPy functions has also been enabled by default.

NumPy 1.17.0 supports Python versions 3.5 - 3.7. Python 3.8b2 will work with the new release source packages, but may not find support in future releases. The Python team had previously announced that Python 2.7 maintenance will stop on January 1, 2020. NumPy 1.17.0 officially dropping Python 2.7 is a step towards the adoption of Python 3. Developers who want to port their Python 2 code to Python 3 can check out the official porting guide released by Python.

Read More: NumPy drops Python 2 support. Now you need Python 3.5 or later.

What’s new in NumPy 1.17.0?

New extensible numpy.random module with selectable random number generators

NumPy 1.17.0 has a new extensible numpy.random module. It includes four selectable random number generators and improved seeding designed for use in parallel processes. PCG64 is the new default bit generator, while MT19937 is retained for backwards compatibility.

Timsort and radix sort have replaced mergesort for stable sorting

Both radix sort and timsort have been implemented and can be used instead of mergesort. For backward compatibility, the sorting kind options ‘stable’ and ‘mergesort’ have been made aliases of each other, with the actual sort implementation chosen internally. Radix sort is used for small integer types of 16 bits or less, and timsort for all remaining types.
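The new Generator-based random API and the stable-sort aliasing described above can be exercised with a short sketch (the seed and array values here are arbitrary, chosen only for illustration):

```python
import numpy as np
from numpy.random import Generator, PCG64

# New-style generator: PCG64 is the default bit generator in 1.17;
# np.random.default_rng(12345) is equivalent shorthand.
rng = Generator(PCG64(12345))
sample = rng.standard_normal(4)

# 'stable' and 'mergesort' are now aliases for the same stable sort;
# for this int16 array, radix sort is used under the hood.
a = np.array([3, 1, 2, 1], dtype=np.int16)
assert np.array_equal(np.sort(a, kind='stable'), np.sort(a, kind='mergesort'))
```

Because the two kind options are aliases, code written against either name keeps working; only the underlying implementation changed.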
empty_like and related functions now accept a shape argument

Functions like empty_like, full_like, ones_like and zeros_like now accept a shape keyword argument, which can be used to create a new array using another array as the prototype while overriding its shape. These functions become extremely useful when combined with the __array_function__ protocol, as they allow the creation of new arbitrary-shape arrays from NumPy-like libraries.

User-defined LAPACK detection order

numpy.distutils now reads a comma-separated, case-insensitive environment variable to determine the detection order for LAPACK libraries. This aims to help users with an MKL installation try different implementations.

.npy files support unicode field names

A new format version of .npy files has been introduced, enabling structured types with non-latin1 field names. It is used automatically when needed.

New mode “empty” for pad

The new mode “empty” pads an array to a desired shape without initializing the new entries.

New deprecations in NumPy 1.17.0

numpy.polynomial functions warn when passed float in place of int

Previously, functions in the numpy.polynomial module accepted float values where integers were expected. With NumPy 1.17.0, using float values there is deprecated for consistency with the rest of NumPy. In future releases, it will cause a TypeError.

Deprecate numpy.distutils.exec_command and temp_file_name

The internal use of these functions has been refactored in favor of better alternatives: exec_command is replaced with subprocess.Popen, and temp_file_name <numpy.distutils.exec_command> with tempfile.mkstemp.

Writeable flag of C-API wrapped arrays

When an array is created from the C-API to wrap a pointer to data, the writeable flag set during creation indicates the read-write nature of the data. In future releases, it will not be possible to switch the writeable flag to True from Python, as this is considered dangerous.
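A minimal sketch of the shape keyword on the *_like functions and the new “empty” pad mode described above (the array values are arbitrary, chosen only for illustration):

```python
import numpy as np

# *_like functions now take a shape keyword: the dtype comes from the
# prototype array, while the shape is overridden.
proto = np.ones((2, 3), dtype=np.float32)
a = np.zeros_like(proto, shape=(4, 5))
assert a.shape == (4, 5) and a.dtype == np.float32

# mode='empty' pads to the target shape without initializing the new
# entries, so only the original data region is guaranteed.
padded = np.pad(np.arange(3), pad_width=2, mode='empty')
assert padded.shape == (7,)
assert list(padded[2:5]) == [0, 1, 2]  # original values preserved in the middle
```

Skipping initialization of the pad region is what makes “empty” cheaper than modes like “constant” when the caller intends to fill those entries later anyway.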
Other improvements and changes

Replacement of the fftpack-based fft module by the pocketfft library

The pocketfft library contains additional modifications compared to fftpack which improve accuracy and performance. If an FFT length has large prime factors, pocketfft uses Bluestein's algorithm, which maintains O(N log N) run time complexity instead of deteriorating towards O(N*N) for prime lengths.

Array comparison assertions include maximum differences

Error messages from array comparison tests such as testing.assert_allclose now include “max absolute difference” and “max relative difference”, along with the previous “mismatch” percentage. This makes it easier to update absolute and relative error tolerances.

median and percentile family of functions no longer warn about nan

Functions like numpy.median, numpy.percentile, and numpy.quantile used to emit a RuntimeWarning when encountering a nan. Since these functions return the nan value, the warning is redundant and has been removed.

timedelta64 % 0 behavior adjusted to return NaT

The modulus operation with two np.timedelta64 operands now returns NaT in case of division by zero, rather than returning zero.

Though users are happy with the NumPy 1.17.0 features, some are upset over Python 2.7 being officially dropped. https://twitter.com/antocuni/status/1156236201625624576

For the complete list of updates, head over to the NumPy 1.17.0 release notes.

Plotly 4.0, popular python data visualization framework, releases with Offline Only, Express first, Displayable anywhere features
Python 3.8 new features: the walrus operator, positional-only parameters, and much more
Azure DevOps report: How a bug caused ‘sqlite3 for Python’ to go missing from Linux images

Stack Overflow faces backlash for removing the “Hot Meta Posts” section; community feels left out of decisions

Bhagyashree R
30 Jul 2019
4 min read
Last week, Stack Overflow users took to its Meta site to express their concern regarding the communication breakdown between the site and its community. The users highlighted that Stack Overflow has repeatedly failed to consult the community before coming up with a major change, the most recent being the removal of the “Hot Meta Posts” section.

“It has become a trend that changes ("features") are pushed out without any prior consultation. Then, in the introductory meta post of said change, push-back is shown by the community and as a result, once again, everyone is left with a bad taste in their mouth from another sour experience,” a user wrote.

The backlash comes after Stack Overflow announced last week that it is removing the “Hot Meta Posts” section from its right sidebar. This section listed questions picked semi-randomly every 20 minutes from all posts scoring 3 or more and posted within the past two weeks. As an alternative, moderators can highlight important posts with the help of the “featured” tag. Among the reasons cited for removing this feature were that Meta hasn’t scaled very well since its introduction, and that the questions in the “Hot Meta Posts” section are not ideal for attracting new people.

Sara Chipps, an Engineering Manager at Stack Overflow, further said that the feature was also affecting the well-being of Stack Overflow employees. “Stack Overflow Employees have panic attacks and nightmares when they know they will need to post something to Meta. They are real human beings that are affected by the way people speak to them. This is outside of the CM team, who have been heroes and who I constantly see abused here,” she wrote.

Earlier this month, Stack Overflow faced a similar backlash when it updated its home page. Many users were upset that the page gave more space to Stack Overflow’s new proprietary products while hiding away the public Q&A feature, which is the main feature Stack Overflow is known for. The company apologized and acted on the feedback. However, users think that this has become a routine. A user added, “It's almost as though you (the company, not the individual) don't care about the users (or, from a cynic's perspective, are actively trying to push out the old folks to make way for the new direction SO is headed in) who have been participating for the best part of a decade.”

Some users felt that Stack Overflow does consult with users, just not out in the open: “I think they are consulting the community, they're just doing it non publicly and in a different forum, via external stakeholders and interested people on external channels, via data science, and via research interviews and surveys from the research list,” a user commented.

Yaakov Ellis, a developer on the Community dev team at Stack Overflow, assured that Stack Overflow is committed to making the community feel involved and has no intention of ceasing the interaction between the company and the community. However, he did admit that there is “internal anxiety” about being more open with the community about different projects and initiatives. He listed the following reasons:

- Plans can change, and it is more awkward to do that when it is under the magnifying glass of community discussion.
- Functionality being worked on can change direction.
- Some discussions and features may not be things that the Meta community will be big fans of. And even if we believe that these items are for the best, there will also be times when (with the best intentions), as these decisions have been made after research, data, and users have been consulted, the actual direction is not up for discussion.
- We can't always share those for privacy purposes, and this causes the back and forth with objectors to be difficult.

He further said that there is a need to reset some expectations regarding what happens with the feedback provided by users. “We value it, and we absolutely promise to listen to all of it [...], but we can't always take the actions that folks here might prefer. We also can't commit to communicating everything in advance, especially when we know that we're simply not open to feedback about certain things, because that would be wasting people's time.”

Do Google Ads secretly track Stack Overflow users?
Stack Overflow confirms production systems hacked
Stack Overflow faces backlash for its new homepage that made it look like it is no longer for the open community

Along with platforms like Facebook, now websites using embedded 'Like' buttons are jointly responsible for what happens to the collected user data, rules EU court

Vincy Davis
30 Jul 2019
5 min read
Yesterday, a significant judgement was made on the usage of Facebook’s ‘Like’ feature by third party websites. The Court of Justice of the European Union (ECJ) ruled that the operator of a third party website with an embedded Facebook ‘Like’ button can be held jointly responsible for the initial collection and transmission of visitors’ personal data to its website, under the European Union’s General Data Protection Regulation (GDPR). It also stated that “By contrast, that operator is not, in principle, a controller in respect of the subsequent processing of those data carried out by Facebook alone.”

This ruling was made in a case filed by a German consumer protection association, Verbraucherzentrale NRW, against the online clothing retailer Fashion ID. The court found that a Facebook ‘Like’ button installed on a third party website allows Facebook to collect users’ information without their consent, irrespective of whether users clicked the button or were even members of the social network.

According to the press release, the ECJ has set guidelines that third party website operators must seek consent from site visitors, clarifying the identity and purpose of the information transmission, before the data is handed over to Facebook. It also adds that, for lawful processing, operators “must pursue a legitimate interest through the collection and transmission of personal data in order for those operations to be justified in that regard.”

The ‘Like’ button, introduced by Facebook 10 years ago, is one of its most utilized features, and has been adopted in some form by most other social media platforms, like YouTube, Twitter and Instagram. It makes sharing content, or opinions about content, extremely convenient with a single click.

This ruling is significant because many online portals use the ‘Like’ button to make their products more visible on Facebook without considering the consequences of data sharing with social media platforms. Last year, Facebook notified the UK parliament that between April 9 and April 16, the ‘Like’ button appeared on 8.4M websites. This judgement comes as a warning to all third party websites, as they can no longer hide behind Facebook for their complicity in dodgy data gathering practices. Last month, the BBC reported that Facebook uses information from the ‘Like’ button not only to alter newsfeeds and apply behavioural advertising, but also as a tool to target elections and manipulate people’s emotional state.

Addressing the court judgement, Jack Gilbert, Associate General Counsel at Facebook, said, “Website plugins are common and important features of the modern Internet. We welcome the clarity that today’s decision brings to both websites and providers of plugins and similar tools.” He further added that they are reviewing the court’s decision and “will work closely with our partners to ensure they can continue to benefit from our social plugins and other business tools in full compliance with the law.”

Facebook, which believes there’s no expectation of privacy on social media, has a record of trying to evade or delay justice by exploiting legal loopholes. A GDPR-violation lawsuit filed against Facebook by a privacy activist went on for five long years as Facebook repeatedly questioned whether GDPR-based cases fall under the jurisdiction of the courts, until the objection was rejected by the Austrian Supreme Court this year. Last year, a lawsuit was filed against Facebook over a data breach impacting nearly 30 million users. In response, Facebook argued that some of the leaked information was not sensitive. However, an appellate court in San Francisco ruled against Facebook’s appeal last month.

When caught red-handed, Facebook has attempted to deploy its PR and lobbying juggernaut to turn verdicts in its favour. Two months ago, reports emerged claiming that Facebook allegedly pressured and “arm-wrestled” an EU expert group to soften European guidelines for fake news. Facebook’s chief lobbyist, Richard Allan, allegedly threatened the expert group by saying, “We are happy to make our contribution, but if you go in that direction, we will be controversial”, and that Facebook would stop its support for journalistic and academic projects.

We will have to wait and watch whether Facebook and other social media and content platforms will comply with the GDPR data protection framework this time, or whether they will again try to escape the law.

https://twitter.com/MaxMoranHi/status/1155855819868688384
https://twitter.com/PepperoniRollz/status/1155785646595817473

Some people are satisfied that the court ruling will ensure third party websites think hard before sharing user information with Facebook. A user on Hacker News commented, “Good. I don't want you telling Facebook I've visited your website.”

https://twitter.com/LguzzardiM/status/1155894027046158336
https://twitter.com/ChopinOpera/status/1155962319874080768

To tackle the plague of tracking via the Facebook ‘Like’ button, open source developers are coming up with their own solutions. Social Share Privacy is one such jQuery plugin project. It enables third party websites to disable the Like/recommend button on their pages by default.

Read the Court of Justice of the European Union’s press release for more information.

“Why was Rust chosen for Libra?”, US Congressman questions Facebook on Libra security design choices
CraftAssist: An open-source framework to enable interactive bots in Minecraft by Facebook researchers
Facebook released Hermes, an open source JavaScript engine to run React Native apps on Android

#WeWontBuildIt: Amazon workers demand company to stop working with Palantir and take a stand against ICE

Fatema Patrawala
30 Jul 2019
4 min read
On Monday, a group of Amazon employees sent out an internal email to the We Won’t Build It mailing list, calling on Amazon to stop working with Palantir. Palantir, a data analytics company founded by Peter Thiel, one of President Trump’s most vocal supporters in Silicon Valley, has a strong association with Immigration and Customs Enforcement (ICE).

https://twitter.com/WeWontBuildIt/status/1155872860742664194

Last year in June, an alliance of more than 500 Amazon employees signed a petition addressed to CEO Jeff Bezos and AWS head Andy Jassy, asking the company to abandon its contracts with government agencies. It seems that those protests are ramping up again. The email sent to employee mailing lists within Amazon Web Services demanded that Palantir be removed from Amazon’s cloud for violating its terms of service. It also called on Amazon to take a stand against ICE by making a statement establishing its position against immigration raids, deportations and camps for migrants at the border. The employees have also demanded that Amazon stop selling its facial recognition tech to government agencies.

https://twitter.com/WeWontBuildIt/status/1155872862055485441

In May, Amazon shareholders rejected a proposal to ban the sale of its facial recognition tech to governments, along with eleven other proposals made by employees, including a climate resolution and salary transparency. “The world is watching the abuses in ICE's concentration camps unfold. We know that our company should, and can do better,” the email read.

The protests broke out at Amazon’s AWS Summit, held in New York last Thursday. As Amazon CTO Werner Vogels gave a presentation, a group led by a man identified in a tweet as a tech worker interrupted to protest Amazon’s ties with ICE.

https://twitter.com/altochulo/status/1149305189800775680
https://twitter.com/MaketheRoadNY/status/1149306940377448449

Vogels was caught off guard by the protests but continued on about the specifics of AWS, according to ZDNet. “I’m more than willing to have a conversation, but maybe they should let me finish first,” Vogels said amidst protesters, whose audio was cut off on Amazon’s official livestream of the event, per ZDNet. “We’ll all get our voices heard,” he said before returning to his planned speech.

According to Business Insider, Palantir has a $51 million contract with ICE, which entails providing software to gather data on undocumented immigrants’ employment information, phone records, immigration history and similar information. Its software is hosted in the AWS cloud. The email states that Palantir enables ICE to violate the rights of others and that working with such a company is harmful to Amazon’s reputation. The employees also state that their protest is in the spirit of similar actions at companies including Wayfair, Microsoft and Salesforce, where workers have protested against their employers to cut ties with ICE and US Customs and Border Protection (CBP).

Amazon has been facing increasing pressure from its employees. Last week, workers protested on Amazon Prime Day, demanding safe working conditions and fair wages. Amazon, which typically takes a cursory view of such employee outcry, has so far given no indication that it will reconsider providing services to Palantir and other law enforcement agencies. Instead, the company argued that the government should determine what constitutes “acceptable use” of the technology it sells. “As we’ve said many times and continue to believe strongly, companies and government organizations need to use existing and new technology responsibly and lawfully,” Amazon said to BuzzFeed News. “There is clearly a need for more clarity from governments on what is acceptable use of AI and ramifications for its misuse, and we’ve provided a proposed legislative framework for this. We remain eager for the government to provide this additional clarity and legislation, and will continue to offer our ideas and specific suggestions.”

Other tech worker groups, like Google Walkout For Real Change and Ban Google for Pride, stand in solidarity with Amazon workers on this protest.

https://twitter.com/GoogleWalkout/status/1155976287803998210
https://twitter.com/NoPrideForGoog/status/1155906615930806276

#TechWontBuildIt: Entropic maintainer calls for a ban on Palantir employees contributing to the project and asks other open source communities to take a stand on ethical grounds
Amazon workers protest on its Prime day, demand a safe work environment and fair wages
Amazon shareholders reject proposals to ban sale of facial recognition tech to govt and to conduct independent review of its human and civil rights impact
Read more
  • 0
  • 0
  • 3087
article-image-baidu-open-sources-ernie-2-0-a-continual-pre-training-nlp-model-that-outperforms-bert-and-xlnet-on-16-nlp-tasks
Fatema Patrawala
30 Jul 2019
3 min read

Baidu open sources ERNIE 2.0, a continual pre-training NLP model that outperforms BERT and XLNet on 16 NLP tasks

Today, Baidu released ERNIE 2.0, a continual pre-training framework for natural language processing. ERNIE stands for Enhanced Representation through kNowledge IntEgration. Baidu claims in its research paper that ERNIE 2.0 outperforms BERT and the recent XLNet on 16 NLP tasks in Chinese and English. Additionally, Baidu has open sourced the ERNIE 2.0 model.

In March, Baidu announced the release of ERNIE 1.0, its pre-trained model based on PaddlePaddle, Baidu’s deep learning open platform. According to Baidu, ERNIE 1.0 outperformed BERT in all Chinese language understanding tasks.

The pre-training procedures of models such as BERT, XLNet, and ERNIE 1.0 are mainly based on a few simple tasks modeling the co-occurrence of words or sentences, the paper highlights. For example, BERT constructed a bidirectional language model task and a next-sentence prediction task to capture the co-occurrence information of words and sentences, while XLNet constructed a permutation language model task to capture the co-occurrence information of words.

Besides co-occurrence information, however, there is much richer lexical, syntactic, and semantic information in training corpora. For example, named entities, such as person names, place names, and organization names, contain concept information; sentence order and sentence proximity can enable models to learn structure-aware representations; and semantic similarity at the document level or discourse relations among sentences can enable models to learn semantic-aware representations. So is it possible to further improve performance if the model were trained on more kinds of tasks continually?

Source: ERNIE 2.0 research paper

Based on this idea, Baidu has proposed a continual pre-training framework for language understanding in which pre-training tasks can be incrementally built and learned through multi-task learning in a continual way.
According to Baidu, in this framework different customized tasks can be incrementally introduced at any time and trained through multi-task learning, which enables the encoding of lexical, syntactic, and semantic information across tasks. Whenever a new task arrives, the framework can incrementally train the distributed representations without forgetting the previously trained parameters.

The Structure of the Released ERNIE 2.0 Model

Source: ERNIE 2.0 research paper

ERNIE 2.0 is a continual pre-training framework that provides a feasible scheme for developers to build their own NLP models. The fine-tuning source code of ERNIE 2.0 and the pre-trained English version models can be downloaded from the GitHub page.

The team at Baidu compared the performance of the ERNIE 2.0 model with existing pre-training models on the English GLUE dataset and on 9 popular Chinese datasets separately. The results show that ERNIE 2.0 outperforms BERT and XLNet on 7 GLUE language understanding tasks and outperforms BERT on all 9 Chinese NLP tasks, such as DuReader machine reading comprehension, sentiment analysis, and question answering.

Specifically, according to the experimental results on the GLUE datasets, ERNIE 2.0 almost comprehensively outperforms BERT and XLNet on English tasks, whether as a base model or a large model. Furthermore, the research paper shows that the ERNIE 2.0 large model achieves the best performance and sets new state-of-the-art results on the Chinese NLP tasks.

Source: ERNIE 2.0 research paper

To know more about ERNIE 2.0, read the research paper and check out the official blog on Baidu’s website.

DeepMind’s AI uses reinforcement learning to defeat humans in multiplayer games

CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks

Transformer-XL: A Google architecture with 80% longer dependency than RNNs

Bhagyashree R
30 Jul 2019
4 min read

C++20 Committee Draft finalized with a new text formatting API, contracts unanimously deferred, and more

The ISO C++ Committee met last week in Cologne, Germany to complete and publish the Committee Draft (CD) of the next C++ standard, called C++20. This standard will bring some game-changing advancements, including modules, concepts, coroutines, and ranges, to C++. Here are some of the changes made to the draft in this meeting:

Contracts moved out of C++20

A contract specifies a set of preconditions, postconditions, and assertions that a software component should adhere to. The committee unanimously decided to move contracts out of C++20 and defer them to a later standard because the feature has recently gone through major design changes. The committee was unsure of the impact or implications of these changes, as it did not have much usage experience with contracts.

“In short, contracts were just not ready. It's better for us to ship contracts in a form that better addresses the use cases of interest in a future standard instead of shipping something we are uncertain about in C++20. Notably, this decision was unanimous -- all of the contracts’ co-authors agreed to this approach,” wrote the committee.

To continue the work on contracts, a new study group named SG21 has been created. It will be chaired by John Spicer from Edison Design Group and includes all the original authors and members who are interested in working on contracts.

std::format, a new text formatting API

One of the key advantages of the ‘printf’ syntax is its familiarity among developers. However, it does suffer from a few drawbacks. The format specifiers it provides, like hh, h, l, and j, are redundant in type-safe formatting and can unnecessarily complicate specification and parsing. The printf syntax also does not provide a standard way of extending the syntax for user-defined types. C++20 will come with a new text formatting API called ‘std::format’ that aims to offer a flexible, safe, and fast alternative to (s)printf and iostreams.
Based on the syntax found in Python, the .NET family of languages, and Rust, it uses ‘{‘ and ‘}’ as replacement field delimiters instead of %.

The C++20 synchronization library

The new standard will bring improved synchronization and thread coordination facilities. It will support efficient atomic waiting and semaphores, latches, barriers, atomic_flag::test, lock-free integral types, and more.

The next step for the committee is to submit the draft to all the national standards bodies to gather their feedback. The committee plans to address this feedback in the next two meetings and then publish the C++20 standard at the February 2020 meeting in Prague.

Developers are excited about the features C++20 will bring. A Reddit user commented, “Wow, the C++ committee is really doing a great job. There are so many good features coming into the standard (std::format, constexpr features, better threading support, etc, etc). Thank you all for all of your hard work.”

Others are not very impressed by the ‘web_view’ proposal, which introduces a facility that aims to enable natural, multimodal user interaction with the help of existing web standards and technologies. One user added, “Very surprising, I didn't expect that because personally, I think that the proposal is not very good. If we use JS and other technologies to display stuff, why not directly use those languages? Why go through C++? But maybe I don't understand it; I'll make sure to go through the minutes.”

You can read the full report posted by the ISO C++ Committee for more details.

ISO C++ Committee announces that C++20 design is now feature complete

GCC 9.1 releases with improved diagnostics, simpler C++ errors and much more

Code completion suggestions via IntelliCode comes to C++ in Visual Studio 2019