
Tech News - Data

1208 Articles

2018 prediction: Was reinforcement learning applied to many real-world situations?

Prasad Ramesh
27 Feb 2019
4 min read
Back in 2017, we predicted that reinforcement learning would be an important subplot in the growth of artificial intelligence. After all, a machine learning agent that adapts and ‘learns’ according to environmental changes has all the makings of an incredibly powerful strain of artificial intelligence. Surely, then, the world was going to see new and more real-world uses for reinforcement learning. But did that really happen? You can bet it did. However, with all things intelligent subsumed into the sexy, catch-all term artificial intelligence, you might have missed where reinforcement learning was used.

Let’s go all the way back to 2017 to begin. This was the year that marked a turning point for reinforcement learning. The biggest and most memorable event was perhaps when Google’s AlphaGo defeated the world’s best Go player. That victory could ultimately be attributed to reinforcement learning: AlphaGo ‘played’ against itself multiple times, each time becoming ‘better’ at the game, developing an algorithmic understanding of how it could best defeat an opponent. However, reinforcement learning went well beyond board games in 2018.

Reinforcement learning in cancer treatment

MIT researchers used reinforcement learning to improve brain cancer treatment. Essentially, the reinforcement learning system is trained on a set of data on established treatment regimens for patients, and then ‘learns’ to find the most effective strategy for administering cancer treatment drugs. The important point is that artificial intelligence here can help to find the right balance between administering and withholding the drugs.

Reinforcement learning in self-driving cars

In 2018, UK self-driving car startup Wayve trained a car to drive using its ‘imagination’. Real-world data was collected offline to train the model, which was then used to observe and predict the ‘motion’ of items in a scene and drive on the road.
Even though the data was collected in sunny conditions, the system can also drive in rainy conditions, adjusting itself to reflections from puddles and the like. Because the data is collected from the real world, there are no major differences between simulation and real application.

UC Berkeley researchers also developed a deep reinforcement learning method to optimize SQL joins. The join ordering problem is formulated as a Markov Decision Process (MDP), and a method called Q-learning is applied to solve it. The resulting deep reinforcement learning optimizer, called DQ, produces solutions that are close to optimal across all cost models, and it does so without any prior information about the index structures.

Robot prosthetics

OpenAI researchers created a robot hand called Dactyl in 2018. Dactyl has human-like dexterity for performing complex in-hand manipulations, achieved through the use of reinforcement learning.

Finally, it’s back to Go. Well, not just Go - chess, and a game called Shogi too. This time, DeepMind’s AlphaZero was the star. Whereas AlphaGo managed to master Go, AlphaZero mastered all three. This is significant because it suggests that reinforcement learning could help develop a more generalized intelligence than current approaches allow - an intelligence that is able to adapt to new contexts and situations, to almost literally learn the rules of very different games. But there was something else impressive about AlphaZero: it was only introduced to a set of basic rules for each game. Without any domain knowledge or examples, the newer program outperformed the then state-of-the-art programs in all three games with only a few hours of self-training.

Reinforcement learning: making an impact IRL

These were just some of the applications of reinforcement learning to real-world situations to come out of 2018.
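The Q-learning method mentioned above can be illustrated with a minimal tabular sketch. This is a generic toy example, not Berkeley's actual DQ code: the tiny MDP, its rewards, and all parameter values are invented purely to show the update rule (new estimate = old estimate plus a step towards reward-plus-discounted-best-next-value).

```python
import random

# Toy MDP: states 0..3. Action 1 advances to the next state with reward -1;
# action 0 stays put with reward -2. State 3 ends the episode.
# (Invented dynamics, purely illustrative.)
def step(state, action):
    if action == 1:
        return state + 1, -1.0
    return state, -2.0

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(4)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s < 3:
            # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
            a = rng.randrange(2) if rng.random() < epsilon else max((0, 1), key=lambda x: Q[s][x])
            s2, r = step(s, a)
            # Classic Q-learning update; no bootstrap from the terminal state.
            target = r if s2 == 3 else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# After training, advancing (action 1) is preferred in every non-terminal state.
assert all(q[1] > q[0] for q in Q[:3])
```

DQ applies the same idea at scale, with a neural network standing in for the table and join orderings as the state/action space.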
We’re sure we’ll see more as 2019 develops - the only real question is just how extensive its impact will be.

This AI generated animation can dress like humans using deep reinforcement learning
Deep reinforcement learning – trick or treat?
DeepMind open sources TRFL, a new library of reinforcement learning building blocks


Cloudflare takes a step towards transparency by expanding its government warrant canaries

Amrata Joshi
27 Feb 2019
3 min read
Just two days ago, Cloudflare, a U.S.-based company that provides content delivery network services, DDoS (Distributed Denial of Service) mitigation, Internet security, and more, took a strong step towards transparency by releasing its transparency report for the second half of 2018. The company has been publishing biannual Transparency Reports since 2013. A post by Cloudflare reads, “We believe an essential part of earning the trust of our customers is being transparent about our features and services, what we do – and do not do – with our users’ data, and generally how we conduct ourselves in our engagement with third parties such as law enforcement authorities.”

The company believes in allowing companies to silently warn customers when the government secretly tries to acquire customer data. The term “warrant canary” comes from the canary birds that coal miners once took into the mines: if the canary died, the miners knew something was wrong. The warrant canary has been described as a key transparency tool that privacy-focused companies can use to keep their customers informed about what is happening to their data.

Cloudflare’s current canaries

Cloudflare has set forth certain ‘warrant canary’ statements - things the company claims it has never done. According to Cloudflare, the company has never leaked its SSL keys or customers’ SSL keys to anyone. The company claims to never have installed any law enforcement software or equipment anywhere on its network. The report also states that the company has never terminated a customer or taken down content due to political pressure, and has never provided customers’ content to any law enforcement organization.

Cloudflare’s updated warrant canaries

The company has never modified customer content at the request of law enforcement or another third party.
Cloudflare has never modified the destination of DNS responses at the request of law enforcement or another third party. It has never compromised, weakened, or subverted any of its encryption at the request of law enforcement or another third party. Cloudflare has also expanded its first canary, confirming that the company has never turned over its encryption or authentication keys, or its customers’ encryption or authentication keys, to anyone. Cloudflare said that if it were ever asked to do any of the above, the company would “exhaust all legal remedies” to protect customer data, and would remove the statements from its site.

Big companies like Apple have also worked in this direction. Apple included a statement in its most recent transparency report stating that the company has to date “not received any orders for bulk data.” Reddit, by contrast, removed its warrant canary in 2015, indicating that it had received a national security order it wasn’t permitted to disclose.

Cloudflare has responded to seven of the 19 subpoenas it received, affecting 12 accounts and 309 domains, and to 44 of the 55 court orders it received, affecting 134 accounts and 19,265 domains. To know more about this news, check out Cloudflare’s official post.

workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice
Cloudflare’s 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
Cloudflare’s Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly


MariaDB announces MariaDB Enterprise Server and welcomes Amazon’s Mark Porter as an advisor to the board of directors

Amrata Joshi
27 Feb 2019
4 min read
Yesterday, at MariaDB OpenWorks 2019, the database’s annual user and developer conference, the MariaDB Corporation team announced a new, fully open source MariaDB Enterprise Server. The MariaDB Enterprise Server will support customers by delivering a database engineered for greater reliability and stability, and it will become the default version for customers, whether on-premise or in the cloud. Max Mether, VP of Server Product Management at MariaDB Corporation, said to us in an email, "We're seeing that our enterprise customers have very different needs from the average community user. These customers are working on a completely different scale with a strong focus on stability and security. In order to be able to cater to these requirements, it is clear that we need to focus on a different solution by creating another version of MariaDB Server specifically focused on enterprise production workloads."

Features of MariaDB Enterprise Server

New enterprise-centric features: MariaDB Enterprise Server comes with features that solve specific enterprise requirements. The new features in development include enhanced MariaDB Backup, an improved audit plugin, and full data-at-rest encryption of MariaDB Cluster.

Security, performance and scalability for production: MariaDB Enterprise Server is configured for secure, high-performance production environments, unlike Community Server. It provides reliable and faster backups for large databases, and end-to-end encryption for all data at rest in MariaDB clusters.

Stability at scale: MariaDB Enterprise Server goes through rigorous quality assurance and testing and is pre-configured to fulfill the requirements of secure production environments.

Release Integrity: MariaDB Enterprise Server is distributed securely with a clearly established chain of custody from MariaDB to customers, ensuring that binaries cannot be tampered with.
Pat Casey, SVP of Development and Operations at ServiceNow, said to us via email, "Thousands of the world's largest organizations depend on the Now Platform to create great experiences and unlock productivity. Better quality assurance and stability of critical enterprise features are extremely compelling. At our scale and in production with 100,000 MariaDB databases, reliability is what matters most." MariaDB Enterprise Server 10.4 will be available with the next version of MariaDB Platform in spring 2019. The team will also release GA versions of MariaDB Enterprise Server 10.2 and 10.3 this spring, which will include high-end enterprise features such as enhanced Backup.

Amazon’s Mark Porter joins MariaDB as an advisor to the board of directors

The company further announced that Mark Porter, who until recently ran the Amazon Relational Database Service (RDS), has joined MariaDB as an advisor to the board of directors. Porter is currently the CTO at Grab, a transportation and mobile payments company, and has previously served as a vice president at Oracle Corporation. Porter said, "MariaDB's DBaaS solutions give businesses many advantages. By focusing on customer needs and using their deep database expertise, they have built optimizations, flexibility and enterprise capabilities that no one else can deliver. With MariaDB's growing popularity as an option to escape Oracle, the opportunity is extremely strong to capture large market share and delight customers. I'm both humbled and thrilled to be part of the MariaDB team as relational databases continue to run the most important companies on the internet." According to the team at MariaDB, Porter will contribute his expertise in cloud, distributed systems and database operations to help MariaDB rapidly grow its database-as-a-service (DBaaS) offering. He will also work on growing SkySQL, and will further integrate new distributed technology into MariaDB Platform.
Michael Howard, CEO, MariaDB Corporation, said, "Mark's guidance will be a tremendous asset in building a next-generation MariaDB cloud. Mark has a proven record of operating and scaling database services while driving rapid growth. SkySQL is designed from the ground up to offer the best MariaDB service for multi-cloud, including private cloud environments. It offers enterprise product capabilities beyond the MariaDB community server, that is used widely in public clouds, to ensure quality of service, security and features otherwise only found in proprietary legacy databases."

TiDB open sources its MySQL/MariaDB compatible data migration (DM) tool
MariaDB acquires Clustrix to give database customers ‘freedom from Oracle lock-in’
MariaDB 10.3.7 releases


FastMail expresses issues with Australia’s Assistance and Access bill

Savia Lobo
26 Feb 2019
2 min read
Australian email provider FastMail recently reported that it is losing customers following Australia’s Assistance and Access (A&A) bill, and that it has received requests to shift its email operations outside Australia. The bill faced a lot of opposition from the tech community when it was first passed at the end of last year. FastMail CEO Bron Gondwana said, “The way in which [the laws] were introduced, debated, and ultimately passed ... creates a perception that Australia has changed - that we are no longer a country which respects the right to privacy.”

“We have seen existing customers leave, and potential customers go elsewhere, citing this bill as the reason for their choice. We are [also] regularly being asked by customers if we plan to move”, Gondwana said in an email. Gondwana mentions that the problems caused by this bill revolve around perception and trust. His email states, “Our staff are curious and capable - if our system is behaving unexpectedly, they will attempt to understand why. This is a key part of bug discovery and keeping our systems secure.”

The email further states, “Technology is a tinkerer’s arena. Tools exist to monitor network data, system calls and give computer users more observability than ever before. Secret data exfiltration code may be discovered by tinkerers or even anti-virus firms looking at unexpected behaviour.” “Additionally, as code is refactored and products change over time, ensuring that a technical capability isn’t lost means that everybody working on the design and implementation needs to know that the technical capability exists and take it into account.” To know more about this news in detail, read the complete email.
Three major Australian political parties hacked by ‘sophisticated state actor’ ahead of election
Australian intelligence and law enforcement agencies already issued notices under the ‘Assistance and Access’ Act despite opposition from industry groups
Australia’s ACCC publishes a preliminary report recommending Google Facebook be regulated and monitored for discriminatory and anti-competitive behavior


SenseTime researchers train ImageNet/AlexNet in record 1.5 minutes using ‘GradientFlow’

Melisha Dsouza
26 Feb 2019
4 min read
Researchers from SenseTime Research and Nanyang Technological University have broken the record for training ImageNet/AlexNet, at 1.5 minutes. The previous record of four minutes was held by a model developed by researchers at Tencent, the Chinese tech giant, and Hong Kong Baptist University, making this a significant 2.6x speedup. The SenseTime and Nanyang team used a communication backend called “GradientFlow” along with a set of network optimization techniques to reduce the deep neural network (DNN) model training time. The researchers also proposed a technique called “lazy allreduce” to combine multiple communication operations into a single one.

The researchers say that high communication overhead is one of the major performance bottlenecks for distributed DNN training across multiple GPUs. To combat this issue, one technique they used was increasing the batch size and running through the dataset quickly to process more samples per iteration. They also used a mixture of half-precision floating point (FP16) and single-precision floating point (FP32). Both techniques reduce the memory bandwidth pressure on the GPUs used to accelerate the machine-learning math in hardware, but cause some loss of accuracy.

How does GradientFlow work?

GradientFlow is a software toolkit that tackles the high communication cost of distributed DNN training. It is a communication backend that slashed training times on GPUs, as described in the team’s paper, published earlier this month. GradientFlow employs lazy allreduce to reduce network cost, improving network throughput by fusing multiple allreduce operations into a single one, and “coarse-grained sparse communication” to reduce network traffic by sending only important gradient chunks. Every GPU stores batches of data from ImageNet and uses gradient descent to crunch through their pixels.
These gradient values are passed to server nodes in order to update the parameters in the overall model, using a type of parallel-processing algorithm known as allreduce. Trying to ingest these values, or tensors, from hundreds of GPUs at a time results in bottlenecks. GradientFlow increases the efficiency of the code by allowing the GPUs to communicate and exchange gradients locally before final values are sent to the model. “Instead of immediately transmitting generated gradients with allreduce, GradientFlow tries to fuse multiple sequential communication operations into a single one, avoiding sending a huge number of small tensors via network,” the researchers wrote.

Lazy allreduce

Lazy allreduce fuses multiple allreduce operations into a single operation with minimal GPU memory copy overhead. On completing a backward computation, a layer with learnable parameters generates one or more gradient tensors, and the baseline system allocates each tensor a separate GPU memory space. With lazy allreduce, all gradient tensors are instead placed in a memory pool. Lazy allreduce waits for the lower layers’ gradient tensors until the total size of the buffered tensors is greater than a given threshold θ; then a single allreduce operation is performed on all of them. This avoids transmitting small tensors over the network and improves network utilization.

Coarse-grained sparse communication (CSC)

To further reduce network traffic with high bandwidth utilization, the researchers propose coarse-grained sparse communication, which selects important gradient chunks for allreduce. The generated tensors are placed in a memory pool with a contiguous address space, in the order they are generated. CSC partitions the gradient memory pool into equal-sized chunks, each containing a number of gradients.
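The buffering idea behind lazy allreduce can be sketched as follows. This is a simplified single-process illustration: the threshold value, tensor sizes, and the `do_allreduce` stand-in are invented for the example, whereas the real system operates on GPU memory pools across many nodes.

```python
THETA = 1000  # size threshold (number of gradient values) before flushing

class LazyAllreduce:
    """Buffer gradient tensors and fuse them into one allreduce call."""

    def __init__(self, threshold=THETA):
        self.threshold = threshold
        self.pool = []   # buffered gradient tensors (plain lists of floats here)
        self.calls = []  # record of fused allreduce sizes, for inspection

    def do_allreduce(self, fused):
        # Stand-in for a real collective (e.g. an NCCL allreduce over the network).
        self.calls.append(len(fused))

    def add_gradient(self, tensor):
        self.pool.append(tensor)
        if sum(len(t) for t in self.pool) >= self.threshold:
            self.flush()

    def flush(self):
        if not self.pool:
            return
        # Fuse all buffered tensors into one flat buffer: one network
        # operation instead of one operation per small tensor.
        fused = [x for t in self.pool for x in t]
        self.do_allreduce(fused)
        self.pool = []

la = LazyAllreduce()
# Ten layers each producing a 300-element gradient tensor: without fusion
# this would be 10 allreduce calls; with lazy allreduce it is only 3.
for _ in range(10):
    la.add_gradient([0.1] * 300)
la.flush()  # flush any remainder at the end of the backward pass
print(la.calls)  # [1200, 1200, 600]
```

The key trade-off is the threshold θ: larger values mean fewer, bigger network operations but delay communication for the earliest-ready gradients.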
In this research, each chunk contains 32K gradients, so CSC partitions the gradient memory pools of AlexNet and ResNet-50 into 1903 and 797 chunks respectively. A percentage (e.g., 10%) of the gradient chunks are selected as important chunks at the end of each iteration.

[Figure: Design of coarse-grained sparse communication (CSC)]

Conclusion

GradientFlow improves network performance for distributed DNN training. When training ImageNet/AlexNet on 512 GPUs, the researchers achieved a speedup ratio of up to 410.2 and completed 95-epoch training in 1.5 minutes, outperforming existing approaches. You can head over to the research paper for a more in-depth performance analysis of the proposed model.

Generating automated image captions using NLP and computer vision [Tutorial]
Facebook’s artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?
Exploring Deep Learning Architectures [Tutorial]
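The chunk-selection step can be illustrated with a small sketch. Note the assumptions: the tiny chunk size, the scoring of chunks by mean absolute gradient value, and the sample data are all invented for illustration and are not the paper's exact criterion (the paper uses 32K-gradient chunks).

```python
CHUNK_SIZE = 4      # the paper uses 32K gradients per chunk; tiny here for clarity
SELECT_RATIO = 0.1  # fraction of chunks treated as "important"

def select_important_chunks(gradients, chunk_size=CHUNK_SIZE, ratio=SELECT_RATIO):
    """Partition a flat gradient buffer into chunks and pick the top ones."""
    chunks = [gradients[i:i + chunk_size] for i in range(0, len(gradients), chunk_size)]
    # Score each chunk by the mean magnitude of its gradients (illustrative metric).
    scored = sorted(range(len(chunks)),
                    key=lambda i: sum(abs(g) for g in chunks[i]) / len(chunks[i]),
                    reverse=True)
    k = max(1, int(len(chunks) * ratio))
    keep = sorted(scored[:k])
    # Only the selected chunks would be transmitted via allreduce.
    return [(i, chunks[i]) for i in keep]

# 40 gradients -> 10 chunks; only the single highest-magnitude chunk is sent.
grads = [0.01] * 40
grads[20:24] = [0.9, -0.8, 0.7, -0.6]  # chunk 5 carries the large updates
selected = select_important_chunks(grads)
print([i for i, _ in selected])  # [5]
```

Selecting whole chunks rather than individual gradients is what makes the sparsification "coarse-grained": the network sees a few large contiguous transfers instead of scattered single values.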


The Verge spotlights the hidden cost of being a Facebook content moderator, a role Facebook outsources to 3rd parties to make the platform safe for users

Amrata Joshi
26 Feb 2019
4 min read
Facebook has been in the news in recent years for its data leaks and data privacy concerns. This time the company is on the radar because of the deplorable working conditions of its content moderators. The reviewers are so affected by the content on the platform that they cope with their PTSD through sex and drugs at work, reports The Verge in a compelling and horrifying insight into the lives of content moderators who work as contract workers at Facebook’s Arizona office. There was a similar report against Facebook last year, when an ex-employee filed a lawsuit against the company in September for not providing enough protection to the content moderators who are responsible for reviewing disturbing content on the platform.

The platform hosts millions of videos and images of child sexual abuse, rape, torture, bestiality, beheadings, suicide, and murder, and relies on machine learning augmented by human content moderators to keep the platform safe for its users. This means any image that violates the corporation’s terms of use is removed from the platform. In a statement to CNBC, a Facebook spokesperson said, "We value the hard work of content reviewers and have certain standards around their well-being and support. We work with only highly reputable global partners that have standards for their workforce, and we jointly enforce these standards with regular touch points to ensure the work environment is safe and supportive, and that the most appropriate resources are in place." The company has also posted a blog post about its work with partners like Cognizant and its steps towards ensuring a healthy working environment for content reviewers.

As reported by The Verge, the contracted moderators get one 30-minute lunch, two 15-minute breaks, and nine minutes of "wellness time" per day. But much of this time is spent waiting in queues for the bathroom, where three stalls per restroom serve hundreds of employees.
Facebook’s environment is such that workers cope with stress by telling dark jokes about committing suicide, then smoke weed during breaks to numb their emotions. According to the report, it’s a place where employees can be fired for making just a few errors a week, and where team leaders micromanage content moderators’ bathroom and prayer breaks. The moderators are paid $15 per hour for moderating content that can range from offensive jokes to potential threats to videos depicting murder.

A Cognizant spokesperson said, “The company has investigated the issues raised by The Verge and previously taken action where necessary and have steps in place to continue to address these concerns and any others raised by our employees. In addition to offering a comprehensive wellness program at Cognizant, including a safe and supportive work culture, 24x7 phone support and onsite counselor support to employees, Cognizant has partnered with leading HR and Wellness consultants to develop the next generation of wellness practices."

Public reaction to this news is mostly negative, with users complaining about and condemning how the company is being run.

https://twitter.com/waltmossberg/status/1100245569451237376
https://twitter.com/HawksNest/status/1100068105336774656
https://twitter.com/likalaruku/status/1100194103902523393
https://twitter.com/blakereid/status/1100094391241170944

People are angry that content moderators at Facebook endure such trauma in their role. Some believe compensation should be given to those suffering from PTSD as a result of working in high-stress roles in companies across industries.

https://twitter.com/hypatiadotca/status/1100206605356851200

According to Kevin Collier, a cyber reporter, Facebook is underpaying and overworking content moderators in a desperate attempt to rein in abuse of the platform it created.
https://twitter.com/kevincollier/status/1100077425357176834

One user tweeted, “And I've concluded that FB is run by sociopaths.” YouTube has rolled out a feature in the US that displays notices below videos uploaded by news broadcasters which receive government or public money. Alex Stamos, former Chief Security Officer at Facebook, highlighted something similar with reference to Facebook: according to him, Facebook needs a state-sponsored label, and people should know the human cost of policing online humanity.

https://twitter.com/alexstamos/status/1100157296527589376

To know more about this news, check out the report by The Verge.

Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma
NIPS 2017 Special: Decoding the Human Brain for Artificial Intelligence to make smarter decisions
Facebook and Google pressurized to work against ‘Anti-Vaccine’ trends after Pinterest blocks anti-vaccination content from its pinboards

Google introduces and open-sources Lingvo, a scalable TensorFlow framework for Sequence-to-Sequence Modeling

Natasha Mathur
26 Feb 2019
3 min read
Google researchers announced a new TensorFlow framework, called Lingvo, last week. Lingvo offers a complete solution for collaborative deep learning research, with a particular focus on sequence modeling tasks such as machine translation, speech recognition, and speech synthesis. The TensorFlow team also announced yesterday that it is open sourcing Lingvo. “To show our support of the research community and encourage reproducible research effort, we have open-sourced the framework and are starting to release the models used in our papers”, states the TensorFlow team.

https://twitter.com/GoogleAI/status/1100177047857487872

Lingvo has been designed for collaboration: its code has a consistent interface and style that is easy to read and understand, and it consists of a flexible modular layering system that promotes code reuse. Since many people use the same codebase, it is easier to employ other people’s ideas within your models, and you can adapt existing models to new datasets with ease.

Lingvo also makes it easier to reproduce and compare results in research, because all the hyperparameters of a model are configured within their own dedicated sub-directory, separate from the model logic. All the models within Lingvo are built from the same common layers, which allows them to be compared with each other easily. All of these models share the same overall structure from input processing to loss computation, and all the layers have the same interface.

[Figure: Overview of the Lingvo framework]

Moreover, all the hyperparameters within Lingvo are explicitly declared and their values are logged at runtime, which makes the models easier to read and understand. Lingvo can also train on production-scale datasets, with additional support for synchronous and asynchronous distributed training.
In Lingvo, inference-specific graphs are built from the same shared code used for training, and quantization support is built directly into the framework. Lingvo started out targeting natural language processing (NLP) tasks but has become very flexible, and is also applicable to models used for tasks such as image segmentation and point cloud classification. It also supports distillation, GANs, and multi-task models. Additionally, the framework does not compromise on speed, and comes with an optimized input pipeline and fast distributed training. For more information, check out the official Lingvo research paper.

Google engineers work towards large scale federated learning
Google AI researchers introduce PlaNet, an AI agent that can learn about the world using only images
Google to acquire cloud data migration start-up ‘Alooma’
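The hyperparameter-separation idea described above - configuration declared apart from model logic, with every value explicitly defined and loggable - can be sketched in plain Python. This is an illustrative pattern only, not Lingvo's actual API; the class and method names here are invented.

```python
class Params:
    """A tiny hyperparameter container: declare first, override later, log at runtime."""

    def __init__(self):
        self._defined = {}

    def define(self, name, default, description):
        # Every hyperparameter must be declared with a default and a doc string.
        self._defined[name] = {"value": default, "doc": description}

    def set(self, name, value):
        if name not in self._defined:
            raise KeyError(f"undeclared hyperparameter: {name}")
        self._defined[name]["value"] = value

    def get(self, name):
        return self._defined[name]["value"]

    def log_all(self):
        # Because everything is explicitly declared, a full runtime dump is trivial.
        return {k: v["value"] for k, v in self._defined.items()}

def base_model_params():
    # Configuration lives here, separate from any model-building code.
    p = Params()
    p.define("learning_rate", 1e-3, "Optimizer step size.")
    p.define("hidden_dim", 512, "Width of hidden layers.")
    return p

# An experiment overrides the base config without touching model logic,
# and typos in hyperparameter names fail loudly instead of silently.
p = base_model_params()
p.set("learning_rate", 3e-4)
print(p.log_all())  # {'learning_rate': 0.0003, 'hidden_dim': 512}
```

Keeping experiment configs as small override functions like this is what makes results easy to reproduce: the logged dump fully describes the run.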


Mozilla partners with Scroll to understand consumer attitudes for an ad-free experience on the web

Sugandha Lahoti
26 Feb 2019
2 min read
Mozilla has partnered with the news subscription service Scroll to provide a transparent news experience for publishers and users alike. Mozilla will work with Scroll to better understand how consumers react to ad-free experiences on the web and to subscription-based funding models. With Scroll, it will conduct product explorations by inviting small groups of browser users at random to respond to surveys, provide feedback, and potentially test proposed new features, products or services. This initiative will help Mozilla find alternatives to the status quo advertising models.

In a blog post, Mozilla explained the reason behind the initiative: “We are turning our attention towards finding a more sustainable ecosystem balance for publishers and users alike. We’re transparent and experiment with new ideas in the open, especially when those ideas could have a significant impact on how the web ecosystems works. In 2019, we will continue to explore new product features and offerings, including our ongoing focus on identifying a more sustainable ecosystem balance for both publishers and users.”

In a conversation with VentureBeat, Scroll CEO Tony Haile said the company began talking with Mozilla about partnership opportunities last year. “It’s early days for this partnership, but we want to be transparent from the get-go and are hugely excited about what we might learn as we seek a better web ecosystem together,” he said. “One of the companies we’ve always looked up to most has been Mozilla. From their inception they have been dedicated to the concept of an internet that puts people first. They’ve been at the forefront of driving forward user experience on the web as well as how we think about data and privacy online. In this, they have been an inspiration to all of us at Scroll,” he added.
Mozilla shares key takeaways from the Design Tools survey
Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant
Open letter from Mozilla Foundation and other companies to Facebook urging transparency in political ads

Natasha Mathur
25 Feb 2019
4 min read

Core CPython developer unveils a new project that can analyze his phone's ‘silent connections’

Kushal Das, a staff member at the Freedom of the Press Foundation, privacy advocate, and CPython core developer, published a post earlier this month titled ‘Tracking my phone’s silent connections’. In the post, Das describes a system he built from existing open source projects and tools to track what his phone does, which servers it connects to, and to look deeper into the network traffic coming from the phone.

How did he start?

Das mentions that his initial attempt involved creating a wifi hotspot at home using a Raspberry Pi. He then captured all the packets from the device with the standard tool dumpcap and inspected the logs using Wireshark, a network protocol analyzer. This setup, however, could only capture data while the phone was connected to his home network. To take the project further, Das took a different approach: he used ‘algo’ to create a VPN server, and then used WireGuard, a modern VPN tunnel, to connect his iPhone to that VPN. This allowed all the traffic from the phone to be captured easily on the VPN server.

Analyzing one week of data

Das initially captured data for just one week. He copied the pcap files to his computer and wrote Python code to load the data into an SQLite database, which let him query it very quickly. Das plotted a graph of all the domains that were queried at least 10 times in a week. He observed that his phone frequently tried to find Apple servers, as it is an iPhone. He also noted many Twitter-related queries, since he uses the Twitter app frequently. Next came Google, for which the phone queried many different Google domains (although he only occasionally browsed YouTube). He also observed queries to the Akamai CDN service and to Amazon AWS-related hosts. Many data analytics companies were queried as well, including dev.appboy.com.
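Das's actual scripts aren't shown in the post, but the "pcap into SQLite, then query it fast" step can be sketched with Python's built-in sqlite3 module. The table layout and the sample server names below are hypothetical, purely to illustrate how a single aggregate query can rank the most-contacted domains:

```python
import sqlite3

# In-memory stand-in for the database Das built from his pcap captures.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE connections (server TEXT)")

# Hypothetical sample: one row per captured connection, keyed by server name.
sample = ["apple.com"] * 5 + ["twitter.com"] * 3 + ["google.com"] * 2
conn.executemany("INSERT INTO connections VALUES (?)", [(s,) for s in sample])

# Rank servers by how often the phone connected to them.
top = conn.execute(
    "SELECT server, COUNT(*) AS n FROM connections "
    "GROUP BY server ORDER BY n DESC"
).fetchall()
print(top)  # [('apple.com', 5), ('twitter.com', 3), ('google.com', 2)]
```

A chart like Das's "domains queried at least 10 times in a week" reduces to the same GROUP BY with a HAVING clause added.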
Tracking the data flow

After looking at the DNS queries, Das wanted a deeper look at the actual servers his phone communicates with, so he put together a graph of all the major companies his phone talks to (the post includes a chart titled “Major Companies”). Das discovered that Apple is the leading company, accounting for about 44% of all connections from his phone, 495,225 in total. Twitter takes second place, with Edgecastcdn third. His phone communicated with Google servers 67,344 times, roughly a seventh of Apple’s count. After removing big firms such as Google and Amazon from the graph, he observed that analytics companies such as nflxso.net and mparticle.com make up about 31% of the connections. Three other CDN companies, Akamai, CloudFront, and Cloudflare, make up 8%, 7%, and 6% respectively. Das notes that he doesn’t know what these companies are tracking on his phone, which he finds scary: “Do I know what all things are these companies tracking? Nope, and that is scary enough,” said Das.

Future work

Das mentions that he’s looking into creating a set of tools that:

can be deployed on the VPN server
are user-friendly and easy to monitor
can block/unblock traffic from the phone

“The major part of the work is to make sure that the whole thing is easy to deploy, and can be used by someone with less technical knowledge,” states Das. For more information, check out the official blog post by Kushal Das.

OpenAI team publishes a paper arguing that long term AI safety research needs social scientists
China’s Huawei technologies accused of stealing Apple’s trade secrets, reports The Information
UK lawmakers publish a report after 18 month long investigation condemning Facebook’s disinformation and fake news practices

Melisha Dsouza
25 Feb 2019
4 min read

Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!

The ongoing Mobile World Congress 2019 in Barcelona has an interesting line-up of announcements, keynote speakers, summits, seminars, and more. It is the largest mobile event in the world, bringing together the latest innovations and leading-edge technology from more than two thousand leading companies. The theme of this year’s conference is ‘Intelligent Connectivity’, the combination of flexible, high-speed 5G networks, the Internet of Things (IoT), artificial intelligence (AI), and big data. Microsoft unveiled a host of new products along the same theme on the first day of the conference. Let’s have a look at some of them.

#1 Microsoft HoloLens 2 AR announced!

Microsoft unveiled the HoloLens 2 AR device at the Mobile World Congress (MWC). This $3,500 AR device is aimed at businesses, not the average consumer, yet. It is designed primarily for situations where field workers might need to work hands-free, such as manufacturing workers, industrial designers, and those in the military. The device is a definite upgrade from Microsoft’s very first HoloLens, which recognized basic tap and click gestures. The new headset recognizes 21 points of articulation per hand, allowing for improved and more realistic hand motions. The device is less bulky, and its eye tracking can measure eye movement and use it to interact with virtual objects. It is built to be a cloud- and edge-connected device, and the HoloLens 2 field of view more than doubles the area covered by HoloLens 1. Microsoft said it plans to announce a follow-up to HoloLens 2 in the next year or two; according to Microsoft, that device will be even more comfortable and easier to use, and will do more than the HoloLens 2. HoloLens 2 is available for preorder and will ship later this year. The device has already found itself in the midst of a controversy after the US Army invested $480 million in more than 100,000 headsets.
The contract has stirred dissent among Microsoft workers.

#2 Azure-powered Kinect camera for enterprise

The Azure-powered Kinect camera is an “intelligent edge device that doesn’t just see and hear but understands the people, the environment, the objects, and their actions,” according to Azure VP Julia White. This AI-powered smart enterprise camera leverages Microsoft’s 3D imaging technology and could serve as a companion hardware piece for HoloLens in the enterprise. The system has a 1-megapixel depth camera, a 12-megapixel camera, and a seven-microphone array on board, designed to work with “a range of compute types, and leverage Microsoft’s Azure solutions to collect that data.” The system, priced at $399, is available for pre-order.

#3 Azure Spatial Anchors

Azure Spatial Anchors launched as part of the Azure mixed reality services. These services will help developers and businesses build cross-platform, contextual, enterprise-grade mixed reality applications. According to the Azure blog, these mixed reality apps can map, designate, and recall precise points of interest that are accessible across HoloLens, iOS, and Android devices. Developers can integrate their solutions with IoT services and artificial intelligence, and protect sensitive data using security from Azure. Users can infuse artificial intelligence (AI) and integrate IoT services to visualize data from IoT sensors as holograms. Spatial Anchors will allow users to map their space and connect points of interest “to create wayfinding experiences, and place shareable, location-based holograms without any need for environmental setup or QR codes”. Users will also be able to manage identity, storage, security, and analytics with pre-built cloud integrations to accelerate their mixed reality projects.
#4 Unreal Engine 4 Support for Microsoft HoloLens 2

During the Mobile World Congress (MWC), Epic Games founder and CEO Tim Sweeney announced that support for Microsoft HoloLens 2 will be coming to Unreal Engine 4 in May 2019. Unreal Engine will fully support HoloLens 2 with streaming and native platform integration. Sweeney says that “AR is the platform of the future for work and entertainment, and Epic will continue to champion all efforts to advance open platforms for the hardware and software that will power our daily lives.” Unreal Engine 4 support for Microsoft HoloLens 2 will allow for “photorealistic” 3D in AR apps. Head over to Microsoft’s official blog for an in-depth look at all the products announced.

Unreal Engine 4.22 update: support added for Microsoft’s DirectX Raytracing (DXR)
Microsoft acquires Citus Data with plans to create a ‘Best Postgres Experience’
Microsoft joins the OpenChain Project to help define standards for open source software compliance
Bhagyashree R
25 Feb 2019
4 min read

YouTube demonetizes anti-vaccination videos after Buzzfeed News reported that it is promoting medical misinformation

On Friday, YouTube told Buzzfeed News that it is demonetizing channels that promote anti-vaccination content. YouTube said this type of content does not align with its policies and called it “dangerous and harmful”. The decision comes just after a Buzzfeed News report about YouTube’s algorithm recommending videos that described vaccines as dangerous, and showing ads on those videos. In an email to Buzzfeed News, a YouTube spokesperson said, “We have strict policies that govern what videos we allow ads to appear on, and videos that promote anti-vaccination content are a violation of those policies. We enforce these policies vigorously, and if we find a video that violates them, we immediately take action and remove ads.” Recently, YouTube also faced backlash for monetizing paedophilic videos by displaying ads from big brands such as Nestle, Disney, and Fortnite on them.

In addition to demonetizing anti-vaccination videos, YouTube will show an information panel linking to a Wikipedia page about “vaccine hesitancy”. YouTube also introduced information panels to counter misinformation around the measles, mumps, and rubella (MMR) vaccine. Right from the start of this year, things have not been going well for YouTube. In early January, it had to revise its policies to ban dangerous pranks and challenges. Later, it announced an update to reduce recommendations of videos related to conspiracy theories, false claims about historical events, flat earth videos, and so on.

What did Buzzfeed News report?

Users often visit YouTube not only for entertainment but also to get answers to health-related questions. When the Buzzfeed News team searched for “Should I vaccinate my kids?”, they were presented with search results and recommendations for anti-vaccination videos.
One example they shared was a YouTube search for “immunization” that showed a video from the Rehealthify channel explaining that vaccination is important to keep children protected from certain diseases. Just after this video, however, YouTube recommended an anti-vaccination video titled “Mom Researches Vaccines, Discovers Vaccination Horrors and Goes Vaccine Free”. In the video, a mother shares why she decided to stop vaccinating her children: “I wasn't always that person who was going to not vaccinate, but it has to start somewhere. If you go down a road, follow the road, and see where it leads. Unless you know for sure that your child will be 100% safe, do you want to play that game? If you can’t say ‘yes’ right now, pause.”

Buzzfeed News conducted a series of search tests from Feb 14 to Feb 20. Some search results showed videos from professional medical channels and celebrity doctors; in other tests, the Up Next recommendation videos were 100% anti-vaccination. Even before the Buzzfeed News report, California Rep. Adam Schiff had contacted both Facebook and Google asking them to address the anti-vaccination issue. “YouTube is surfacing and recommending messages that discourage parents from vaccinating their children, a direct threat to public health, and reversing progress made in tackling vaccine-preventable diseases,” wrote Schiff in the letter. Facebook responded that it is taking “steps to reduce the distribution of health-related misinformation on Facebook.” Last week, Pinterest also took a strong stand against the spread of vaccine misinformation by blocking all “vaccination”-related searches. The report also found that YouTube was showing ads on these videos. Seven advertisers told Buzzfeed News that they were not even aware their ads were being shown on these channels.
Nomad Health, a health tech company, told Buzzfeed News it was “...not aware of our ads running alongside anti-vaccination videos.” These companies have asked YouTube to pull their ads from the videos. You can read the full report on Buzzfeed News’ official website.

Nestle, Disney, Fortnite pull out their YouTube ads from paedophilic videos as YouTube’s content regulation woes continue
Youtube promises to reduce recommendations of ‘conspiracy theory’. Ex-googler explains why this is a ‘historic victory’
YouTube to reduce recommendations of ‘conspiracy theory’ videos that misinform users in the US

Amrata Joshi
25 Feb 2019
3 min read

ICANN calls for DNSSEC across unsecured domain names amidst increasing malicious activity in the DNS infrastructure

Last week, the Internet Corporation for Assigned Names and Numbers (ICANN) called for the full deployment of the Domain Name System Security Extensions (DNSSEC) across all unsecured domain names. ICANN took this decision because of increasing reports of malicious activity targeting the DNS infrastructure. According to ICANN, there is an ongoing and significant risk to key parts of the Domain Name System (DNS) infrastructure. The DNS, which translates human-readable domain names into numerical internet addresses, has been the target of attacks using a variety of methodologies. https://twitter.com/ICANN/status/1099070857119391745?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet

Last month, security company FireEye revealed that hackers associated with Iran were hijacking DNS records, rerouting users from a legitimate web address to a malicious server in order to steal passwords. This “DNSpionage” campaign targeted governments in the United Arab Emirates and Lebanon. The Homeland Security’s Cybersecurity Infrastructure Security Agency had warned that U.S. agencies were also under attack, and in its first emergency order, issued amid a government shutdown, it directed federal agencies to take action against DNS tampering. David Conrad, ICANN’s chief technology officer, told the AFP news agency that the hackers are “going after the Internet infrastructure itself.”

ICANN is urging domain owners to deploy DNSSEC, a more secure version of DNS that is harder to manipulate. DNSSEC cryptographically signs DNS data, making it much more difficult to spoof. Some attacks target the DNS by replacing the addresses of intended servers with the addresses of machines controlled by the attackers; this type of attack only works when DNSSEC is not in use. ICANN also reaffirmed its commitment to collaborative efforts to ensure the security, stability, and resiliency of the internet’s global identifier systems.
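To see why signing matters, here is a deliberately simplified, hypothetical sketch of the idea: a resolver that can verify a signature over a DNS answer will reject a tampered one. Real DNSSEC uses public-key signatures (RRSIG records with algorithms such as RSA or ECDSA) and a chain of trust anchored at the root, not the shared HMAC key used below:

```python
import hashlib
import hmac

# Toy stand-in for a zone's signing key (real DNSSEC uses public-key crypto).
KEY = b"zone-signing-key"

def sign_record(name: str, address: str) -> str:
    # Sign a name->address answer, loosely mimicking an RRSIG over an A record.
    return hmac.new(KEY, f"{name}={address}".encode(), hashlib.sha256).hexdigest()

def verify_record(name: str, address: str, signature: str) -> bool:
    # A validating resolver accepts the answer only if the signature checks out.
    return hmac.compare_digest(sign_record(name, address), signature)

sig = sign_record("example.com", "93.184.216.34")
print(verify_record("example.com", "93.184.216.34", sig))  # True: genuine answer
print(verify_record("example.com", "203.0.113.66", sig))   # False: hijacked answer
```

Without the signature check, plain DNS gives the resolver no way to tell the two answers apart, which is exactly the gap the hijacking campaigns exploit.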
This month, ICANN offered a checklist of recommended security precautions for members of the domain name industry, registries, registrars, resellers, and related parties, so they can proactively take steps to protect their systems. ICANN aims to ensure that internet users reach their desired online destinations by preventing “man in the middle” attacks in which a user is unknowingly redirected to a potentially malicious site. Some users who have previously been victims of DNS hijacking think this move won’t help them. One user commented on Hacker News, “This is nonsense, and possibly crossing the border from ignorant nonsense to malicious nonsense.” Another user said, “There is in fact very little evidence that we "need" the authentication provided by DNSSEC.” Others were skeptical of DNSSEC itself; as one comment reads, “DNSSEC is quite famously a solution in search of a problem.” To know more about this news, check out ICANN’s official post.

Internet governance project (IGP) survey on IPV6 adoption, initial reports
Root Zone KSK (Key Sign Key) Rollover to resolve DNS queries was successfully completed
RedHat shares what to expect from next week’s first-ever DNSSEC root key rollover

Natasha Mathur
25 Feb 2019
2 min read

TensorFlow 1.13.0-rc2 releases!

After the TensorFlow 1.13.0-rc0 release last month, the TensorFlow team is out with another update, 1.13.0-rc2, unveiling major features and updates. The new release covers minor bug fixes, improvements, and other changes. Let’s have a look at the noteworthy features in TensorFlow 1.13.0-rc2.

Major Improvements

TensorFlow Lite has moved from contrib to core.
TensorFlow GPU binaries are now built against CUDA 10 and TensorRT 5.0.
There’s newly added support for Python 3.7 on all operating systems.
NCCL has been moved to core.

Behavioral and other changes

Conversion of python floating types to uint32/64 in tf.constant is no longer allowed.
The gain argument of convolutional orthogonal initializers now behaves consistently with the tf.initializers.orthogonal initializer.
Subclassed Keras models can be saved via tf.contrib.saved_model.save_keras_model.
LinearOperator.matmul now returns a new LinearOperator.
Performance of GPU cumsum/cumprod has improved by up to 300x.
Support has been added for weight decay in most TPU embedding optimizers, including AdamW and MomentumW.
tensorflow/contrib/lite has been moved to tensorflow/lite.
An experimental Java API has been added to inject TensorFlow Lite delegates.
Support has been added for strings in the TensorFlow Lite Java API.
All occurrences of tf.contrib.estimator.DNNLinearCombinedEstimator have been replaced with tf.estimator.DNNLinearCombinedEstimator.
Regression_head has been updated to the new Head API for Canned Estimator V2.
XLA HLO graphs can now be rendered as SVG/HTML.

Bug Fixes

Documentation has been updated with details about the rounding mode used in quantize_and_dequantize_v2.
OpenSSL compatibility has been fixed by avoiding EVP_MD_CTX_destroy.
The CUDA dependency has been upgraded to 10.0.
All occurrences of tf.contrib.estimator.InMemoryEvaluatorHook and tf.contrib.estimator.make_stop_at_checkpoint_step_hook have been replaced with tf.estimator.experimental.InMemoryEvaluatorHook and tf.estimator.experimental.make_stop_at_checkpoint_step_hook.
tf.data.Dataset.make_one_shot_iterator() has been deprecated in V1 and removed from V2; tf.compat.v1.data.make_one_shot_iterator() has been added instead.
keep_prob is deprecated and Dropout now takes a rate argument.
A NUMA-aware MapAndBatch dataset has been added.
An Apache Ignite Filesystem plugin has been added to support accessing Apache IGFS.

For more information, check out the official TensorFlow 1.13.0-rc2 release notes.

TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf function and more
Building your own Snapchat-like AR filter on Android using TensorFlow Lite [ Tutorial ]
TensorFlow 1.11.0 releases
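The keep_prob deprecation in the changes above is just an argument flip: older code passed the fraction of units to keep, while Dropout's rate is the fraction to drop, so rate = 1 - keep_prob. A minimal pure-Python sketch of (inverted) dropout semantics with hypothetical values, not TensorFlow's actual implementation:

```python
import random

def dropout(values, rate, seed=0):
    # Zero each value with probability `rate`; scale survivors by 1/(1-rate)
    # so the expected sum is unchanged (inverted dropout).
    rng = random.Random(seed)
    scale = 1.0 / (1.0 - rate)
    return [0.0 if rng.random() < rate else v * scale for v in values]

keep_prob = 0.8           # old-style argument: fraction to keep
rate = 1.0 - keep_prob    # equivalent new-style argument: fraction to drop
out = dropout([1.0, 2.0, 3.0, 4.0], rate)
print(len(out))  # 4: same shape; each entry is either zeroed or scaled by 1.25
```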
Melisha Dsouza
22 Feb 2019
3 min read

Redis Labs moves from Apache2 modified with Commons Clause to Redis Source Available License (RSAL)

Redis Labs joins the growing list of software firms tweaking their licenses to prevent cloud service providers from misusing their open source code. Today, Redis Labs announced a change in the license of its Redis Modules, from Apache2 modified with Commons Clause to the Redis Source Available License (RSAL). This is the second time the company has changed its license. Back in August 2018, Redis Labs changed the license of its Redis Modules from AGPL to Apache2 modified with Commons Clause, to ensure that open source companies would continue to contribute to its projects and maintain a sustainable business in the cloud era. That move was initially received with some skepticism, as some people incorrectly assumed that the Redis core had gone proprietary. Comparing its move to those of open source companies like MongoDB and Confluent, Redis Labs says that every company has taken a different approach to stop cloud providers from exploiting open source projects developed by others by packaging them into proprietary services and using their “monopoly power to generate significant revenue streams”.

Feedback from users on how to improve the license to favor developers’ needs identified three major areas that needed to be addressed:

The term Apache2 modified by Commons Clause caused confusion; some users thought they were only bound by the Apache2 terms.
The Commons Clause’s language uses the term “substantial” to define what is and isn’t allowed, and there was a lack of clarity around its meaning.
Some Commons Clause restrictions regarding “support” worked against Redis Labs’ intention of helping grow the ecosystem around Redis Modules.

Taking all of this into consideration, Redis Labs has changed the license of Redis Modules to the Redis Source Available License (RSAL).

What is the Redis Source Available License (RSAL)?

RSAL is a software license created by Redis Labs, applicable only to certain Redis Modules running on top of open source Redis.
RSAL aims to grant rights equivalent to permissive open source licenses for the vast majority of users. The license will “allow developers to use the software; modify the source code, integrate it with an application; and use, distribute or sell their application.” RSAL introduces just one restriction: the application cannot be a database, a caching engine, a stream processing engine, a search engine, an indexing engine, or an ML/DL/AI serving engine. According to Yiftach Shoolman, co-founder and chief technology officer of Redis Labs, the move has no effect on the Redis core license and shouldn’t really affect most developers who use the company’s modules (RediSearch, RedisGraph, RedisJSON, RedisML, and RedisBloom).

Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend
MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code
Confluent, an Apache Kafka service provider adopts a new license to fight against cloud service providers

Natasha Mathur
22 Feb 2019
4 min read

Google finally ends Forced arbitration for all its employees

Google announced yesterday that it is ending forced arbitration for its full-time employees as well as for temps, vendors, and contractors (TVCs) in cases of harassment, discrimination, or wrongful termination. The changes go into effect on March 21, and employees will be able to litigate past claims. Moreover, Google has also lifted its ban on class action lawsuits for employees, reports WIRED. https://twitter.com/GoogleWalkout/status/1098692468432867328

In the case of contractors, Google has removed forced arbitration from the contracts of those who work directly with the firm. Outside firms employing contractors, however, are not required to follow suit; Google will notify those firms and ask them to consider the approach and see if it works for them. Although this is very good news, the group ‘Googlers for ending forced arbitration’ published a post on Medium stating that the “fight is not over”. The group has planned a meeting with legislators in Washington D.C. next week, where six of its members will advocate for an end to forced arbitration for all workers. “We will stand with Senators and House Representatives to introduce multiple bills that end the practice of forced arbitration across all employers. We’re calling on Congress to make this a law to protect everyone”, states the group. https://twitter.com/endforcedarb/status/1098697243517960194

It was back in November that 20,000 Google employees, along with temps, vendors, and contractors, walked out to protest the discrimination, racism, and sexual harassment they encountered at Google’s workplace. Google had waived forced arbitration for sexual harassment and assault claims in response to the walkout (a move soon followed by Facebook), but employees were not convinced: the forced arbitration policy still applied to contractors, temps, and vendors, and was still in effect for other forms of discrimination within the firm.
This was soon followed by Google contractors writing an open letter on Medium to Sundar Pichai, CEO of Google, in December, demanding that he address their call for better conditions and equal benefits for contractors. Googlers also launched an industry-wide awareness campaign against forced arbitration last month, sharing information about arbitration on their Twitter and Instagram accounts throughout the day. The employees mentioned in a post on Medium that there were “no meaningful gains for worker equity … nor any actual change in employee contracts or future offer letters”.

Pressure on Google for more transparency around its sexual assault policies had been building for quite a while. For instance, shareholder James Martin and two pension funds sued Alphabet’s board members last month for protecting top executives accused of sexual harassment; the lawsuit urged more clarity around Google’s policies. Similarly, Liz Fong-Jones, developer advocate on the Google Cloud Platform, revealed earlier last month that she was leaving Google due to its lack of leadership on the demands made by employees during the Google walkout. She also published a post on Medium last week, where she talked about the ‘grave concerns’ she had about strategic decisions made at Google.

“We commend the company in taking this step so that all its workers can access their civil rights through public court. We will officially celebrate when we see these changes reflected in our policy websites and/or employment agreements”, states the end forced arbitration group.
Public reaction to the news is largely positive, with people cheering on Google employees for the victory: https://twitter.com/VidaVakil/status/1098773099531493376 https://twitter.com/jas_mint/status/1098723571948347392 https://twitter.com/teamcoworker/status/1098697515858182144 https://twitter.com/PipelineParity/status/1098721912111464450

Recode Decode #GoogleWalkout interview shows why data and evidence don’t always lead to right decisions in even the world’s most data-driven company
Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley
Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus