
Tech News

3709 Articles

Lyft introduces Amundsen, a data discovery and metadata engine for its researchers and data scientists

Amrata Joshi
03 Apr 2019
4 min read
Yesterday, the team at Lyft introduced Amundsen, a data discovery and metadata engine built to increase the productivity of its data scientists and research scientists. The engine is named after the Norwegian explorer Roald Amundsen, and its aim is to make data users' lives simpler through a dedicated data search interface. According to UNECE (the United Nations Economic Commission for Europe), the data in our world has grown more than 40x over the last 10 years. This growth in data volumes has created major productivity and compliance challenges that needed to be solved. The team at Lyft found the solution to these problems not in the actual data but in the metadata: "Metadata, also defined as 'data about the data', is a set of data that describes and gives information about other data." Using metadata, the team solved a part of the productivity problem.

How did the team come up with Amundsen?

The team at Lyft realized that the majority of its time was spent on data discovery rather than on prototyping and productionization, where it actually wanted to invest more time. Data discovery involves answering questions such as: Does a certain type of data exist? Where is it? What is the source of truth for that data? Who has access to it? This is the reason why the team came up with the idea for Amundsen, inspired largely by search engines like Google, although Amundsen is about searching for data within the organization. Users can search for data by typing a search term into the search box, for instance, "election results" or "users". For those who aren't sure what they are looking for, the platform offers a list of the organization's most popular tables to browse.
[Image source: Lyft]

How does the search ranking feature function?

Once the user enters a search term, the results show in-line metadata and a description of each table, as well as the date the table was last updated. Results are chosen by fuzzy-matching the entered text against a few metadata fields, such as the table name, column names, table description, and column descriptions. The ranking uses an algorithm similar to PageRank: highly queried tables show up higher, while those queried less appear later in the search results.

What does the detail page look like?

After selecting a result, users land on a detail page showing the name of the table along with its manually curated description, followed by the column list. A special blue arrow next to a column indicates that it is a popular column, which encourages users to use it. In the right-hand pane, users can see who owns the table, who its frequent users are, and a general profile of the data.

[Image source: Lyft]

Further classification of metadata

The team at Lyft divided the metadata into a few categories and gave each category different access controls.

Existence and other fundamental metadata: this category includes the name and description of tables and fields, owners, last-updated timestamps, etc. This metadata is available to everyone.

Richer metadata: this category includes column stats and previews. This metadata is available only to users who have access to the data itself, because the stats may contain sensitive information that should be considered privileged.

According to the team at Lyft, Amundsen has been successful at the company, showing a high adoption rate and Customer Satisfaction (CSAT) score. Users can now easily discover more data in a shorter time.
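The popularity-weighted fuzzy matching described above can be sketched in a few lines of Python. This is a toy illustration with made-up field names and an invented popularity weight, not Lyft's actual ranking code:

```python
from difflib import SequenceMatcher

def rank_tables(query, tables):
    """Rank candidate tables by fuzzy-matching the query against their
    metadata fields, weighted by how often each table is queried
    (a PageRank-like popularity signal)."""
    def score(table):
        fields = [table["name"], table.get("description", "")] + table.get("columns", [])
        best_match = max(SequenceMatcher(None, query.lower(), f.lower()).ratio() for f in fields)
        popularity = 1 + table.get("query_count", 0) / 1000  # hypothetical weighting
        return best_match * popularity
    return sorted(tables, key=score, reverse=True)

tables = [
    {"name": "election_results", "columns": ["state", "votes"], "query_count": 500},
    {"name": "users", "columns": ["user_id", "email"], "query_count": 5000},
    {"name": "audit_log", "columns": ["event_type"], "query_count": 10},
]
```

With this ranking, a query for "users" surfaces the heavily queried `users` table first, even though other tables also partially match.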
Amundsen can also be used to store and tag all personal data within the organization, which can help an organization remain compliant. To know more about this news, check out the official post by Lyft.

Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self driving car race

Uber and Lyft drivers strike in Los Angeles

Uber open-sources Peloton, a unified Resource Scheduler


Zabbix 4.2 release packed with a modern monitoring system for data collection, processing, and visualization

Fatema Patrawala
03 Apr 2019
7 min read
The Zabbix team announced the release of Zabbix 4.2. The latest release is packed with a modern monitoring system for data collection and processing, distributed monitoring, real-time problem and anomaly detection, alerting and escalations, visualization, and more. Let us check out what Zabbix 4.2 has actually brought to the table. Here is a list of the most important functionality included in the new release.

Official support of new platforms

In addition to the existing official packages and appliances, Zabbix 4.2 now caters to the following platforms:

Zabbix package for Raspberry Pi
Zabbix package for SUSE Linux Enterprise Server
Zabbix agent for macOS
Zabbix agent MSI for Windows
Zabbix Docker images

Built-in support of Prometheus data collection

Zabbix can collect data in many different ways (push/pull) from various data sources, including JMX, SNMP, WMI, HTTP/HTTPS, REST API, XML SOAP, SSH, Telnet, agents, and scripts, with Prometheus the latest addition to the bunch. The 4.2 release offers integration with Prometheus exporters using native support of the PromQL language. Moreover, the use of dependent metrics gives Zabbix the ability to collect massive amounts of Prometheus metrics in a highly efficient way: all the data is retrieved with a single HTTP call and then reused for the corresponding dependent metrics. Zabbix can also transform Prometheus data into JSON format, which can be used directly for low-level discovery.

Efficient high-frequency monitoring

We all want to discover problems as fast as possible. With 4.2 we can collect data at high frequency and instantly discover problems without keeping an excessive amount of history data in the Zabbix database.

Validation of collected data and error handling

No one wants to collect incorrect data.
With Zabbix 4.2 we can address that via built-in preprocessing rules that validate data by matching (or not matching) a regular expression, or by using JSONPath or XMLPath. It is now also possible to extract error messages from collected data, which is especially handy if we get an error from external APIs.

Preprocessing data with JavaScript

In Zabbix 4.2 you can fully harness the power of user-defined scripts written in JavaScript. Support for JavaScript gives absolute freedom of data preprocessing; in fact, you can now replace all external scripts with JavaScript. This enables all sorts of data transformation, aggregation, filtering, arithmetic and logical operations, and much more.

Test preprocessing rules from the UI

As preprocessing becomes much more powerful, it is important to have a tool to verify complex scenarios. Zabbix 4.2 allows testing preprocessing rules straight from the web UI.

Processing millions of metrics per second

Prior to 4.2, all preprocessing was handled solely by the Zabbix server. A combination of proxy-based preprocessing with throttling gives the ability to perform high-frequency monitoring, collecting millions of values per second without overloading the Zabbix server: proxies perform massive preprocessing of collected data while the server receives only a small fraction of it.

Easy low-level discovery

Low-level discovery (LLD) is a very effective tool for automatic discovery of all sorts of resources (filesystems, processes, applications, services, etc.) and automatic creation of the metrics, triggers, and graphs related to them. It saves a tremendous amount of time and effort by allowing a single template to monitor devices with different resources. Zabbix 4.2 supports discovery based on arbitrary JSON input, which in turn allows direct communication with external APIs and use of the received data for automatic creation of hosts, metrics, and triggers.
Combined with JavaScript preprocessing, this opens up fantastic opportunities for templates that work with various external data sources such as cloud APIs, application APIs, and data in XML, JSON, or any other format.

Support of TimescaleDB

TimescaleDB promises better performance thanks to more efficient algorithms and performance-oriented data structures. Another significant advantage of TimescaleDB is automatic table partitioning, which improves performance and (combined with Zabbix) delivers fully automatic management of historical data. However, the Zabbix team hasn't performed any serious benchmarking yet, so it is hard to comment on the real-life experience of running TimescaleDB in production. At this moment TimescaleDB is an actively developed and rather young project.

Simplified tag management

Prior to Zabbix 4.2 we could only set tags for individual triggers. Tag management is now much more efficient thanks to support for template and host tags. All detected problems get tag information not only from the trigger, but also from the host and corresponding templates.

More flexible auto-registration

Zabbix 4.2's auto-registration options give the ability to filter host names based on a regular expression. This is really useful for creating different auto-registration scenarios for various sets of hosts, and matching by regular expression is especially beneficial when we have complex naming conventions for our devices.

Control host names for auto-discovery

Another improvement is related to naming hosts during auto-discovery. Zabbix 4.2 allows assigning received metric data to a host name and visible name. This extremely useful feature enables a great level of automation for network discovery, especially when using Zabbix or SNMP agents.

Test media types from the web UI

Zabbix 4.2 allows us to send a test message, or check that our chosen alerting method works as expected, straight from the Zabbix frontend.
This is quite useful for checking the scripts we use for integration with external alerting and helpdesk systems.

Remote monitoring of Zabbix components

Zabbix 4.2 introduces remote monitoring of the internal performance and availability metrics of the Zabbix server and proxies. Not only that, it can also discover Zabbix-related issues and alert us if the components are overloaded or, for example, have a large amount of data stored in a local buffer (in the case of proxies).

Nicely formatted email messages

Zabbix 4.2 comes with support for HTML format in email messages. We are no longer limited to plain text: messages can use the full power of HTML and CSS for much nicer and easier-to-read alerts.

Accessing remote services from network maps

A new set of macros is now supported in network maps for creating user-defined URLs pointing to external systems. This allows opening external tickets in helpdesk or configuration management systems, or performing any other action, in just one or two mouse clicks.

LLD rule as a dependent metric

This functionality allows the received values of a master metric to be used for data collection and LLD rules simultaneously. When collecting data from Prometheus exporters, Zabbix executes the HTTP query only once, and the result is immediately used for all dependent metrics (LLD rules and metric values).

Animations for maps

Zabbix 4.2 supports animated GIFs, making problems on maps more noticeable.

Extracting data from HTTP headers

Web monitoring brings the ability to extract data from HTTP headers. With this we can create multi-step scenarios for web monitoring and for external APIs, using an authentication token received in one of the steps.

Zabbix Sender pushes data to all IP addresses

Zabbix Sender now sends metric data to all IP addresses defined in the "ServerActive" parameter of the Zabbix agent configuration file.
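The Prometheus-to-JSON transformation for low-level discovery, mentioned earlier, can be sketched roughly as follows. This is a hypothetical Python illustration of the idea; Zabbix implements the transform natively as a built-in preprocessing step:

```python
import json
import re

def prometheus_to_json(exposition_text):
    """Parse Prometheus text exposition format into a JSON array of
    {name, labels, value} objects, the kind of structure low-level
    discovery rules can consume."""
    line_re = re.compile(r'^(\w+)(?:\{([^}]*)\})?\s+([0-9.eE+-]+)$')
    metrics = []
    for line in exposition_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        m = line_re.match(line)
        if not m:
            continue  # ignore lines this simple sketch cannot parse
        name, labels_raw, value = m.groups()
        labels = dict(re.findall(r'(\w+)="([^"]*)"', labels_raw or ""))
        metrics.append({"name": name, "labels": labels, "value": float(value)})
    return json.dumps(metrics)

sample = '''# HELP node_cpu_seconds_total CPU time
node_cpu_seconds_total{cpu="0",mode="idle"} 12345.6
node_filesystem_free_bytes{device="/dev/sda1"} 1.2e+10
'''
```

Each parsed object can then drive the automatic creation of a host, metric, or trigger, as described in the low-level discovery section.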
Filter for configuration of triggers

The trigger configuration page got a nice extended filter for quick and easy selection of triggers by specified criteria.

Showing exact time in graph tooltips

A minor yet very useful improvement: Zabbix now shows the timestamp in graph tooltips.

Other improvements

Non-destructive resizing and reordering of dashboard widgets
Mass-update for item prototypes
Support of IPv6 for DNS-related checks ("net.dns" and "net.dns.record")
"skip" parameter for the VMware event log check "vmware.eventlog"
Extended preprocessing error messages to include intermediate step results

Expanded information and the complete list of Zabbix 4.2 developments, improvements, and new functionality is available in the Zabbix Manual.

Encrypting Zabbix Traffic

Deploying a Zabbix proxy

Zabbix and I – Almost Heroes


Ahead of RedisConf 2019, Redis Labs adds Intel Optane DC persistent memory support for Redis Enterprise users

Amrata Joshi
03 Apr 2019
4 min read
Yesterday, the team at Redis Labs, the provider of Redis Enterprise, announced that its customers can now scale their datasets using Intel Optane DC persistent memory. Scaling will be offered cost-effectively at multi-petabyte scale, at sub-millisecond speeds. The announcement came as the two-day RedisConf 2019 (2-3 April) kicked off in San Francisco, where 1,500 Redis developers, innovators, and contributors shared their use cases and experiences.

Redis Enterprise, a linearly scalable, in-memory multi-model database, supports native and probabilistic data structures, AI, streams, document, graph, time series, and search. It has been designed and optimized to operate in either mode of Intel's persistent memory technology, Memory Mode or App Direct Mode, giving customers the flexibility to use the most effective mode for processing their massive datasets quickly and cost-effectively.

Intel Optane DC persistent memory is a memory technology that combines affordable large capacity with support for data persistence. Redis Labs collaborated closely with Intel throughout the development of Intel Optane DC persistent memory to provide high performance for the Redis Enterprise database, drastically improving performance in benchmark testing while offering huge cost savings at the same time. Benchmark testing of Intel Optane DC persistent memory showed that a single Redis Enterprise cluster node with a multi-terabyte dataset can support over one million operations per second at sub-millisecond latency, while serving over 80% of requests from persistent memory. Redis Enterprise on Intel Optane DC persistent memory also offered more than 40 percent cost savings compared with traditional DRAM-only memory.

Key features of Intel Optane DC persistent memory

It optimizes in-memory databases for advanced analytics in multi-cloud environments.
It reduces the wait time associated with fetching datasets from system storage. It also helps transform content delivery networks, bringing greater memory capacity for delivering immersive content at the intelligent edge and providing better user experiences. It provides consistent QoS (Quality of Service) levels in order to reach more customers while managing TCO (Total Cost of Ownership) at both the hardware and operating-cost levels, offering cost-effective solutions for customers.

Intel Optane DC persistent memory provides a persistent memory tier between DRAM and SSD, delivering up to 6TB of non-volatile memory capacity in a two-socket server alongside up to 1.5TB of DRAM. This extends a standard machine's memory capacity to 7.5TB of byte-addressable memory (DRAM + persistent memory) while also providing persistence. The technology is available in a DIMM form factor, as 128, 256, and 512GB persistent memory modules.

Alvin Richards, chief product officer at Redis Labs, wrote to us in an email, "Enterprises are faced with increasingly massive datasets that require instantaneous processing across multiple data-models. With Intel Optane DC persistent memory, combining with the rich data models supported by Redis Enterprise, global enterprises can now achieve sub-millisecond latency while processing millions of operations per second with affordable server infrastructure costs."

He further added, "Through our close collaboration with Intel, Redis Enterprise on Intel Optane DC persistent memory our customers will not have to compromise on performance, scale, and budget for their multi-terabyte datasets."

Redis Enterprise is available for any cloud service, or as downloadable software for hardware with Intel Optane DC persistent memory support. To know more about Intel Optane DC persistent memory, check out Intel's page.
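The tiered-memory idea behind these numbers, keep the hottest values in a small fast DRAM tier while the bulk of the data lives in a larger persistent tier, can be sketched as a toy two-tier store. This is a hypothetical illustration of the general technique, not Redis Enterprise's implementation:

```python
class TieredStore:
    """Toy two-tier key-value store: a capacity-limited DRAM tier caches
    hot values; a larger 'persistent memory' tier is the source of truth."""

    def __init__(self, dram_capacity):
        self.dram_capacity = dram_capacity
        self.dram = {}   # small, fast tier
        self.pmem = {}   # large, persistent tier
        self.hits = {"dram": 0, "pmem": 0}

    def put(self, key, value):
        self.pmem[key] = value  # writes land in the persistent tier

    def get(self, key):
        if key in self.dram:
            self.hits["dram"] += 1
            return self.dram[key]
        value = self.pmem[key]
        self.hits["pmem"] += 1
        if len(self.dram) >= self.dram_capacity:
            self.dram.pop(next(iter(self.dram)))  # naive eviction policy
        self.dram[key] = value  # promote the hot key into DRAM
        return value
```

With a skewed access pattern, most reads are eventually served from the fast tier, which is the effect the benchmark's "over 80% of requests from persistent memory" figure trades off against DRAM cost.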
Announcements at RedisConf 19

Yesterday at RedisConf 19, Redis Labs introduced two new data models and a data programmability paradigm for multi-model operation, making major announcements including RedisTimeSeries, RedisAI, and RedisGears.

RedisTimeSeries

RedisTimeSeries is designed to collect and store high-volume, high-velocity data and organize it by time intervals. It helps organizations easily process useful data points with built-in capabilities for downsampling, aggregation, and compression, giving them the ability to query and extract data in real time for analytics.

RedisAI

RedisAI eliminates the need to migrate data to and from different environments and allows developers to apply state-of-the-art AI models to the data. It reduces processing overhead by integrating with common deep learning frameworks, including TensorFlow, PyTorch, and TorchScript, and by utilizing Redis Cluster capabilities over GPU-based servers.

RedisGears

RedisGears, an in-database serverless engine, can operate multiple models simultaneously. It is based on the efficient Redis Cluster distributed architecture and enables infinite programmability options, supporting event-driven or transaction-based operations.

Today, Redis Labs will be showing how to get the most out of Redis Enterprise on Intel's persistent memory at RedisConf 19.

Redis Labs moves from Apache2 modified with Commons Clause to Redis Source Available License (RSAL)

Redis Labs announces its annual growth of more than 60% in the fiscal year 2019

Redis Labs raises $60 million in Series E funding led by Francisco Partners


Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs

Natasha Mathur
03 Apr 2019
3 min read
The Facebook AI team yesterday announced the open-sourcing of PyTorch-BigGraph (PBG), a tool that enables faster and easier production of graph embeddings for large graphs. With PyTorch-BigGraph, anyone can take a large graph and produce high-quality embeddings on a single machine or on multiple machines in parallel. PBG is written in PyTorch, allowing researchers and engineers to easily swap in their own loss functions, models, and other components; it also computes gradients and is automatically scalable.

The Facebook AI team states that standard graph embedding methods don't scale well: they cannot operate on large graphs consisting of billions of nodes and edges, and many graphs exceed the memory capacity of commodity servers, creating problems for embedding systems. PBG helps prevent this by performing block partitioning of the graph, which overcomes the memory limitations of graph embeddings. Nodes are randomly divided into P partitions, sized so that two partitions fit easily in memory, and the edges are then divided into P² buckets based on their source and destination nodes. After this partitioning, training can be performed on one bucket at a time.

PBG offers two ways to train embeddings of partitioned graph data: single-machine and distributed training. In single-machine training, embeddings and edges are swapped out when they are not being used; in distributed training, PBG uses PyTorch parallelization primitives, and embeddings are distributed across the memory of multiple machines. The Facebook AI team also made several modifications to standard negative sampling, which is necessary for large graphs.
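The block-partitioning scheme described above can be sketched in a few lines of Python. This is a minimal illustration of the bucketing idea only (PBG's real implementation handles entity types, checkpointing, and much more):

```python
import random

def partition_graph(nodes, edges, P, seed=0):
    """Randomly assign nodes to P partitions, then place each edge into
    one of P*P buckets keyed by the partitions of its source and
    destination nodes, as in PBG's block partitioning."""
    rng = random.Random(seed)
    part_of = {n: rng.randrange(P) for n in nodes}
    buckets = {(i, j): [] for i in range(P) for j in range(P)}
    for src, dst in edges:
        buckets[(part_of[src], part_of[dst])].append((src, dst))
    return part_of, buckets

nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
part_of, buckets = partition_graph(nodes, edges, P=2)
# Training then iterates over one bucket at a time, so at most two node
# partitions ever need to be resident in memory.
```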
“We took advantage of the linearity of the functional form to reuse a single batch of N random nodes to produce corrupted negative samples for N training edges... this allows us to train on many negative examples per true edge at a little computational cost,” says the Facebook AI team.

To produce embeddings useful in different downstream tasks, the Facebook AI team found an effective approach that involves corrupting edges with a mix of 50 percent nodes sampled uniformly and 50 percent nodes sampled based on their number of edges.

To analyze PBG's performance, Facebook AI used the publicly available Freebase knowledge graph, comprising more than 120 million nodes and 2.7 billion edges, as well as FB15k, a smaller subset of the Freebase graph. PBG performed comparably to other state-of-the-art embedding methods on the FB15k dataset. PBG was also used to train embeddings for the full Freebase graph, where its partitioning scheme reduced both memory usage and training time. PBG embeddings were also evaluated on several publicly available social graph datasets, where PBG outperformed all competing methods.

“We... hope that PBG will be a useful tool for smaller companies and organizations that may have large graph data sets but not the tools to apply this data to their ML applications. We hope that this encourages practitioners to release and experiment with even larger data sets,” states the Facebook AI team.

For more information, check out the official Facebook AI blog.

PyTorch 1.0 is here with JIT, C++ API, and new distributed packages

PyTorch 1.0 preview release is production ready with torch.jit, c10d distributed library, C++ API

PyTorch-based HyperLearn Statsmodels aims to implement a faster and leaner GPU Sklearn


Google workers demand fair treatment for contractors; company responds by rolling out mandatory benefits to improve working conditions

Natasha Mathur
03 Apr 2019
4 min read
Over 900 Google workers signed a letter yesterday urging Google to treat its contract workers fairly. Contract workers make up nearly 54% of Google's workforce. The letter, published on Medium by the Google Walkout For Real Change group, states that on 8th March, about 82% of Google's 43-member Personality team were informed that their existing contract term had been shortened and that they would be terminated by 5th April. The Personality team describes itself as an international contract team responsible for the voice of the Google Assistant across the world: "We are the human labor that makes the Google Assistant relevant, funny, and relatable in more than 50 languages," reads the letter.

Since the contract team consists of expats from around the world, many would have to make big changes in their personal lives and move back to their home countries without any financial support. The letter states that contractors were assured by their leads that the contract would be respected; however, the onset of layoffs across Google's offices globally seemed to belie that assurance. Moreover, Google did not inform the contractors about the layoffs, terming them a "change in strategy".

The letter also sheds light on the discriminatory environment within Google towards its TVCs (temps, vendors, and contractors). For instance, contractors are offered neither paid holidays nor any health care. During the layoff process, Google asked managers and full-time employees to distance themselves from the contractors and not offer them any support, so that Google would not come under legal obligations. The letter condemns the fact that Google boasts of its ability to scale up and down with agility, stating, "the whole team thrown into financial uncertainty is what scaling down quickly looks like for Google workers. This is the human cost of agility".
The group has laid down three demands in the letter:

Google should respect and uphold the existing contracts. If contracts are shortened, payment should be made for the remaining length of the contract.

Google should respect the work of contractors and convert them to full-time employees.

Google should respect humanity. A policy should be implemented that allows FTEs (full-time employees) to openly empathize with TVCs, and FTEs should be able to thank TVCs for the job they've done.

Google's response to the letter

Google responded to the letter yesterday, stating that it is improving the working conditions of TVCs. Under the new changes, by 2022 all contractors who work at least 33 hours per week for Google will receive full benefits, including:

comprehensive health care
paid parental leave
a $15 minimum wage
a minimum of eight days of sick leave
$5,000 per year in tuition reimbursement for workers wanting to learn new skills

"These changes are significant and we're inspired by the thousands of full-time employees and TVCs who came together to make this happen," reads the letter. However, the Personality team is still waiting to hear back from Google on whether the company will respect the current contracts or convert them into full-time positions.

https://twitter.com/GoogleWalkout/status/1113206052957433856

Eileen Naughton, VP of people operations at Google, told The Hill, "These are meaningful changes, and we're starting in the U.S., where comprehensive healthcare and paid parental leave are not mandated by U.S. law. As we learn from our implementation here, we'll identify and address areas of potential improvement in other areas of the world."

Check out the official letter by Google workers here.
#GooglePayoutsForAll: A digital protest against Google's $135 million execs payout for misconduct

Google confirms it paid $135 million as exit packages to senior execs accused of sexual harassment

Google finally ends Forced arbitration for all its employees


Mozilla is exploring ways to reduce notification permission prompt spam in Firefox

Bhagyashree R
03 Apr 2019
3 min read
Yesterday, Mozilla announced that it is launching two experiments to understand how it can reduce "permission prompt spam" in Firefox. Last year Mozilla added a feature to Firefox that allows users to completely block permission prompts; it is now planning a new option for those who do not want to take such a drastic step.

Permission prompts have become quite common nowadays. They allow websites to get user permission for accessing powerful features when needed, but they often annoy users when shown unsolicited and out of context, for instance, prompts asking for permission to send push notifications. Mozilla's telemetry data shows that the notification prompt is the most frequently shown permission prompt, with about 18 million prompts shown on Firefox Beta from December 25, 2018 to January 24, 2019. Of these 18 million prompts, not even 3 percent were accepted by users, and 19 percent of the prompts caused users to immediately leave the site.

Such a low acceptance rate led to two conclusions: first, some websites show the notification prompt without the intent of using it to enhance the user experience, or fail to express their intent clearly in the prompt; second, some websites show the notification permission prompt too early, without giving users enough time to decide whether they want notifications.

To get a better idea of how and when websites should ask for notification permission, Mozilla is launching these two experiments:

Experiment 1: Requiring user interaction for notification permission prompts in Nightly 68

The first experiment requires a user gesture, like a click or a keystroke, to trigger the code that requests permission. From April 1st to 29th, requests for permission to use notifications will be temporarily denied unless they follow a click or keystroke.
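The gating rule in this first experiment, deny the request unless it follows a recent user gesture, can be modeled with a small sketch. This is a toy illustration of the policy only; the gesture window length here is an invented parameter, and Firefox's actual logic lives in the browser, not in site code:

```python
import time

class PermissionGate:
    """Toy model of the experiment's rule: a notification permission
    request is auto-denied unless it follows a recent user gesture
    such as a click or keystroke."""

    def __init__(self, window_seconds=5.0):
        self.window = window_seconds  # hypothetical gesture-validity window
        self.last_gesture = None

    def user_gesture(self):
        # Called when the user clicks or presses a key on the page.
        self.last_gesture = time.monotonic()

    def request_notifications(self):
        if self.last_gesture is None:
            return "denied"  # no gesture has ever occurred
        if time.monotonic() - self.last_gesture <= self.window:
            return "prompt shown"
        return "denied"  # the gesture is too old to count
```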
In the first two weeks, no user-facing notifications will be shown when the restriction is applied to a website. In the last two weeks of the experiment, an animated icon will be shown in the address bar when the restriction is applied; if the user clicks on the icon, they will be presented with the prompt at that time.

Experiment 2: Collecting interaction and environment data around permission prompts from release users

Mozilla believes that requiring user interaction is not a perfect solution to the permission spam problem. To come up with a better approach, it wants more insight into how Firefox users interact with permission prompts, so it plans to launch an experiment in Firefox Release 67 to gather information about the circumstances in which users interact with them, for example: has the user been on the site for a long time? Has the user rejected a lot of permission prompts before? With this experiment, Mozilla aims to collect a set of possible heuristics for future permission prompt restrictions.

To know more in detail, visit Mozilla's official blog.

Mozilla launches Firefox Lockbox, a password manager for Android

Mozilla's Firefox Send is now publicly available as an encrypted file sharing service

Mozilla Firefox will soon support 'letterboxing', an anti-fingerprinting technique of the Tor Browser

Epic releases Unreal Engine 4.22, focuses on adding “photorealism in real-time environments”

Sugandha Lahoti
03 Apr 2019
4 min read
Epic Games released a new version of its flagship game engine, Unreal Engine 4.22. This release comes with a total of 174 improvements, focused on "pushing the boundaries of photorealism in real-time environments". It also brings improved build times, up to 3x faster, new features such as real-time ray tracing, and support for Microsoft HoloLens remote streaming and Visual Studio 2019.

What's new in Unreal Engine 4.22?

Real-Time Ray Tracing and Path Tracing (Early Access): The ray tracing features, first introduced in a preview in mid-February, are composed of a series of ray tracing shaders and ray tracing effects that help achieve natural, realistic-looking lighting effects in real time. The Path Tracer includes a full global illumination path for indirect lighting that creates ground-truth reference renders right inside the engine, improving the workflow for content in a scene without needing to export to a third-party offline path tracer for comparison.

New mesh drawing pipeline: The new pipeline for mesh drawing results in faster caching of information for static scene elements. Automatic instancing merges draw calls where possible, resulting in four to six times fewer lines of code. This change is a big one, so backwards compatibility for Drawing Policies is not possible; any custom Drawing Policies will need to be rewritten as FMeshPassProcessors in the new architecture.

Multi-user editing (Early Access): Simultaneous multi-user editing allows multiple level designers and artists to connect multiple instances of Unreal Editor together to work collaboratively in a shared editing session.

Faster C++ iterations: Epic has licensed Molecular Matters' Live++ for all developers to use on their Unreal Engine projects, and integrated it as the new Live Coding feature. Developers can now make C++ code changes in their development environment and compile and patch them into a running editor or standalone game in a few seconds.
UE 4.22 also optimizes UnrealBuildTool and UnrealHeaderTool, reducing build times and resulting in up to 3x faster iterations when making C++ code changes. Improved audio with TimeSynth (Early access): TimeSynth is a new audio component with features like sample accurate starting, stopping, and concatenation of audio clips. Also includes precise and synchronous audio event queuing. Enhanced Animation: Unreal Engine 4.22 comes with a new Animation Plugin which is based upon the Master-Pose Component system and adds blending and additive Animation States. It reduces the overall amount of animation work required for a crowd of actors. This release also features an Anim Budgeter tool to help developers set a fixed budget per platform (ms of work to perform on the gamethread). Improvements in the Virtual Production Pipeline: New Composure UI: Unreal’s built-in compositing tool Composure has an updated UI to achieve real time compositing capabilities to build images, video feeds, and CG elements directly within the Unreal Engine. OpenColorIO (OCIO) color profiles: Unreal Engine now supports the Open Color IO framework for transforming the color space of any Texture or Composure Element directly within the Unreal Engine. Hardware-accelerated video decoding (Experimental): On Windows platforms, UE 4.22 can use the GPU to speed up the processing of H.264 video streams to reduce the strain on the CPU when playing back video streams. New Media I/O Formats: UE 4.22 ships with new features for professional video I/O input formats and devices, including 4K UHD inputs for both AJA and Blackmagic and AJA Kona 5 devices. nDisplay improvements (Experimental): Several new features make the nDisplay multi-display rendering system more flexible, handling new kinds of hardware configurations and inputs. These were just a select few updates. To learn more about Unreal Engine 4.22 head on over to the Unreal Engine blog. 
Unreal Engine 4.22 update: support added for Microsoft’s DirectX Raytracing (DXR) Unreal Engine 4.20 released with focus on mobile and immersive (AR/VR/MR) devices Implementing an AI in Unreal Engine 4 with AI Perception components [Tutorial]

Richard Gall
02 Apr 2019
3 min read

You can now integrate chaos engineering into your CI and CD pipelines thanks to Gremlin and Spinnaker

Chaos engineering is a trend that has been evolving quickly over the last 12 months. While for the past decade it has largely been the preserve of Silicon Valley's biggest companies, that has been changing thanks to platforms and tools like Gremlin and an increased focus on software resiliency. Today marks a particularly important step for chaos engineering, as Gremlin has partnered with Spinnaker, the Netflix-built continuous deployment platform, to allow engineering teams to automate chaos engineering 'experiments' throughout their CI and CD pipelines. Ultimately it means DevOps teams can think differently about chaos engineering: it moves from localized experiments that require an in-depth understanding of one's infrastructure to something built into the development and deployment process. More importantly, it makes it easier for engineering teams to take complete ownership of the reliability of their software. At a time when distributed systems bring more unpredictability into infrastructure, and when downtime has never been more costly (a Gartner report suggested downtime costs the average U.S. company $5,600 a minute all the way back in 2014), this is a step that could have a significant impact on how engineers work in the future. Read next: How Gremlin is making chaos engineering accessible [Interview]

Spinnaker and chaos engineering

Spinnaker is an open source continuous delivery platform built by Netflix and supported by Google, Microsoft, and Oracle. It has been specifically developed for highly distributed and hybrid systems. This makes it a great fit for Gremlin, and also highlights that the growth of chaos engineering is being driven by the move to cloud.
Adam Jordens, a core contributor to Spinnaker and a member of the Spinnaker Technical Oversight Committee, said that "with the rise of microservices and distributed architectures, it's more important than ever to understand how your cloud infrastructure behaves under stress." Jordens continued: "By integrating with Gremlin, companies will be able to automate chaos engineering into their continuous delivery platform for the continual hardening and resilience of their internet systems." Kolton Andrus, Gremlin CEO and co-founder, explained the importance of Spinnaker in relation to chaos engineering, saying that "by integrating with Gremlin, users can now automate chaos experiments across multiple cloud providers including AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, Openstack, and more, enabling enterprises to build more resilient software." In recent months Gremlin has been working hard on products and features that make chaos engineering more accessible to companies and their engineering teams. In February, it released Gremlin Free, a free version of Gremlin designed to offer users a starting point for performing chaos experiments.
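The integration described above essentially turns a chaos experiment into a deployment gate: inject a fault, check service health, then promote or roll back. A minimal Python sketch of that control flow follows; every function name here is a hypothetical stand-in, not the actual Gremlin or Spinnaker API:

```python
def run_latency_experiment(service: str, delay_ms: int) -> str:
    """Stand-in for a chaos-tool call that injects latency; returns an experiment id."""
    return f"exp-{service}-{delay_ms}ms"

def service_healthy(service: str) -> bool:
    """Stand-in health check; a real pipeline would query monitoring/SLO data here."""
    return True  # pretend error rates stayed within budget during the fault

def chaos_gate(service: str) -> str:
    """Pipeline stage: run the experiment, then decide to promote or roll back."""
    experiment = run_latency_experiment(service, delay_ms=300)
    if service_healthy(service):
        return f"{experiment}: healthy under fault, promote"
    return f"{experiment}: degraded under fault, roll back"

print(chaos_gate("checkout"))  # exp-checkout-300ms: healthy under fault, promote
```

In a real setup the gate would live in the delivery pipeline's stage configuration, and a failed experiment would halt the rollout rather than return a string.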

Amrata Joshi
02 Apr 2019
3 min read

Chris Dickinson on how to implement Git in Rust

Chris Dickinson, a developer working on implementing Git in Rust, shared updates on his project, Git-rs. This is his second attempt at the project. He writes, "I'm trying again this year after reading more of 'Programming Rust' (Blandy, Orendorff)." Dickinson maintains a to-do list covering the steps right from reading objects from the loose store to creating a packfile and publishing it to crates. You can check out the full project for his day-by-day updates. It is also interesting to see developers sharing their projects on GitHub and learning something new daily from the experience. Users are, overall, happy to see Dickinson's contribution. One commented on Reddit, "Maybe everybody is happy just to use this as a personal learning experience for now, but I think there will be a lot of interest in a shared project eventually." Users are also sharing experiences from their own projects. A commenter on Hacker News wrote, "I love to see people reimplementing existing tools on their own, because I find that to be a great way to learn more about those tools. I started on a Git implementation in Rust as well, though I haven't worked on it in a while."

Why work with Rust?

Rust has been gaining tremendous popularity in recent times. Steve Klabnik, a popular blogger and developer, shares his experiences working with Rust and how the language has outgrown him. He writes in his blog post, "I'm the only person who has been to every Rust conference in existence so far. I went to RustCamp, all three RustConfs, all five RustFests so far, all three Rust Belt Rusts. One RustRush. Am I forgetting any? Thirteen Rust conferences in the past four years." He further adds, "I'm starting to get used to hearing 'oh yeah our team has been using Rust in production for a while now, it's great.' The first time that happened, it felt very strange. Exciting, but strange.
I wonder what the next stage of Rust's growth will feel like." Rust is also in the top fifteen languages by number of pull requests in the 2018 GitHub Octoverse report. Moreover, according to the Go User Survey 2018, 19% of respondents ranked Rust as a top preferred language, indicating a high level of interest in Rust among that audience. Last month, the Rust team announced the stable release of Rust 1.33.0, which brought improvements to const fns, the compiler, and libraries. Last week, the Rust community organized the Rust Latam 2019 conference in Montevideo, involving 200+ Rust developers and enthusiasts from around the world. https://twitter.com/Sunjay03/status/1112095011951308800 'Developers' lives matter': Chinese developers protest over the "996 work schedule" on GitHub Sublime Text 3.2 released with Git integration, improved themes, editor control and much more! Microsoft open sources the Windows Calculator code on GitHub  
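The first item on Dickinson's to-do list, reading objects from the loose store, is a nice illustration of how simple Git's storage model is: a loose object is just a zlib-compressed buffer of the form '<type> <size>\0<content>'. A short sketch of that round trip (shown here in Python for illustration; Git-rs itself is written in Rust):

```python
import zlib

def write_loose_object(obj_type: str, content: bytes) -> bytes:
    """Encode content the way Git stores loose objects: '<type> <size>' + NUL + body, zlib-compressed."""
    header = f"{obj_type} {len(content)}".encode() + b"\x00"
    return zlib.compress(header + content)

def read_loose_object(raw: bytes) -> tuple:
    """Decompress a loose object and split the header from the body."""
    data = zlib.decompress(raw)
    header, body = data.split(b"\x00", 1)
    obj_type, size = header.split(b" ")
    assert int(size) == len(body), "corrupt object: size mismatch"
    return obj_type.decode(), body

stored = write_loose_object("blob", b"hello world\n")
print(read_loose_object(stored))  # ('blob', b'hello world\n')
```

In a real repository these compressed buffers live under .git/objects/, named by the SHA-1 hash of the uncompressed header plus body.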

Amrata Joshi
02 Apr 2019
3 min read

Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council

Last week, Google announced the formation of the Advanced Technology External Advisory Council (ATEAC) to help it navigate major issues in AI such as facial recognition and machine learning fairness. The group was announced by Kent Walker, Google's senior vice president of global affairs, who said the council will provide diverse perspectives to Google. Google appointed eight members to the council from diverse fields including behavioral economics, privacy, applied mathematics, machine learning, industrial engineering, AI ethics, digital ethics, foreign policy, and public policy. Now a group of Google employees is urging the company to remove one appointee, Kay Coles James, the Heritage Foundation president, who has promoted anti-trans and anti-immigrant views. Her tweets document her opposition to LGBTQ rights, and Heritage has even hosted a panel of anti-transgender activists that lobbied against LGBTQ discrimination protections proposed by congressional Democrats. https://twitter.com/KayColesJames/status/1108768455141007360 https://twitter.com/KayColesJames/status/1108365238779498497 Yesterday, a group of employees known as 'Googlers Against Transphobia and Hate' filed a petition. The petition reads, "In selecting James, Google is making clear that its version of 'ethics' values proximity to power over the wellbeing of trans people, other LGBTQ people, and immigrants. Such a position directly contravenes Google's stated values." The petition has already been signed by more than 1,000 Google employees, who write, "By appointing James to the ATEAC, Google elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making; this is unacceptable." Researchers and civil society activists have also joined the pushback against the appointment.
Alessandro Acquisti, a behavioral economist and privacy researcher, has declined an invitation to join the council. https://twitter.com/ssnstudy/status/1112099054551515138 Google employees and researchers wrote that appointing James to the council "significantly undermines Google's position on AI ethics and fairness," pointing out that there have been consistent civil rights concerns around some AI technology. The petition further reads, "Not only are James' views counter to Google's stated values, but they are directly counter to the project of ensuring that the development and application of AI prioritizes justice over profit." Some, however, are taking a stand for James, arguing her views are widely held. Cal Smith wrote on Medium, "Her views are not uncommon, and in fact are shared by a good percentage of Americans. If you are to have a truly representative AI that prioritizes non-discrimination then you must have a wide range of views included, including those you disagree with." The petition will likely put pressure on Google, given that its intent is to strengthen human rights more than anything else, but it remains to be seen what the company finally decides. Check out the letter by the Google employees here. Is Google trying to ethics-wash its decisions with its new Advanced Tech External Advisory Council? European Union fined Google 1.49 billion euros for antitrust violations in online advertising Google Podcasts is transcribing full podcast episodes for improving search results  
Fatema Patrawala
02 Apr 2019
8 min read

Surprise NPM layoffs raise questions about the company culture

Headlines about the recent NPM layoffs have raised questions about the company's culture and ethics. NPM, which stands for Node Package Manager, is now being derided as "Not Politely Managed". The San Francisco startup NPM Inc, the company behind the widely used npm JavaScript package repository, laid off 5 employees in an unprofessional and unethical manner. The incident underscores that many of us, while accepting those lucrative job offers, merely ask companies not to be unethical, and seldom expect them to actually be good. Indeed, social psychologist Roy Baumeister convincingly argues there's an evolutionary reason to focus more on getting people to avoid bad things than on getting them to do good things; among other reasons, humans are hardwired to consider potential threats that could harm us. Bob Sutton, author of influential books like Good Boss, Bad Boss and The No Asshole Rule, draws on Baumeister's work to highlight why it's so critical to stamp out poorly behaving leaders (and employees) in organizations. Frédéric Harper, a developer advocate who was among those who lost their jobs, posted at length about the situation on Twitter. His concerns did not come from being laid off. That happens, he said, and will happen again. "It's the total lack of respect, empathy and professionalism of the process," he said. In an email to The Register, he said there appeared to be a disconnect between the company's professed values and its behavior.

The NPM layoffs took root under the new leadership

The layoffs actually started last summer when the company hired a new CEO, Bryan Bogensberger, to take the company from about $3m in annual revenue to 10x-20x that, explained an early NPM employee who spoke with The Register on condition of anonymity. Bogensberger was previously the CEO and co-founder of Inktank, a leading provider of scale-out, open source storage systems that was acquired by Red Hat, Inc. for $175 million in 2014. He has been running NPM since around July or August 2018, a source explained, but wasn't actually announced as CEO until January 2019 because his paperwork wasn't in order. Bogensberger brought in his own people, displacing longtime NPM staffers. "As he stacked the management ranks with former colleagues from a previous startup, there were unforced errors," another source told The Register. A culture of suspicion and hostility emerged under the new leadership. At an NPM all-hands meeting, employees were encouraged to ask frank questions about the company's new direction; those who spoke up were summarily fired last week, the individual said, at the recommendation of an HR consultant. https://twitter.com/ThatMightBePaul/status/1112843936136159232 People were very surprised by the layoffs at NPM. "There was no sign it was coming. It wasn't skills based because some of them heard they were doing great," said CJ Silverio, the ex-CTO of NPM, who was herself laid off last December. Silverio and Harper are both publicizing the layoffs because they declined to sign the non-disparagement clause in the NPM severance package, which would have prevented them from disclosing the company's wrongdoing publicly. A California law that came into effect in January, SB 1300, prohibits non-disparagement clauses in employment severance packages, but in general such clauses are legal. One of the employees fired last Friday was a month away from having stock options vest; the individual could have retained those options by signing a non-disparagement clause, but refused. https://twitter.com/neverett/status/1110626264841359360 "We cannot comment on confidential personnel matters," CEO Bryan Bogensberger said.
"However, since November 1, we have approximately doubled in size to 55 people today, and continue to hire aggressively for many positions that will optimize and expand our ability to support and grow the JavaScript ecosystem over the long term."

The JavaScript community sees it as a leadership failure

The community is outraged over the incident; many regard it as a total leadership failure. Others have commented that they would put NPM on their "do not apply" list of employers. The news comes as a huge disappointment, and questions are being asked about the continuity of the npm registry. Some have suggested creating a non-profit Node package registry, while others have downgraded their paid package subscriptions to free ones. Rebecca Turner, a core contributor to the project and one of Harper's direct reports, has voluntarily resigned in solidarity with the colleagues who were let go. https://twitter.com/ReBeccaOrg/status/1113121700281851904

How goodness inspires goodness in organizations

Compelling research by David Jones and his colleagues finds that job applicants prefer to work for companies that show real social responsibility: those that improve their communities, the environment, and the world. Employees are most likely to be galvanized by leaders who are perceived to be fair, virtuous, and self-sacrificing. Separate research by Ethical Systems founder Jonathan Haidt demonstrates that such leaders inspire employees to feel a sense of "elevation", a positive emotion that lifts them up as a result of moral excellence. Liz Fong-Jones, a developer advocate at Honeycomb, tweeted about the npm layoffs that she would never want to be a manager again if she had to go through this kind of process.
https://twitter.com/lizthegrey/status/1112902206381064192

Layoffs are becoming more common and frequent in tech

Last week, IBM was also in the news, sued by former employees for violating laws prohibiting age discrimination in the workplace: the Older Workers Benefit Protection Act (OWBPA) and the Age Discrimination in Employment Act (ADEA). Another shock last week was Oracle laying off a large number of employees as part of an "organizational restructuring". The reason behind this round of layoffs was not clear; some said it was done to save money, others that people working on a legacy product were let go. While all of this does raise questions about company culture, the Internet and social media make corporate scandals harder than ever to hide. With real social responsibility easier than ever to see and applaud, we hope to see more of "the right things" actually getting done.

Update: NPM's statement, 10 days after the incident

After receiving public and community backlash over its actions, NPM published a statement on Medium on April 11, saying, "we let go of 5 people in a company restructuring. The way that we undertook the process, unfortunately, made the terminations more painful than they needed to be, which we deeply regret, and we are sorry. As part of our mission, it's important that we treat our employees and our community well. We will continue to refine and review our processes internally, utilizing the feedback we receive to be the best company and community we can be." Does this mean that any company can remove its employees for its own motives and later apologize to clean up its image?
Update, June 14: special report from The Register

The Register published a special report last Friday saying that NPM Inc, the company behind the JavaScript package registry, plans to fight the union-busting complaints brought to America's labor watchdog by fired staffers, rather than settle the claims. An NLRB filing obtained by The Register alleges several incidents in which the terminated staffers claim executives took action against them in violation of labor laws. On February 27, 2019, the filing states, a senior VP "during a meeting with employees at a work conference in Napa Valley, California, implicitly threatened employees with unspecified reprisals for raising group concerns about their working conditions." The document also describes a March 25, 2019, video conference call in which it was "impliedly [sic] threatened that [NPM Inc] would terminate employees who engaged in union activities," and a message sent over the company's Keybase messaging system that threatened similar reprisals "for discussing employee layoffs." The alleged threats followed a letter presented to this VP in mid-February that outlined employee concerns about "management, increased workload, and employee retention." The Register has heard accounts of negotiations between the tech company and its aggrieved former employees, from individuals apprised of the talks, during which a clearly fuming CEO Bryan Bogensberger called off settlement discussions: a curious gambit, if accurate, given the insubstantial amount of money on the table. NPM Inc has defended its moves as necessary to establish a sustainable business, but in prioritizing profit, arguably at the expense of people, it has alienated a fair number of developers who now imagine a future that doesn't depend as much on NPM's resources. The situation has deteriorated to the point that former staffers say the code for the npm command-line interface (CLI) suffers from neglect, with unfixed bugs piling up and pull requests languishing.
The Register understands further staff attrition related to the CLI is expected. To learn more about this story, check out the full report published by The Register. The npm engineering team shares why Rust was the best choice for addressing CPU-bound bottlenecks npm Inc. announces npm Enterprise, the first management code registry for organizations npm JavaScript predictions for 2019: React, GraphQL, and TypeScript are three technologies to learn

Sugandha Lahoti
02 Apr 2019
6 min read

Ahead of Indian elections, Facebook removes hundreds of assets spreading fake news and hate speech, but are they too late?

As India prepares to vote for its general elections, starting April 11, Facebook has identified and removed several pages, groups, and accounts that were engaged in coordinated inauthentic behavior, spreading fake news and hate speech. These pages either supported the ruling Bharatiya Janata Party (BJP) or the main opposition party, the Indian National Congress (INC). Simultaneously, Facebook also took down a set of pages linked to Pakistan that engaged in coordinated inauthentic behavior after the Pulwama attacks. Read also: Zuckerberg wants to set the agenda for tech regulation in yet another 'digital gangster' move.

Facebook: Removing Coordinated Inauthentic Behavior and Spam

Facebook announced take-downs of coordinated manipulation by the two biggest Indian political parties as well as by Pakistan's military. In a blog post, Facebook explained the actions it took:

We removed 103 Pages, Groups and accounts on both Facebook and Instagram for engaging in coordinated inauthentic behavior as part of a network that originated in Pakistan.

We removed 687 Facebook Pages and accounts (the majority of which had already been detected and suspended by our automated systems) that engaged in coordinated inauthentic behavior in India and were linked to individuals associated with an IT Cell of the Indian National Congress (INC).

We removed 15 Facebook Pages, Groups and accounts that engaged in coordinated inauthentic behavior in India and were linked to individuals associated with an Indian IT firm, Silver Touch.

We removed 321 Facebook Pages and accounts in India that have broken our rules against spam. Unlike the first three actions, this last activity does not represent a single or coordinated operation; instead, there are multiple sets of Pages and accounts that behaved similarly and violated our policies.
Read also: Ahead of EU 2019 elections, Facebook expands its Ad Library to provide advertising transparency in all active ads

The story behind 'The India Eye'

The company took down a pro-BJP page on Facebook and Instagram called "The India Eye", which had over 2 million followers. Alt News, a fact-checking outlet, unearthed information about this page, which vocally supported Prime Minister Narendra Modi and criticized Congress leader Rahul Gandhi. Source: The India Eye/archive The website linked to this page, theindiaeye.com, was hosted on servers belonging to Silver Touch Technologies Ltd, an Indian IT company responsible for creating Prime Minister Narendra Modi's official app. Moreover, the domain names 'theindiaeye.com', listed on the Facebook page, and 'theindiaeye.in' were registered to Himanshu Jain, a director of Silver Touch Technologies who was also managing several government projects via his company. The India Eye page was created in 2016. It is disconcerting to see that Facebook took no action even after several regional media outlets reported that the page was spreading false information about Indian politics. Engagement on its posts kept increasing, with a significant uptick from June 2018 onward. At the time Alt News published its discovery, the page had over 1.7 million followers and its posts were liked and shared by thousands on a daily basis. https://twitter.com/KaranKanishk/status/1112651599434973185

India, an interesting battleground for internet regulation

Indian political parties and their supporters have increasingly used social media (Facebook, WhatsApp, Instagram) to spread their propaganda. They are often accused of running deceptive social-media accounts that push disinformation and fake news campaigns.
Often this partisan content is in regional languages, escaping Facebook's automated screening software and its human moderators, both of which are built largely around English. Fake news also spreads via WhatsApp, where messages are end-to-end encrypted, reducing visibility into what is being shared. In India especially, it is extremely prevalent and has spurred mobs into killings initiated by nothing more than rumors sent over WhatsApp. Also, many of India's misinformation campaigns are developed and run by the political parties themselves. Their main targets: political opponents, religious minorities, and disagreeing individuals. "India's elections present a unique set of issues, including a large number of languages and an extended time period for voting," said Katie Harbath, Facebook's public policy head for global elections. She said the company had been planning for the election for more than a year. "India is a strong battleground to test Facebook's services. It will also help Facebook get the US 2020 elections right." Per the New York Times, "Facebook's performance will be a prelude for how it navigates a likely onslaught of propaganda, false information, and foreign meddling during the 2020 presidential election in the United States." Alex Stamos, a former chief security officer at Facebook, also tweeted his views on Facebook's take-downs of coordinated manipulation. https://twitter.com/alexstamos/status/1112756785675296772 https://twitter.com/alexstamos/status/1112756786564489217

Is fact checking by Facebook a mere PR stunt?

For the past year, Facebook has relied on two organizations, Boom and Agence France-Presse, to help it fact-check fake news in India for posts written in English. After Facebook's algorithms flag potentially fake posts, these fact-checkers decide which ones to investigate.
In preparation for the upcoming national elections, Facebook added five more fact-checking organizations covering seven languages in February. However, Alt News recently found that two of Facebook's media partners, the large publishing houses India Today Group and Jagran Media Network, had repeatedly published false information related to the Kashmir attack. Pratik Sinha, the founder of Alt News, said Facebook did not seem to view false news as a serious problem. "The whole thing is a P.R. effort," he said. Thenmozhi Soundararajan, the founder of Equality Labs, a human rights group in the United States, said her organization recently studied more than 1,000 Facebook posts that attacked caste and religious minorities. It found that 80 percent of the posts stayed on the social network after they were reported as hate speech, and nearly half of the posts that were initially removed were up again several months later. India's national elections begin this month, and containing fake news will be one of the biggest challenges the country will face. It is going to be a lot harder for Facebook to provide attribution and control the speech of political parties, considering these same parties will be the ones regulating Facebook after the elections. UK lawmakers publish a report after 18 month long investigation condemning Facebook's disinformation and fake news practices. WhatsApp limits users to five text forwards to fight against fake news and misinformation Is Anti-trust regulation coming to Facebook following fake news inquiry made by a global panel in the House of Commons, UK?

Natasha Mathur
02 Apr 2019
3 min read

Cloudflare adds Warp, a free VPN to 1.1.1.1 DNS app to improve internet performance and security

Cloudflare announced yesterday that it is adding Warp, a free VPN to the 1.1.1.1 DNS resolver app. Cloudflare team states that it began its plans to integrate 1.1.1.1 app with warp performance and security tech, about two years ago. The 1.1.1.1 app was released in November last year for iOS and Android. The mobile app included features such as VPN support that helped move the mobile traffic towards 1.1.1.1 DNS servers, thereby, helping improve speeds. Now with warp integration, 1.1.1.1 app will speed up mobile data using Cloudflare network to resolve DNS queries at a faster pace.  With Warp, all the unencrypted connections are encrypted automatically by default. Also, Warp comes with end-to-end encryption and doesn’t require users to install a root certificate to observe the encrypted Internet traffic. For cases when you browse the unencrypted Internet through Warp, Cloudflare’s network can cache and compress content to improve performance and decrease your data usage and mobile carrier bill. “In the 1.1.1.1 App, if users decide to enable Warp, instead of just DNS queries being secured and optimized, all Internet traffic is secured and optimized. In other words, Warp is the VPN for people who don't know what V.P.N. stands for”, states the Cloudflare team. Apart from that, Warp also offers excellent performance and reliability. Warp is built around a UDP-based protocol that has been optimized for the mobile Internet. Warp also makes use of Cloudflare’s massive global network and allows Warp to connect with servers within milliseconds. Moreover, Warp has been tested to show that it increases internet performance. Another factor is reliability which has also significantly improved. Warp is not as capable of eliminating mobile dead spots, but it is very efficient at recovering from loss. Warp doesn’t increase your battery usage as it is built around WireGuard, a new and efficient VPN protocol. 
The basic version of Warp has been added to the 1.1.1.1 app as a free option. However, Cloudflare will charge for Warp+, a premium version that will be even faster thanks to Argo technology. A low monthly fee, varying by region, will be charged for Warp+. The 1.1.1.1 app with Warp retains all the privacy protections launched with the original 1.1.1.1 app.

The Cloudflare team states that the 1.1.1.1 app with Warp is still in the works; although sign-ups for Warp aren't open yet, Cloudflare has started a waiting list where you can “claim your place” by downloading the 1.1.1.1 app or updating the existing app. Once the service is available, you'll be notified.

“Our whole team is proud that today, for the first time, we've extended the scope of that mission meaningfully to the billions of other people who use the Internet every day”, states the Cloudflare team. For more information, check out the official Warp blog post.

Cloudflare takes a step towards transparency by expanding its government warrant canaries
Cloudflare raises $150M with Franklin Templeton leading the latest round of funding
workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice

Django 2.2 is now out with classes for custom database constraints

Bhagyashree R
02 Apr 2019
2 min read
Yesterday, the Django team announced the release of Django 2.2. This release comes with classes for custom database constraints, Watchman compatibility for runserver, and more, and it supports Python 3.5, 3.6, and 3.7. As a long-term support (LTS) release, it will receive security and data loss updates for at least the next three years. This release also marks the end of mainstream support for Django 2.1, which will continue to receive security and data loss fixes until December 2019.

Following are some of the updates Django 2.2 comes with:

Classes for custom database constraints

Two new classes are introduced to create custom database constraints: CheckConstraint and UniqueConstraint. You can add constraints to models using the Meta.constraints option.

Watchman compatibility for runserver

This release adds Watchman compatibility for runserver, replacing Pyinotify. Watchman is a service that watches files, records when they change, and triggers actions when matching files change.

Simple access to request headers

Django 2.2 comes with HttpRequest.headers to allow simple access to a request's headers. It provides a case-insensitive, dict-like object for accessing all HTTP-prefixed headers from the request. Each header name is stylized with title-casing when displayed, for example, User-Agent.

Deserialization using natural keys and forward references

You can now deserialize using natural keys containing forward references by passing handle_forward_references=True to serializers.deserialize(). In addition, forward references are automatically handled by loaddata.

Some backward incompatible changes and deprecations

Starting from this release, admin actions are no longer collected from base ModelAdmin classes. Support is dropped for Geospatial Data Abstraction Library (GDAL) 1.9 and 1.10. The team has also made sqlparse a required dependency to simplify Django's database handling.
Permissions for proxy models are now created using the content type of the proxy model. With this release, a model's Meta.ordering no longer affects GROUP BY queries such as .annotate().values(); a deprecation warning is shown with the advice to add an order_by() to retain the current query. To read the entire list of updates, visit Django's official website.

Django 2.2 alpha 1.0 is now out with constraints classes, and more!
Django is revamping its governance model, plans to dissolve Django Core team
Django 2.1.2 fixes major security flaw that reveals password hash to “view only” admin users


Researchers successfully trick Tesla autopilot into driving into opposing traffic via “small stickers as interference patches on the ground”

Fatema Patrawala
02 Apr 2019
4 min read
Progress in the field of machine vision is one of the most important factors in the rise of the self-driving car. An autonomous vehicle has to be able to sense its environment and react appropriately: free space has to be calculated, solid objects avoided, and all of the instructions painted on the tarmac or posted on signs obeyed. Deep neural networks turned out to be pretty good at classifying images, but it's still worth remembering that the process is quite unlike the way humans identify images, even if the end results are fairly similar.

Researchers from Tencent Keen Security Lab have published a report detailing their successful attacks on Tesla firmware. These include remote control over the steering and an adversarial example attack on the autopilot that confuses the car into driving into the oncoming traffic lane. The researchers used an attack chain that they disclosed to Tesla, which Tesla now claims has been eliminated with recent patches.

To effect the remote steering attack, the researchers had to bypass several redundant layers of protection. Having done this, they were able to write an app that would let them connect a video-game controller to a mobile device and then steer a target vehicle, overriding the actual steering wheel in the car as well as the autopilot systems. This attack has some limitations: while a car in Park or traveling at high speed on Cruise Control can be taken over completely, a car that has recently shifted from R to D can only be remote controlled at speeds up to 8 km/h.

Tesla vehicles use a variety of neural networks for autopilot and other functions (such as detecting rain on the windscreen and switching on the wipers); the researchers were able to attack these using adversarial examples: small, mostly human-imperceptible changes that cause machine learning systems to make gross, out-of-proportion errors. Most dramatically, the researchers attacked the autopilot's lane-detection systems.
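The core idea behind adversarial examples can be shown with a minimal sketch: a toy linear classifier flipped by a small, uniform perturbation stepped against the sign of the gradient (the FGSM-style attack). This is an illustration of the general technique only, not Tencent's actual method or Tesla's network; the model and all values here are made up.

```python
import random

# Toy linear "classifier": class 1 if w . x > 0, else class 0.
random.seed(0)
w = [random.gauss(0, 1) for _ in range(100)]

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

def predict(x):
    return 1 if score(x) > 0 else 0

def sign(v):
    return 1.0 if v > 0 else -1.0

# A benign input the model confidently labels class 1
# (each feature aligned with the corresponding weight).
x = [0.1 * sign(wi) for wi in w]

# FGSM-style step: nudge every feature *against* the gradient's sign.
# For a linear model, the gradient of the score w.r.t. x is just w.
epsilon = 0.2
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

# Each feature moved by only 0.2, yet the decision flips.
print(predict(x), predict(x_adv))  # → 1 0
```

Real attacks compute the same sign-of-gradient step through a deep network rather than a linear model, which is why perturbations that are nearly invisible to humans (noise on lane markings, small stickers on the road) can produce wildly wrong outputs.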
By adding noise to lane markings, they were able to fool the autopilot into losing the lanes altogether; however, the patches they had to apply to the lane markings would not be hard for humans to spot. Much more seriously, they were able to use "small stickers" on the ground to effect a "fake lane attack" that fooled the autopilot into steering into the opposite lane, where oncoming traffic would be moving. This worked even when the targeted vehicle was operating in daylight without snow, dust, or other interference.

Misleading the vehicle in the wrong direction with patches made by a malicious attacker can, in some cases, be more dangerous than making it fail to recognize the lane. The researchers painted three inconspicuous tiny squares into the picture taken from the camera, and the vision module recognized them as a lane with a high degree of confidence. They then built the same scene in the physical world: they pasted small stickers as interference patches on the ground at an intersection and used these patches to guide a Tesla in Autosteer mode into the reverse lane. In the test scenario (Fig. 34 in the report), red dashes mark the stickers; the vehicle regards them as the continuation of its right lane and ignores the real left lane opposite the intersection. When it travels to the middle of the intersection, it takes the real left lane as its right lane and drives into the reverse lane.

Tesla autopilot's lane recognition function shows good robustness in an ordinary external environment (no strong light, rain, snow, sand, or dust interference), but it still doesn't handle this test scenario correctly. This kind of attack is simple to deploy, and the materials are easy to obtain.
As discussed in the report's introduction to Tesla's lane recognition function, Tesla uses a pure computer vision solution for lane recognition, and this experiment found that the vehicle's driving decisions are based only on the computer vision lane recognition results. The experiments proved that this architecture has security risks, and reverse lane recognition is one of the necessary functions for autonomous driving on non-closed roads. In the scene the researchers built, if the vehicle knew that the fake lane pointed into the reverse lane, it could ignore the fake lane and avoid a traffic accident.

Tesla is building its own AI hardware for self-driving cars
Tesla v9 to incorporate neural networks for autopilot
Aurora, a self-driving startup, secures $530 million in funding from Amazon, Sequoia, and T. Rowe Price among others