
Tech News - Data


GitLab goes multicloud using Crossplane with kubectl

Savia Lobo
21 May 2019
3 min read
GitLab announced yesterday that it can now be deployed across multiple clouds via Crossplane, an open source multi-cloud control plane sponsored by Upbound. The Crossplane community also demonstrated the entire process of deploying GitLab across multiple clouds. In early December last year, GitLab announced it had been chosen as the first complex app to be deployed on Crossplane.

Crossplane follows established Kubernetes patterns, such as persistent volume claims, to support a clean separation of concerns between application and infrastructure owners. It also provides a self-service model for managed services entirely within the Kubernetes API. With Crossplane, real-world application deployments from kubectl become straightforward, with enhanced support for composing external fully-managed services including Redis, PostgreSQL, and object storage.

“We’ve been working with GitLab to validate our approach and are proud to unveil the deployment of GitLab to multiple clouds entirely with kubectl using Crossplane, including the use of fully-managed services offered by the respective cloud providers,” the official Crossplane blog mentions.

Deploying GitLab with external managed services using kubectl

Crossplane extends the Kubernetes API by adding resource claims and resource classes to support composability of managed service dependencies in Kubernetes, similar to persistent volume claims and storage classes. Crossplane can be added to any existing Kubernetes cluster and layers neatly on top of clusters provisioned by Anthos, EKS, AKS, and OpenShift.

Cluster administrators install Crossplane on a Kubernetes cluster, set cloud credentials, and specify which managed services they want to make available for self-service provisioning within the cluster. Policies guide binding to specific managed service offerings configured by the cluster administrator. With this, application owners can consume and compose these managed services on demand using familiar Kubernetes patterns, without having to know the infrastructure details or manage credentials (see the sketch below).

For production deployments, GitLab recommends using external managed services for Redis, PostgreSQL, and object storage. Crossplane supports composability of both out-of-cluster public cloud managed services (GCP, AWS, Azure) and in-cluster managed services like those provided by Rook, a storage orchestrator for in-cluster cloud-native storage including Ceph, Minio, and Cassandra.

Bassam Tabbara, CEO of Upbound and a maintainer of Crossplane, said, “We’re showing a real-world example of the future of multi-cloud today. GitLab is a production application that relies on multiple fully-managed services, so by abstracting these services and integrating them with the declarative Kubernetes API, we are demonstrating the ability to standardize on a single declarative API to manage it all.”

To know more about Crossplane in detail, and the steps to deploy GitLab to multiple clouds using it, head over to Crossplane’s official website.

Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
GitLab 11.10 releases with enhanced operations dashboard, pipelines for merged results and much more!
Attackers wiped many GitHub, GitLab, and Bitbucket repos with ‘compromised’ valid credentials leaving behind a ransom note
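To make the claim-and-class pattern concrete, here is a minimal sketch of what creating a resource claim might look like through the official Kubernetes Python client. The group, version, kind, and spec fields are illustrative assumptions modeled on early Crossplane examples, not a verified API surface; consult the Crossplane documentation for the real schema.

```python
# Hedged sketch: creating a Crossplane-style resource claim via the
# Kubernetes API. The group/version/kind and spec fields below are
# illustrative assumptions, not a verified Crossplane schema.
from kubernetes import client, config

config.load_kube_config()  # use the current kubectl context
api = client.CustomObjectsApi()

# A claim: the application owner asks for "a PostgreSQL database"
# without naming a cloud provider; binding to a concrete managed
# service is resolved by a resource class the administrator set up.
claim = {
    "apiVersion": "database.crossplane.io/v1alpha1",  # assumed
    "kind": "PostgreSQLInstance",                      # assumed
    "metadata": {"name": "gitlab-postgresql"},
    "spec": {"engineVersion": "9.6"},                  # assumed field
}

api.create_namespaced_custom_object(
    group="database.crossplane.io",   # assumed group
    version="v1alpha1",
    namespace="default",
    plural="postgresqlinstances",
    body=claim,
)
```

The equivalent `kubectl apply -f claim.yaml` achieves the same thing, which is the kubectl-driven workflow the announcement highlights.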


Core security features of Elastic Stack are now free!

Amrata Joshi
21 May 2019
3 min read
Today, the team at Elastic announced that the core security features of the Elastic Stack are now free. They also announced the release of Elastic Stack versions 6.8.0 and 7.1.0, and the alpha release of Elastic Cloud on Kubernetes.

With the free core security features, users can now define roles that protect index- and cluster-level access, encrypt network traffic, create and manage users, and fully secure Kibana with Spaces. The team opened the code for these features last year and has finally made them free today, which means users can now run a fully secure cluster.

https://twitter.com/heipei/status/1130573619896225792

Release of Elastic Stack versions 6.8.0 and 7.1.0

The team also announced the release of versions 6.8.0 and 7.1.0 of the Elastic Stack today. These versions do not contain new features, but they make the core security features free in the default distribution of the Elastic Stack. The core security features include TLS for encrypted communications, file and native realms to create and manage users, and role-based access control to control user access to cluster APIs and indexes. They also allow multi-tenancy for Kibana with security for Kibana Spaces. Previously, these core security features required a paid Gold subscription; now they are free as part of the Basic tier (a hedged usage sketch follows below).

Alpha release of Elastic Cloud on Kubernetes

The team also announced the alpha release of Elastic Cloud on Kubernetes (ECK), the official Kubernetes Operator for Elasticsearch and Kibana. It is a new product based on the Kubernetes Operator pattern that lets users manage, provision, and operate Elasticsearch clusters on Kubernetes. It is designed to automate and simplify how Elasticsearch is deployed and operated in Kubernetes, provides an official way of orchestrating Elasticsearch on Kubernetes, and offers a SaaS-like experience for Elastic products and solutions on Kubernetes.

The team has moved the core security features into the default distribution of the Elastic Stack to ensure that all clusters launched and managed by ECK are secured by default at creation time. Clusters deployed via ECK include free-tier features and capabilities such as Kibana Spaces, frozen indices for dense storage, Canvas, Elastic Maps, and more. Users can now monitor Kubernetes logs and infrastructure with the help of the Elastic Logs and Elastic Infrastructure apps.

Some users think that security shouldn’t be an added feature; it should be built in. A user commented on Hacker News, “Security shouldn't be treated as a bonus feature.” Another user commented, “Security should almost always be a baseline requirement before something goes up for public sale.” Others are happy about this news. A user commented, “I know it's hard to make a buck with an open source business model but deciding to charge more for security-related features is always so frustrating to me. It leads to a culture of insecure deployments in environments when the business is trying to save money. Differentiate on storage or number of cores or something, anything but auth/security. I'm glad they've finally reversed this.”

To know more about this news, check out the blog post by Elastic.

Elasticsearch 7.0 rc1 releases with new allocation and security features
Elastic Stack 6.7 releases with Elastic Maps, Elastic Update and much more!
AWS announces Open Distro for Elasticsearch licensed under Apache 2.0
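As a rough illustration of what the now-free features enable, the sketch below connects to a TLS-protected cluster with the official Python client and creates a role and user through the security API. Hostnames, credentials, and certificate paths are placeholders, and the call style assumes the 7.x elasticsearch-py client; treat it as a sketch, not a reference configuration.

```python
# Hedged sketch: exercising the free security features (TLS, native
# realm, role-based access control) via the elasticsearch-py client.
# Hostname, credentials, and certificate path are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    ["https://localhost:9200"],
    http_auth=("elastic", "changeme"),  # native-realm user (placeholder)
    use_ssl=True,
    ca_certs="/path/to/ca.crt",         # cert for encrypted traffic
)

# Role-based access control: a role restricted to read-only access
# on one index pattern, then a user bound to that role.
es.security.put_role("logs_reader", body={
    "indices": [{"names": ["logs-*"], "privileges": ["read"]}],
})
es.security.put_user("alice", body={
    "password": "s3cret-placeholder",
    "roles": ["logs_reader"],
})
```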


Now there’s a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?

Vincy Davis
20 May 2019
5 min read
Climate change effects are now visible in countries around the globe. The world is witnessing phenomena like higher temperatures, flooding, ice melting, and much more, and many technologies have been invented in the last decade to help humans understand and adapt to these changes. Earlier this month, researchers from the Montreal Institute for Learning Algorithms, ConscientAI Labs, and Microsoft Research came up with a project that aims to generate images depicting accurate, vivid, and personalized outcomes of climate change using Machine Learning (ML) and Cycle-Consistent Adversarial Networks (CycleGANs). This will enable individuals to make more informed choices about their climate future by creating an understanding of the effects of climate change, while maintaining scientific credibility through climate model projections.

The project is to develop an ML-based tool that shows, in a personalized way, the probable effect that climate change will have on a specific location familiar to the viewer. Given an address, the tool generates an image projecting transformations that are likely to occur there based on a formal climate model. For the initial version, the generated images consist of houses and buildings specifically after flooding events.

The challenge in generating realistic images using CycleGANs is collecting the training data needed to extract the mapping function. The researchers manually searched open source photo-sharing websites for images of houses from various neighborhoods and settings, such as suburban detached houses, urban townhouses, and apartment buildings. They gathered over 500 images of non-flooded houses and the same number of flooded locations, and resized them to 300x300 pixels. The networks were trained using the publicly available PyTorch framework. The CycleGAN model was trained on these images for 200 epochs from scratch, using the Adam solver with a batch size of 1 and a learning rate of 0.0002. As per the CycleGAN training procedure, the learning rate is constant for the first 100 epochs and linearly decayed to zero over the next 100 epochs (a sketch of this schedule follows below).

Project output and future plan

The trained CycleGAN model successfully learned an adequate mapping between grass and water, which could be applied to generate fairly realistic images of flooded houses. This works best with single-family, suburban-type houses surrounded by an expanse of grass. Of the 80 images in the test set, about 70% were successfully mapped to realistically flooded houses. This initial version of the model illustrates the feasibility of applying a generative model to create personalized images of an extreme climate event, i.e., flooding, which is expected to increase in frequency based on climate change projections. Subsequent versions will integrate more varied types of houses and surroundings, as well as different types of climate-change-related extreme events (droughts, hurricanes, wildfires, air pollution, etc.), depending on the expected impacts at a given location and on forecast time horizons. There is still scope for improvement with regard to the color scheme of the generated images and the visual artifacts.

Furthermore, to channel the emotional response of the public into behavioural change or action, the researchers are planning another improvement to the model called ‘choice knobs’. These will enable users to visually see the impact of their personal choices, such as deciding to use more public transportation, as well as the impact of broader policy decisions, such as a carbon tax or increasing renewable portfolio standards. The project's greater aim is to help the general population move towards more visible public support for climate change mitigation steps at a national level, facilitating governmental interventions and helping make the required rapid changes towards a globally sustainable economy.

The researchers have stated that they need to explore more physical constraints on GAN training in order to incorporate more physical knowledge into these projections. This would enable a GAN model to transform a house to its projected flooded state while also taking into account the forecast simulations of the flooding event, represented by the physical variable outputs and probabilistic scenarios of a climate model for a given location.

Response to the project

Some developers have liked the idea of using technology to produce realistic images depicting the effect of climate change on one's own hometown, which may help people understand its adverse effects.

https://twitter.com/jameskobielus/status/1129392932988096513

Other developers are not sure whether showing people a picture of their house submerged in water will make any difference. A user on Hacker News comments, “The threshold for believing the effects of climate change has to change from reading/seeing to actually being there and touching it. Or some far more reliable system of remote verification has to be established.”

Another user adds, “Is this a real paper? It's got to be a joke, right? a parody? It's literally a request to develop images to be used for propaganda purposes. And for those who will say that climate change is going to end the world, yeah, but that doesn't mean we should develop propaganda technology that could be used for some other political purpose.”

There are already many studies and much evidence making people aware of the effects of climate change; a picture of their house submerged in water may not move them much further. Climate change is already happening and affecting our day-to-day lives. What we need now are stronger approaches to analysing, mitigating, and adapting to these changes, and more government policies to fight them.

To know more details about the project, head over to the research paper.

Read More

Amazon employees get support from Glass Lewis and ISS on its resolution for Amazon to manage climate change risks
ICLR 2019 Highlights: Algorithmic fairness, AI for social good, climate change, protein structures, GAN magic, adversarial ML and much more
Responsible tech leadership or climate washing? Microsoft hikes its carbon tax and announces new initiatives to tackle climate change
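The constant-then-linear-decay schedule described above is easy to reproduce. Here is a minimal PyTorch sketch under the stated hyperparameters (Adam, learning rate 0.0002, 100 constant plus 100 decaying epochs); the tiny model and the Adam betas are stand-in assumptions, not the paper's actual generator or settings.

```python
# Hedged sketch of the CycleGAN learning-rate schedule described above:
# constant lr for the first 100 epochs, then linear decay to zero over
# the next 100. The model below is a placeholder, not the generator.
import torch

model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)  # placeholder
optimizer = torch.optim.Adam(model.parameters(), lr=0.0002,
                             betas=(0.5, 0.999))  # betas assumed

def lr_lambda(epoch, n_const=100, n_decay=100):
    # Multiplier on the base lr: 1.0 during the constant phase,
    # then a linear ramp down toward zero.
    return 1.0 - max(0, epoch - n_const) / float(n_decay)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for epoch in range(200):
    # ... one pass over the ~500 image pairs with batch size 1 ...
    scheduler.step()  # update the learning rate once per epoch
```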


Matplotlib 3.1 releases with Python 3.6+ support, secondary axis support, and more

Bhagyashree R
20 May 2019
3 min read
Last week, the team behind Matplotlib announced the release of Matplotlib 3.1. This release comes with support for Python 3.6+, a helper method for scatter legends, secondary axis support, a concise date formatter, and more. A short usage sketch follows at the end of this article.

A helper method for scatter legends

Previously, to obtain a legend for a scatter plot, users had two options: plotting several scatters, each with an individual label, or manually creating proxy artists to show in the legend. In Matplotlib 3.1, the PathCollection class comes with the legend_elements() method to obtain the handles and labels for a scatter plot in an automated way.

Formatting date ticks better with ConciseDateFormatter

Matplotlib’s automatic date formatter is quite verbose, which is why this version brings ConciseDateFormatter, which minimizes the strings used in the tick labels as much as possible. ConciseDateFormatter is a candidate for becoming the default date tick formatter in future Matplotlib releases.

Secondary x/y axis support

Matplotlib 3.1 introduces a way to add a secondary axis to a plot, for cases like converting radians to degrees on the same plot. With the help of Axes.secondary_xaxis and Axes.secondary_yaxis, you can now make child axes with only one axis visible.

FuncScale and FuncTransform for arbitrary axes scales

Two new classes, FuncScale and FuncTransform, provide arbitrary scale transformations without having to write a new subclass of ScaleBase. You can use them as follows: ax.set_yscale('function', functions=(forward, inverse)).

Working with Matplotlib on MacOSX no longer requires a Python framework build

Previously, in order to interact correctly with MacOSX through the native GUI framework, users required a framework build of Python. In this version, the app type is updated to remove this dependency, so the MacOSX backend works with non-framework Python.

Support for forward/backward mouse buttons

Similar to key_press events, figure managers now support a ‘button_press’ event that allows binding actions to mouse buttons. One application of this event is supporting forward/backward mouse buttons in figures created with the Qt5 backend.

These are a select few updates and additions. To read the full list of updates in Matplotlib 3.1, check out the official announcement.

Matplotlib 3.0 is here with new cyclic colormaps, and convenience methods
Creating 2D and 3D plots using Matplotlib
How to Customize lines and markers in Matplotlib 2.0
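To give a feel for two of the headline features, here is a minimal sketch combining the scatter-legend helper and a secondary axis. The data is invented for illustration; the API calls follow the 3.1 release notes summarized above.

```python
# Minimal sketch of two Matplotlib 3.1 features described above:
# PathCollection.legend_elements() and Axes.secondary_xaxis().
# The data is invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 50)
labels = np.random.randint(0, 3, size=x.size)  # three fake classes

fig, ax = plt.subplots()
scatter = ax.scatter(x, np.sin(x), c=labels)

# Automated scatter legend: handles and labels come straight from
# the PathCollection instead of hand-built proxy artists.
handles, legend_labels = scatter.legend_elements()
ax.legend(handles, legend_labels, title="class")

# Secondary axis: show the same x data in degrees along the top.
secax = ax.secondary_xaxis("top", functions=(np.degrees, np.radians))
secax.set_xlabel("angle [degrees]")

ax.set_xlabel("angle [radians]")
plt.show()
```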


Google AI engineers introduce Translatotron, an end-to-end speech-to-speech translation model

Amrata Joshi
17 May 2019
3 min read
Just two days ago, the research team at Google AI introduced Translatotron, an end-to-end speech-to-speech translation model. In their research paper, “Direct speech-to-speech translation with a sequence-to-sequence model,” they demonstrated Translatotron and found that the model achieves high translation quality on two Spanish-to-English datasets.

Speech-to-speech translation systems have usually been broken into three separate components:

Automatic speech recognition: used to transcribe the source speech as text.
Machine translation: used to translate the transcribed text into the target language.
Text-to-speech synthesis (TTS): used to generate speech in the target language from the translated text.

Dividing the task this way has worked successfully and has powered many commercial speech-to-speech translation products, including Google Translate. In 2016, many engineers and researchers recognized the need for end-to-end models for speech translation, when researchers demonstrated the feasibility of using a single sequence-to-sequence model for speech-to-text translation. In 2017, the Google AI team demonstrated that such end-to-end models can outperform cascade models, and recently many approaches for improving end-to-end speech-to-text translation models have been proposed.

Translatotron demonstrates that a single sequence-to-sequence model can directly translate speech from one language into another, without relying on an intermediate text representation in either language, as cascaded systems require. It is based on a sequence-to-sequence network that takes source spectrograms as input and generates spectrograms of the translated content in the target language. Translatotron also makes use of two separately trained components: a neural vocoder that converts output spectrograms to time-domain waveforms, and a speaker encoder that is used to maintain the source speaker’s voice in the synthesized translated speech. The sequence-to-sequence model uses a multitask objective to predict source and target transcripts while generating target spectrograms during training; during inference, no transcripts or other intermediate text representations are used.

The engineers at Google AI validated Translatotron’s translation quality by measuring the BLEU (bilingual evaluation understudy) score, computed on text transcribed by a speech recognition system (a hedged example of this metric follows below). The results still lag behind a conventional cascade system, but the engineers have demonstrated the feasibility of end-to-end direct speech-to-speech translation.

Translatotron can retain the original speaker’s vocal characteristics in the translated speech by incorporating a speaker encoder network, which makes the translated speech sound more natural and less jarring. According to the Google AI team, Translatotron gives more accurate translation than the baseline cascade model while retaining the original speaker’s vocal characteristics. The engineers concluded that Translatotron is the first end-to-end model that can directly translate speech from one language into speech in another language and retain the source speaker’s voice in the translated speech. To know more about this news, check out the blog post by Google AI.

Google News Initiative partners with Google AI to help ‘deep fake’ audio detection research
Google AI releases Cirq and Open Fermion-Cirq to boost Quantum computation
Google’s Cloud Healthcare API is now available in beta
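For readers unfamiliar with the evaluation above: BLEU compares system output against reference translations by n-gram overlap. A minimal sketch using the sacrebleu library, with invented ASR transcripts standing in for Translatotron output, might look like this; none of the text below comes from the paper.

```python
# Hedged sketch of the evaluation described above: BLEU computed on
# text transcribed from translated speech. The transcripts below are
# invented placeholders, not data from the Translatotron paper.
import sacrebleu

# What an ASR system might transcribe from the translated audio.
hypotheses = [
    "the weather is nice today",
    "i would like a cup of coffee",
]
# Reference English translations of the Spanish source utterances.
references = [
    "the weather is nice today",
    "i want a cup of coffee",
]

# sacrebleu expects a list of hypotheses and a list of reference sets.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.1f}")
```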


Did you know hackers could hijack aeroplane systems by spoofing radio signals?

Amrata Joshi
17 May 2019
4 min read
According to a recent research paper and demonstration from researchers at Northeastern University in Boston, hackers can hijack the systems used to guide aeroplanes by spoofing and compromising the radio signals used during landing. Using a $600 software-defined radio, the researchers can spoof airport signals and cause a pilot’s navigation instruments to falsely indicate that a plane is off course.

Attackers can send a signal that causes a pilot’s course deviation indicator to show that a plane is slightly too far to the left of the runway, even when the plane is perfectly aligned. The pilot will react by guiding the plane to the right and inadvertently steer over the centerline. The spoofed signals can also be used to indicate that a plane’s angle of descent is more gradual than it actually is. A spoofed message can likewise generate a “fly down” signal that instructs the pilot to steepen the angle of descent, possibly causing the aircraft to touch the ground before reaching the start of the runway.

In the paper, the researchers investigate and demonstrate the vulnerability of aircraft instrument landing systems to wireless attacks. They analyze the instrument landing system (ILS) waveforms and show the feasibility of spoofing the radio signals. This could lead to last-minute go-around decisions and, in the worst case, to missing the landing zone entirely in low-visibility scenarios.

The researchers first show that it is possible to fully and finely control the course deviation indicator, as displayed by the ILS receiver, in real time, and demonstrate this on aviation-grade ILS receivers. They also analyze the potential of both an overshadowing attack and a lower-power single-tone attack.

Note: The overshadowing attack involves sending specific ILS signals at a high power level to overpower the legitimate ILS signals (see the sketch below for a rough sense of the power budget involved). The single-tone attack interferes with a legitimate ILS signal through the transmission of a lower-power frequency tone that alters the plane's course deviation indicator needle.

To evaluate the complete attack, the researchers developed a tightly controlled closed-loop ILS spoofer. The spoofer adjusts the adversary’s transmitted signals as a function of the aircraft's GPS location, maintaining power and keeping the deviation consistent with the adversary’s target position, causing an undetected off-runway landing. They also demonstrated the integrated attack on an FAA (Federal Aviation Administration) certified flight simulator (X-Plane) by incorporating a spoofing-region detection mechanism, which triggers the controlled spoofing on entering the landing zone to reduce detectability. The researchers evaluated the performance of the attack against X-Plane’s AI-based autoland feature and demonstrated a systematic success rate, with offset touchdowns of 18 meters to over 50 meters.

For both attacks, the researchers generated specially crafted radio signals, similar to the legitimate ILS signals, using a low-cost software-defined radio hardware platform. They successfully induced aviation-grade ILS receivers, in real time, to lock onto and display arbitrary alignment for both the horizontal and vertical approach paths.

This also demonstrates the potential for an adversary to trigger multiple aborted landings, causing air traffic disruption, or to make an aircraft overshoot the landing zone or miss the runway entirely. The researchers then discuss potential countermeasures, including failsafe systems such as GPS, and show that these systems do not provide sufficient security guarantees either. They also highlight that implementing cryptographic authentication on ILS signals is not enough, as the system would remain vulnerable to record-and-replay attacks. The researchers therefore point to the open research challenge of building secure, scalable, and efficient aircraft landing systems. To know more about this, check out the research paper.

Researchers from China introduced two novel modules to address challenges in multi-person pose estimation
AI can now help speak your mind: UC researchers introduce a neural decoder that translates brain signals to natural sounding speech
OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence
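To see why overshadowing is plausible with a $600 radio, consider a back-of-the-envelope free-space path-loss calculation: a low-power transmitter close to the aircraft can deliver more power at the receiver than a much stronger transmitter at the far end of the approach. All transmit powers and distances below are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope illustration of the overshadowing attack
# described above. All transmit powers and distances are invented
# assumptions, not values from the Northeastern paper.
import math

def received_dbm(tx_watts, distance_m, freq_hz=110e6):
    """Received power under the free-space path-loss model."""
    tx_dbm = 10 * math.log10(tx_watts * 1000)
    fspl_db = (20 * math.log10(distance_m)
               + 20 * math.log10(freq_hz)
               - 147.55)  # constant term: 20*log10(4*pi/c)
    return tx_dbm - fspl_db

# Legitimate ILS localizer: assume ~25 W, 10 km from the aircraft.
legit = received_dbm(25, 10_000)
# Attacker's SDR: assume ~1 W, 500 m from the aircraft.
spoof = received_dbm(1, 500)

print(f"legitimate signal: {legit:6.1f} dBm")
print(f"spoofed signal:    {spoof:6.1f} dBm")
print(f"spoofer advantage: {spoof - legit:6.1f} dB")
```

Under these assumed numbers, the nearby 1 W spoofer arrives roughly 12 dB stronger than the distant 25 W localizer, which is the intuition behind overpowering the legitimate signal.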

After refusing to sign the Christchurch Call to fight online extremism, Trump admin launches tool to defend “free speech” on social media platforms

Fatema Patrawala
16 May 2019
5 min read
The Trump administration on Wednesday launched a new tool where US citizens can complain about social media bias; indirectly, it is a platform for conservatives to “share their story.” The White House launched the tool just hours after it broke with more than a dozen world leaders and top technology companies in an international call to action around the rise of online extremism on social platforms.

Over the past few months, Republicans have taken aim at social media networks, citing claims that conservatives have been wrongly censored on these platforms. In a recent poll, 83 percent of Republicans thought the tech companies were biased against conservatives. Committees like the House Energy and Commerce Committee and the Senate Judiciary Committee have even held hearings on the issue, where lawmakers questioned officials from companies like Facebook and Twitter over the alleged bias. The outrage started last April, when the House Judiciary Committee invited pro-Trump online personalities Diamond and Silk to discuss being “censored” on social media. And this year, in the wake of real-world hate crimes and violent terror attacks, Facebook banned six extremist accounts and a conspiracy theory organization.

Additionally, last month it was reported that President Trump met with Twitter founder and CEO Jack Dorsey. Twitter representatives said that the meeting was supposed to discuss the health of the platform, but it was later reported that Trump spent a significant portion of their 30-minute discussion complaining that he was losing followers on Twitter. Other members of the Trump family, like Don Jr., also voiced concern about the deplatforming of right-wing activists. After Facebook announced that it would ban conspiracy theorist Alex Jones along with other extremist accounts, Trump’s eldest son tweeted, “The purposeful & calculated silencing of conservatives on Facebook & the rest of the Big Tech monopoly men should terrify everyone.”

https://twitter.com/DonaldJTrumpJr/status/1124339494616993792

When Vice reported on an all-hands meeting held on March 22 at Twitter, it stated that an employee asked, “Twitter has largely eradicated Islamic State propaganda off its platform. Why can’t it do the same for white supremacist content?” A Twitter executive who works on machine learning and artificial intelligence responded that such algorithms could be implemented, but that aggressively removing white supremacist material would also remove content from some Republican politicians.

The White House says the tool, which is hosted on Typeform, is meant to help people share stories about ways they were unfairly targeted by social platforms for free speech. But the online form where users can submit requests also appears to be an email collection mechanism.

https://twitter.com/WhiteHouse/status/1128765001223663617

The form begins by asking users to submit basic information about themselves, like their first and last names. It then asks users if they are US citizens or permanent residents. If a user clicks "yes," the form continues. If a user clicks "no," a screen pops up saying: "Unfortunately, we can't gather your response through this form. Please feel free to contact us at WhiteHouse.gov/contact." This means immigrants will not be able to submit their views. There is also the risk of the US government gathering such information for the purpose of deportation.

If users click yes, the tool asks them to select which platform they've experienced bias on: Facebook, Instagram, Twitter, YouTube, or Other. It asks users to link to the suspected post and upload a screenshot from the platform, if applicable, of the rule violation notification.

Critics were quick to point out that the online form is not very sophisticated and could be easily gamed. For example, the "captcha" used at the end of the survey to determine whether the respondent is a bot asks users to type the year the Declaration of Independence was signed. "I tried it with '1945,' it cleared it. You just need to type four numbers," tweeted Quentin Hardy, head of editorial at Google Cloud.

The form also asks whether you want to be added to the administration's mailing list. "We want to keep you posted on President Trump's fight for free speech," the form states after a few questions. "Can we add you to our email newsletters so we can update you without relying on platforms like Facebook and Twitter?"

The move is yet another example of the administration choosing to gather personal information from US citizens, promoting hate and bigotry under the veil of “free speech,” and unfairly excluding migrant voices from political discourse.

https://twitter.com/RMac18/status/1128791345898745856
https://twitter.com/rob_sheridan/status/1128784373895974912

Twitter launches a new ‘search prompt’ feature to help users find credible sources about vaccines
U.S. Supreme Court ruled 5-4 against Apple on its App Store monopoly case
Facebook bans six toxic extremist accounts and a conspiracy theory organization


Microsoft open sources SPTAG algorithm to make Bing smarter!

Amrata Joshi
16 May 2019
3 min read
Yesterday, Microsoft announced that it has open-sourced an algorithm called Space Partition Tree And Graph (SPTAG), which helps the Bing search engine quickly return search results. The algorithm allows users to take advantage of the intelligence of deep learning models to search through billions of pieces of information, called vectors, in milliseconds.

Machine-learning algorithms help search engines deliver the best answers by building vectors: long lists of numbers that represent input data, whether text on a webpage, images, sound, or videos. With vector search, it becomes easier to search by concept rather than by keyword. For example, if a user types in “How tall is the tower in Paris?”, Bing can return a natural language result telling the user the Eiffel Tower is 1,063 feet, even though the word “Eiffel” never appeared in the search query and the word “tall” never appears in the result.

Bing captures billions of vectors for all the different kinds of media it indexes, and Microsoft uses SPTAG to search these vectors. In this process, the team first took a pre-trained model and encoded the data into vectors, where each vector represents a word or pixel. With the SPTAG library, which is at the core of the open-sourced Python library, it was possible to generate a vector index. When queries come in, the deep learning model translates the text or image into a vector and the library finds the most related vectors in that index. In other words, when an input query is converted into a vector, SPTAG quickly finds its "approximate nearest neighbors" (ANN), i.e., the vectors most similar to the input (a toy illustration of this idea follows below).

The SPTAG library is now available under the MIT license and provides all of the tools for building and searching distributed vector indexes. According to the Microsoft team, the vectorizing effort has extended to over 150 billion pieces of data with Bing search, which brings an improvement over traditional keyword matching.

Jeffrey Zhu, program manager on Microsoft’s Bing team, said, “Bing processes billions of documents every day, and the idea now is that we can represent these entries as vectors and search through this giant index of 100 billion-plus vectors to find the most related results in 5 milliseconds.”

Microsoft’s official blog reads, “Only a few years ago, web search was simple. Users typed a few words and waded through pages of results. Today, those same users may instead snap a picture on a phone and drop it into a search box or use an intelligent assistant to ask a question without physically touching a device at all. They may also type a question and expect an actual reply, not a list of pages with likely answers.”

The Bing team expects that the algorithm could be used in enterprise or consumer-facing applications, for example to identify a language being spoken based on an audio snippet, or in image-heavy services such as an app that lets people take pictures of flowers and identifies what type of flower it is. There seem to be endless possibilities for this algorithm when fused with the vector concept! To know more about this news, check out Microsoft’s blog post.
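To make the nearest-neighbor idea concrete, here is a toy brute-force vector search in NumPy. This is an illustration of the concept only: SPTAG's trees and graphs exist precisely to avoid this exhaustive scan at billion-vector scale, and its actual API differs from the sketch below.

```python
# Toy illustration of vector search as described above: embed items,
# then return the nearest vectors to a query. This brute-force scan is
# NOT how SPTAG works internally; SPTAG's tree and graph structures
# exist to avoid this O(n) comparison at billion-vector scale.
import numpy as np

rng = np.random.default_rng(0)
index = rng.normal(size=(10_000, 128))          # 10k fake embeddings
index /= np.linalg.norm(index, axis=1, keepdims=True)

def search(query, k=5):
    """Return indices of the k most similar vectors (cosine similarity)."""
    q = query / np.linalg.norm(query)
    scores = index @ q                           # one dot product per item
    return np.argsort(-scores)[:k]

query = rng.normal(size=128)                     # stand-in for an encoded query
print(search(query))                             # ids of the 5 nearest items
```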
#MSBuild2019: Microsoft launches new products to secure elections and political campaigns
Microsoft Build 2019: Introducing Windows Terminal, application packed with multiple tab opening, improved text and more
Microsoft Build 2019: Introducing WSL 2, the newest architecture for the Windows Subsystem for Linux


Facebook tightens rules around live streaming in response to the Christchurch terror attack

Vincy Davis
15 May 2019
3 min read
After the recent Christchurch terrorist attack in New Zealand, tech companies scrambled to act in time because of the speed and volume of content that was uploaded, reuploaded, and shared by users worldwide. Facebook received severe global pressure to ‘restrict’ the use of Facebook Live, considering the shootings were live streamed on its app. Following this pressure, Facebook has now decided to impose restrictions on its live streaming feature. Yesterday, in a statement, Facebook declared that from now on it will start restricting users from using Facebook Live if they break certain rules, including its Dangerous Organizations and Individuals policy.

What is the restriction?

Facebook calls this restriction a ‘one strike’ policy that tightens the rules applying specifically to Live. Anybody who violates serious policies, such as those covering violence and criminal behavior or coordinating harm, will be restricted from using Live for a set period of time, for example 30 days, starting from their first offense. A user who shares a link to a statement from a terrorist group with no context will be immediately blocked from using Live for a set period of time. These restrictions will eventually be extended to other areas of Facebook, like creating ads.

The Facebook announcement comes on the eve of a meeting hosted by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron in Paris. The meeting is being held to confirm the "Christchurch Call" pledge, which will ask participants to eliminate terrorist and violent extremist content on social media and other online platforms. Its main aim is to bring in stricter laws that commit social media firms to keeping terrorism and violent extremism off their platforms. Per a report by Stuff, Ardern has described the crackdown by Facebook on the abuse of its live streaming service as a good first step "that shows the Christchurch Call is being acted on." Last month, Australia introduced hefty fines and even jail time for executives at social media companies who fail to remove violent content quickly; the new legislation can also fine companies up to 10 percent of their annual revenue.

Other steps taken by Facebook

One of the main challenges Facebook faced after the Christchurch attack was removing edited versions of the video of the attack, which were hard to detect. For this, Facebook is investing $7.5 million in research in partnership with the University of Maryland, Cornell University, and the University of California, Berkeley. The aim is to research new techniques to:

Detect manipulated media across images, video, and audio.
Distinguish between unwitting posters and adversaries who intentionally manipulate videos and photographs.

Facebook also hopes to add other research partners to the initiative, which is focused on combating deepfake videos. To read the full statement, head over to the Facebook newsroom website.

How social media enabled and amplified the Christchurch terrorist attack
Facebook bans six toxic extremist accounts and a conspiracy theory organization


Twitter launches a new ‘search prompt’ feature to help users find credible sources about vaccines

Vincy Davis
15 May 2019
3 min read
Facebook, Twitter, and other social media firms are facing increasing pressure from lawmakers and the public to remove anti-vaccination propaganda from their platforms. ‘Vaccine hesitancy’ has been listed among the top 10 threats to global health in 2019 by the World Health Organisation (WHO). Following this, last week Twitter launched a new ‘search prompt’ feature to prevent misinformation about vaccines: if a user searches for vaccines on Twitter, a prompt will lead them to a credible public health resource offering information about vaccines from authoritative sources.

Twitter’s Vice President of Trust and Safety, Del Harvey, announced the new feature on her blog: “We’re committed to protecting the health of the public conversation on Twitter — ensuring individuals can find information from authoritative sources is a key part of that mission.”

With this tool, if a user searches for any keyword related to vaccines, a prompt appears in their feed directing them to a credible public health resource. In the U.S., that resource is Vaccines.gov, a website run by the Department of Health and Human Services. Twitter will also not auto-suggest queries that are likely to direct individuals to non-credible commentary and information about vaccines.

Image source: Del Harvey’s Blog

The search prompt feature will be available on iOS, Android, and mobile.twitter.com in the United States (in English and Spanish), Canada (in English and French), the UK, Brazil, Korea, Japan, Indonesia, Singapore, and Spanish-speaking Latin American countries. The new initiative will enable Twitter to guard users against the artificial amplification of non-credible content about the safety and effectiveness of vaccines. Twitter also ensures that its advertising content does not contain misleading claims about the cure, treatment, diagnosis, or prevention of certain diseases and conditions, including via vaccines.

Twitter’s new ‘search prompt’ is an extension of Twitter’s #ThereIsHelp initiative, in which a user who searches for terms associated with suicide or self-harm sees, as the top search result, a prompt encouraging them to reach out for help.

These new features from social media giants come after reports of anti-vaccine groups using social media to target parents with misinformation. Pinterest was the first to take a strong stand against the spread of misinformation related to vaccines: in February, it blocked all vaccination-related searches, as the majority of shared images on Pinterest cautioned people against vaccinations. The same month, YouTube started demonetizing channels that promoted anti-vaccination views; it also began placing a new information panel linking to the Wikipedia entry on “vaccine hesitancy” before anti-vaccination videos. Two months ago, Facebook, in its effort to minimize the spread of vaccination misinformation and point users away from inaccurate anti-vaccination propaganda, started downranking groups and pages that spread this kind of content across both News Feed and its search function, and began rejecting ads promoting anti-vaccination misinformation. Instagram began blocking hashtags that return anti-vaccination misinformation earlier this month.

WhatsApp limits users to five text forwards to fight against fake news and misinformation
Twitter launches a new reporting feature that allows users to flag tweets about voting that may mislead voters
Dorsey meets Trump privately to discuss how to make public conversation “healthier and more civil” on Twitter

San Francisco Board of Supervisors vote in favour to ban use of facial recognition tech in city

Amrata Joshi
15 May 2019
3 min read
In January, San Francisco legislators proposed a ban on the use of facial recognition technology by the government. The ban applies to government agencies, including the city police and the county sheriff’s department, but excludes technology that unlocks iPhones or cameras installed by businesses or individuals. This month, San Francisco Supervisor Aaron Peskin introduced the Stop Secret Surveillance Ordinance, and yesterday it was reported that the Board of Supervisors voted in favor of the ban on the use of facial recognition by city agencies.

https://twitter.com/UberFacts/status/1128454197324800000
https://twitter.com/SarahNEmerson/status/1128424297003868160

Northern California’s Matt Cagle and Brian Hofer, chair of Oakland’s Privacy Advisory Commission, came out in support of the ordinance, writing in an op-ed last week: “If unleashed, face surveillance would suppress civic engagement, compound discriminatory policing, and fundamentally change how we exist in public spaces.”

https://twitter.com/Matt_Cagle/status/1128418575159418880

The proposal faced opposition from a few quarters. A local group named Stop Crime SF argued that a ban might hamper efforts against property crime and might also affect the collection and presentation of evidence of crime. The vice president of Stop Crime SF, Joel Engardio, nevertheless seems satisfied with the amended bill. In a statement to Wired, he says, “We agree with the concerns that people have about facial ID technology. The technology is bad and needs a lot of improvement.”

This move could influence the use of the technology around the world and motivate other cities to adopt similar measures. Last month, the Oakland Privacy Advisory Commission released two key documents as part of an initiative to protect Oaklanders’ privacy: a proposed ban on facial recognition and the City of Oakland Privacy Principles.

Techies and developers of facial recognition systems have expressed concern, arguing that strict rules and oversight would be better than a ban. Benji Hutchinson, vice president of federal operations for NEC, a major supplier of facial-recognition technology, says, “I think there’s a little bit too much fear and loathing in the land around facial-recognition technology.” In a statement to Wired, Daniel Castro, vice president of the Information Technology and Innovation Foundation, argued for safeguards on the use of the technology rather than prohibitions; he calls the ban a “step backward for privacy,” as it will leave more people reviewing surveillance video. In the board meeting, though, Peskin said, “I want to be clear — this is not an anti-technology policy.” He further clarified that the ordinance is also an accountability measure intended to ensure the safe and responsible use of surveillance tech.

Update from the ACLU on 21st May

San Francisco's ban on the government's use of facial recognition technology is now official. Yesterday, Matt Cagle tweeted that San Francisco approved the ban by a vote of 10 to 1.

https://twitter.com/Matt_Cagle/status/1130947088605298688

Amazon finally agrees to let shareholders vote on selling facial recognition software
Oakland Privacy Advisory Commission lay out privacy principles for Oaklanders and propose ban on facial recognition
China is using facial recognition tech to profile 11 million Uighurs Muslim minority: NYT report


Facebook again caught tracking Stack Overflow user activity and data

Amrata Joshi
14 May 2019
3 min read
Facebook has been in the news repeatedly over its ethics and data privacy issues. From the Cambridge Analytica scandal to multiple hearings and fines against the company, Facebook has been surrounded by these controversies for quite some time now. Lately, the Canadian and British Columbia privacy commissioners decided to take Facebook to Federal Court over its privacy practices. And once again, the company makes headlines, this time for tracking users across Stack Overflow.

To explain this better: Stack Overflow directly hotlinks to Facebook profile pictures. You might wonder what the big deal is, since many third-party platforms allow such tracking. The catch is that this linking unintentionally allows user activity throughout Stack Exchange to be tracked by Facebook, and it even tells Facebook which topics you are interested in! To explain further, here is an example from a Stack Overflow user.

Image source: Stack Overflow

The user says, “Have a look: when I load a page containing any avatars hot-linked from Facebook, my browser automatically sends a request including a Facebook identifying cookie and the URL of the page I'm viewing on Stack Exchange. They don't just know that I'm visiting the site, they also get to know which topics I'm interested on throughout the network.”

Another user commented on the thread, “Facebook creates 'shadow' accounts for many people who don't have actual accounts (or at least, for people they can't find an actual account for) in order to consistently/reliably track/gather data to sell.”

A few others complained about their profile pictures being attributed directly to facebook.com domains. The browser is essentially making a request to Facebook in which a Facebook session cookie identifies the user, along with a Referer header that tells Facebook which page the user was on when the image loaded (the sketch below reconstructs such a request).

How to save yourself from such creepy tracking by Facebook?

Many users have suggested choosing which cookies to accept on each site you visit, blocking third-party cookies, and setting the browser to remove cookies on close; manually removing cookies when quitting a browser is also advisable. A few others have suggested using an ad blocker, which will keep users away from fishy requests, and enabling Strict Content Blocking in Firefox.

The larger concern is that other tech companies may likewise be collecting and manipulating user data, playing fast and loose with our privacy. Just a few years ago, Google was trying to patent the collection of user data. It is striking how the world is changing around us: we are forced to live in an era in which the tech giants are data minded.

To know more about this news, check out the Stack Overflow thread.

Facebook bans six toxic extremist accounts and a conspiracy theory organization
Facebook open-sources F14 algorithm for faster and memory-efficient hash tables
Facebook shareholders back a proposal to oust Mark Zuckerberg as the board’s chairperson
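To reconstruct what the quoted user is describing, here is a hedged sketch of the request a browser effectively makes when it loads a hotlinked avatar. The URL and cookie values are invented placeholders; the point is simply that the Cookie and Referer headers travel together in a single request.

```python
# Hedged reconstruction of the tracking request described above.
# The avatar URL and cookie values are invented placeholders; the point
# is that the browser sends Facebook's session cookie together with a
# Referer header naming the Stack Exchange page being viewed.
import requests

avatar_url = "https://graph.facebook.com/someuser/picture"  # placeholder
response = requests.get(
    avatar_url,
    headers={
        # Identifies the logged-in Facebook user (placeholder values).
        "Cookie": "c_user=1234567890; xs=opaque-session-token",
        # Tells Facebook exactly which page embedded the avatar.
        "Referer": "https://stackoverflow.com/questions/12345/some-topic",
    },
)
print(response.status_code)
```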


Facebook files a lawsuit against South Korean data analytics firm, Rankwave, for unlawful data use amidst high profile calls to “break it up”

Savia Lobo
13 May 2019
7 min read
On Friday, Facebook revealed that it has filed a lawsuit against a South Korean data analytics firm, Rankwave claiming that it is unlawfully using its app data for personal marketing and advertising while not adhering to Facebook’s data policies. Facebook further stated that Rankwave failed to cooperate with the compliance audit, which Facebook says it requires from all developers using their platform. The lawsuit was filed in a California superior court in San Mateo County claims that Rankwave operated minimum 30 apps through Facebook’s platform and used “Facebook data in order to market and sell its own services, specifically tools used by various customers and businesses to track Facebook interactions such as likes and comments on their pages”. Rankwave also apparently misused data taken in by its own consumer app, called “Rankwave App”, for checking one’s social media ‘influencer score’. The app “could pull data about your Facebook activity such as location check-ins, determine that you’ve checked into a baseball stadium, and then Rankwave could help its clients target you with ads for baseball tickets”, TechCrunch reports. The lawsuit also mentions that the RankWave App stopped operating on the Facebook Platform around about March 30, 2018. On January 17, 2019, Facebook sent a written request for information (“RFI”) to Rankwave that requested proof that Rankwave was in compliance with its contractual obligations under Facebook’s Policies and TOS. Moreover, they also wanted to determine the Facebook data Rankwave were used to sell advertising and marketing, including whether any user data had been impacted. Rankwave did not respond to Facebook’s RFI, nor to an email, which reminded them that their response to the RFI was due on January 31, 2019. On February 13, 2019, Facebook sent Rankwave a Cease and desist letter (C&D Letter) which informed Rankwave that it had violated and continued to violate the platform policies, including Policy 7.9, by failing to provide proof of compliance with Facebook’s Platform Policies and TOS. Facebook Platform Policy, Section 7.9 states: “[Facebook] or an independent auditor acting on our behalf may audit your app, systems, and records to ensure your use of Platform and data you receive from us is safe and complies with our Terms, and that you've complied with our requests and requests from people who use Facebook to delete user data obtained through our Platform. If requested, you must provide proof that your app complies with our terms.” According to the lawsuit, in an email response on February 19, 2019, Rankwave ignored the demands in the C&D letter, including the audit request. It also claimed that it had not had access to any of its Facebook apps since 2018. Jessica Romero, Facebook’s Director of Platform Enforcement and Litigation, writes, “By filing the lawsuit, we are sending a message to developers that Facebook is serious about enforcing our policies, including requiring developers to cooperate with us during an investigation.” According to TechCrunch, “Rankwave came into Facebook’s crosshairs in June 2018 after it was sold to a Korean entertainment company in May 2017. Facebook assesses that the value of its data at the time of the buyout was $9.8 million. 
Worryingly, Facebook didn’t reach out to Rankwave until January 2019 for information proving it complied with the social network’s policies.” “Now Facebook is seeking money to cover the $9.8 million value of the data, additional monetary damages, and legal fees, plus injunctive relief restraining Rankwave from accessing the Facebook Platform, requiring it to comply with Facebook’s audit, requiring that it delete all Facebook data”, TechCrunch further added. Many are speculating this incident to the Cambridge Analytics scandal that abused private Facebook data in order to inform political campaigning efforts, leading the social media firm into a huge crisis. On Friday, Facebook co-founder and chief executive Mark Zuckerberg met French President Emmanuel Macron in Paris to discuss potential regulation of social networks. "We need new rules for the internet that will spell out the responsibilities of companies and those of governments," Mr. Zuckerberg told French TV channel France 2 after the meeting. One of the users on HackerNews writes, “My prediction after the Cambridge Analytica scandal broke is that it would lead to an explosion in wealthy people who want to play at noopolitics. I suspect they have dozens of CA's on their hands currently. At the very least, if not hundreds. The key takeaway that some people will have had from Cambridge Analytica, is not 'they got caught, don't do this', but rather 'they were largely successful and incredibly cheap'.” “The upshot from having lots of players in this space, however, is not one of greater control by insidious power addicts, but rather a loss of control as the players compete for attention and influence. So, chaos in the news and the elimination of any kind of consistent narrative from on high. I think we have been experiencing this for a while now. In some ways, it is almost an improvement”, the user further added. Facebook responds to Chris Hughes’ “It’s time to break up Facebook” Facebook co-founder Chris Hughes recently wrote an opinion piece in The New York Times that said, the company should be broken up. “Hughes stated that CEO Mark Zuckerberg’s “focus on growth led him to sacrifice security and civility for clicks,” and that he should be held accountable for his company’s mistakes” Nick Clegg, Facebook’s Vice President for global affairs and communications, in his response to Hughes’ thoughts states, “what matters is not size but rather the rights and interests of consumers, and our accountability to the governments and legislators who oversee commerce and communications.” Clegg, in his article, highlights on various achievements by Facebook, the key areas that FB is planning to concentrate on, and the misunderstanding associated with the company. He mentions, “The first misunderstanding is about Facebook itself and the competitive dynamics in which we operate.” The other one he mentions is that of antitrust laws. “Over the past two years we’ve focused heavily on blocking foreign adversaries from trying to influence democratic elections by using our platforms. We’ve done the same to protect against terrorism and hate speech and to better safeguard people’s data”, Clegg writes. 
Zuckerberg also responded to Hughes, in a TV interview with France Info while in Paris to meet with French President Emmanuel Macron: "When I read what he wrote, my main reaction was that what he's proposing that we do isn't going to do anything to help solve those issues." He further added, "So I think that if what you care about is democracy and elections, then you want a company like us to be able to invest billions of dollars per year like we are in building up really advanced tools to fight election interference."

A user on HackerNews writes, "Mr. Clegg starts his opinion piece with a nirvana fallacy: breaking up Facebook won't solve all the world's problems, so why bother? More appeals are made throughout to Facebook's large user-base, as justification for continued market dominance. Yet Mr. Clegg claims anti-trust laws do not apply to Facebook, since those laws are to ensure "low-cost, high-quality products" - and since Facebook is free, they're immune from such rules. He proudly denigrates and defies the laws simply because they were "developed in the 1800s", which is an outright disgrace. I find his arguments wholly unedifying and severely lacking in substance and creativity. That pro-Facebook propaganda by their PR head is even deemed worthy of publishing (in NYTimes of all places!) is frankly a disappointment."

https://twitter.com/ewarren/status/1126493176406081537
https://twitter.com/SenSanders/status/1126848277083717633

To know more about this news in detail, head over to Nick Clegg's post in The New York Times.

Facebook bans six toxic extremist accounts and a conspiracy theory organization
Facebook open-sources F14 algorithm for faster and memory-efficient hash tables
New York AG opens investigation against Facebook as Canada decides to take Facebook to Federal Court for repeated user privacy violations
Researchers from China introduced two novel modules to address challenges in multi-person pose estimation

Amrata Joshi
13 May 2019
4 min read
One of the major challenges in computer vision is multi-person pose estimation. Though a few of the currently used approaches have achieved significant progress by fusing multi-scale feature maps, little attention is paid to enhancing the channel-wise and spatial information of those feature maps. Last week, researchers from Southeast University, Nanjing, China proposed a paper, Multi-Person Pose Estimation with Enhanced Channel-wise and Spatial Information, in which they introduce two novel modules for enhancing this information for multi-person pose estimation.

Firstly, the researchers propose a Channel Shuffle Module (CSM) that adopts the channel shuffle operation on feature maps at different levels, promoting cross-channel information communication among the feature maps. Secondly, they design a Spatial, Channel-wise Attention Residual Bottleneck (SCARB) to boost the original residual unit with an attention mechanism, further highlighting the information of the feature maps in both the spatial and channel-wise context. Lastly, they evaluate the effectiveness of the proposed modules on the COCO keypoint benchmark, and experimental results show that their approach achieves state-of-the-art results.

Here we discuss the modules introduced by the researchers in detail.

Modules proposed by the researchers

Channel Shuffle Module (CSM)

The levels of the feature maps are enriched by the depth of the layers in deep convolutional neural networks, and many visual tasks have made major improvements as a result. However, in the case of multi-person pose estimation, there are still limitations in the trade-off between the low-level and high-level feature maps. Since channel information with different characteristics can complement and reinforce each other, the researchers propose the Channel Shuffle Module (CSM) to further exploit the interdependencies between the low-level and high-level feature maps.

Image Source: Multi-Person Pose Estimation with Enhanced Channel-wise and Spatial Information

Assume that the pyramid features extracted from the ResNet backbone are denoted as Conv-2∼5 (as shown in the figure). Conv-3∼5 are first upsampled to the same resolution as Conv-2, and these feature maps are then concatenated together. The channel shuffle operation is performed on the concatenated features to fuse the complementary channel information among the different levels. The shuffled features are then split and downsampled back to their original resolutions, denoted as C-Conv-2∼5.

Next, the researchers perform a 1×1 convolution to further fuse C-Conv-2∼5 and obtain the shuffled features denoted as S-Conv-2∼5. Finally, they concatenate the shuffled feature maps S-Conv-2∼5 with the original pyramid feature maps Conv-2∼5 to achieve the final enhanced pyramid feature representations. These enhanced pyramid feature maps contain the information from the original pyramid features as well as fused cross-channel information from the shuffled pyramid feature maps. A hedged code sketch of this data flow follows.
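The paper is not accompanied by reference code in this article, so the following is a minimal PyTorch sketch of the CSM idea under stated assumptions: four pyramid levels of 256 channels each, bilinear resampling, and a shuffle group count of 4. The class and function names, channel widths, and group count are all illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import List

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """ShuffleNet-style channel shuffle: interleave channels across groups."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w).transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class ChannelShuffleModule(nn.Module):
    """Hypothetical sketch of the CSM data flow; widths and group count
    are assumptions, not values from the paper."""

    def __init__(self, channels: int = 256, levels: int = 4, groups: int = 4):
        super().__init__()
        self.groups = groups
        # One 1x1 convolution per level to fuse the re-split features
        # (producing what the paper calls S-Conv-2~5).
        self.fuse = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=1) for _ in range(levels)]
        )

    def forward(self, feats: List[torch.Tensor]) -> List[torch.Tensor]:
        # 1) Upsample Conv-3~5 to Conv-2's resolution and concatenate.
        base = feats[0].shape[-2:]
        stacked = torch.cat(
            [feats[0]] + [F.interpolate(f, size=base, mode="bilinear",
                                        align_corners=False) for f in feats[1:]],
            dim=1,
        )
        # 2) Shuffle channels so information mixes across pyramid levels.
        shuffled = channel_shuffle(stacked, self.groups)
        # 3) Split per level, downsample back (C-Conv-2~5), fuse with the
        #    1x1 convs, and concatenate with the original pyramid maps.
        outputs = []
        chunks = torch.chunk(shuffled, len(feats), dim=1)
        for f, chunk, conv in zip(feats, chunks, self.fuse):
            c = F.interpolate(chunk, size=f.shape[-2:], mode="bilinear",
                              align_corners=False)
            outputs.append(torch.cat([f, conv(c)], dim=1))
        return outputs

# Usage: four pyramid maps at strides 4/8/16/32, 256 channels each.
pyramid = [torch.randn(2, 256, 64, 48), torch.randn(2, 256, 32, 24),
           torch.randn(2, 256, 16, 12), torch.randn(2, 256, 8, 6)]
for level in ChannelShuffleModule()(pyramid):
    print(tuple(level.shape))  # each enhanced level carries 512 channels
```

Each output level doubles its channel count because the shuffled, fused features are concatenated onto the originals, matching the paper's description of combining S-Conv with Conv.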
Attention Residual Bottleneck (ARB)

Building on the enhanced pyramid feature representations described above, the researchers introduce the Attention Residual Bottleneck, which enhances the feature responses in both the spatial and channel-wise context.

Image Source: Multi-Person Pose Estimation with Enhanced Channel-wise and Spatial Information

The figure shows the schema of the original Residual Bottleneck alongside the Spatial, Channel-wise Attention Residual Bottleneck, which is composed of spatial attention and channel-wise attention; the dashed links indicate identity mappings. The ARB learns the spatial attention weights β and the channel-wise attention weights α, respectively. Attending over whole feature maps can lead to sub-optimal results because of irrelevant regions, whereas the spatial attention mechanism highlights the task-related regions in the feature maps. A hedged sketch of such an attention bottleneck appears at the end of this article.

Evaluating the models on the COCO keypoint benchmark

The team evaluates the models on the challenging COCO keypoint benchmark and trains them on the COCO dataset, which includes 57K images and 150K person instances, with no extra data involved. Ablation studies are validated on the COCO minival dataset, and the final results are reported on the COCO test-dev dataset, compared against the public state-of-the-art results. The team uses the official evaluation metric, which reports the OKS-based AP (average precision); the OKS (object keypoint similarity) defines the similarity between the ground-truth pose and the predicted pose.

In the Spatial, Channel-wise Attention Residual Bottleneck (SCARB) experiment, the team explores the effects of different implementation orders of the spatial attention and the channel-wise attention in the Attention Residual Bottleneck, i.e., SCARB and CSARB.

To know more about this news, check out the paper, Multi-Person Pose Estimation with Enhanced Channel-wise and Spatial Information.
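To make the attention bottleneck concrete, here is a minimal PyTorch sketch of a residual bottleneck gated first by a spatial attention map (β) and then by channel-wise attention (α), matching the SCARB ordering. This is an illustration under assumptions, not the authors' implementation: the layer widths, the sigmoid gating, and the squeeze-and-excitation form of the channel branch are all stand-ins.

```python
import torch
import torch.nn as nn

class SCARBSketch(nn.Module):
    """Illustrative residual bottleneck with spatial then channel-wise
    attention. Widths and gating are assumptions; see the paper for the
    authors' exact design."""

    def __init__(self, channels: int = 256, reduction: int = 16):
        super().__init__()
        mid = channels // 4
        # Standard 1x1 -> 3x3 -> 1x1 bottleneck branch.
        self.bottleneck = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1), nn.BatchNorm2d(channels),
        )
        # Spatial attention: one weight (beta) per location via a 1x1 conv.
        self.spatial = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        # Channel-wise attention: SE-style squeeze-and-excitation (alpha).
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.bottleneck(x)
        y = y * self.spatial(y)   # highlight task-related regions (beta)
        y = y * self.channel(y)   # reweight channels (alpha)
        return self.relu(x + y)   # identity mapping via the residual link

out = SCARBSketch()(torch.randn(2, 256, 32, 24))
print(out.shape)  # torch.Size([2, 256, 32, 24])
```

Swapping the order of the two gating lines in forward would give the CSARB variant that the ablation study compares against.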
AI can now help speak your mind: UC researchers introduce a neural decoder that translates brain signals to natural-sounding speech
OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence
Researchers propose a reinforcement learning method that can hack Google reCAPTCHA v3


PostgreSQL 12 progress update

Amrata Joshi
13 May 2019
2 min read
Last week, the team at PostgreSQL released a progress update for the eagerly awaited PostgreSQL 12. The update previews performance improvements as well as better server configuration, indexes, recovery parameters, and much more.

This article was updated 05.14.2019 to correct the fact that this was a progress update for PostgreSQL, not a software release.

What's going to be coming in PostgreSQL 12?

Performance

In PostgreSQL 12, Just-in-Time (JIT) compilation will be enabled by default. Memory consumption of COPY and function calls will be reduced, and search performance for multi-byte characters will be improved.

Server configuration

Updates to server configuration should add the ability to enable/disable cluster checksums using pg_checksums. They should also reduce the default value of autovacuum_vacuum_cost_delay to 2ms and allow time-based server variables to use microseconds.

Indexes in PostgreSQL 12

The speed of btree index insertions should be optimized. The new code should also improve the space-efficiency of page splits, reduce locking overhead, and give better performance for UPDATEs and DELETEs on indexes with many duplicates.

Recovery parameters

PostgreSQL 12 should also allow recovery parameters to be changed with a configuration reload rather than a restart. These parameters include archive_cleanup_command, promote_trigger_file, recovery_end_command, and recovery_min_apply_delay. The streaming replication timeout should become adjustable as well. A hedged example of working with a few of these settings appears at the end of this article.

OID columns

The special behavior of OID columns will likely be removed, but columns will still be able to be explicitly specified as type OID. Operations on tables that have columns named OID will need to be adjusted.

Data types

The data types abstime, reltime, and tinterval look as though they'll be removed from PostgreSQL 12.

Geometric functions

Geometric functions and operators will be refactored to produce better results than are currently available. The geometric types can be restructured to handle NaN, underflow, overflow, and division by zero.

To learn more about what's likely to be coming to PostgreSQL 12, check out the official announcement.
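As a rough illustration of how a few of the settings discussed above could be inspected and reloaded once PostgreSQL 12 ships, here is a minimal Python sketch using psycopg2. The DSN is a placeholder, the snippet assumes superuser rights on a running PostgreSQL 12 instance, and the exact parameter behavior may still change before the final release.

```python
import psycopg2

# Placeholder DSN: adjust host, database, and credentials for your setup.
conn = psycopg2.connect("host=localhost dbname=postgres user=postgres")
conn.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block
cur = conn.cursor()

# Inspect settings mentioned in the progress update. The names are real
# GUCs; the values you see will depend on your server version and config.
for name in ("jit", "autovacuum_vacuum_cost_delay", "recovery_min_apply_delay"):
    cur.execute("SELECT setting, unit FROM pg_settings WHERE name = %s", (name,))
    print(name, cur.fetchone())

# Recovery parameters should become changeable with a reload rather than a
# restart in PostgreSQL 12; such a change might then look like this:
cur.execute("ALTER SYSTEM SET recovery_min_apply_delay = '5min'")
cur.execute("SELECT pg_reload_conf()")

cur.close()
conn.close()
```

The pg_settings catalog and pg_reload_conf() already exist in current releases, so the same pattern can be used today to verify which parameters your server requires a restart for (see the context column of pg_settings).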
Building a scalable PostgreSQL solution
PostgreSQL security: a quick look at authentication best practices [Tutorial]
How to handle backup and recovery with PostgreSQL 11 [Tutorial]