
Tech News - Data

1208 Articles

Intel’s new brain inspired neuromorphic AI chip contains 8 million neurons, processes data 1K times faster

Fatema Patrawala
18 Jul 2019
5 min read
On Monday, Intel announced Pohoiki Beach, a neuromorphic system comprising 8 million neurons, multiple Nahuku boards, and 64 Loihi research chips. The Intel team unveiled the new system at the DARPA Electronics Resurgence Initiative Summit held in Detroit.

Intel introduced Loihi, its first brain-inspired neuromorphic research chip, in 2017. Loihi applies principles found in biological brains to computer architectures. It enables users to process information up to 1,000 times faster and 10,000 times more efficiently than CPUs for specialized applications like sparse coding, graph search, and constraint-satisfaction problems. With Pohoiki Beach, the broader research community can now experiment with Loihi.

“We are impressed with the early results demonstrated as we scale Loihi to create more powerful neuromorphic systems. Pohoiki Beach will now be available to more than 60 ecosystem partners, who will use this specialized system to solve complex, compute-intensive problems,” says Rich Uhlig, managing director of Intel Labs.

According to Intel, Pohoiki Beach will enable researchers to efficiently scale novel neural-inspired algorithms such as sparse coding, simultaneous localization and mapping (SLAM), and path planning. The system stands out because it demonstrates the benefits of a specialized architecture for emerging applications, including some of the computational problems hardest for the internet of things (IoT) and autonomous devices to support. By using this type of specialized system, as opposed to general-purpose computing technologies, Intel expects to realize orders-of-magnitude gains in speed and efficiency for a range of real-world applications, from autonomous vehicles to smart homes to cybersecurity.

Pohoiki Beach marks a major milestone in Intel’s neuromorphic research, laying the foundation for Intel Labs to scale the architecture to 100 million neurons later this year. Uhlig predicts the company will produce a system capable of simulating 100 million neurons by the end of 2019, which researchers will then be able to apply to a whole new set of applications, such as better control of robot arms.

Ars Technica writes that Loihi, the underlying chip in Pohoiki Beach, consists of 130,000 neuron analogs—hardware-wise, this is roughly equivalent to half of the neural capacity of a fruit fly. Pohoiki Beach scales that up to 8 million neurons—about the neural capacity of a zebrafish. But perhaps more interesting than the raw computational power of the new neural network is how well it scales.

“With the Loihi chip we’ve been able to demonstrate 109 times lower power consumption running a real-time deep learning benchmark compared to a GPU, and 5 times lower power consumption compared to specialized IoT inference hardware. Even better, as we scale the network up by 50 times, Loihi maintains real-time performance results and uses only 30 percent more power, whereas the IoT hardware uses 500 percent more power and is no longer real-time,” says Chris Eliasmith, co-CEO of Applied Brain Research and professor at the University of Waterloo.

As per IEEE Spectrum, Intel and its research partners are just beginning to test what massive neural systems like Pohoiki Beach can do, but so far the evidence points to even greater performance and efficiency, says Mike Davies, director of neuromorphic research at Intel.
“We’re quickly accumulating results and data that there are definite benefits… mostly in the domain of efficiency. Virtually every one that we benchmark… we find significant gains in this architecture,” he says.

Going from a single Loihi to 64 of them is more of a software issue than a hardware one. “We designed scalability into the Loihi chip from the beginning,” says Davies. “The chip has a hierarchical routing interface… which allows us to scale to up to 16,000 chips. So 64 is just the next step.”

According to Davies, Loihi can run networks that are immune to catastrophic forgetting and can learn more like humans do. As evidence, he points to research by Thomas Cleland’s group at Cornell University showing that Loihi can achieve one-shot learning: that is, learning a new feature after being exposed to it only once. Loihi can also run feature-extraction algorithms that are immune to the kinds of adversarial attacks that confuse image-recognition systems. Traditional neural networks don’t really understand the features they’re extracting from an image in the way our brains do. “They can be fooled with simplistic attacks like changing individual pixels or adding a screen of noise that wouldn’t fool a human in any way,” Davies explains. But the sparse-coding algorithms Loihi can run work more like the human visual system and so wouldn’t fall for such shenanigans.

The news has generated a lot of excitement in the community, which is now awaiting the 100-million-neuron system promised by the end of this year.
https://twitter.com/javiermendonca/status/1151131213576359937
https://twitter.com/DSakya/status/1150988779143880704
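The "neuron analogs" Loihi implements in silicon are spiking neurons. For rough intuition about what a single one computes, here is a minimal leaky integrate-and-fire (LIF) simulation in Python. This is the textbook model of a spiking neuron, not Intel's actual circuit, and every parameter value below is illustrative:

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    its resting value, integrates incoming current, and emits a spike
    (then resets) whenever it crosses the firing threshold."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += (dt / tau) * (v_rest - v) + i_in  # leak toward rest, add input
        if v >= v_thresh:
            spike_times.append(t)
            v = v_rest                         # reset after the spike
    return spike_times

rng = np.random.default_rng(42)
toy_input = rng.uniform(0.0, 0.12, size=200)   # arbitrary toy input signal
print(simulate_lif(toy_input))                 # time steps at which spikes fired
```

Spikes, rather than dense matrix multiplications, are the unit of work in this style of computing: when nothing crosses threshold, essentially nothing is computed, which is part of what makes such architectures attractive for sparse workloads.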
Read more:
  • Intel discloses four new vulnerabilities labeled MDS attacks affecting Intel chips
  • Intel plans to exit from the 5G smartphone modem business, following the Apple Qualcomm dispute
  • Google researchers present Zanzibar, a global authorization system that scales to trillions of access control lists and millions of authorization requests per second


EU Commission opens an antitrust case against Amazon on grounds of violating EU competition rules

Fatema Patrawala
17 Jul 2019
3 min read
Today, the European Commission opened a formal antitrust investigation to assess whether Amazon’s use of sensitive data from independent retailers who sell on its marketplace breaches EU competition rules.
https://twitter.com/EU_Competition/status/1151428097847287808

Commissioner Margrethe Vestager, in charge of competition policy, said: "European consumers are increasingly shopping online. E-commerce has boosted retail competition and brought more choice and better prices. We need to ensure that large online platforms don't eliminate these benefits through anti-competitive behaviour. I have therefore decided to take a very close look at Amazon's business practices and its dual role as marketplace and retailer, to assess its compliance with EU competition rules.”

The Commission has noted that Amazon, while providing a marketplace for independent sellers, collects data about the activity on its platform. Based on preliminary fact-finding, Amazon appears to use competitively sensitive information about marketplace sellers, their products, and transactions on the marketplace. As part of its in-depth investigation, the Commission will look into:
  • the standard agreements between Amazon and marketplace sellers, which allow Amazon's retail business to analyse and use third-party seller data; in particular, whether and how the use of accumulated marketplace seller data by Amazon as a retailer affects competition;
  • the role of data in the selection of the winners of the “Buy Box” and the impact of Amazon's potential use of competitively sensitive marketplace seller information on that selection.

The “Buy Box” is displayed prominently on Amazon and allows customers to add items from a specific retailer directly into their shopping carts. Winning the “Buy Box” appears key for marketplace sellers, as the vast majority of transactions go through it. If proven, the practices under investigation may breach the EU competition rules on anticompetitive agreements between companies under Article 101 of the Treaty on the Functioning of the European Union (TFEU). (Source: EU Commission)

Commissioner Vestager has hinted for months that she wanted to escalate a preliminary inquiry into how Amazon may be unfairly using sales data to undercut smaller shops on its Marketplace platform. By ramping up the probe, officials can start to build a case that could ultimately lead to fines or an order to change the way the Seattle-based company operates in the EU. “If powerful platforms are found to use data they amass to get an edge over their competitors, both consumers and the market bear the cost,” said Johannes Kleis of BEUC, the European consumer organisation in Brussels.

The Commission has already informed Amazon that proceedings have been opened. It will conduct the investigation as a matter of priority; there is no legal deadline for bringing the case to an end. The current Chief Economist at the EU Commission approached Sen. Elizabeth Warren, who wants to break up big tech, to umpire and build a team to lead this case.
https://twitter.com/TomValletti/status/1151430006209482752

To know more about this news, you can check out the official EU Commission page.

Read more:
  • Elizabeth Warren wants to break up tech giants like Amazon, Google, Facebook, and Apple and build strong antitrust laws
  • Amazon is the next target on EU’s antitrust hitlist
  • Amazon workers protest on its Prime day, demand a safe work environment and fair wage


GraphQL API is now generally available

Amrata Joshi
17 Jul 2019
3 min read
Last month, the team at Fauna, provider of FaunaDB, a cloud-first database, announced the general availability of its GraphQL API. With support for GraphQL, a query language for APIs, FaunaDB now lets developers use the API of their choice to manipulate all their data, making it, the company says, the only serverless backend with support for universal database access. GraphQL also helps developer productivity by enabling fast, easy development of serverless applications.

Matt Biilmann, CEO at Netlify, a Fauna partner, said, “Fauna’s GraphQL support is being introduced at a perfect time as rich, serverless apps are disrupting traditional development models.” Biilmann added, “GraphQL is becoming increasingly important to the entire developer community as they continue to leverage JAMstack and serverless to simplify cloud application development. We applaud Fauna’s work as the first company to bring a serverless GraphQL database to market.”

GraphQL lets developers specify the shape of the data they need without requiring changes to the backend components that provide the data. The GraphQL API in FaunaDB helps teams collaborate smoothly: back-end teams can focus on security and business logic, while front-end teams concentrate on presentation and usability.

The global serverless architecture market was valued at $3.46 billion in 2017 and is expected to reach $18.04 billion by 2024, as per Zion Research. GraphQL brings growth to serverless development, so developers can look for back-end GraphQL support like that found in FaunaDB.

GraphQL supports three general kinds of operations: queries, mutations, and subscriptions; currently, FaunaDB natively supports queries and mutations. FaunaDB’s GraphQL API provides developers with uniform access to transactional consistency, quality of service (QoS), user authorization, data access, and temporal storage. Its key capabilities include:
  • No limits on data history: FaunaDB is the only database that supports unlimited data history. Any API in FaunaDB, such as SQL, can return data as of any given time.
  • Consistency: FaunaDB provides the highest consistency levels for its transactions, applied automatically to all APIs.
  • Authorization: FaunaDB provides access control at the row level, applicable to all APIs, be it GraphQL or SQL.
  • Shared data access: Data written by one API (e.g., GraphQL) can be read and modified by another API, such as FQL.

To know more about the news, check out the press release.
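To make the query interface concrete, here is a minimal sketch of talking to FaunaDB's GraphQL endpoint from Python. It assumes you have already imported a GraphQL schema declaring a `Todo` type (Fauna auto-generates mutations such as `createTodo` for it) and an `allTodos` query; the secret is a placeholder:

```python
import requests

FAUNA_GRAPHQL_URL = "https://graphql.fauna.com/graphql"
HEADERS = {"Authorization": "Bearer YOUR_FAUNA_SECRET"}  # placeholder secret

# Mutation and query against the assumed Todo schema.
CREATE_TODO = """
mutation {
  createTodo(data: { title: "Read the Fauna press release" }) {
    _id
    title
  }
}
"""

LIST_TODOS = """
query {
  allTodos { data { _id title } }
}
"""

for operation in (CREATE_TODO, LIST_TODOS):
    resp = requests.post(FAUNA_GRAPHQL_URL,
                         json={"query": operation}, headers=HEADERS)
    print(resp.json())
```

The same data written through this GraphQL endpoint could then be read back through FQL, per the shared data access point above.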
Read more:
  • 7 reasons to choose GraphQL APIs over REST for building your APIs
  • Best practices for RESTful web services: Naming conventions and API Versioning [Tutorial]
  • Implementing routing with React Router and GraphQL [Tutorial]


Meredith Whittaker, Google Walkout organizer, and AI ethics researcher is leaving the company, adding to its brain-drain woes over ethical concerns

Sugandha Lahoti
16 Jul 2019
4 min read
Meredith Whittaker, who played a major role in last year's Google Walkout, is leaving the company after facing retaliation at work. The news came to light when a software engineer at Google tweeted about her last day. https://twitter.com/thegreenfrog611/status/1150859347766833152

A Google spokeswoman also confirmed Whittaker’s departure to Bloomberg. Whittaker herself has not yet shared the news on her Twitter account.

Last November, the global Google Walkout for Real Change was organized by Claire Stapleton, Meredith Whittaker, and six other employees at the company. It prompted 20,000 Google employees and contractors to walk off the job in protest of the company’s handling of sexual harassment allegations. In April, Stapleton and Whittaker accused the company of retaliating against them over the walkout; both their roles changed dramatically, including calls to abandon AI ethics work, demotion, and more.

After Google disbanded its AI ethics council, Whittaker said she was informed that, to remain at the company, she would have to abandon her work on AI ethics and the AI Now Institute. She said her manager told her in late December that she would likely need to leave Google’s Cloud division. The same manager told her in March that the Cloud division "was seeking more revenue and that AI Now and her AI ethics work was no longer a fit. This was a strange request because the Cloud unit has a team working on ethical concerns related to AI."

Stapleton faced similar retaliation and was told she would be demoted from her role as marketing manager at YouTube. “My manager started ignoring me, my work was given to other people, and I was told to go on medical leave, even though I’m not sick,” she said. Following continuous counter-attacks, Stapleton resigned from her position last month. https://twitter.com/clairewaves/status/1137002800053985280 Whittaker had tweeted in her support. https://twitter.com/mer__edith/status/1137006840313548801

Whittaker had signed the petition protesting Google’s infamous Project Dragonfly, the secretive search engine that Google is allegedly developing to comply with Chinese censorship rules. She was also a leader in the anti-Maven movement. Google’s Project Maven was focused on analyzing drone footage and could eventually have been used to improve drone strikes on the battlefield. More than 3,000 Google employees signed a petition against the project, which led to Google deciding not to renew its contract with the U.S. Department of Defense in 2019; Google announced in June it would not renew the contract. Whittaker tweeted at the time that she was “incredibly happy about this decision, and have a deep respect for the many people who worked and risked to make it happen. Google should not be in the business of war.”

People have commented on how Whittaker's departure will only intensify activism at Google. “The impact @mer__edith has in AI ethics is second to none. What happens to her at Google will be a gauge for the wellbeing of the entire field. Watch closely,” tweeted Moritz Hardt, assistant professor of electrical engineering and computer science at UC Berkeley. https://twitter.com/mrtz/status/1121110692843507712 https://twitter.com/Kantrowitz/status/1150992543691108352

Liz Fong-Jones, a Xoogler who left Google over ethical concerns earlier this year, tweeted about the number of Google Walkout and other organizing leaders who have left the company: five so far, namely Claire Stapleton, Meredith Whittaker, Liz Fong-Jones, Celie O'Neil-Hart, and Erica Anderson. https://twitter.com/lizthegrey/status/1150960547803860993

Read more:
  • Google rejects all 13 shareholder proposals at its annual meeting, despite protesting workers
  • Google Walkout organizer, Claire Stapleton resigns after facing retaliation from management
  • #NotOkGoogle: Employee-led town hall reveals hundreds of stories of retaliation at Google


Volkswagen, under its new self-driving vehicle alliance with Ford, invests $2.6 billion in Argo AI

Bhagyashree R
15 Jul 2019
3 min read
After ending its ties with self-driving developer Aurora earlier this year, Volkswagen disclosed on Friday that it is now investing $2.6 billion in Ford’s autonomous-car partner, Argo AI. The deal, which values the operation at more than $7 billion, is part of a broader alliance between Volkswagen and Ford covering autonomous and electric vehicles.

“While Ford and Volkswagen remain independent and fiercely competitive in the marketplace, teaming up and working with Argo AI on this important technology allows us to deliver unmatched capability, scale, and geographic reach,” said Ford Chief Executive Officer Jim Hackett.

Under the alliance, Ford and Volkswagen are joining forces to take advantage of each other’s strengths: Ford is ahead of Volkswagen in autonomous driving, while Volkswagen is more advanced in electric cars. Volkswagen plans to merge its Munich-based subsidiary Autonomous Intelligent Driving (AID), including its 200 employees and the intellectual property they have developed, into Argo. Argo AI, founded in 2016, has about 500 employees; with the merger, that will grow to 700.

Argo AI's chief executive, Bryan Salesky, worked for Google before founding the company. He believes the deal will help his company scale. In a Reuters interview, he said, “We have two great customers and investors who are going to help us really scale and are committed to us for the long term.” He further said that Argo is open to additional strategic or financial investors to help share the costs of bringing self-driving vehicles to market. “We all realize this is a time-, talent- and capital-intensive business,” he said.

Ford and VW, Argo’s two investors, will each hold an equal minority stake in the startup; together, their stakes make up a majority. Argo’s board will also expand from five to seven members.

The investment looks promising for Volkswagen, as it opens an opportunity to catch up with Alphabet Inc.’s Waymo and General Motors Co.’s Cruise unit. Given how resource-intensive this field is, it makes sense to form alliances to achieve the goal. Ferdinand Dudenhöffer, a professor at the University of Duisburg-Essen in Germany, said in an email to the New York Times, “Autonomous driving is a very, very expensive technology. One has to invest today in order to make the first sales in 2030, maybe. Therefore it makes a lot of sense for Ford and VW to work together.”

Read more:
  • Alphabet’s Waymo to launch the world’s first commercial self driving cars next month
  • Apple gets into chip development and self-driving autonomous tech business
  • Tesla Autonomy Day takeaways: Full Self-Driving computer, Robotaxis launching next year, and more


Stripe’s API degradation RCA found unforeseen interaction of database bugs and a config change led to cascading failure across critical services

Vincy Davis
15 Jul 2019
4 min read
On 10th July, Stripe’s API services went down twice, from 16:36–17:02 UTC and again from 21:14–22:47 UTC. Though the services recovered quickly each time, the incidents caused significantly elevated error rates and response times. Two days after the incident, on 12th July, Stripe shared a root cause analysis of the repeated degradation, as users had requested. David Singleton, Stripe’s CTO, summarizes the API failures: “two different database bugs and a configuration change interacted in an unforeseen way, causing a cascading failure across several critical services.”

What was the cause of Stripe’s first API degradation?

Three months ago, Stripe upgraded its databases to a new minor version and performed the necessary testing to maintain a quality-assured environment, including a phased production rollout across less critical and then increasingly critical clusters. Though the new version operated properly for the first three months, on the day of the event multiple nodes stalled, leaving one shard unable to elect a new primary. (“Stripe splits data by kind into different database clusters and by quantity into different shards. Each cluster has many shards, and each shard has multiple redundant nodes.”)

As the shard was widely used, its unavailability starved the API of compute resources and severely degraded the API services. The Stripe team detected the failed election within a minute and started incident response within two minutes. The team forced the election of a new primary, which meant restarting the database cluster, and 27 minutes after the degradation began, the Stripe API fully recovered.

What caused Stripe’s API to degrade again?

Once the API recovered, the team started investigating the root cause of the first degradation. They identified a code path in the new version of the database’s election protocol and decided to revert all shards of the impacted cluster to the previous known-stable version, a change deployed within four minutes. The cluster worked fine until 21:14 UTC, when automated alerts fired indicating that some shards in the cluster were again unavailable, including the shard implicated in the first degradation. Though the symptoms appeared the same, the second degradation had a different cause: the reverted version interacted poorly with a configuration change made to the production shards. Once CPU starvation was observed, the Stripe team updated the production configuration and restored the affected shards. After verifying the shards were healthy, the team ramped traffic back up, prioritizing user-initiated API requests. Stripe’s API services were fully recovered at 22:47 UTC.

Remedial actions taken

The Stripe team has undertaken several measures to ensure such degradation does not recur:
  • An additional monitoring system alerts whenever nodes stop reporting replication lag (see the sketch below).
  • Several changes prevent failures of individual shards from cascading across large fractions of API traffic.
  • Stripe will introduce further procedures and tooling so operators can make rapid configuration changes safely during incident response.
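The first of those measures, alerting when nodes stop reporting replication lag, comes down to treating a missing metric as a failure signal in its own right. Here is a minimal sketch of such a check; all names, data shapes, and thresholds are hypothetical, not Stripe's actual tooling:

```python
import time

STALENESS_THRESHOLD = 60  # seconds without a lag report before alerting

def stale_nodes(last_report_times, now=None):
    """Return the nodes that have stopped reporting replication lag.

    The failure mode described in the RCA involved stalled nodes, so a
    metric that silently disappears matters as much as one that reports
    a high value."""
    now = time.time() if now is None else now
    return [node for node, reported_at in last_report_times.items()
            if now - reported_at > STALENESS_THRESHOLD]

# Toy data: shard1-c last reported five minutes ago (names are hypothetical).
now = time.time()
reports = {"shard1-a": now - 5, "shard1-b": now - 12, "shard1-c": now - 300}
print(stale_nodes(reports, now))  # -> ['shard1-c']
```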
Reactions to Stripe’s analysis of the API degradation have been mixed. Some users believe the Stripe team should have focused on fully mitigating the error at the time, rather than analysing the situation. A Hacker News comment reads, “In my experience customers deeply detest the idea of waiting around for a failure case to re-occur so that you can understand it better. When your customers are losing millions of dollars in the minutes you're down, mitigation would be the thing, and analysis can wait. All that is needed is enough forensic data so that testing in earnest to reproduce the condition in the lab can begin. Then get the customers back to working order pronto. 20 minutes seems like a lifetime if in fact they were concerned that the degradation could happen again at any time. 20 minutes seems like just enough time to follow a checklist of actions on capturing environmental conditions, gather a huddle to make a decision, document the change, and execute on it. Commendable actually, if that's what happened.”

A few users appreciated Stripe’s analysis report.
https://twitter.com/thinkdigitalco/status/1149767229392769024

Visit the Stripe website for a detailed timeline report.

Read more:
  • Twitter experienced major outage yesterday due to an internal configuration issue
  • Facebook, Instagram and WhatsApp suffered a major outage yesterday; people had trouble uploading and sending media files
  • Cloudflare suffers 2nd major internet outage in a week. This time due to globally deploying a rogue regex rule.

Google Cloud and Nvidia Tesla set new AI training records with MLPerf benchmark results

Amrata Joshi
15 Jul 2019
3 min read
Last week, the MLPerf effort released the results for MLPerf Training v0.6, the second round of results from its machine learning training performance benchmark suite. AI practitioners use these benchmarks as common standards for measuring the performance and speed of the hardware used to train AI models. As per the latest results, Nvidia and Google Cloud set new AI training time records.

MLPerf v0.6 measures the training performance of machine learning acceleration hardware in six categories: image classification, object detection (lightweight), object detection (heavyweight), translation (recurrent), translation (non-recurrent), and reinforcement learning. MLPerf is an association of more than 40 companies and researchers from leading universities, and its benchmark suites are becoming the industry standard for measuring machine learning performance.

As per the results, Nvidia’s Tesla V100 Tensor Core GPUs in an Nvidia DGX SuperPOD completed on-premise training of the ResNet-50 model for image classification in 80 seconds. Nvidia was also the only vendor to submit results in all six categories. For comparison, when Nvidia launched the DGX-1 server in 2017, the same model training took 8 hours. In a statement to ZDNet, Paresh Kharya, director of accelerated computing for Nvidia, said, “The progress made in just a few short years is staggering.” He further added, “The results are a testament to how fast this industry is moving.”

Google Cloud entered five categories and set three records for performance at scale with its Cloud TPU v3 Pods, Google’s latest generation of supercomputers, built specifically for machine learning. The record-setting runs each used less than two minutes of compute time. The TPU v3 Pods set a record in machine translation from English to German, training the Transformer model in 51 seconds. Cloud TPU v3 Pods train models over 84% faster than the fastest on-premise systems in the MLPerf Closed Division. TPU Pods also achieved record performance in the ResNet-50 image classification benchmark on the ImageNet data set, as well as training a model in another object detection category in 1 minute and 12 seconds.

In a statement to ZDNet, Google Cloud's Zak Stone said, "There's a revolution in machine learning.” He further added, "All these workloads are performance-critical. They require so much compute, it really matters how fast your system is to train a model. There's a huge difference between waiting for a month versus a couple of days."
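MLPerf Training results are reported as time-to-train: the wall-clock time a system needs to reach a target quality on a given task, rather than raw throughput. The toy harness below illustrates that measurement style; the training and evaluation stand-ins are hypothetical, and the target value is only an example:

```python
import time

def time_to_quality(train_one_epoch, evaluate, target_quality):
    """Measure wall-clock time until the model first reaches the target
    quality, the style of metric MLPerf Training reports."""
    start = time.perf_counter()
    epochs = 0
    while evaluate() < target_quality:
        train_one_epoch()
        epochs += 1
    return time.perf_counter() - start, epochs

# Toy stand-ins for a real model and dataset (purely illustrative):
state = {"accuracy": 0.0}
elapsed, epochs = time_to_quality(
    train_one_epoch=lambda: state.update(accuracy=state["accuracy"] + 0.05),
    evaluate=lambda: state["accuracy"],
    target_quality=0.759,  # e.g., a top-1 accuracy target for image classification
)
print(f"reached target in {epochs} epochs, {elapsed:.6f}s")
```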
Read more:
  • Google suffers another Outage as Google Cloud servers in the us-east1 region are cut off
  • Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
  • Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh


Google’s language experts are listening to some recordings from its AI assistant

Bhagyashree R
12 Jul 2019
4 min read
After the news of Amazon employees listening to your Echo audio recordings, we now have the unsurprising report of Google workers doing the same. The news was reported by the Belgian public broadcaster VRT NWS on Wednesday. Addressing the report, Google accepted in a blog post yesterday that it does this to make its AI assistant smarter at understanding user commands, regardless of language.

In its privacy policies, the tech giant states, “Google collects data that's meant to make our services faster, smarter, more relevant, and more useful to you. Google Home learns over time to provide better and more personalized suggestions and answers.” The policies also mention that Google shares information with its affiliates and other trusted businesses. What they do not explicitly say is that these recordings are shared with its workers too. Google hires language experts to transcribe audio clips recorded by its AI assistant, and those experts can end up listening to sensitive information about users.

Whenever you make a request to a Google Home smart speaker, or any other smart speaker for that matter, your speech is recorded. These audio recordings are sent to the companies' servers, where they are used to train speech recognition and natural language understanding systems. A small subset of these recordings, 0.2% in the case of Google, is sent to language experts around the globe, who transcribe them as accurately as possible. Their work is not about analyzing what the user is saying, but how they are saying it; this helps Google's AI assistant understand the nuances and accents of a particular language.

The problem is that these recordings often contain sensitive data. Google claims in the blog post that the audio snippets are analyzed anonymously, meaning reviewers are not able to identify the user they are listening to. “Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google,” the tech giant said. Countering this claim, VRT NWS was able to identify people through personal addresses and other sensitive information in the recordings. “This is undeniably my own voice,” said one man. Another family recognized the voice of their son and grandson in a recording.

Worse, these smart speakers sometimes record audio clips entirely by accident. Although the companies claim the devices only start recording when they hear their “wake words,” like “Okay Google,” many reports show the devices often start recording by mistake. Of the thousand or so recordings reviewed by VRT NWS, 153 were captured accidentally.

Google mentioned in the blog post that it applies “a wide range of safeguards to protect user privacy throughout the entire review process.” It further accepted that these safeguards failed in the case of the Belgian contract worker who shared the audio recordings with VRT NWS, violating the company’s data security and privacy rules in the process. “We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again,” the tech giant wrote.

Companies not being upfront about the transcription process can cause legal trouble for them. Michael Veale, a technology privacy researcher at the Alan Turing Institute in London, told Wired that this practice of sharing users' personal information might not meet the standards set by the EU’s GDPR regulations. “You have to be very specific on what you’re implementing and how. I think Google hasn’t done that because it would look creepy,” he said.

Read the entire story on VRT NWS’s official website. You can watch the full report on YouTube. https://youtu.be/x8M4q-KqLuo

Read more:
  • Amazon’s partnership with NHS to make Alexa offer medical advice raises privacy concerns and public backlash
  • Amazon is being sued for recording children’s voices through Alexa without consent
  • Amazon Alexa is HIPAA-compliant: bigger leap in the health care sector


Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold ’Em Poker

Sugandha Lahoti
12 Jul 2019
5 min read
Researchers from Facebook and Carnegie Mellon University have developed an AI bot, Pluribus, that has defeated human professionals in six-player no-limit Texas Hold’em poker. Pluribus beat pro players in both a “five AIs + one human player” format and a “one AI + five human players” format: it was tested in 10,000 games against five human players, as well as in 10,000 rounds where five copies of the AI played against one professional. This is the first time an AI bot has beaten top human players in a complex game with more than two players or two teams.

Pluribus was developed by Noam Brown of Facebook AI Research and Tuomas Sandholm of Carnegie Mellon University. It builds on Libratus, their previous poker-playing AI, which defeated professionals at Heads-Up Texas Hold’em, a two-player game, in 2017.

Mastering six-player poker is difficult for an AI because of the number of possible actions. First, with six players, the game has far more variables, and the bot cannot work out a perfect strategy for each game as it could for a two-player game. Second, poker involves hidden information: a player only has access to the cards they can see, so the AI has to take into account how it would act with different cards to avoid making it obvious when it has a good hand. Brown wrote on a Hacker News thread, “So much of early AI research was focused on beating humans at chess and later Go. But those techniques don't directly carry over to an imperfect-information game like poker. The challenge of hidden information was kind of neglected by the AI community. This line of research really has its origins in the game theory community actually (which is why the notation is completely different from reinforcement learning). Fortunately, these techniques now work really really well for poker.”

What went into Pluribus?

Initially, Pluribus engages in self-play, playing against copies of itself without any data from human or prior AI play used as input. The AI starts from scratch by playing randomly and gradually improves as it determines which actions, and which probability distribution over those actions, lead to better outcomes against earlier versions of its strategy. This self-play produces an offline strategy for the entire game, called the blueprint strategy. During play, Pluribus improves on the blueprint strategy by searching in real time for a better strategy in the situations it actually finds itself in; its online search algorithm can efficiently evaluate options by searching just a few moves ahead rather than only to the end of the game.

Real-time search

The blueprint strategy in Pluribus was computed using a variant of counterfactual regret minimization (CFR). The researchers used Monte Carlo CFR (MCCFR), which samples actions in the game tree rather than traversing the entire game tree on each iteration. Pluribus plays according to the blueprint strategy only in the first of the four betting rounds, where the number of decision points is small enough that the blueprint can afford to forgo information abstraction and include many actions in its action abstraction. After the first round, Pluribus instead conducts a real-time search to determine a better, finer-grained strategy for the current situation it is in.
https://youtu.be/BDF528wSKl8

What is astonishing is how little processing power and memory Pluribus uses: less than $150 worth of cloud computing resources.
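At the core of CFR and MCCFR is regret matching: at each decision point, play every action with probability proportional to its accumulated positive regret. The self-contained illustration below runs regret matching on rock-paper-scissors against a fixed opponent. This is vastly simpler than poker and is not Pluribus's actual code, but the update rule is the same:

```python
import numpy as np

def regret_matching(cumulative_regrets):
    """Turn cumulative regrets into a strategy: positive regrets, normalized."""
    positive = np.maximum(cumulative_regrets, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    # No positive regret yet: play uniformly at random.
    return np.full(len(cumulative_regrets), 1.0 / len(cumulative_regrets))

rng = np.random.default_rng(0)
# Rock-paper-scissors payoffs: payoff[my_action][opp_action], 0=R, 1=P, 2=S.
payoff = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
opponent = np.array([0.4, 0.3, 0.3])  # fixed, slightly rock-heavy opponent
regrets = np.zeros(3)
strategy_sum = np.zeros(3)

for _ in range(10_000):
    strategy = regret_matching(regrets)
    strategy_sum += strategy
    my_action = rng.choice(3, p=strategy)
    opp_action = rng.choice(3, p=opponent)
    # Regret of each action = its payoff minus the payoff actually obtained.
    regrets += payoff[:, opp_action] - payoff[my_action, opp_action]

# The *average* strategy is what converges; here it approaches always-paper,
# the best response to a rock-heavy opponent.
print(strategy_sum / strategy_sum.sum())
```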
The researchers trained the blueprint strategy for Pluribus in eight days on a 64-core server, using less than 512 GB of RAM and no GPUs. Stassa Patsantzis, a Ph.D. research student, appreciated Pluribus’s resource-friendly compute budget. She commented on Hacker News, “That's the best part in all of this. I'm hoping that there is going to be more of this kind of result, signaling a shift away from Big Data and huge compute and towards well-designed and efficient algorithms.” She also noted that this is significantly less than the compute used by ML systems at DeepMind and OpenAI: “In fact, I kind of expect it. The harder it gets to do the kind of machine learning that only large groups like DeepMind and OpenAI can do, the more smaller teams will push the other way and find ways to keep making progress cheaply and efficiently,” she added.

Real-life implications

AI bots such as Pluribus give a better understanding of how to build general AI that can cope with multi-agent environments, both with other AI agents and with humans. A six-player AI bot also has broader real-world relevance, because two-player zero-sum interactions (in which one player wins and one player loses) are common in recreational games but very rare in real life. Such bots could be used for handling harmful content, dealing with cybersecurity challenges, or managing an online auction or navigating traffic, all of which involve multiple actors and/or hidden information.

Beyond those applications, four-time World Poker Tour title holder Darren Elias, who helped test the program’s skills, said Pluribus could spell the end of high-stakes online poker: “I don't think many people will play online poker for a lot of money when they know that this type of software might be out there and people could use it to play against them for money.” Poker sites are actively working to detect and root out possible bots. Brown, Pluribus’s developer, is more optimistic: he says it is exciting that a bot could teach humans new strategies and ultimately improve the game. “I think those strategies are going to start penetrating the poker community and really change the way professional poker is played,” he said.

For more information on Pluribus and its workings, read Facebook’s blog.

Read more:
  • DeepMind’s Alphastar AI agent will soon anonymously play with European StarCraft II players
  • Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa
  • OpenAI Five bots destroyed human Dota 2 players this weekend


Stripe's API suffered two consecutive outages yesterday causing elevated error rates and response times

Vincy Davis
11 Jul 2019
3 min read
Yesterday, Stripe’s API services went down twice, from 16:36–17:02 UTC and again from 21:14–22:47 UTC. Though the API services were recovered immediately after the disruptions, the outages caused elevated error rates and response times. Stripe has not yet specified the cause of the degradation, but has promised to share a root cause analysis later.

Stripe posted constant updates about the repeated degradation and recovery on Twitter. Meanwhile, Stripe users expressed their outrage on the platform.
https://twitter.com/OBX_Kayak/status/1149091674620080128
https://twitter.com/secretveneers/status/1138061688186576896
https://twitter.com/shazbegg/status/1138035967095390209
https://twitter.com/katetomasdphil/status/1138075917283188736

The issue started at 16:36 UTC with some Stripe payouts to GBP bank accounts being delayed. Stripe then informed users that it was investigating an issue with its UK banking partner that had delayed some GBP payouts. Later, Stripe confirmed that all the affected payouts had been processed and the issue was resolved.

Stripe’s CEO Patrick Collison commented on one of the Hacker News threads, “Stripe CEO here. We're very sorry about this. We work hard to maintain extreme reliability in our infrastructure, with a lot of redundancy at different levels. This morning, our API was heavily degraded (though not totally down) for 24 minutes. We'll be conducting a thorough investigation and root-cause analysis.”

Later in the day, around 21:14 UTC, Stripe informed users that error rates and response times had increased again. Its API services were finally restored at 22:47 UTC. Stripe assured users that all delayed bank payments had been successfully deposited to the corresponding bank accounts.

Though many users were distraught over Stripe's service degradation, some came out in support of the company.
https://twitter.com/macrodesiac_/status/1138072815603769348
https://twitter.com/nickjanetakis/status/1149079993437380608
A user on Hacker News commented, “This is causing a big problem for my business right now, but I am not mad at Stripe because you earned that level of credibility and respect in my opinion. I understand these things happen and am glad to know a team as excellent as Stripe's is on the job.”

Many users have asked Stripe for a post-mortem analysis of the issue.
https://twitter.com/thinkdigitalco/status/1149092661082693633
https://twitter.com/DahmianOwen/status/1149071761188589568

This month, many other services, including GitLab, Google Cloud, Cloudflare, Facebook, Instagram, WhatsApp, and Apple’s iCloud, also suffered major outages. One comment on Hacker News reads, “Most services are going down from time to time, it's just that the big one are widely used and so people notice quickly.” Another user comments, “Between Cloudflare, Google, and now Stripe, I feel like there's been a huge cluster of services that never go down, going down. Curious to see Stripe's post-mortem here.”

To check Stripe’s exact system status, head over to the Stripe Status page.

Read more:
  • Stripe updates its product stack to prepare European businesses for SCA-compliance
  • Former Google Cloud CEO joins Stripe board just as Stripe joins the global Unicorn Club
  • Stripe open sources ‘Skycfg’, a configuration builder for Kubernetes

DeepMind's Alphastar AI agent will soon anonymously play with European StarCraft II players

Sugandha Lahoti
11 Jul 2019
4 min read
Earlier this year, DeepMind’s AI AlphaStar defeated two professional players at StarCraft II, a real-time strategy video game. Now, European StarCraft II players will get a chance to face off against experimental versions of AlphaStar as part of DeepMind's ongoing AI research. https://twitter.com/MaxBakerTV/status/1149067938131054593

AlphaStar learns by imitating the basic micro and macro strategies used by players on the StarCraft ladder. A neural network was initially trained with supervised learning on anonymized human games released by Blizzard. Once the agents are trained from human game replays, they are trained against other competitors in the “AlphaStar league,” where a multi-agent reinforcement learning process begins: new competitors, branched from existing competitors, are added to the league, and each agent learns from games against the others. This ensures that each competitor performs well against the strongest strategies and does not forget how to defeat earlier ones (a toy sketch of this league loop appears at the end of this piece).

Anyone who wants to participate in this experiment will have to opt in to the chance to play against the StarCraft II program; an option is provided in an in-game pop-up window, and users can alter their opt-in selection at any time. To ensure anonymity, all games will be blind test matches: European players who opt in won't know whether they have been matched against AlphaStar. This helps ensure that all games are played under the same conditions, as players may react differently when they know they are up against an AI. A win or a loss against AlphaStar will affect a player’s MMR (Matchmaking Rating) like any other game played on the ladder.

“DeepMind is currently interested in assessing AlphaStar’s performance in matches where players use their usual mix of strategies,” Blizzard said in its blog post. “Having AlphaStar play anonymously helps ensure that it is a controlled test, so that the experimental versions of the agent experience gameplay as close to a normal 1v1 ladder match as possible. It also helps ensure all games are played under the same conditions from match to match.”

Some people have appreciated the anonymous testing feature. A Hacker News user commented, “Of course the anonymous nature of the testing is interesting as well. Big contrast to OpenAI's public play test. I guess it will prevent people from learning to exploit the bot's weaknesses, as they won't know they are playing a bot at all. I hope they eventually do a public test without the anonymity so we can see how its strategies hold up under focused attack.” Others find it interesting to consider what would happen if players knew they were playing against AlphaStar.
https://twitter.com/hardmaru/status/1149104231967842304

AlphaStar will play as all three of StarCraft’s in-universe races (Terran, Zerg, or Protoss). Pairings on the ladder will be decided according to normal matchmaking rules, which depend on how many players are online while AlphaStar is playing. It will not be learning from the games it plays on the ladder, having been trained from human replays and self-play. AlphaStar will also use a camera interface and more restricted APMs. Per the blog post, “AlphaStar has built-in restrictions, which cap its effective actions per minute and per second. These caps, including the agents’ peak APM, are more restrictive than DeepMind’s demonstration matches back in January, and have been applied in consultation with pro players.”
https://twitter.com/Eric_Wallace_/status/1148999440121749504
https://twitter.com/Liquid_MaNa/status/1148992401157054464

DeepMind will benchmark the performance of a number of experimental versions of AlphaStar to gather a broad set of results during the testing period. It will use players' replays and game data (skill level, MMR, the map played, race played, time/date played, and game duration) to assess and describe the performance of the AlphaStar system. However, DeepMind will remove identifying details from the replays, including usernames, user IDs, and chat histories; other identifying details will be removed to the extent that they can be without compromising the research. For now, AlphaStar agents will play only in Europe. The research results will be released in a peer-reviewed scientific paper along with replays of AlphaStar’s matches.
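The league training described above (branch a newcomer from an existing competitor, train it against the whole pool so it beats both current and older strategies, then freeze it as a future opponent) can be caricatured in a few lines. In this toy sketch, a single "skill" scalar stands in for a real network and a weighted coin flip stands in for a StarCraft match; everything here is illustrative, not DeepMind's code:

```python
import random

def play_match(a, b):
    """Stand-in for a real match: the higher-skill agent wins more often."""
    return random.random() < a["skill"] / (a["skill"] + b["skill"])

def train(agent, opponent_pool, games=200):
    """Toy 'learning': nudge skill upward for each win against the pool."""
    for _ in range(games):
        opponent = random.choice(opponent_pool)
        if play_match(agent, opponent):
            agent["skill"] *= 1.01  # stand-in for a gradient update

league = [{"skill": 1.0}]  # in AlphaStar, seeded by supervised imitation
for generation in range(10):
    newcomer = dict(random.choice(league))  # branch from an existing competitor
    train(newcomer, league)                 # learn against the whole league
    league.append(newcomer)                 # freeze it in as a future opponent

print(sorted(round(a["skill"], 2) for a in league))
```

Because every frozen competitor stays in the pool, a newcomer that only beats the latest strategy, while losing to older ones, gains less than one that generalizes, which is the point of the league design.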
Read more:
  • Google DeepMind’s AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers
  • Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games
  • Deepmind’s AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare


Amazon Aurora makes PostgreSQL Serverless generally available

Vincy Davis
10 Jul 2019
3 min read
Yesterday, Danilo Poccia, an evangelist at Amazon Web Services, announced that the PostgreSQL-compatible edition of Aurora Serverless is generally available. Aurora PostgreSQL Serverless lets customers create database instances that run only when needed and automatically scale up or down based on demand; if a database isn't needed, it shuts down until it is. With Aurora Serverless, users pay on a per-second basis for the database capacity they use while the database is active, plus the usual Aurora storage costs. Last year, Amazon made Aurora Serverless MySQL generally available.

How Aurora PostgreSQL Serverless storage works

When a database is created with Aurora Serverless, users set its minimum and maximum capacity. Client applications transparently connect to a proxy fleet that routes the workload to a pool of automatically scaled resources. Scaling is quick, as the resources are "warm" and ready to be added to serve user requests. (Image source: Amazon blog)

The storage layer is independent of the compute resources used by the database, and storage is not provisioned in advance. The minimum storage is 10 GB; based on database usage, Aurora storage automatically grows in 10 GB increments, up to 64 TB, with no impact on database performance.

How to create an Aurora Serverless PostgreSQL database

  • Create a database from the Amazon RDS console, using Amazon Aurora as the engine.
  • Select a PostgreSQL version compatible with Aurora Serverless; once the version is selected, the serverless option becomes available. Currently, that is version 10.5.
  • Enter an identifier for the new DB cluster, choose the master username, and let Amazon RDS generate a password; credentials can be retrieved during database creation.
  • Select the minimum and maximum capacity for the database in terms of Aurora Capacity Units (ACUs) and, in the additional scaling configuration, choose to pause compute capacity after 5 minutes of inactivity. Based on these settings, Aurora Serverless automatically creates scaling rules with thresholds for CPU utilization, connections, and available memory.
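The console steps above correspond to a single API call. Here is a rough boto3 equivalent; the cluster identifier, credentials, and capacity values are illustrative, and you may need to pin an engine version that supports serverless mode:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a serverless Aurora PostgreSQL cluster (all values illustrative).
response = rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-cluster",
    Engine="aurora-postgresql",
    EngineMode="serverless",
    MasterUsername="postgres",
    MasterUserPassword="choose-a-strong-password",
    ScalingConfiguration={
        "MinCapacity": 2,              # in Aurora Capacity Units (ACUs)
        "MaxCapacity": 8,
        "AutoPause": True,             # pause compute when idle...
        "SecondsUntilAutoPause": 300,  # ...after 5 minutes of inactivity
    },
)
print(response["DBCluster"]["Status"])
```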
Aurora Serverless PostgreSQL is now available in US East (N. Virginia and Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo). Many developers are happy with the announcement.
https://twitter.com/oxbits/status/1148840886224265218
https://twitter.com/sam_jeffress/status/1148845547110854656
https://twitter.com/maciejwalkowiak/status/1148829295948771331
Visit the Amazon blog for more details.

Read more:
  • How do AWS developers manage Web apps?
  • Amazon launches VPC Traffic Mirroring for capturing and inspecting network traffic
  • Amazon adds UDP load balancing support for Network Load Balancer


Mozilla announces a subscription-based service for providing ad-free content to users

Amrata Joshi
05 Jul 2019
4 min read
Earlier this year, the team at Mozilla announced a partnership with Scroll, a startup working to let users explore an ad-free internet without hurting publishers. Several months after that news, Mozilla yesterday teased a new landing page detailing a subscription-based service through which users can get access to content from "some of the world's greatest publishers to bring you a better journalism experience," for $4.99 per month.

Users often have a bad experience with advertisements, which can be intrusive and irrelevant. Ads also capture user data and predict user behaviour, and they can even be malicious: clicking on one can expose a user's personal information. Mozilla appears to be making an effort to give users a better, ad-free experience. According to a Mozilla blog post, it hit upon the idea because it wasn't happy with the "terrible experiences and pervasive tracking" designed to persuade users to click on ads and share their personal data.

The official post reads, “We share your payment directly with the sites you read. They make more money which means they can bring you great content without needing to distract you with ads just to keep the lights on.” The idea is that users pay for an ad-free experience, and the money is then distributed to the publishers those users choose.

Subscribers will also get access to audio versions of articles, bookmarks synced across devices, exclusive top recommended reads, and an app that helps users find great content without the distraction of ads. The service would be cross-platform, allowing users to read content on a phone or a PC, and an article will remain ad-free even when a reader reaches it by clicking a link on Twitter or opening a website directly.

Mozilla is yet to share the specifics of the service, saying the initiative “will help shape our direction with respect to finding alternatives to the status quo advertising models. As always, we will continue to put users first and operate transparently as our explorations progress towards more concrete product plans.”

The partnership with Scroll should also help rebalance digital advertising revenue, most of which goes to a handful of big companies, endangering the existence of many smaller publishers. It is still not clear, though, whether the revenue generated for publishers will be enough for them.

Users have given mixed reactions, and a few think it is not a new idea: companies like Google, Patreon, and Flattr have already tried it. A user commented on Hacker News, “I mean, isn't this essentially the idea behind patreon? You batch the micropayments payments into single transactions on the credit card network to reduce the marginal cost of the fixed fees? Didn't Google already do something similar with new subscriptions? Didn't flattr do this a decade ago? It's not a new idea, and as I ranted elsewhere, is only even required because of the fixed fees on credit card transactions.”

Others expect the company to opt for better partners. Another user commented, “In principle, I could be interested in this, but they'll need a much better range of partners than what currently shows up on https://scroll.com/ before it looks worthwhile to me. Less of the celebrity/pop-culture gossip, and more real news, please.”

Read more:
  • Mozilla partners with Scroll to understand consumer attitudes for an ad-free experience on the web
  • Mozilla introduces Track THIS, a new tool that will create fake browsing history and fool advertisers
  • Facebook has blocked 3rd party ad monitoring plugin tools from the likes of ProPublica and Mozilla that let users see how they’re being targeted by advertisers

China is forcing tourists crossing Xinjiang borders to install an Android app that sends personal information to authorities, reports the Vice News

Bhagyashree R
05 Jul 2019
6 min read
Yesterday, the Vice News reported in an investigative piece that China is forcing tourists who cross certain borders into Xinjiang, a western region of China, to install an Android app that shares their personal information with the authorities. This follows an April report that China is forcing residents of Xinjiang to install a similar Android app.

Since 2016, China has been conducting mass surveillance on the 13 million ethnic Uyghurs and other Turkic Muslims in Xinjiang. According to a report by Human Rights Watch, up to one million people are being held in “political education” camps. Residents have been subjected to mass arbitrary detention, restrictions on movement, and religious oppression, all under the Chinese government’s Strike Hard Campaign against Violent Terrorism. China is now taking this mass surveillance a step further by installing the surveillance app on tourists’ phones. Tourists crossing the border are taken to a clean, sterile environment to be searched, going through several stages of scrutiny and security that take around half a day. Their phones are seized and malware called BXAQ, or Fengcai, is installed.

What the analysis of BXAQ, the Android malware, revealed

The Vice News, the Guardian, and the New York Times teamed up to commission several technical analyses of the app to understand its inner workings. Cybersecurity firm Cure53 and researchers from Citizen Lab and Ruhr University Bochum analyzed the code, which included names like "CellHunter" and "MobileHunter." The Vice News shared a copy of the malware installed on tourists’ phones with Süddeutsche Zeitung, a German news publisher, and Motherboard; it is available on Motherboard’s GitHub account.

Unlike normal apps installed via app stores, this app is installed by sideloading. Once installed, it collects information such as the phone’s calendar entries, contacts, call logs, and text messages. It goes as far as scanning all the apps installed on the subject’s phone and extracting usernames from some of them. According to expert analysis, all of this collected data is sent to a server. People with iPhones were not spared scrutiny either: their iPhones were unlocked and connected via a USB cable to a hand-held device.

The app’s code also contains hashes for over 73,000 different files that the malware scans for. The researchers analyzing the app managed to uncover the inputs of around 1,300 of them by searching for the corresponding files on VirusTotal, a file search engine.
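The report does not include BXAQ’s source code, but hash-based file scanning of this kind is a standard technique. Below is a minimal sketch in Python of how such a scanner works; the hash algorithm, the scanned directory, and the blocklist entry are illustrative assumptions, since the report does not say which hash function BXAQ uses:

```python
import hashlib
import os

# Hypothetical blocklist entry: the SHA-256 of an empty file, used purely as a demo.
# BXAQ reportedly ships with hashes for more than 73,000 files.
BLOCKLIST = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path, chunk_size=65536):
    """Hash a file in chunks so large files never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_tree(root, blocklist):
    """Walk a directory tree and flag any file whose hash is on the blocklist."""
    matches = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if sha256_of(path) in blocklist:
                    matches.append(path)
            except OSError:
                continue  # unreadable file: skip it rather than abort the scan
    return matches

if __name__ == "__main__":
    for hit in scan_tree("/sdcard", BLOCKLIST):
        print("flagged:", hit)
```

The researchers’ reversal of around 1,300 hashes works in the opposite direction: services like VirusTotal index known files by their hashes, so a hash pulled from the blocklist can sometimes be matched back to the file it identifies.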
Many of the files the malware scans for contain extremist content, but it also scans for innocuous Islamic material, academic books on Islam by leading researchers, and even a music file from the Japanese metal band Unholy Grave. The report revealed that one of the scanned files was The Syrian Jihad, written by Charles Lister, a senior fellow and director of the Countering Terrorism and Extremism program at the Middle East Institute. When the Vice News told the writer, he was surprised, to say the least. He wrote in an email, "This is news to me! I’ve never had any criticism for the book—in fact, in all honesty, the opposite. Instead, I suspect China’s authorities would find anything with the word 'jihad' in the title to be potentially suspicious. The book covers, albeit minimally, the role of Turkistan Islamic Party in Syria, which may also be a point of sensitivity for Beijing. I’ve met with and engaged with Chinese officials to brief them on these issues, so I’m not aware of any problem Beijing would have with me."

What human rights defenders and other governments are saying about China’s domestic surveillance

China has been widely criticized for its dystopian digital dictatorship. Maya Wang, China senior researcher at Human Rights Watch, told the Vice News that the Chinese government often conflates harmless religious activities with terrorism. She said, "The Chinese government, both in law and practice, often conflates peaceful religious activities with terrorism. Chinese law defines terrorism and extremism in a very broad and vague manner. For example, terrorism charges can stem from mere possession of 'items that advocate terrorism,' even though there is no clear definition of what these materials may be."

This extreme use of cutting-edge technology for social control has also raised concern among other governments. On Tuesday, the United States and Germany condemned China during a closed-door United Nations Security Council meeting. A U.S. State Department official told Reuters, “The United States is alarmed by China’s highly repressive campaign against Uighurs, ethnic Kazakhs, Kyrgyz, and other Muslims in Xinjiang, and efforts to coerce members of its Muslim minority groups residing abroad to return to China to face an uncertain fate.”

The Chinese officials at the meeting responded that the matter is purely internal and that the U.S. and Germany were making "unwarranted criticism." China’s U.N. Ambassador Ma Zhaoxu said that the United States and Germany have no right to raise the issue in the Security Council. When asked about the state-run detention camps, Xinjiang vice-governor Erkin Tuniyaz said they are merely vocational centers built to “save” people from extremist influences.

What role tech plays in enabling such dystopia

China has stepped up surveillance in every part of the country, with Xinjiang as the extreme case. These steps, it says, are taken to counter security threats and religious extremism. What has changed over the years is that these surveillance measures have become smarter. Today, Xinjiang has a massive security presence along with millions of surveillance devices tracking every move people make: facial-recognition cameras, iris and body scanners at checkpoints, and mandatory apps like the one discussed above that monitor messages and data flow on Uyghurs' smartphones. Tech giants including Alibaba Group and Huawei are working with the government to build such systems.

The data from these surveillance systems, matched with personal data, determines a person’s “social credit score”. This social credit system monitors citizens’ behavior to determine their rank in society. According to the Chinese government, it aims to reinforce the idea that “keeping trust is glorious and breaking trust is disgraceful.” If your score is high, your life is convenient; if not, you face limited options for traveling, schooling, and other basic needs.

China is not alone; other countries are also stepping towards mass surveillance of their citizens. For instance, the Trump administration is forcing tourists to give away a list of all their social media accounts and all their email accounts.

https://twitter.com/BrennanCenter/status/1146253731232669697

Read the investigative piece by the Vice News to know more in detail.
Following EU, China releases AI Principles
As US-China tech cold war escalates, Google revokes Huawei’s Android support, allows only those covered under open source licensing
Silicon Valley investors envy China’s 996 work culture; but tech workers stand in solidarity with their Chinese counterparts


Almost all of Apple’s iCloud services were down for some users, most of yesterday; now back to operation

Sugandha Lahoti
05 Jul 2019
2 min read
The whole world of social media has had a meltdown this week. After a major outage hit Facebook and its family of apps, Apple’s iCloud service was also down for most of yesterday. Users reported trouble signing into iCloud and accessing their accounts, and other Apple services, including Photos, Mail, Backup, Find My Friends, Contacts, and Calendars, also saw downtime. Apple Stores were reportedly affected by the outage as well and were unable to process transactions. Apple’s system status page noted the downtime across various iCloud services.

https://twitter.com/wildrustic/status/1146820719277477890
https://twitter.com/TomSchmitz/status/1146815544391114752
https://twitter.com/SBenovitz/status/1146831989657542659

According to the status page:

Apple Pay and Apple Cash: Cardholders were unable to add, suspend, delete, or use existing cards in Apple Pay. Some users were also unable to set up Apple Cash, send and receive money, or transfer money to a bank with Apple Cash.
Find My Friends: Users were unable to find the location of their friends or devices, list registered devices, play a sound on a device, remotely wipe a device, or put a device in lost mode.
Find My iPhone: Users may have been unable to find the location of their devices, list registered devices, play a sound on a device, remotely wipe a device, or put a device in lost mode.
iCloud: Issues were found in Account & Sign In, iWork, iCloud Backup, Bookmarks & Tabs, Calendar, Contacts, Drive, Keychain, Mail, Notes, Reminders, and Storage Upgrades.

Developer tools and third-party apps were also affected. According to 9to5Mac, a user in the U.S. who was trying to get her iPhone fixed at an Apple Store was told that the outage was nationwide. However, Downdetector, an outage-tracking website, reported that issues were also observed in parts of Europe, Canada, Mexico, and Brazil.

Source: Downdetector

All services have since been restored.

Source: Apple

Surprisingly, Apple did not inform or warn its users about the outage: no tweet or statement was released, and only the status page was updated.

Facebook, Instagram and WhatsApp suffered a major outage yesterday; people had trouble uploading and sending media files
Cloudflare suffers 2nd major internet outage in a week. This time due to globally deploying a rogue regex rule
Why did Slack suffer an outage on Friday?