
Tech News - Data

1208 Articles

YouTube’s ban on “instructional hacking and phishing” videos receives backlash from the infosec community

Savia Lobo
04 Jul 2019
7 min read
Updated: Added a mention of MalwareTech's article, which paints a bigger picture of how YouTube's ban can suppress education and push aspiring learners toward shadier websites to learn hacking, which is far riskier.

A month ago, in June, YouTube said in a blog post, "The openness of YouTube's platform has helped creativity and access to information thrive. It's our responsibility to protect that, and prevent our platform from being used to incite hatred, harassment, discrimination, and violence." YouTube said it plans to moderate content on its platform in three ways:

Removing more hateful and supremacist content from the platform by banning supremacists, which will remove Nazis and other extremists who advocate segregation or exclusion based on age, gender, race, religion, sexual orientation, or veteran status.

Reducing the spread of "borderline content and harmful misinformation", such as videos promoting a phony miracle cure for a serious illness or claiming the earth is flat, and recommending videos from more authoritative sources, like top news channels, in its "next watch" panel.

Suspending channels that repeatedly brush up against its hate speech policies from the YouTube Partner Program. This means they will not be able to run ads on their channels or use other monetization features like Super Chat, which lets channel subscribers pay creators directly for extra chat features.

Following those lines, a few days ago, YouTube decided to ban all "instructional hacking and phishing" videos, listing them as "harmful or dangerous content" prohibited on its platform. YouTube said that videos demonstrating how to bypass secure computer systems or steal user credentials and personal data will be pulled from the platform.
This recent addition to YouTube's content policy is a big blow to users in the infosec industry who watch such videos for educational purposes or to develop their skills, and also to the infosec YouTube content creators who make a living maintaining dedicated channels on cybersecurity. The written policy first appears in the Internet Wayback Machine's archive of web history in an April 5, 2019 snapshot. According to The Register, "Lack of clarity about the permissibility of cyber-security related content has been an issue for years. In the past, hacking videos could be removed if enough viewers submitted reports objecting to them or if moderators found the videos violated other articulated policies. Now that there's a written rule, there's renewed concern about how the policy is being applied".

Kody Kinzie, a security researcher, educator, and owner of the popular ethical hacking and infosec YouTube channel Null Byte, tweeted that on Tuesday he could not upload a video because of the rule. He said the video was created for the US July 4th holiday to demonstrate launching fireworks over Wi-Fi. https://twitter.com/KodyKinzie/status/1146196570083192832

After blocking Kinzie from uploading the video, YouTube started to flag and remove his existing content and also issued a further strike on his channel. https://twitter.com/fuzz_sh/status/1146197679434883074 https://twitter.com/KodyKinzie/status/1146202025513771010

"I'm worried for everyone that teaches about infosec and tries to fill in the gaps for people who are learning," Kinzie said via Twitter. "It is hard, often boring, and expensive to learn cybersecurity." Many learners and the infosec community responded in support of Null Byte. YouTube then reversed its decision and removed the strikes, restoring the channel to full functionality.
https://twitter.com/myexploit2600/status/1146327656658550785 https://twitter.com/KodyKinzie/status/1146566379962695681

The YouTube policy page includes a list of things content creators should be careful about while uploading content. However, this is not a new policy; YouTube highlights that "the article now includes more examples of content that violates this policy. There are no policy changes."

According to Boing Boing, "This may sound like a commonsense measure but consider: the 'bad guys' can figure this stuff out on their own. The two groups that really benefit from these disclosures are: Users, who get to know which systems they should and should not trust; and Developers, who learn from other developers' blunders and improve their own security."

A YouTube spokesperson told The Verge that Kody Kinzie's channel was flagged by mistake and the videos have since been reinstated. "With the massive volume of videos on our site, sometimes we make the wrong call," the spokesperson said. "We have an appeals process in place for users, and when it's brought to our attention that a video has been removed mistakenly, we act quickly to reinstate it."

Dale Ruane, a hacker and penetration tester who runs a YouTube channel called DemmSec, told The Register via email that he believes this policy has always existed in some form. "But recently I've personally noticed a lot more people having issues where videos are being taken down," he said. "It seems adding video tags or titles which could be interpreted as malicious results in your video being 'dinged.' For example, I made a video about a tool which basically provided instructions of how to phish a Facebook user. That video was taken down by YouTube after a couple of weeks." He also said, "I think the way in which this policy is written is far too broad. I also find the policy extremely hypocritical from a company (Google) that has a history of embracing 'hacker' culture and claims to have the goal of organizing the world's information."

YouTube has recently taken other content moderation actions, like taking down videos fighting white supremacy alongside white supremacist content. On May 30th, Vox host Carlos Maza tweeted a thread pointing to a pattern of homophobic harassment from conservative pundit Steven Crowder on YouTube. In his comments, Crowder referred to Maza as a "little queer," "lispy queer," and "the gay Vox sprite." After several days of investigation, YouTube said that Crowder did not violate the platform's policies, but the company did not provide any insight into its process, and it chose to issue an unsigned statement via a reply to Maza on Twitter.

Following YouTube's decision, some Google employees said this does not send a positive message. One employee said, "This kind of makes me feel like it would be okay if my coworkers started calling me a lispy queer". "...It's the latest in a long series of really, really shitty behavior and double-talking on the part of my employer as pertains to anything to do with queer shit."

After widespread opposition, YouTube opted to demonetize Crowder's channel, citing "widespread harm to the YouTube community resulting from the ongoing pattern of egregious behavior." The company has since promised to "evolve its policies" on harassment in response to the backlash. Many YouTube creators have publicly derided the company for its decision, calling it an unsurprising move from a platform they feel has failed to properly address harassment. Likewise, taking down videos that help many users develop skills, out of fear they could be misused, is not the right move either; attackers can accomplish plenty without the help of these videos.
YouTube banning videos may not make the platform more secure, nor will it prevent attackers from exploiting defects. MalwareTech mentions in its blog post, "when it comes to hacking, it matters not what is taught, but how and by whom. Context is extremely important, especially with a potential audience of young and impressionable teens. Hacking tutorials will always be available no matter what, the only real question is where". In the post, MalwareTech also paints a bigger picture of how YouTube's ban can suppress education, pushing aspiring learners toward shadier websites to learn hacking, which is far riskier.

FTC to investigate YouTube over mishandling children's data privacy
YouTube disables all comments on videos featuring children in an attempt to curb predatory behavior and appease advertisers
Facebook fined $2.3 million by Germany for providing incomplete information about hate speech content


Facebook, Instagram and WhatsApp suffered a major outage yesterday; people had trouble uploading and sending media files

Sugandha Lahoti
04 Jul 2019
3 min read
Facebook and its sibling platforms Instagram and WhatsApp suffered a major outage for most of yesterday relating to image display. The issues started around 3:04 pm PT on Wednesday. Users were unable to send and receive images, videos, and other files over these social media platforms. This marks the third major outage of Facebook and its family of apps this year. Source: Down Detector

Instagram users reported that their feed might load, but they were unable to post anything new to it. Doing so brought up an error message indicating that "Photo Can't Be Posted", according to users experiencing the problems. On WhatsApp, texts were going through, but for videos and images users saw a message reading "download failed" and the content did not arrive. https://twitter.com/Navid_kh/status/1146419297385713665

Issues were particularly focused on the east coast of the US, according to the tracking website Down Detector, but they were reported across the world, with significant numbers of reports from Europe, South America, and East Asia. More than 14,000 users reported issues with Instagram, while more than 7,500 and 1,600 users complained about Facebook and WhatsApp respectively, noted Down Detector.

What was the issue? According to Ars Technica, the issue was caused by bad timestamp data being fed to the company's CDN in some image tags. All broken images had different timestamp arguments embedded in the same URLs. Loading an image from fbcdn.net with bad "oh=" and "oe=" arguments, or no arguments at all, results in an HTTP 403 "Bad URL timestamp" error.

Interestingly, because of this image outage, people were able to see how Facebook's AI automatically tags photos behind the scenes. The outage stopped social media images from loading and left in their place descriptions like "image may contain: table, plant, flower, and outdoor" and "image may contain: tree, plant, sky."
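The reported failure mode is easy to illustrate. Below is a hypothetical validator (the function and its logic are illustrative assumptions, not Facebook's actual CDN code) that rejects an image URL with a 403 when its "oh="/"oe=" arguments are missing or malformed, mirroring the behaviour Ars Technica described:

```python
from urllib.parse import urlparse, parse_qs

def check_cdn_url(url: str) -> int:
    """Return an HTTP status code for an fbcdn-style image URL.

    Illustrative only: mimics the reported behaviour in which a missing
    or malformed "oh="/"oe=" timestamp argument yields HTTP 403.
    """
    params = parse_qs(urlparse(url).query)
    for key in ("oh", "oe"):
        values = params.get(key)
        # Reject absent arguments or ones that are not hex-like tokens.
        if not values or not all(c in "0123456789abcdef" for c in values[0]):
            return 403  # "Bad URL timestamp"
    return 200

# A well-formed URL passes; one with a corrupted timestamp argument fails.
print(check_cdn_url("https://scontent.fbcdn.net/img.jpg?oh=a1b2c3&oe=5d1e2f"))  # 200
print(check_cdn_url("https://scontent.fbcdn.net/img.jpg?oh=XYZ&oe=5d1e2f"))     # 403
```

Because every cached image URL carried such arguments, feeding bad timestamp data into the tags breaks every image at once, which matches the blanket "download failed" symptoms users saw.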
https://twitter.com/zackwhittaker/status/1146456836998144000 https://twitter.com/jfruh/status/1146460397009924101

According to Reuters, who talked to Facebook representatives, "During one of our routine maintenance operations, we triggered an issue that is making it difficult for some people to upload or send photos and videos," Facebook said. Around 6 pm PT services were restored, with Facebook and Instagram both tweeting that the problems were resolved. There was no acknowledgement or resolution of the outage from WhatsApp's Twitter account. https://twitter.com/instagram/status/1146565551520534528 https://twitter.com/facebook/status/1146571015872552961

Twitter also suffered an unexplained downtime in its direct messaging service. https://twitter.com/TwitterSupport/status/1146447958952439809

The latest string of outages follows a recurring trend of issues hitting social media over the past six months. It started in March, when the Facebook family of apps was hit with a 14-hour outage, the longest in its history. Then in June, Google Cloud went offline, taking with it YouTube, Snapchat, Gmail, and a number of other web services. This month, Verizon caused a major internet outage affecting Amazon, Facebook, and Cloudflare, among others. In the same week, Cloudflare suffered its second major internet outage.

Cloudflare suffers 2nd major internet outage in a week. This time due to globally deploying a rogue regex rule.
Why did Slack suffer an outage on Friday?
Facebook tweet explains 'server config change' for 14-hour outage on all its platforms


Samsung speeds up on-device AI processing with a 4x lighter and 8x faster algorithm

Vincy Davis
03 Jul 2019
4 min read
Yesterday, Samsung announced an on-device AI lightweight algorithm that delivers low-power, high-speed computation. It uses an NPU (Neural Processing Unit) solution to speed up processing, enabling computing that is 4 times lighter and 8 times faster than the existing algorithms using 32-bit deep learning data on servers. Last month, Samsung Electronics announced its goal of expanding its proprietary NPU technology development in order to strengthen Samsung's leadership in the global system semiconductor industry by 2030. Recently, the company delivered an update on this goal at the Conference on Computer Vision and Pattern Recognition (CVPR), with a paper titled "Learning to Quantize Deep Networks by Optimizing Quantization Intervals With Task Loss". A Neural Processing Unit (NPU) is a processor optimized for deep learning algorithm computation, designed to efficiently process thousands of computations simultaneously.

Chang-Kyu Choi, Vice President and head of the Computer Vision Lab at the Samsung Advanced Institute of Technology, says, "Ultimately, in the future we will live in a world where all devices and sensor-based technologies are powered by AI. Samsung's On-Device AI technologies are lower-power, higher-speed solutions for deep learning that will pave the way to this future. They are set to expand the memory, processor and sensor market, as well as other next-generation system semiconductor markets."

Last year, Samsung introduced the Exynos 9 (9820), which featured a Samsung NPU inside the mobile System on Chip (SoC). This product allows mobile devices to perform AI computations independently of any external cloud server.

Samsung uses Quantization Interval Learning (QIL) to retain data accuracy

The Samsung Advanced Institute of Technology (SAIT) developed the on-device AI lightweight technology by adjusting data into groups of under 4 bits while maintaining accurate data recognition.
The technology uses Quantization Interval Learning (QIL) to retain data accuracy. QIL allows quantized networks to maintain the accuracy of full-precision (32-bit) networks at bit widths as low as 4 bits, and minimizes the accuracy degradation of further bit-width reductions to 3 bits and 2 bits. The 4-bit networks preserve the accuracy of the full-precision networks across various architectures, the 3-bit networks yield accuracy comparable to the full-precision networks, and the 2-bit networks suffer only minimal accuracy loss. The quantizer also outperforms existing methods even when trained on a heterogeneous dataset and applied to a pretrained network.

When the data of a deep learning computation is presented in bit groups lower than 4 bits, 'and' and 'or' computations are allowed on top of the simpler arithmetic calculations of addition and multiplication. By using the QIL process, 4-bit computation gives the same results as existing processes while using 1/40 to 1/120 the number of transistors. Because the system requires less hardware and less electricity, it can be mounted directly in the device, at the place where the data from an image or fingerprint sensor is obtained.

Benefits of Samsung's on-device AI technology

A large amount of data can be computed at high speed without consuming excessive electricity. Samsung's system semiconductor capacity will be developed and strengthened by computing data directly within the device itself. By reducing the cost of cloud construction for AI operations, Samsung's on-device AI technology will provide quick and stable performance for use cases such as virtual reality and autonomous driving. It will also keep personal biometric information used for device authentication, such as fingerprint, iris, and face scans, safely on the mobile device.

Earlier this month, Samsung Electronics announced a multi-year strategic partnership with AMD.
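The core idea behind the low-bit quantization described above can be sketched in a few lines. This is a toy illustration only: Samsung's QIL additionally *learns* the clipping interval per layer from the task loss, which is the part omitted here.

```python
def quantize(weights, bits=4, interval=1.0):
    """Uniformly map values clipped to [-interval, interval] onto 2**bits levels.

    Toy sketch of low-bit quantization; QIL learns `interval` end-to-end,
    which this fixed-interval version does not attempt.
    """
    levels = 2 ** bits - 1                          # 4 bits -> 16 levels, 15 steps
    step = 2 * interval / levels
    quantized = []
    for w in weights:
        clipped = max(-interval, min(interval, w))  # clip to the interval
        index = round((clipped + interval) / step)  # nearest discrete level
        quantized.append(index * step - interval)   # map back to value space
    return quantized

weights = [0.03, -0.41, 0.97, 1.8, -2.2]
q = quantize(weights)
print(q)  # each value snaps to one of 16 levels; error is at most half a step
```

Values inside the interval lose at most half a quantization step of precision, which is why 4-bit networks can stay close to full-precision accuracy, while out-of-range values are clipped to the interval edges.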
The strategic alliance covers ultra-low-power, high-performance mobile graphics IP based on AMD Radeon graphics technologies.

Surprisingly, though, many users are not impressed with Samsung's new technology, owing to the poor performance of Samsung's previous devices. https://twitter.com/Wayfarerathome/status/1146013820051218433 https://twitter.com/JLP20/status/1146279124408971264 https://twitter.com/ronEgee/status/1146052914315706368

This technology is not yet implemented in Samsung phones. It remains to be seen whether the new on-device AI technology can change users' opinion of Samsung. Visit the Samsung Newsroom site for more details.

Samsung AI lab researchers present a system that can animate heads with one-shot learning
Facebook app is undeletable on Samsung phones and can possibly track your movements, reports Bloomberg
Samsung opens its AI based Bixby voice assistant to third-party developers


Introducing Vector, a high-performance data router, written in Rust

Amrata Joshi
03 Jul 2019
3 min read
Yesterday, the team at Timber.io, a cloud-based logging platform, released Vector, a high-performance observability data router that makes collecting, transforming, and sending logs, metrics, and events easy. One of the reasons behind building Vector was to integrate mtail's functionality into a bigger project; mtail is a tool used for extracting metrics from application logs. Licensed under the Apache License, Version 2.0, Vector decouples data collection and routing from user services, giving users control and data ownership. Vector, which is written in Rust, compiles to a single static binary and has been designed to be deployed across the entire infrastructure.

Concepts of Vector

The following is a diagram depicting the basic concepts that Vector comprises: Image source: Vector

Sources: When Vector ingests data, it normalizes that data into a record, which sets the stage for easy and consistent processing. Examples of sources include syslog, tcp, file, and stdin.

Transforms: A transform modifies an event or the stream as a whole, like a filter, parser, sampler, or aggregator.

Sinks: A sink is a destination for events, and its design and transmission method are controlled by the downstream service it is interacting with. For instance, the TCP sink will stream individual records, while the S3 sink will buffer and flush data.

Features of Vector

Memory efficient and fast: Vector is fast and memory-efficient, with no runtime and no garbage collector.

Test cases: Vector includes performance and correctness tests; the performance tests measure and capture detailed performance data, while the correctness tests verify behavior. The team behind Vector has also invested in a robust test harness that provides a data-driven testing environment. Here are the test results: Image source: GitHub

Processing data: Vector collects data from various sources in various shapes and sets the stage for easy and consistent processing of that data.

Serves as a single tool: It serves as a lightweight agent as well as a service, working as a single tool for users.

Guarantee support matrix: It features a guarantee support matrix that helps users understand their tradeoffs.

Easy deployment: Vector cross-compiles to a single static binary with no runtime.

Users seem happy about this news, as they think Vector will be useful to them. A user commented on Hacker News, "I'm learning Rust and eventually plan to build such a solution but I think a lot of this project can be repurposed for what I asked much faster than building a new one. Cheers on this open source project. I will contribute whatever I can. Thanks!!"

More metrics-focused sources and sinks are expected in Vector in the future. A member of the Vector project commented, "It's still slightly rough around the edges, but Vector can actually ingest metrics today in addition to deriving metrics from log events. We have a source component that speaks the statsd protocol which can then feed into our prometheus sink. We're planning to add more metrics-focused sources and sinks in the future (e.g. graphite, datadog, etc), so check back soon!"

To know more about this news, check out Vector's page.

Implementing routing with React Router and GraphQL [Tutorial]
TP-Link kept thousands of vulnerable routers at risk of remote hijack, failed to alert customers
Amazon buys 'Eero' mesh router startup, adding fuel to its in-house Alexa smart home ecosystem ambitions
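The source → transform → sink model the article describes can be sketched in a few lines of Python. This is a conceptual illustration only; Vector itself is written in Rust, and the component names below are hypothetical, not Vector's API.

```python
def stdin_source(lines):
    """Source: ingest raw lines and normalize each one into a record (dict)."""
    for line in lines:
        yield {"message": line.strip()}

def filter_transform(records, needle):
    """Transform: keep only records whose message contains `needle`."""
    return (r for r in records if needle in r["message"])

def memory_sink(records):
    """Sink: deliver records to a destination (here, just buffer in a list)."""
    return list(records)

# Wire the three stages together: source -> transform -> sink.
raw = ["GET /index 200", "GET /missing 404", "POST /login 200"]
pipeline = memory_sink(filter_transform(stdin_source(raw), "404"))
print(pipeline)
```

The point of the model is that each stage only agrees on the record shape, so sources, transforms, and sinks can be swapped independently, which is what lets Vector mix syslog or file sources with TCP or S3 sinks.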


Cloudflare suffers 2nd major internet outage in a week. This time due to globally deploying a rogue regex rule.

Savia Lobo
03 Jul 2019
4 min read
For the second time in less than a week, Cloudflare was at the center of a major internet outage, affecting many websites for about an hour yesterday due to a software glitch. Last week, Cloudflare users faced a major outage when Verizon accidentally rerouted IP packets after it wrongly accepted a network misconfiguration from a small ISP in Pennsylvania, USA.

Cloudflare's CTO John Graham-Cumming wrote that yesterday's outage was due to a massive spike in CPU utilization in the network. Source: Cloudflare

Many users complained of seeing "502 errors" displayed in their browsers when they tried to visit its clients. Down Detector, the website which keeps users updated on ongoing outages and service interruptions, also flashed a 502 error message. https://twitter.com/t_husoy/status/1146058460141772802

Graham-Cumming wrote, "This CPU spike was caused by a bad software deploy that was rolled back. Once rolled back the service returned to normal operation and all domains using Cloudflare returned to normal traffic levels".

A single misconfigured rule, the actual cause of the outage

The cause of the outage was a single misconfigured rule within the Cloudflare Web Application Firewall (WAF), deployed during a routine deployment of new Cloudflare WAF Managed Rules. Though the company has automated systems to run test suites and a procedure for deploying progressively to prevent incidents, these WAF rules were deployed globally in one go, causing yesterday's outage. https://twitter.com/mjos_crypto/status/1146168236393807872

The new rules were meant to improve the blocking of inline JavaScript used in attacks. "Unfortunately, one of these rules contained a regular expression that caused CPU to spike to 100% on our machines worldwide. This 100% CPU spike caused the 502 errors that our customers saw. At its worst traffic dropped by 82%", Graham-Cumming writes.
After finding the actual cause of the issue, Cloudflare issued a 'global kill' on the WAF Managed Rulesets, which instantly dropped CPU back to normal and restored traffic at 1409 UTC. It then made sure the problem was fixed correctly and re-enabled the WAF Managed Rulesets at 1452 UTC. https://twitter.com/SwiftOnSecurity/status/1146260831899914247

"Our testing processes were insufficient in this case and we are reviewing and making changes to our testing and deployment process to avoid incidents like this in the future", the Cloudflare blog states. A user said Cloudflare should have rolled the feature out in stages rather than globally. https://twitter.com/copyconstruct/status/1146199044965797888

Cloudflare confirms the outage was 'a mistake' and not an attack

Cloudflare also received speculation that this outage was caused by a DDoS attack from China, Iran, North Korea, and elsewhere, which Graham-Cumming tweeted was untrue: "It was not an attack by anyone from anywhere". Cloudflare's CEO, Matthew Prince, also confirmed that the outage was not the result of an attack but a "mistake on our part." https://twitter.com/jgrahamc/status/1146078278278635520

Many users have applauded Cloudflare for accepting that it was an organizational/engineering management issue and not an individual's fault. https://twitter.com/GossiTheDog/status/1146188220268470277

Prince told Inc., "I'm not an alarmist or a conspiracy theorist, but you don't have to be either to recognize that it is ultimately your responsibility to have a plan. If all it takes for half the internet to go dark for 20 minutes is some poorly deployed software code, imagine what happens when the next time it's intentional."

To know more about this news in detail, read Cloudflare's official blog.
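This class of bug, catastrophic backtracking, is easy to reproduce with an illustrative pattern (not Cloudflare's actual rule): a regex whose nested quantifiers can split the input in exponentially many ways forces a backtracking engine to try every split before reporting a failed match.

```python
import re
import time

# An ambiguous pattern: "(a+)+$" can divide a run of 'a's between the inner
# and outer repetition in exponentially many ways, and the trailing "!" that
# prevents a match forces the engine to try every division before failing.
evil = re.compile(r"(a+)+$")

for n in (8, 12, 16):
    text = "a" * n + "!"                  # almost matches, never matches
    start = time.perf_counter()
    result = evil.match(text)
    elapsed = time.perf_counter() - start
    print(n, result, f"{elapsed:.4f}s")   # matching time grows with each extra 'a'
```

Cloudflare's offending rule reportedly contained the similarly ambiguous subexpression `.*.*=.*`; on an engine without backtracking limits, a modest non-matching input is enough to pin a CPU core at 100%, which is consistent with the worldwide CPU spike described above.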
A new study reveals how shopping websites use 'dark patterns' to deceive you into buying things you may not want
OpenID Foundation questions Apple's Sign In feature, says it has security and privacy risks
Email app Superhuman allows senders to spy on recipients through tracking pixels embedded in emails, warns Mike Davidson


Facebook fined $2.3 million by Germany for providing incomplete information about hate speech content

Sugandha Lahoti
03 Jul 2019
4 min read
Yesterday, German authorities said that they have imposed a 2-million-euro ($2.3 million) fine on Facebook under a law designed to combat hate speech. The authorities said that Facebook had provided "incomplete" information in mandatory transparency reports about illegal content, such as hate speech. Facebook received 1,704 complaints and removed 362 posts between January 2018 and June 2018. In the second half of 2018, the company received 1,048 complaints.

In a statement to Reuters, Germany's Federal Office of Justice said that by tallying only certain categories of complaints, the web giant had created a skewed picture of the extent of violations on its platform: "The report lists only a fraction of complaints about illegal content which created a distorted public image about the size of the illegal content and the way the social network deals with the complaints." The agency said Facebook's report did not include complaints relating to anti-Semitic insults and material designed to incite hatred against persons or groups based on their religion or ethnicity.

Germany's NetzDG law has been criticized by experts

The NetzDG law, under which Facebook was fined, is Germany's internet transparency law, passed in 2017 to combat agitation and fake news on social networks. Under this law, commercial social networks are obliged to establish a transparent procedure for dealing with complaints about illegal content and are subject to reporting and documentation obligations. Per the law, social media platforms must check complaints immediately, delete "obviously illegal" content within 24 hours, and, after checking, delete or block access to any other illegal content within 7 days. Deleted content must be stored for at least ten weeks for evidence purposes. In addition, providers must name a service agent in Germany, both for the authorities and for civil proceedings, and submit a six-monthly report on complaints received and how they have been dealt with.
However, the law has been on the receiving end of constant criticism from various experts, journalists, social networks, the UN, and the EU. Experts said that short, rigid deletion periods and the threat of high fines would compromise individuals' freedom of speech: social networks would be forced to remove contributions in cases of doubt, even those requiring context-dependent consideration. Facebook had also criticized the NetzDG draft. In a statement sent to the German Bundestag at the end of May 2017, the company stated, "The constitutional state must not pass on its own shortcomings and responsibility to private companies. Preventing and combating hate speech and false reports is a public task from which the state must not escape."

In response to the fine, Facebook said, "We want to remove hate speech as quickly and effectively as possible and work to do so. We are confident our published NetzDG reports are in accordance with the law, but as many critics have pointed out, the law lacks clarity." "We will analyze the fine notice carefully and reserve the right to appeal," Facebook added.

Facebook is also facing privacy probes over its policies and data breaches, and was fined by the EU for failing to give correct information during the regulatory review of its WhatsApp takeover. Last week, Italy's privacy regulator fined Facebook €1 million for violations connected to the Cambridge Analytica scandal. The agency said 57 Italians had downloaded a personality test app called ThisIsYourDigitalLife, which was used to collect Facebook information on both themselves and their Facebook friends. The app was then used to provide data to Cambridge Analytica for targeting voters during the 2016 U.S. presidential election.

Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily
Mark Zuckerberg is a liar, fraudster, unfit to be the C.E.O. of Facebook, alleges Aaron Greenspan
YouTube’s new policy to fight online hate and misinformation misfires due to poor execution

Microsoft will not support Windows registry backup by default, to reduce disk footprint size from Windows 10 onwards

Vincy Davis
02 Jul 2019
3 min read
After the Windows 10 October 2018 update, it was speculated that Windows 10 had a bug preventing the successful execution of the registry backup task, which is usually enabled by default on PCs running the operating system. Eight months later, Microsoft has answered this speculation by stating that it was not a bug but a change in "design" that prevented the execution of registry backups. In all that time, Microsoft did not notify users of the change. Around 800M Windows 10 users could have lost data if the Windows System Restore point had failed.

Last week, Microsoft released a support document stating that from Windows 10 version 1803 onwards, Windows will no longer back up the system registry to the RegBack folder by default. It also said the change is "intended to help reduce the overall disk footprint size of Windows." If you browse to the Windows\System32\config\RegBack folder, all registry hives are still present, but each has a file size of 0 KB. Registry backups are extremely important for users, as they are the only option available if the Windows System Restore point fails.

How to manually switch back on automatic registry backups

Though Windows no longer performs registry backups by default, Microsoft has not entirely removed the feature. Windows 10 users can change the new default behavior using the following steps: First, configure a new REG_DWORD registry entry at HKLM\System\CurrentControlSet\Control\Session Manager\Configuration Manager\EnablePeriodicBackup and assign it the value 1. After the system restarts, Windows will back up the registry to the RegBack folder and create a RegIdleBackup task to manage subsequent backups. Windows stores the task information in the Scheduled Task Library, in the Microsoft\Windows\Registry folder.
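The steps above can be scripted. A minimal sketch that builds the equivalent `reg.exe` command line, using the key path and value name from Microsoft's support document (run the resulting command in an elevated prompt on Windows, then reboot):

```python
def enable_periodic_backup_command() -> str:
    """Build the reg.exe command that re-enables periodic registry backups.

    The key path and value name are taken from Microsoft's support
    document; this only constructs the command, it does not run it.
    """
    key = (r"HKLM\System\CurrentControlSet\Control\Session Manager"
           r"\Configuration Manager")
    # EnablePeriodicBackup = 1 (REG_DWORD) turns RegBack backups back on.
    return f'reg add "{key}" /v EnablePeriodicBackup /t REG_DWORD /d 1 /f'

print(enable_periodic_backup_command())
```

The `/f` flag suppresses the overwrite prompt; without it, `reg add` asks for confirmation if the value already exists.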
The task's properties are shown in a screenshot in Microsoft's support document.

Users are skeptical that Microsoft removed registry backups merely to save disk footprint. A user on Hacker News comments, "50-100MB seems like a miniscule amount of space to warrant something like this. My WinSxS folder alone is almost 10GB. If they wanted to save space, even a modest improvement in managing updates would yield space saving results orders of magnitude greater than this." Another user adds, "Of all the stuff crammed automatically on Windows 10 install .. they can't be serious about saving space." A third user wrote, "This sort of thinking might have been understandable back during the '90's. However, today, people have plenty of free space on their hard disk. The track record of Windows 10 has been so poor lately that it's surprising that MS got so overconfident that they decided that they didn't need safeguards like this any longer."

Read the Microsoft support document for more details.

Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities
Docker and Microsoft collaborate over WSL 2, future of Docker Desktop for Windows is near
Microsoft finally makes Hyper-V Server 2019 available, after a delay of more than six months


“AI systems should be developed and operated in a manner that respects internationally recognized human rights”, declares IEEE

Sugandha Lahoti
02 Jul 2019
3 min read
This is a big win for the Artificial Intelligence community. IEEE has released a statement from the IEEE Board of Directors stating that the committee will now support the inclusion of ethical considerations in the design and deployment of autonomous and intelligent systems (A/IS). The IEEE committee recognizes that today's AI systems present new social, legal and ethical challenges, and that they also have to address issues of systemic risk, diminishing trust, privacy challenges, and questions of data transparency, ownership and agency. Therefore, developers of such systems need to use practices and standards that respect and acknowledge the ethical obligations of such systems in their human and social context.

Concrete steps taken by IEEE

- A/IS should be developed and operated in a manner that respects internationally recognized human rights.
- A/IS developers should consider the impact on individual and societal well-being to be central in development.
- Developers should respect each individual's ability to maintain appropriate control over their personal data and identifying information.
- Developers and operators should consider the effectiveness and fitness of A/IS technologies for the purpose of their systems.
- The technical basis of particular decisions made by an A/IS should be discoverable.
- A/IS should be designed and operated in a manner that permits production of an unambiguous rationale for the decisions made by the system.
- Creators of A/IS should consider and guard against potential misuses and operational risks.
- Designers of A/IS should specify, and operators should possess, the knowledge and skills required for safe and effective operation.

To that end, the IEEE committee has taken various initiatives to build ethically aligned AI systems.
In March, they released a report, "Ethically Aligned Design – A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Edition 1.0," that sets forth scientific analysis and resources, high-level principles, and actionable recommendations for ethical implementation of A/IS. They also launched the IEEE Tech Ethics program, which seeks to ensure that the ethical and societal implications of technology become an integral part of the development process by driving conversation and debate on these issues. The IEEE Code of Ethics also showcases IEEE's commitment to ethical design and the societal implications of intelligent systems. In a statement, the IEEE committee said, "IEEE is committed to developing trust in technologies through transparency, technical community building, and partnership across regions and nations, as a service to humanity. Measures that ensure that A/IS are developed and deployed with appropriate ethical consideration for human and societal values will enhance trust in these technologies, which in turn will increase the ability of the technologies to achieve much broader beneficial societal impacts." The news was well received by the developer community after John C. Havens, Executive Director at The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, shared it on Twitter. Users called it arguably the most globally impactful step in this space and a milestone for all. https://twitter.com/jameshorton/status/1145900183042973698 https://twitter.com/GReal1111/status/1145826945336262662   Some pointed out that all tech companies should sign on to this statement. https://twitter.com/Dktr_Sus/status/1145866352176979968 Read the full report here. The US puts Huawei on BIS List forcing IEEE to ban Huawei employees from peer-reviewing or editing research papers.
IEEE Standards Association releases ethics guidelines for automation and intelligent systems IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others.


Introducing Qwant Maps: an open source and privacy-preserving maps, with exclusive control over geolocated data

Vincy Davis
01 Jul 2019
3 min read
Last week, Betterweb announced the release of Qwant Maps, an open source and privacy-preserving map service. In a landscape where services like Google Maps constantly track user data, Qwant Maps respects user privacy and proposes to give users exclusive control over their geolocated data. All components developed for Qwant Maps are open source, so users can improve their experience by contributing directly to the map. Qwant Maps uses OpenStreetMap as its main data source. OpenStreetMap is a free and collaborative geographical database supported today by more than a million contributors around the world; any volunteer can freely enrich its database with new places. Qwant Maps uses OpenStreetMap data to generate its own vector tiles, base map, and web APIs.

Key components of Qwant Maps

Inbuilt search engine

Qwant Maps uses the Mimirsbrunn search engine, which allows users to search for "punctual" geospatial objects such as addresses, administrative areas and points of interest. Mimirsbrunn, also called Mimir, is a geocoding web service that matches a user's unstructured text query to a specific point on the map.

Renders the base map from vector tiles

Qwant Maps renders its base map from vector tiles, which are generated, served and rendered by the Kartotherian stack, developed by the Wikimedia Foundation according to the OpenMapTiles open source data schema. Vector tiles offer more technical flexibility, allowing easy integration of different styles and native support for specific renderings like 3D and rotation. The Qwant Maps tiles are updated every 24 hours to incorporate daily changes from OpenStreetMap data.

Qwant Maps uses a Python web API

Idunn is the Python web API that exploits different data sources to provide users with the most useful information, presenting the map in such a way that all the information is understandable.
The main goal of Idunn is to add context for all the required points of interest in a consistent referential.

Users are quite excited about the open source and privacy-preserving features of Qwant Maps:

https://twitter.com/TonioBerry/status/1145072595601121281
https://twitter.com/TFressin/status/1145091285105164288
https://twitter.com/AnC0mmie/status/1144630389224431617

However, some users are already complaining about its inaccuracy.

https://twitter.com/Syenta1/status/1144616659195441152

A user on Hacker News states, "Quant Maps search seems to be quite lacking. Searched for a large store in my city, where I recently drove using Google Maps, and it can't find it. It just responds to a match to the city name. When I used just the name without the city, it found a pub halfway around the world with exact name match." Another user comments, "I used Qwant for a while (lite, the main version is so cluttered), but found the results to be hardly usable. I do hope they manage to stay afloat though, as I am happy about any Google challenger."

Visit the Qwant Maps website for more details.

Apple previews iOS 13: Sign in with Apple, dark mode, advanced photo and camera features, and an all-new Maps experience
European Consumer groups accuse Google of tracking its users' location, calls it a breach of GDPR
Launching soon: My Business App on Google Maps that will let you chat with businesses, on the go


Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities

Vincy Davis
01 Jul 2019
4 min read
Microsoft is now aiming to add its own contributions and strengthen Linux by getting early access to its security vulnerabilities. Last week, Microsoft applied for membership in the official closed group of Linux distributors, the linux-distros mailing list. The linux-distros mailing list is used by Linux distributors to privately report, coordinate and discuss security issues; issues disclosed to the group are not made public for up to 14 days. Members of the group include Amazon Linux AMI, Openwall, Oracle, Red Hat, SUSE and Ubuntu.

Sasha Levin, a Microsoft Linux kernel developer, submitted the membership application on behalf of Microsoft. If approved, it would allow Microsoft to be part of the private behind-the-scenes discussion about vulnerabilities, patches, and ongoing security issues with the open-source kernel and related code. These discussions are crucial for getting early information and coordinating the deployment of fixes before they are made public.

One of the main requirements for membership in the linux-distros mailing list is to have a Unix-like distro that makes use of open source components. To show that Microsoft qualifies, Levin cited Microsoft's Azure Sphere and the Windows Subsystem for Linux (WSL) 2 as examples of distro-like builds. Last month, Microsoft announced that Windows Subsystem for Linux 2 (WSL 2) is available to Windows Insiders. Starting with build 18917, Windows will ship with a full Linux kernel, allowing WSL 2 to run inside a VM and provide full access to Linux system calls. The kernel will be specifically tuned for WSL 2 and fully open sourced, with the full configuration available on GitHub, enabling a faster turnaround on kernel updates when new versions become available. The new architecture aims to increase file system performance and provide full system call compatibility in a Linux environment.
Levin also highlighted that Microsoft's Linux builds are open sourced and that the company contributes to the community. Levin has also revealed that Linux is used more on Azure than Windows Server. This does not come as a surprise, as this is not the first time Microsoft has aligned itself with Linux: there are at least eight Linux distros available on Azure, and Microsoft's former CEO Steve Ballmer, who once called Linux "a cancer," now says that he loves Linux.

This move to embrace Linux is being seen as Microsoft's way of staying relevant in the industry. In a statement to The Register, the open-source pioneer Bruce Perens said, "What we are seeing here is that Microsoft wants access to early security alerts on Linux. They're joining it as a Linux distributor because that's how it's structured. Microsoft obviously has a lot of Linux plays, and it's their responsibility to fix known security bugs as quickly as other Linux distributors."

Most users are of the opinion that Microsoft embracing Linux was bound to happen. With its immense advantages, Linux is the default option for many. A user on Hacker News says, "The biggest practical advantage I have found is that Linux has dramatically better file system I/O performance. Like, a C++ project that builds in 20 seconds on Linux, takes several minutes to build on the same hardware in Windows." Another user comments, "I'm surprised it took this long. With Linux support for .NET and SQL Server, there is zero reason to host anything new on Windows now (of course legacy enterprise software is another story). I wouldn't be surprised if Windows Server is fully EOL'd in a few years." Another user wrote, "On Azure, a Windows VM instance tends to cost about 50% more than the equivalent instance running Linux, so it is a no brainer to use Linux if your application is operating system independent." Another comment reads, "Linux is the default choice when you set up a VM."

Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap
Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels
Unity Editor will now officially support Linux

Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read
Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. The major advantage of Deep Learning Containers is the ability to test machine learning applications on-premises and then quickly move them to the cloud.

Support for PyTorch, TensorFlow, scikit-learn and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can run both in the cloud and on-premises. They support machine learning frameworks such as PyTorch, TensorFlow 2.0, and TensorFlow 1.13. AWS's Deep Learning Containers support the TensorFlow and Apache MXNet frameworks; Google's ML containers do not support Apache MXNet, but they come with PyTorch, TensorFlow, scikit-learn and R pre-installed.

Various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers bundled with various tools for running deep learning algorithms. These include preconfigured Jupyter Notebooks, interactive tools used to work with and share code, visualizations, equations and text, as well as Google Kubernetes Engine clusters for orchestrating multiple container deployments. The containers also ship with packages and tools such as Nvidia's CUDA, cuDNN, and NCCL.

Docker images work in the cloud and on-premises

The Docker images work in the cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm.
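As a quick sketch of how such a container might be used locally, a TensorFlow image can be pulled and run with Docker. The image name below is an assumption based on Google's deeplearning-platform-release registry; check the Deep Learning Containers documentation for the current image names and tags:

```shell
# Pull a TensorFlow Deep Learning Container image
# (image name is an assumption; see the docs for available images)
docker pull gcr.io/deeplearning-platform-release/tf-cpu

# Run it locally, exposing the preconfigured JupyterLab on port 8080 and
# mounting a local notebook directory (placeholder path) into the container
docker run -d -p 8080:8080 \
  -v /path/to/local/notebooks:/home \
  gcr.io/deeplearning-platform-release/tf-cpu
```

The same image can then be deployed unchanged to GKE or AI Platform, which is exactly the consistency across environments that the service advertises.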
Mike Cheng, a software engineer at Google Cloud, said in a blog post, "If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime." He further added, "Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE)."

For more information, visit the AI Platform Deep Learning Containers documentation.

Do Google Ads secretly track Stack Overflow users?
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"


Apple gets into chip development and self-driving autonomous tech business

Amrata Joshi
28 Jun 2019
3 min read
Apple recently hired Mike Filippo, lead CPU architect and one of the top chip engineers at ARM Holdings, a semiconductor and software design company. According to Filippo's updated LinkedIn profile, he joined Apple in May as an architect and is working out of the Austin, Texas area. He worked at ARM for ten years as the lead engineer designing the chips used in most smartphones and tablets, and was previously a key designer at chipmakers Advanced Micro Devices and Intel Corp.

In a statement to Bloomberg, a spokesman from ARM said, "Mike was a long-time valuable member of the ARM community." He further added, "We appreciate all of his efforts and wish him well in his next endeavor."

Apple's A-series chips used in its mobile devices are based on ARM technology, while Mac computers have shipped with Intel processors since 2006. Hence, Filippo's experience at these companies could prove to be a major asset for Apple. Apple reportedly plans to start using its own chips in Mac computers in 2020, replacing Intel processors with ARM architecture-based designs. Apple also plans to expand its in-house chip-making work to new device categories like a headset that meshes augmented and virtual reality, Bloomberg reports.

Apple acquires Drive.ai, an autonomous driving startup

Apart from the chip-making business, there are reports of Apple joining the race for self-driving autonomous technology. The company has its own self-driving vehicle project, called Titan, which is still a work in progress. On Wednesday, Axios reported that Apple acquired Drive.ai, an autonomous driving startup once valued at $200 million. Drive.ai was on the verge of shutting down and was laying off all its staff. This news indicates that Apple is interested in testing the waters of self-driving autonomous technology, and the move might help speed up the Titan project.
Drive.ai had been in search of a buyer since February this year and had communicated with many potential acquirers before closing the deal with Apple, which also purchased Drive.ai's autonomous cars and other assets. The amount Apple paid for Drive.ai has not been disclosed, but as per a recent report, Apple was expected to pay less than the $77 million invested by venture capitalists. The company has also hired engineers and managers from Waymo and Tesla, and has recruited around five software engineers from Drive.ai, as per a report from the San Francisco Chronicle. It seems Apple is mostly hiring people focused on engineering and product design.

Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Apple previews macOS Catalina 10.15 beta, featuring Apple music, TV apps, security, zsh shell, driverKit, and much more!


Axon, a major police body-worn camera maker, says no to facial recognition tech in its devices taking ethics advisory panel’s advice

Fatema Patrawala
28 Jun 2019
6 min read
Facial recognition is, to say the least, a contentious technology these days. Yesterday, Axon Enterprise, formerly known as Taser International and the largest police body-camera maker in the US, announced that it will not incorporate facial recognition technology in its law-enforcement devices.

https://twitter.com/wsisaac/status/1144199471657553920

This move coincides with growing public opposition to facial recognition technology, including from tech workers, with some US cities mulling a ban on its use. Last month, San Francisco became the first city to ban local government use of facial recognition, with Oakland, California, and Somerville, Massachusetts, expected to enact similar legislation soon. California's state Legislature is also considering a bill that would ban the use of facial recognition on police body cameras.

Axon came to this decision after reviewing a report published by its ethics advisory panel. The panel urged the company not to pair its best-selling body cameras with software that could allow officers to identify people in real time based on their faces. In April last year, Axon established an AI and Policing Technology Ethics Board to guide and advise the company on ethical issues related to the development and deployment of new artificial intelligence (AI) powered policing technologies. The board advises the company on products under consideration or development, but does not formally approve or reject any particular product. This is the board's first report, and it provides thoughtful and actionable recommendations to Axon regarding face recognition technology. The board is an eleven-member external advisory body made up of experts from various fields including AI, computer science, privacy, law enforcement, civil liberties, and public policy. The company also emphasizes the importance of having a diverse board for this guidance.
The current board members are:

- Ali Farhadi, an Associate Professor in the Department of Computer Science and Engineering at the University of Washington
- Barry Friedman, an academic and one of the leading authorities on constitutional law, policing, criminal procedure, and federal courts
- Christy E. Lopez, a Georgetown Law Distinguished Visitor from Practice and former Deputy Chief in the DOJ Civil Rights Division
- Jeremy Gillula, Tech Projects Director at the Electronic Frontier Foundation
- Jim Bueermann, President of the Police Foundation in Washington, DC
- Kathleen M. O'Toole, former Chief of Police for the Seattle Police Department
- Mecole Jordan, Executive Director at United Congress of Community and Religious Organization (UCCRO)
- Miles Brundage, AI Policy Research Fellow with the Strategic AI Research Center at FHI
- Tracy Ann Kosa, Senior Program Manager at Google
- Vera Bumpers, President at National Organization of Black Law Enforcement Executives (NOBLE)
- Walt McNeil, Leon County Sheriff in Florida

Here are a few tweets from some of the board members:

https://twitter.com/Miles_Brundage/status/1144234344250109952
https://twitter.com/Christy_E_Lopez/status/1144328348040085504

The members of the board cited the technology's accuracy problems: it could lead to false identifications, particularly of women and people with dark skin. The technology could also lead to expanded government surveillance and intrusive police activity, the board said. More specifically, the findings of the report are as follows:

- Facial recognition simply isn't good enough right now for it to be used ethically.
- Don't talk about "accuracy"; talk about specific false negatives and positives, since those are more revealing and relevant.
- Any facial recognition model that is used shouldn't be overly customizable, or it will open up the possibility of abuse.
- Any application of facial recognition should only be initiated with the consent and input of those it will affect.
- Until there is strong evidence that these programs provide real benefits, there should be no discussion of use.
- Facial recognition technologies do not exist, nor will they be used, in a political or ethical vacuum, so consider the real world when developing or deploying them.

In a blog post on Axon's website, CEO Rick Smith said current facial recognition technology "raises serious ethical concerns." But Smith also said that his team of artificial intelligence researchers would "continue to evaluate the state of facial recognition technologies," leaving open the possibility of adding the software to body cameras in the future.

Axon holds the largest market share among body camera manufacturers in the United States, supplying cameras to 47 of the 60 biggest police agencies. It does not say how many police agencies it has under contract, but says that more than 200,000 of its cameras are in use around the country. As per reports from NBC, this move from Axon is appreciated by civil rights and privacy advocates, though with skepticism. They noted that real-time facial recognition on police body cameras is not considered feasible at the moment, and they expressed concern that Axon could reverse course once that changed. "This is ultimately an issue about the kind of society we want to live in, not about technical specs," said Harlan Yu, executive director of Upturn, which monitors police agencies' body camera policies, and who is an outspoken Axon critic.

https://twitter.com/harlanyu/status/1144278309842370560

Rather than rely on pledges from technology companies, lawmakers should impose regulations on how facial recognition is used, the advocates said.
"Axon leaves open the possibility that it may include face recognition in the future, which is why we need federal and state laws, like the current proposal in California, that would ban the use of facial recognition on body cameras altogether," said Jennifer Lynch, surveillance litigation director at the Electronic Frontier Foundation, a civil liberties nonprofit. Brendan Klare, CEO of Rank One Computing, whose facial recognition software is used by many police departments to identify people in still images, told NBC that Axon's announcement is a way to make the company look good while making little substantive impact. "The more important thing to point out here is that face recognition on body cameras really isn't technically feasible right now anyways," Klare said.

While Axon has very little to lose from its announcement, other players in the industry took this as an opportunity. A couple of hours after Axon's announcement, the head of U.K.-based company Digital Barriers, which is trying to break into the U.S. body camera market with its facial recognition-enabled devices, tweeted that Axon's move was good news for his company.

https://twitter.com/UKZak/status/1144225152915378176

Amazon rejects all 11 shareholder proposals including the employee-led climate resolution at Annual shareholder meeting
Amazon patents AI-powered drones to provide 'surveillance as a service'
San Francisco Board of Supervisors vote in favour to ban use of facial recognition tech in city

Jony Ive, Apple’s chief design officer departs after 27 years at Apple to form an independent design company; Apple to be a key client

Sugandha Lahoti
28 Jun 2019
5 min read
The man who shaped the iPhone, Jony Ive, is departing his position as Apple's chief design officer to start his own independent design company, LoveFrom. After 27 years at Apple, he will transition out later this year, and LoveFrom will formally launch in 2020. Apple will be one of the primary clients of Ive's new design company, and he will continue to work closely on projects for Apple.

"Jony is a singular figure in the design world and his role in Apple's revival cannot be overstated, from 1998's groundbreaking iMac to the iPhone and the unprecedented ambition of Apple Park, where recently he has been putting so much of his energy and care," said Tim Cook, Apple's CEO, in the official press release.

Ive has helped create some of Apple's most recognized and popular products. He joined the firm in the early 1990s and began leading Apple's design team in 1996. He became senior vice president of industrial design in 1997 and subsequently headed the industrial design team responsible for most of the company's significant hardware products. During his stint at Apple, Ive worked on a wide range of Macs, the iPod, iPhone, iPad, Apple Watch, and more. He also had a hand in designing the company's "spaceship" Apple Park campus and establishing the look and feel of Apple retail stores.

Since 2012, Ive had overseen design for both hardware and software at Apple, roles that had previously been separate. Apple said on Thursday the roles would again be split, with design team leaders Evans Hankey taking over as vice president of industrial design and Alan Dye becoming vice president of human interface design. "This just seems like a natural and gentle time to make this change," Ive said in an interview with the Financial Times. "After nearly 30 years and countless projects, I am most proud of the lasting work we have done to create a design team, process and culture at Apple that is without peer," Ive said in the press release.
"Today it is stronger, more vibrant and more talented than at any point in Apple's history. The team will certainly thrive under the excellent leadership of Evans, Alan and Jeff, who have been among my closest collaborators. I have the utmost confidence in my designer colleagues at Apple, who remain my closest friends, and I look forward to working with them for many years to come."

On the Ive-Jobs-Cook conundrum

Jony Ive and Steve Jobs shared a close relationship. According to Jobs biographer Walter Isaacson, the two would have lunch together every day and talk about design in the afternoon; Jobs considered Ive a "spiritual partner," according to Isaacson's book. After the death of Steve Jobs, there was speculation that Jony Ive might one day move into the chief executive's office. However, it was Tim Cook who took over, and Cook was more interested in managing supply chains than in innovating new products and devices. Ive's presence has helped deflect some criticism that the company lost some of its innovative flair after Jobs' death.

John Gruber, a writer and the inventor of the Markdown markup language, wrote a blog post on Ive's departure pointing out the big difference between Ive under Jobs and Ive under Cook. He says, "This news dropped like a bomb. As far as I can tell no one in the media got a heads up about this news. Ever since Steve Jobs died it's seemed to me that Ive ran his own media interaction." He further adds, "From a product standpoint, the post-Jobs era at Apple has been the Jony Ive era, not the Tim Cook era. That's not a knock on Tim Cook. To his credit, Tim Cook has never pretended to be a product guy. My gut sense for years has been that Ive without Jobs has been like McCartney without Lennon."

On Ive working with Apple after his departure, Gruber writes, "This angle that he's still going to work with Apple as an independent design firm seems like pure spin. You're either at Apple or you're not. Ive is out.
Also, Apple's hardware and industrial design teams work so far out that, even if I'm right and Ive is now effectively out of Apple, we'll still be seeing Ive-designed hardware 5 years from now. It is going to take a long time to evaluate his absence. I don't worry that Apple is in trouble because Jony Ive is leaving; I worry that Apple is in trouble because he's not being replaced."

People on Twitter seemed to agree with Gruber's analysis.

https://twitter.com/waltmossberg/status/1144418270000402433
https://twitter.com/reckless/status/1144376472100061184
https://twitter.com/kanishkdudeja/status/1144497808017203200

Others celebrated Ive's work and offered him their best wishes.

https://twitter.com/surabhi140/status/1144498594407276550
https://twitter.com/Shravster/status/1144498391147147265

Ive is the second major departure for Apple this year. In April, Apple retail chief Angela Ahrendts left the company; her departure drew mixed reactions from consumers and critics. Apple has, however, been recruiting some high-profile people this year. In April, it took a major step toward strengthening its AI team by hiring Ian Goodfellow as its director of machine learning. It also recently hired high-profile marketing exec Nick Law, previously the chief creative officer of Publicis Groupe, and recruited Michael Schwekutsch, the Tesla VP overseeing electric powertrains, as a Senior Director of Engineering at the Special Project Group.

WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, iPad and more
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Apple proposes a "privacy-focused" ad click attribution model for counting conversions without tracking users

Amrata Joshi
27 Jun 2019
4 min read

Mozilla introduces Track THIS, a new tool that will create fake browsing history and fool advertisers

Most of us worry about our activities being tracked on the internet. Remember the last time you saw ads based on your interests or your browsing history and wondered whether you were being tracked? Many of our activities are tracked on the web through cookies, which record things such as language preferences, the websites a user has visited, and much more. The problem is compounded when data brokers and advertising networks use these cookies to collect user information without consent. Users need to have control over what advertisers know about them.

Earlier this month, the team at Mozilla Firefox announced Enhanced Tracking Protection, which blocks third-party cookies by default in the flagship Firefox Quantum browser. In addition, two days ago the team announced the launch of a project called Track THIS, a tool that can help you fool advertisers.

Track THIS opens up 100 tabs crafted to fit a specific persona: a hypebeast, a filthy rich person, a doomsday prepper, or an influencer. The user's browsing history is depersonalized in a way that leaves advertisers struggling to target ads, because the tool confuses them. Track THIS will show users ads for products they might not be interested in; users will still see ads, just not targeted ones.

The official blog post reads, "Let's be clear, though. This will show you ads for products you might not be interested in at all, so it's really just throwing off brands who want to advertise to a very specific type of person. You'll still be seeing ads. And eventually, if you just use the internet as you typically would day to day, you'll start seeing ads again that align more closely to your normal browsing habits.
If you'd rather straight-up block third-party tracking cookies, go ahead and get Enhanced Tracking Protection in Firefox."

Let's now understand how Track THIS works

1. Before trying Track THIS, users need to manage their tabs and save their work, or open a new window or browser to start the process.
2. Track THIS will itself open 100 tabs.
3. Users then choose a profile to trick advertisers into thinking that they are someone else.
4. Users confirm that they are ready to open 100 tabs based on that profile.
5. Users then close all 100 tabs and open a new window.

The ads will only be impacted for a few days, as ad trackers will soon start reflecting users' normal browsing habits again. Once done experimenting, users can get Firefox with Enhanced Tracking Protection to block third-party tracking cookies by default.

It seems users are excited about this news, as they will be able to get rid of targeted advertisements.

https://twitter.com/minnakank/status/1143863045447458816
https://twitter.com/inthecompanyof/status/1143842275476299776

A few users are wary of using the tool on their phones and are a little skeptical about the 100 tabs. A user commented on HackerNews, "I'm really afraid to click one of those links on mobile. Does it just spawn 100 new tabs?" Another user commented, "Not really sure that a browser should allow a site to open 100 tabs programmatically, if anything this is telling me that Firefox is open to such abuse."

To know more about this news, check out the official blog post.

Mozilla releases Firefox 67.0.3 and Firefox ESR 60.7.1 to fix a zero-day vulnerability, being abused in the wild
Mozilla to bring a premium subscription service to Firefox with features like VPN and cloud storage
Mozilla makes Firefox 67 "faster than ever" by deprioritizing least commonly used features
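The decoy-tab idea behind Track THIS can be sketched in a few lines. This is a hypothetical illustration, not Mozilla's actual code: the persona names follow the article, but the seed URLs, the function names, and the `opener` parameter are invented for the example. A real run would open 100 actual browser tabs, flooding the browsing history with pages tied to the chosen persona so that profile-based ad targeting is thrown off.

```python
# Hypothetical sketch of the Track THIS approach: generate 100 decoy
# page visits matching a chosen persona. Seed URLs are placeholders.
import webbrowser

PERSONAS = {
    "hypebeast": ["https://example.com/sneaker-drops", "https://example.com/streetwear"],
    "filthy_rich": ["https://example.com/yachts", "https://example.com/private-jets"],
    "doomsday_prepper": ["https://example.com/bunkers", "https://example.com/freeze-dried-food"],
    "influencer": ["https://example.com/ring-lights", "https://example.com/brand-deals"],
}

def decoy_urls(persona, count=100):
    """Return `count` decoy URLs for the persona by cycling its seed pages."""
    seeds = PERSONAS[persona]
    return [seeds[i % len(seeds)] for i in range(count)]

def run(persona, count=100, opener=webbrowser.open_new_tab):
    """Open one tab per decoy URL; returns how many tabs were requested."""
    urls = decoy_urls(persona, count)
    for url in urls:
        opener(url)
    return len(urls)
```

Passing a custom `opener` keeps the sketch testable without actually spawning tabs; calling `run("hypebeast")` with the default opener would ask the system browser to open 100 tabs, which is exactly the behavior the HackerNews commenters above found alarming.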