Search icon CANCEL
Subscription
0
Cart icon
Your Cart (0 item)
Close icon
You have no products in your basket yet
Save more on your purchases! discount-offer-chevron-icon
Savings automatically calculated. No voucher code required.
Arrow left icon
All Products
Best Sellers
New Releases
Books
Videos
Audiobooks
Learning Hub
Newsletter Hub
Free Learning
Arrow right icon
timer SALE ENDS IN
0 Days
:
00 Hours
:
00 Minutes
:
00 Seconds

Tech News

3709 Articles
article-image-ai-systems-should-be-developed-and-operated-in-a-manner-that-respects-internationally-recognized-human-rights-declares-ieee
Sugandha Lahoti
02 Jul 2019
3 min read
Save for later

“AI systems should be developed and operated in a manner that respects internationally recognized human rights”, declares IEEE

Sugandha Lahoti
02 Jul 2019
3 min read
This is a big win for the Artificial Intelligence community. IEEE has released a  statement from the IEEE Board of Directors stating that the committee will now support the inclusion of ethical considerations in the design and deployment of autonomous and intelligent systems (A/IS). The IEEE committee recognizes that present AI systems present new social, legal and ethical challenges. They also have to address issues of systemic risk, diminishing trust, privacy challenges and issues of data transparency, ownership and agency. Therefore, there is a need for developers of such systems to use practices and standards that respect and acknowledge the ethical obligation of such systems in their human and social context. Concrete steps taken by IEEE A/IS systems should be developed and operated in a manner that respects internationally recognized human rights. A/IS developers should consider impact on individual and societal well-being to be central in development. Developers should respect each individual’s ability to maintain appropriate control over their personal data and identifying information. Developers and operators should consider the effectiveness and fitness of A/IS technologies for the purpose of their systems. Technical basis of particular decisions made by an A/IS should be discoverable. A/IS should be designed and operated in a manner that permits production of an unambiguous rationale for the decisions made by the system. Designers of A/IS creators should consider and guard against potential misuses and operational risks. Designers of A/IS should specify and operators should possess the knowledge and skills required for safe and effective operation. To that extent, the IEEE committee has taken various initiatives to build ethically aligned AI systems. In March, they released a report, “Ethically Aligned Design – A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Edition 1.0,” that sets forth scientific analysis and resources, high-level principles and actionable recommendations for ethical implementation of A/IS. They also launched the IEEE Tech Ethics program which seeks to ensure that ethical and societal implications of technology become an integral part of the development process by driving conversation and debate on these issues. The IEEE Code of Ethics also showcases IEEE’s commitment to ethical design and the societal implications of intelligent systems. In a statement the IEEE committee said, “IEEE is committed to developing trust in technologies through transparency, technical community building, and partnership across regions and nations, as a service to humanity. Measures that ensure that A/IS are developed and deployed with appropriate ethical consideration for human and societal values will enhance trust in these technologies, which in turn will increase the ability of the technologies to achieve much broader beneficial societal impacts.” The news was quite well received by the developer community after John C. Havens, Executive Director at The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems shared the news on Twitter. Users called it as arguably the most globally impactful step in this space and a milestone for all. https://twitter.com/jameshorton/status/1145900183042973698 https://twitter.com/GReal1111/status/1145826945336262662   Some pointed out that all tech companies should sign on to this statement. https://twitter.com/Dktr_Sus/status/1145866352176979968 Read the full report here. 
The US puts Huawei on BIS List forcing IEEE to ban Huawei employees from peer-reviewing or editing research papers. IEEE Standards Association releases ethics guidelines for automation and intelligent systems IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others.
Read more
  • 0
  • 0
  • 1748

article-image-introducing-qwant-maps-an-open-source-and-privacy-preserving-maps-with-exclusive-control-over-geolocated-data
Vincy Davis
01 Jul 2019
3 min read
Save for later

Introducing Qwant Maps: an open source and privacy-preserving maps, with exclusive control over geolocated data

Vincy Davis
01 Jul 2019
3 min read
Last week, Betterweb announced the release of Qwant Maps, an open source and privacy-preserving map. In the current scenario where services like Google Maps are always tracking user data, Qwant Maps respects user privacy and proposes to give users exclusive control over their geolocated data. All components developed by Qwant Maps are open source, enabling users to improve their experience by contributing directly with the Qwant map. Qwant map uses OpenStreetMap as their main data source. OpenStreetMap is a free and collaborative geographical database supported today by more than a million contributors around the world. Any voluntary user can freely contribute to enrich their database with new places. Qwant Maps also uses OpenStreetMap data to generate its own vector tiles, base map, and web APIs. Key components of Qwant Maps Inbuilt search-engine Qwant Maps uses Mimirsbrunn search engine, which allows users to search for "punctual" geospatial objects, such as addresses, administrative areas and points of interest. Mimirsbrunn also called Mimir, is a web service of geocoding that matches the user unstructured text query with a specific point on the map. Renders visual-art based on vector tiles Qwant Maps illustrates a rendering of visual art based on vector tiles, which are generated, served and rendered by the Kartotherian stack. It is developed by the Wikimedia Foundation according to the OpenMapTiles open source data schema. The varied options for vector tiles offers more technical flexibility, which allows users to easily integrate different styles and native support for specific renderings like 3D, rotations, etc. The Qwant Maps tiles are updated every 24 hours to incorporate daily changes from OpenStreetMap data. Quant Maps uses Python web API Idunn is the Python web API, which exploits different data sources to provide users with the most useful information. It highlights the map in such a way that all the information is provided in an understandable manner. The main goal of Idunn is to add context for all the required ‘points-of-interest’ areas in a consistent referential. Users are quite excited with the open source and privacy preserving features of Qwant Maps https://twitter.com/TonioBerry/status/1145072595601121281 https://twitter.com/TFressin/status/1145091285105164288 https://twitter.com/AnC0mmie/status/1144630389224431617 However, some users are already complaining about its inaccuracy. https://twitter.com/Syenta1/status/1144616659195441152 A user on Hacker News states that, “Quant Maps search seems to be quite lacking. Searched for a large store in my city, where I recently drove using Google Maps, and it can't find it. It just responds to a match to the city name. When I used just the name without the city, it found a pub halfway around the world with exact name match.” Another user comments, “I used Qwant for a while (lite, the main version is so cluttered), but found the results to be hardly usable. I do hope they manage to stay afloat though, as I am happy about any Google challenger.” Visit the Qwant Maps website for more details. Apple previews iOS 13: Sign in with Apple, dark mode, advanced photo and camera features, and an all-new Maps experience European Consumer groups accuse Google of tracking its users’ location, calls it a breach of GDPR Launching soon: My Business App on Google Maps that will let you chat with businesses, on the go
Read more
  • 0
  • 0
  • 4415

article-image-microsoft-is-seeking-membership-to-linux-distros-mailing-list-for-early-access-to-security-vulnerabilities
Vincy Davis
01 Jul 2019
4 min read
Save for later

Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities

Vincy Davis
01 Jul 2019
4 min read
Microsoft is now aiming to add its own contributions and strengthen Linux, by getting an early access to its security vulnerabilities. Last week, Microsoft applied for membership to join the official closed group of Linux, called the Linux-distros mailing list. The Linux-distros mailing list is used by Linux distributors to privately report, coordinate and discuss security issues. The issues revealed in this group are not made public for 14 days. Members of this group include Amazon Linux AMI, Openwall, Oracle, Red Hat, SUSE and Ubuntu. Sasha Levin, a Microsoft Linux kernel developer has applied for the membership application on behalf of Microsoft, to join the exclusive group. If approved, it would allow Microsoft to be part of the private behind-the-scenes chatter about vulnerabilities, patches, and ongoing security issues with the open-source kernel and related code. These discussions are crucial for getting early information and coordinating the deployment of fixes before they are made public. One of the main criteria for membership in the Linux-distros mailing list, is to have a Unix-like distro that makes use of open source components.  To indicate that Microsoft deserves this membership, Levin has cited Microsoft's Azure Sphere and the Windows Subsystem For Linux (WSL) 2 as examples of distro-like builds.  Last month, Microsoft announced that Windows Subsystem for Linux 2 (WSL 2) is available in Windows Insiders. With availability in build 18917, Windows will now be shipping with a full Linux kernel. This will allow WSL 2 to run inside a VM and provide full access to Linux system calls. The kernel will be specifically tuned for WSL 2 and will be fully open sourced with the full configuration available on GitHub. This will enable users for a faster turnaround on updating the kernel, when new versions become available. Thus the new architecture aims to increase file system performance and provide full system call compatibility, in a Linux environment. Levin also highlighted that Microsoft’s Linux builds are open sourced and that it contributes to the community. Levin has also revealed that Linux is used more on Azure than Windows server. This does not come as a surprise, as this is not the first time that Microsoft is being aligned to Linux. There are at least eight Linux-distros available on Azure. Also Microsoft’s former CEO Steve Balmer, who has previously quoted Linux as “Cancer”, now says that he loves Linux.  This move by Microsoft to embrace Linux, is being seen as Microsoft’s way of staying relevant in the industry. In a statement to Register, the open-source pioneer Bruce Perens says that, “What we are seeing here is that Microsoft wants access to early security alerts on Linux,  They’re joining it as a Linux distributor because that’s how it’s structured. Microsoft obviously has a lot of Linux plays, and it’s their responsibility to fix known security bugs as quickly as other Linux distributors.” Most users are of the opinion that, Microsoft embracing Linux was bound to happen. With its immense advantages, Linux is the default option for many. A user on Hacker News says that,  “The biggest practical advantage I have found is that Linux has dramatically better file system I/O performance. Like, a C++ project that builds in 20 seconds on Linux, takes several minutes to build on the same hardware in Windows.” Another user comments that, “I'm surprised it took this long. 
With Linux support for .NET and SQL Server, there is zero reason to host anything new on Windows now (of course legacy enterprise software is another story). I wouldn't be surprised if Windows Server is fully EOL'd in a few years.” Another user wrote that, “On Azure, a Windows VM instance tends to cost about 50% more than the equivalent instance running Linux, so it is a no brainer to use Linux if your application is operating system independent.” Another comment reads, “Linux is the default choice when you set up a VM.” Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels Unity Editor will now officially support Linux
Read more
  • 0
  • 0
  • 3951

article-image-stack-overflow-faces-backlash-for-its-new-homepage-that-made-it-look-like-it-is-no-longer-for-the-open-community
Bhagyashree R
01 Jul 2019
5 min read
Save for later

Stack Overflow faces backlash for its new homepage that made it look like it is no longer for the open community

Bhagyashree R
01 Jul 2019
5 min read
After facing a device fingerprinting bug and security breach, Stack Overflow was again in the news on Thursday. This time it was about its homepage that showcased its new proprietary products while hiding away the primary feature it is widely known for: open, public Q&A. How the updated Stack Overflow homepage looked like? The updated homepage showed the various products Stack Overflow provides. However, it did not show any straightforward way to reach the Q&A site. Here is how the updated UI looked like: Source: Stack Overflow A Stack Overflow user wrote, how he felt when he first saw this homepage: Private Q&A. Oh, this one of those exclusive sites, maybe a forum, where you get to discuss stuff in private, probably need to pay for it, it says coworker, flagship, those are pricey words. Jobs? Oh, this must be like LinkedIn. Probably only professionals and such that only elevate themselves and talk boring stuff. You probably need to pay for exposing your account or something, as you need to on those other job sites to stand a chance. Create an account? And next they'll ask for my credit card, right? No thanks, I'll move on to TechNet or wherever. Other regular users also found this abrupt change frustrating and confusing. A Stack Overflow user compared the updated homepage to that of Facebook and LinkedIn where you require to have an account to post things. He wrote, "Today before I logged in I saw the new home page, and it immediately felt the same to me as going to Facebook or LinkedIn before you have an account. There's a big wall of gibberish that essentially says, "You can't do anything here until you start handing over information about yourself.” It is understandable that Stack Overflow is looking for new avenues for revenues. In 11 years of its existence, it has become much more than a Q&A site with voting and editing functionalities. It provides Stack Overflow for Teams, a private place for your team members to exchange questions and answers about your proprietary software. Another one is, Stack Overflow Talent that helps employers post job listings and discover talents around the globe for their organizations. Stack Overflow for Enterprise provides a platform for building a standalone Q&A community. Despite these new incredible offerings, for most people the Q&A site is what Stack Overflow is, rest all is just an addition to the main product. Hiding the actual feature for which developers visit the site behind a hamburger, while giving the actual screen space to proprietary products is what has turned off many developers. How Stack Overflow responded? After facing backlash, Stack Overflow responded with a workaround for the moment and is currently reviewing the feedback it is getting from the users. Stack Overflow said, “Overall changes in design will not be made at this moment (we are still collecting the feedback you are all posting - thanks for that). And we are carefully reviewing it and will make them later if it's necessary, however, we do want to make it easier to get to the open Q&A as fast as possible, and that means not changing the design right now.” To make it somewhat easier for the users to reach the Q&A section, it has hyperlinked the "open community" in the description. Also, the blue button which was earlier called “Create an account” now goes directly to the Q&A page. Source: Stack Overflow Developers also suggested what Stack Overflow can do to fix this problem, while also showcasing its proprietary products. 
Here's what a user recommended: “If you're really serious about improving it, then I have some recommendations. 1) reduce the size of the hero banner by ~50%. 2) Remove the "for developers, by developers" section and have the "Developers" button at the top go straight to stackoverflow.com/questions. 3) Remove the section on SO for Teams pricing -- that belongs as a click-through page via the "Private Q&A" link on the "For business by developers" section. On that subject, "Private Q&A" should say "Teams (Private Q&A)". 4) Remove redundant .talent-slope div and .py64 div below it.” Providing teams and enterprises a private area to discuss their coding problems is an incredible idea and there is no wrong in advertising these products to people who love using Stack Overflow. However, it does feel a little overboard to make it the main centerpiece of the homepage, when Stack Overflow is mainly known for its free Q&A feature. Also, considering the huge user base, the whole outcry could have been avoided by a little consultation from the users. Approx. 250 public network users affected during Stack Overflow’s security attack Do Google Ads secretly track Stack Overflow users?
Read more
  • 0
  • 0
  • 2785

article-image-an-attack-on-sks-keyserver-network-a-write-only-program-poisons-two-high-profile-openpgp-certificates
Savia Lobo
01 Jul 2019
6 min read
Save for later

An attack on SKS Keyserver Network, a write-only program, poisons two high-profile OpenPGP certificates

Savia Lobo
01 Jul 2019
6 min read
Robert J. Hansen, a maintainer of the GnuPG FAQ, revealed about a certificate spamming attack against him and Daniel Kahn Gillmor, two high-profile contributors in the OpenPGP community, in the last week of June 2019. The attack exploited a defect in the OpenPGP protocol to "poison" both Hansen’s and Gillmor’s OpenPGP certificates. “Anyone who attempts to import a poisoned certificate into a vulnerable OpenPGP installation will very likely break their installation in hard-to-debug ways”, Hansen wrote on his GitHub blog post. Gillmor said his OpenPGP certificate was flooded with bogus certifications which were uploaded to the SKS keyserver network. The main use of OpenPGP today is to verify downloaded packages for Linux-based operating systems, usually using a software tool called GnuPG. This attack has the following consequences: If you fetch a poisoned certificate from the keyserver network, you will break your GnuPG installation. Poisoned certificates cannot be deleted from the keyserver network. The number of deliberately poisoned certificates, currently at only a few, will only rise over time. The attackers may have an intent on poisoning other certificates and the scope of the damage is still unknown A year ago, OpenPGP experienced similar certificate flooding, one, a spam on Werner Koch's key and second, abuse tools made available years ago under the name "trollwot". There's a keyserver-backed filesystem proposed as a proof of concept to point out the abuse. “Poisoned certificates are already on the SKS keyserver network. There is no reason to believe the attacker will stop at just poisoning two certificates. Further, given the ease of the attack and the highly publicized success of the attack, it is prudent to believe other certificates will soon be poisoned”, Hansen further added. He also said that the mitigation to this attack cannot be carried out “in any reasonable time period” and that the future releases of OpenPGP software may have mitigation. However, he said he is unsure of the time frame. The best mitigation that can be applied at present is simple: stop retrieving data from the SKS keyserver network, Hansen says. The “keyserver software” was written to facilitate the discovery and distribution of public certificates. Users can search the keyserver by a variety of different criteria to discover public certificates which claim to belong to the desired user. The keyserver network, however, does not attest to the accuracy of the information. This was left for each user to ascertain according to their own criteria. According to the Keyserver design goals, “Keyservers could add information to existing certificates but could never, ever, ever, delete either a certificate or information about a certificate”, Hansen said as he was involved in the PGP community since 1992 and was present for these discussions. “In the early 1990s this design seemed sound. It is not sound in 2019. We've known it has problems for well over a decade”, Hansen adds. This shows that Keyservers are vulnerable and susceptible to attacks and how the data can be easily misused. Why SKS Keyserver Network can never be fixed Hansen has also given some reasons why the software was not fixed or updated for security to date. A difficult to understand algorithm The SKS or standard keyserver software was written by Yaron Minsky. It became the keystone of his Ph.D. thesis, and he wrote SKS originally as a proof of concept of his idea. 
The algorithm is written in an unusual programming language called OCaml, which Hansen says has an idiosyncratic dialect. “ Not only do we need to be bright enough to understand an algorithm that's literally someone's Ph.D. thesis, but we need expertise in obscure programming languages and strange programming customs”, Hansen says. Change in design goal may result in changes from scratch Due to a difficult programming language it is written in, there are hardly any programmers who are qualified to do such a major overhaul, Hansen says. Also, the design goal of the keyserver network is "baked into" essentially every part of the infrastructure and changing it may lead to huge changes in the entire software. Lack of a centralized authority The lack of centralized authority was a feature, not a bug. This means there is no single point of failure for a government to go after. This makes it even harder to change the design goals as the network works as a confederated system. Keyserver network is a Write-only file system The Keyserver network is based on a write-only, which makes it susceptible to a lot of attacks as one can only write into it and have a tough time deleting files. The keyserver network can be thought of as an extremely large, extremely reliable, extremely censorship-resistant distributed file system which anyone can write to. Attackers can easily add any malicious or censored content files or media, which no one can delete. Mitigations for using the Synchronization Key server Hansen says high-risk users should stop using the keyserver network immediately. For those confident with editing their GnuPG configuration files, the following process is recommended: Open gpg.conf in a text editor. Ensure there is no line starting with keyserver. If there is, remove it. Open dirmngr.conf in a text editor. Add the line keyserver hkps://keys.openpgp.org to the end of it. keys.openpgp.org is a new experimental keyserver which is not part of the keyserver network and has some features which make it resistant to this sort of attack. It has some limitations like its search functionality is sharply constrained. However, once changes are made users will be able to run gpg --refresh-keys with confidence. Daniel Kahn Gillmor, in his blogpost, says, “This is a mess, and it's a mess a long time coming. The parts of the OpenPGP ecosystem that rely on the naive assumptions of the SKS keyserver can no longer be relied on because people are deliberately abusing those keyservers. We need significantly more defensive programming and a better set of protocols for thinking about how and when to retrieve OpenPGP certificates”. Public reaction to this attack is quite speculative. People shared their opinions on Twitter. Some have also suggested migrating the SKS server towards the new OpenPGP key server called Hagrid. https://twitter.com/matthew_d_green/status/1145030844131753985 https://twitter.com/adulau/status/1145045929428443137 To know more about this in detail, head over to Robert J. Hansen’s GitHub post. Training Deep Convolutional GANs to generate Anime Characters [Tutorial] Former npm CTO introduces Entropic, a federated package registry with a new CLI and much more! Microsoft introduces Service Mesh Interface (SMI) for interoperability across different service mesh technologies
Read more
  • 0
  • 0
  • 3237

article-image-why-did-slack-suffer-an-outage-on-friday
Fatema Patrawala
01 Jul 2019
4 min read
Save for later

Why did Slack suffer an outage on Friday?

Fatema Patrawala
01 Jul 2019
4 min read
On Friday, Slack, an instant messaging platform for work spaces confirmed news of the global outage. Millions of users reported disruption in services due to the outage which occurred early Friday afternoon. Slack experienced a performance degradation issue impacting users from all over the world, with multiple services being down. Yesterday the Slack team posted a detailed incident summary report of the service restoration. The Slack status page read: “On June 28, 2019 at 4:30 a.m. PDT some of our servers became unavailable, causing degraded performance in our job processing system. This resulted in delays or errors with features such notifications, unfurls, and message posting. At 1:05 p.m. PDT, a separate issue increased server load and dropped a large number of user connections. Reconnection attempts further increased the server load, slowing down customer reconnection. Server capacity was freed up eventually, enabling all customers to reconnect by 1:36 p.m. PDT. Full service restoration was completed by 7:20 p.m. PDT. During this period, customers faced delays or failure with a number of features including file uploads, notifications, search indexing, link unfurls, and reminders. Now that service has been restored, the response team is continuing their investigation and working to calculate service interruption time as soon as possible. We’re also working on preventive measures to ensure that this doesn’t happen again in the future. If you’re still running into any issues, please reach out to us at [email protected].” https://twitter.com/SlackStatus/status/1145541218044121089 These were the various services which were affected due to outage: Notifications Calls Connections Search Messaging Apps/Integrations/APIs Link Previews Workspace/Org Administration Posts/Files Timeline of Friday’s Slack outage According to user reports it was observed that some Slack messages were not delivered with users receiving an error message. On Friday, at 2:54 PM GMT+3, Slack status page gave the initial signs of the issue,  "Some people may be having an issue with Slack. We’re currently investigating and will have more information shortly. Thank you for your patience,". https://twitter.com/SlackStatus/status/1144577107759996928 According to the Down Detector, Slack users noted that message editing also appeared to be impacted by the latest bug. Comments indicated it was down around the world, including Sweden, Russia, Argentina, Italy, Czech Republic, Ukraine and Croatia. The Slack team continued to give updates on the issue, and on Friday evening they reported of services getting back to normal. https://twitter.com/SlackStatus/status/1144806594435117056 This news gained much attraction on Twitter, as many of them commented saying Slack is already preps up for the weekend. https://twitter.com/RobertCastley/status/1144575285980999682 https://twitter.com/Octane/status/1144575950815932422 https://twitter.com/woutlaban/status/1144577117788790785   Users on Hacker News compared Slack with other messaging platforms like Mattermost, Zulip chat, Rocketchat etc. One of the user comments read, “Just yesterday I was musing that if I were King of the (World|Company) I'd want an open-source Slack-alike that I could just drop into the Cloud of my choice and operate entirely within my private network, subject to my own access control just like other internal services, and with full access to all message histories in whatever database-like thing it uses in its Cloud. 
Sure, I'd still have a SPOF but it's game over anyway if my Cloud goes dark. Is there such a project, and if so does it have any traction in the real world?” To this another user responded, “We use this at my company - perfectly reasonable UI, don't know about the APIs/integrations, which I assume are way behind Slack…” Another user also responded, “Zulip, Rocket.Chat, and Mattermost are probably the best options.” Slack stocks surges 49% on the first trading day on the NYSE after direct public offering Dropbox gets a major overhaul with updated desktop app, new Slack and Zoom integration Slack launches Enterprise Key Management (EKM) to provide complete control over encryption keys  
Read more
  • 0
  • 0
  • 2709
Unlock access to the largest independent learning library in Tech for FREE!
Get unlimited access to 7500+ expert-authored eBooks and video courses covering every tech area you can think of.
Renews at £15.99/month. Cancel anytime
article-image-openid-foundation-questions-apples-sign-in-feature-says-it-has-security-and-privacy-risks
Sugandha Lahoti
01 Jul 2019
3 min read
Save for later

OpenID Foundation questions Apple’s Sign In feature, says it has security and privacy risks

Sugandha Lahoti
01 Jul 2019
3 min read
The OpenID foundation has written an open letter to Apple arguing that the upcoming ‘Sign in with Apple’ feature bears similarities to OpenID Connect,  but lacks privacy and security. ‘Sign in with Apple’ was launched at WWDC 2019 earlier this month. Users can simply use their Apple ID for authentication purpose instead of using a social account, or their email addresses, etc. Apple will be protecting users’ privacy by providing developers with a unique random ID. However, the OpenID Foundation is questioning some of the decisions Apple made for Sign In with Apple. The OpenID Foundation is a non-profit organization with members such as PayPal, Google, Microsoft, and more. The OpenID Foundation controls numerous universal sign-in platforms using its OpenID Connect platform. The letter states, “It appears Apple has largely adopted OpenID Connect for their Sign In with Apple implementation offering, or at least has intended to. However, there are differences between the two are tracked in a document managed by the OIDF certification team. The current set of differences between OpenID Connect and Sign In with Apple reduces the places where users can use Sign In with Apple and exposes them to greater security and privacy risks. It also places an unnecessary burden on developers of both OpenID Connect and Sign In with Apple.” Issues with Sign in with Apple and differences with OpenID The OpenID team has listed down the differences between Apple’s Sign in and OpenID Connect. The differences were identified by the OpenID Foundation’s Certification team and the identity community at large. In Apple’s No Discovery document, developers have to read through the Apple docs to find out about endpoints, scopes, signing algorithms, authentication methods, etc. No UserInfo endpoint is provided, which means all of the claims about users have to be included in the (expiring and potentially large) id_token. Does not include different claims in the id_token based on requested scopes. The token endpoint does not accept client_secret_basic as a client authentication method. Using unsupported or wrong parameters always results in the same message in the browser that says “Your request could not be completed because of an error. Please try again later.” without any explanation about what happened, why this is an error, or how to fix it. Absence of PKCE [Proof Key for Code Exchange] in the Authorization Code grant type, which could nominally leave people exposed to code injection and replay attacks. When using the sample app, adding openid as a scope leads to an error message and it works just with name and email as scope values. The letter asks for Apple to "address the gaps," use the OpenID Connect Self Certification Test Suite, state that Sign in with Apple is compatible with Relying Party software, and finally join the OpenID Foundation. You can read the full open letter here. Testing of Sign in with Apple will start later this summer ahead of iOS 13's fall launch window. Apple showcases privacy innovations at WWDC 2019: Sign in with Apple, AdGuard Pro, and more. WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and more. Jony Ive, Apple’s chief design officer departs after 27 years at Apple to form an independent design company.
Read more
  • 0
  • 0
  • 2784

article-image-google-proposes-a-libc-in-llvm-rich-felker-of-musl-libc-thinks-its-a-very-bad-idea
Vincy Davis
28 Jun 2019
4 min read
Save for later

Google proposes a libc in LLVM, Rich Felker of musl libc thinks it’s a very bad idea

Vincy Davis
28 Jun 2019
4 min read
Earlier this week, Siva Chandra, Google LLVM contributor asked all LLVM developers on their opinion about starting a libc in LLVM. He mentioned a list of high-level goals and guiding principles, that they are intending to pursue. Three days ago, Rich Felker the creator of musl libc, made his thoughts about libc very clear by saying that “this is a very bad idea.” In his post, Chandra has said that he believes that a libc in LLVM will be beneficial and usable for the broader LLVM community, and may serve as a starting point for others in the community to flesh out an increasingly complete set of libc functionality.  Read More: Introducing LLVM Intermediate Representation One of the goals, mentioned by Chandra, states that the libc project would mesh with the “as a library” philosophy of the LLVM and would help in making the “the C Standard Library” more flexible. Another goal for libc states that it will support both static non-PIE and static-PIE linking. This means enabling the C runtime and the PIE loader for static non-PIE and static-PIE linked executables. Rich Felker posted his thoughts on the libc in LLVM as follows: Writing and maintaining a correct, compatible, high-quality libc is a monumental task. Though the amount of code needed is not that large, but “the subtleties of how it behaves and the difficulties of implementing various interfaces that have no capacity to fail or report failure, and the astronomical "compatibility surface" of interfacing with all C and C++ software ever written as well as a large amount of software written in other languages whose runtimes "pass through" the behavior of libc to the applications they host,”. Felkar believes that this will make libc not even of decent quality.  A corporate-led project is not answerable to the community, and hence they will leave whatever bugs it introduces, for the sake of compatibility with their own software, rather than fixing them. This is the main reason that Felkar thinks that if at all, a libc is created, it should not be a Google project.  Lastly Felkar states that avoiding monoculture preserves the motivation for consensus-based standard processes rather than single-party control. This will prove to be a motivation for people writing software, so they will write it according to proper standards, rather than according to a particular implementation.   Many users agree with Rich Felkar’s views.  A user on Hacker News states that “This speaks volumes very clearly. This highlights an immense hazard. Enterprise scale companies contributing to open-source is a fantastic thing, but enterprise scale companies thrusting their own proprietary libraries onto the open-source world is not. I'm already actively avoiding becoming beholden to Google in my work as it is already, let alone in the world where important software uses a libc written by Google. If you're not concerned by this, refer to the immense power that Google already wields over the extremely ubiquitous web-standards through the market dominance that Chrome has.” Another user says that, “In the beginning of Google's letter they let us understand they are going to create a simplified version for their own needs. It does mean they don't care about compatibility and bugs, if it doesn't affect their software. That's not how this kind of libraries should be implemented.” Another comment reads, “If Google wants their own libc that’s their business. But LLVM should not be part of their “manifest destiny”. 
The corporatization of OSS is a scary prospect, and should be called out loud and clear like this every time it’s attempted” While there are few others who think that Siva Chandra’s idea of a libc in LLVM might be a good thing. A user on Hacker News comments that “That is a good point, but I'm in no way disputing that Google could do a great job of creating their own libc. I would never be foolish enough to challenge the merit of Google's engineers, the proof of this is clear in the tasting of the pudding that is Google's software. My concerns lie in the open-source community becoming further beholden to Google, or even worse with Google dictating the direction of development on what could become a cornerstone of the architecture of many critical pieces of software.” For more details, head over to Rich Felkar’s pipermail.  Introducing InNative, an AOT compiler that runs WebAssembly using LLVM outside the Sandbox at 95% native speed LLVM 8.0.0 releases! LLVM officially migrating to GitHub from Apache SVN
Read more
  • 0
  • 0
  • 4037

article-image-mozilla-launches-firefox-preview-an-early-version-of-a-geckoview-based-firefox-for-android
Bhagyashree R
28 Jun 2019
3 min read
Save for later

Mozilla launches Firefox Preview, an early version of a GeckoView-based Firefox for Android

Bhagyashree R
28 Jun 2019
3 min read
Yesterday, Mozilla announced the first preview of a redesigned version of Firefox for Android, called Firefox Preview. It is powered by the GeckoView rendering engine and will eventually replace the current Firefox app for Android. Why Mozilla is introducing a new Firefox for Android Back in 2016, Mozilla introduced Firefox Focus, a privacy-focused mobile browser for Android and iOS users. It was initially launched as a tracker-blocking application and then was developed into a minimalistic browser app. The team has been putting their efforts into improving the Firefox Focus app. However, the demand for a full-fledged private and secure mobile browsing experience has increased in recent years. The team realized this could be best addressed by launching a new browser app that is similar to Focus, but provides all the "ease and amenities of a full-featured mobile browser." Sharing the idea behind the new browser, Firefox Mobile Team said, "With Firefox Preview, we’re combining the best of what our lightweight Focus application and our current mobile browsers have to offer to create a best in class mobile experience." What features does Firefox Preview come with Unlike some of the major browsers that use the Blink rendering engine, Firefox Preview is backed by GeckoView. This gives Firefox and its users the independence of making decisions for what they want in the browser instead of enforcing whatever Google decides. GeckoView also accounts for “greater flexibility in terms of the types of privacy and security features" Mozilla can offer its mobile users.” Following are some of the features Firefox Preview offers: Up to two times faster: It is up to two times faster as compared to the previous versions of Firefox for Android. Minimalistic design: It comes with a minimalist start screen and bottom navigation bar to enable you get things done faster on the go. Includes Collections, a new take on bookmarks: Its Collections feature allows you to save, organize, and share collections of sites. Tracking Protection on by default: It comes with Tracking Protection on by default giving you freedom from advertising trackers and other bad actors. As a side effect, this also gives a faster browsing experience. This is an early version of the experimental browser for Android users based on GeckoView, which means there are many features like support for ad blocking extensions, Reader Mode is not yet available. You can try it out and provide feedback for improvements to the team via email or on Github. Check out the official announcement by Mozilla to know more. Mozilla introduces Track THIS, a new tool that will create fake browsing history and fool advertisers Mozilla releases Firefox 67.0.3 and Firefox ESR 60.7.1 to fix a zero-day vulnerability, being abused in the wild Mozilla to bring a premium subscription service to Firefox with features like VPN and cloud storage
Read more
  • 0
  • 0
  • 2248

article-image-an-iot-worm-silex-developed-by-a-14-year-old-resulted-in-malware-attack-and-taking-down-2000-devices
Amrata Joshi
28 Jun 2019
5 min read
Save for later

An IoT worm Silex, developed by a 14 year old resulted in malware attack and taking down 2000 devices

Amrata Joshi
28 Jun 2019
5 min read
This week, an IoT worm called Silex that targets a Unix-like system took down around 2,000 devices, ZDNet reports. This malware attacks by attempting a login with default credentials and after gaining access. Larry Cashdollar, an Akamai researcher, the first one to spot the malware, told ZDNet in a statement, "It's using known default credentials for IoT devices to log in and kill the system.” He added, “It's doing this by writing random data from /dev/random to any mounted storage it finds. I see in the binary it's calling fdisk -l which will list all disk partitions."  He added, "It then writes random data from /dev/random to any partitions it discovers." https://twitter.com/_larry0/status/1143532888538984448 It deletes the devices' firewall rules and then removes its network config and triggers a restart, this way the devices get bricked. Victims are advised to manually reinstall the device's firmware for recovering. This malware attack might remind you of the BrickerBot malware that ended up destroying millions of devices in 2017. Cashdollar told ZDNet in a statement, "It's targeting any Unix-like system with default login credentials." He further added, "The binary I captured targets ARM devices. I noticed it also had a Bash shell version available to download which would target any architecture running a Unix like OS." This also means that this malware might affect Linux servers if they have Telnet ports open and in case they are secured with poor or widely-used credentials. Also, as per the ZDNet report, the attacks were carried out from a VPS server that was owned by a company operating out of Iran. Cashdollar said, "It appears the IP address that targeted my honeypot is hosted on a VPS server owned by novinvps.com, which is operated out of Iran."  With the help of NewSky Security researcher Ankit Anubhav, ZDNet managed to reach out to the Silex malware author who goes by the pseudonym Light Leafon. According to Anubhav, Light Leafon, is a 14-year-old teenager responsible for this malware.  In a statement to Anubhav and ZDNet, he said, “The project started as a joke but has now developed into a full-time project, and has abandoned the old HITO botnet for Silex.” Light also said that he has plans for developing the Silex malware further and will add even more destructive functions. In a statement to Anubhav and ZDNet, he said, "It will be reworked to have the original BrickerBot functionality."  He is also planning to add the ability to log into devices via SSH apart from the current Telnet hijacking capability. He plans to give the malware the ability to use vulnerabilities for breaking into devices, which is quite similar to most of the IoT botnets. Light said, "My friend Skiddy and I are going to rework the whole bot.” He further added, "It is going to target every single publicly known exploit that Mirai or Qbot load." Light didn’t give any justification for his actions neither have put across any manifesto as the author of BrickerBot (goes with the pseudonym-Janit0r) did post before the BrickerBot attacks. Janit0r motivated the 2017 attacks to protest against owners of smart devices that were constantly getting infected with the Mirai DDoS malware. In a statement to ZDNet, Anubhav described the teenager as "one of the most prominent and talented IoT threat actors at the moment." He further added, "Its impressive and at the same time sad that Light, being a minor, is utilizing his talent in an illegal way." 
People are surprised how a 14-year-old managed to work this out and are equally worried about the consequences the kid might undergo. A user commented on Reddit, “He's a 14-year old kid who is a bit misguided in his ways and can easily be found. He admits to DDoSing Wix, Omegle, and Twitter for lols and then also selling a few spots on the net. Dude needs to calm down before it goes bad. Luckily he's under 18 so really the worst that would happen in the EU is a slap on the wrist.”  Another user commented, “It’s funny how those guys are like “what a skid lol” but like ... it’s a 14-year-old kid lol. What is it people say about the special olympics…” Few others said that developers need to be more vigilant and take security seriously. Another comment reads, “Hopefully manufacturers might start taking security seriously instead of churning out these vulnerable pieces of shit like it's going out of fashion (which it is).” To know more about this news, check out the report by ZDNet. WannaCry hero, Marcus Hutchins pleads guilty to malware charges; may face upto 10 years in prison FireEye reports infrastructure-crippling Triton malware linked to Russian government tech institute ASUS servers hijacked; pushed backdoor malware via software updates potentially affecting over a million users  
Read more
  • 0
  • 0
  • 3693
article-image-google-launches-beta-version-of-deep-learning-containers-for-developing-testing-and-deploying-ml-applications
Amrata Joshi
28 Jun 2019
3 min read
Save for later

Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read
Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing as well as for deploying machine learning applications. In March this year, Amazon also launched a similar offering, AWS Deep Learning Containers with Docker image support for easy deployment of custom machine learning (ML) environments. The major advantage of Deep Learning containers is its ability to test machine learning applications on-premises and it can quickly move them to cloud. Support for PyTorch, TensorFlow scikit-learn and R Deep Learning Containers, launched by Google Cloud Platform (GCP) can be run both in the cloud as well as on-premise. It has support for machine learning frameworks like PyTorch, TensorFlow 2.0, and TensorFlow 1.13. Deep Learning Containers by AWS has support for TensorFlow and Apache MXNet frameworks. Whereas Google’s ML containers don’t support Apache MXNet but they come with pre-installed PyTorch, TensorFlow scikit-learn and R. Features various tools and packages GCP Deep Learning Containers consists of several performance-optimized Docker containers that come along with various tools used for running deep learning algorithms. These tools include preconfigured Jupyter Notebooks that are interactive tools used to work with and share code, visualizations, equations and text. Google Kubernetes Engine clusters is also one of the tools and it used for orchestrating multiple container deployments. It also comes with access to packages and tools such as Nvidia’s CUDA, cuDNN, and NCCL. Docker images now work on cloud and on-premises  The docker images also work on cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm. Mike Cheng, software engineer at Google Cloud in a blog post, said, “If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime.” He further added, “Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE).” For more information, visit the AI Platform Deep Learning Containers documentation. Do Google Ads secretly track Stack Overflow users? CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks Curl’s lead developer announces Google’s “plan to reimplement curl in Libcrurl”    
Read more
  • 0
  • 0
  • 3948

article-image-apple-gets-into-chip-development-and-self-driving-autonomous-tech-business
Amrata Joshi
28 Jun 2019
3 min read
Save for later

Apple gets into chip development and self-driving autonomous tech business

Amrata Joshi
28 Jun 2019
3 min read
Apple recently hired Mike Filippo, lead CPU architect and one of the top chip engineers from ARM Holdings, which is a semiconductor and software design company. According to Mike Filippo’s updated profile in LinkedIn, he joined Apple in May as the architect and is working out of the Austin, Texas area.  He worked at ARM for ten years as the lead engineer for designing the chips used in most smartphones and tablets. Previously, he had also worked as the key designer at chipmakers Advanced Micro Devices and Intel Corp.  In a statement to Bloomberg, a spokesman from ARM said, “Mike was a long-time valuable member of the ARM community.” He further added, “We appreciate all of his efforts and wish him well in his next endeavor.” Apple’s A series chips that are used in the mobile devices use ARM technology. For almost two decades, the Mac computers had Intel processors. Hence, Filippo’s experience in these companies could prove to be a major plus point for Apple. Apple had planned to use its own chips in Mac computers in 2020, and further replace processors from Intel Corp with ARM architecture based processors.  Apple also plans to expand its in-house chip making work to new device categories like a headset that meshes augmented and virtual reality, Bloomberg reports. Apple acquires Drive.ai, an autonomous driving startup Apart from the chip making business there are reports of Apple racing in the league of self-driving autonomous technology. The company had also introduced its own self-driving vehicle called Titan, which is still a work in progress project.  On Wednesday, Axios reported that Apple acquired Drive.ai, an autonomous driving startup valued at $200 million. Drive.ai was on the verge of shutting down and was laying off all its staff. This news indicates that Apple is interested in tasting the waters of the self-driving autonomous technology and this move might help in speeding up the Titan project. Drive.ai was in search of a buyer since February this year and had also communicated with many potential acquirers before getting the deal cracked by Apple. The company also purchased Drive.ai's autonomous cars and other assets. The amount for which Apple has acquired Drive.ai is yet not disclosed, but as per a recent report, Apple was expected to pay an amount lesser than the $77 million invested by venture capitalists. The company has also hired engineers and managers from  Waymo and Tesla. Apple has recruited around five software engineers from Drive.ai as per a report from the San Francisco Chronicle. It seems Apple is mostly hiring people that are into engineering and product design. Declarative UI programming faceoff: Apple’s SwiftUI vs Google’s Flutter US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny Apple previews macOS Catalina 10.15 beta, featuring Apple music, TV apps, security, zsh shell, driverKit, and much more!  
Read more
  • 0
  • 0
  • 1949

article-image-axon-a-major-police-body-worn-camera-maker-says-no-to-facial-recognition-tech-in-its-devices-taking-ethics-advisory-panels-advice
Fatema Patrawala
28 Jun 2019
6 min read
Save for later

Axon, a major police body-worn camera maker, says no to facial recognition tech in its devices taking ethics advisory panel’s advice

Fatema Patrawala
28 Jun 2019
6 min read
Facial recognition is a contentious technology, to say the least, these days. Yesterday, Axon Enterprises formerly known as Taser International, the largest police body-camera making company in the US announced that it will not incorporate facial-recognition technology in its law-enforcement devices. https://twitter.com/wsisaac/status/1144199471657553920 This move coincides with growing public opposition to facial recognition technology, including from tech workers with some cities in the US mulling to ban its use. Last month, San Francisco became the first city to ban local government use of facial recognition, with Oakland, California, Somerville and Massachusetts, expected to enact similar legislation soon. California's state Legislature is also considering a bill that would ban the use of facial recognition on police body cameras. Axon came to this decision after reviewing a report published by its ethics advisory panel. The panel urged the company not to pair its best-selling body cameras with software that could allow officers to identify people in real time based on their faces. Last year in April, Axon established an AI and Policing Technology Ethics Board. The purpose of the board was to guide and advise the company on ethical issues related to the development and deployment of new artificial intelligence (AI) powered policing technologies. They would advise the company on products which are under consideration or development, and would not formally approve or reject any particular product. This is the first board report that provides thoughtful and actionable recommendations to Axon regarding face recognition technology. The board is an eleven-member external advisory body made up of experts from various fields including AI, computer science, privacy, law enforcement, civil liberties, and public policy. The company also emphasizes on the importance of having a diverse board for the guidance. The current board members are: Ali Farhadi, an Associate Professor in the Department of Computer Science and Engineering at the University of Washington Barry Friedman, an academic and one of the leading authorities on constitutional law, policing, criminal procedure, and federal courts Christy E. Lopez, a Georgetown Law Distinguished Visitor from Practice and former Deputy Chief in the DOJ Civil Rights Division Jeremy Gillula, Tech Projects Director at the Electronic Frontier Foundation Jim Bueermann President of the Police Foundation in Washington, DC Kathleen M. O’Toole, former Chief of Police for the Seattle Police Department Mecole Jordan, Executive Director at United Congress of Community and Religious Organization (UCCRO) Miles Brundage, AI Policy Research Fellow with the Strategic AI Research Center at FHI Tracy Ann Kosa, Senior Program Manager at Google Vera Bumpers, President at National Organization of Black Law Enforcement Executives (NOBLE) Walt McNeil, a Leon County Sheriff in Florida Here are few tweets from some of the board members as well. https://twitter.com/Miles_Brundage/status/1144234344250109952 https://twitter.com/Christy_E_Lopez/status/1144328348040085504   The members of the board cited facial recognition tech's accuracy problems, that it could lead to false identifications, particularly of women and people with dark skin. The technology also could lead to expanded government surveillance and intrusive police activity, the board said. 
More specifically, the findings of the report are as follows: [box type="shadow" align="" class="" width=""]Facial recognition simply isn’t good enough right now for it to be used ethically. Don’t talk about “accuracy,” talk about specific false negatives and positives, since those are more revealing and relevant. Any facial recognition model that is used shouldn’t be overly customizable, or it will open up the possibility of abuse. Any application of facial recognition should only be initiated with the consent and input of those it will affect. Until there is strong evidence that these programs provide real benefits, there should be no discussion of use. Facial recognition technologies do not exist, nor will they be used, in a political or ethical vacuum, so consider the real world when developing or deploying them.[/box] In a blog post on Axon's website, CEO Rick Smith said current facial recognition technology "raises serious ethical concerns." But Smith also said that his team of artificial intelligence researchers would "continue to evaluate the state of facial recognition technologies," leaving open the possibility of adding the software to body cameras in the future. Axon holds the largest market share among the body cam manufacturer in the United States; it  supplies cameras to 47 of the 60 biggest police agencies. However, it does not say how many police agencies are under the contract, but says that more than 200,000 of its cameras are in use around the country. As per reports from NBC, this move from Axon is appreciated by civil rights and privacy advocates ─ but with skepticism. They noted that real-time facial recognition on police body cameras is not considered feasible at the moment, and they expressed concern that Axon could reverse course once that changed. "This is ultimately an issue about the kind of society we want to live in, not about technical specs," said Harlan Yu, executive director of Upturn, which monitors police agencies' body camera policies, and who is an outspoken Axon critic. https://twitter.com/harlanyu/status/1144278309842370560 Rather than rely on pledges from technology companies, lawmakers should impose regulations on how facial recognition is used, the advocates said. "Axon leaves open the possibility that it may include face recognition in the future, which is why we need federal and state laws ─ like the current proposal in California ─ that would ban the use of facial recognition on body cameras altogether," said Jennifer Lynch, surveillance litigation director at the Electronic Frontier Foundation, a civil liberties nonprofit. Brendan Klare, CEO of Rank One Computing, whose facial recognition software is used by many police departments to identify people in still images, said to NBC that Axon's announcement is a way to make the company look good while making little substantive impact. "The more important thing to point out here is that face recognition on body cameras really isn't technically feasible right now anyways," Klare said. While Axon has very little to lose from its announcement, other players in this industry took this as an opportunity. A couple hours after Axon's announcement, the head of U.K. based company Digital Barriers, trying to break into the U.S. body camera market with its facial recognition-enabled devices ─ tweeted that Axon's move was good news for his company. 
https://twitter.com/UKZak/status/1144225152915378176

Amazon rejects all 11 shareholder proposals including the employee-led climate resolution at Annual shareholder meeting
Amazon patents AI-powered drones to provide ‘surveillance as a service’
San Francisco Board of Supervisors vote in favour to ban use of facial recognition tech in city
Cloud Hopper: The Chinese group that hacked eight major U.S. computer service firms to boost economic interests, Reuters reports

Vincy Davis
28 Jun 2019
5 min read
A recent Reuters report has revealed that a global hacking campaign known as Cloud Hopper, carried out by hackers working for China’s Ministry of State Security, broke into the networks of eight of the world’s biggest technology service providers in order to steal commercial secrets from their clients. The intrusions exploited these companies, their customers, and the Western system of technological defense, and the campaign is believed to have been conducted to boost Chinese economic interests.

How Cloud Hopper penetrated U.S. firms

Reuters reports that the Swedish telecoms equipment giant Ericsson was hacked at least five times by suspected Chinese cyber spies between 2014 and 2017. After successfully repelling a number of these attacks, Ericsson discovered about a year later that the intruders were back. This time, though, the path taken by the attackers was clear: the team of hackers had penetrated Hewlett Packard Enterprise’s cloud computing service and used it as a launchpad to attack its customers. They managed to steal reams of corporate and government secrets for years, reports Reuters.

In December 2018, the U.S. government accused the Chinese government of conducting an operation to steal Western intellectual property in order to advance China’s economic interests. It named hackers from the APT10 (Advanced Persistent Threat 10) hacking group as agents of China’s Ministry of State Security. The U.S. also accused two Chinese nationals of identity theft and fraud, but did not divulge any victim names. Around the same time, Reuters reported that Hewlett Packard Enterprise and IBM were among the victims of this hacking campaign. The public attribution garnered widespread international support: Germany, New Zealand, Canada, Britain, Australia and other allies issued statements backing the U.S. allegations against China.

Key findings from the Reuters investigation of Cloud Hopper

Two days ago, Reuters made its new investigation public, stating that along with Hewlett Packard Enterprise and IBM, the hackers had also managed to penetrate Fujitsu, Tata Consultancy Services, NTT Data, Dimension Data, Computer Sciences Corporation and DXC Technology. According to the report, the Chinese hackers used these eight companies’ platforms to attack their clients as well. Along with Ericsson, a company that competes with Chinese firms in the strategically critical mobile telecoms business, the victims include Sabre, the American leader in travel reservations and plane bookings, and Huntington Ingalls Industries, the largest shipbuilder for the U.S. Navy, which builds America’s nuclear submarines at a Virginia shipyard.

Though Reuters was unable to determine the full extent of the damage done by the hacking campaign, it notes that many victims are still unsure what information the hackers stole. “This was the theft of industrial or commercial secrets for the purpose of advancing an economy,” said former Australian National Cyber Security Adviser Alastair MacGibbon.

The campaign also highlights the security vulnerabilities posed by cloud computing services. Mike Rogers, former director of the U.S. National Security Agency, says: “For those that thought the cloud was a panacea, I would say you haven’t been paying attention.” According to Rob Joyce, a senior adviser to the U.S. National Security Agency, the companies were battling a skilled adversary.
Joyce says the hacking was “high leverage and hard to defend against.”

The Reuters report states that, according to Western officials, the attackers came from multiple Chinese government-backed hacking groups. The most feared was APT10, which U.S. prosecutors say was directed by the Ministry of State Security. National security experts say China’s intelligence services are comparable to the U.S. Central Intelligence Agency, capable of pursuing both electronic and human spying operations.

The Chinese government has firmly denied all accusations of involvement in the hacking. In a statement to Reuters, the Chinese Foreign Ministry said that “The Chinese government has never in any form participated in or supported any person to carry out the theft of commercial secrets.” The Foreign Ministry also called the charges “warrantless accusations” and urged the United States to “withdraw the so-called lawsuits against Chinese personnel, so as to avoid causing serious harm to bilateral relations.”

The U.S. Justice Department has called the Chinese denials “ritualistic and bogus”. DOJ Assistant Attorney General John Demers told Reuters, “The Chinese Government uses its own intelligence services to conduct this activity and refuses to cooperate with any investigation into thefts of intellectual property emanating from its companies or its citizens.”

To learn in detail how the Chinese cyber spies infiltrated Western businesses, head over to the Reuters investigation report.

Following EU, China releases AI Principles
As US-China tech cold war escalates, Google revokes Huawei’s Android support, allows only those covered under open source licensing
US blacklist China’s telecom giant Huawei over threat to national security
Jony Ive, Apple’s chief design officer departs after 27 years at Apple to form an independent design company; Apple to be a key client

Sugandha Lahoti
28 Jun 2019
5 min read
The man who shaped the iPhone, Jony Ive, is departing from his position as Apple’s chief design officer to start his own independent design company, LoveFrom. After 27 years at Apple, he will transition out later this year, and LoveFrom will formally launch in 2020. Apple will be one of the new company’s primary clients, and Ive will continue to work closely on projects for Apple.

“Jony is a singular figure in the design world and his role in Apple’s revival cannot be overstated, from 1998’s groundbreaking iMac to the iPhone and the unprecedented ambition of Apple Park, where recently he has been putting so much of his energy and care,” said Tim Cook, Apple’s CEO, in the official press release.

Ive has helped create some of Apple’s most recognized and popular products. He joined the firm in the early 1990s and began leading Apple’s design team in 1996. He became senior vice president of industrial design in 1997 and subsequently headed the industrial design team responsible for most of the company’s significant hardware products. During his stint at Apple, Ive worked on products including a wide range of Macs, the iPod, iPhone, iPad, Apple Watch, and more. He also had a hand in designing the company’s “spaceship” Apple Park campus and establishing the look and feel of Apple retail stores.

Since 2012, Ive had overseen design for both hardware and software at Apple, roles that had previously been separate. Apple said on Thursday the roles would again be split, with design team leaders Evans Hankey taking over as vice president of industrial design and Alan Dye becoming vice president of human interface design.

“This just seems like a natural and gentle time to make this change,” Ive said in an interview with the Financial Times. “After nearly 30 years and countless projects, I am most proud of the lasting work we have done to create a design team, process and culture at Apple that is without peer,” Ive said in the press release. “Today it is stronger, more vibrant and more talented than at any point in Apple’s history. The team will certainly thrive under the excellent leadership of Evans, Alan and Jeff, who have been among my closest collaborators. I have the utmost confidence in my designer colleagues at Apple, who remain my closest friends, and I look forward to working with them for many years to come.”

On the Ive-Jobs-Cook conundrum

Jony Ive and Steve Jobs shared a close relationship. According to Jobs biographer Walter Isaacson, the two would have lunch together every day and talk about design in the afternoon; Jobs considered Ive a “spiritual partner,” according to Isaacson’s book. After the death of Steve Jobs, there was speculation that Ive might one day move into the chief executive’s office. However, it was Tim Cook who took over, a leader more interested in managing supply chains than in driving new products and devices. Ive’s presence has helped deflect some criticism that the company lost some of its innovative flair after Jobs’ death.

John Gruber, a writer and the inventor of the Markdown markup language, wrote a blog post on Ive’s departure pointing out the big difference between Ive under Jobs and Ive under Cook. He says, “This news dropped like a bomb. As far as I can tell no one in the media got a heads up about this news. Ever since Steve Jobs died it’s seemed to me that Ive ran his own media interaction.” He further adds, “From a product standpoint, the post-Jobs era at Apple has been the Jony Ive era, not the Tim Cook era.
That’s not a knock on Tim Cook. To his credit, Tim Cook has never pretended to be a product guy. My gut sense for years has been that Ive without Jobs has been like McCartney without Lennon.”

On Ive working with Apple after his departure, Gruber writes, “This angle that he’s still going to work with Apple as an independent design firm seems like pure spin. You’re either at Apple or you’re not. Ive is out. Also, Apple’s hardware and industrial design teams work so far out that, even if I’m right and Ive is now effectively out of Apple, we’ll still be seeing Ive-designed hardware 5 years from now. It is going to take a long time to evaluate his absence. I don’t worry that Apple is in trouble because Jony Ive is leaving; I worry that Apple is in trouble because he’s not being replaced.”

People on Twitter seemed to agree with Gruber’s analysis:

https://twitter.com/waltmossberg/status/1144418270000402433
https://twitter.com/reckless/status/1144376472100061184
https://twitter.com/kanishkdudeja/status/1144497808017203200

Others celebrated Ive’s work and offered him their best wishes.

https://twitter.com/surabhi140/status/1144498594407276550
https://twitter.com/Shravster/status/1144498391147147265

Ive is the second major departure for Apple this year. In April, Apple retail chief Angela Ahrendts left the company; her departure drew mixed reactions from consumers and critics. Apple has, however, been recruiting some high-profile people this year. In April, the company took a major step toward strengthening its AI team by hiring Ian Goodfellow as the director of machine learning. Recently it also hired high-profile marketing exec Nick Law, previously the chief creative officer of Publicis Groupe, and recruited Michael Schwekutsch, the Tesla VP overseeing electric powertrains, as a Senior Director of Engineering at the Special Project Group.

WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, iPad and more.
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Apple proposes a “privacy-focused” ad click attribution model for counting conversions without tracking users.