
Tech News


Apple releases native SwiftUI framework with declarative syntax, live editing, and support of Xcode 11 beta

Vincy Davis
04 Jun 2019
4 min read
Yesterday, at the ongoing Worldwide Developers Conference (WWDC) 2019, Apple announced SwiftUI, a new framework for building user interfaces across all Apple platforms. Designed to reduce the amount of code developers have to write, SwiftUI offers declarative syntax, design tools, and live editing. It delivers native performance, integrates with the proven technologies and developer experiences of Apple platforms, and automatically supports Dynamic Type, Dark Mode, localization, and accessibility. The tools for SwiftUI development are only available when running on macOS 10.15 beta.

Declarative syntax

SwiftUI lets a developer simply state what a user interface should contain and have it rendered directly. For example, to build a list of items consisting of text fields, the developer just describes the alignment, font, and color for each field. This makes the code simpler and easier to read, saving time and reducing maintenance. SwiftUI also makes complex concepts like animation much simpler: developers can add animation to almost any control and choose from a collection of ready-to-use effects with only a few lines of code.
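To get a sense of the declarative style, here is a minimal sketch of such a list in SwiftUI (the view and its data are hypothetical, not taken from Apple's announcement):

```swift
import SwiftUI

// A hypothetical list view: each row's font and color are simply declared.
struct ContentView: View {
    let items = ["Alpha", "Beta", "Gamma"]

    var body: some View {
        List(items, id: \.self) { item in
            Text(item)
                .font(.headline)        // declare the font...
                .foregroundColor(.blue) // ...and the color; SwiftUI renders the rest
        }
    }
}
```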
Design tools

Xcode 11 beta release notes were also published during WWDC. Xcode 11 beta includes SDKs for iOS 13, macOS 10.15, watchOS 6, and tvOS 13, and it supports development with SwiftUI. It supports uploading apps from the Organizer window, and its editors can now be added to any window without needing an Assistant Editor. LaunchServices on macOS now respects the selected Xcode when launching Instruments, Simulator, and other developer tools embedded within Xcode. Using these intuitive new design tools in Xcode 11, SwiftUI interfaces can be built with drag and drop, dynamic replacement, and previews.

Drag and drop

A developer can arrange components within the user interface by simply dragging controls on the canvas: open an inspector to select font, color, alignment, and other design options, and easily rearrange controls with the cursor. Many of these visual editors are also available within the code editor, and controls can be dragged from the library and dropped on the design canvas or directly into the code.

Dynamic replacement

When working in the design canvas, every edit the developer makes stays completely in sync with the code in the adjoining editor. Xcode recompiles the changes instantly, so a developer can keep building an app while running it, like a 'live app'. With this feature, Xcode can also swap the edited code directly into the live app.

Previews

It is now possible to create one or many previews of any SwiftUI view, supply sample data, and configure almost anything the user can see, such as large fonts, localizations, or dark mode. Code is instantly visible as a preview, and any change made in the preview immediately appears in the code. Previews can also display the UI on any device and in any orientation.

Native on all Apple platforms

SwiftUI has been built so that all controls and platform-specific experiences are included in the code. An app can directly access the proven technologies of each platform with a small amount of code and an interactive design canvas. SwiftUI can be used to build user interfaces for any Apple device, including iPhone, iPad, iPod touch, Apple Watch, and Apple TV.

SwiftUI's striking features have made developers eager to try out the framework.

https://twitter.com/stroughtonsmith/status/1135647926439632899
https://twitter.com/fjeronimo/status/1135626395168563201
https://twitter.com/sascha_p/status/1135626257884782592
https://twitter.com/cocoawithlove/status/1135626052678574080

For more details on the SwiftUI framework, head over to the Apple Developers website.

Apple promotes app store principles & practices as good for developers and consumers following rising antitrust worthy allegations
Apple proposes a "privacy-focused" ad click attribution model for counting conversions without tracking users
Apple Pay will soon support NFC tags to trigger payments


Former npm CTO introduces Entropic, a federated package registry with a new CLI and much more!

Amrata Joshi
03 Jun 2019
3 min read
Yesterday, at JSConf EU 2019, the team behind Entropic announced the project: a federated package registry with a new CLI that works smoothly with the network. Entropic is Apache 2 licensed, and it mirrors all packages that users install from the legacy package manager. Entropic offers a new file-centric API and a content-addressable storage system that minimizes the amount of data that must be retrieved over a network. This file-centric approach also applies to the publication API.

https://www.youtube.com/watch?v=xdLMbvEc2zk

C J Silverio, Principal Engineer at Eaze, said during the announcement, "I actually believe in open source despite everything. I think it's good for us as human beings to give things away to each other. [...] Chris Dickinson and I would like to give something away to you all right now."

https://twitter.com/kosamari/status/1134876898604048384
https://twitter.com/i/moments/1135060936216272896
https://twitter.com/colestrode/status/1135320460072296449

Features of Entropic

Package specifications

All Entropic packages are namespaced, and a full Entropic package spec also includes the hostname of its registry. Package specifications are fully qualified with a namespace, hostname, and package name, taking the form namespace@host/pkg-name. For example, the ds CLI is specified by chris@entropic.dev/ds. If a user publishes a package to their local registry that depends on packages from other registries, the local instance will mirror all the packages on which the user's package depends. The team aims to keep each instance entirely self-sufficient, so installs aren't dependent on a resource that might vanish. Abandoned packages are moved to the abandonware namespace. Packages can be updated by any user in the package's namespace and can also have a list of maintainers.

The ds CLI

Entropic requires a new command-line client known as ds, or "entropy delta". According to the Entropic team, the CLI doesn't have a very sensible shell for running commands yet. Currently, users who want to install packages with ds can run ds build in a directory with a Package.toml to produce a ds/node_modules directory. The GitHub page reads, "This is a temporary situation!"

Entropic is best understood as an alternative to npm that seeks to address the limitations of the ownership model of npm, Inc. It aims to shift from centralized ownership to federated ownership, restoring power to the commons.

https://twitter.com/deluxee/status/1135489151627870209

To know more about this news, check out the GitHub page.

GitHub announces beta version of GitHub Package Registry, its new package management service
npm Inc. announces npm Enterprise, the first management code registry for organizations
Using the Registry and xlswriter modules


Following EU, China releases AI Principles

Vincy Davis
03 Jun 2019
5 min read
Last week, the Beijing Academy of Artificial Intelligence (BAAI) released a 15-point set of principles calling for artificial intelligence to be beneficial and responsible, termed the Beijing AI Principles. They are proposed as an initiative for the research, development, use, governance, and long-term planning of AI, and serve as a detailed guideline covering the research and development of AI, the use of AI, and the governance of AI.

The Beijing Academy of Artificial Intelligence is an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government. The principles were developed in collaboration with Peking University, Tsinghua University, the Institute of Automation and Institute of Computing Technology within the Chinese Academy of Sciences, and China's three big tech firms: Baidu, Alibaba, and Tencent.

Research and Development

Do Good: AI should be developed to benefit all humankind and the environment, and to enhance the well-being of society and ecology.

For Humanity: AI should always serve humanity and conform to human values as well as the overall interests of humankind. It should never go against, utilize, or harm human beings.

Be Responsible: Researchers developing AI should be aware of its potential ethical, legal, and social impacts and risks, and take concrete actions to reduce and avoid them.

Control Risks: AI systems should be developed in a way that ensures the security of data along with the safety and security of the AI system itself.

Be Ethical: AI systems should be trustworthy, meaning traceable, auditable, and accountable.

Be Diverse and Inclusive: The development of AI should reflect diversity and inclusiveness, so that nobody is easily neglected or underrepresented in AI applications.

Open and Share: An open AI platform will help avoid data and platform monopolies and share the benefits of AI development.

Use of AI

Use Wisely and Properly: Users of AI systems should have sufficient knowledge and ability to avoid possible misuse and abuse, so as to maximize the benefits and minimize the risks.

Informed Consent: AI systems should be developed so that in unexpected circumstances, users' own rights and interests are not compromised.

Education and Training: Stakeholders of AI systems should be educated and trained to help them adapt to the psychological, emotional, and technical impact of AI development.

Governance of AI

Optimizing Employment: Developers should take a cautious attitude towards the potential impact of AI on human employment. Explorations of human-AI coordination and new forms of work should be encouraged.

Harmony and Cooperation: These values should be embodied in the AI governance ecosystem, so as to avoid a malicious AI race, to share AI governance experience, and to jointly cope with the impact of AI under the philosophy of "Optimizing Symbiosis".

Adaptation and Moderation: Revisions of AI principles, policies, and regulations should be actively considered to keep them adjusted to the development of AI, to the benefit of society and nature.

Subdivision and Implementation: The various fields and scenarios of AI applications should be actively researched, so that more specific and detailed guidelines can be formulated.

Long-term Planning: Constant research on the potential risks of augmented intelligence, artificial general intelligence (AGI), and superintelligence should be encouraged, so that AI remains beneficial to society and nature in the future.

These principles are aimed at enabling the healthy development of AI in a way that supports the human community and a shared future, to the benefit of humankind and nature in general.

China releasing its own version of AI principles has come as a surprise to many; the country has long been infamous for using AI to monitor citizens. The move comes after the European High-Level Expert Group on AI released its 'Ethics guidelines for trustworthy AI' earlier this year. The Beijing AI Principles are also similar to the AI principles Google published last year, which likewise set guidelines for keeping AI applications beneficial for humans. By releasing its own version of AI principles, is China signalling to the world that it's ready to talk about AI ethics, especially after the U.S. blacklisted China's telecom giant Huawei as a threat to national security?

As expected, some users are surprised by China's sudden care for AI ethics.

https://twitter.com/sherrying/status/1133804303150305280
https://twitter.com/EBKania/status/1134246833100865536

Others are impressed with the move.

https://twitter.com/t_gordon/status/1135491979276685312
https://twitter.com/mgmazarakis/status/1134127349392465920

Visit the BAAI website to read more details of the Beijing AI Principles.

Samsung AI lab researchers present a system that can animate heads with one-shot learning
What can Artificial Intelligence do for the Aviation industry
Packt and Humble Bundle partner for a new set of artificial intelligence eBooks and videos


OpenSUSE may go independent from SUSE, reports LWN.net

Vincy Davis
03 Jun 2019
3 min read
Lately, the relationship between SUSE and the openSUSE community has been under discussion. Different options are being considered, among which the possibility of setting openSUSE up as an entirely independent foundation is gaining momentum. This would give openSUSE greater autonomy and control over its own future and operations. Though openSUSE board chair Richard Brown and SUSE leadership have publicly reiterated that SUSE remains committed to openSUSE, there has been a lot of concern about openSUSE's ability to operate in a sustainable way without being entirely beholden to SUSE.

The idea of an independent openSUSE foundation has popped up many times in the past. Former openSUSE board member Peter Linnell says, "Every time, SUSE has changed ownership, this kind of discussion pops up with some mild paranoia IMO, about SUSE dropping or weakening support for openSUSE". He also adds, "Moreover, I know SUSE's leadership cares a lot about having a healthy independent openSUSE community. They see it as important strategically and the benefits go both ways."

On the contrary, openSUSE board member Simon Lees says, "it is almost certain that at some point in the future SUSE will be sold again or publicly listed, and given the current good working relationship between SUSE and openSUSE it is likely easier to have such discussions now vs in the future should someone buy SUSE and install new management that doesn't value openSUSE in the same way the current management does."

In an interview with LWN, Brown described the conversation between SUSE and the broader community about the possibility of an independent foundation as frank, ongoing, and healthy. He also mentioned that everything from a fully independent openSUSE foundation to a tweaking of the current relationship that provides more legal autonomy for openSUSE can be considered. There is also the possibility of some form of organization run under the auspices of the Linux Foundation.

Issues faced by openSUSE

Brown has said, "openSUSE has multiple stakeholders, but it currently doesn't have a separate legal entity of its own, which makes some of the practicalities of having multiple sponsors rather complicated". Under the current arrangement, it is difficult for openSUSE to directly handle financial contributions, yet sponsorship and the ability to raise funding have become a prerequisite for openSUSE's survival. Brown comments, "openSUSE is in continual need of investment in terms of both hardware and manpower to 'keep the lights on' with its current infrastructure".

Another concern has been the tricky collaboration between the community and the company across all SUSE products; in particular, Brown cited issues with openSUSE Kubic and the SUSE Container-as-a-Service Platform. With a more distinctly separate openSUSE, the implication and the hope, according to LWN, is that the openSUSE project would gain increased autonomy over its governance and its interaction with the wider community.

Though different models for openSUSE's governance are under consideration, Brown has said, "The current relationship between SUSE and openSUSE is unique and special, and I see these discussions as enhancing that, and not necessarily following anyone else's direction". No hard deadline has been declared.

For more details, head over to the LWN article.

SUSE is now an independent company after being acquired by EQT for $2.5 billion
389 Directory Server set to replace OpenLDAP as Red Hat and SUSE withdraw support for OpenLDAP in their Enterprise Linux offerings
Salesforce open sources 'Lightning Web Components framework'


DeepMind's AI uses reinforcement learning to defeat humans in multiplayer games

Savia Lobo
03 Jun 2019
3 min read
Recently, researchers from DeepMind released research in which they designed AI agents that can team up to play Quake III Arena's Capture the Flag mode. The highlight of this research is that the agents were able to team up against human players or play alongside them, tailoring their behavior accordingly.

We have previously seen AI agents beat humans in video games like StarCraft II and Dota 2, but those settings did not involve agents playing in a complex environment or requiring teamwork and interaction between multiple players. In the research paper, "Human-level performance in 3D multiplayer games with population-based reinforcement learning", a group of 30 agents was collectively trained to play five-minute rounds of Capture the Flag, a game mode in which teams must retrieve flags from their opponents while retaining their own.

https://youtu.be/OjVxXyp7Bxw

While playing rounds of Capture the Flag, the DeepMind AI was able to outperform human teammates, with its reaction time slowed down to that of a typical human player. And rather than only teaming up against groups of human players, the AI was able to play alongside them as well. Using reinforcement learning, the AI taught itself the game, picking up the rules over thousands of matches in randomly generated environments.

"No one has told [the AI] how to play the game — only if they've beaten their opponent or not. The beauty of using [an] approach like this is that you never know what kind of behaviors will emerge as the agents learn," said Max Jaderberg, a research scientist at DeepMind who recently worked on AlphaStar, a machine learning system that recently bested a human team of professionals at StarCraft II.

Greg Brockman, a researcher at OpenAI, told The New York Times, "Games have always been a benchmark for A.I. If you can't solve games, you can't expect to solve anything else." According to The New York Times, "such skills could benefit warehouse robots as they work in groups to move goods from place to place, or help self-driving cars navigate en masse through heavy traffic."

Talking about limitations, the researchers say, "Limitations of the current framework, which should be addressed in future work, include the difficulty of maintaining diversity in agent populations, the greedy nature of the meta-optimization performed by PBT, and the variance from temporal credit assignment in the proposed RL updates."

"Our work combines techniques to train agents that can achieve human-level performance at previously insurmountable tasks. When trained in a sufficiently rich multiagent world, complex and surprising high-level intelligent artificial behavior emerged", the paper states.

To know more about this news in detail, read the official research paper in Science.

OpenAI Five beats pro Dota 2 players; wins 2-1 against the gamers
Samsung AI lab researchers present a system that can animate heads with one-shot learning
Amazon is reportedly building a video game streaming service, says Information


Facebook argues it didn’t violate users' privacy rights and thinks there's no expectation of privacy because there is no privacy on social media

Amrata Joshi
03 Jun 2019
2 min read
After more than a year of scandals and data breaches, Facebook is leaving no stone unturned in its efforts to paint itself in the right, an exercise in ethics washing. The company has also been on the radar of the FTC and is expected to be fined around $5 billion over its user data practices.

Last week, Facebook argued that it didn't violate users' privacy rights because there is no expectation of privacy when using social media, and it is seeking to dismiss a lawsuit related to the Cambridge Analytica scandal on those grounds. Facebook counsel Orin Snyder said during a pretrial hearing on the motion to dismiss, "There is no invasion of privacy at all because there is no privacy."

Facebook didn't deny that third parties accessed users' data, but the company told US District Judge Vince Chhabria that there is no "reasonable expectation of privacy" on Facebook or any other social media site. The argument sits oddly next to the company's efforts to convince people that it knows how to protect their personal information: this month, Facebook COO Sheryl Sandberg said that she and Mark Zuckerberg will do "whatever it takes" to keep people safe on Facebook. Meanwhile, calls to curb Zuckerberg's control over Facebook are doing the rounds as the issues around data privacy and security continue.

It seems Chhabria will make sure that at least some of the lawsuit continues, saying in an order before the hearing (PDF) that the plaintiffs should expect the court to accept their argument that private information was disclosed without express consent.

Facebook releases Pythia, a deep learning framework for vision and language multimodal research
Facebook tightens rules around live streaming in response to the Christchurch terror attack
Facebook again, caught tracking Stack Overflow user activity and data

Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services

Sugandha Lahoti
03 Jun 2019
4 min read
Update: This article has been updated to include Google's response to Sunday's service disruption.

Over the weekend, Google Cloud suffered a major outage, taking down a number of Google services (YouTube, G Suite, Gmail, and more) as well as services that depend on Google, such as Snapchat, Nest, Discord, and Shopify. The problem was first reported by East Coast users in the U.S. around 3 PM ET / 12 PM PT, and the company resolved it after more than four hours. According to Downdetector, users in the UK, France, Austria, Spain, and Brazil also reported outages.

https://twitter.com/DrKMhana/status/1135291239388143617

In a statement posted to its Google Cloud Platform status page, the company said it was experiencing a multi-region issue with Google Compute Engine. "We are experiencing high levels of network congestion in the eastern USA, affecting multiple services in Google Cloud, GSuite, and YouTube. Users may see a slow performance or intermittent errors. We believe we have identified the root cause of the congestion and expect to return to normal service shortly," the company said.

The issue was sorted four hours after Google acknowledged the downtime. "The network congestion issue in the eastern USA, affecting Google Cloud, G Suite, and YouTube has been resolved for all affected users as of 4:00 pm US/Pacific," the company said in a statement. "We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a detailed report of this incident once we have completed our internal investigation. This detailed report will contain information regarding SLA credits."

The outage caused some real pain. Not only did it take down some of netizens' most-used apps (YouTube and Snapchat); people also reported being unable to use their Nest-controlled devices, for instance to turn on their AC or open their "smart" locks to let people into the house.

https://twitter.com/davidiach/status/1135302533151436800

Even Shopify experienced problems because of the Google outage, which prevented some stores (both brick-and-mortar and online) from processing credit card payments for hours.

https://twitter.com/LarryWeru/status/1135322080512270337

The dependency of the world's most popular applications on a single backend in the hands of one company is a bit startling, and it is surprising how many companies rely on just one hosting service. At the very least, companies should have a contingency plan in case the services go down again.

https://twitter.com/zeynep/status/1135308911643451392
https://twitter.com/SeverinAlexB/status/1135286351962812416

Another issue raised was that Google Cloud randomly going down suggests cloud-based gaming isn't ready for mass audiences yet. At this year's Game Developers Conference (GDC), Google marked its entry into the game industry with Stadia, its new cloud-based platform for streaming games, launching later this year in select countries including the U.S., Canada, the U.K., and Europe.

https://twitter.com/BrokenGamezHDR/status/1135318797068488712
https://twitter.com/soul_societyy/status/1135294007515500549

On Monday, Google released an apologetic update on the outage, outlining the incident, its detection, and the response. In essence, the root cause of Sunday's disruption was a configuration change that was intended for a small number of servers in a single region. The configuration was incorrectly applied to a larger number of servers across several neighboring regions, and it caused those regions to stop using more than half of their available network capacity. The network traffic to and from those regions then tried to fit into the remaining network capacity, but it did not fit. The network became congested, and Google's networking systems correctly triaged the traffic overload and dropped larger, less latency-sensitive traffic in order to preserve smaller latency-sensitive traffic flows, much as urgent packages may be couriered by bicycle through even the worst traffic jam. Google's engineering teams are now conducting a thorough post-mortem to understand all the contributing factors behind both the network capacity loss and the slow restoration.

Facebook family of apps hits 14 hours outage, longest in its history
Worldwide Outage: YouTube, Facebook, and Google Cloud goes down affecting thousands of users
YouTube went down, Twitter flooded with deep questions, YouTube back and everyone is back to watching cat videos.


Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available

Amrata Joshi
03 Jun 2019
3 min read
Last week, Amazon Web Services announced the general availability of Amazon Managed Streaming for Apache Kafka (Amazon MSK). Amazon MSK makes it easy for developers to build and run applications based on Apache Kafka without having to manage the underlying infrastructure. It is fully compatible with Apache Kafka, which lets customers migrate their on-premises or Amazon Elastic Compute Cloud (Amazon EC2) clusters to Amazon MSK without code changes.

Customers use Apache Kafka to capture and analyze real-time data streams from a range of sources, including database logs, IoT devices, financial systems, and website clickstreams.
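Because MSK is wire-compatible with Apache Kafka, applications talk to it through standard Kafka clients. As a rough sketch, here is a producer publishing a clickstream event with the kafka-python library (the broker address and topic name are hypothetical):

```python
from kafka import KafkaProducer

# Connect to the MSK cluster's bootstrap brokers (hypothetical address).
producer = KafkaProducer(
    bootstrap_servers="b-1.mycluster.abc123.kafka.us-east-1.amazonaws.com:9092",
)

# Publish a clickstream event to a hypothetical topic.
producer.send("website-clickstream", b'{"page": "/checkout", "user": 42}')
producer.flush()  # block until the message is actually delivered
```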
Many customers choose to self-manage their Apache Kafka clusters and end up spending time and money securing, scaling, patching, and ensuring high availability for Apache Kafka and Apache ZooKeeper clusters. Amazon MSK combines these attributes of Apache Kafka with the availability, security, and scalability of AWS. Customers can now create Apache Kafka clusters designed for high availability, spanning multiple Availability Zones (AZs), with a few clicks. Amazon MSK also monitors server health and automatically replaces servers when they fail, and customers can easily scale out cluster storage in the AWS management console to meet changes in demand. Amazon MSK runs the Apache ZooKeeper nodes at no additional cost and provides multiple levels of security for Apache Kafka clusters, including VPC network isolation and AWS Identity and Access Management (IAM). It lets customers continue to run applications built on Apache Kafka and use Apache Kafka-compatible tools and frameworks.

Rajesh Sheth, General Manager of Amazon MSK at AWS, wrote to us in an email, "Customers who are running Apache Kafka have told us they want to spend less time managing infrastructure and more time building applications based on real-time streaming data." He further added, "Amazon MSK gives these customers the ability to run Apache Kafka without having to worry about managing the underlying hardware, and it gives them an easy way to integrate their Apache Kafka applications with other AWS services. With Amazon MSK, customers can stand up Apache Kafka clusters in minutes instead of weeks, so they can spend more time focusing on the applications that impact their businesses."

Amazon MSK is currently available in US East (Ohio), US East (N. Virginia), US West (Oregon), EU (Ireland), EU (Frankfurt), EU (Paris), EU (London), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney), and will expand to additional AWS Regions within the next year.

Amazon rejects all 11 shareholder proposals including the employee-led climate resolution at Annual shareholder meeting
Amazon to roll out automated machines for boxing up orders: Thousands of workers' job at stake
Amazon resists public pressure to re-assess its facial recognition business; "failed to act responsibly", says ACLU


Storm 2.0.0 releases with Java enabled architecture, new core and streams API, and more

Vincy Davis
03 Jun 2019
4 min read
Last week, the Apache Storm PMC announced the release of Storm 2.0.0. The major highlight of this release is that Storm has been re-architected in pure Java; previously, a large part of Storm's core functionality was implemented in Clojure. The release also includes significant performance improvements, a new Streams API, windowing enhancements, and Kafka integration changes.

New Architecture Implemented in Java

With this release, Storm's core functionality has been reimplemented in pure Java. The new implementation has improved performance significantly and made the internal APIs more maintainable and extensible. Clojure often posed a barrier to entry for new contributors, so Storm's codebase will now be more accessible to developers who don't want to learn Clojure in order to contribute.

New High-Performance Core

Storm 2.0.0 has a new core featuring a leaner threading model, a blazing fast messaging subsystem, and a lightweight back pressure model. It has been designed to push the boundaries on throughput, latency, and energy consumption while maintaining backward compatibility. This also makes Storm 2.0 the first streaming engine to break the 1-microsecond latency barrier.

New Streams API

This version has a new typed API that expresses streaming computations more easily using functional-style operations. It builds on top of Storm's core spouts and bolts and automatically fuses multiple operations together, which helps optimize the pipeline.
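The Storm documentation illustrates the Streams API with word-count style pipelines; the sketch below follows that pattern (the spout class comes from the storm-starter examples, and exact signatures should be checked against the Storm 2.0.0 streams documentation):

```java
import java.util.Arrays;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.starter.spout.RandomSentenceSpout;
import org.apache.storm.streams.Pair;
import org.apache.storm.streams.StreamBuilder;
import org.apache.storm.streams.operations.mappers.ValueMapper;

public class StreamsWordCount {
    public static void main(String[] args) throws Exception {
        StreamBuilder builder = new StreamBuilder();

        // Declare a typed pipeline: split sentences into words, pair each
        // word with a count of 1, aggregate by key, and print the results.
        builder.newStream(new RandomSentenceSpout(), new ValueMapper<String>(0))
               .flatMap(sentence -> Arrays.asList(sentence.split(" ")))
               .mapToPair(word -> Pair.of(word, 1))
               .countByKey()
               .print();

        // Run locally for a few seconds; the builder fuses the declared
        // operations into spouts and bolts behind the scenes.
        try (LocalCluster cluster = new LocalCluster()) {
            cluster.submitTopology("streams-word-count", new Config(), builder.build());
            Thread.sleep(10_000);
        }
    }
}
```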
Windowing Enhancements

Storm 2.0.0's windowing API can now save and restore window state to the configured state backend, enabling larger continuous windows. Window boundaries can now also be accessed via the APIs.

Kafka Integration Changes

Removal of storm-kafka: Because Kafka deprecated the underlying client library, the storm-kafka module has been removed. Users will have to move to the storm-kafka-client module, which uses Kafka's 'kafka-clients' library for integration.

Move to the KafkaConsumer.assign API: The Kafka-provided subscription mechanism used in Storm 1.x has been removed entirely in 2.0.0. The storm-kafka-client subscription interface has also been removed, due to the limited control it offered over subscription behavior; it has been replaced with the 'TopicFilter' and 'ManualPartitioner' interfaces. Users with custom subscriptions should head over to the storm-kafka-client documentation, which describes how to customize assignment.

Other Kafka Highlights: The KafkaBolt now allows you to specify a callback that will be called when a batch is written to Kafka. The FirstPollOffsetStrategy behavior has been made consistent between the non-Trident and Trident spouts. Storm-kafka-client now has a transactional non-opaque Trident spout.

Users have also been notified that the 1.0.x version line will no longer be maintained and are strongly encouraged to upgrade to a more recent release. Java 7 support has been dropped; Storm 2.0.0 requires Java 8.

Reaction to the changes in Storm 2.0.0 has been mixed. Some users are unhappy that the Clojure implementation has been dropped. As one user on Hacker News comments, "My team has been using Clojure for close to a decade, and we found the opposite to be the case. While the pool of applicants is smaller, so is the noise ratio. Clojure being niche means that you get people who are willing to look outside the mainstream, and are typically genuinely interested in programming. In case of Storm, Apache commons is run by Java devs who have zero interest in learning Clojure. So, it's not surprising they would rewrite Storm in their preferred language."

Some users think the move away from Clojure shows that developers nowadays are unwilling to learn new things. As a user on Hacker News comments, "There is a false cost assigned to learning a language. Developers are too unwilling to even try stepping beyond the boundaries of the first thing they learned. The cost is always lower than they may think, and the benefits far surpassing what they may think. We've got to work at showing developers those benefits early; it's as important to creating software effectively as any other engineer's basic toolkit."

Others are quite happy with Storm moving to Java. A user on Reddit said, "To me, this makes total sense as the project moved to Apache. Obviously, much more people will be able to consider contributing when it's in Java. Apache goal is sustainability and long-term viability, and Java would work better for that."

To download Storm 2.0.0, visit the Storm downloads page.

Walkthrough of Storm UI
Storing Apache Storm data in Elasticsearch
Getting started with Storm Components for Real Time Analytics


Safari Technology Preview release 83 now available for macOS Mojave and macOS High Sierra

Amrata Joshi
03 Jun 2019
2 min read
Last week, the team at WebKit announced that Safari Technology Preview release 83 is now available for macOS Mojave and macOS High Sierra. Safari Technology Preview is a version of Safari for macOS that includes an in-development version of the WebKit browser engine.

What's new in Safari Technology Preview release 83?

Web Authentication

This release comes with Web Authentication enabled by default on macOS. Web Authentication has been changed to cancel the pending request when a new request is made, and to return InvalidStateError to sites whenever authenticators return such an error.

Pointer Events

This release fixes the issue with the isPrimary property of pointercancel events, as well as the issue with calling preventDefault() on pointerdown.

Rendering

The team has implemented backing-sharing in compositing layers, allowing overlapping layers to paint into the backing store of another layer, and has fixed the rendering of backing-sharing layers with transforms. The issue with layer-related flashing with composited overflow: scroll has also been fixed.

CSS

In this release, "clearfix" with display: flow-root has been implemented, along with page-break-* and -webkit-column-break-*. The issue with font-optical-sizing applying the wrong variation value has been fixed. CSS grid support has also been updated.
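For context, display: flow-root is the standards-based replacement for the old clearfix hack: a container declared this way establishes a new block formatting context and therefore encloses its floated children. A minimal sketch (class names are hypothetical):

```css
/* Before: .media needed a ::after clearfix hack to wrap its floats. */
.media {
  display: flow-root; /* the container now encloses the floated image */
}

.media img {
  float: left;
}
```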
WebRTC

This release now allows sequential playback of media files, and the issue with video streams freezing has been fixed.

Major Bug Fixes

In this release, the CPU timeline and memory timeline bars have been fixed, as have the colors in the network table waterfall container and the issue with context menu items in the DOM tree.

To know more about this news, check out the release notes.

Chrome, Safari, Opera, and Edge to make hyperlink auditing compulsorily enabled
Safari Technology Preview 71 releases with improvements in Dark Mode, Web Inspector, WebRTC, and more!
Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users

Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users

Sugandha Lahoti
03 Jun 2019
6 min read
Update: Opera, Brave, and Vivaldi have ignored Chrome's anti-ad-blocker changes, despite the shared codebase. On June 12, Google published a blog post clarifying its intentions for the ad blocking extension system, saying it isn't trying to kill ad blockers: "This has been a controversial change since the Web Request API is used by many popular extensions, including ad blockers. We are not preventing the development of ad blockers or stopping users from blocking ads. Instead, we want to help developers, including content blockers, write extensions in a way that protects users' privacy."

In January, Chrome announced changes to its extension system in Manifest V3 that could cripple ad blockers. Even though the proposal received overwhelmingly negative feedback, Google is standing firm on the ad blocking changes. Last week, the company shared a statement on Google Groups saying that current ad blocking capabilities will be restricted: Chrome will still be able to block unwanted content, but this capability will be limited to paid, enterprise users of Chrome. "Chrome is deprecating the blocking capabilities of the webRequest API in Manifest V3, not the entire webRequest API (though blocking will still be available to enterprise deployments)."

What is the Manifest V3 controversy?

Google developers have introduced an alternative to the webRequest API named the declarativeNetRequest API, which supplants the blocking version of the webRequest API. declarativeNetRequest is a less effective, rules-based system. Chrome currently imposes a limit of 30,000 rules, while most popular ad blocking rules lists use almost 75,000 rules. Google claimed to be looking at raising this number but made no promise: "We are planning to raise these values but we won't have updated numbers until we can run performance tests to find a good upper bound that will work across all supported devices."

According to Manifest V3, the declarativeNetRequest API will be treated as the primary content-blocking API in extensions. Chrome developers cited two reasons for the change: performance, and better privacy guarantees for users. The new API allows extensions to tell Chrome what to do with a given request, rather than have Chrome forward the request to the extension, which lets Chrome handle a request synchronously.
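For reference, declarativeNetRequest rules are static JSON entries along the following lines (a minimal sketch; the filter pattern is hypothetical), which is what makes the rule-count cap such a hard limit for list-based blockers:

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image"]
  }
}
```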
Regarding the claimed performance benefit, however, a study published on WhoTracks.me analyzed the network performance of the most commonly used ad blockers: uBlock Origin, Adblock Plus, Brave, DuckDuckGo, and Cliqz's Ghostery. The study revealed that these content blockers (DuckDuckGo excepted) have sub-millisecond median decision time per request, an overhead users will not notice. Additionally, the efficiency of content blockers is continuously being improved with innovative approaches or with the help of technologies like WebAssembly.

A uBlock maintainer had earlier reported an issue on the Chromium bug tracker for this feature: "If this (quite limited) declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two content blockers I have maintained for years, uBlock Origin ("uBO") and uMatrix, can no longer exist."

In its update, Google wrote that appropriately permissioned extensions will still be able to observe network requests using the webRequest API, which it insisted is "foundational for extensions that modify their behavior based on the patterns they observe at runtime."

Now Raymond Hill, the lead developer of uBlock Origin, has commented on the situation. Losing the ability to block content with the webRequest API is his main concern. "This breaks uBlock Origin and uMatrix, [which] are incompatible with the basic matching algorithm [Google] picked, ostensibly designed to enforce EasyList-like filter lists," he explained in an email to The Register. "A blocking webRequest API allows open-ended content blocker designs, not restricted to a specific design and limits dictated by the same company which states that content blockers are a threat to its business."

He also called out Google's business model on uBlock Origin's GitHub: "The blocking ability of the webRequest API caused Google to yield control of content blocking to content blockers. Now that Google Chrome is the dominant browser, it is in a better position to shift the optimal point between the two goals which benefits Google's primary business. The deprecation of the blocking ability of the webRequest API is to gain back this control, and to further now instrument and report how web pages are filtered since now the exact filters which are applied to web page is information which will be collectable by Google Chrome."

For a number of web users, this was the last straw, with many saying they'd be moving on from Chrome to other, more privacy-friendly browsers. One comment reads, "If you use an iOS device, Safari is awesome. The integration between all your hardware devices syncing passwords, tabs, bookmarks, reading list, etc. kicks ass. That's all not to mention its excellent built-in privacy features and that it's really really fast."

Another comment reads, "I used to have Firefox. When I heard that even Microsoft was going to use chromium I realized, Firefox is literally the last front! I installed Firefox and started using it as my main browser." Another says, "Genuinely, most people are choosing between privacy and convenience. And with Firefox you don't need to choose."

Mozilla's Firefox has taken this opportunity to attract Chrome users with a new page detailing how to switch from Chrome to Firefox: "Switching to Firefox is fast, easy and risk-free. Firefox imports your bookmarks, autofill, passwords and preferences from Chrome." The latest Firefox release also comes with a new feature that can help users block fingerprinting by ad trackers.

The Brave browser also tweeted about Chrome's development, stating it will block ads regardless of Chrome's decisions.

https://twitter.com/brave/status/1134182650615173120

Users also appreciated Brave's privacy features.

https://twitter.com/jenzhuscott/status/1134035348240109568

Chrome software security engineer Chris Palmer took to Twitter to claim the move was intended to help improve the end-user browsing experience, and that paid enterprise users would be exempt from the changes.

https://twitter.com/fugueish/status/1133851275794059265

Chrome security leader Justin Schuh also said the changes were driven by privacy and security concerns.

https://twitter.com/justinschuh/status/1134092257190064128

Opera, Brave, and Vivaldi have ignored Chrome's anti-ad-blocker changes, despite having a shared codebase.

https://twitter.com/opera/status/1137717494733508609
https://twitter.com/brave/status/1134182650615173120
https://twitter.com/vivaldibrowser/status/1136204715786719232

Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument
Flutter gets new set of lint rules to build better Chrome OS apps
Chromium developers propose an alternative to webRequest API that could result in existing ad blockers' end


Unity Editor will now officially support Linux

Vincy Davis
31 May 2019
2 min read
Yesterday, Martin Best, Senior Technical Product Manager at Unity, briefly announced that the Unity Editor will now officially support Linux. Currently, the Editor is available only in 'preview' for Ubuntu and CentOS, but Best states that it will be fully supported by Unity 2019.3. He also notes that before opening projects via the Linux Editor, developers should make sure their third-party tools support it as well.

Unity has been offering an unofficial, experimental Unity Editor for Linux since 2015. With the 2019.1 release in April this year, the Unity Editor for Linux moved from experimental status into preview mode; now the support has been made official. Best explains in the blog post that the "growing number of developers using the experimental version, combined with the increasing demand of Unity users in the Film and Automotive, Transportation, and Manufacturing (ATM) industries means that we now plan to officially support the Unity Editor for Linux."

The Unity Editor for Linux will be accessible to all Personal (free), Plus, and Pro license users, starting with Unity 2019.1. It will be officially supported on the following configurations:

Ubuntu 16.04, 18.04
CentOS 7
x86-64 architecture
Gnome desktop environment running on top of the X11 windowing system
Nvidia official proprietary graphics driver and AMD Mesa graphics driver
Desktop form factors, running on device/hardware without emulation or compatibility layer

Users are quite happy that the Unity Editor will now officially support Linux. A user on Reddit comments, "Better late than never." Another user added, "Great news! I just used the editor recently. The older versions were quite buggy but the latest release feels totally on par with Windows. Excellent work Unity Linux team!"

https://twitter.com/FourthWoods/status/1134196011235237888
https://twitter.com/limatangoalpha/status/1134159970973470720

For the latest builds, check out the Unity Hub. To give feedback on the Unity Editor for Linux, head over to the Unity Forum page.

Obstacle Tower Environment 2.0: Unity announces Round 2 of its 'Obstacle Tower Challenge' to test AI game players.
Unity has launched the 'Obstacle Tower Challenge' to test AI game players
Unity updates its TOS, developers can now use any third party service that integrate into Unity


PyPI announces 2FA for securing Python package downloads

Savia Lobo
31 May 2019
2 min read
Yesterday, Python's core development team announced that PyPI now offers two-factor authentication, to increase the security of Python package downloads and reduce the risk of unauthorized account access. 2FA is being introduced as a login security option on the Python Package Index. "We encourage project maintainers and owners to log in and go to their Account Settings to add a second factor", the team wrote on the official blog. The blog also mentions that the project is funded by a grant from the Open Technology Fund and coordinated by the Packaging Working Group of the Python Software Foundation.

PyPI currently supports a single 2FA method, which generates codes through a Time-based One-time Password (TOTP) application. After users set up 2FA on their PyPI account, they must provide a TOTP code (along with their username and password) to log in. To use 2FA on PyPI, users will therefore need an application (usually a mobile phone app) that generates authentication codes.
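TOTP is an open standard: the site and the authenticator app share a secret, and codes are derived from that secret plus the current time. The pyotp library shows the mechanics in a few lines (a sketch of the protocol itself, not of PyPI's implementation):

```python
import pyotp

# The shared secret that would be encoded in the QR code a user scans
# when enabling 2FA (generated locally here for illustration).
secret = pyotp.random_base32()

totp = pyotp.TOTP(secret)
code = totp.now()  # the 6-digit code the authenticator app displays
print(code)

# The server verifies the submitted code against the same shared secret.
assert totp.verify(code)
```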
Note that 2FA only affects login via the website; it safeguards against malicious changes to project ownership, deletion of old releases, and account takeovers. Package uploads will continue to work without 2FA codes being provided. The developers said they are working on WebAuthn-based multi-factor authentication, which will allow the use of Yubikeys as a second factor, for example. They further plan to add API keys for package upload, along with an advanced audit trail of sensitive user actions.

A user on Hacker News answered the question, "Will I lock myself out of my account if I lose my phone?" by saying, "You won't lock yourself out. I just did a quick test and if you reset your password (via an email link) then you are automatically logged in. At this point you can even disable 2FA. So 2FA is protecting against logging in with a stolen password, but it's not protecting against logging in if you have access to the account's email account. Whether or not that's the intended behaviour is another question…"

To know more about the ongoing security measures, visit Python's official blog post.

Salesforce open sources 'Lightning Web Components framework'
Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher
Which Python framework is best for building RESTful APIs? Django or Flask?

Salesforce open sources ‘Lightning Web Components framework’

Savia Lobo
30 May 2019
4 min read
Yesterday, the developers at Salesforce open sourced the Lightning Web Components framework, a new JavaScript framework that leverages the web standards breakthroughs of the last five years. Open sourcing allows developers to contribute to the roadmap and to use the framework regardless of whether they are building applications on Salesforce or on any other platform. Lightning Web Components was first introduced in December 2018.

The developers note in their official blog post, "The last five years have seen an unprecedented level of innovation in web standards, mostly driven by the W3C/WHATWG and the ECMAScript Technical Committee (TC39): ECMAScript 6, 7, 8, 9 and beyond, Web components, Custom elements, Templates and slots, Shadow DOM, etc." This wave of standards work has dramatically transformed the web stack: many features that once required frameworks are now standard. Lightning Web Components was, the developers say, "born as a modern framework built on the modern web stack".

The Lightning Web Components framework includes three key parts:

The Lightning Web Components framework, the framework's engine.
The Base Lightning Components, a set of over 70 UI components all built as custom elements.
Salesforce Bindings, a set of specialized services that provide declarative and imperative access to Salesforce data and metadata, data caching, and data synchronization.

The Lightning Web Components framework has no dependencies on the Salesforce platform; the Salesforce-specific services are built on top of it. This layered architecture means you can use the framework to build web apps that run anywhere. The benefits include:

You only need to learn a single framework.
You can share code between apps.
As Lightning Web Components is built on the latest web standards, you know you are using a cutting-edge framework based on the latest patterns and best practices.
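The programming model is a thin layer over standard custom elements: a component is a JavaScript class paired with an HTML template. A minimal sketch based on the framework's public documentation (the component and property names here are hypothetical):

```javascript
// helloWorld.js
import { LightningElement } from 'lwc';

// The class holds the component's state; the template below renders it.
export default class HelloWorld extends LightningElement {
    greeting = 'World';
}
```

```html
<!-- helloWorld.html -->
<template>
    <p>Hello, {greeting}!</p>
</template>
```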
Not all users are convinced, with some finding Lightning comparatively slow. One user wrote on Hacker News, "the Lightning Experience always felt non-performant compared to the traditional server-rendered pages. Things always took a noticeable amount of time to finish loading. Even though the traditional interface is, by appearance alone, quite traditional, as least it felt fast. I don't know if Lightning's problems were with poor performing front end code, or poor API performance. But I was always underwhelmed when testing the SPA version of Salesforce."

Another user wrote, "One of the bigger mistakes Salesforce made with Lightning is moving from purely transactional model to default-cached-no-way-to-purge model. Without letting a single developer to know that they did it, what are the pitfalls or how to disable it (you can't). WRT Lightning motivation, sounds like a much better option would've been supplement older server-rendered pages with some JS, update the stylesheets and make server language more useable. In fact server language is still there, still heavily used and still lacking expressiveness so badly that it's 10x slower to prototype on it rather than client side JS…"

In support of Salesforce, a user on Hacker News explained the reasoning behind the framework. He said, "At its core, Salesforce is a platform. As such, our customers expect their code to work for the long run (and backwards compatibility forever). Not owning the framework fundamentally means jeopardizing our business and our customers, since we can't control our future. We believe the best way to future-proof our platform is to align with standards and help push the web platform forward, hence our sugar and take on top of Web Components."

He further added, "about using different frameworks, again as a platform, allowing our customers to trivially include their framework choice of the day, will mean that we might end up having to load seven versions of react, five of Vue, 2 Embers .... You get the idea :) Outside the platform we love all the other frameworks (hence other properties might choose what it fits their use cases) and we had a lot of good discussions with framework owners about how to keep improving things over the last two years. Our goal is to keep contributing to the standards and push all the things to be implemented natively on the platform so we all get faster and better."

To know more about this news, visit the Lightning Web Components framework's official website.

Applying styles to Material-UI components in React [Tutorial]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Github Sponsors: Could corporate strategy eat FOSS culture for dinner?


Apple promotes app store principles & practices as good for developers and consumers following rising antitrust worthy allegations

Fatema Patrawala
30 May 2019
5 min read
On Wednesday, Apple refreshed the App Store website with a new page called "Principles and Practices". The page lists all the good the platform has done for customers and developers, using language that emphasizes the fairness of Apple's approach. It reads as Apple's defense against recent anti-monopoly lawsuits; The Verge postulates that the new page inadvertently shows how difficult it can be for other app stores and developers to compete against Apple.

Regulatory scrutiny of the iOS App Store has been intensifying in recent months, largely due to a formal complaint rival Spotify filed with European regulators earlier this year alleging that Apple engages in anticompetitive behavior. The fundamental criticism is that Apple derives unfair advantages by competing on its own platform and undercuts competition by taking a 15% to 30% cut of app sales. As Apple launches more and more services, the list of developers-turned-rivals keeps growing. In a major legal blow earlier this month, the U.S. Supreme Court ruled that consumers can sue Apple for antitrust violations.

In contrast, the new App Store principles and practices page reads, "We're proud that, to date, developers have earned more than $120 billion worldwide from selling digital goods." Eighty-four percent of apps are free, "and developers pay nothing to Apple." It also says the App Store "welcomes competition", and the page lists the third-party apps that compete against Apple's own products, such as the iOS Mail app versus Gmail and the various music-streaming services that compete with Apple Music. But the company fails to mention that none of those third-party apps can be chosen as the default messaging app, maps service, email client, web browser, or music player.

Source: Apple website

Additionally, Apple claims its control over the iOS ecosystem is both fair for developers and good for the consumer. "We take responsibility for ensuring that apps are held to a high standard for privacy, security, and content because nothing is more important than maintaining the trust of our users," the page says, noting that Apple carefully reviews all apps to ensure quality before they go up on the store.

The page also makes Apple's key argument in the App Store antitrust battle: the company doesn't set the prices on the store; the developers do. The business model has worked, Apple says, and has helped create more than 1.5 million jobs in the US, and another 1.5 million in Europe, devoted to iOS app design.

Apple also described the types of apps in the store: some are completely free, some are paid, and some offer in-app purchases or monthly subscriptions. Some essential apps are classified as "reader" apps because those companies have decided against giving Apple a cut of their in-app purchases and subscriptions; this category includes Amazon Kindle, Netflix, and Spotify. Apple says customers of these services "enjoy access to that content inside the app on their Apple devices" and that "developers receive all of the revenue they generate from bringing the customer to the app." Netflix, for its part, recently decided to circumvent the 30 percent App Store "tax" by forcing new subscribers to ditch the Apple iTunes payment method and make the purchase on Netflix's mobile website.

Overall, Apple's defense essentially tells the public, "Hey, trust us, we're doing a good job. You don't want to deal with another iOS app store."

One of the users on Hacker News commented, "Their comparison of other apps in the store is quite disingenuous. Yes, you technically have other music players, but they're not as integrated into the OS as Apple Music is. We picked Apple Music for this reason, even though it's a rather bad UI. Same with Maps. Not that I want to give Google more of my location data but others that want to use Google Maps as their default maps app, can't right now without all kinds of third party hacks. So yes, the competition does exist, but due to deliberate actions BY APPLE to stifle their APIs to keep them heavily restricted, these apps really aren't first class citizens on the OS. I largely favor Apple's approach of minimizing data sharing, but their apps are often inferior to the third party alternatives. They should use their app store stick to instead have a MFi-like certification program for data. If you want to be a first-party app for Maps, Mail, locations, etc, you have to demonstrate that you won't abuse that data, and have the right infrastructure to protect it."

Hence, it is up to the courts and regulators to determine whether Apple's arguments against antitrust action hold any sway, and there is a lot of money at stake. Although Apple makes most of its revenue from iPhone sales, the company's "services" category, which includes fees taken from App Store sales, is its second-largest business.

Apple proposes a "privacy-focused" ad click attribution model for counting conversions without tracking users
Apple Pay will soon support NFC tags to trigger payments
U.S. Supreme Court ruled 5-4 against Apple on its App Store monopoly case