
Tech News - Mobile


CTA announces its first AR/VR Standard terminology

Natasha Mathur
11 Jul 2018
3 min read
Early this month, the Consumer Technology Association (CTA) published its first set of standards for augmented reality (AR) and virtual reality (VR) technologies. With AR/VR added to its free standards library, the CTA can now support an even larger audience. The decision reflects the ever-increasing popularity of AR/VR among gamers across the globe, as well as a competitive and continually improving market for AR/VR accessories.

Brian Markwalter, senior VP of research and standards at CTA, said that the first AR/VR standard represents an important step in addressing key emerging technology areas. He pointed out that the standard exists to promote the definitions created by the CTA's AR/VR working group and to spread awareness of different technologies and experiences. It gives the industry a blueprint for supporting AR/VR technologies more effectively.

The publication, CTA-2069 or "Definitions and Characteristics of Augmented and Virtual Reality Technologies", introduces terms for evolving and upcoming technologies such as augmented reality (AR), mixed reality (MR), virtual reality (VR) videos and images, X reality, outside-in tracking, and room-scale VR. The standards library is freely available online and gives complete transparency into the standards used across the industry, covering key aspects of consumer tech including audio, video, health, fitness, and closed captioning.

The new standard arrives at the right time. CTA's Sales and Forecasts report states that VR has become a $1 billion sector in the U.S., with 4.9 million units expected to sell in 2018, a 25 percent bump from 2017, generating $1.2 billion in revenue. CTA credits the boost to AR/VR's growing popularity among gamers within the sector.

According to David McIntyre, senior vice president of strategy and standards at Xperi Corporation, "Standardized, market-centric definitions are an important first step for the industry". He added that he is looking forward to increased industry involvement as CTA works on other areas of XR (any hardware that combines aspects of AR, MR, and VR) standardization in service of the industry and consumers.

For more coverage of CTA's first AR/VR standard, check out the official CTA press release.

Read next:
Game developers say Virtual Reality is here to stay
Adobe glides into Augmented Reality with Adobe Aero


Meet Sapper, a military grade PWA framework inspired by Next.js

Sugandha Lahoti
10 Jul 2018
3 min read
There is a new web application framework in town. Categorized as "military grade" by its creator, Rich Harris, Sapper is a Next.js-style framework that comes close to being the ideal web application framework.

Fun fact: the name Sapper comes from the term for combat engineers, hence "military grade". It is also short for Svelte app maker.

Sapper offers a high-grade development experience, with declarative routing, hot-module replacement, and scoped styles. It also includes modern development practices on par with current web application frameworks, such as code-splitting, server-side rendering, and offline support. It is powered by Svelte, the UI framework that is essentially a compiler, turning app components into standalone JavaScript modules.

What makes Sapper unique is that it dramatically reduces the amount of code sent to the browser. In the RealWorld project challenge, the Sapper implementation took 39.6kb (11.8kb zipped) to render an interactive homepage. The entire app cost 132.7kb (39.9kb zipped), significantly smaller than the reference React/Redux implementation at 327kb (85.7kb). In fact, the implementation totals 1,201 lines of source code, compared to 2,377 for the reference implementation.

Another crucial feature of Sapper is code splitting. If an app uses React or Vue, there is a hard lower bound on the size of the initial code-split chunk: the framework itself, which is likely to be a significant portion of the total app size. Sapper has no such lower bound for initial code splitting, which makes the app even faster.

The framework is also extremely performant, memory-efficient, and easy to learn thanks to Svelte's template syntax. It has scoped CSS, with unused style removal and minification built in. It also ships svelte/store, a tiny global store that synchronises state across the component hierarchy with zero boilerplate.

Sapper has not yet reached version 1.0.0. Currently, Svelte's compiler operates at the component level. For the stable release, the team is looking at 'whole-app optimisation', where the compiler understands the boundaries between components to generate even more efficient code. Also, because Sapper is written in TypeScript, there may be plans to officially support TypeScript.

Sapper may not be ready yet to take over an established framework such as React, since developers may have an aversion to any form of 'template language'. Moreover, React is extremely flexible and appealing to new developers, thanks to its highly active community and learning resources: the devtools, editor integrations, tutorials, StackOverflow answers, and even job opportunities. Compared to such a giant, Sapper still has a long way to go.

You can view the framework's progress and contribute your own ideas on Sapper's GitHub and Gitter.

Read next:
Top frameworks for building your Progressive Web Apps (PWA)
5 reasons why your next app should be a PWA (progressive web app)
Progressive Web AMPs: Combining Progressive Web Apps and AMP


15 year old uncovers Snapchat's secret visual search function

Richard Gall
10 Jul 2018
3 min read
A 15-year-old app researcher has discovered something hidden inside Snapchat's code: text that reads "Press and hold to identify an object, song, barcode, and more! This works by sending data to Amazon, Shazam, and other partners." The find, uncovered by Ishan Agarwal, who sent the tip to TechCrunch, strongly suggests that Snapchat is working on a 'visual search engine' with some kind of link to Amazon.

The remarkable find pushed Snap Inc.'s value on the stock market up 3%, underlining just how much of a big deal this could be for the social media company. The feature, known internally as 'Eagle', indicates that Snapchat is moving quickly when it comes to developing new features. Given Snap Inc. reported a $385 million loss last quarter, this could be the shot in the arm the company needs.

Snap Inc. declined to comment on Agarwal's discovery when approached by TechCrunch. But although nothing has been confirmed, the evidence is clear enough inside the code.

How Snapchat's 'Eagle' search function works

It's likely that the new feature will pull up the Amazon product page, along with a list of sellers and reviews, for the object or product that is snapped by the user. You'll probably also be able to copy the Amazon link and share the product information with your friends. However, there is an element of speculation here; we'll have to wait and see when it finally launches.

The TechCrunch report links the feature to Snapchat's context cards, launched towards the end of 2017. These are essentially cards which offer detailed information about things like restaurants, such as opening times and reviews. Snapchat has been coming up with different digital commerce tools, and this feature fits the bill, especially if the platform is aiming to move deeper into the eCommerce world.

It's worth noting that other social media platforms like Pinterest have similar features. In fact, Pinterest has already partnered with retailers like Target; in that instance, the visual search feature is directly embedded into the Target mobile app. Google Lens also works in a similar fashion. The only difference is that Snapchat will use a third-party integration (with Amazon) for the identification process.

Ishan Agarwal: the 15-year-old app researcher

This isn't Ishan Agarwal's first software discovery. He has uncovered a number of new Instagram features, such as video calling and focus portrait mode, before they were officially launched, all of which were sent as tips to TechCrunch. He's clearly a valuable asset for TechCrunch, and now, given the positive movement on the stock market, an unexpectedly valuable asset for Snap too.

Follow Ishan Agarwal on Twitter: @IshanAgarwal24

Source: TechCrunch

Read next:
There's another player in the advertising game: augmented reality
Apple's new ARKit 2.0 brings persistent AR, shared augmented reality experiences and more


React Native 0.56 is now available

Sugandha Lahoti
10 Jul 2018
2 min read
React Native, Facebook's framework for building native apps using React, has a new version available. Version 0.56 is a fundamental building block towards a more stable framework, leading into the July 2018 (0.57.0) release. This was a long-awaited release, with much discussion between "waiting for more stability" and "testing led to successful results, so push forward". The ride to release was not smooth, but with dedicated community communication the React Native 0.56.0 release was eventually stabilized. The major changes include:

Support for Babel 7

Version 0.56 adds support for the latest version of Babel, the transpiler that allows React Native to use the latest features of JavaScript. Babel 7 brings a variety of important changes, and the React team will now allow Metro, the JavaScript bundler for React Native, to leverage its improvements.

Modernizing Android support

React Native has updated its Android support for faster builds. The update will also help developers comply with the new Play Store requirements coming into effect next month. Version 0.56 now supports Gradle 3.5 and Android SDK 26, updates Fresco to 1.9.0 and OkHttp to 3.10.0, and moves the NDK API target to API 16. Interested developers can follow the discussion on Android developments in the dedicated issue list.

New Node, Xcode, React, and Flow

Node 8 is now the standard for React Native, and React is updated to v16.4. Version 0.56 drops support for iOS 8, making iOS 9 the oldest iOS version that can be targeted. The continuous integration toolchain has been updated to use Xcode 9.4, ensuring that all iOS tests run on the latest developer tools provided by Apple. The team has also upgraded to Flow 0.75, which uses the new error format, and created types for many more components. YellowBox has been replaced with a new implementation that makes debugging easier.

For the complete release notes, you can reference the full changelog. Also, keep an eye on the upgrading guide to avoid issues moving to this new version.

Read next:
React Native announces re-architecture of the framework for better performance
Is React Native really a Native framework?
React Native Performance


Niantic, of the Pokemon Go fame, releases a preview of its AR platform

Sugandha Lahoti
02 Jul 2018
2 min read
Niantic gained worldwide popularity when it launched its popular AR game, Pokemon Go, two years ago. Now it is offering a preview of the technology it has been developing: the Niantic Real World Platform. According to the company's blog post, "This public preview will provide a sense of how committed Niantic is to the future of AR, and to furthering the type of experiences they have pioneered." Niantic will give select third-party developers access to its cross-platform AR tools.

Niantic has also acquired the computer vision and machine learning company Matrix Mill; along with its previous acquisition, Escher Reality, these teams have shaped what the Niantic Real World Platform looks like today.

https://youtu.be/7ZrmPTPgY3I

The Niantic Real World Platform can digitize parks, trails, sidewalks, and other publicly accessible spaces and model them in an interactive 3D space that a computer can quickly and easily read. Niantic is also working to make this technology available on power-limited mobile devices.

The platform makes use of advanced computer vision to let AR objects understand and interact with real-world objects in unique ways: stopping in front of them, running past them, or maybe even jumping into them. Niantic is also working towards contextual AR, exploring ideas, tests, and demos that can think and visualize. For example, if the platform can identify and contextualize the presence of flowers, it will know to make a bumblebee appear.

Matrix Mill, the new acquisition, will use computer vision and deep learning to develop techniques for understanding 3D space, enabling more realistic AR interactions than are currently on the market. The cross-platform AR technology will also facilitate shared AR experiences; Niantic has developed proprietary, low-latency AR networking techniques for shared AR experiences with a single code base.

Niantic will select a handful of third-party developers to begin working with its AR tools later this year. Developers interested in news about the Niantic Real World Platform can sign up for early looks at the platform.

Read next:
Apple's new ARKit 2.0 brings persistent AR, shared augmented reality experiences and more
Amazon open sources Amazon Sumerian, its popular AR/VR app toolkit
Google open sources Seurat to bring high precision graphics to Mobile VR


Google releases Android Things library for Google Cloud IoT Core

Sugandha Lahoti
26 Jun 2018
3 min read
Google has released the Android Things client library to make it easy for Android Things users to work with Google Cloud IoT Core. Last month, Google announced the developer preview release of Android Things, solidifying its chances of becoming Google's official IoT platform. Cloud IoT Core is a fully managed service on the Google Cloud Platform (GCP) that helps systems collect, process, analyze, and visualize IoT data in real time.

Android Things supports powerful computer vision, audio processing, and machine learning applications, all on device, and works with Cloud IoT Core to push data into GCP for further analysis.

The client library gives developers an easy way to connect to the Cloud IoT Core MQTT bridge, authenticate the device, publish device telemetry and state, subscribe to configuration changes, and handle errors and network outages. The library completely handles the networking, threading, and message handling, enabling Android Things developers to get started with just a few lines of code.

Authentication and security

Android Things provides a hardware-backed Android Keystore that ensures cryptographic key material is protected. The client library supports both RSA and ECC keys and implements the generation of JSON Web Tokens (JWTs) for authentication with Cloud IoT Core.

Device provisioning and error handling

IoT devices in the real world generally operate in poor wireless conditions, so the client library provides support for handling errors and for caching and retransmitting events later. The library's queue is configurable and replaceable for developers requiring custom offline behavior, with detailed control over which events to save and the order in which they are sent once back online.

Wayne Piekarski, Developer Advocate for IoT, notes that "The Cloud IoT Core client library is part of our overall vision for device provisioning and authentication with Android Things." A more detailed report of notable features can be read on the Google developer blog. The library is also available as open source on GitHub for developers who want to build it themselves. Google has also provided a sample that shows how to implement a sensor hub on Android Things, collecting data from connected sensors and publishing it to a Google Cloud IoT Pub/Sub topic.
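To make the MQTT bridge workflow concrete, here is a minimal, hedged Java sketch of a device publishing telemetry to Cloud IoT Core. It uses the generic Eclipse Paho MQTT client and the jjwt library rather than the Android Things client library itself, so the class structure and the hard-coded project, registry, and device IDs are illustrative assumptions, not the library's actual API.

```java
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import java.security.PrivateKey;
import java.util.Date;

public class TelemetryPublisher {
    // Hypothetical identifiers; Cloud IoT Core derives the MQTT client ID from them.
    static final String CLIENT_ID =
        "projects/my-project/locations/us-central1/registries/my-registry/devices/my-device";

    // Cloud IoT Core ignores the MQTT username and authenticates with a JWT password.
    static String createJwt(String projectId, PrivateKey key) {
        Date now = new Date();
        return Jwts.builder()
            .setIssuedAt(now)
            .setExpiration(new Date(now.getTime() + 60 * 60 * 1000)) // valid 1 hour
            .setAudience(projectId)
            .signWith(SignatureAlgorithm.RS256, key) // RSA key; ECC keys use ES256
            .compact();
    }

    public static void publish(PrivateKey key, byte[] payload) throws Exception {
        MqttClient client = new MqttClient("ssl://mqtt.googleapis.com:8883", CLIENT_ID);
        MqttConnectOptions options = new MqttConnectOptions();
        options.setUserName("unused");
        options.setPassword(createJwt("my-project", key).toCharArray());
        client.connect(options);
        // Telemetry events go to the device's events topic.
        client.publish("/devices/my-device/events", new MqttMessage(payload));
        client.disconnect();
    }
}
```

The client library described in the article hides exactly this bookkeeping (connection setup, JWT refresh, offline queueing) behind a few lines of code.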
Read next:
Getting Started with Android Things
Top 5 Google I/O 2018 conference Day 1 Highlights: Android P, Android Things, ARCore, ML kit and Lighthouse
Google updates biometric authentication for Android P, introduces BiometricPrompt API

Google updates biometric authentication for Android P, introduces BiometricPrompt API

Sugandha Lahoti
22 Jun 2018
2 min read
Google is looking at ways to improve biometric authentication in Android P, its upcoming OS, and is taking two major steps. First, Google has defined a better model to measure biometric security and constrain weaker authentication methods. Second, it is providing a common platform-provided entry point for developers to integrate biometric authentication into their apps. Google has combined secure design principles, a more attacker-aware measurement methodology, and an easy-to-use BiometricPrompt API so developers can integrate authentication in a simple manner.

Currently, biometric models quantify performance with two machine-learning-inspired metrics: False Accept Rate (FAR) and False Reject Rate (FRR). Both metrics do a great job of measuring the accuracy and precision of a given biometric model, but they do not provide very useful information about its resilience against attacks. In Android 8.1, Google introduced two new metrics, Spoof Accept Rate (SAR) and Imposter Accept Rate (IAR), to measure how easily an attacker can bypass a biometric authentication scheme.

The SAR/IAR metrics categorize biometric authentication mechanisms as either strong or weak. While both strong and weak biometrics may unlock a device, weak biometrics did not allow app developers to securely authenticate users in a modality-agnostic way. This is what inspired the development of a biometric authentication API. With Android P, mobile developers can use the BiometricPrompt API to integrate biometric authentication into their apps. Developers can be assured of a consistent level of security across all devices their application runs on, because BiometricPrompt only exposes strong modalities.

BiometricPrompt API architecture

The API is automated and easy to use. Instead of forcing app developers to implement biometric logic, the platform automatically selects an appropriate biometric to authenticate with. For devices running Android O and earlier, a support library allows applications to use this API across devices.

Further details on the BiometricPrompt API are available on the Android developer blog.
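As a rough illustration of the entry point described above, here is a minimal sketch against the framework BiometricPrompt API as it appears in the Android P (API 28) SDK; the prompt strings and callback bodies are placeholder assumptions.

```java
import android.content.Context;
import android.hardware.biometrics.BiometricPrompt;
import android.os.CancellationSignal;
import java.util.concurrent.Executor;

public class BiometricLogin {
    // Shows the system-provided prompt; the platform picks a strong modality.
    public static void authenticate(Context context, Executor executor) {
        BiometricPrompt prompt = new BiometricPrompt.Builder(context)
            .setTitle("Sign in")                      // placeholder copy
            .setDescription("Confirm your identity")  // placeholder copy
            .setNegativeButton("Cancel", executor,
                (dialog, which) -> { /* user dismissed the prompt */ })
            .build();

        prompt.authenticate(new CancellationSignal(), executor,
            new BiometricPrompt.AuthenticationCallback() {
                @Override
                public void onAuthenticationSucceeded(
                        BiometricPrompt.AuthenticationResult result) {
                    // Proceed with the signed-in flow.
                }

                @Override
                public void onAuthenticationError(int errorCode, CharSequence errString) {
                    // Unrecoverable error, e.g. no biometrics enrolled.
                }
            });
    }
}
```

Per the article, the support library for Android O and earlier exposes the same pattern for older devices.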
Read next:
Top 5 Google I/O 2018 conference Day 1 Highlights: Android P, Android Things, ARCore, ML kit and Lighthouse
Android P new features: artificial intelligence, digital wellbeing, and simplicity


Google Flutter moves out of beta with release preview 1

Sugandha Lahoti
21 Jun 2018
2 min read
Google Flutter has hit another release milestone on the way to version 1.0: the cross-platform SDK is moving out of beta with Flutter Release Preview 1. Flutter is one of Google's most ambitious projects in cross-platform app development. Flutter apps run on the Flutter rendering engine (written in C++) and the Flutter framework (written in Google's Dart language, just like Flutter apps themselves).

Flutter reached beta at Google I/O last month, which also featured various technical sessions on topics like UI design with Flutter and Material, mobile development with Flutter and Firebase, and architectural practices for complex Flutter apps. The shift from beta to release preview was announced during the keynote of the GMTC Global Front-End Conference in Beijing, China, a gathering of around a thousand front-end and mobile developers. The release focuses on scenario completeness, bug fixing, and stabilization.

Release Preview 1 improves the video player package, adding broader format support and reliability improvements. Firebase support is extended to include Firebase Dynamic Links, an app solution for creating and handling links across multiple platforms. Support for 32-bit iOS devices with ARMv7 chips is also added, enabling apps written with Flutter to run on older devices. The release also adds experimental instructions for adding Flutter widgets to an existing Android or iOS app.

Release Preview 1 also brings improvements to Flutter tooling, including a new Flutter extension for Visual Studio Code. The extension adds a new outline view, statement completion, and the ability to launch emulators directly from Visual Studio Code.

The latest Release Preview 1 SDK is available on Flutter's site. Also, check out the Flutter app showcase.

Read next:
Top 5 Google I/O 2018 conference Day 1 Highlights: Android P, Android Things, ARCore, ML kit and Lighthouse
9 Most Important features in Android Studio 3.2
Google's Android Things, developer preview 8: First look


Apple releases iOS 12 beta 2 with screen time and battery usage updates among others

Natasha Mathur
20 Jun 2018
3 min read
Apple released the second beta of iOS 12 to registered developers yesterday for testing purposes, two weeks after the first beta rolled out following the much-awaited Worldwide Developers Conference. Beta 2 includes modifications to many of the new features introduced in iOS 12, such as changes to Screen Time, battery usage, and other smaller tweaks. Let's have a look at the key updates that will change your iPhone or iPad for the better.

Battery usage

The usage charts that represent activity and battery level for the past 24 hours have been redesigned in iOS 12 beta 2, and the fonts and wording in this section have been updated. (Source: MacRumors)

Screen Time

The toggle for clearing Screen Time data has been removed, and the interface for adding time limits to apps via the Screen Time screen has been modified. With the first beta, tapping an app went straight into the limits interface; now, tapping an app displays more information about it, including daily average use, developer, category, and more. There is a new splash screen for the Screen Time feature, along with new options to view your activity on either one device or all devices.

Notifications

iOS 12 includes a feature where Siri suggests limiting notifications from sparingly used apps. With beta 2, the Notifications section of the Settings app has a new toggle that lets you turn off Siri's suggestions for individual apps.

Photos search

With iOS 12 beta 2, Photos supports more advanced searches. If you search for a photo taken on a specific date, say May 15, all photos from all years taken on May 15 will pop up, quite different from the iOS 12 beta 1 behavior. The font of listings such as "Media Types" and "Albums" has also changed: the listings' font size in the Photos app is now much bigger, making it easier for users to read.

Voice Memos

A new introductory splash screen has been added for Voice Memos in iOS 12 beta 2.

Apart from these updates, there are several minor changes:

On unlocking content with Face ID, the iPhone X now says "Scanning with Face ID."
iPhone apps opened on the iPad, such as Instagram, are now displayed at a modern device size (iPhone 6) in both 1x and 2x modes.
There is a new interface for auto-filling a password saved in iCloud Keychain.
The Podcasts app now shows a 'Now Playing' indicator for the currently playing chapter.
Time Travel references have been removed from the Watch app.

The iOS 12 public beta will launch after iOS 12 developer beta 3, around June 26. The final version of iOS 12 is set for release sometime in September 2018. There are also some known issues in the latest iOS 12 beta 2 update that need resolving; registered developers can check out the beta 2 release notes on the official Apple developer website.

Read next:
WWDC 2018 Preview: 5 Things to expect from Apple's Developer Conference
Apple releases iOS 11.4 update with features including AirPlay 2, and HomePod among others
Apple introduces macOS Mojave with UX enhancements like voice memos, redesigned App Store, Apple News, & more security controls


Introducing Vue Native for building native mobile apps with Vue.js

Sugandha Lahoti
15 Jun 2018
2 min read
If Vue.js is the JavaScript framework of your choice, you will definitely enjoy Vue Native. Developed by GeekyAnts, Vue Native helps you build powerful native mobile apps using JavaScript and Vue.js. It is designed to connect React Native and Vue.js, making app development simpler, and it has already gained quite some popularity, with 2,500+ stars on GitHub since its official announcement a few days ago. It was originally started by SmallComfort as react-vue, which transpiles Vue files to React and React Native components; GeekyAnts forked it later.

Vue Native brings the goodness of the Vue ecosystem to native mobile app development, with support for templating, styling, state management with Vuex, and routing. More distinctive features include declarative rendering, two-way binding, and the completeness of the React Native ecosystem.

How does it work

Vue Native transpiles to React Native, Facebook's framework for building native Android and iOS apps using JavaScript. The Vue Native CLI generates a Vue Native app, which is a React Native API wrapper: a simple single page application (SPA) built using create-react-native-app and vue-native-core. React Native is a direct dependency of Vue Native. Once you initialize a new app using vue-native-cli, the entry script is App.vue. You can use:

the reactivity system of Vue to observe React components
the react-vue-loader to run Vue components in a React application
the vue-native-scripts to run Vue components in React Native

How to get started

The first step is to install React Native on your system. Next, install the Vue Native CLI using npm:

$ npm install -g vue-native-cli

Now all you need to do is initialize a new Vue Native project directory on your system. You can then run the app on an iOS or Android simulator using the npm run command. The installation guide contains the full list of features, along with details to get started and code examples. Vue Native is open source; you can find it on GitHub.

Read next:
Why has Vue.js become so popular?
Vue.js developer special: What we learnt from VUECONF.US 2018
Testing Single Page Applications (SPAs) using Vue.js developer tools

Apple changes app store guidelines on cryptocurrency mining

Richard Gall
12 Jun 2018
2 min read
If you thought everyone loved cryptocurrency, think again: Apple has banned cryptocurrency mining on iOS devices. In a change that coincided with WWDC, Apple has quietly updated its App Store terms. The guidelines now state that "Apps, including any third party advertisements displayed within them, may not run unrelated background processes, such as cryptocurrency mining."

First spotted by Apple Insider, this move actually follows a series of actions by tech companies to tackle a number of issues in the cryptocurrency world. Earlier this year, both Google and Facebook banned cryptocurrency ads, all of which suggests the cryptocurrency bubble might be slowly bursting.

There is a slight loophole: Apple's guidelines allow you to mine cryptocurrency provided it's done externally to the device, so cloud mining would be fine. Of course, wanting to mine, say, Bitcoin with your iPhone does seem a bit strange. Given that users complain about battery life now, the processing power required to mine cryptocurrency would sap your device's life incredibly quickly.

Here is the main section on cryptocurrencies in the App Store guidelines:

3.1.5 (b) Cryptocurrencies:
(i) Wallets: Apps may facilitate virtual currency storage, provided they are offered by developers enrolled as an organization.
(ii) Mining: Apps may not mine for cryptocurrencies unless the processing is performed off device (e.g. cloud-based mining).
(iii) Exchanges: Apps may facilitate transactions or transmissions of cryptocurrency on an approved exchange, provided they are offered by the exchange itself.
(iv) Initial Coin Offerings: Apps facilitating Initial Coin Offerings ("ICOs"), cryptocurrency futures trading, and other crypto-securities or quasi-securities trading must come from established banks, securities firms, futures commission merchants ("FCM"), or other approved financial institutions and must comply with all applicable law.
(v) Cryptocurrency apps may not offer currency for completing tasks, such as downloading other apps, encouraging other users to download, posting to social networks, etc.


Facebook open sources Sonar, their cross-platform debugging tool

Sugandha Lahoti
12 Jun 2018
3 min read
Facebook has announced the open-source release of Sonar, its cross-platform debugging tool. Sonar is designed to help developers, framework experts, and engineers collaborate on the app development process. It is built on Stetho, Facebook's Android debugging bridge built on Chrome's developer tools, and adds more extensible features such as plugins to help engineers develop new features, investigate bugs, and optimize apps.

Because Sonar is cross-platform, developers can connect their mobile devices (Android and iOS, or an emulator) to a desktop client for inspection. Sonar works as a guide and interpreter for a running app, giving developers stats on what the app is doing so they can better understand bugs and system capabilities. Sonar is now available to the developer community at large, not just Facebook engineers, as an open-source project on GitHub.

Since Sonar was designed with extensibility in focus, it makes heavy use of plugins, which are being open-sourced along with it. Some of these include:

Logs, a plugin that shows device logs without the need for additional setup.
Layout Inspector, a debugging platform that provides deep dives into user interface hierarchies and supports Litho and ComponentKit components.
Network, a plugin that enables the inspection of network packets as they pass into and out of the app in question.

Sonar architecture

Sonar's architecture has two parts: a desktop client and a mobile SDK. The desktop client is built on top of Electron and Facebook projects such as React.js, Flow, Metro, RSocket, and Yarn. The mobile SDK is installed within the Android or iOS application and interacts with the desktop client; it makes use of Facebook open source projects such as Folly and RSocket.

Plugins come in pairs for the desktop client and the mobile SDK: a desktop plugin renders the UI, and a mobile SDK plugin exposes the data. On the desktop side, a React component extends the desktop plugin class; this component is in charge of communicating with the mobile SDK plugin and rendering any data it delivers. The mobile SDK plugin is developed in the language native to the platform it runs on (Swift/Objective-C on iOS or Java/Kotlin on Android). It registers a set of handlers and defines responses for them.

Source: Facebook Blog

Emil Sjölander, Facebook software engineer, hopes that "open-sourcing Sonar and the accompanying plugins will provide a useful tool for other engineers working on mobile applications." He says that "As we've already seen Sonar prove useful internally at Facebook, we think Sonar's APIs will help other engineers build great new experiences to improve their workflows." You can read the full release coverage on the Facebook code blog.
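To give a feel for the plugin model described above, here is a hedged Java sketch of what an Android-side Sonar plugin might look like. The package, interface, and method names (SonarPlugin, SonarConnection, the "ping" handler) are assumptions inferred from the article's description of plugins registering handlers, not verified against Sonar's published API.

```java
// Assumed package names; Sonar's actual artifact layout may differ.
import com.facebook.sonar.core.SonarConnection;
import com.facebook.sonar.core.SonarObject;
import com.facebook.sonar.core.SonarPlugin;

// A toy plugin: the desktop client sends a "ping" and receives app data back.
public class PingPlugin implements SonarPlugin {
    private SonarConnection connection;

    @Override
    public String getId() {
        return "Ping"; // must match the id used by the paired desktop plugin
    }

    @Override
    public void onConnect(SonarConnection connection) {
        this.connection = connection;
        // Register a handler: respond to "ping" requests from the desktop client.
        connection.receive("ping", (params, responder) ->
            responder.success(new SonarObject.Builder()
                .put("message", "pong from the app")
                .build()));
    }

    @Override
    public void onDisconnect() {
        this.connection = null;
    }
}
```

A corresponding desktop plugin registered under the same id would issue the "ping" request and render whatever payload the handler returns.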
Read next:
Jest 23, Facebook's popular framework for testing React applications is now released
Facebook's F8 Conference – 5 key announcements
Testing Single Page Applications (SPAs) using Vue.js developer tools


NativeScript 4.1 has been released

Sugandha Lahoti
11 Jun 2018
2 min read
NativeScript 4.1 has been released with multiple performance improvements and major highlights such as support for Angular 6, faster app launch times, new UI scenarios, and much more.

Angular 6 support

NativeScript 4.1's Angular integration has been updated with support for Angular 6. Since Angular 6 requires webpack 4, developers have to update to nativescript-dev-webpack 0.12.0 as well.

V8 engine upgraded to v6.6

The V8 engine is now upgraded to version 6.6. The new engine brings a major performance boost, new JavaScript language features, code caching after execution, removal of AST numbering, and multiple asynchronous performance improvements.

Faster Android startup time

The upgrade to V8 6.6 and webpack 4 also brings faster app launch times on Android, making it on par with iOS. Depending on the device and the specific app, Android app startup time is now between 800ms on high-end devices and 1.8s on older devices, an improvement of almost 50% compared to the startup time in NativeScript 4.0.

Improvements to user interfaces

NativeScript 4.1 adds augmented reality support via ARKit for iOS, including a built-in vector type.

Multiple iOS simulators

NativeScript 4.1 can now run applications simultaneously on multiple iOS simulators, iOS devices, Android emulators, and Android devices. Developers can LiveSync a change on iPhone X, iPhone 8, iPad 2, and more, and immediately see how the application looks and behaves on all of them.

Navigation in a modal view

There are updates to navigation in modal views as well. Before version 4.1, for navigation inside a modal view, you had to remove the implicit wrapper and specify the root element of the modal view. With the new changes, the first element of the modal view becomes its root view, and developers can place a page router outlet there for proper navigation.

layoutChanged event

NativeScript 4.1 introduces a layoutChanged event that fires when the layout bounds of a view change due to layout processing. The correct view sizes are obtained with getActualSize(). Proper view sizes are especially useful when dealing with size-dependent animations.

The full list of bug fixes and other updates is available in the changelog. You can read more about the release on the NativeScript blog.

Read next:
NativeScript: What is it, and how to set it up
How to integrate Firebase with NativeScript for cross-platform app development

Game developers say Virtual Reality is here to stay

Natasha Mathur
08 Jun 2018
5 min read
“I don’t want to spend $600 to have a box on my head when I play video games” or “Too many cables, not enough games”: these are statements you’ll hear quite often from a Virtual Reality non-believer. Despite all the criticism the Virtual Reality gaming industry receives, game developers across the world think differently. This year’s Skill Up report highlights what game developers feel about the VR world: a whopping 86% of respondents said ‘Yes, VR is here to stay’. With issues like heavy hardware and motion sickness getting fixed as VR technology advances, let’s have a look at the other reasons VR commands such a high level of confidence among game developers.

Why is VR here to stay?

VR hardware manufacturing is rising

The future of Virtual Reality is already set in motion. Google kickstarted the VR industry by releasing Google Cardboard back in 2014; just look at the number of VR headsets released in the past six months and you can do the math yourself. With the likes of the Lenovo Mirage Solo, Oculus Go, HTC Vive Pro, Oculus Rift, Sony PlayStation VR, and Samsung Odyssey entering the market, it’s quite evident that there is a growing demand for VR headsets. In addition, dedicated chipsets such as Qualcomm’s latest XR1 are being built to support these headsets and tackle an ever-present concern in the VR world: high prices. HTC (Vive), Vuzix, Meta, and Pico are among the others working towards dedicated chipsets for standalone headsets.

Prices are falling

Virtual Reality manufacturers across the globe have a common goal in mind: to make VR hardware cheaper. Oculus was the first to drop the price of the Oculus Rift permanently, to $399. Later, Sony joined in by bringing the price of its PlayStation VR down to as low as $200. Another common complaint has been that VR headsets required additional computing power and hardware to be operable, putting them out of reach of the average Jane or Joe. That problem is fast disappearing with the release of standalone headsets; Qualcomm recently announced a new chipset for standalone AR/VR headsets at Augmented World Expo.

More games to hit the market

“There aren’t enough AAA games for the VR world.” Listen closely, and you’ll notice a VR non-believer expressing their disbelief towards the VR world. But the Virtual Reality industry is smart: it keeps coming up with ways to pull even hardcore console gamers into the enticing VR space. The immersive nature of Virtual Reality offers the potential to build fascinating games that catch people’s interest. Events such as the annual Game Developers Conference, the VR & AR Developer Conference, and PAX East further ignite developers’ interest in creating these innovative games. Alongside already popular VR games such as Doom, Fallout, and Perception, more are on their way to market. For instance, Respawn Entertainment, creators of Titanfall, have announced a brand new AAA VR title in partnership with Oculus, to be released in 2019. Respawn is working hard in this new field and will be challenging other studios on what the future of VR gaming looks like.

VR isn’t just limited to headsets

People assume VR is limited to headsets and games, but it is so much more than that. There are different fields leveraging the potential of Virtual Reality. For instance, NASA uses a Virtual Reality lab to train astronauts, and is also looking into using headsets like the HTC Vive, VR gloves from third-party developers, and assets from games like Mars 2030 and Earthlight to build VR training simulations at a fraction of the cost. Other industries that can immediately benefit from Virtual Reality include healthcare, education, museums, and entertainment. Doctors use VR to treat anxiety disorders and phobias, while researchers at Stanford University have used it to set up practice spaces for surgeons. Testing autonomous vehicles for safety also uses Virtual Reality for simulation, which will help speed up the development of autonomous vehicles. Similarly, in education, Virtual Reality can be used in the classroom to help students visualize concepts in physics, and museums can transport people back to the Bronze Age with VR. The entertainment industry has been making VR films such as Walking New York, From Nothing, and Surge, which create an altogether different experience for viewers by making them feel like they are actually present in the scenario of the VR world. Imagine watching Jurassic Park or Avatar in VR!

It takes time for any new technology to find user adoption, and Virtual Reality has been no exception, but in recent times it seems to have broken through the barriers. One proxy for this claim is the news that VR headset sales crossed 1 million last year. The ball is rolling more and more in the VR world’s favor: a whole industry is forming, new technology is being made, and VR is no longer just hype.

Read next:
Top 7 modern Virtual Reality hardware systems
SteamVR introduces new controllers for game developers, the SteamVR Input system
Build a Virtual Reality Solar System in Unity for Google Cardboard


Adobe glides into Augmented Reality with Adobe Aero

Sugandha Lahoti
08 Jun 2018
2 min read
Adobe has entered the augmented reality space with Adobe Aero, a new AR authoring tool and multi-platform system. It will give both developers and creatives the means to build simple AR scenes and experiences, leveraging Apple's ARKit. Adobe sees Aero as the first step in outfitting creative professionals with the tools they need to create immersive experiences that will transform the world.

Announced as an early preview at Apple's WWDC 2018, Adobe Aero will let designers create immersive AR experiences from familiar creative tools such as Photoshop CC and Dimension CC. Adobe also plans to add usdz support to Adobe Creative Cloud apps and services, in collaboration with Apple and Pixar. USDZ is described by its co-creator Pixar as a "zero-compression, unencrypted zip archive" of the USD (Universal Scene Description) format used for creating AR experiences; .usdz is now supported by Apple, Adobe, Pixar, and many others. Creative Cloud's support for the usdz format will help speed up the development of new AR content and allow users to move seamlessly from one application to another. This immersive content can then be brought into Xcode for further refinement and development.

Adobe Aero will also feature deep integration with Sensei, Adobe's machine learning platform, to take care of the technical complexity so designers can focus on creative work alone. Adobe also plans to support the glTF file format, currently backed by Google, Facebook, Microsoft, Adobe, and other industry leaders.

AR projects created with Adobe Aero will be displayed at The Festival of the Impossible, a three-day large-scale immersive art exhibition celebrating artwork by 15 talented artists.

"This is just the beginning of our journey to extend digital experiences beyond the screen and I couldn't be more excited about what's ahead," says Adobe CTO Abhay Parasnis. "We'll have much more to share at the Adobe MAX Creativity Conference later this fall."

Developers and designers may request early access to Aero with a new form provided by Adobe.

Read next:
Adobe is going to acquire Magento for $1.68 Billion
Amazon open sources Amazon Sumerian, its popular AR/VR app toolkit
Microsoft introduces SharePoint Spaces, adds virtual reality support to SharePoint