Tech News - Mobile

204 Articles

9 Most Important Features in Android Studio 3.2

Amarabha Banerjee
07 Jun 2018
3 min read
Android Studio has been the IDE of choice for Android developers since its release in 2014. Version 3.2 of Android Studio was released at the end of April, bringing a few very interesting changes to the Android Studio ecosystem. Here are the most important changes you need to be aware of:

Android Jetpack has been updated and improved. The updated Jetpack should make the overall development process much smoother and easier, minimizing repetitive work and streamlining the development workflow.

New Navigation Editor. The new navigation editor gives developers a better view of their app's design and layout, and should make it much easier to plan navigation patterns between different parts of an app.

AndroidX refactoring. Android has introduced a new refactoring mechanism that migrates the Android Support Libraries to a new Android extension library under the androidx namespace.

The new app bundle. The new dynamic app bundle is much smarter and more intuitive than its predecessor. Once you have created your app and uploaded its resources, you no longer need to generate customized APKs for the different types of devices they will run on. The dynamic APK builder automatically creates the APK versions best suited to different devices, and you can add extra bundles for your app that users download on demand.

Layout preview. During development, the lack of runtime data can make it hard to visualize an app's design. With the new layout preview, you can preview your design using sample data in the layout editor, changing the data as required to see a complete preview of your app design.

Slice functionality. Android Studio 3.2 can now create a preview of your app in Google Search results, which is what's being called 'slice functionality'. This will be particularly useful for mobile developers who want to think carefully and thoroughly about how they market their app.

More new lint checks. Beyond Kotlin interoperability lint checks, Android Studio 3.2 implements 20 new lint checks that help developers find and identify common code problems, ranging from warnings about potential usability issues to high-priority errors regarding security vulnerabilities.

New Gradle target. You can use the new lintFix Gradle task to apply all of the safe fixes suggested by lint directly to the source code. An example of a lint check that suggests a safe fix is SyntheticAccessor.

Metadata updates. Various metadata, such as the service cast check, have been updated so that lint checks work with the Android P Developer Preview.

Android Studio has been the default development environment for Android developers, and with these changes it incorporates smart new features that should help developers create better apps more efficiently and save a lot of development time.

Also read:
What is Android Studio and how does it differ from other IDEs?
Unit Testing Apps with Android Studio
The Art of Android Development Using Android Studio

macOS Mojave: Apple updates the Mac experience for 2018

Natasha Mathur
06 Jun 2018
4 min read
The new version of macOS, called Mojave, was announced at Apple's ongoing annual developer conference, WWDC 2018. It includes a bunch of new features, namely Dark Mode, a revamped Mac App Store, desktop stacks, security controls, and Safari privacy improvements, in addition to other updates. The final release will come in the fall, during September or October, with a public beta releasing this summer. Let's have a look at what's new in the shiny new macOS version, Mojave.

Key macOS Mojave features

Dark Mode

Apple has added Dark Mode to macOS with the latest release. It changes the dock, taskbar, and the chrome around apps to a dark gray color. It doesn't bring new functionality, though; it's mainly for aesthetics, just like all the other dark modes. There is also an API available for developers to implement Dark Mode in their apps. Mojave also introduces a new Dynamic Desktop, which automatically changes the desktop picture to match the time of day.

Revamped Mac App Store

The Mac App Store is finally revamped in Mojave. Taking inspiration from the iOS store that underwent a makeover last year, the redesigned Mac App Store features new app collections along with a lot more editorial content. Many apps from top developers are also coming to the Mac App Store, namely Office from Microsoft and Lightroom CC from Adobe, among others.

Apple News, Stocks, Home, Voice Memos

Apps such as News, Stocks, Voice Memos, and Home are available on the Mac for the first time. The News app comes with articles, photos, and videos that look great on the Mac display. The Home app allows Mac users to control their HomeKit-enabled accessories, performing tasks like turning lights off and on or adjusting thermostat settings. Voice Memos makes it easy to record personal notes, lectures, interviews, song ideas, and so on, and to access them from iPhone, iPad, or Mac. Stocks provides curated market news along with a personalized watchlist, complete with quotes and interactive charts.

Desktop stacks

A new feature called Stacks cleans up a messy desktop by dedicating folders to specific file types. These folders automatically collect the files that belong to them, creating stacks of PDFs, images, movies, and so on. Clicking a folder brings its files to the desktop so you can browse through them easily.

Security controls

With more permission pop-ups added in Mojave, you can now control which apps can access your information and hardware. The newly added security controls let you decide whether an app may access your location, photos, contacts, microphone, and so on.

Safari privacy

Apple already blocks websites that track you based on your system configuration in Safari. Now Safari gains the added ability to block social networks like Facebook from tracking you across the web using "like" buttons. It also flags reused passwords so users can change them.

Finder updates

Finder has a new "gallery view" for scrolling through small previews of files. There is also a way to view metadata inside a Finder window, and you can perform quick actions on files, such as rotating a photo or assembling multiple files into a PDF.

Markup and screenshots

Users can mark up documents and make changes inside Quick Look, which helps deal with files quickly. If you take a screenshot, you are presented with a button to mark it up.
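For developers, the Dark Mode API mentioned above means responding to the new appearance values. Here is a minimal sketch of detecting the active appearance from an AppKit view, assuming the macOS 10.14 SDK (the extension below is our own illustration, not Apple sample code):

```swift
import AppKit

// A sketch, not Apple sample code: checks whether a view is currently
// rendering with Mojave's new dark appearance.
extension NSView {
    var isInDarkMode: Bool {
        // bestMatch(from:) resolves the view's effective appearance to the
        // closest of the named appearances we ask about.
        return effectiveAppearance.bestMatch(from: [.darkAqua, .aqua]) == .darkAqua
    }
}
```

A check like this is mainly useful for swapping in appearance-specific colors or images; asset catalogs with dark variants are handled automatically by the system.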
To know more about macOS Mojave, check out the official blog post by Apple.

Also read:
Apple releases iOS 11.4 update with features including AirPlay 2, and HomePod among others
WWDC 2018 Preview: 5 Things to expect from Apple's Developer Conference
Apple steals AI chief from Google

Apple’s new ARKit 2.0 brings persistent AR, shared augmented reality experiences and more

Sugandha Lahoti
06 Jun 2018
3 min read
In the keynote at the ongoing WWDC 2018, Apple shared the latest version of its augmented reality toolkit, ARKit 2.0. Apple's focus this year is primarily on improving the user experience and making Apple devices perform better with improved functionality. ARKit 2 features realistic rendering, multiplayer experiences, a new file format, and more.

Shared augmented reality

With ARKit 2.0, you can now collaborate with multiple other users in a virtual environment. Apple says, "Shared experiences with ARKit 2 make AR even more engaging on iPhone and iPad, allowing multiple users to play a game or collaborate on projects like home renovations." There is also a new spectator mode if you would rather watch a game than play it; in this mode, you see and experience what the players see.

AR that stays the same

Persistent AR, as Apple likes to call it, is another fabulous feature in ARKit 2.0. You can now leave virtual objects in the real world and return to them later. Interacting with AR becomes more life-like: you can start a puzzle on a table and come back to it later in the same state. Image detection and tracking also get an update in ARKit 2.0. It can now detect 3D objects like toys or sculptures, and can apply reflections of the real world onto AR objects.

A new file format

Apple has introduced a new open file format, usdz, in collaboration with Pixar. The format is optimized for sharing in apps like Messages, Safari, Mail, Files, and News while retaining powerful graphics and animation features. It enables the new Quick Look for AR feature, which allows users to place 3D objects into the real world. usdz is part of the developer preview of iOS 12 and will be available this fall as part of a free software update for iPhone and iPad models.

The Measure app

Apple also unveiled its very own AR measuring app. The new iOS 12 app automatically provides the dimensions of objects like picture frames, posters, and signs, and can also show diagonal measurements and compute area. Users can take a photo or share these dimensions from their iPhone or iPad.

You can tune into Apple's WWDC event website to watch the keynote and read about other exciting releases.

Also read:
WWDC 2018 Preview: 5 Things to expect from Apple's Developer Conference
Apple releases iOS 11.4 update with features including AirPlay 2, and HomePod among others
Apple steals AI chief from Google
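As a developer-facing footnote to this story: the persistent AR described above is built on ARKit 2's new ARWorldMap type. Here is a rough sketch of saving and restoring a session's map, assuming the iOS 12 SDK (the two helper functions are our own, not Apple's sample code):

```swift
import ARKit

// Sketch only: persist the current session's world map to disk…
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { return }
        // ARWorldMap supports NSSecureCoding, so it can be archived directly.
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}

// …and relaunch a session from it later, restoring anchors where they were left.
func restoreWorldMap(from url: URL, into session: ARSession) {
    guard let data = try? Data(contentsOf: url),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                            from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```

The same archived map is also what gets sent to nearby devices for the shared, multi-user experiences described above.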

Apple releases iOS 11.4 update with features including AirPlay 2, and HomePod among others

Natasha Mathur
30 May 2018
3 min read
A week before its annual Worldwide Developers Conference, Apple released iOS 11.4 for iPhones, iPads, and iPods, just a month after the release of 11.3.1. The new release includes features such as AirPlay 2 multi-room audio, HomePod stereo pairing support, and storing messages in iCloud. Let's have a look at the key features and upgrades in the iOS 11.4 release.

Major updates and highlights

AirPlay 2

AirPlay 2 was announced by Apple last year at the Worldwide Developers Conference. It is Apple's proprietary system for streaming audio and video from Apple devices to other devices over Wi-Fi or wired networks. It comes with the following features:

AirPlay 2 lets you play a particular song on different speakers situated throughout the house.
It allows you to play music in a room from another room using an iOS device, HomePod, Apple TV, or Siri voice commands.
After AirPlay 2's integration with HomeKit, AirPlay 2 devices are now displayed in the Apple Home app.
AirPlay 2 lets you control where music plays from the Control Center, the lock screen, or within apps on an iOS device.
You can ask Siri to play music in any room, or multiple rooms, on devices compatible with AirPlay 2 such as iPhone, iPad, HomePod, or Apple TV.
Picking up a call or playing a game on your iPhone or iPad is possible with AirPlay 2 without interrupting playback on the speakers.

HomePod stereo pairing

HomePod, Apple's smart speaker, has also been updated in iOS 11.4 and gains support for stereo pairing, which lets you group two HomePod speakers in a room as one. You can set up the HomePod stereo pair using an iPhone or iPad, and the pair senses its location automatically, balancing the sound on the speakers accordingly.

Messages in iCloud

First promised as an iOS 11 feature, you can now synchronize and store the Messages app history across all of your iOS and macOS devices in iOS 11.4. You can store your messages, videos, photos, and so on in iCloud, which helps free up space on your Apple device. On signing into a new device with the same iMessage account, all your messages appear; on deletion, conversations and messages are instantly removed from all your devices. All conversations are end-to-end encrypted.

Apart from these key features and upgrades, there are also various improvements and fixes, which can be found in Apple's official iOS 11.4 release notes.

Also read:
WWDC 2018 Preview: 5 Things to expect from Apple's Developer Conference
MariaDB 10.3.7 releases
TensorFlow.js 0.11.1 releases!
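One developer-side note on this story: apps that want to hand audio off to AirPlay 2 speakers typically expose AVKit's route picker. A minimal sketch, assuming iOS 11 or later (the view controller is a hypothetical host screen, not code from Apple's release notes):

```swift
import AVKit
import UIKit

// Sketch: add an AirPlay route picker to a hypothetical player screen so the
// user can send audio to AirPlay 2 speakers such as a HomePod stereo pair.
final class PlayerViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let routePicker = AVRoutePickerView(frame: CGRect(x: 20, y: 60,
                                                          width: 44, height: 44))
        view.addSubview(routePicker)
    }
}
```

Tapping the picker presents the system's output list, so the multi-room routing described above is handled by the OS rather than by the app.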

WWDC 2018 Preview: 5 Things to expect from Apple’s Developer Conference

Kunal Chaudhari
30 May 2018
6 min read
The Worldwide Developer Conference (WWDC) is Apple's yearly summer event for developers, where the tech giant announces all the major updates to its software and hardware platforms. The event, running June 4-8 at the McEnery Convention Center in San Jose, is expected to make big news as Apple plans to launch iOS 12 for iPhone and iPad and macOS 10.14 for desktops, among other announcements. Along with these annual OS releases, we can also expect improvements to Apple's very own Swift programming language, slated for release later in the year, and plenty of updates to ARKit and Siri as well.

iOS 12 will have rigorous testing and digital wellbeing in focus

[Image source: Apple]

A number of rumors are circulating about what new features will be added to iOS 12, and it sounds like it is going to be a subtle change rather than a drastic upgrade. Codenamed "Peace", iOS 12 will heavily focus on performance and bug fixes. Earlier this year, on his visit to China, Tim Cook stressed that, given the growing user base of iOS devices, it is imperative to do rigorous testing before releasing a software update. This response from the Apple CEO came after many advertisements featuring iOS 11 bugs surfaced on the internet. So what features are expected to be rolled out in iOS 12? Here is a list of possible announcements at WWDC:

A Digital Health tool for parents, enabling them to manage children's digital time
Animoji updates
Animoji integration into FaceTime
Sleep tracking improvements
Multi-user session support for AR games and FaceTime
Improved messaging and presence technologies

Another important update rumored for this time around is the unified app framework, codenamed Project Marzipan, which will let developers create a single app with an interface that adapts to the device it runs on. While sources at Bloomberg say this could be rolled out only as early as next year, we could see some sort of announcement at the conference next month.

Will macOS 10.14 finally bring iOS into its fold?

[Image source: Apple]

The 2018 edition of Apple's Mac operating system is likely to be released as a public beta a month after WWDC, followed by an actual release in September or October, based on the release trends of previous editions. While Tim Cook hinted that there would be no cross-platform compatibility between iOS and macOS applications, such a feature would greatly increase the number of apps available on Macs. It could also mean that Apple will bring some of its iOS-only apps, like Home, to the Mac. Apart from this, we may see several minor additions to macOS, like improvements in Safari for better video conferencing and AI image-identification capabilities.

ABI (application binary interface) comes to Swift 5

[Image source: Apple]

Swift continues its evolution as one of the safest, fastest, and most expressive languages, with better performance and new features in every release. One of the most awaited features in this year's edition is Application Binary Interface (ABI) stability, a feature originally intended for the Swift 4 release that got delayed. If you already know what an API is, then understanding ABI becomes a lot easier: an ABI is the compiled counterpart of an API. When you write source code, you access a library through its API. Once the code is compiled, your application accesses the binary data in the library through the ABI.
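To make the API/ABI distinction concrete, here is a trivial sketch of our own; the source below talks to the standard library through its API, while the compiled binary is bound to the library through its ABI:

```swift
// Sketch: at the source level this code uses the String API from the Swift
// standard library. Once compiled, the calls are resolved against the
// library's binary interface (the ABI) — with a stable ABI, the same compiled
// binary keeps working with the stdlib shipped in future OS releases.
let greeting = "Hello, WWDC"
print(greeting.uppercased())
```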
One of the big advantages of ABI stability is that it lets OS vendors embed a Swift standard library and runtime in the OS that is compatible with applications built with Swift 5 or later. Other notable features expected at WWDC are:

String ergonomics to improve processing of the string type
Improvements to existing standard library facilities
Improvements to the Foundation API so the Cocoa SDK can work seamlessly with Swift
Syntactic additions
Laying the groundwork for a new concurrency model

Is 2018 the year when users say, "Hey Siri, you the best!"?

[Image source: Apple]

Siri was one of the most widely used voice assistants in the world when it was introduced in 2011. Since then, Apple has faced stiff competition from Amazon-powered Alexa and Google's me-too efforts tied to its Assistant, which runs on Android, iOS, and most Google products. The pressure is mounting on Apple to compete in this wildly competitive voice platform space. Hopefully, this year Apple will address or add several key capabilities in Siri that could help it stand out. The most likely enhancements include:

SiriKit: New domains added to the current APIs. Siri already includes messaging, payments, phone calls, and ride bookings.
Workflow: Apple acquired Workflow a year ago; a Workflow intent for Siri could enable third parties to craft any sort of skill for users to launch with their voice, or graphically from any iOS device. Apple hopes this could be its answer to Alexa skills or Google's Actions.
Better "Hey Siri": Apple has been enhancing how different Siri devices work together on the same network, an issue that neither Amazon nor Google has really had. This is still a work in progress, but HomePod already does a good job of silencing your iPhone when both are listening for "Hey Siri."

ARKit

[Image source: Apple]

Last year at WWDC, Apple debuted ARKit, enabling developers to create engaging virtual experiences augmented over the real world. This year we can expect a lot of improvements in this tool. For starters, ARKit 1.5, released last month, brings new features to the framework, such as the ability to detect vertical and irregularly shaped surfaces and to detect 2D objects, letting developers interact with them at better resolution. With the framework evolving, we expect to see some cool demos at this year's conference; perhaps an integration of ARKit with Maps, along the lines of what we saw at the I/O conference earlier this month.

These announcements are exciting, confirming our hunch that Apple will show off the high-profile initiatives that will shape the coming year. While only a few thousand lucky developers will get a chance to attend, others can live stream the event on Apple's official website.

Also read:
Apple steals AI chief from Google
F8 AR Announcements
Watson-CoreML: IBM and Apple's new machine learning collaboration project

HTC Vive Focus 2.0 update promises long battery life, among other things for the VR headset

Natasha Mathur
29 May 2018
4 min read
HTC Vive Focus, the standalone VR headset, gets a major System Update 2.0, making the headset even more versatile ahead of its global release. HTC announced at the annual Vive Ecosystem Conference in Shenzhen that its standalone 6DoF Vive headset can install System Update 2.0, which promises longer battery life, the ability to link with HTC smartphones, and passenger and surroundings modes, along with other exciting features.

Here's a quick rundown of the updates made to HTC Vive Focus:

Smartphone integration: A newly added ability to link an HTC smartphone with the Vive Focus. This lets users take calls, receive messages, and view social media notifications from a paired HTC smartphone without taking the headset off. This new Focus feature will be made available first for the HTC U12+ and will later be distributed to all other HTC smartphone users through HTC's and Tencent's app stores.

Surroundings Mode: This mode is handy when using the Vive Focus in a moving vehicle. It is a see-through mode, enabled by double-clicking the power button on the headset, which activates the headset's camera. This lets the user see the world outside the headset without taking it off.

Passenger Mode: This mode makes it possible for users to experience the virtual world seamlessly, ensuring they do not drift in the virtual world due to the turbulence of a moving vehicle.

Stream content from Viveport or SteamVR: The new Vive Focus update also lets users stream Viveport or SteamVR content from a PC to a Vive Focus using the Riftcat VRidge app over 5 GHz Wi-Fi.

VR app installation: You can now install apps directly on the microSD card with System Update 2.0, and purchase apps using credit cards from within the Viveport store.

Other upcoming features

Further features are lined up for the HTC Vive Focus in the third quarter of 2018. These include:

A software update that will make Vive's 3DoF headset controller behave like a 6DoF controller by using computer vision on the Focus's camera.
Hand-movement tracking via the front cameras, using gesture recognition technology.
The ability to stream media such as apps, videos, and games directly from the six-inch U12+ phone screen to the bigger VR display.
Support for the Seagate VR Power Drive, a combined media storage device and external battery pack. The VR Power Drive is also compatible with the U12+ and will be optimized for the Focus as well, promising considerably improved battery life.

Vive Focus System Update 2.0 is available on all HTC Vive Focus devices, which for now are available only in China. No announcement has been made regarding availability in the West, but with the company shipping dev kits to developers in the past few weeks, an announcement should come soon. Details regarding cost and storage capacity are expected later this year but are not yet confirmed.

With competitors such as the Google Daydream-powered Lenovo Mirage Solo, the latest updates to the HTC Vive Focus have built real anticipation among users about what more to expect in the VR world.
Also read:
Qualcomm may announce a new chipset for standalone AR/VR headsets this week at Augmented World Expo
Top 7 modern Virtual Reality hardware systems
Understanding the hype behind Magic Leap's New Augmented Reality Headsets

Qualcomm announces a new chipset for standalone AR/VR headsets at Augmented World Expo

Natasha Mathur
28 May 2018
4 min read
Qualcomm Inc has announced a new chipset, the Snapdragon XR1, its first dedicated Extended Reality (XR) platform, to power standalone virtual reality (VR) and augmented reality (AR) headsets. This is Qualcomm's attempt to expand its business beyond the realm of smartphones. The new dedicated chipset was introduced at the Augmented World Expo in Santa Clara, California today.

The AR/VR industry has developed quite an interest in building standalone headsets recently. With standalone headsets such as Facebook's Oculus Go and the Google Daydream-powered Lenovo Mirage Solo ruling the market, it is quite evident that the need for powerful chipsets to power these devices is only going to rise.

Snapdragon XR1 key features

Let's look at the features that make the new chipset a powerful option for standalone headsets:

The XR1 is a system-on-a-chip (SoC): it has all the required electronic circuitry and smartphone parts on a single integrated circuit (IC). The chip includes an ARM-based CPU, a GPU, a vector processor, and an AI engine.
The AI engine can optimize AI functions such as object recognition and pose prediction on the device.
Head-tracking interaction with headsets will be possible, and the chip will also be capable of handling voice control.
It enables a better user experience with high-quality visual and audio playback, as well as 3DoF and 6DoF interactive controls.
The XR1 supports 4K video at up to 60 frames per second, dual displays, 3D overlays, and popular graphics APIs such as OpenGL, OpenCL, and Vulkan.
The chipset includes a Spectra image signal processor, which helps reduce noise for clearer image quality.
The XR1 uses Qualcomm's audio technologies such as Aqstic, the 3D Audio Suite, and aptX, enabling hi-fi sound, along with Aqstic's always-on, always-listening assistance.
A Head Related Transfer Functions system will give the impression of sounds coming from a specific point in space, creating a more realistic experience.
It can delegate tasks to different cores for more efficient performance by using heterogeneous computing.

Qualcomm's goal with this chip design is to make it easy for AR/VR hardware manufacturers to design and build headsets that are cheap yet powerful and energy-efficient. The XR1 comes with an SDK that helps manufacturers implement some of these features, as well as Bluetooth and Wi-Fi capabilities.

The famous Oculus Go uses a Qualcomm smartphone chip, and the Lenovo Mirage Solo also uses Qualcomm phone processors, but the battery life of standalone headsets has not been comparable to that of a smartphone. With chipsets built specifically for these headsets, battery life should improve considerably.

Qualcomm is not the only one working on chips dedicated to headsets; others are aiming at similar technologies too. Apple is reportedly developing its own chip for AR glasses that could go on sale in early 2020, and Nvidia and Intel are among the others that want to join the game. It is also worth noting that Qualcomm is on the lookout for new sources of revenue as the smartphone industry dries up and competition continually increases. Qualcomm will team up with existing headset makers that plan to include the chip, such as HTC (Vive), Vuzix, Meta, and Pico.
With Qualcomm unveiling the new Snapdragon XR1 at the Augmented World Expo today, AR/VR manufacturers across the globe have received an extra boost for shipping hardware in the AR/VR space. For more details on the Snapdragon XR1, check out the official Qualcomm press release.

Also read:
Top 7 modern Virtual Reality hardware systems
Types of Augmented Reality targets
Google open sources Seurat to bring high precision graphics to Mobile VR

Microsoft introduces SharePoint Spaces, adds virtual reality support to SharePoint

Sugandha Lahoti
24 May 2018
2 min read
Microsoft has brought its SharePoint software to virtual reality headsets. The new technology, called SharePoint Spaces, will allow SharePoint users to visualize and interact with data and product models using virtual reality. SharePoint Spaces was announced at the opening keynote of the SharePoint Virtual Summit, an online event tied to the SharePoint Conference.

SharePoint is a team collaboration tool by Microsoft, used by Office 365 users for organizing documents, managing content, and building internal sites. SharePoint Spaces expands on SharePoint's capabilities by creating mixed reality experiences for businesses, viewable in VR headsets and in any browser that supports WebVR. It has templates for VR experiences such as a gallery of 3D models or 360-degree videos.

Possible use cases of SharePoint Spaces, as shared by Microsoft, include:

Recruiting and onboarding: SharePoint Spaces can be used for 360-degree virtual induction of new recruits. Instead of being informed verbally, an employee can learn about the organizational structure and campus layout through an interactive, immersive experience, and access information about the company's products.

Learning: Combining virtual reality with SharePoint can also enhance learning. Microsoft says, "With SharePoint spaces, learning comes to life as you gain broad perspective with a panoramic view of a topic and learning objectives." SharePoint Spaces lets readers and learners explore personalized, dynamic content as an immersive, sensory experience.

Product development: The tool can also be used for creating 3D prototypes, which is especially valuable in the product development landscape. Experts can study and evaluate data, content, and processes from every angle, attach annotations, and visualize improvements.

Microsoft will soon open a preview version of SharePoint Spaces; it will later be made available to all Office 365 commercial users.

SharePoint has also incorporated AI for powerful content collaboration in Office 365, including personalized and intelligent search in the SharePoint mobile app, a personalized Office.com, enhanced image capabilities, and cognitive services for business process automation. Complete details about SharePoint innovations are available on the Microsoft blog.

Also read:
Amazon open sources Amazon Sumerian, its popular AR/VR app toolkit
Verizon launches AR Designer, a new tool for developers
Google open sources Seurat to bring high precision graphics to Mobile VR

F8 AR Announcements

Amarabha Banerjee
22 May 2018
4 min read
What do they mean for developers?

Facebook is having a rough year, to say the least. The Cambridge Analytica scandal seems to have impacted it in a way that is yet to unfold in its full form, and holding the annual F8 conference during this tumultuous period can have repercussions, closer scrutiny of the announcements being one of them. Today we are going to ask some questions about the AR-related announcements made at the Facebook F8 conference: how important they are to users as well as developers, and what they really imply for Facebook's future development plans.

AR in Messenger

Facebook has introduced AR services to its Messenger platform. This will enable developers to build bots on the Messenger platform, potentially making it a much more interactive and informative platform than it currently is. As and when these features are implemented, the question on everyone's mind is: will our data be safe? Additionally, Facebook is also planning to implement payment gateways on the Messenger platform. This opens up a new avenue for developers, since they can now create third-party bots for Messenger. But can we really trust the security and privacy features, keeping in mind that these bots will largely come from third parties?

AR in Facebook Lite

Facebook Lite arrived with the promise of a limited version of the social networking site for smaller and older Android phones, and for those with slow data connections. At the latest F8 conference, Facebook promised to bring AR to the Lite version. How does that square with the Lite platform's goal of being lightweight and easy on system resources? Will the introduction of AR make Facebook Lite bloated, or unfit for older phones whose processors and system configurations might not support the AR features?

AR camera effects in Instagram

Last year, Facebook announced the AR camera platform, and this time around they plan to bring it to Instagram. Using an updated version of AR Studio, creators will be able to design unique, interactive camera experiences, including face filters and world effects, for their followers on Instagram. There have been rumors that some of these features and filters are heavily inspired by Snapchat's hugely popular features. The real question is whether Facebook and Instagram together can add something more to their AR camera features that would enable them to compete with Snapchat in this category.

Update to AR Studio

Facebook has updated its in-house AR development platform, AR Studio, with the following new features:

Filtering of image textures can now be set to none or bilinear (default: bilinear).
Wrapping modes of image textures can now be set to repeat, clamp, or mirror (default: clamp).
Tiling scale and offset for material textures can now be specified.
When using the plane tracker, you can choose to target specific textures in the world from the Inspector panel.
More tooltips have been added to the texture inspector.

Facebook has also signaled upgrades to its underlying AR technology, for instance advanced target tracking, which includes hand tracking as well as high-fidelity body tracking. This should make for much more precise filters, improving real-time object and face detection capabilities.
But since AR Studio was introduced at last year's F8 conference, are these changes really ground-breaking, or just a means to stay alive in the competitive AR development market? With the renewed presence of Google and its recently updated ARCore, and Apple's ARKit, how much share AR Studio can maintain is a question only time can answer. AR and its implications in real life can be very interesting, but AR on mobile has been suffering for some time due to limited system resources, the non-uniformity of mobile platforms, and location-dependent data connection issues. How these new announcements will change the current mobile AR landscape is something we will have to wait and watch.

Also read:
Understanding the hype behind Magic Leap's New Augmented Reality Headsets
Leap Motion open sources its $100 augmented reality headset, North Star
Unity plugins for augmented reality application development

Why your app needs real time mobile analytics

Amarabha Banerjee
21 May 2018
4 min read
What's every mobile developer's worst nightmare? The mere idea that their app has fallen into obscurity, without a single install or user engagement! If you are a mobile developer reading this, you have probably had this thought at some point, in imagination if not in reality. We all know that the traditional analytics methods adopted and made popular by Google don't translate well to mobile apps: they are not helpful in finding out the exact reasons why your app might have failed to register a high number of installs or user engagements. So the real question to alleviate your fear is: what data pointers do you need to filter out the noise and make your app stand out among the clutter? The answer is not merely a name change but a change in approach, and it's called mobile analytics.

For starters, some reasons users typically don't interact with your app are:

The UX is not tempting enough
The app is slow
The app doesn't live up to what it promised
The target audience segment is wrong
Your app is really bad

Barring the last pointer, these can have real-life solutions that can salvage your app, if applied in time. The emphasis here is on the phrase "in time"; that's where real-time mobile analytics comes in, because with mobile apps every minute counts, literally.

Mobile analytics works on the kinds of data collected and the ways they are collected. To understand why your app is not an instant hit, you will have to keep track of:

Geographical data of app installs: This helps you identify your geographical strongholds, i.e., where you have got the most response. You can then target other, similar locations to make your ad campaigns effective.
Demographics of the users who engage with your app: This data is particularly helpful in identifying the age group and the type of users who engage in in-app purchases, helping you reach your overall goal.
Which sources provide loyal users and generate more revenue: Knowing the right media outlet to promote your ad is imperative to its success. This enables you to target the right media sources for maximum revenue and a more loyal fan base.
The reasons users quit: This identifies why the app is not getting popular. Analyzing this data reveals potential flaws in the UX or app performance, or security issues that might be prompting users to quit your app suddenly.

So how do you enable real-time mobile analytics? A few platforms provide it ready to deploy. Fair warning: you might end up feeling like you used a black box, where you feed in data and results come out without knowing how they were produced. However, there are other solutions, provided by IBM Cloud and AWS Pinpoint among others, that let developers be part of the overall analytics process and play with the parameters to see predictions of app usage and conversion.

The real challenge, however, lies in bringing all these analytics to your mobile device. For example, if you are seeing sudden uninstalls of your app and all you have at hand is your mobile device, you should still be able to upload that data to the cloud and analyze it on your phone to get insights on what should be done: whether there is an urgent UX issue that needs fixing, a sudden bug in the application, or a security threat that could compromise user data. To perform this kind of analytics natively and in real time, we would need better computation capabilities and battery power. Whether tech giants like Google, AMD, and Microsoft will come up with a solution to this mobile computation problem, with much longer battery life, is something only time can tell.

Also read:
Packt teams up with Humble Bundle to bring developers a selection of mobile development bundles
Five reasons why Xamarin will change mobile development
Hybrid Mobile apps: What you need to know
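As a footnote to this piece, here is a purely hypothetical sketch of what logging the data pointers discussed above might look like from an iOS app; the AnalyticsEvent type and track function are our own inventions, not any vendor's API:

```swift
import Foundation

// Hypothetical sketch only — not a real analytics SDK. Models the data
// pointers discussed above: geography, demographics, acquisition source,
// and the reason a user quit.
struct AnalyticsEvent: Codable {
    let name: String                 // e.g. "install", "uninstall", "purchase"
    let countryCode: String         // geographical data for the install
    let ageBracket: String          // coarse demographics, e.g. "25-34"
    let acquisitionSource: String   // which media outlet brought the user in
    let quitReason: String?         // e.g. "slow_launch"; nil if not a quit event
    var timestamp = Date()
}

func track(_ event: AnalyticsEvent) {
    // A real pipeline would batch events and ship them to a backend;
    // here we just encode one to show the shape of the payload.
    if let payload = try? JSONEncoder().encode(event) {
        print(String(data: payload, encoding: .utf8) ?? "")
    }
}

track(AnalyticsEvent(name: "uninstall", countryCode: "IN", ageBracket: "18-24",
                     acquisitionSource: "play_store_ad", quitReason: "slow_launch"))
```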

Introducing Fuse Open, Fuse App Engine and apps-as-a-service from the native app development platform, Fuse

Sugandha Lahoti
17 May 2018
2 min read
Fuse has announced two major new products in the app development space, Fuse Open and the Fuse App Engine, along with a brand new business model called Apps-as-a-Service. First, Fuse has open sourced its app development platform as Fuse Open, making the entire Fuse platform, tooling, and premium libraries available for free. Second, it has introduced the Fuse App Engine, combined with the new Apps-as-a-Service business model, to provide existing SaaS and digital services with state-of-the-art native apps tailored to their specific needs.

Fuse Open

Fuse Open makes it easy for students and beginner developers to create new mobile app prototypes. They can build native mobile user interfaces using the UX Markup language, with JavaScript for the business logic. The open sourced tools and platforms include the Fuse platform, Uno (the foundation of Fuse), Fuse Studio (the desktop design tool), the premium code libraries, the documentation, and the iOS and Android preview apps. The source code is hosted on GitHub under the MIT license. The company will continue to host the Fuse forums, documentation, and Slack community; however, the forums and Slack community will transition to being managed directly by the community.

Fuse App Engine + Apps-as-a-Service

With its new business model, Fuse aims to address one of the crucial limitations in the app development space: the misalignment between the people who need apps and the people who make them. With Apps-as-a-Service, businesses and enterprises can have an app based on an existing product or service without having to develop it from scratch. The model is fueled by the Fuse App Engine, which connects with the backend and hosts the data and logic for your mobile app. The mobile app consists of an App Model and an App. App Models are defined by adding a thin layer of semantic information to existing REST APIs; they can then easily be configured into Apps, giving each customer or use case the right amount of customization.

Visit the Fuse Blog for a comprehensive list of announcements.

Also read:
Android P new features: artificial intelligence, digital wellbeing, and simplicity
Xamarin Forms 3, the popular cross-platform UI Toolkit, is here!
Google's Android Things, developer preview 8: First look

Twitter's disdain for third-party clients gets real

Richard Gall
17 May 2018
3 min read
There is a huge range of Twitter clients out there offering alternative ways to access Twitter. For many users, these provide a better experience, with a level of functionality that Twitter's own suite of applications doesn't. However, Twitter has revealed that this August it will bring in new restrictions and limitations on how these applications are built. Of course, for Twitter these restrictions aren't restrictions as such; it's actually a new developer API called the Account Activity API. The reason for the change is that it gives Twitter more power and control over what developers build, and allows it to monetize the developer API in a different way too.

[Image from blog.twitter.com]

This is undoubtedly going to make applications like Twitterrific significantly worse. For Twitter, that might make some sense: it goes without saying that the platform would prefer users to use its own applications to access the service. But for developers and users of these applications, it might make life a little more difficult.

What restrictions will third-party Twitter clients face?

Twitter will be changing the way developers of third-party Twitter clients access Twitter. Back in April, Twitter explained the changes it planned to make to its developer API in a Twitter thread:

https://twitter.com/TwitterDev/status/982346370882461696

Because of a delay, the date this change comes into effect is now August 16, 2018. Essentially, on that date Twitter will turn off a number of legacy services, including Site Streams and User Streams. Developers will then have to migrate to the new Account Activity API. Learn more about migrating to the Account Activity API here.

What impact will Twitter's change have?

As already mentioned, this is going to have a big impact on the way developers build third-party Twitter clients, and the knock-on effect on users will be substantial. Essentially, the 'real-time' experience of Twitter that you get in Twitter's own applications will be missing: users will have to refresh their Twitter feed, push notifications are unlikely to work, and Direct Messages may be hampered too, especially on mobile.

A lot of people are very unhappy about the changes Twitter is making. On Twitter, the response was incredibly negative:

https://twitter.com/objectivechad/status/982353715708362752
https://twitter.com/merrickluo/status/983001459078742021
https://twitter.com/DanDuivel/status/982806945772912641

However, there was some support for, or at least awareness of, the change when it was announced:

https://twitter.com/trisweb/status/984005164372779010

Of course, this is one step in Twitter trying to give itself a boost; the company has been struggling for some time. However, you do wonder how successful this change will be for Twitter. Although the premium Account Activity API costs around $11.60 a month, it is only open to applications with fewer than 250 users. Clearly, this isn't going to be feasible for many of the leading Twitter clients with thousands of users.

Read next:
Facebook's F8 Conference – 5 key announcements
The Cambridge Analytica scandal and ethics in data science
Sentiment Analysis of the 2017 US elections on Twitter

Amazon open sources Amazon Sumerian, its popular AR/VR app toolkit

Sugandha Lahoti
17 May 2018
2 min read
Last year at re:Invent 2017, Amazon unveiled Amazon Sumerian, a toolkit for easily creating AR, VR, and 3D apps. Now Amazon has open sourced it, allowing all developers to create compelling virtual environments and scenes for their AR, VR, and 3D apps without having to acquire or master specialized tools. The open sourcing of Amazon Sumerian comes as part of Amazon's strategy to expand its reach and revenues by offering its cloud services to as many developers, startups, and organizations as possible. As Kyle Roche, the GM of Amazon Sumerian, puts it: "We are targeting enterprises who don't have the talent in-house. Tackling new tech can sometimes be too overwhelming, and this is one way of getting inspiration or prototypes going. Sumerian is a stable way to bootstrap ideas and start conversations. There is a huge business opportunity here."

Most importantly, with Amazon Sumerian you don't necessarily need 3D graphics or programming experience to build rich, interactive VR and AR scenes. Open sourcing Sumerian should therefore win it more traction from non-developers and trained professionals alike.

Amazon Sumerian is equipped with multiple user-friendly features:

Editor: A web-based editor for constructing 3D scenes, importing assets, scripting interactions, and special effects, with cross-platform publishing.
Object Library: A library of pre-built objects and templates.
Asset Import: Upload 3D assets to use in your scene. Sumerian supports importing FBX and OBJ files and Unity projects.
Scripting Library: A JavaScript scripting library, via Sumerian's 3D engine, for advanced scripting capabilities.
Hosts: Animated, lifelike 3D characters that can be customized for gender, voice, and language.

Amazon Sumerian also has baked-in integration with Amazon Polly and Amazon Lex to add speech and natural language understanding to Sumerian hosts. Additionally, the scripting library can be used with AWS Lambda, allowing the use of the full range of AWS services. The VR and AR apps created using Sumerian can run in browsers that support WebGL or WebVR and on popular devices such as the Oculus Rift, HTC Vive, and those powered by iOS or Android.

You can learn more by visiting the Amazon Sumerian homepage and browsing the Sumerian tutorials.

Also read:
Google open sources Seurat to bring high precision graphics to Mobile VR [news]
Verizon launches AR Designer, a new tool for developers [news]
Getting started with building an ARCore application for Android [tutorial]

Android P new features: artificial intelligence, digital wellbeing, and simplicity

Kunal Chaudhari
14 May 2018
9 min read
Google announced the beta version of Android P at the I/O 2018 conference last week. This is one of the biggest updates to the mobile operating system since the release of Android 5.0 Lollipop, with a myriad of features: design changes, new animations, a better notification system, and plenty of helpful shortcuts that improve the overall user experience.

A decade has gone by since Google showcased the first version of Android in 2008, so it was obvious that this tenth version of the OS called for an update that would grab the attention of users and developers alike. The previous version, Android Oreo, failed to delight users by going beyond their expectations; it holds the least market share compared to its three predecessors. So the stakes are higher than usual this time around for the world's favorite mobile OS by Google.

In his opening keynote, Sundar Pichai, CEO at Google, came out all guns blazing, with the focus, as usual, on new developments in AI, a somewhat controversial demo from Google's voice assistant, and Google's very own AI-specific processing units (TPUs). But amidst all the cool AI stuff, he gave the world a peek into the new features of the much-awaited Android P. He spoke of how Google has introduced some key capabilities in Android to help people find the right balance between digital and real life. After more keynotes and sessions, it became clear that the new Android features fall under a theme with three broad areas: intelligence, digital wellbeing, and simplicity.

Machine intelligence on mobile

Machine learning has been a key area of development for Google over the last few years, and with each Android release, more features use these machine learning capabilities. Android P is a step toward bringing AI to the core of the operating system, making smartphones smarter. Here's a quick rundown of enhancements in this category:

Adaptive Battery

Found in pretty much every user survey, battery life is a top priority. With Android P, Google has partnered with its AI subsidiary DeepMind to provide a more consistent battery experience. It uses a deep convolutional neural network, in simple words 'on-device machine learning', to figure out which apps the user is likely to use in the next few hours and which apps will not be used at all during the day. The operating system takes this usage pattern into account to spend battery power only on the apps you are actually going to use, resulting in a considerable improvement in battery performance, much of which was previously spent updating apps in the background.

[Image source: Google Blogs]

Adaptive Brightness

Another AI-powered feature learns how users set their brightness in different ambient lighting, and based on these preferences Android P sets the brightness automatically, in a power-efficient way. Most smartphones already ship with auto-brightness, but the main difference is that they do not take user preference and environmental conditions into account. Google claims that more than 50% of users testing Android P have stopped adjusting the brightness manually.

App Actions

Last year in Android Oreo, Google launched a feature called 'predicted apps', which predicts the next app the user is most likely to launch.
If this wasn't spooky enough, Google released App Actions this year, which predicts the next action or task the user is going to perform and pins it to the top of the Google Launcher.

[Image source: Google Blogs]

Slices

This is one interesting feature where Google brings a slice of an app's UI to users while they are searching for the app on the phone. Suppose you search for the ride-sharing app Lyft on Google: it could provide a slice of the Lyft UI in the search dropdown with your preferred options, in this case perhaps your usual rides home or to work, selectable right there from the search menu. The feature depends entirely on developers choosing to provide a snapshot of their app's UI in Google search, since it risks users never visiting the actual app.

While all these AI features sound cool and claim to provide a rich user experience, they also raise the big question of user data. From the looks of it, these features leverage a lot of user data and app usage patterns, which many users find alarming; take the recent breach of user data on Facebook. Google claims these features are the result of on-device machine learning, where the data stays private, restricted to the user's phone.

[Image source: Google Blogs]

Digital wellbeing takes center stage

The next set of features and tools is what Google is calling 'digital wellbeing'. The goal is to enable users to understand their habits, control the demands technology places on their attention, and focus on what actually matters. Digital wellbeing was started by Tristan Harris, a former product manager at Google and co-founder of the Center for Humane Technology. While working on the Inbox app, Tristan found himself increasingly disillusioned by the overwhelming demands of the tech industry and wrote a memo on digital wellbeing that went viral in the company. Sameer Samat, vice president of Product Management at Google, gave an interesting talk at I/O this year that extended Tristan's philosophy, covering the digital wellbeing of users and how Android P claims to help achieve it with a brand new set of tools.

[Image source: Google Blogs]

Dashboard

Just as a Fitbit tracks your activity and reports it to motivate you, Android P includes a dashboard that monitors how long you've been using your phone and specific apps. It's supposed to help you understand what you're spending too much time on so that you can adjust your behavior.

App Timer

While Dashboard gives a summary of the time spent on the phone, it also lets users tap into the apps they are using and set a daily time limit. Once an app crosses the limit, its icon fades to gray on the home screen and it won't launch, reminding the user that the limit has been reached.

Do Not Disturb

Do Not Disturb is already available on Android devices to silence notifications from texts or email, which comes in handy when you are in a meeting, away, or not paying attention to your phone. The new Do Not Disturb in Android P goes one step further and removes all visual indicators and notifications even if you have the device in your hand, allowing you to do better things with your phone, like reading.
While all these AI features sound cool and promise a richer user experience, they also raise the big question of user data. From the looks of it, these features leverage a lot of personal data and app usage patterns, which to many users is quite alarming, especially after the recent breach of user data at Facebook. Google claims that these features are the result of 'on-device machine learning', where the data stays private, restricted to the user's own phone.

Image source: Google Blogs

Digital wellbeing takes center stage

The next set of features and tools is what Google is calling 'Digital Wellbeing'. The goal here is to enable users to understand their habits, control the demands technology places on their attention, and focus on what actually matters. Digital wellbeing was started by Tristan Harris, a former product manager at Google and co-founder of the Center for Humane Technology. While working on the Inbox app, Tristan found himself becoming increasingly disillusioned by the overwhelming demands of the tech industry and wrote a memo on digital wellbeing that went viral inside the company. Sameer Samat, Vice President of Product Management at Google, gave an interesting talk at I/O this year that extended Tristan's philosophy, covering the digital wellbeing of users and how Android P claims to help them achieve it with a brand new set of tools.

Image source: Google Blogs

Dashboard

Just as a Fitbit tracks your activity and reports back to motivate you, Android P includes a dashboard that monitors how long you have been using your phone and specific apps. It is supposed to help you understand what you are spending too much time on, so that you can adjust your behavior.

App timer

While Dashboard gives a summary of the time spent on the phone, it also lets users tap into the apps they use and set a daily time limit. Once an app crosses its limit, its icon fades to gray on the home screen and it won't launch, a signal that the user has hit the cap for the day.

Do Not Disturb

Do Not Disturb is already available on Android devices to silence notifications from texts or emails, which comes in handy when you are in a meeting, away, or simply not paying attention to your phone. The new Do Not Disturb in Android P goes one step further and removes all the visual indicators and notifications even when you have the device in your hand, freeing you to do better things with your phone, like reading. Google is also adding a feature where you can turn the phone onto its face to activate Do Not Disturb automatically. No more dinner interruptions.

Wind Down

People tend to spend a lot of time on their phones in bed just before sleeping. Smartphones already dim the notification light at bedtime, but Google is going one step further with the 'Wind Down' feature: as your bedtime approaches, it turns the screen to grayscale, making apps less tempting. Google hopes this will let users "remember to get to sleep at the time [they] want".

Overall, these features sound like a real step forward in making phones less addictive, but much of what we know about them is based not on peer-reviewed research but on anecdotal data. And if users don't enable any of the Digital Wellbeing features, the new version of Android isn't going to change anything.

UI simplicity once again in vogue

One of the key takeaways from previous Android releases has been simplicity in the UI. Google has long been trying to make the UI more accessible and approachable to current as well as new users, and Android P is not only banking on the suggestions and patterns surfaced by its machine learning capabilities but also making the user experience simpler.

Gesture-based navigation controls

Navigation gestures aren't new. Mobile operating systems such as webOS, MeeGo, and BlackBerry 10 all supported navigation gestures, but the iPhone X popularized them by removing the home button, which made gestures the only way to navigate the device. The change has generally been appreciated by users, as it is simple and easy to learn. Google has introduced gestures in Android P to substitute for the buttons: swipe up to open the recent apps menu, called Overview, while a longer swipe up opens the app drawer; swipe down to return to the home screen; and swipe left and right to switch between recently opened apps.

Image source: Google Blogs

Other features in this segment include manual rotation, smart text selection, and quick settings, among others. You can read about these features on the official Android web page.

Beyond intelligence, simplicity, and digital wellbeing, there are hundreds of additional improvements coming in Android P, including security and privacy upgrades such as DNS over TLS, encrypted backups, Protected Confirmation, and more.

The initial reaction to all these features was decidedly mixed; while some praised the evolution of Google's operating system, others slammed it for adopting features that look strikingly similar to Apple's iOS. Overall the features look great, but we would like to see some rigorous investigation into whether people actually feel empowered while using the latest version of Android.

Google still hasn't told us what dessert-themed name the Android update will take, saving the naming announcement for later in the summer, closer to the actual release date. Pancake, Peanut Butter, Pumpkin Pie, and Popsicle are some of our top predictions.

The Android P Beta is available now on Google Pixel, Essential Phone, and select OnePlus, Mi, Sony, and Oppo handsets.

Top 5 Google I/O 2018 conference Day 1 Highlights

Google News' AI revolution strikes balance between personalization and the bigger picture

Google's Android Things, developer preview 8: First look
Top 5 Google I/O 2018 conference Day 1 Highlights: Android P, Android Things, ARCore, ML kit and Lighthouse

Sugandha Lahoti
10 May 2018
7 min read
Google I/O 2018, Google's most anticipated conference, kicked off yesterday at the Shoreline Amphitheatre in Mountain View, California. It seems like only yesterday that Google I/O 2017 ended, leaving us in awe of the new AI capabilities announced there, and yet here we are with the next annual I/O event in front of us. On day 1, CEO Sundar Pichai delivered the keynote, promising a three-day gala event for over 7,200 attendees with a plethora of announcements and updates to Google products. I/O'18 will also host 400+ extended events in 85 countries.

Artificial intelligence was a big theme throughout. Google showcased ML Kit, an SDK for adding Google's machine learning smarts to Android and iOS apps. New features were added to Android P, Google's most ambitious Android update yet. Not to mention the release of Lighthouse 3.0, new anchor tools for multiplayer AR, and updates to Google Assistant, Gmail, Google Maps, and more. Here are our top picks from Day 1 of Google I/O 2018.

Machine Learning for Mobile Developers

Google's newly launched ML Kit SDK allows mobile developers to make use of Google's machine learning expertise in the development of Android and iOS apps. The kit lets mobile apps integrate a number of pre-built, Google-provided machine learning models, which support text recognition, face detection, barcode scanning, image labeling, and landmark recognition, among other things. What stands out here is the fact that ML Kit is available both online and offline, depending on network availability and the developer's preference. In the coming months, Google plans to add a smart reply API and a high-density face contour feature for the face detection API to the list of currently available APIs.
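As a taste of the developer experience, here is a hedged Kotlin sketch of on-device text recognition with ML Kit, which shipped under the com.google.firebase.ml.vision packages at launch; method names shifted slightly across the early releases, so check the current reference before copying this verbatim.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Runs ML Kit's on-device text recognizer over a bitmap; no network needed.
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // Recognized text comes back grouped into blocks, lines, and elements.
            for (block in result.textBlocks) {
                Log.d("MLKit", "Found text: ${block.text}")
            }
        }
        .addOnFailureListener { e ->
            Log.e("MLKit", "Text recognition failed", e)
        }
}
```

Swapping the on-device recognizer for the cloud-backed one follows the same pattern, trading offline availability for higher accuracy.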
New Augmented Reality experiences come to Android

At the I/O conference, Google also announced several updates to its ARCore platform, focused on overcoming the limitations of existing AR-enabled smartphones.

Multi-user and shared AR

New Cloud Anchors tools enable developers to create collaborative experiences that can be shared by multiple users across both Android and iOS devices.

More surfaces to play around with

Vertical Plane Detection, a new ARCore feature, allows users to place AR objects on more surfaces, like textured walls. Another capability, Augmented Images, brings images to life just by pointing a phone at them.

https://www.youtube.com/watch?v=uDs9rd7yD0I

Simple AR development

The new ARCore updates also simplify AR development for Java developers with the introduction of Sceneform. Developers can now build immersive, 3D apps optimized for mobile without having to learn complicated APIs like OpenGL. They can use Sceneform to build AR apps from scratch as well as to add AR features to existing ones.

Android P: the most ambitious Android OS yet

The name of the new version is yet to be decided, but judging by the trend of naming the OS after a dessert, it may be Pumpkin Pie, Peppermint Patty, or Popsicle. I'm voting for Popsicle! Apart from the name, here are the other major features of the new OS:

Jetpack: Jetpack is the next generation of the Android Support Library, redefining how developers write applications for Android. It manages tedious activities like background tasks, navigation, and lifecycle management, so developers can focus on core app development.

Android KTX: At last year's I/O, Google made Kotlin a first-class language for developing Android apps. Continuing that trend, Google announced Android KTX at I/O'18. It is the part of Jetpack that further optimizes the Kotlin developer experience across libraries, tooling, runtime, documentation, and training (see the sketch after this list).

Android Studio 3.2: There are 20 major features in this release of Android Studio, spanning ultra-fast Android Emulator Snapshots and Sample Data in the Layout Editor to a brand new Energy Profiler for measuring an app's battery impact.

Material Design 2: While other Google apps like Gmail and Tasks have already received a visual update, in Android P Google is overhauling the OS' overall look with what people are calling Material Design 2. Google calls it Material Themes, a powerful plugin to help designers implement Material Design in their apps. The new interface is designed to be "responsive and efficient" while feeling "cohesive" with the rest of the G Suite family of apps.

Adaptive Battery: Apart from refreshing the looks, Google has been busy improving performance, partnering with its AI subsidiary DeepMind on a smart battery management system for Android.
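As promised above, here is a small Kotlin sketch of the kind of boilerplate Android KTX removes, using a few extension functions from the core KTX artifact. The function and variable names are our own, and the exact artifact coordinates were still settling as the support library migrated to AndroidX, so treat this as illustrative.

```kotlin
import android.content.SharedPreferences
import android.view.View
import androidx.core.content.edit
import androidx.core.net.toUri
import androidx.core.os.bundleOf
import androidx.core.view.isVisible

fun ktxDemo(prefs: SharedPreferences, banner: View) {
    // SharedPreferences: edit { } creates, mutates, and applies in one call,
    // replacing the usual edit()/putX()/apply() chain.
    prefs.edit { putBoolean("onboarding_seen", true) }

    // String-to-Uri conversion reads left to right.
    val docsUri = "https://developer.android.com".toUri()

    // View visibility as a Boolean property instead of View.VISIBLE / View.GONE.
    banner.isVisible = false

    // Bundles built from vararg pairs instead of repeated put() calls.
    val args = bundleOf("user_id" to 42, "source" to docsUri.toString())
}
```

Each line replaces several lines of equivalent Java-flavored Kotlin, which is exactly the developer-experience polish KTX is aiming for.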
Scaling IoT with Android Things 1.0

After over 100,000 SDK downloads of the Android Things Developer Preview, Google announced the release of Android Things 1.0, with long-term support for production devices. Its highlights include:

App Library allows developers to manage APKs more easily, without the need to package them together in a separate zipped bundle.

Visual storage layout helps configure the device storage allocated to apps and data for each build, and gives an overview of how much storage your apps require.

Group sharing extends product sharing to include support for Google Groups.

Updated permissions give developers more control over the permissions used by apps on their devices.

Developers can manage their Android Things devices via a cloud-based Android Things Console. The devices themselves can manage OS and app updates, view analytics for device health and performance, and issue test builds of the software package.

Lighthouse 3.0 for better web optimization

A new update to Lighthouse, Google's web optimization tool, was also announced at Google I/O. Lighthouse 3.0 offers shorter waiting periods and more targeted feedback, so developers can optimize their websites and audit their performance more efficiently. It uses simulated throttling: a new internal auditing engine runs audits under normal network and CPU settings, then estimates how long the page would take to load under mobile conditions. Lighthouse 3.0 also features a new report UI, along with invocation, scoring, audit, and output changes.

Other highlights

Google announced the rebranding of its Google Research division to Google AI.

Google made a massive "continued conversation" update to Google Assistant and unveiled Google Duplex, a new technology that enables Google's machine-intelligence-powered virtual assistant to conduct a natural conversation with a human over the phone.

Google announced the third beta of Flutter, its mobile app SDK for creating high-quality, native user experiences on mobile.

Google Photos gets more AI-powered fixes such as black-and-white photo colorization, brightness correction, and suggested rotations.

Google's first Smart Displays, the screen-enriched smart speakers, will launch in July, powered by Google Assistant and YouTube.

Google Assistant is coming to Google Maps, available on iOS and Android.

There are still two more days of Google I/O left and, going by the day 1 announcements, I can't wait to see what's next. I am especially looking forward to learning more about Android Auto, Google's Tour Creator, and Google Lens. You can view the livestream and other sessions on the Google I/O conference page. Keep visiting Packt Hub for more updates on Google I/O, Microsoft Build, and other key tech conferences happening this month.

Google's Android Things, developer preview 8: First look

Google open sources Seurat to bring high precision graphics to Mobile VR

Microsoft Build 2018 Day 1: Azure meets Artificial Intelligence