
Tech News


Build AR experiences for iPhone and iPad from News - Apple Developer

Matthew Emerick
13 Oct 2020
3 min read
Discover how you can create unparalleled augmented reality experiences within your apps and games on iOS and iPadOS. We’ll show you how to work with powerful frameworks like ARKit and RealityKit, bring your AR scenes to life with creative tools like Reality Composer and Reality Converter, and take advantage of LiDAR Scanner depth data.

Explore the LiDAR Scanner for iPhone and iPad

Discover how you can take advantage of the LiDAR Scanner on iPhone and iPad to create AR experiences that interact with real-world objects. When you pair the LiDAR Scanner with the ARKit and RealityKit frameworks in your app, you can instantly place AR objects in the real world without scanning and take advantage of depth information to create experiences with real-world physics, object occlusion, and lighting effects.

Tech Talks: Advanced Scene Understanding in AR
ARKit 3.5 and RealityKit provide new capabilities that take full advantage of the LiDAR Scanner on the new iPad Pro. Check out ARKit 3.5 and learn about Scene Geometry, enhanced raycasting, instantaneous virtual object placement, and more. See how RealityKit takes advantage of these features to...

Sample code:
Visualizing and Interacting with a Reconstructed Scene
Creating a Fog Effect Using Scene Depth
Visualizing a Point Cloud Using Scene Depth
Creating a Game with SceneUnderstanding

Discover ARKit and RealityKit

ARKit 4 enables you to build the next generation of augmented reality apps to transform how people connect with the world around them, while RealityKit is Apple's rendering, animation, physics, and audio engine built from the ground up for augmented reality. Both frameworks help developers prototype and produce high-quality AR experiences. Explore an overview of each framework to learn more about building a great augmented reality experience for your app or game, including harnessing the LiDAR Scanner on iPhone and iPad, tracking faces for AR, and more.

WWDC20: Explore ARKit 4
ARKit 4 enables you to build the next generation of augmented reality apps to transform how people connect with the world around them. We’ll walk you through the latest improvements to Apple’s augmented reality platform, including how to use Location Anchors to connect virtual objects with a...

WWDC20: What's new in RealityKit
RealityKit is Apple’s rendering, animation, physics, and audio engine built from the ground up for augmented reality: It reimagines the traditional 3D renderer to make it easy for developers to prototype and produce high-quality AR experiences. Learn how to effectively implement each of the...

Learn more about ARKit and RealityKit

Explore the ARKit Developer Forums. Explore the RealityKit Developer Forums.

LiDAR is only one aspect of developing for augmented reality. Dive deeper into ARKit and RealityKit to discover how you can add new dimensions to retail experiences, or pair these frameworks with Machine Learning and Computer Vision to create even smarter apps or games. See also: Augment reality, and What's new in Machine Learning and Computer Vision.


MLOps: DevOps for Machine Learning from .NET Blog

Matthew Emerick
13 Oct 2020
1 min read
Machine Learning Operations (MLOps) is like DevOps for the machine learning lifecycle. This includes things like model deployment & management and data tracking, which help with productionizing machine learning models. Through the survey below, we’d love to get feedback on your current DevOps practices as well as your prospective usage of MLOps in .NET. We’ll use your feedback to drive the direction of MLOps support in .NET. Take the survey.
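The survey is about .NET, but the underlying ideas are stack-agnostic. As a loose illustration of one slice of MLOps (experiment and data tracking), here is a hypothetical sketch assuming Python's MLflow library rather than anything from the .NET Blog post; all names and values are illustrative:

# Hypothetical sketch of experiment and data tracking, one slice of MLOps.
# Assumes the mlflow package is installed; names and values are illustrative.
import mlflow

with mlflow.start_run(run_name="demo-model"):
    mlflow.log_param("learning_rate", 0.01)          # training configuration
    mlflow.log_metric("validation_accuracy", 0.93)   # model quality over time
    mlflow.set_tag("dataset_version", "2020-10-01")  # which data produced this model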


On-premises data gateway October 2020 update is now available from Microsoft Power BI Blog | Microsoft Power BI

Matthew Emerick
13 Oct 2020
1 min read
October release of gateway


Rob Sullivan: Using pg_repack in AWS RDS from Planet PostgreSQL

Matthew Emerick
13 Oct 2020
4 min read
As your database keeps growing, there is a good chance you're going to have to address database bloat. While Postgres 13 launched with some exciting features, including built-in methods to rebuild indexes concurrently, many people still end up having to use pg_repack to do an online rebuild of the tables to remove the bloat. Customers on AWS RDS struggle to figure out how to do this. Ready to learn how?

Since you have no server to access the local binaries, and because AWS RDS provides no binaries for the versions they are using, you're going to have to build your own. This isn't as hard as one might think, because the official pg repos have an installer (i.e., sudo apt install postgresql-10-pg_repack). If you don't use the repos, the project itself is open source with directions: http://reorg.github.io/pg_repack/

While you were getting up to speed above, I was spinning up a Postgres 10.9 db on RDS. I started it yesterday so that it would be ready by the time you got to this part of the post. Let's create some data:

-- let's create the table
CREATE TABLE burritos (
    id SERIAL UNIQUE NOT NULL PRIMARY KEY,
    title VARCHAR(10) NOT NULL,
    toppings TEXT NOT NULL,
    thoughts TEXT,
    code VARCHAR(4) NOT NULL,
    UNIQUE (title, toppings)
);

-- disable auto vacuum
ALTER TABLE burritos SET (autovacuum_enabled = false, toast.autovacuum_enabled = false);

-- orders up
INSERT INTO burritos (title, toppings, thoughts, code)
SELECT left(md5(i::text), 10), md5(random()::text), md5(random()::text), left(md5(random()::text), 4)
FROM GENERATE_SERIES(1, 1000000) s(i);

UPDATE burritos SET toppings = md5(random()::text) WHERE id < 250;
UPDATE burritos SET toppings = md5(random()::text) WHERE id BETWEEN 250 AND 500;
UPDATE burritos SET code = left(md5(random()::text), 4) WHERE id BETWEEN 2050 AND 5000;
UPDATE burritos SET thoughts = md5(random()::text) WHERE id BETWEEN 10000 AND 20000;
UPDATE burritos SET thoughts = md5(random()::text) WHERE id BETWEEN 800000 AND 900000;
UPDATE burritos SET toppings = md5(random()::text) WHERE id BETWEEN 600000 AND 700000;

(The original post includes a screenshot here of how Magistrate presents bloat.) Much like a human that has had that much interaction with burritos... our database has quite a bit of bloat.

Assuming we already have the pg_repack binaries in place, either through compilation or installing the package on the OS, we now need to enable the extension. We've put together a handy reference for installing extensions to get you going. pg_repack has a lot of options.
Feel free to check them out, but I'm going to start packing:

/usr/local/bin/pg_repack -U greataccounthere -h bloatsy.csbv99zxhbsh.us-east-2.rds.amazonaws.com -d important -t burritos -j 4
NOTICE: Setting up workers.conns
ERROR: pg_repack failed with error: You must be a superuser to use pg_repack

This might feel like game over because of the implementation of superuser on RDS, but the trick is to take a leap of faith and add another flag (-k) that skips the superuser check:

/usr/local/bin/pg_repack-1.4.3/pg_repack -U greataccounthere -h bloatsy.csbv99zxhbsh.us-east-2.rds.amazonaws.com -k -d important -t burritos -j 4
NOTICE: Setting up workers.conns
INFO: repacking table "public.burritos"
LOG: Initial worker 0 to build index: CREATE UNIQUE INDEX index_16449 ON repack.table_16442 USING btree (id) TABLESPACE pg_default
LOG: Initial worker 1 to build index: CREATE UNIQUE INDEX index_16451 ON repack.table_16442 USING btree (title, toppings) TABLESPACE pg_default
LOG: Command finished in worker 0: CREATE UNIQUE INDEX index_16449 ON repack.table_16442 USING btree (id) TABLESPACE pg_default
LOG: Command finished in worker 1: CREATE UNIQUE INDEX index_16451 ON repack.table_16442 USING btree (title, toppings) TABLESPACE pg_default

It works! The table is feeling fresh and tidy, and your application has a little more pep in its step. When using Magistrate, our platform matrix also knows when you have pg_repack installed and gives you the commands to run for tables it detects with a high bloat percentage.
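The post doesn't show the extension step itself. As a rough sketch of what enabling pg_repack and sizing up bloat might look like from Python, assuming psycopg2 is available, reusing the placeholder credentials from the example above, and treating pgstattuple as one available way to measure dead tuples:

# Hypothetical sketch: enable pg_repack and estimate bloat on the example table.
# Assumes psycopg2 is installed; host, db, user, and password are placeholders
# carried over from the post. Not part of the original article.
import psycopg2

conn = psycopg2.connect(
    host="bloatsy.csbv99zxhbsh.us-east-2.rds.amazonaws.com",
    dbname="important", user="greataccounthere", password="...",
)
conn.autocommit = True
with conn.cursor() as cur:
    # Both extensions ship with RDS PostgreSQL; available versions vary by engine release.
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_repack;")
    cur.execute("CREATE EXTENSION IF NOT EXISTS pgstattuple;")
    # dead_tuple_percent is a rough proxy for table bloat.
    cur.execute("SELECT dead_tuple_percent FROM pgstattuple('burritos');")
    print("dead tuple %:", cur.fetchone()[0])
conn.close()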


Open Source Processes Driving Software-Defined Everything (LinuxInsider) from Linux.com

Matthew Emerick
12 Oct 2020
1 min read
Jack Germain writes at LinuxInsider: The Linux Foundation (LF) has been quietly nudging an industrial revolution. It is instigating a unique change towards software-defined everything that represents a fundamental shift for vertical industries. LF on Sept. 24 published an extensive report on how software-defined everything and open-source software are digitally transforming essential vertical industries worldwide. "Software-defined vertical industries: transformation through open source" delves into the major vertical industry initiatives served by the Linux Foundation. It highlights the most notable open-source projects and explains why the foundation believes these key industry verticals, some over 100 years old, have transformed themselves using open source software. Digital transformation refers to a process that turns all businesses into tech businesses driven by software. This change towards software-defined everything is a fundamental shift for vertical industry organizations, many of which typically have small software development teams relative to most software vendors. Read more at LinuxInsider.


Jonathan Katz: PostgreSQL Monitoring for App Developers: Alerts & Troubleshooting from Planet PostgreSQL

Matthew Emerick
12 Oct 2020
3 min read
We've seen an example of how to set up PostgreSQL monitoring in Kubernetes. We've looked at two sets of statistics to keep track of in your PostgreSQL cluster: your vitals (CPU/memory/disk/network) and your DBA fundamentals.

While staring at these charts should help you to anticipate, diagnose, and respond to issues with your Postgres cluster, the odds are that you are not staring at your monitor 24 hours a day. This is where alerts come in: a properly set up alerting system will let you know if you are on the verge of a major issue so you can head it off at the pass (and alerts should also let you know that there is a major issue).

Dealing with operational production issues was a departure from my application developer roots, but I looked at it as an opportunity to learn a new set of troubleshooting skills. It also offered an opportunity to improve communication skills: I would often convey to the team and customers what transpired during a downtime or performance degradation situation (VSSE: be transparent!). Some of what I observed I used to help us improve the application, while other parts helped me to better understand how PostgreSQL works. But I digress: let's drill into alerts on your Postgres database.

Note that just because an alert or alarm is going off, it does not mean you need to immediately react: for example, a transient network degradation issue may cause a replica to lag further behind a primary for a bit too long but will clear up when the degradation passes. That said, you typically want to investigate the alert to understand what is causing it. Additionally, it's important to understand what actions you want to take to solve the problem. For example, a common mistake during an "out-of-disk" error is to delete the PostgreSQL WAL logs with an rm command; doing so can lead to a very bad day (and is also an advertisement for ensuring you have backups).

As mentioned in the post on setting up PostgreSQL monitoring in Kubernetes, the Postgres Operator uses pgMonitor for metric collection and visualization via open source projects like Prometheus and Grafana. pgMonitor uses the open source Alertmanager, which is what the PostgreSQL Operator uses, for configuring and sending alerts.

Using the above, let's dive into some of the items that you should be alerting on, and I will describe how my experience as an app developer translated into troubleshooting strategies.
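As a loose, hypothetical illustration of the replica-lag alert described above (this is not pgMonitor's actual implementation), a check might poll pg_stat_replication on the primary and hand anything over a threshold to an alerting system:

# Hypothetical replica-lag check, in the spirit of the alerts described above.
# Assumes psycopg2 and a primary reachable with these placeholder credentials.
import psycopg2

LAG_THRESHOLD_BYTES = 64 * 1024 * 1024  # alert when a replica is >64 MiB behind

conn = psycopg2.connect(host="primary.example.com", dbname="postgres", user="monitor")
with conn.cursor() as cur:
    # replay_lsn is how far the replica has applied WAL; compare to the current WAL position.
    cur.execute("""
        SELECT application_name,
               pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
        FROM pg_stat_replication;
    """)
    for name, lag in cur.fetchall():
        if lag is not None and lag > LAG_THRESHOLD_BYTES:
            # In practice this would go to Alertmanager, email, or a pager.
            print(f"ALERT: replica {name} is {lag} bytes behind")
conn.close()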

R 4.0.3 now available from Revolutions

Matthew Emerick
12 Oct 2020
1 min read
The R Core Team has released R 4.0.3 (codename: "Bunny-Wunnies Freak Out"), the latest update to the R statistical computing system. This is a minor update to the R 4.0.x series, and so should not require any changes to existing R 4.0 scripts or packages. It is available for download for Windows, Mac and Linux systems from your local CRAN mirror. This release includes minor improvements and bug fixes, including improved timezone support on macOS, improved labels on dot charts, better handling of large tables for Fisher's Exact Test and Chi-Square tests, and control over timeouts for internet connections. For complete details on R 4.0.3, check out the announcement linked below. R-announce mailing list: R 4.0.3 is released


Interactive, notebook-style analysis in Tableau for data science extensibility from What's New

Matthew Emerick
12 Oct 2020
4 min read
Tamas Foldi, CTO of Starschema Inc., Tableau Zen Master, and White Hat Hacker | Tanna Solberg | October 12, 2020

Tableau's intuitive drag and drop interface is one of the key reasons it has become the de facto standard for data visualization. With its easy-to-use interface, not only analysts, but everyone can see and understand their data. But let's look at who we mean when we say "everyone." Does this include sophisticated users like data scientists or statisticians? In short, yes, but their workflow is slightly different; they rely heavily on statistical and machine learning algorithms, usually only accessible from R, Python, or Matlab. To interact with these libraries, statisticians and data scientists have to write code, experiment with their model parameters, and visualize the results. The usual tool of choice for data scientists is some notebook environment—such as RStudio or Jupyter—where they can mix their code and the visualizations.

Figure 1: The traditional Jupyter notebook environment

In the past, the number one reason for the lower adoption of Tableau among data scientists was the lack of support for this code-driven, iterative development methodology. However, with the Dashboard Extensions API and the Analytics Extensions API, things have changed. The platform for everyone offers the best of both the code-driven data science and the easy-to-use, drag-and-drop visualization worlds.

Tableau Python and R Integration

The Analytics Extension presents the standard way to use Python, R, Matlab and other platforms' libraries and functions in Tableau workbooks. With standard SCRIPT Tableau functions, users can add their Python or R code as Tableau calculated fields, opening up a whole new world in data enrichment and analysis.

Figure 2: A simple example of a Python calculated field in Tableau Desktop

While it's a convenient way to use existing calculations, this is not the same iterative experience as a notebook. Here comes the Dashboard Extensions API to the rescue, providing the user experience to work with the code in a code editor—while seeing the results immediately as Tableau charts.

CodePad editor for Tableau

The Tableau Extension Gallery was recently updated with a new extension that allows interaction with your code—just like you would have in a notebook. As you change the code in the code editor, Tableau executes it, recalculates the marks, and updates the visualization before your very eyes.

Figure 3: Updating your viz with code editor extension CodePad

To use the extension, you need to create a string parameter and create a SCRIPT-based calculated field with the relevant fields mapped as script parameters.

Figure 4: Create a parameter to store the program code
Figure 5: Use parameter in SCRIPT function

Then add the extension to your dashboard, select the previously created parameter, and choose the same programming language you have configured in the Analytics Extension API:

Figure 6: Add and configure the CodePad extension to your dashboard

Now you can start building your views, adding to your machine learning models and using external APIs to enrich the data—all from the same platform. The best part is that you can reuse the same workbook to share the analysis with end users, which could potentially be placed on a different worksheet.
Sample workbook analysis in Tableau

To show some real-life use cases, we put together an example workbook with three Python-based algorithms:

Clustering – The clustering dashboard uses scikit-learn's DBSCAN algorithm to apply clustering to a set of points.

Figure 7: Clustering using DBSCAN algorithm

Seasonality Analysis – Use statsmodels' seasonal_decompose to remove seasonality from time series data and show the pure trends.

Sentiment Analysis – Compare the titles and ratings of product reviews with their sentiment scores.

Figure 8: Sentiment analysis using nltk or textblob

Excited to try out this interactive, notebook-style analysis in Tableau? Download the demo workbook and add the extension to the dashboard. If you want to learn more about the Dashboard Extensions and Analytics Extensions API, you can join the Tableau Developer program for additional resources and community interaction.
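For a rough idea of what the clustering dashboard's Python side might look like, here is a minimal sketch assuming scikit-learn; it is not the demo workbook's actual code, and the point data is a synthetic stand-in:

# Minimal DBSCAN sketch, assuming scikit-learn; not the demo workbook's actual code.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.random.rand(200, 2)  # stand-in for the workbook's point data
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(points)
print(set(labels))               # cluster ids; -1 marks noise points

Inside Tableau, logic like this would typically be wrapped in a SCRIPT calculated field and executed through the Analytics Extension, as the article describes.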


Data Source management on Power platform admin center from Microsoft Power BI Blog | Microsoft Power BI

Matthew Emerick
12 Oct 2020
1 min read
Data source management in admin center


Cloudera acquires Eventador to accelerate Stream Processing in Public & Hybrid Clouds from Cloudera Blog

Matthew Emerick
12 Oct 2020
3 min read
We are thrilled to announce that Cloudera has acquired Eventador, a provider of cloud-native services for enterprise-grade stream processing. Eventador, based in Austin, TX, was founded by Erik Beebe and Kenny Gorman in 2016 to address a fundamental business problem: make it simpler to build streaming applications built on real-time data. This typically involved a lot of coding with Java, Scala or similar technologies. Eventador simplifies the process by allowing users to use SQL to query streams of real-time data without implementing complex code.

We believe Eventador will accelerate innovation in our Cloudera DataFlow streaming platform and deliver more business value to our customers in their real-time analytics applications. The DataFlow platform has established a leading position in the data streaming market by unlocking the combined value and synergies of Apache NiFi, Apache Kafka and Apache Flink. We recently delivered all three of these streaming capabilities as cloud services through Cloudera Data Platform (CDP) Data Hub on AWS and Azure. We are especially proud to help grow Flink, the software, as well as the Flink community.

The next evolution of our data streaming platform is to deliver a seamless cloud-native DataFlow experience where users can focus on creating simple data pipelines that help ingest data from any streaming source, scale the data management with topics, and generate real-time insights by processing the data on the pipeline with an easy-to-use interface. Our primary design principles are self-service, simplicity and hybrid. And, like all CDP data management and analytic cloud services, DataFlow will offer a consistent user experience on public and private clouds – for real hybrid cloud data streaming.

The Eventador technology's ability to simplify access to real-time data with SQL, and the team's expertise in managed service offerings, will accelerate our DataFlow experience timelines and make DataFlow a richer streaming data platform that can address a broader range of business use cases.

With the addition of Eventador we can deliver more customer value for real-time analytics use cases including:

  • Inventory optimization, predictive maintenance and a wide variety of IoT use cases for operations teams.
  • Personalized promotions and customer 360 use cases for sales and marketing teams.
  • Risk management and real-time fraud analysis for IT and finance teams.

To summarize, the addition of the Eventador technology and team to Cloudera will enable our customers to democratize cross-organizational access to real-time data. We encourage you to come with us on this journey as we continue to innovate the data streaming capabilities within the Cloudera Data Platform as part of the DataFlow experience. We are excited about what the future holds and we warmly welcome the Eventador team into Cloudera. Stay tuned for more product updates coming soon!

Tools, projects, and examples for FeathersJS developers in 2020 from DailyJS - Medium

Matthew Emerick
12 Oct 2020
4 min read
As any JavaScript framework community grows, it becomes difficult to navigate which avenues developers have to look for solutions to problems they have encountered. FeathersJS has continually been at the forefront of JavaScript discussions since its inception, as illustrated in the annual State of JS survey. We created FeathersJS Resources as a hub, or rather a starting point, to assist people in the Feathers community in finding what they may be searching for.

There are many resource lists available; however, we noticed a lack of curated examples. Our goal with this list is to provide an up-to-date account of which libraries are maintained, which projects are active, and examples of FeathersJS in the wild. Our general rules for curation are as follows: projects on npm must have been published in the past two years; projects on GitHub must have been updated in the past two years; projects should be well documented; articles and tutorials should be topical to the FeathersJS community; examples should use FeathersJS as a part of their stack; support channels should be qualified by the FeathersJS core team.

Not to overestimate our abilities to keep track of all projects in the ecosystem, we have a channel for people to submit projects for review, which can be added to the Feathers Resources site. With the above criteria, we have broken the list into several categories which will certainly be expanded as time goes on. A few notable examples from each category are:

Articles and Tutorials

How we debug Feathers APIs using Postman – Author Juan Orozco details a general process, conventions, and steps on how Aquil.io uses Postman to develop and debug APIs with Feathers. A few gotchas are elaborated and patterns on how Aquil.io rapidly develops APIs are explained. Read article

Build a CRUD App Using React, Redux and FeathersJS – Author Michael Wanyoike takes readers through getting a contact manager application set up with Feathers, Create React App, and MongoDB. The benefits of following conventions in REST and a bit of Feathers background are explained with a concrete example. Read article

Tools

feathers-hooks-common – A suite of hooks allowing developers to architect common processes in a composable fashion. Readable hook flows are created such as: iff(isProvider('external'), disallow()). Learn more

feathers-sync – Allows for scaling Feathers APIs and keeping socket events in sync through an intermediary data source, such as Redis. With this project, a modifying request, such as a POST, may ping a single server while the {service}::created event will be relayed to all other servers in the cluster and their corresponding subscribers. Learn more

Support

Feathers Slack channel – The official channel for FeathersJS discussions. Open to the public and staffed by the core team. The core maintainer and author, David Luecke, is typically available to answer in-depth questions regarding the usage of Feathers. Slack channel

Office hours – Members of the FeathersJS core team and experts at Aquil.io are available for questions that may require a call or a screen share to debug or discuss issues the community is facing. Developers at Aquil.io have been power users of Feathers since 2014 and have experience in many of the nuances in real-world settings. Visit aquil.io

Our hope is this list provides a bit of direction, if you are new to the community, and a place to quickly find support if you need it. The above is a sample, but be sure to read the full list at feathersresources.dev.
If you want to check out what I’m working on or have web development needs, visit Aquil.io. Originally published at https://aquil.io on September 28, 2020.


Updated APNs provider API deadline from News - Apple Developer

Matthew Emerick
09 Oct 2020
1 min read
The HTTP/2-based Apple Push Notification service (APNs) provider API lets you take advantage of great features, such as authentication with a JSON Web Token, improved error messaging, and per-notification feedback. If you send push notifications with the legacy binary protocol, we strongly recommend upgrading to the APNs provider API. To give you additional time to prepare, the deadline to upgrade to the APNs provider API has been extended to March 31, 2021. APNs will no longer support the legacy binary protocol after this date. Learn about the APNs provider API
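To make the upgrade concrete, here is a rough sketch of a token-based request to the APNs provider API over HTTP/2. It is an illustration, not Apple's sample code; it assumes the Python httpx and PyJWT libraries, and the key path, key ID, team ID, topic, and device token are all placeholders:

# Hypothetical sketch of an HTTP/2 APNs request with JWT auth.
# Assumes httpx (with its http2 extra) and PyJWT + cryptography are installed;
# the .p8 key path, key ID, team ID, bundle ID, and device token are placeholders.
import time
import jwt
import httpx

with open("AuthKey_ABC123DEFG.p8") as f:
    signing_key = f.read()

token = jwt.encode(
    {"iss": "TEAM_ID", "iat": int(time.time())},
    signing_key,
    algorithm="ES256",
    headers={"kid": "ABC123DEFG"},
)

client = httpx.Client(http2=True)
resp = client.post(
    "https://api.push.apple.com/3/device/DEVICE_TOKEN_HEX",
    headers={
        "authorization": f"bearer {token}",
        "apns-topic": "com.example.app",  # your app's bundle ID
        "apns-push-type": "alert",
    },
    json={"aps": {"alert": "Hello from the provider API"}},
)
# Per-notification feedback (one improvement over the binary protocol)
# comes back in the HTTP response itself.
print(resp.status_code, resp.text)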


India Engages in a National Initiative to Support Its AI Industry  from AI Trends

Matthew Emerick
08 Oct 2020
5 min read
By AI Trends Staff

The government of India is engaged in an initiative on AI that aims to promote the industry, which a recent IDC report maintains is growing at over a 30% annual clip. India's Artificial Intelligence spending will grow from $300.7 million in 2019 to $880.5 million in 2023 at a compound annual growth rate (CAGR) of 30.8 per cent, states IDC's Worldwide Artificial Intelligence Spending Guide.

Enterprises are relying on AI to maintain business continuity, transform how businesses operate, and gain competitive advantage. "COVID-19 is pushing the boundaries of organizations' AI lens. Businesses are considering investments in intelligent solutions to tackle issues associated with business continuity, labor shortages, and workspace monitoring. Organizations are now realizing that their business plans must be closely aligned with their AI strategies," stated Rishu Sharma, Principal Analyst, Cloud and AI at IDC in India, in an IDC press release.

Other report highlights:

  • Enterprises rely on AI to maintain business continuity, transform how businesses operate and gain competitive advantage.
  • Almost 20% of enterprises are still devising AI strategies to explore new businesses and ventures.
  • Half of India enterprises plan to increase their AI spending in 2020.
  • Data trustworthiness and difficulty in selecting the right algorithm are among the top challenges that hold organizations back from implementing AI technology.

"The variety of industry-specific tech solutions supported by emerging technologies like IoT and Robotics are getting powered by complex AI algorithms," stated Ashutosh Bisht, Senior Research Manager for IDC's Customer Insights and Analysis group. "With the fast adoption of cloud technologies in India, more than 60% of AI Applications will be migrated to the cloud by 2024."

As per IDC's 2020 COVID-19 Impact Survey, half of Indian enterprises plan to increase their AI spending this year. However, data trustworthiness and difficulty in selecting the right algorithm are among the top challenges that hold organizations back from implementing AI technology, according to IDC.

Prime Minister Speaking at RAISE 2020 Global Summit

Indian Prime Minister Narendra Modi was to address a virtual summit on AI this week (October 5) in India. Called RAISE 2020, for Responsible AI for Social Empowerment, the summit is planned as a global meeting to exchange ideas and chart a course for using AI for social transformation, inclusion and empowerment in areas like healthcare, agriculture, education and smart mobility, according to an account from the South Asian news agency ANI.

Indian AI startups will be showcasing their offerings as part of the AI Solution Challenge, a government effort to support tech entrepreneurs and startups by providing exposure, recognition and guidance. India's strengths that position it well to become an AI leader include its healthy startup ecosystem, its elite science and technology institutions, a robust digital infrastructure and millions of STEM graduates each year, the release indicated.

Prime Minister Modi was to articulate an "AI for All" strategy, intent on building a model for the world on how to responsibly direct AI for social empowerment, the release stated.
Government Has Launched AI Portal

The Indian government earlier this year launched the National AI Portal, a collaboration of the National Association of Software and Service Companies (Nasscom) and the National e-Governance Division of the Ministry of Electronics and Information Technology (MeitY). The portal's objective is to function as a platform for AI-related advancements in India, with sharing of resources such as articles, investment funding news for AI startups, and AI education resources in India. The portal will also distribute documents, case studies and research reports, and describe new job roles related to AI.

Named IndiaAI, the site's education focus aims to help professionals and students learn about and find work in the field of AI. Free and paid AI courses are available on subjects of Machine Learning, Data Visualization, and Cybersecurity, provided by educational institutions including IIT Bombay, third party content providers including SkillUp and edX, or private companies like IBM. The AI education program is open to students in classes 8-12 across thousands of schools in India.

Some Skeptical of India's Ability to Unlock AI's Potential

Skepticism about India's ability to capitalize on its opportunities in AI is being voiced in some quarters. "The country is still miles away from unlocking the true value of AI in both the government and the private sector," stated an account from CXOToday.com. India lags behind the top five geographies for private sector investment in AI, the account stated. The US is far ahead, with investments worth $18 billion, followed by Europe ($2.6 billion) and Israel ($1.8 billion).

Only a few large companies are investing in AI R&D, being "risk averse." Startups are having difficulty finding capital. Most vital is the need for the government and the private sector to work hand-in-hand, particularly on investment in AI R&D. Sanjay Gupta, Country Head & VP, Google India, has stated that close collaboration between the private and public sector, and a focus of collective expertise and energies on the most pressing problems of today, will go a long way towards achieving the vision of a socially empowered, inclusive, and digitally transformed India, where AI has a big role to play.

Read the source articles in an IDC press release, from the South Asian news agency ANI and CXOToday.com.

Update: Pandemic Driving More AI Business; Researchers Fighting Fraud ‘Cure’ Posts  from AI Trends

Matthew Emerick
08 Oct 2020
6 min read
By AI Trends Staff

The impact of the coronavirus pandemic on AI has many shades, from driving higher rates of IT spending on AI, to spurring researchers to fight fraudulent "cure" claims on social media, to hackers seeking to tap the medical data stream.

IT leaders are planning to spend more on AI/ML, and the pandemic is increasing demand for people with related job skills, according to a survey of over 100 IT executives with AI initiatives going on at companies spending at least $1 million annually on AI/ML before the pandemic. The survey was conducted in August by Algorithmia, a provider of ML operations and management platforms. Some 50% of respondents reported they are planning to spend more on AI/ML in the coming year, according to an account based on the survey from TechRepublic.

A lack of in-house staff with AI/ML skills was the primary challenge for IT leaders before the pandemic, according to 59% of respondents. The most important job skills coming out of the pandemic are going to be security (69%), data management (64%), and systems integration (62%).

"When we come through the pandemic, the companies that will emerge the strongest will be those that invested in tools, people, and processes that enable them to scale delivery of AI and ML-based applications to production," stated Diego Oppenheimer, CEO of Algorithmia, in a press release. "We believe investments in AI/ML operations now will pay off for companies sooner than later. Despite the fact that we're still dealing with the pandemic, CIOs should be encouraged by the results of our survey."

Researchers Tracking Increase in Fraudulent COVID-19 'Cure' Posts

Legitimate businesses are finding opportunities from COVID-19, and so are the scammers. Researchers at UC San Diego are studying the increase of fraudulent posts around COVID-19 "cures" being posted on social media. In a new study published in the Journal of Medical Internet Research Public Health and Surveillance on August 25, 2020, researchers at University of California San Diego School of Medicine found thousands of social media posts on two popular platforms — Twitter and Instagram — tied to financial scams and possible counterfeit goods specific to COVID-19 products and unapproved treatments, according to a release from UC San Diego via EurekAlert.

"We started this work with the opioid crisis and have been performing research like this for many years in order to detect illicit drug dealers," stated Timothy Mackey, PhD, associate adjunct professor at UC San Diego School of Medicine and lead author of the study. "We are now using some of those same techniques in this study to identify fake COVID-19 products for sale. From March to May 2020, we have identified nearly 2,000 fraudulent postings likely tied to fake COVID-19 health products, financial scams, and other consumer risk."

The first two waves of fraudulent posts focused on unproven marketing claims for prevention or cures and fake testing kits. The third wave of fake pharmaceutical treatments is now materializing. Prof. Mackey expects it to get worse when public health officials announce development of an effective vaccine or other therapeutic treatments. The research team identified suspect posts through a combination of Natural Language Processing and machine learning. Topic model clusters were transferred into a deep learning algorithm to detect fraudulent posts.
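The study pairs topic modeling with a deep learning classifier. As a loose, hypothetical illustration of that shape of pipeline, assuming scikit-learn rather than the researchers' actual tooling, and with a classical classifier standing in for the deep model:

# Loose illustration of topic-model features feeding a classifier;
# not the UC San Diego team's code. Assumes scikit-learn; data is toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

posts = [
    "miracle covid cure, DM to buy now",
    "new covid test kit, limited stock",
    "stay safe and wash your hands",
    "local clinic updates its visiting hours",
]
labels = [1, 1, 0, 0]  # 1 = suspect listing, 0 = benign

counts = CountVectorizer().fit_transform(posts)
# Each post becomes a distribution over topics; those are the features.
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)
clf = LogisticRegression().fit(topics, labels)  # stand-in for the deep model
print(clf.predict(topics))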
The findings were customized to a data dashboard in order to enable public health intelligence and provide reports to authorities, including the World Health Organization and the U.S. Food & Drug Administration (FDA). "Criminals seek to take advantage of those in need during times of a crisis," Mackey stated.

Sandia Labs, BioBright Working on a Better Way to Secure Critical Health Data

Complementing the scammers, hackers are also seeing opportunity in these pandemic times. Hackers that threaten medical data are of particular concern. One effort to address this is a partnership between Sandia National Laboratories and the Boston firm BioBright to improve the security of synthetic biology data, a new commercial field.

"In the past decade, genomics and synthetic biology have grown from principally academic pursuits to a major industry," stated computational biology manager Corey Hudson, senior member of the technical staff at Sandia Labs, in a press release. "This shift paves the way toward rapid production of small molecules on demand, precision healthcare, and advanced materials."

BioBright is a scientific lab data automation company, recently acquired by Dotmatics, a UK company working on the Lab of the Future. The two companies are working to develop a better security model since currently, large volumes of data about the health and pharmaceutical information of patients are being handled with security models developed two decades ago, Hudson suggested. The situation potentially leaves open the risk of data theft or targeted attack by hackers to interrupt production of vaccines and therapeutics or the manufacture of controlled, pathogenic, or toxic materials, he suggested.

"Modern synthetic biology and pharmaceutical workflows rely on digital tools, instruments, and software that were designed before security was such an important consideration," stated Charles Fracchia, CEO of BioBright. The new effort seeks to better secure synthetic biology operations and genomic data across industry, government, and academia. The team is using Emulytics, a research initiative developed at Sandia for evaluating realistic threats against critical systems, to help develop countermeasures to the risks.

C3.ai Sponsors COVID-19 Grand Challenge Competition with $200,000 in Awards

If all else fails, participate in a programming challenge and try to win some money. Enterprise AI software provider C3.ai is inviting data scientists, developers, researchers and creative thinkers to participate in the C3.ai COVID-19 Grand Challenge and win prizes totaling $200,000. The judging panel will prioritize data science projects that help to understand and mitigate the spread of the virus, improve the response capabilities of the medical community, minimize the impact of this disease on society, and help policymakers navigate responses to COVID-19. C3.ai will award one Grand Prize of $100,000, two second-place awards of $25,000 each, and four third-place awards of $12,500 each.

"The C3.ai COVID-19 Grand Challenge represents an opportunity to inform decision makers at the local, state, and federal levels and transform the way the world confronts this pandemic," stated Thomas M. Siebel, CEO of C3.ai, in a press release. "As with the C3.ai COVID-19 Data Lake and the C3.ai Digital Transformation Institute, this initiative will tap our community's collective IQ to make important strides toward necessary, innovative solutions that will help solve a global crisis."

The competition is now open. Registration ends Oct. 25 and final submissions are due Nov. 18, 2020. By Dec. 9, C3.ai will announce seven competition winners and award $200,000 in cash prizes to honorees. Judges include Michael Callagy, County Manager, County of San Mateo; S. Shankar Sastry, Professor of Electrical Engineering & Computer Science, UC Berkeley; and Zico Kolter, Associate Professor of Computer Science, Carnegie Mellon University. Launched in April 2020, the C3.ai COVID-19 Data Lake now consists of 40 unique datasets, said to be among the largest unified, federated image of COVID-19 data in the world.

Read the source articles and information at TechRepublic, from UC San Diego via EurekAlert, a press release from Sandia Labs, and a press release from C3.ai about the COVID-19 Grand Challenge.


Breaking AI Workflow Into Stages Reveals Investment Opportunities  from AI Trends

Matthew Emerick
08 Oct 2020
6 min read
By John P. Desmond, AI Trends Editor

An infrastructure-first approach to AI investing has the potential to yield greater returns with a lower risk profile, suggests a recent account in Forbes. To identify the technologies supporting the AI system, deconstruct the workflow into two steps as a starting point: training and inference.

"Training is the process by which a framework for deep learning is applied to a dataset," states Basil Alomary, author of the Forbes account. An MBA candidate at Columbia Business School and MBA Associate at Primary Venture Partners, his background and experience are in early-stage SaaS ventures, as an operator and an investor. "That data needs to be relevant, large enough, and well-labeled to ensure that the system is being trained appropriately. Also, the machine learning models being created need to be validated, to avoid overfitting to the training data and to maintain a level of generalizability. The inference portion is the application of this model and the ongoing monitoring to identify its efficacy."

He identifies these stages in the AI/ML development lifecycle: data acquisition, data preparation, training, inference, and implementation. The stages of acquisition, preparation, and implementation have arguably attracted the least amount of attention from investors.

Where to get the data for training the models is a chief concern. If a company is old enough to have historical customer data, it can be helpful. That approach should be inexpensive, but the data needs to be clean and complete enough to help in whatever decisions it works on. Companies without the option of historical data can try publicly-available datasets, or they can buy the data directly. A new class of suppliers is emerging that primarily focus on selling clean, well-labeled datasets specifically for machine learning applications.

One such startup is Narrative, based in New York City. The company sells data tailored to the client's use case. The OpenML and Amazon Datasets have marketplace characteristics but are entirely open source, which is limiting for those who seek to monetize their own assets.

"Essentially, the idea was to take the best parts of the e-commerce and search models and apply that to a non-consumer offering to find, discover and ultimately buy data," stated Narrative founder and CEO Nick Jordan in an account in TechCrunch. "The premise is to make it as easy to buy data as it is to buy stuff online."

In a demonstration, Jordan showed how a marketer could browse and search for data using the Narrative tools. The marketer could select the mobile IDs of people who have the Uber Driver app installed on their phone, or the Zoom app, at a price that is often subscription-based. The data selection is added to the shopping cart and checked out, like any online transaction.

Founded in 2016, Narrative collects data sellers into its market, vetting each one, working to understand how the data is collected, its quality, and whether it could be useful in a regulated environment. Narrative does not attempt to grade the quality of the data. "Data quality is in the eye of the beholder," Jordan stated. Buyers are able to conduct their own research into the data quality if so desired. Narrative is working on building a marketplace of third-party applications, which could include scoring of data sets.
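The validation concern Alomary raises above, overfitting to the training data, is easy to picture with a held-out split. A minimal sketch, assuming scikit-learn and synthetic stand-in data:

# Tiny illustration of train/validation separation to catch overfitting;
# assumes scikit-learn, with synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))  # near-perfect is expected
print("val accuracy:  ", model.score(X_val, y_val))      # the honest generalization number

A large gap between the two numbers is the overfitting signal; the held-out score is the one that speaks to the "generalizability" in Alomary's framing.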
Data preparation is critical to making the machine learning model effective. Raw data needs to be preprocessed so that machine learning algorithms can produce a model, a structural description of the data. In an image database, for example, the images may have to be labeled, which can be labor-intensive.

Automating Data Preparation is an Opportunity Area

Platforms are emerging to support the process of data preparation with a layer of automation that seeks to accelerate the process. Startup Labelbox recently raised a $25 million Series B financing round to help grow its data labeling platform for AI model training, according to a recent account in VentureBeat. Founded in 2018 in San Francisco, Labelbox aims to be the data platform that acts as a central hub for data science teams to coordinate with dispersed labeling teams. In April, the company won a contract with the Department of Defense for the US Air Force AFWERX program, which is building out technology partnerships.

A press release issued by Labelbox on the contract award contained some history of the company. "I grew up in a poor family, with limited opportunities and little infrastructure," stated Manu Sharma, CEO and one of Labelbox's co-founders, who was raised in a village in India near the Himalayas. He said that opportunities afforded by the U.S. have helped him achieve more success in ten years than multiple generations of his family back home. "We've made a principled decision to work with the government and support the American system," he stated.

The Labelbox platform supports supervised learning, a branch of AI that uses labeled data to train algorithms to recognize patterns in images, audio, video or text. The platform enables collaboration among team members as well as these functions: rework, quality assurance, model evaluation, audit trails, and model-assisted labeling. "Labelbox is an integrated solution for data science teams to not only create the training data but also to manage it in one place," stated Sharma. "It's the foundational infrastructure for customers to build their machine learning pipeline."

Deploying the AI model into the real world requires an ongoing evaluation, a data pipeline that can handle continued training, scaling and managing computing resources, suggests Alomary in Forbes. An example product is Amazon's SageMaker, supporting deployment. Amazon offers a managed service that includes human interventions to monitor deployed models.

DataRobot of Boston in 2012 saw the opportunity to develop a platform for building, deploying, and managing machine learning models. The company raised a Series E round of $206 million in September and now has $431 million in venture-backed funding to date, according to Crunchbase. Unfortunately, DataRobot in March had to shrink its workforce by an undisclosed number of people, according to an account in BOSTINNO. The company employed 250 full-time employees as of October 2019. DataRobot announced recently that it was partnering with Amazon Web Services to provide its enterprise AI platform free of charge to anyone using it to help with the coronavirus response effort.

Read the source articles and releases in Forbes, TechCrunch, VentureBeat and BOSTINNO.