
Author Posts - Data


Understanding the Fundamentals of Analytics Teams with John K. Thompson

Expert Network
06 Apr 2021
6 min read
Key takeaways:

- Data scientists need a tailored portfolio of projects that they own and manage in order to have a sense of autonomy.
- The top skill or personality trait a successful data scientist can possess (and should possess) is curiosity.
- Managing a successful analytics team and individual analytics professionals is different from managing any other type of team.
- Data and analytics will be ubiquitous in the very near future.

Analytics teams are different from any other team in the organization, and analytics professionals are a unique variant of creative professionals. Providing challenging, interesting, and valuable work in the form of a personal project portfolio for a data scientist can be done and needs to be done to ensure productivity, job satisfaction, value delivery, and retention. We interviewed analytics leader and bestselling author John K. Thompson on data analytics, the future of analytics, and his recent book, Building Analytics Teams.

The interview in detail:

1. What are the fundamental concepts of building and managing a high-performing analytics team?

It is critically important to remember that data scientists are creative and intelligent people. They cannot be managed well in a command-and-control environment.

Data scientists need a tailored portfolio of projects that they own and manage in order to have a sense of autonomy. If they have a portfolio of projects and can manage their time and effort, the productivity of the team will be much higher than what is typically seen in teams managed in a traditional manner.

The relationship of the analytics leader with their peers and the executives of the company is critically important to the success of the analytics team. It is also very important to realize that most analytics projects fail at the point where analytical models are to be implemented in production systems.

2. Tell us about your book, Building Analytics Teams. How is your book new and/or different from other books on data analytics?

Building Analytics Teams is focused on the practical challenges faced by people who are building and managing high-performance analytics teams, and by the staff members who make up those teams.

The book is different from other books in that it examines the process of building and managing a team from a holistic view. It considers the organizational framework, the required processes, the people, the projects, the problems, and the pitfalls.

The content of the book guides the reader through how to navigate these challenges and provides illustrations and examples of how to be successful. The book is a "how to" guide on successfully managing the analytics process in a large corporate environment.

3. What was the motivation behind writing this book?

I have not seen a book like this, and I wish I had had a book like this earlier in my career. I have built a number of analytics teams. While building and growing those teams, I noticed certain recurring patterns. I wanted to address the misconceptions and misperceptions people hold about analytics teams.

Analytics teams are unique. The team members who are successful have a different mindset and attitude toward project work and teamwork. I wanted to communicate the differences inherent in a high-performance analytics team when compared to other teams. Also, I wanted to communicate that managing a successful analytics team and individual analytics professionals is different from managing any other type of team.

I wanted to write a guide for managers and analytics professionals to help them understand how the broader organization views them, and how they can interface and interact with their peers in related organizational functions to increase the probability of joint success.

4. What should be the starting point for data analytics enthusiasts aiming to begin their journey in data analytics? How do you think your book will help them in their journey?

It depends on where they are starting their journey.

If they are in the process of completing their undergraduate or graduate studies, I would suggest that they take classes in programming, data science, or analytics.

If they are professionals, I would suggest that they take classes on Coursera, Udemy, or any other online educational platform to see if they have a real interest in, and affinity for, analytics. If they do have an interest, they should start working on analytics for themselves to test out analytical techniques, apply critical thinking, and try to understand what they can or cannot see in the data.

If that works out and their interest remains, they should volunteer for projects at work that will enable them to work with data and analytics in a work setting. If they have the education, the affinity, and the skill, then they should apply for a data science position. Grab some data and make a difference!

5. What are the key skills required for someone to be successful working in data analytics? What are the pain points/challenges one should know?

The top skill or personality trait a successful data scientist can possess (and should possess) is curiosity. Without curiosity, you will find it difficult to be successful as a data scientist. It helps to be talented and well educated, but I have met many stellar data scientists who are neither. Beyond those traits, it is more important to be diligent and persistent. The most successful business analysts and data scientists I have ever worked with were all naturally and perpetually curious, and had a level of diligence and persistence that was impressive.

As for pain points and challenges: data scientists need to work on improving their listening skills, their written and verbal communication, and their presentation skills. All data scientists need improvement in these areas.

6. What is the future of analytics? What will we see next?

I do believe that we are entering an era where data and analytics will be increasing in importance in all human endeavors. Certainly, corporate use of data and analytics will increase in importance, hence the focus of the book.

But beyond corporations, the active and engaged use of data and analytics will increase in importance and daily use in managing multiple aspects of people's personal lives, academic pursuits, governmental policy, military operations, humanitarian aid, tailoring of products and services, building of roads, towns, and cities, planning of traffic patterns, provisioning of local, federal, and state services, intergovernmental relationships, and more.

There will not be an element of human endeavor that will not be touched and changed by data and analytics. Data is ubiquitous today, and data and analytics will be ubiquitous in the very near future. We will see more discussions on who owns data and who should be able to monetize it. We will experience increasing levels of AI and analytics across all the systems we interact with, and most of it will go unnoticed, operating in the background for our benefit.

About: John K. Thompson is an international technology executive with over 30 years of experience in the business intelligence and advanced analytics fields. Currently, John is responsible for the global Advanced Analytics and Artificial Intelligence team and efforts at CSL Behring.


Imran Bashir on the Fundamentals of Blockchain, its Myths, and an Ideal Path for Beginners

Expert Network
15 Feb 2021
5 min read
With the invention of Bitcoin in 2008, the world was introduced to a new concept, blockchain, which has since revolutionized the whole of society. It promised to have an impact on every industry. This new concept is the underlying technology that underpins Bitcoin. Blockchain technology is the backbone of cryptocurrencies, and it has applications in finance, government, media, and many other industries.

Some describe blockchain as a revolution, whereas another school of thought believes that it is going to be more evolutionary, and that it will take many years before any practical benefits of blockchain reach fruition. This thinking is correct to some extent, but, in Imran Bashir's opinion, the revolution has already begun. It is a technology that has an impact on current technologies too, and it possesses the ability to change them at a fundamental level.

Let's hear from Imran on the fundamentals of blockchain technology, its myths, and his recent book, Mastering Blockchain, Third Edition.

What is blockchain technology? How would you describe it to a beginner in the field?

Blockchain is a distributed ledger which runs on a decentralized peer-to-peer network. First introduced with Bitcoin as a mechanism that ensures the security of the electronic cash system, blockchain has now become a prime area of research, with many applications in a variety of industries and sectors.

What should be the starting point for someone aiming to begin their journey in blockchain?

Focus on the underlying principles and core concepts, such as distributed systems, consensus, and cryptography, and develop using no helper tools at the start. Once you understand the basics and the underlying mechanics, you can use tools such as Truffle or some other framework to make your developer life easier; however, it is extremely important to learn the underlying concepts first.

What is the biggest myth about blockchain?

Sometimes people believe that blockchain IS cryptocurrency; however, that is not the case. Blockchain is the underlying technology behind cryptocurrencies that ensures the security and integrity of the system and prevents double spends. Cryptocurrency can be considered one application of blockchain technology out of many.

"Blockchain is one of the most disruptive emerging technologies today." How much do you agree with this?

Indeed, it is true. Blockchain is changing the way we do business. In the next five years or so, financial systems, government systems, and other major sectors will all have blockchain integrated in one way or another.

What are the factors driving the mainstream adoption of blockchain?

The development of standards, interoperability efforts, and consortium blockchains are all contributing towards mainstream adoption of blockchain. Demand for more security, transparency, and decentralization in some sectors is also a key driver of adoption; for example, blockchain is a prime solution for decentralized sovereign identity.

How do you explain the term bitcoin mining?

Mining is a colloquial term used to describe the process of creating new bitcoins, where a miner repeatedly tries to find a solution to a math puzzle, and whoever finds it first wins the right to create a new block and earn bitcoins as a reward.
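To make that "math puzzle" concrete, here is a minimal, illustrative proof-of-work loop in Python. It is only a sketch of the idea Imran describes: real Bitcoin mining hashes a binary block header with double SHA-256 against a far harder difficulty target, and the function and data below are hypothetical.

```python
import hashlib

def mine(block_data, difficulty=4):
    """Try nonces until the block's SHA-256 hash starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest  # puzzle solved: this miner wins the right to add the block
        nonce += 1

nonce, digest = mine("prev_hash+transactions")
print(f"nonce={nonce} hash={digest}")
```

Because the hash is unpredictable, the only strategy is brute-force trial, which is what makes the winning nonce expensive to find yet trivial for everyone else to verify.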
How can blockchain protect the global economy?

I think that with the trust, transparency, and security guarantees provided by blockchain, we can perceive a future where financial crime can be limited to a great degree. That can have a good impact on the global economy. Furthermore, the development of CBDCs (central bank digital currencies) is expected to have a major impact on the economy and help to stabilize it. From an inclusion point of view, blockchain can allow unbanked populations to play a role in the global financial system. If cryptocurrencies replace the current monetary system, then, because of the decentralized nature of blockchain, major cost savings can be achieved, as no intermediaries or banks will be required, and a peer-to-peer, extremely low-cost, global financial system can emerge which can transform the world economy. The entire remittance ecosystem can evolve into an extremely low-cost, secure, real-time system which can include people who were previously unbanked. The possibilities are endless.

Tell us a bit about your book, Mastering Blockchain, Third Edition.

Mastering Blockchain, Third Edition is a unique combination of theory and practice. Not only does it provide a holistic view of most areas of blockchain technology, it also covers hands-on exercises using Ethereum, Bitcoin, Quorum, and Hyperledger to equip readers with both theoretical and practical knowledge of blockchain technology. The third edition includes four new chapters on hot topics such as blockchain consensus, tokenization, Ethereum 2, and enterprise blockchains.

About the author

Imran Bashir has an M.Sc. in Information Security from Royal Holloway, University of London, and has a background in software development, solution architecture, infrastructure management, and IT service management. He is also a member of the Institute of Electrical and Electronics Engineers (IEEE) and the British Computer Society (BCS). Imran has extensive experience in both the public and financial sectors, having worked on large-scale IT projects in the public sector before moving to the financial services industry. Since then, he has worked in various technical roles for different financial companies in Europe's financial capital, London.


Understand QuickBooks Online/Desktop, online security, use cases, and more with Crystalynn Shelton, a certified QuickBooks ProAdvisor

Vincy Davis
27 Dec 2019
8 min read
QuickBooks, the accounting software package developed and marketed by Intuit, is targeted towards small and medium-sized businesses. It offers on-premises accounting applications and cloud-based versions with remote access capabilities such as remote payroll assistance and outsourcing, electronic payment functions, online banking and reconciliation, mapping features, and more. To learn more about QuickBooks' latest features and its learning curve for beginners, we did a quick interview with Crystalynn Shelton, a certified QuickBooks ProAdvisor and author of the book 'Mastering QuickBooks 2020'. With more than 10 years of experience with QuickBooks, Shelton says QuickBooks is not only user-friendly but also cost-effective. Further, when asked about her views on QuickBooks Online, Shelton points out that its unlimited live technical support is one of its main features.

On QuickBooks, its benefits and use cases

What are some of the advantages of QuickBooks that set it apart from its competitors?

QuickBooks has a number of advantages that set it apart from its competitors. First, it is affordable for most small businesses. Whether you purchase an Online subscription (starting at $20/month) or a desktop product (starting at a one-time fee of $199), there is something for every budget. Another benefit of using QuickBooks is that the program is very user-friendly. Most small business owners purchase the software and are able to set it up without having an IT person on staff. In addition, there are a number of training videos and an extensive help menu within the program, not to mention live tech support if you need it. Because QuickBooks is the accounting software program most widely used by small businesses, most accountants and CPAs are familiar with it. Some of these folks are certified ProAdvisors (like myself). They can offer consulting, training, and even bookkeeping services to small business owners who use QuickBooks.

Can you elaborate on how small businesses can benefit from QuickBooks? Also, how does QuickBooks simplify tasks for them?

While there are numerous reasons why small businesses decide to use QuickBooks, several tend to be the most common: small businesses that can't afford to hire a bookkeeper; small businesses that have outgrown the use of Excel spreadsheets and need a more sophisticated way to track income and expenses; small businesses that need financial statements in order to apply for a line of credit or business loan; and small businesses whose tax professional will no longer accept a shoebox of receipts to file taxes.

QuickBooks simplifies bookkeeping by allowing you to track all aspects of the business in one place: accounts payable, accounts receivable, income, and expenses. It uses simple language such as "people who owe you" (aka accounts receivable) or "what you owe to others" (aka accounts payable) to help business owners without prior bookkeeping knowledge comprehend the program. QuickBooks allows you to accept credit card payments from customers so you can get paid faster and easily reconcile payments to open invoices. Not to mention, you can reduce (if not eliminate) manual data entry by connecting all of your business bank and credit card accounts to QBO.

Can you elaborate on how your book 'Mastering QuickBooks 2020' will prepare bookkeepers and accounting students to learn the ropes of QuickBooks? Also, what does the learning curve look like for users who have no bookkeeping knowledge and no experience with QuickBooks?

This book was written with the assumption that the reader has no experience or knowledge of bookkeeping. We use simple language to explain how QuickBooks works, and we have also provided screenshots to support the concepts being taught. Chapter 1 includes a section that covers bookkeeping basics, which will help non-accountants gain a better understanding of the terminology used in the field of accounting as well as in QuickBooks. This information will help aspiring accountants build on their existing bookkeeping knowledge. In addition, we have included the behind-the-scenes debits and credits for certain transactions to help accounting students prepare for the CPA exam or other academic tests.

Shelton's views on QuickBooks Online and Desktop

What are your thoughts on QuickBooks Online and QuickBooks Desktop? What are the benefits of cloud accounting over desktop? Do factors such as the size of an organization, or its maturity, matter in choosing between the online and desktop versions?

There are several benefits of using cloud accounting software over desktop software. Cloud accounting software allows you to manage your business from any device with an internet connection, whereas desktop limits you to a desktop computer. With cloud accounting software like QuickBooks Online, you can give anyone access to your QuickBooks data without them having to travel to your office. Cloud accounting software includes automatic real-time updates of your data. Unlike desktop software, you don't have to worry about backing up your data with Online; it's automatically done for you. Finally, QuickBooks Online includes unlimited live technical support. This is an invaluable feature for small business owners who are managing their own books and need the ability to get help when they need it.

The size of an organization, its structure, and its length of time in business can definitely impact whether a business should choose QuickBooks Online or Desktop. As a QuickBooks ProAdvisor, one of the first things I do is conduct an assessment to determine what the needs of my clients are. This involves documenting the details of their current processes (i.e., invoicing customers, paying bills, managing inventory, etc.). Once I have this information, I am able to determine whether QuickBooks Desktop is right or whether QuickBooks Online is the best fit. If both products are ideal, I provide my clients with the downsides (if any) of going with one product over the other. This gives my clients all of the information they need to make an informed decision.

On how QuickBooks secures online data

How does QuickBooks help in securing payments? How does QuickBooks keep online data safe?

To secure payments, Intuit transmits, supports, protects, and accesses all cardholder information in compliance with the Payment Card Industry's (PCI) data security standards. Additional security precautions Intuit has implemented are as follows: all data between Intuit servers and their customers is encrypted with at least 128-bit TLS, and all copies of daily backup data are encrypted with 256-bit AES encryption. Data is kept secure with multiple servers housed in Tier 3 data centers that have strict access controls and real-time video monitoring of the data center. All servers are hardened Linux installations, which are monitored in real time and kept up to date with security patches.

Can you suggest some best practices (at least five) that will help QuickBooks aspirants save time and become QuickBooks pros?

There are several ways you can save time and become proficient in QuickBooks Online. First, I recommend that you use QuickBooks on a daily basis. The more hands-on experience you have with QuickBooks, the more proficient you will become. Second, take the time to properly set up your QuickBooks account before you start entering transactions. In Chapter 2, we provide you with a detailed checklist of the information you need to set up QuickBooks. By taking the time to set up customers, vendors, the chart of accounts, and your products and services upfront, you will spend less time having to do it later on when you are trying to enter data. Third, all aspiring bookkeepers and accountants should get certified in QuickBooks Online. Certification is offered through Intuit, and it is free. As a Certified QuickBooks ProAdvisor, you get access to product discounts, marketing materials to promote bookkeeping services to prospective clients, and a certification badge and designation you can put on business cards, websites, and email signature lines. Fourth, utilize keyboard shortcuts. They will save you time as you navigate the program. We have included a list of QBO keyboard shortcuts in the appendix of this book. Finally, connect as many bank and credit card accounts as you can to QBO. By doing so, you will reduce the amount of manual data entry required, which will help you keep your books up to date.

If you want to learn how to build the perfect budget, simplify tax return preparation, manage inventory, track job costs, and generate income statements and financial reports, check out Crystalynn's book 'Mastering QuickBooks 2020'. This book will work for a small business owner, bookkeeper, or accounting student who wants to learn how to make the most of QuickBooks Online.

About the author

Crystalynn Shelton is a licensed Certified Public Accountant, a certified QuickBooks ProAdvisor, and has been certified in QuickBooks for more than 10 years. Crystalynn is currently a staff writer for Fit Small Business and an Adjunct Instructor at UCLA Extension, where she teaches accounting, bookkeeping, and QuickBooks to hundreds of small business owners and accounting students each year. Her previous experience includes working at Intuit (QuickBooks) as a Sr. Learning Specialist.


Greg Walters on PyTorch and real-world implementations and future potential of GANs

Vincy Davis
13 Dec 2019
10 min read
Introduced in 2014, GANs (Generative Adversarial Networks) were first presented by Ian Goodfellow and other researchers at the University of Montreal. A GAN comprises two deep networks: the generator, which generates data instances, and the discriminator, which evaluates the data for authenticity. GANs work not only as a form of generative model for unsupervised learning, but have also proved useful for semi-supervised learning, fully supervised learning, and reinforcement learning. In this article, we are in conversation with Greg Walters, one of the authors of the book 'Hands-On Generative Adversarial Networks with PyTorch 1.x', where we discuss some of the real-world applications of GANs. According to Greg, facial recognition and age progression will be among the areas where GANs will shine in the future. He believes that, with time, GANs will be visible in more and more real-world applications, because with GANs the possibilities are unlimited.
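To ground the two-network setup described above, here is a minimal PyTorch sketch of a generator/discriminator pair. The layer sizes and the 784-dimensional data (think flattened 28x28 images) are illustrative assumptions, not code from the book:

```python
import torch
import torch.nn as nn

latent_dim = 100  # illustrative noise dimension

# Generator: turns random noise into a fake data instance (a 784-dim vector here).
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

# Discriminator: scores an instance as real (close to 1) or fake (close to 0).
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

noise = torch.randn(16, latent_dim)
fake = generator(noise)        # the generator proposes candidates...
verdict = discriminator(fake)  # ...and the discriminator judges their authenticity
print(verdict.shape)           # torch.Size([16, 1])
```

In training, the two networks are optimized adversarially: the discriminator learns to tell real from fake, while the generator learns to fool it.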
On why PyTorch for building GANs

Why choose PyTorch for GANs? Is PyTorch better than other popular frameworks like TensorFlow?

Both PyTorch and TensorFlow are good products. TensorFlow is based on code from Google and PyTorch is based on code from Facebook. I think that PyTorch is more pythonic and (in my opinion) is easier to learn. TensorFlow is two years older than PyTorch, which gives it a bit of an edge, and it does have a few advantages over PyTorch, like visualization and deploying trained models to the web. However, one of the biggest advantages that PyTorch has is the ability to handle distributed training; it's much easier when using PyTorch. I'm sure that both groups are looking at trying to lessen the gaps that exist, and that we will see big changes in both. Refer to Chapter 4 of my book to learn how to use PyTorch to train a GAN model.

Have you had a chance to explore the recently released PyTorch 1.3? What are your thoughts on the experimental named tensors feature? How do you think it will help developers write more readable and maintainable code? What are your thoughts on other features, like PyTorch Mobile and 8-bit model quantization for mobile-optimized AI?

The book was originally written to introduce PyTorch 1.0 but quickly evolved to work with PyTorch 1.3.x. Things are moving very quickly for PyTorch, so it presents an ever-moving target. Named tensors are very exciting to me. I haven't had a chance to spend a tremendous amount of time on them yet, but I plan to continue working with them and explore them deeply. I believe that they will make some of the concepts of manipulating tensors much easier for beginners to understand, and will make it easier to read and understand code created by others. This will help create more novel and useful GANs for the future.

The same can be said for PyTorch Mobile. Expanding capabilities to more (and less expensive) processor types like ARM creates more opportunities for programmers and companies that don't have high-end capabilities. Consider the possibilities of running a heavy-duty AI on a $35 Raspberry Pi. The possibilities are endless. With PyTorch Mobile, both Android and iOS devices can benefit from the new advances in image recognition and other AI programs. The 8-bit model quantization allows tensor operations to be done using integers rather than floating-point values, allowing models to be more compact. I can't begin to speculate on what this will bring us in the way of applications in the future. You can read Chapter 2 of my book to learn more about the new features in PyTorch 1.3.
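As a taste of the two PyTorch 1.3 features Greg mentions, here is a small sketch assuming the experimental named-tensor API and the dynamic quantization utility as they shipped in that release; the shapes and layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

# Named tensors (experimental in 1.3): dimensions carry names instead of bare indices.
imgs = torch.randn(8, 3, 64, 64, names=("N", "C", "H", "W"))
print(imgs.names)  # ('N', 'C', 'H', 'W')
flat = imgs.flatten(["C", "H", "W"], "features")  # refer to dims by name, not position

# 8-bit dynamic quantization: Linear weights stored as int8 for a more compact model.
model = nn.Sequential(nn.Linear(3 * 64 * 64, 128), nn.ReLU(), nn.Linear(128, 10))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```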
On challenges and real-world applications of GANs

GANs have found some very interesting implementations in the past year, like a deepfake that can animate your face with just your voice, a neural GAN to fight fake news, a CycleGAN to visualize the effects of climate change, and more. Most GAN implementations are built for experimentation or research purposes. Do you think GANs can soon translate to solving real-world problems? What do you think are the current challenges that restrict GANs from being implemented in real-world scenarios?

Yes, I do believe that we will see GANs starting to move to more real-world applications. Remember that, in the grand scheme of things, GANs are still fairly new; 2014 wasn't that long ago. We will see things start to pop in 2020 and move forward from there. As to the current challenges, I think that it's simply a matter of getting the word out. Many people who are conversant with machine learning still haven't heard of GANs, mainly because they are so busy with what they know and are comfortable with that they haven't had the time and/or energy to explore GANs yet. That will change. Of course, things change on almost a daily basis, so who can guess where we will be in another two years?

Some of the existing and future applications that GANs can help implement include new photo-realistic scenes for video games, movies, and television; taking sketches from designers and making realistic photographs in both the fashion industry and architecture; taking a partial facial image and making a rotated view for better facial recognition; age progression and regression; and so much more. Pretty much anything with a pattern, be it image or text, can be manipulated using GANs.

There is a variety of GANs available out there. How should one approach them in terms of problem solving? What are the other possible ways to group GANs?

That's a very hard question to answer. You are correct, there are a large number of GANs in "the wild", and some work better for some things than others. That was one of the big challenges of writing the book. Add to that, new GANs are coming out all the time that continue to get better and better and extend the possibility matrix. The best suggestion that I could make here is to use the resources of the Internet and read, read, and read. Try one or two to see what works best for your application. Also, create your own category list based on your research. Continue to refine the categories as you go. Then share your findings so others can benefit from what you've learned.

New GAN implementations and future potential

In your book, 'Hands-On Generative Adversarial Networks with PyTorch 1.x', you have demonstrated how GANs can be used in image restoration problems, such as super-resolution image reconstruction and image inpainting. How do SRGANs help in improving the resolution of images and performing image inpainting? What other deep learning models can be used to address image restoration problems? What are other key image-related problems where GANs are useful and relevant?

Well, that is sort of like asking "how long is a piece of string?" Picture a painting in a museum that has been damaged by fire or over time. Right now, we have to rely on very highly trained experts who spend hundreds of hours to bring the painting back to its original glory. However, it's still an approximation of what the expert THINKS the original was to be. With things like SRGAN, we can see old photos "restored" to what they were originally. We can already see colorized versions of some black-and-white classic films and television shows. The possibilities are endless. Image restoration is not limited to GANs, but at the moment they seem to be one of the most widely used methods. Fairly new methods like ARGAN (Artifact Reduction GAN) and FD-GAN (Face De-Morphing GAN or Feature Distilling GAN) are showing a lot of promise. By the time I'm finished with this interview, there could be three or more others that surpass these. ARGAN is similar to, and can work with, SRGAN to aid in image reconstruction. FD-GAN can be used to work with human position images, creating different poses from a totally different pose. This has any number of possibilities, from simple fashion shots to, again, photo-realistic images for games, movies, and television shows. Find out more about image restoration in Chapter 7 of my book.

GANs are labeled as innovative due to their ability to generate fake data that looks real. The latest developments in GANs allow them to generate high-dimensional fake data, images, or video that can easily go undetected. What is your take on the ethical issues surrounding GANs? Don't you think developers should target creating GANs that will be good for humanity rather than developing scary AI capabilities?

Good question. However, the same question has been asked about almost every advance in technology since rainbows were in black and white. Take, for example, the discussion in Chapter 6, where we use CycleGAN to create van Gogh-like images. As I was running the code we present, I was constantly amazed by how well the generator kept coming up with better fakes that looked more and more like they were done by the Master. Yes, there is always the potential for using the technology for "wrong" purposes. That has always been the case. We already have AI that can create images that can fool talent scouts, and fake news stories. J. Hector Fezandie said back in 1894 that "with great power comes great responsibility", and it was repeated by Peter Parker's Uncle Ben thanks to Stan Lee. It was very true then and is still just as true.

How do you think GANs will contribute to AI innovations in the future? Are you expecting/excited to see an implementation of GANs in a particular area/domain in the coming years?

Five years ago, GANs were pretty much unknown and were only in the very early stages of reality. At that point, no one knew the multitude of directions that GANs would head towards. I can't begin to imagine where GANs will take us in the next two years, much less the far future. I can't imagine any area that wouldn't benefit from the use of GANs. One of the subjects we wanted to cover was facial recognition and age progression, but we couldn't get permission to use the dataset. It's a shame, but that will be one of the areas where GANs will shine in the future. Things like biomedical research could be one area that might really be helped by GANs. I hate to keep using this phrase, but the possibilities are unlimited.

If you want to learn how to build, train, and optimize next-generation GAN models and use them to solve a variety of real-world problems, read Greg's book 'Hands-On Generative Adversarial Networks with PyTorch 1.x'. This book highlights all the key improvements in GANs over generative models and will guide you in making GANs with the help of hands-on examples.


Prof. Rowel Atienza discusses the intuition behind deep learning, advances in GANs & techniques to create cutting-edge AI models

Packt Editorial Staff
30 Sep 2019
6 min read
In recent years, deep learning has made unprecedented progress in vision, speech, natural language processing and understanding, and other areas of data science. Developments in deep learning techniques, including GANs, variational autoencoders, and deep reinforcement learning, are creating impressive AI results. For example, DeepMind's AlphaGo Zero became a game changer in AI research when it beat world champions in the game of Go.

In this interview, Professor Rowel Atienza, author of the book Advanced Deep Learning with Keras, talks about the recent developments in the field of deep learning. The book is a comprehensive guide to the advanced deep learning techniques available today, so you can create your own cutting-edge AI, and it strikes a balance between advanced concepts in deep learning and practical implementations with Keras.

Key takeaways from the interview

- The intuition of deep learning is built on the fact that the deeper the network gets, the more feature representations the network learns in order to solve complex real-world problems.
- The objective of deep learning is to enable agents to be more robust to unforeseen events and to lessen the dependency on huge data.
- Advances in GANs enable us to generate high-dimensional fake data, such as high-resolution images or videos, that look very convincing.
- Deep learning tackles the curse of dimensionality by finding efficient data structures and layers that can represent complex data in the most efficient manner.

The interview in detail

What is the intuition behind deep learning? What are the recent developments in deep learning?

Rowel Atienza: Deep learning is built on the intuition that the deeper the network gets, the more feature representations the network learns in order to solve complex real-world problems. Unlike machine learning, deep learning learns these features automatically from data, in different degrees of supervision. There are many recent developments in deep learning. There are advances in graph neural networks, because people are realizing the limits of NLP (Natural Language Processing), CNNs (Convolutional Neural Networks), and RNNs (Recurrent Neural Networks) in representing more complex data structures such as social networks, 3D shapes, molecular structures, etc. Implementing causality in reasoning on data is another area of strong interest; deep learning is strong on correlation, not on discovering causality in data. Meta-learning, or learning to learn, is also another area of interest. The objective is to enable agents to be more robust to unforeseen events and to lessen the dependency on huge data.

What are the different deep learning techniques to create successful AI?

RA: A successful AI is dependent on two things: 1) deep domain knowledge and 2) deep understanding of state-of-the-art techniques that will work on the domain problem. Domain knowledge comes from someone who is very familiar with the domain problem. This person is not necessarily knowledgeable in AI. This domain knowledge is then modelled in AI to automate the process of problem solving.

How does deep learning tackle the curse of dimensionality?

RA: One of the goals of deep learning is to keep on finding efficient data structures and layers that can represent complex data in the most efficient manner. For example, geometric deep learning is able to circumvent the limitations of representing and learning from 3D data by avoiding inefficient 3D convolutions. There is still so much to be done in this space.

What are autoencoders? Why are autoencoders needed in deep learning? How do you create an autoencoder?

RA: Autoencoders compress high-dimensional data into a low-dimensional code without losing important information. The low-dimensional code is suitable for further processing by other deep learning models, such as generative models like GANs and VAEs. An autoencoder can easily be implemented using two networks, an encoder and a decoder. The depth, width, and type of layers are dependent on the original data to be encoded.
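A minimal Keras sketch of the encoder/decoder pair Atienza describes might look like the following; the 784-to-32 compression (e.g. flattened 28x28 images) is an illustrative assumption, not an example from the book:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Encoder/decoder pair: compress 784-dim inputs into a 32-dim code,
# then reconstruct the input from that code.
inputs = keras.Input(shape=(784,))
code = layers.Dense(32, activation="relu")(inputs)       # encoder
outputs = layers.Dense(784, activation="sigmoid")(code)  # decoder
autoencoder = keras.Model(inputs, outputs)

# The model learns to reproduce its own input, so inputs double as targets:
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)
```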
Why are GANs so innovative?

RA: GANs are innovative since they are good at generating fake data that looks real. This is something that is hard to accomplish using other generative models. The advances in GANs enable us to generate high-dimensional fake data, such as high-resolution images or video, that look very convincing.

Tell us a little bit about this book. What makes it necessary? What gap does it fill?

RA: Advanced Deep Learning with Keras focuses on recent advances in deep learning. It starts with a quick review of deep learning concepts (NLP, CNNs, RNNs). Discussions of deep neural networks, autoencoders, generative adversarial networks (GANs), variational autoencoders (VAEs), and deep reinforcement learning (DRL) follow. The book is important for everyone who would like to understand advanced concepts in deep learning and their corresponding implementation in Keras. The current version has an in-depth focus on generative models (autoencoders, GANs, VAEs) that can be used in a practical setting. The DRL material explains the core concepts of value-based and policy-based methods in reinforcement learning, with corresponding working implementations in Keras, which are difficult to get right.

About the Book

Advanced Deep Learning with Keras is a comprehensive guide to the advanced deep learning techniques available today, so you can create your own cutting-edge AI. Using Keras as an open-source deep learning library, you'll find hands-on projects throughout that show you how to create more effective AI with the latest techniques.

About the Author

Rowel Atienza is an Associate Professor at the Electrical and Electronics Engineering Institute of the University of the Philippines, Diliman. He holds the Dado and Maria Banatao Institute Professorial Chair in Artificial Intelligence. Rowel has been fascinated with intelligent robots since he graduated from the University of the Philippines. He received his MEng from the National University of Singapore for his work on an AI-enhanced four-legged robot. He finished his Ph.D. at The Australian National University for his contribution to the field of active gaze tracking for human-robot interaction.


Listen: researcher Rowel Atienza discusses artificial intelligence, deep learning, and why we don't need to fear a robot-ruled future [Podcast]

Richard Gall
08 Apr 2019
2 min read
Artificial intelligence threats are regularly talked up by the media. This is largely because the area is widely misunderstood. The robot revolution and dangerous algorithms are, unfortunately, much sexier than math and statistics. Artificial intelligence isn't really that scary. And while it does pose many challenges for society, it's essential to remember that these are practical challenges that don't exist in some abstract realm. They are engineering and ethical problems that we can all help solve.

In this edition of the Packt podcast, we spoke to Rowel Atienza about the reality of artificial intelligence. In particular, we wanted to understand the practical realities behind the buzz. As an Associate Professor at the University of the Philippines researching numerous different aspects of artificial intelligence, and author of Advanced Deep Learning with Keras, he's someone with experience and insight into what really matters across the field.

Getting past the artificial intelligence hype with Rowel Atienza

In the episode we discussed:

- The distinction between AI, machine learning, and deep learning
- Why artificial intelligence is so hot right now
- The key machine learning frameworks: TensorFlow, PyTorch, and Keras
- How they compare and why Rowel loves Keras
- The importance of ethics and transparency
- Essential skills for someone starting or building a career in the field
- How far we really are from AGI

Listen here: https://soundcloud.com/packt-podcasts/were-still-very-far-from-robots-taking-over-society-rowel-atienza-on-deep-learning-and-ai

“Deep learning is not an optimum solution for every problem faced”: An interview with Valentino Zocca

Sunith Shetty
14 Nov 2018
11 min read
Over the past few years, we have seen advanced technologies in artificial intelligence shaping human life. Deep learning (DL) has become the main driving force behind new innovations in almost every industry, and we are sure to continue to see DL everywhere. Many companies, including startups, are already integrating deep learning into their day-to-day processes. Deep learning techniques and algorithms have made building advanced neural networks practically feasible, thanks to high-level open source libraries such as TensorFlow, Keras, PyTorch, and more.

We recently interviewed Valentino Zocca, a deep learning expert and the author of the book Python Deep Learning. Valentino explains why deep learning is getting so much hype, and what the roadmap ahead looks like in terms of new technologies and libraries. He also talks about how major vendors and tech-savvy startups adopt deep learning within their organizations. As a consultant and an active developer, he is expecting a better approach than back-propagation to emerge for carrying out various deep learning tasks.

Author's Bio

Valentino Zocca graduated with a Ph.D. in mathematics from the University of Maryland, USA, with a dissertation in symplectic geometry, after having graduated with a laurea in mathematics from the University of Rome. He spent a semester at the University of Warwick. After a post-doc in Paris, Valentino started working on high-tech projects in the Washington, D.C. area, where he played a central role in the design, development, and realization of advanced stereo 3D Earth visualization software with head tracking at Autometric, a company later bought by Boeing. At Boeing, he developed many mathematical algorithms and predictive models, and using Hadoop, he also automated several satellite-imagery visualization programs. He has since become an expert in machine learning and deep learning and has worked at the U.S. Census Bureau and as an independent consultant both in the US and in Italy. He has also held seminars on the subjects of machine learning and deep learning in Milan and New York. Currently, Valentino lives in New York and works as an independent consultant for a large financial company, where he develops econometric models and uses machine learning and deep learning to create predictive models. He often travels back to Rome and Milan to visit his family and friends.

Key Takeaways

- Deep learning is one of the most widely adopted techniques in image and speech recognition and anomaly detection research and development.
- Deep learning is not the optimum solution for every problem faced. Depending on the complexity of the challenge, building the neural network can be tricky.
- Open-source tools will continue to compete with enterprise software, and more and more features are expected to arrive, providing efficient and powerful deep learning solutions.
- Across organizations, deep learning is used as a tool rather than a solution, and the tool's usage can differ based on the problem faced.
- Emerging specialized chips are expected to bring more deep learning developments to the mobile, IoT, and security domains.
- Valentino Zocca states: "We have a quantity vs. quality problem. We will be requiring better paradigms and approaches in the future, which can be achieved through research-driven innovative solutions instead of relying on hardware solutions. We can make faster machines, but our goal is really to make more intelligent machines" for performing accelerated deep learning and distributed training.
Full Interview

Deep learning is as much infamous as it is famous in the machine learning community, with camps passionately supporting and opposing its use. Where do you fall on this spectrum? If you were given a chance to convince the rival camp with 5-10 points on your stand about DL, what would your pitch be like?

The reality is that deep learning techniques have their own advantages and disadvantages. The areas where deep learning clearly outperforms most other machine learning techniques are image and speech recognition and anomaly detection. One of the reasons why deep learning does so much better is that these problems can be decomposed into a hierarchical set of increasingly complex structures, and, in multi-layer neural nets, each layer learns these structures at different levels of complexity. For example, in image recognition, the first layers will learn about the lines and edges in the image. The subsequent layers will learn how these lines and edges come together to form more complex shapes, like the eyes of an animal, and finally the last layers will learn how these more complex shapes form the final image. However, not every problem can suitably be decomposed using this hierarchical approach.

Another issue with deep learning is that it is not yet completely understood how it works, and some areas, for example banking, that are heavily regulated may not be able to easily justify their predictions. Finally, many neural nets may require a heavier computational load than other classical machine learning techniques. Therefore, the reality is that one still needs a proficient machine learning expert who deeply understands the functioning of each approach and can make the best decision depending on each problem. Deep learning is not, at the moment, a complete solution to any problem, and, in general, there can be no definite side to pick; it really depends on the problem at hand.
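To picture the layer hierarchy Valentino describes, here is a small illustrative Keras convolutional stack; the per-layer interpretations in the comments are a simplification, and all sizes are arbitrary assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Early layers respond to simple structure; deeper layers compose it into
# increasingly complex shapes, as described above.
model = keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),  # lines, edges
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),   # corners, textures, simple shapes
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # object parts (eyes, wheels, ...)
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),    # whole-image decision
])
model.summary()
```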
Deep learning can conquer tough challenges, no doubt. However, there are many common myths and realities around deep learning. Would you like to give your supporting reasoning on whether the following statements are myth or fact?

- You need to be a machine learning expert or a math geek to build deep learning models.
- We need powerful hardware resources to use deep learning.
- Deep learning models are always learning; they improve with new data automagically.
- Deep learning is a black box, so we should avoid using it in production environments or in real-world applications.
- Deep learning is doomed to fail. It will eventually be replaced by data-sparse, resource-economic learning methods like meta-learning or reinforcement learning.
- Deep learning is going to be central to the progress of AGI (artificial general intelligence) research.

Deep learning has become almost a buzzword, therefore a lot of people talk about it, sometimes misunderstanding how it works. People hear the word DL together with "it beats the best player at Go" or "it can recognize things better than humans", and they think that deep learning is a mature technology that can solve any problem. In actuality, deep learning is a mature technology only for some specific problems; you do not solve everything with deep learning, and yet, at times, whatever the problem, I hear people asking me "can't you use deep learning for it?" The truth is that we have lots of libraries ready to use for deep learning. For example, you don't need to be a machine learning expert or a math geek to build simple deep learning models for run-of-the-mill problems, but in order to solve some of the challenges that less common issues present, a good understanding of how a neural network works may indeed be very helpful. Like everything, you can find a grain of truth in each of those statements, but they should not be taken at face value.

With MLaaS being provided by many vendors, from Google to AWS to Microsoft, deep learning is gaining widespread adoption not just within large organizations but also by data-savvy startups. How do you view this trend? More specifically, is deep learning being used differently by these two types of organizations? If so, what could be some key reasons?

Deep learning is not a monolithic approach. We have different types of networks: ANNs, CNNs, LSTMs, RNNs, etc. Honestly, it makes little sense to ask if DL is being used differently by different organizations. Deep learning is a tool, not a solution, and like all tools it should be used differently depending on the problem at hand, not depending on who is using it.

There are many open source tools and enterprise software products (especially the ones which claim you don't need to code much) in the race. Do you think the future is one where more and more people opt for ready-to-use, enterprise-backed cognitive tools (MLaaS) like IBM Watson rather than open-source tools?

This holds true for everything. At the beginning of the internet, people would write their own HTML code for their web pages; now we use tools that do most of the work for us. But if we want something to stand out, we need a professional designer. The more a technology matures, the more ready-to-use tools will be available, but that does not mean that we will never need professional experts to improve on those tools and provide specialized solutions.

Deep learning is now making inroads into the mobile, IoT, and security domains as well. What makes DL great for these areas? What are some challenges you see while applying DL in these new domains?

I do not have much experience with DL on mobiles, but that is clearly a direction that is becoming increasingly important. I believe we can address these new domains by building specialized chips.

Deep learning is a deeply researched topic within the machine learning and AI communities. Every year brings us new techniques, from neural nets to GANs to capsule networks, that then get widely adopted both in research and in real-world applications. What are some cutting-edge techniques you foresee getting public attention in deep learning in 2018 and in the near future? And why?

I am not sure we will see anything new in 2018, but I am a big supporter of the idea that we need a better paradigm that can excel more at inductive reasoning rather than just deductive reasoning. At the end of last year, even DL pioneer Geoff Hinton admitted that we need a better approach than back-propagation; however, I doubt we will see anything new come out this year. It will take some time.

We keep hearing about noteworthy developments in AI and deep learning from DeepMind and OpenAI. Do you think they have the required armory to revolutionize how deep learning is performed? What are some key challenges for such deep learning innovators?

As I mentioned before, we need a better paradigm, but what this paradigm is, nobody knows.
Gary Marcus is a strong proponent of introducing more structure in our networks, and I do concur with him; however, it is not easy to define what that structure should be. Many people want to use the brain as a model, but computers are not biological structures, and if we had tried to build airplanes by mimicking how a bird flies, we would not have gone very far. I think we need a clean break and a new approach; I do not think we can go very far by simply refining and improving what we have.

Improvements in processing capabilities and the availability of custom hardware have propelled deep learning into production-ready environments in recent years. Can we expect more chips and other hardware improvements in the coming years for GPU-accelerated deep learning and distributed training? What other supporting factors will facilitate the growth of deep learning?

Once again, foreseeing the future is not easy; however, as these questions are related, I think only so much can be gained by improving chips and GPUs. We have a quantity vs. quality problem. We can improve quantity (of speed, memory, etc.) through hardware improvements, but the real problem is that we need a real quality improvement: better paradigms and approaches, which need to be achieved through research and not with hardware solutions. We can make faster machines, but our goal is really to make more intelligent machines. A child can learn by seeing just a few examples; we should be able to create an approach that allows a machine to also learn from few examples, not by cramming millions of examples in a short time.

Would you like to add anything more for our readers?

Deep learning is a fascinating discipline, and I would encourage anyone who wants to learn more about it to approach it as a research project, without underestimating his or her own creativity and intuition. We need new ideas.

If you found this interview interesting, make sure you check out other insightful interviews on a range of topics:

- Blockchain can solve tech's trust issues – Imran Bashir
- "Tableau is the most powerful and secure end-to-end analytics platform": An interview with Joshua Milligan
- "Pandas is an effective tool to explore and analyze data": An interview with Theodore Petrou
“With Python, you can create self-explanatory, concise, and engaging data visuals, and present insights that impact your business” - Tim Großmann and Mario Döbler [Interview]

Savia Lobo
10 Nov 2018
6 min read
Data today is the world's most important resource. However, without properly visualizing your data to discover meaningful insights, it's useless. Creating visualizations helps you get a clearer, more concise view of the data, making it more tangible for (non-technical) audiences. To further illustrate this, below are questions aimed at giving you an idea of why data visualization is so important and why Python should be your choice.

In a recent interview, Tim Großmann and Mario Döbler, the authors of the course 'Data Visualization with Python', shared with us the importance of data visualization and why Python is the best fit for carrying it out.

Key Takeaways

- Data visualization is a great way, and sometimes the only way, to make sense of the constantly growing mountain of data being generated today.
- With Python, you can create self-explanatory, concise, and engaging data visuals, and present insights that impact your business.
- Your data visualizations will make information more tangible for stakeholders while telling them an interesting story.
- Visualizations are a great tool for transferring your understanding of the data to a less technical co-worker. This builds a faster and better understanding of data.
- Python is the most requested and used language in the industry. Its ease of use and the speed at which you can manipulate and visualize data, combined with the number of available libraries, make Python the best choice.

Full Interview

Why is data visualization important? What problem is it solving?

As the amount of data grows, the need for developers with knowledge of data analytics, and especially data visualization, spikes. In recent years we have experienced exponential growth of data; currently, the amount of data doubles every two years. For example, more than eight thousand tweets are sent per second, and more than eight hundred photos are uploaded to Instagram per second. To cope with these large amounts of data, visualization is essential to make them more accessible and understandable.

Everyone has heard the saying that a picture is worth a thousand words. Humans process visual data better and faster than any other type of data. Another important point is that data is not necessarily the same as information. Often people aren't interested in the data itself but in some information hidden in it. Data visualization is a great tool for discovering hidden patterns and revealing the relevant information. It bridges the gap between quantitative data and human reasoning; in other words, visualization turns data into meaningful information.

What other similar solutions or tools are out there? Why is Python better?

Data visualizations can be created in many ways using many different tools. MATLAB and R are two of the available languages that are heavily used in the field of data science and data visualization. There are also some non-coding tools, like Tableau, which are used to quickly create basic visualizations. However, Python is the most requested and used language in the industry. Its ease of use and the speed at which you can manipulate and visualize data, combined with the number of available libraries, make Python the best choice.

In addition to all the mentioned perks, Python has an incredibly big ecosystem with thousands of active developers. Python also stands out in that it allows users to build their own small additions to the tools they use, if necessary. There are examples of pretty much everything online for you to use, modify, and learn from.
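To give a feel for how little Python it takes, here is a minimal sketch of the kind of quick exploratory plot the authors describe. The dataset is made up for illustration:

```python
import matplotlib.pyplot as plt
import numpy as np

# Toy data standing in for anything you might want to explore quickly.
rng = np.random.default_rng(seed=42)
followers = rng.integers(100, 10_000, size=200)
engagement = followers * 0.05 + rng.normal(0, 50, size=200)

# A scatter plot plus a histogram: often enough to spot patterns and outliers.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(followers, engagement, alpha=0.5)
ax1.set_xlabel("Followers")
ax1.set_ylabel("Engagement")
ax2.hist(engagement, bins=20)
ax2.set_xlabel("Engagement")
ax2.set_ylabel("Count")
plt.tight_layout()
plt.show()
```

A dozen lines like these are usually all it takes to turn a table of numbers into something a non-technical colleague can reason about.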
How can data visualization help developers? Give specific examples of how it can solve a problem.

Working with, and especially understanding, large amounts of data can be a hard task. Without visualizations, this might even be impossible for some datasets. If you need to transfer your understanding of the data to a less technical co-worker in particular, visualizations are a great tool for a faster and better understanding of the data. In general, a visualization of your data often says more than a thousand words.

Imagine getting a dataset that only consists of numerical columns. Getting good insights into this data without visualizations is impossible. However, even with some simple plots, you can often improve your understanding of even the most difficult datasets. Think back to the last time you had to give a presentation about your findings and all you had was a table of numerical values. You understood it, but your colleagues sat there and scratched their heads. Had you instead created some simple visualizations, you would have impressed the entire team with your results.

What are some best practices for learning/using data visualization with Python?

Some of the best practices you should keep in mind while visualizing data with Python are:

- Start looking at and experimenting with examples
- Start from scratch and build on it
- Make full use of documentation
- Use every opportunity you have with data to visualize it

To know more about the best practices in detail, read our detailed notes on 4 tips for learning Data Visualization with Python.

What are some myths/misconceptions surrounding data visualization with Python?

- Data visualizations are just for data scientists
- Its technologies are difficult to learn
- Data visualization isn't needed for data insights
- Data visualization takes a lot of time

Read about these myths in detail in our article, 'Python Data Visualization myths you should know about'.

Data visualization in combination with Python is an essential skill when working with data. When properly utilized, it is a powerful combination that not only enables you to get better insights into your data but also gives you a tool to communicate results better. Data nowadays is everywhere, so developers of every discipline should be able to work with it and understand it.

About the authors

Tim Großmann is a CS student with interest in diverse topics ranging from AI to IoT. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of big data engineering. He's highly involved in different open source projects and actively speaks at meetups and conferences about his projects and experiences.

Mario Döbler is a graduate student with a focus on deep learning and AI. He previously worked at the Bosch Center for Artificial Intelligence in Silicon Valley in the field of deep learning, using state-of-the-art algorithms to develop cutting-edge products. Currently, he dedicates himself to applying deep learning to medical data to make healthcare accessible to everyone.

- Setting up Apache Druid in Hadoop for Data visualizations [Tutorial]
- 8 ways to improve your data visualizations
- Getting started with Data Visualization in Tableau
“Instead of data scientists working on their models and advancing AI, they are spending their time doing DeepOps work”, MissingLink CEO, Yosi Taguri [Interview]

Amey Varangaonkar
08 Nov 2018
10 min read
Machine learning has shown immense promise across domains and industries in recent years. From helping with the diagnosis of serious ailments to powering autonomous vehicles, machine learning is finding useful applications across a spectrum of industries. However, the actual process of delivering business outcomes using machine learning currently takes too long and is too expensive, forcing some businesses to look for less burdensome alternatives. MissingLink.ai is a recently launched platform built to fix just this problem. It enables data scientists to spend less time on grunt work by automating and streamlining the entire machine learning cycle, giving them more time to apply the actionable insights gleaned from the data.

Key Takeaways

- Processing and managing the sheer volume of data is one of the key challenges that today's AI tools face.
- Yosi thinks the idea of companies creating their own machine learning infrastructure doesn't make a lot of sense. Data professionals should be focusing on more important problems within their organizations by letting the platform take care of the grunt work.
- MissingLink.ai is an innovative AI platform born out of the need to simplify AI development, by taking the common, menial data processing tasks away from data scientists and allowing them to focus on the bigger data-related issues, through experiment management, data management, and resource management.
- MissingLink is part of the Samsung NEXT product development team, which aims to help businesses automate and accelerate their projects using machine learning.

We had the privilege of interviewing Mr. Yosi Taguri, the founder and CEO of MissingLink, to learn more about the platform and how it enables more effective deep learning.

What are the biggest challenges that companies come across when trying to implement a robust machine learning/deep learning pipeline in their organization? How does it affect their business?

The biggest challenge, simply put, is that today's AI tools can't keep up with the amount of data being produced. And it's only going to get more challenging from here! As datasets continue to grow, they will require more and more compute power, which means we risk falling farther behind unless we change the tools we're using. While everyone is talking about the promise of machine learning, the truth is that today, assessing data is still too time-consuming and too expensive. Engineers are spending all their time managing the sheer volume of data, rather than actually learning from it and being empowered to make changes.

Let's talk about MissingLink.ai, the platform you and your team have launched for accelerating deep learning across businesses. Why the name MissingLink? What was the motivation behind launching this platform?

The name is actually a funny story, and it ties pretty neatly into why we created the platform. When we were starting out three years ago, deep learning was still a relatively new concept, and my team and I were working hard to master its intricacies. As engineers, we primarily worked with code, so being able to solve problems with data was a fascinating new challenge for us. We quickly realized that deep learning is really hard and moves very, very slowly. So we set out to solve the problem of how to build really smart machines really fast. We thought of it through the lens of software development.
Our goal was to accelerate from a glacial pace to building machine learning algorithms faster, because we felt that there was something missing, a missing link if you will.

MissingLink is part of the growing Samsung NEXT product development team. How does it feel? What role do you think MissingLink will play in Samsung NEXT's plans and vision going forward?

Samsung NEXT's broader mission is to help startups reach their full potential and achieve their goals. More specifically, Samsung NEXT discovers and backs the engineers, innovators, builders, and entrepreneurs who will help Samsung define the future of software and services. The Samsung NEXT product development team is focused on building software and services that take advantage of, and accelerate, opportunities related to some of the biggest shifts in technology, including automation, supply and demand, and interfaces. This will require hardware and software to seamlessly come together. Over the past few years, nearly all startups have been leveraging AI for some component of their business, yet practical progress has been slower than promised. MissingLink is a foundational tool that enables the progress of these big changes, helping entrepreneurs with great use cases for machine learning to accelerate their projects.

Could you give us the key features of MissingLink.ai that make it stand out from the other AI platforms available out there? How will it help data scientists and ML engineers build robust, efficient machine learning models?

First off, MissingLink.ai is the most comprehensive AI platform out there. It handles the entire deep learning lifecycle and all its elements, including code, data, experiments, and resources. I'd say that our top features include:

- Experiment management: see and compare the entire history of experiments. MissingLink.ai auto-documents every aspect.
- Data management: a unique data store tracks the data versions used in every experiment, streams data, caches it locally, and only syncs changes.
- Resource management: manages your resources with no extra infrastructure costs, using your AWS or other cloud credentials. It grows and shrinks your cloud resources as needed.

These features, together with our intuitive interface, really put data scientists and engineers in the driver's seat when creating AI models. Now they can have more control and spend less energy repeating experiments, giving them more time to focus on what is important.

Your press release announcing MissingLink states that "the actual process of delivering business outcomes currently takes too long and it is too expensive. MissingLink.ai was born out of a desire to fix that." Could you please elaborate on how MissingLink makes deep learning less expensive and more accessible?

Companies are currently spending too much time and devoting too many resources to the menial tasks that are necessary for building machine learning models. The more time data scientists spend on tasks like spinning up machines, copying files, and DevOps, the more money a company is wasting. MissingLink changes that through the introduction of something we're calling DeepOps, or deep learning operations, which allows data scientists to focus on data science and lets the machine take care of the rest. It's like DevOps, where the role is about making the process of software development more efficient and production-ready, but the difference is that no one has been filling this role, and it is different enough to be specific to the task of deep learning.
Today, instead of data scientists working on their models and advancing AI, they are spending their time doing this DeepOps work. MissingLink reduces load time and facilitates easy data exploration by eliminating the need to copy files, through data management in a version-aware data store.

Most businesses are moving their operations to the cloud these days, with AWS, Azure, GCP, and the like being their preferred cloud solutions. These platforms have sophisticated AI offerings of their own. Do you see AI platforms such as MissingLink.ai as competition to these vendors, or can the two work collaboratively?

I wouldn't call cloud companies our competitors; we don't provide the cloud services they do, and they don't provide the DeepOps service that we do. Yes, we are all trying to simplify AI, but we're going about it in very different ways. We can actually use a customer's public cloud provider as the infrastructure to run the MissingLink.ai platform. If customers provide us with their cloud credentials, we can even manage this for them directly.

Concepts such as reinforcement learning and deep learning for mobile are getting a lot of traction these days and have moved out of the research phase into the application/implementation phase. Soon, they might start finding extensive business applications as well. Are there plans to incorporate these tools and techniques into the platform in the near future?

We support all forms of deep learning, including reinforcement learning. On the deep learning for mobile side, we think the edge is going to be a big thing as more and more developers around the world are exposed to deep learning. We plan to support it early next year.

Currently, data privacy and AI ethics have become a focal point of every company's AI strategy. We see tech conglomerates increasingly coming under the scanner for ignoring these key aspects in their products and services. This is giving rise to an alternative movement in AI, with privacy- and ethics-centric projects like Deon, Vivaldi, and Tim Berners-Lee's Solid. How does MissingLink approach the topics of privacy, user consent, and AI ethics? Are there processes, tools, or principles in place in the MissingLink ecosystem or development teams that balance these concerns?

When we started MissingLink, we understood that data is the most sensitive part of deep learning. It is the new IP. Companies spend 80% of their time attending to data, refining it, tagging it, and storing it, and are therefore reluctant to upload it to a third-party vendor. We have built MissingLink with that in mind: our solution allows customers to simply point us to where their data is stored internally, and without moving it or having to access it as a SaaS solution would, we are able to help them manage it by enabling version management, just as they do with code. We can then stream the data directly to the machines that need it for processing and document which data was used, for reproducibility.

Finally, where do you see machine learning and deep learning heading in the near future? Do you foresee a change in the way data professionals work today? How will platforms like MissingLink.ai change the current way of working?

Right now, companies are creating their own machine learning infrastructure, and that doesn't make sense. Data professionals can and should be focusing on more important problems within their organizations.
Platforms like MissingLink.ai free data scientists from the grunt work it takes to keep the infrastructure running, so they can focus on bigger-picture issues. This work is not only more rewarding for engineers, but it also creates the unique value that companies need to compete. We're excited to help empower more data professionals to focus on the work that actually matters.

It was wonderful talking to you, and this was a very insightful discussion. Thanks a lot for your time, and all the best with MissingLink.ai!

- Michelangelo PyML: Introducing Uber's platform for rapid machine learning development
- Tesseract version 4.0 releases with new LSTM based engine, and an updated build system
- Baidu releases a new AI translation system, STACL, that can do simultaneous interpretation
“Data is the new oil but it has to be refined through a complex processing network” - Tirthajyoti Sarkar and Shubhadeep Roychowdhury [Interview]

Sugandha Lahoti
25 Oct 2018
7 min read
Data is the new oil, and it comes just as crude as unrefined oil. To do anything meaningful, whether modeling, visualization, or machine learning for predictive analysis, you first need to wrestle and wrangle with your data.

We recently interviewed Dr. Tirthajyoti Sarkar and Shubhadeep Roychowdhury, the authors of the course Data Wrangling with Python. They talked about their new course, why data wrangling matters, and why Python is the right tool for the job.

Key Takeaways

- Python boasts a large, broad library equipped with a rich set of modules and functions, which you can use to your advantage to manipulate complex data structures.
- NumPy, the Python library for fast numeric array computations, and Pandas, a package with fast, flexible, and expressive data structures, are helpful for working with "relational" or "labeled" data.
- Web scraping or data extraction becomes easy and intuitive with Python libraries such as BeautifulSoup4 and html5lib.
- Regex, the tiny, highly specialized programming language inside Python, can create patterns that help match, locate, and manage text for large data analysis and searching operations.
- Present interesting, interactive visuals of your data with Matplotlib, the most popular graphing and data visualization module for Python.
- Easily and quickly separate information from a huge amount of random data using Pandas, the preferred Python tool for data wrangling and modeling.

Full Interview

Congratulations on your new course 'Data Wrangling with Python'. What is this course all about?

Data science is the 'sexiest job of the 21st century' (at least until Skynet takes over the world). But for all the emphasis on 'data', it is the 'science' that makes you, the practitioner, valuable. To practice high-quality science with data, you first need to make sure it is properly sourced, cleaned, formatted, and pre-processed. This course teaches you the most essential basics of this invaluable component of the data science pipeline: data wrangling.

What is data wrangling, and why should you learn it well? "Data is the new oil," and it is ruling the modern way of life through incredibly smart tools and transformative technologies. But oil from the rig is far from usable; it has to be refined through a complex processing network. Similarly, data needs to be curated, massaged, and refined to become fit for use in intelligent algorithms and consumer products. This is called "wrangling," and (according to CrowdFlower) good data scientists spend almost 60-80% of their time on it, each day, on every project. It generally involves the following:

- Scraping the raw data from multiple sources (including the web and database tables)
- Imputing, formatting, and transforming, basically making the data ready for use in the modeling process (e.g., advanced machine learning)
- Handling missing data gracefully
- Detecting outliers
- Being able to perform quick visualizations (plotting) and basic statistical analysis to judge the quality of your formatted data

This course aims to teach you all the core ideas behind this process and to equip you with knowledge of the most popular tools and techniques in the domain. As the programming framework, we have chosen Python, the most widely used language for data science. We work through real-life examples, and at the end of this course, you will be confident handling a myriad of sources to extract, clean, transform, and format your data for further analysis or for exciting machine learning model building.
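As a small illustration of the wrangling steps listed above (formatting, handling missing data, spotting outliers), here is a hedged sketch in Pandas; the columns and values are invented for the example:

```python
import pandas as pd

# Hypothetical raw data; in practice this might come from a scrape or a database.
df = pd.DataFrame({
    "price": ["10.5", "12.0", None, "999.0", "11.2"],
    "city":  ["Paris", "paris", "PARIS", "Lyon", None],
})

# Format: coerce types and normalize inconsistent text values.
df["price"] = pd.to_numeric(df["price"], errors="coerce")
df["city"] = df["city"].str.title()

# Handle missing data gracefully: impute or drop, depending on the column.
df["price"] = df["price"].fillna(df["price"].median())
df = df.dropna(subset=["city"])

# Detect outliers with a quick statistical check (a simple z-score here;
# the threshold depends on sample size, 2-3 is more typical at scale).
z = (df["price"] - df["price"].mean()) / df["price"].std()
print(df[z.abs() > 1])  # rows worth a closer look
```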
Walk us through your thinking behind how you went about designing this course. What's the flow like? How do you teach data wrangling in this course?

The lessons start with a refresher on Python, focusing mainly on advanced data structures, and then quickly jump into the NumPy and Pandas libraries as fundamental tools for data wrangling. The course emphasizes why you should stay away from traditional ways of data cleaning, as done in other languages, and take advantage of specialized pre-built routines in Python. Thereafter, it covers how, using the same Python backend, one can extract and transform data from a diverse array of sources: the internet, large database vaults, or Excel financial tables. Further lessons teach how to handle missing or wrong data, and how to reformat it based on the requirements of a downstream analytics tool. The course emphasizes learning by real examples and showcases the power of an inquisitive and imaginative mind primed for success.

What other tools are out there? Why do data wrangling with Python?

First, let us be clear that there is no substitute for the data wrangling process itself, and no shortcut either. Data wrangling must be performed before the modeling task, but there is always a debate over whether to do it using an enterprise tool or directly using a programming language and its associated frameworks. There are many commercial, enterprise-level tools for data formatting and pre-processing which do not involve coding on the part of the user. Common examples of such tools are:

- General-purpose data analysis platforms such as Microsoft Excel (with add-ins)
- Statistical discovery packages such as JMP (from SAS)
- Modeling platforms such as RapidMiner
- Analytics platforms from niche players focusing on data wrangling, such as Trifacta, Paxata, and Alteryx

At the end of the day, it really depends on the organization's approach whether to use any of these off-the-shelf tools or to gain more flexibility, control, and power by using a programming language like Python for data wrangling. As the volume, velocity, and variety (the three V's of big data) of data undergo rapid changes, it is always a good idea to develop and nurture a significant amount of in-house expertise in data wrangling. This is done using fundamental programming frameworks, so that an organization is not beholden to the whims and fancies of any particular enterprise platform for a task as basic as data wrangling.

Some of the obvious advantages of using an open source, free programming paradigm like Python for data wrangling are:

- A general-purpose open source paradigm that puts no restriction on the methods you can develop for the specific problem at hand
- A great ecosystem of fast, optimized, open source libraries focused on data analytics
- Growing support for connecting Python to every conceivable data source type
- An easy interface to basic statistical testing and quick visualization libraries for checking data quality
- A seamless interface from the data wrangling output to advanced machine learning models; Python is the most popular language of choice for machine learning/artificial intelligence these days

What are some best practices for performing data wrangling with Python?

Here are five best practices that will help you in your data wrangling journey with Python. In the end, all you'll have is clean, ready-to-use data for your business needs.
- Learn the data structures in Python really well
- Learn and practice file and OS handling in Python
- Have a solid understanding of core data types and the capabilities of NumPy and Pandas
- Build a good understanding of basic statistical tests and a panache for visualization
- Apart from Python, if you want to master one more language, go for SQL

What are some misconceptions about data wrangling?

Though data wrangling is an important task, there are certain myths associated with it that developers should be cautious of, such as:

- Data wrangling is all about writing SQL queries
- Knowledge of statistics is not required for data wrangling
- You have to be a machine learning expert to do great data wrangling
- Deep knowledge of programming is not required for data wrangling

Learn about these misconceptions in detail; we hope that busting them helps you realize that data wrangling is not as difficult as it seems. Have fun wrangling data!

About the authors

Dr. Tirthajyoti Sarkar works as a Sr. Principal Engineer in the semiconductor technology domain, where he applies cutting-edge data science/machine learning techniques for design automation and predictive analytics. Shubhadeep Roychowdhury works as a Sr. Software Engineer at a Paris-based cybersecurity startup. He holds a Master's degree in Computer Science from West Bengal University of Technology and certifications in Machine Learning from Stanford.

- 5 best practices to perform data wrangling with Python
- 4 misconceptions about data wrangling
- Data cleaning is the worst part of data analysis, say data scientists
“Git, like all other version control tools, exists to solve for one problem: change” - Joseph Muli and Alex Magana [Interview]

Packt Editorial Staff
09 Oct 2018
5 min read
An unreliable versioning tool makes product development a herculean task. Creating and enforcing checks and controls for the introduction, scrutiny, approval, merging, and reversal of changes in your source code are effective ways to ensure a secure development environment. Git and GitHub offer constructs that enable teams to conduct version control and collaborative development effectively. When properly utilized, Git and GitHub promote agility and collaboration across a team, and in doing so, enable teams to focus and deliver on their mandates and goals.

We recently interviewed Joseph Muli and Alex Magana, the authors of the Introduction to Git and GitHub course. They discussed the various benefits of Git and GitHub while sharing some best practices and myths.

Author Bio

Joseph Muli loves programming, writing, teaching, gaming, and travelling. Currently, he works as a software engineer at Andela and Fathom, and specializes in DevOps and Site Reliability. Previously, he worked as a software engineer and technical mentor at Moringa School. You can follow him on LinkedIn and Twitter.

Alex Magana loves programming, music, adventure, writing, reading, and architecture, and is a gastronome at heart. Currently, he works as a software engineer with BBC News and Andela. Previously, he worked as a software engineer with SuperFluid Labs and Insync Solutions. You can follow him on LinkedIn or GitHub.

Key Takeaways

- Securing your source code with version control is effective only when you do it the right way. Understanding the best practices used in version control can make it easier for you to get the most out of Git and GitHub.
- GitHub comes with an elaborate UI. Learning how to navigate the GitHub UI and installing the Octotree browser extension will immensely help your development process.
- GitHub is a powerful tool equipped with useful features. Exploring the feature branch workflow and other features such as forking, submodules, and rebasing will enable you to make optimal use of GitHub.
- The more elaborate the tools, the more time they can consume if you don't know your way around them. Master the commands for debugging and maintaining a repository to speed up your software development process.
- Keep your code updated with the latest changes using CircleCI or TravisCI, continuous integration tools that integrate with GitHub.
- The struggle isn't over until the code is successfully released to production. With GitHub's release management features, you can learn to complete hiccup-free software releases.

Full Interview

Why is Git important? What problem is it solving?

Git, like all other version control tools, exists to solve for one problem: change. This has been a recurring issue, especially when coordinating work in teams, both local and distributed; handling it well is a particular strength of Git through hubs such as GitHub, Bitbucket, and GitLab. The tool was created by Linus Torvalds in 2005 to aid development and contribution on the Linux kernel. However, this doesn't limit Git to code: any product or project that has multiple contributors, or that requires release management and versioning, stands to gain an improved workflow through Git. This also puts into perspective that there is no single standard; it's advisable to use what best suits your product(s).

What other similar solutions or tools are out there? Why is Git better?

As mentioned earlier, other tools do exist to aid in version control.
There are a lot of factors to consider when choosing a version control system for your organization, depending on product needs and workflows. Some organizations have in-house versioning tools because those suit their development. Others, for reasons such as privacy, security, or support, may look for integration with third-party and in-house tools. Git primarily exists to provide a fast, distributed version control system that is not tied to a central repository, hub, or project. It is highly scalable and portable. Other version control tools include Apache Subversion, Mercurial, and Concurrent Versions System (CVS).

How can Git help developers? Can you list some specific examples (real or imagined) of how it can solve a problem?

A simple way to describe Git's indispensability is that it enables fast, persistent, and accessible storage. This means that changes to code throughout a product's life cycle can be viewed and updated on demand, each with simple and compact commands to enable the process. Developers can track changes from multiple contributors, use blame to trace introduced bugs, and revert where necessary. Git enables multiple workflows that align with practices such as Agile, for example feature branch workflows, as well as forking workflows for distributed contribution, for instance to open source projects.

What are some of the best tips for using Git and GitHub?

These are some of the best practices you should keep in mind while learning or using Git and GitHub:

- Document everything
- Utilize the README.md and wikis
- Keep simple and concise naming conventions
- Adopt naming prefixes
- Correspond a PR and branch to a ticket or task
- Organize and track tasks using issues
- Use atomic commits

Editor's note: To explore these tips further, read the authors' post '7 tips for using Git and GitHub the right way'.

What are the myths surrounding Git and GitHub?

Just as every solution or tool has its own positives and negatives, Git is also surrounded by myths one should be aware of, some of which are:

- Git is GitHub
- Backups are equivalent to version control
- Git is only suitable for teams
- To effectively use Git, you need to learn every command

Editor's note: To explore these myths further, read the authors' post '4 myths about Git and GitHub you should know about'.

- GitHub's new integration for Jira Software Cloud aims to provide teams a seamless project management experience
- GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub
- GitHub introduces 'Experiments', a platform to share live demos of their research projects
Discussing SAP: Past, present and future with Rehan Zaidi, senior SAP ABAP consultant [Interview]

Savia Lobo
04 Oct 2018
11 min read
SAP, the market-leading enterprise software company, recently became the first European technology company to create an AI ethics advisory panel, announcing seven guiding principles for AI development. These guidelines revolve around recognizing AI's significant impact on people and society. Also, last week at the Microsoft Ignite conference, SAP, in collaboration with Microsoft and Adobe, announced the Open Data Initiative. This initiative aims to help companies better govern their data and support privacy and security initiatives. For SAP, it will bring further advancements to its SAP C/4HANA and S/4HANA platforms. All of these actions emphasize SAP's focus on transforming itself into a responsible data use company.

We recently interviewed Rehan Zaidi, a senior SAP ABAP consultant. Rehan became one of the youngest authors on SAP worldwide when he was published in the prestigious SAP Professional Journal in 2001. He has written a number of books and over 20 articles and professional papers for the SAP Professional Journal USA and HR Expert USA, part of the prestigious sapexperts.com library. Following are some of his views on the SAP community and products, and how the SAP suite can benefit people, including budding professionals, developers, and business professionals.

Key takeaways

- SAP HANA was introduced to run jobs up to 200 times faster while maintaining efficiency.
- The introduction of SAP Leonardo brought in the next wave of AI, machine learning, and blockchain services via the SAP Cloud Platform and other standalone projects.
- Experienced ABAP developers should look into getting certified in one of the newest technologies, such as HANA and Fiori.
- SAP ERP Central Component (SAP ECC) is the on-premises version of SAP, usually implemented in medium and large-sized companies. For smaller companies, SAP offers its Business One ERP platform.
- SAP Fiori is a line of SAP apps meant to address criticisms of SAP's user experience and UI complexity.

Q.1. SAP is one of the most widely used ERP tools. How has it evolved over the past few years, from the traditional on-premise model, to keep up with cloud-based trends?

Yes. Let me cover the main points:

- SAP started as a company in 1973, and its first product, SAP R/98, was launched.
- In 1979, SAP launched the R/2 design. It had most of the typical processes, such as accounting, manufacturing, supply chain logistics, and human resources.
- Then came R/3, which brought the more efficient three-tier architecture (application server, database, and presentation (GUI)), with more new modules and functionalities added. It was a smart system, fully configurable by functional consultants.
- This was further enhanced with NetWeaver, which brought integration with the internet and SOA capability. SAP introduced the ECC 5 and subsequently the ECC 6 release.
- Mobility was later added, letting mobile applications running on devices access business processes in SAP and execute them. Both displaying and updating SAP data were possible.
- The HANA system was then introduced. It is very fast and efficient, allowing you to run jobs up to 200 times faster than before.
- Cloud systems then became available, letting customers connect to the SAP Cloud Platform via their on-premise systems and gain access to services such as Mobile Service for app protection and Mobile Service for SAP Fiori, among others.
- SAP Leonardo was finally introduced as a way of bringing in next-gen AI, machine learning, and blockchain services via standalone projects and the SAP Cloud Platform.

Q.2. Being a Senior ABAP Programming Analyst, what does your typical day look like?

Ahh. Well, a typical day! No two days are the same for us. Each morning we find ourselves confronting a problem whose solution has to be devised: a different problem every day, followed by a unique solution. We spend hours and hours finding issues in custom-developed programs. We learn about making custom programs run faster. We get requirements from a wide variety of users. They may be in Human Resources, Materials Management, Sales and Distribution, Finance, and so on. A requirement may pertain to an entirely new report or a dialog program with a set of screens. We even build Fiori applications (using a JavaScript-based library) that may be accessible from a PC or a mobile device. I am even asked to teach junior or trainee SAP developers a wide variety of technologies.

Q.3. Can you tell us about the learning curve for SAP? There are different job profiles related to SAP, ranging from executives to consultants and managers. How does each of them learn or stay updated on SAP?

Yes, this is a very important question. A simple answer is that "there is no end to learning, and at any stage, learning is never enough," no matter which field within SAP you belong to. Things are constantly changing. The more you read and the more you work, the more you feel there is a lot still to be done. You need to constantly update yourself and learn about new technologies. There is plenty of material available on the internet. I usually refer to the official SAP website for the newest courses available. They even tell you which backgrounds (managers, developers) the courses are relevant for. I also go to open.sap.com for new courses.

Whether they are consultants (functional and technical) or managers, all of them need to keep themselves up to date. They must take new courses and learn about innovation in their technology. For example, HR consultants must now study and learn about SuccessFactors. Even the integration of SAP HANA with other software is an interesting topic today. There are Fiori- and HANA-related courses for Basis consultants and corresponding tracks for developers. Some know-how of newer technologies is also important for managers and executives, since your decisions may need to be adapted based on the underlying technologies running in your systems. You should know the pros and cons of all technologies in order to make the correct move for your business.

Q.4. Many believe an SAP certification improves their chances of getting jobs at competitive salaries. How important are certifications? Which SAP certifications should a budding developer look forward to obtaining?

When I did my certification in October 2000, I used to think that certifications were not important. But now I have realized that, yes, they make a difference. Certifications are definitely a plus point. They enhance your CV and give you an edge over those who are not certified. I have found some job adverts that specifically mention that certification is required or advantageous. However, they are only useful when you have at least four years of experience. For a fresh graduate, a certification might not be very useful.
A useful SAP consultant/developer is a combination of a solid base of knowledge along with a touch of experience. I suggest all my juniors go for certifications in order to strengthen their concepts, including:

- C_C4C30_1711 - SAP Certified Development Associate – SAP Hybris Cloud for Customer
- C_CP_11 - SAP Certified Development Associate - SAP Cloud Platform
- C_FIORDEV_20 - SAP Certified Development Associate - SAP Fiori Application Developer
- C_HANADEV_13 - SAP Certified Development Associate - SAP HANA
- C_SMPNHB_30 - SAP Certified Development Associate - SAP Mobile Platform Application Development (SMP 3.0)
- C_TAW12_750 - SAP Certified Development Associate - ABAP with SAP NetWeaver 7.50
- E_HANAAW_12 - SAP Certified Development Specialist - ABAP for SAP HANA

For experienced ABAP developers, I suggest getting certified in the newest technologies, such as HANA and Fiori. They may help you get a project quicker and/or at a better rate than others.

Q.5. The present buzz is around AI, machine learning, IoT, big data, and many other emerging technologies. SAP Leonardo works on making it easy to create frameworks for harnessing the latest tech. What are your thoughts on SAP Leonardo?

Leonardo is SAP's response in the AI platform space. It should be an important part of SAP's offerings, mostly built on the SAP Cloud Platform. SAP has relaunched Leonardo as a digital innovation system. As I understand it, Leonardo allows customers to take advantage of artificial intelligence (AI), machine learning, advanced analytics, and blockchain on their company's data. SAP gives customers an efficient way of using these technologies to solve business issues. It allows you to build a system which, in conjunction with machine learning, searches for results that can be combined with SAP transactions.

The benefit of SAP Leonardo is that all the company's data is available right in the SAP system. Using Leonardo, you have access to all human resources data and any other module data residing in the ERP system. Any company from any industry can make use of Leonardo; it works equally well for retailers, food and beverage companies, and medical industries, as well as for organizations in manufacturing and automotive. An approach that works for one company in a given industry can be applied to other companies in that industry.

Suppose a company operates sensors. It can link the sensor data with the data in its SAP systems, and even link that with other data, and then use Leonardo's capabilities to solve problems or optimize performance. When a problem for one company in an industry is solved, a similar solution may be applied to the entire industry. Yes, in my opinion, Leonardo has a bright future and should be successful. For more information about Leonardo success stories, I encourage readers to check out the SAP Leonardo Internet of Things Portfolio & Success Stories.

Q.6. You are currently writing a book on ABAP Objects and design patterns, expected to be published by the end of 2018. What was the motivation behind writing it? Can you tell us more about ABAP Objects? What should readers expect from this book?

ABAP and ABAP Objects have gone through tremendous changes over time, both in features (and capability) and in syntax. It is the most unsung topic of today. It has been around for quite a while, but most developers are not aware of it or are not comfortable enough to use it in their day-to-day work. ABAP is a vast community, with developers working in a variety of functional areas.
The concepts covered in the book will be generic, allowing learners to apply them to their particular area. This book will cover ABAP Objects (the object-oriented extension of the SAP language ABAP) in the latest release of SAP NetWeaver 7.5 and explain the newest advancements. It will start with the programming of objects in general and the basics of the ABAP language that a developer needs to know to get started. The book will cover the most important topics needed for everyday support jobs and for succeeding in projects.

The book will be goal-directed, not a collection of theoretical topics. It won't just touch the surface of ABAP Objects but will go in depth, from building the basic foundation (e.g., classes and objects created locally and globally) to intermediate areas (e.g., ALV programming, method chaining, polymorphism, simple and nested interfaces), and finally to advanced topics (e.g., shared memory, persistent objects). Best practices for writing better programs via ABAP Objects will be shown at the end. No long stories, no boring theory, only pure technical concepts followed by simple examples, using code pertaining to football players. Everything will be presented in a clear, interesting manner, and readers will learn tips and tricks they can apply immediately.

Learners, students, new SAP programmers, and SAP developers with some experience can use this as an alternative to expensive training books. The book will also save readers time searching the internet for help writing new programs. Knowing ABAP Objects is key for ABAP developers who want to move forward these days. Everything from simple ALV reporting requirements, to defining and catching exceptional situations that may occur in a program, to the BAdI enhancement technology that lets you enhance standard SAP applications, requires a sound understanding of ABAP Objects. In addition, Web Dynpro application development, the Business Object Processing Framework, and even OData service creation to expose data that can be used by Fiori apps all demand solid knowledge of ABAP Objects.

- How to perform predictive forecasting in SAP Analytics Cloud
- Popular Data sources and models in SAP Analytics Cloud
- Understanding Text Search and Hierarchies in SAP HANA
“Deep meta reinforcement learning will be the future of AI where we will be so close to achieving artificial general intelligence (AGI)”, Sudharsan Ravichandiran

Sunith Shetty
13 Sep 2018
9 min read
A McKinsey report predicts that artificial intelligence techniques, including deep learning and reinforcement learning, have the potential to create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries. Reinforcement learning (RL) is an increasingly popular technique for enterprises that deal with large, complex problem spaces. It enables agents to learn from their own actions and experiences. Working in an interactive environment, they use a trial-and-error process to find the optimal result. Reinforcement learning is at the cutting edge right now, and it has finally reached a point where it can be applied to real-world industrial systems.

We recently interviewed Sudharsan Ravichandiran, a data scientist at param.ai and the author of the book Hands-On Reinforcement Learning with Python. Sudharsan takes us on an insightful journey, explaining why reinforcement learning is trending and becoming so popular lately. He talks about the positive contributions of RL in various fields, such as the gaming industry, robotics, inventory management, manufacturing, and finance.

Author's Bio

Sudharsan Ravichandiran, author of the book Hands-On Reinforcement Learning with Python, is a data scientist, researcher, and YouTuber. His research focuses on practical implementations of deep learning and reinforcement learning, including natural language processing and computer vision. He used to be a freelance web developer and designer and has designed award-winning websites. He is an open source contributor and loves answering questions on Stack Overflow. You can follow his open source contributions on GitHub.

Key Takeaways

- Reinforcement learning adoption in the community has increased exponentially because of the augmentation of reinforcement learning with state-of-the-art deep learning algorithms. It is extensively used in the gaming industry, robotics, inventory management, and finance. You can see more and more research papers and applications leading to full-fledged self-learning agents.
- One of the common challenges faced in RL is safe exploration. To avoid this problem, one can use imitation learning (learning from human demonstration) to arrive at a well-optimized solution.
- Deep meta reinforcement learning will be the future of artificial intelligence, where we will implement artificial general intelligence (AGI) to build a single model that masters a wide variety of tasks. Each model will thus be capable of performing a wide range of complex tasks.
- Sudharsan suggests that readers learn to code the algorithms from scratch instead of using libraries. It will help them understand and implement complex concepts far better in their research work or projects.

Full Interview

Reinforcement learning is at the cutting edge right now, with many of the world's best researchers working on improving the core algorithms. What do you think is the reason behind RL's success, and why is RL getting so popular lately?

Reinforcement learning has been around for many years; the reason it is so popular right now is that it has become possible to augment reinforcement learning with state-of-the-art deep learning algorithms. With deep reinforcement learning, researchers have obtained better results. Specifically, reinforcement learning started to grow on a massive scale after the reinforcement learning agent AlphaGo beat the world champion at the board game Go.
Also, deep reinforcement learning algorithms help us take a step closer to artificial general intelligence, which is the true AI.

Reinforcement learning is a pretty complex topic to wrap your head around. What got you into the RL field? What keeps you motivated to keep working on these complex research problems?

I used to be a freelance web developer during my university days. I had a paper called Artificial Intelligence in my spring semester; it really got me intrigued and made me want to explore the field more. Later on, I got invited to a Microsoft data science conference, where I met many experts and learned much more about the field. All of this intrigued me and made me venture into AI.

The things that motivate and keep me excited are the advancements happening in the field of reinforcement learning lately. DeepMind and OpenAI are doing a great job and contributing massively to the RL community. Recent advancements, like human-like robot hand control that manipulates physical objects with unprecedented dexterity, imagination-augmented agents which can imagine and make decisions, and world models in which agents have the ability to dream, excite me and keep me going.

Can you please list three popular problem areas where RL is majorly used? Also, what are the three most pressing challenges faced while implementing RL in real life? As a developer/researcher, how are you gearing up to solve them?

RL is predominantly used in the gaming industry, robotics, and inventory management. There are several challenges in reinforcement learning; for instance, safe exploration. Reinforcement learning is basically a trial-and-error process, where agents try several actions to find the optimal one. Consider an agent learning to navigate or learning to drive a car. An agent doesn't know which action is better unless it tries it. The agent also has to be careful not to select actions that are harmful to others or to itself, for example, colliding with other vehicles. To avoid this problem, we can use imitation learning, or learning from a human demonstration, where the agent learns directly from a human supervisor. Apart from this, various evolutionary strategies are used to solve the challenges faced in RL.

There have been a few notable developments in RL from the OpenAI and DeepMind teams that have been widely adopted both in research and in real-world applications. What are some cutting-edge techniques you foresee getting public attention in RL in 2018 and the near future?

Great things are happening in RL research each and every day. Deep meta reinforcement learning will be the future of AI, where we will be very close to achieving artificial general intelligence (AGI). Instead of creating different models to perform different tasks, with AGI a single model can master a wide variety of tasks and mimic human intelligence.

Gaming and robotics or simulations are the two popular domains where reinforcement learning is extensively used. In what other domains does RL find important use cases, and how?

Manufacturing: In manufacturing, intelligent robots are used to place objects in the right position. Whether a robot fails or succeeds in placing an object correctly, it remembers the action and trains itself to do the task with greater accuracy. The use of intelligent agents will reduce labor costs and result in better performance.

Inventory management: RL is extensively used in inventory management, which is a crucial business activity.
Some of these activities include supply chain management, demand forecasting, and handling several warehouse operations (such as placing products in warehouses to manage space efficiently).

Infrastructure management: RL is also used in infrastructure management. For instance, Google researchers at DeepMind have developed RL algorithms for efficiently reducing the energy consumption of their own data centers.

Finance: RL is widely used in financial portfolio management, the process of constantly redistributing a fund into different financial products, and also in predicting and trading in commercial transaction markets. JP Morgan has successfully used RL to provide better trade execution results for large orders.

Your recently published Hands-On Reinforcement Learning with Python has received a very positive response from readers. What are some key challenges in learning reinforcement learning, and how does your book help?

One of the key challenges in learning reinforcement learning is the lack of intuitive examples and a poor grasp of RL fundamentals along with the required math. The book addresses these challenges by explaining all the reinforcement learning concepts from scratch and gradually taking readers to advanced concepts, exploring them one at a time. It also explains all the required math step by step and intuitively, along with plenty of examples. My intention behind adding multiple examples and code to each chapter was to help readers understand the concepts better; this also helps them understand when to apply a particular algorithm. The book also works as a perfect reference for beginners who are new to reinforcement learning.

Are there any prerequisites needed to get the most out of the book? What should readers keep in mind while developing their own self-learning agents?

Readers who are familiar with machine learning and Python basics can easily follow the book. It starts by explaining reinforcement learning fundamentals and algorithms with applications, then takes the reader through deep learning algorithms, followed by advanced deep reinforcement learning algorithms. While creating self-learning agents, one should be careful in designing reward and goal functions.
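To make the trial-and-error idea and the role of the reward function concrete, here is a minimal, hedged sketch of tabular Q-learning on a toy problem. The environment, states, and rewards are invented for illustration and are not taken from the book:

```python
import random

# A toy corridor: states 0..4, start at 0, goal (reward +1) at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left or right

# Q-table: estimated future reward for each (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0  # the designed reward function
        # Q-learning update rule.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print(Q)  # moving right should now score higher than moving left in every state
```

The reward line is the part Sudharsan warns about: a poorly designed reward or goal function will be optimized just as diligently as a good one.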
What in your opinion are the three to five major takeaways from your book?

The book serves as a solid go-to resource for anyone who wants to venture into deep reinforcement learning. It is completely beginner friendly and takes readers to the advanced concepts gradually. By the end of the book, readers will have mastered reinforcement learning, deep learning, and deep reinforcement learning, along with their applications in TensorFlow and all the required math.

Would you like to add anything more for our readers?

I would suggest that readers code the algorithms from scratch instead of using libraries; it will help them understand the concepts far better. I would also like to thank each and every reader for making this book a huge success. My best wishes to them for their reinforcement learning projects.

If you found this interview interesting, make sure you check out other insightful articles on reinforcement learning:

- Top 5 tools for reinforcement learning
- This self-driving car can drive in its imagination using deep reinforcement learning
- Dopamine: A Tensorflow-based framework for flexible and reproducible Reinforcement Learning research by Google
- OpenAI builds reinforcement learning based system giving robots human like dexterity
- DeepCube: A new deep reinforcement learning approach solves the Rubik's cube with no human help

What Should We Watch Tonight? Ask a Robot, says Matt Jones from OVO Mobile [Interview]

Neil Aitken
18 Aug 2018
11 min read
Netflix, the global poster child for streamed TV and the use of big data to inform the programs it develops, has shown steady customer growth for several years now. Recently, the company revealed that it would be shutting down the user reviews that have been so prominent in its media catalogue interface for so long. In the background, media and telco are merging. AT&T, the telco which recently undertook one of the biggest deals in history, acquired Time Warner and wants HBO to become like Netflix. Telia, a Swedish telecommunications company, bought Bonnier Broadcasting in late July 2018.

The video content landscape has changed a great deal in the last decade. Everyone in the entertainment game wants to move beyond broadcast TV and use data to develop content their users will love and which will give their customer base more variety. That means they can look to data to charge higher subscription rates per user, experiment with tiered subscriptions, decide to localize global content, globalize local content, and more.

These changes raise two key questions. First, are we heading for a world in which AI and ML based algorithms drive what we watch on TV? And second, are the days of human recommendation being quietly replaced by machine recommendations over which the user has no control?

As you know, Netflix is acquiring customers fast. (Source: Statista)

To get an insider's view on the answers to those questions, I sat down with Matt Jones of OVO Mobile, one of Australia's fastest growing telecommunications companies. OVO offers its customers a unique point of difference: streaming video sports content included in a phone plan. OVO has bought the rights to a number of niche sports in Australia which weren't previously available, and now offers free OTA (Over The Air) digital content for fans of 'unusual' sports like drag racing or gymnastics. OTA content is anything delivered to a user's phone over a wireless network. In OVO's case, the data used to transport the video content is free, which means customers don't have to worry about paying more for mobile data to watch it, a key concern for users. OVO Mobile and Netflix are in very similar businesses, and Matt has a unique point of view about how artificial intelligence and machine learning will impact the world of telco and media.

Key takeaways

- Our media consumption habits have been changed by the ubiquitous mobile internet, an always-on and connected younger generation, better mobile hardware, improved network performance and capabilities, and the need for control over content choices.
- Digitization allows new features, some of which people have proven to love: binge watching, screening out advert breaks, and time shifting.
- The key to understanding the value of ML and AI is not the statistical or technical models that enable it, but the way AI is used to improve the experience your digital customers have with you.
- AI in digital and app experiences personalizes what users see in a way old media never could.
- Content producers use the information they have on us, about the programs we watch, when we watch them, and for how long, to personalise content, notifications, and advertising.
- The ways AI and ML contribute to the delivery of online media are endless: personalisation, context awareness, notification management, and more.
Social acceptance of media delivered to users on mobile phones is what's driving change

A number of overlapping factors are driving changes in how we engage with content. Social acceptance of the internet, and of mobile access to it, as a core part of life is one key enabler. From a technology perspective, things have changed too: smartphones now have bigger, higher resolution screens than ever before, and they're with us all the time.

Jones believes this change is part of a cultural evolution in how we relate to technology. He says, "There has also been a generational shift which has taken place. Younger people are used to the small screen being the primary device. They're all about control, seeking out their interests and consuming these, as opposed to previous generations which were used to mass content distribution from traditional channels like TV."

Other factors include network performance and capability, which have improved dramatically in recent years. Data speeds have grown exponentially, from 3G networks, launched less than 15 years ago, which could support only stuttering low resolution video, to 4G and 4.5G enabled networks that can support live streaming of high definition TV. Mobile data allowances in plans, and offers from some phone companies to provide content 'data free' (as OVO does), have also driven uptake. Finally, people want convenience, and digital offers it in a way people have never experienced before. Digitization allows new features, some of which people have proven to love: binge watching, screening out advert breaks and time shifting.

What part can AI / machine learning play in the delivery of media online?

Artificial intelligence (AI) is already part of 85% of our online interactions, and Gartner suggests it will be part of every product in the future. The key to understanding the value of ML and AI is not the statistical or technical models that enable it; it's the way AI is used to improve the experience your digital customers have with you. When you find a new band on Spotify, when YouTube recommends a funny video you'll like, when Amazon shows you other products to consider alongside the one you just put in your basket, that's AI working to improve your experience.

"Over The Top content is exploding. Content owners are going direct to consumer and providing fantastic experiences for their users. What's changing is the use of AI in digital / app experiences to personalize what users see in ways old media never could," says Matt.

Matt's video content recommendation app, for example, 'learns' not just what you like to watch but also the times you are most likely to watch it, and then prompts users with a short video to entice them to watch. The analytics show just how effective this is: the app can be up to five times more successful at encouraging customers to watch content than approaches that don't use it.

"The list of ways that AI / ML contributes to the delivery of media online is endless. Personalisation, context awareness, notification management... endless."

By offering users recommendations on content they'll love, producers can now engage more customers for longer.
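As a rough illustration of that kind of time-aware recommendation logic, here is a hypothetical Python sketch. The viewing history, genres, and scoring formula are all invented for this example; this is not OVO's actual system.

```python
from datetime import datetime

# Hypothetical viewing history: (genre, hour of day watched, minutes watched).
history = [
    ("motorsport", 20, 55),
    ("motorsport", 21, 40),
    ("gymnastics", 8, 15),
    ("motorsport", 20, 60),
]

def score(genre, hour, history):
    """Weight a user's affinity for a genre by how often they
    watch anything at this hour of the day."""
    genre_minutes = sum(m for g, _, m in history if g == genre)
    hour_hits = sum(1 for _, h, _ in history if h == hour)
    return genre_minutes * (1 + hour_hits)

catalogue = ["motorsport", "gymnastics", "drag racing"]
now = datetime.now().hour
ranked = sorted(catalogue, key=lambda g: score(g, now, history), reverse=True)
print(ranked)  # genres to surface first, given the current time of day
```

A production system would learn these weights from millions of events rather than counting them directly, but the principle of combining what a user watches with when they watch it is the same.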
Content producers use the information they have on us, about the programs we watch, when we watch them and for how long we watch, to:

Personalise at volume: Apps used to deliver content can personalise what's shown first to users based on a number of variables known about them, including the sort of context awareness that is relatively easy to obtain on mobile devices. Ultimately, every AI customer experience improvement (including the examples that follow) is designed to automate the process of providing each individual with something they uniquely want. Automation means that can be done at scale, with every customer treated uniquely.

Notification management: AI that tracks the success of notifications, and critically acknowledges when they are not helpful to the user, can be employed to alert users only about things they want to know. These AI solutions provide updates to users based on their preferences and avoid serving irrelevant information.

Content discovery and re-engagement: AI and ML can be used to provide recommendations that expose customers to content they would not otherwise find, but which they are likely to value.

Better, more relevant advertising: Advertising which targets a legitimately held, real customer need is actually useful to viewers. Better analytics for AI can assist in targeting micro-segments with ads which contain information customers will value. Lattice, a business insights tool provider, offers its 'Lattice Engine' product, which combines information held in multiple cloud-based locations and uses AI to automatically assign customers to the segment that suits them. Those data are then provided to a customer's eCommerce site and other channel interactions, and used to offer content which will help them convert better.

Developing better segments: Raw data on real customers can be gathered from digital content systems to inform above-the-line marketing in the real, non-digital world. Big data analytics can now be used with accurate segmentation for local area marketing, and to tie together digital and retail customer experiences. McKinsey suggests that 36% of companies are actively pursuing strategies driven from their big data reserves, and advises its clients that big data can be used to better understand and grow customer lifetime values. (A toy segmentation sketch follows this list.)

In the future, deep linking for calls to action: Some digital content is provided in a form that lets viewers find out more about an item on screen. Providing a way to deep link from a video screen into a shopping cart pre-populated with something just seen on screen is an exciting possibility for the future. Cutting steps out of the buying process, so that users can move from a content app to buying a product they've seen on screen, is likely to become a big business. Deep linking raises the value of the content shown to the degree that it raises sales of the products included.
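Here is the toy segmentation sketch referred to above: a hypothetical example of grouping customers into behavioural segments with k-means clustering. The customer data and features are made up, and this is an illustration of the general idea, not Lattice's product.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-customer features: [hours watched per week, spend per month].
customers = np.array([
    [1.0, 5.0], [1.5, 6.0],      # light viewers, low spend
    [10.0, 30.0], [12.0, 28.0],  # heavy viewers, high spend
    [5.0, 15.0], [6.0, 14.0],    # mid-tier
])

# Automatically assign each customer to one of three behavioural segments.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
segments = kmeans.fit_predict(customers)

for features, segment in zip(customers, segments):
    print(features, "-> segment", segment)
```

Each segment can then be targeted with its own content, notifications, and offers, which is the automation-at-scale idea running through the list above.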
Bringing it all together

Jones believes that those who invest big in AI and machine learning, and, of those, the ones who find a way to draw out insights and act upon them, will be the ultimate victors. "The big winners are going to be the people who connect a fan with content they love and use AI and ML to deliver the best possible experience. It's about using all the information you have about your users and acting on it," said Jones. That commercial incentive is already driving behavior.

AI and ML already provide personalized content recommendations, and progressive content companies, including Matt's, are working on building AI into every facet of every digital experience you have. As to whether AI is entirely replacing social influence, I don't think that's the case. The research says people are still four times more likely to watch a video if it is recommended to them by a friend. Reviews have always been important to presales on the internet, and that applies to TV shows too: people want to know what real users felt when they used a product. If they can't get reviews from Netflix, they will simply open a new tab and google for them while deciding what to watch on Netflix.

About Matt Jones

Matt is an industry disruptor, launching the first-of-its-kind media and telco brand OVO Mobile in 2015. Matt is the driving force behind the convergence of new media and telco, bringing together telecommunications with media rights and digital broadcast for mass distribution. OVO is a new type of telco, delivering content that fans are passionate about, streamed live on their mobile or tablet, unlimited and data free. OVO has secured exclusive three-year-plus digital broadcast and distribution rights for a range of content owners, including Supercars, World Superbikes, 400 Thunder Drag Series, Audi Australia Racing and Gymnastics Australia, with a combined Australian audience estimated at over 7 million.

OVO is a multi-award winner, including the Money Magazine Best of the Best Award 2017 for high usage, and has featured on A Current Affair, Sunrise, The Today Show, Channel 7 News, Channel 9 News and multiple radio shows for its world-first kids' mobile phone plan with built-in cyber security protection. As OVO CEO, Matt was nominated for Start-Up Executive of the Year at the CEO Magazine Awards 2017 and was awarded runner-up. The award recognises the achievements of leaders and professionals, and the contributions they have made to their companies across industry-specific categories.

Matt holds a Bachelor of Arts (BA) from the University of Tasmania and regularly speaks at telco, sports marketing and media forums and events. Matt has held executive leadership roles at leading telecommunications brands including Telstra (Head of Strategy, Operations), Optus, Vodafone, AAPT and Telecom New Zealand, as well as global management consulting firms including BearingPoint. Matt lives on the northern beaches of Sydney with his wife Mel and daughters Charlotte and Lucy.

- How to earn $1m per year? Hint: Learn machine learning
- We must change how we think about AI, urge AI founding fathers
- Alarming ways governments are using surveillance tech to watch you

Blockchain can solve tech's trust issues - Imran Bashir

Richard Gall
05 Jun 2018
4 min read
The hype around blockchain has now reached fever pitch. With the Bitcoin bubble having all but burst, the tech world, and beyond, is starting to think more creatively about how blockchain can be applied. We've started to see blockchain applied in a huge range of areas, and that's likely to grow over the next year or so. We certainly weren't surprised to see blockchain rated highly by many developers working in a variety of fields in this year's Skill Up survey: around 70% of all respondents believe that blockchain is going to prove revolutionary.

Read the Skill Up report in full. Sign up to our weekly newsletter and download the PDF for free.

To help us make sense of the global enthusiasm and hype for blockchain, we spoke to blockchain expert Imran Bashir. Imran is the author of Mastering Blockchain, so we thought he could offer some useful insights into where blockchain is going next. He didn't disappoint.

Respondents to the Skill Up survey said that blockchain would be revolutionary. Do you agree? Why?

I agree. The fundamental issue that blockchain solves is that of trust. It enables two or more mutually distrusting parties to transact with each other without the need to establish trust or involve a trusted third party. This phenomenon alone is enough to start a revolution. Generally, we perform transactions in a centralised and trusted environment, which is the norm and works reasonably well, but think about a system where you do not need trust or a central trusted third party to do business. This paradigm fundamentally changes the way we conduct business and results in significant improvements such as cost savings, security and transparency.

Why should developers learn blockchain?

Do you think blockchain technology is something the average developer should be learning? Why?

Any developer should learn blockchain technology, because in the next year or so there will be high demand for skilled blockchain developers and engineers. Even now there are many unfilled jobs; it is said that there are 14 open jobs for every blockchain developer. The future will be built on blockchain, and every developer and technologist should strive to learn it.

What most excites you about blockchain technology?

It is the concept of decentralisation and its application in almost every industry, ranging from finance and government to medicine and law. We will see applications of this technology everywhere. It will change our lives, just the way the Internet did in the 1990s. Also, smart contracts constitute a significant part of blockchain technology, and they allow you to implement contracts that are automatically executable and enforceable. This drastically reduces the time it takes to enforce a contract and eliminates the need for third parties and manual processes that can take a long time to come into action. Enforcement in the real world takes a long time; in the blockchain world, it is reduced to a few minutes, if not seconds, depending on the application and its requirements.
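To see concretely why no trusted third party is needed to detect tampering, here is a minimal hash-chained ledger sketch in Python. It is a conceptual toy only, vastly simpler than a real platform like Ethereum or Hyperledger Fabric, and all the transaction data is invented.

```python
import hashlib
import json

def block_hash(transactions, prev):
    # Hash the block's contents together with the previous block's hash,
    # so every block commits to the entire history before it.
    payload = json.dumps({"transactions": transactions, "prev": prev},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"transactions": transactions, "prev": prev,
                  "hash": block_hash(transactions, prev)})

def is_valid(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["transactions"], block["prev"]):
            return False  # a block's contents were altered after the fact
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # the link to the previous block is broken
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 10}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 4}])
print(is_valid(chain))                        # True

chain[0]["transactions"][0]["amount"] = 1000  # try to rewrite history
print(is_valid(chain))                        # False: tampering is evident
```

Real blockchains add consensus protocols, digital signatures and smart contract runtimes on top of this structure, but the hash chain is what makes the shared history tamper-evident for mutually distrusting parties.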
What tools do you need to learn to take advantage of blockchain?

What tools do you think are essential to master in order to take advantage of blockchain?

Currently, there are a few good options. Blockchain platforms such as Ethereum and Hyperledger Fabric are the most commonly used for development, so developers should focus on at least one of these platforms. It is best to start with the basic tools and features available in a blockchain, and once you have mastered the concepts, you can move on to frameworks and APIs, which ease the development and deployment of decentralised applications.

What do you think will be the most important thing for developers to learn in the next 12 months?

Learn blockchain technology and at least one related platform. Also explore how to implement business solutions using blockchain in ways that deliver its benefits, such as security, cost savings and transparency.

Thanks for taking the time to talk to us, Imran! You can find Imran's book on the Packt store.