
Tech Guides

851 Articles
Packt Explains... Deep Learning in 90 seconds

Packt Publishing
01 Mar 2016
1 min read
If you've been looking into the world of Machine Learning lately, you might have heard about a mysterious thing called "Deep Learning". But just what is Deep Learning, and what does it mean for the world of Machine Learning as a whole? Take less than two minutes out of your day to find out with this video.

4 ways Artificial Intelligence is leading disruption in Fintech

Pravin Dhandre
23 Nov 2017
6 min read
In the digital disruption era, Artificial Intelligence in Fintech is viewed as an emerging technology forming the premise for a revolution in the sector. Tech giants on the Fortune 500 list such as Apple, Microsoft and Facebook are putting resources into product innovation and technology automation, and businesses are investing heavily in agility, better quality and high-end functionality to drive double-digit revenue growth. Widely used AI-powered applications such as virtual assistants, chatbots, algorithmic trading and purchase recommendation systems are fueling businesses with low marginal costs, growing revenues and a better customer experience. According to a survey by the National Business Research Institute, more than 62% of companies will deploy AI-powered fintech solutions to identify new opportunities and areas to scale their business.

What has led to the disruption?

The financial sector is undergoing rapid technological evolution, from providing personalized financial services to executing smart operations that simplify complex, repetitive processes. Machine learning and predictive analytics let financial companies make smart suggestions on buying and selling stocks, bonds and commodities. Insurance companies are automating their loan applications, saving countless hours. The leading investment bank Goldman Sachs automated its stock trading business, replacing trading professionals with computer engineers, and BlackRock, one of the world's largest asset management companies, offers high-net-worth investors an automated advice platform in place of highly paid Wall Street professionals. Applications such as algorithmic trading, personal chatbots, fraud prevention and detection, stock recommendations, and credit risk assessment are the ones finding merit in banking and financial services companies.
Let us understand the changing scenarios with next-gen technologies.

Fraud prevention and detection

Firms tackle fraud with anomaly detection APIs built using machine learning and deep learning. These help identify and report suspicious or fraudulent activity among the billions of transactions that take place daily. Fintech companies are pouring capital into fighting cyber-crime, resulting in global spending of more than 400 billion dollars annually. Multinational giants such as MasterCard, Sun Financial, Goldman Sachs and the Bank of England use AI-powered systems to prevent money laundering, banking fraud and illegal transactions. Danske Bank, a renowned Nordic financial service provider, deployed AI engines that let it investigate millions of online banking transactions in less than a second, drastically reducing the cost of fraud investigation while delivering faster actionable insights.

AI-powered chatbots

Chatbots are automated customer support applications powered by Natural Language Processing (NLP). They deliver quick, engaging, personalized and effective conversations to the end user. With an upsurge in the number of investors and investment options, customers seek financial guidance, profitable investment options and query resolution faster and in real time. Banks such as Barclays, Bank of America and JPMorgan Chase widely use AI-supported chatbots to automate client support, delivering an effective customer experience and smarter financial decisions. Bank of America, the largest bank in the US, launched Erica, a chatbot that guides customers with investment notifications, easy bill payments, and a weekly update on their mortgage score.
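The anomaly detection idea behind these fraud systems can be sketched in a few lines. The snippet below is a toy illustration, not any bank's actual API: it flags transaction amounts whose modified z-score (based on the median absolute deviation, a robust outlier statistic) is unusually large.

```python
from statistics import median

def flag_suspicious(amounts, threshold=3.5):
    """Flag amounts whose modified z-score exceeds `threshold`.

    The modified z-score uses the median absolute deviation (MAD),
    which, unlike the mean/stdev pair, is not dragged around by the
    very outliers we are trying to detect.
    """
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # all values (nearly) identical: nothing stands out
        return []
    return [a for a in amounts if 0.6745 * abs(a - med) / mad > threshold]

# Mostly routine card payments, plus one outsized transfer.
history = [42.0, 18.5, 60.0, 35.0, 27.0, 44.0, 52.0, 9800.0]
print(flag_suspicious(history))  # → [9800.0]
```

Production systems learn far richer features (merchant, geolocation, device, timing) with supervised and deep models, but the core idea is the same: score how far a transaction sits from normal behavior and alert above a threshold.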
MasterCard offers its customers a chatbot that not only lets them review their bank balance and transaction history but also facilitates seamless payments worldwide.

Credit risk management

For money lenders, the most common business risk is credit risk, which piles up largely due to inaccurate credit risk assessment of borrowers. If you are unaware of the term, credit risk is simply the risk of a borrower defaulting on repaying a loan. AI-backed credit risk evaluation tools built with predictive analytics and advanced machine learning have enabled bankers and financial service providers to simplify borrower credit evaluation, transforming the labor-intensive scorecard assessment method. Wells Fargo, an American international banking company, adopted AI for mortgage verification and loan processing, lowering the market exposure risk of its lending assets. The team established smarter, faster credit risk management that analyzes millions of structured and unstructured data points, proving AI an extremely valuable asset for credit security and assessment.

Algorithmic trading

More than half of US citizens own individual stocks, mutual funds, or exchange-traded funds, and a good number trade daily, making it imperative for major broking and financial trading companies to offer AI-powered algorithmic trading platforms. These platforms enable customers to execute trades strategically for significant returns: the algorithms analyze hundreds of millions of data points and derive decisive trading patterns, enabling traders to book profits in every microsecond of the trading hour.
The France-based international bank BNP Paribas deployed algorithmic trading that helps its customers execute trades strategically and provides a graphical representation of stock market liquidity, letting them determine the most appropriate way to execute a trade under various market conditions. These advances in automated trading supply users with suggestions and rich insights, helping humans make better decisions.

How do we see the future of AI in the financial sector?

The influence of AI in fintech has disrupted almost every financial institution, from investment banks to retail banking to small credit unions. Data science and machine learning practitioners are endeavoring to make AI an essential part of the banking ecosystem, and financial companies are teaming with data analytics and fintech professionals to make AI the primary interface for interaction with their customers. However, the sector commonly struggles to adopt emerging technologies, and AI is no exception. The foremost challenge companies face is the availability of massive amounts of clean, rich data to train machine learning algorithms. The next hurdle is the reliability and accuracy of the insights an AI solution provides: in dynamic market conditions, businesses can see the efficacy of their models decline, causing serious harm, so they need to stay vigilant and cannot trust AI technology alone to achieve the business mission. The absence of emotional intelligence in chatbots is another concern, sometimes resulting in an unsatisfactory customer service experience. While there may be other roadblocks, rising investment in AI technology should help financial companies overcome such challenges and build competitive intelligence into their product offerings.
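To make the algorithmic trading idea concrete, here is a minimal sketch of one classic strategy, a moving-average crossover: buy when a short-term average of prices rises above a long-term one, sell on the reverse. Real platforms are vastly more sophisticated; the window sizes and price series here are illustrative only.

```python
def sma(prices, window):
    """Simple moving average over each trailing `window` of prices."""
    return [
        sum(prices[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(prices))
    ]

def crossover_signals(prices, fast=3, slow=5):
    """Emit 'buy' when the fast average crosses above the slow one,
    'sell' when it crosses below, otherwise 'hold'."""
    fast_ma = sma(prices, fast)[slow - fast:]  # align with the slow series
    slow_ma = sma(prices, slow)
    signals = []
    for prev_f, prev_s, f, s in zip(fast_ma, slow_ma, fast_ma[1:], slow_ma[1:]):
        if prev_f <= prev_s and f > s:
            signals.append("buy")
        elif prev_f >= prev_s and f < s:
            signals.append("sell")
        else:
            signals.append("hold")
    return signals

# A flat stretch followed by a rally triggers a single buy signal.
prices = [10, 10, 10, 10, 10, 11, 12, 13, 14, 15]
print(crossover_signals(prices))  # → ['buy', 'hold', 'hold', 'hold', 'hold']
```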
In the near future, the adoption of cutting-edge technologies such as machine learning and predictive analytics should bring higher customer engagement, an exceptional banking experience, fewer frauds and higher operating margins for banks, financial institutions and insurance companies.

Docker isn't going anywhere

Savia Lobo
22 Jun 2018
5 min read
To create good software, developers often have to weave together the UI, frameworks, databases, libraries, and of course a whole bunch of code modules. Together these elements build an immersive user experience on the front end. However, deploying and testing software is complex these days because all of these elements must be properly set up for the software to work. Here containers are a great help: they let developers pack all the contents of their app, including the code, libraries, and other dependencies, and ship it as a single package. Think of software as a puzzle; containers simply get all the pieces into their proper positions so the software functions effectively. Docker is one of the most popular container choices.

The rise of Docker containers

Linux containers have been on the market for almost a decade, but it was the release of Docker five years ago that made containers simple enough for developers to adopt widely. At present containers, especially Docker containers, are popular and in use everywhere, and this popularity seems set to stay. In our Packt Skill Up developer survey on top sysadmin and virtualization tools, almost 46% of developers said they use Docker containers on a regular basis; Docker ranked third, behind only Linux and Windows. (Source: Packt Skill Up survey 2018.) Organizations such as Red Hat, Canonical, Microsoft, Oracle and most other major IT companies and cloud businesses have adopted Docker.

Docker is often confused with virtual machines; read our article on virtual machines vs containers to understand the differences between the two. VMs such as Hyper-V, KVM and Xen are based on emulating hardware virtually and therefore come with large system requirements, whereas Docker containers, and containers in general, share the host's OS and kernel.
Apart from this, Docker is just right if you want to use minimal hardware to run multiple copies of your app at the same time, which in turn saves data centers huge annual costs in power and hardware. Docker containers boot in a fraction of a second, unlike virtual machines, which must load 10-20 GB of operating system data and therefore slow down the whole process.

For CI/CD, Docker makes it easy to set up local development environments that replicate a live server. It can run multiple development environments with different software, OSes and configurations, all from the same host. You can run test projects on new or different servers and work on the same project with identical settings irrespective of the local host environment. Docker can also be deployed to the cloud, as it is designed to integrate with most DevOps platforms, including Puppet and Chef, and you can even manage standalone development environments with it.

Why developers love Docker

Docker brought novel ideas to the market, starting with making containers easy to use and deploy. In 2014 Docker announced that it was partnering with the major tech leaders Google, Red Hat, and Parallels on its open-source component libcontainer, making libcontainer the de facto standard for Linux containers. Microsoft also announced that it would bring Docker-based containers to its Azure cloud. Docker has since donated its software container format and runtime, along with their specifications, to the Linux Foundation's Open Container Project; this project includes the contents of the libcontainer project, nsinit, and other modifications so that it can run independently of Docker. Docker's containerd is also hosted by the Cloud Native Computing Foundation (CNCF).

A few reasons why many developers prefer Docker: it has a great user experience, and it lets developers use the programming language of their choice.
It also requires less code, and it runs on any operating system, whether Windows, Linux or Mac.

The Docker-Kubernetes combo

DevOps tools can be used to deploy and monitor Docker containers, but they are not highly optimized for the task; containers are dense and need to be monitored individually. The practical solution is a cloud orchestration tool, and none is more dominant in the market than Kubernetes. Because Kubernetes has the bigger community and the bigger market share, Docker made the smart move of including Kubernetes as one of its offerings. Docker users and customers now get Kubernetes' secure orchestration experience alongside an end-to-end Docker experience.

Docker gives developers and system administrators an A-to-Z experience. With Docker, developers can focus on writing code and forget about the rest of the deployment; they can also reuse programs designed to run on Docker in their own projects. System administrators can reduce system overhead compared to VMs, and Docker's portability and ease of installation save admins the time lost installing individual VM components. With Google, Microsoft, Red Hat, and others absorbing Docker technology into their daily operations, it is surely not going anywhere soon. Docker's future is bright, and we can expect machine learning to be a part of it sooner rather than later.

Are containers the end of virtual machines?
How to ace managing the Endpoint Operations Management Agent with vROps
Atlassian open sources Escalator, a Kubernetes autoscaler project
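As a small illustration of the reproducible-environment point above, teams often script container launches so every developer runs an identical configuration. The helper below simply composes a standard `docker run` command line; the image name and settings in the example are made up.

```python
def docker_run_command(image, env=None, ports=None, volumes=None):
    """Compose a `docker run` invocation so that every developer
    launches the exact same environment. This only builds the
    argument list; flags mirror the standard Docker CLI."""
    cmd = ["docker", "run", "--rm", "-d"]
    for key, value in (env or {}).items():
        cmd += ["-e", f"{key}={value}"]           # environment variables
    for host, container in (ports or {}).items():
        cmd += ["-p", f"{host}:{container}"]      # port mappings
    for host_path, mount in (volumes or {}).items():
        cmd += ["-v", f"{host_path}:{mount}"]     # bind mounts
    cmd.append(image)
    return cmd

print(" ".join(docker_run_command("redis:7", ports={6379: 6379})))
# → docker run --rm -d -p 6379:6379 redis:7
```

In practice the same goal is usually met declaratively with a Dockerfile or a Compose file checked into version control; the point is the same — the environment is defined once, in code, rather than assembled by hand on each machine.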

BeyondCorp is transforming enterprise security

Richard Gall
16 May 2018
3 min read
What is BeyondCorp?

BeyondCorp is an approach to cloud security developed by Google. It is a zero trust security framework that not only tackles many of today's cyber security challenges but also improves accessibility for employees. As remote, multi-device working shifts the way we work, it's a framework that might just be future proof.

The principle behind it is pragmatic: dispense with the traditional notion of a workplace network and use a public network instead. By moving away from the concept of a software perimeter, BeyondCorp makes it much more difficult for malicious attackers to penetrate your network. You're no longer inside or outside the network; there are different permissions for different services, and while these are accessible to those with the relevant permissions, the lack of a perimeter makes life very difficult for cyber criminals.

Read now: Google employees quit over company's continued Artificial Intelligence ties with the Pentagon

How does BeyondCorp work?

BeyondCorp focuses on users and devices rather than networks and locations. It works through a device inventory service, which logs information about the user accessing the service, who they are, and what device they're using. Google explained the concept in detail back in 2016: "Unlike the conventional perimeter security model, BeyondCorp doesn't gate access to services and tools based on a user's physical location or the originating network; instead, access policies are based on information about a device, its state, and its associated user."

Of course, BeyondCorp encompasses a whole range of security practices. Implementation requires a good deal of alignment and effective internal communication; that was one of the challenges the Google team faced when implementing the framework - getting communication and buy-in from the whole organization without radically disrupting how people work.

Is BeyondCorp being widely adopted by enterprises?
Google has been developing BeyondCorp for some time; in fact, the concept was a response to the Operation Aurora cyber attack back in 2009. So this isn't a new approach to system security, but it is only recently becoming accessible to other organizations. We're starting to see a number of software companies offering what you might call BeyondCorp-as-a-Service. Duo is one such service: "Reliable, secure application access begins with trust, or a lack thereof," goes the (somewhat clunky) copy on their homepage. Elsewhere, ScaleFT also offers BeyondCorp services. Services like those offered by Duo and ScaleFT highlight that there is clearly demand for this type of security framework. But it is a nascent trend: despite the approach having been in use within Google for almost a decade, ThoughtWorks' Radar first picked up on BeyondCorp in May 2018, and even then placed it in the 'assess' stage - meaning it is still too early to adopt, and should simply be explored as a potential security option in the near future.

Read next: Amazon S3 security access and policies; IoT Forensics: Security in an always connected world where things talk
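The device-centric access model Google describes can be sketched as a policy check that consults a device inventory rather than a network location. This is a hypothetical illustration only; the service names and device attributes below are invented, not Google's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Device:
    owner: str
    managed: bool          # enrolled in the device inventory service
    disk_encrypted: bool
    patched: bool

# Per-service policies state requirements on device attributes,
# never on where the request comes from (example policies).
POLICIES = {
    "wiki":    {"managed"},
    "payroll": {"managed", "disk_encrypted", "patched"},
}

def authorize(user, device, service):
    """Grant access only if the device belongs to the requesting user
    and satisfies every attribute the service's policy requires."""
    if device.owner != user:
        return False
    required = POLICIES.get(service, set())
    return all(getattr(device, attr) for attr in required)

laptop = Device(owner="ana", managed=True, disk_encrypted=True, patched=False)
print(authorize("ana", laptop, "wiki"), authorize("ana", laptop, "payroll"))
# → True False
```

Note what is absent: no check of source IP or VPN membership. An unpatched laptop is denied the sensitive service even from inside the office, and a healthy one is allowed from anywhere.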

What makes programming languages simple or complex?

Antonio Cucciniello
12 Jun 2017
4 min read
Have you been itching to learn a new programming language? Maybe you want to learn your first programming language and don't know what to choose. When learning a new language (especially your first) you want to minimize the number of unknowns you'll face, so you may want to choose a simpler language. Or maybe you're up for a challenge and want to learn something difficult! Today we're going to answer the question: what makes programming languages simple or complex?

Previous experience

The amount of experience you have programming, or learning different programming concepts, can greatly affect how well you learn a new language. If this is your tenth language, you will most likely have seen much of its content before, which greatly reduces the complexity. On the other hand, if this is your first language, you will be learning many concepts and ideas that are fundamental to programming, which may make the language seem more complex than it really is. Takeaway: the more programming experience you have, the lower the chances a language will feel complex to you.

Syntax

The way you need to write code in a language really affects its complexity. Some languages have many syntax rules that can be a nuisance when learning and will leave you confused; others have fewer rules, making them easier to understand for someone unfamiliar with the language. Additionally, if you have previous experience and the new language has syntax similar to the old one, that helps the learning process. A related factor is how the code reads: in my experience, the more the variable and function names resemble English, the easier the code is to understand. Takeaway: the more syntax rules, the more difficult a language can be to learn.

Built-in functionality

The next factor is how much built-in functionality a language has.
If the language has been around for years and is continuously updated, chances are it has plenty of helper functions and functionality; some newer languages may not yet have as much built-in functionality to ease development. Takeaway: generally, languages with more built-in functionality make it easier to implement what you need in code.

Concepts

The fourth topic is concepts: what programming concepts does the language use? There are plenty out there, such as object-oriented programming, memory management, and inheritance. Depending on which concepts a language uses, and your previous understanding of them, you could either really struggle with the language or find it easier than most. Takeaway: your previous experience with specific concepts, and the complexity of the concepts the language uses, affect the complexity of the language as a whole.

Frameworks and libraries

Frameworks and libraries are very similar to built-in functionality: they are developed to make something in the language easier, or to simplify a task you would normally have to code yourself. With more frameworks, development becomes easier than normal. Takeaway: if a language has plenty of support from libraries and frameworks, its complexity decreases.

Resources

Our last topic is arguably the most important: ultimately, without high-quality documentation it can be very hard to learn a language. When looking for resources on a language, check for books, blog posts, tutorials, videos, documentation and forums to make sure there is plenty of material on the topic. Takeaway: the more high-quality resources exist for a programming language, the easier it will be to learn.
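To illustrate the built-in functionality point from earlier, compare counting word frequencies by hand with using Python's standard-library `Counter` — the richer the standard library, the less bookkeeping code you write yourself:

```python
from collections import Counter

words = "the cat sat on the mat the end".split()

# Without built-in help: manual bookkeeping in a plain dict.
counts = {}
for w in words:
    counts[w] = counts.get(w, 0) + 1

# With the built-in Counter: one expression, same result.
assert counts == Counter(words)
print(Counter(words).most_common(1))  # → [('the', 3)]
```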
Whether a programming language is complex or simple truly depends on a few factors: your previous experience, the language's syntax, its built-in functionality, the concepts it uses, the frameworks that support it, and the high-quality resources available for it.

About the author

Antonio Cucciniello is a software engineer from New Jersey with a background in C, C++ and JavaScript (Node.js). His most recent project, Edit Docs, is an Amazon Echo skill that lets users edit Google Drive files with their voice. He loves building cool things with software, and reading books on self-improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello and on GitHub at https://github.com/acucciniello.

How DevOps can improve software security

Hari Vignesh
11 Jun 2017
7 min read
The term "security" often evokes negative feelings among software developers because it is associated with additional programming effort, uncertainty, and roadblocks to fast development and release cycles. To secure software, developers must follow numerous guidelines that, while intended to satisfy some regulation or other, can be very restrictive and hard to understand. As a result, a lot of fear, uncertainty and doubt can surround software security.

First, consider the survey conducted by Spiceworks, in which IT pros were asked to rank a set of threats in order of risk to IT security. The respondents ranked the following as their organization's three biggest risks: human error, lack of process, and external threats. DevOps can positively impact all three of these major risk factors without harming the stability or reliability of the core business network. Let's discuss how security in DevOps attempts to combat the toxic environment surrounding software security by shifting the paradigm from following rules and guidelines to creatively determining solutions for tough security problems.

Human error

We've all fat-fingered configurations and code before. Usually we catch the mistakes, but once in a while they sneak into production and wreak havoc on security. A number of big names have been caught in this situation, where a simple typo introduced a security risk. Often these slip through because we're so familiar with what we're typing that we see what we expect to see rather than what we actually typed. To reduce risk from human error via DevOps you can: use templates to standardize common service configurations; automate common tasks to avoid simple typographical errors; and read twice, execute once.

Lack of process

First, there's the fact that there's almost no review of the scripts that folks already use to configure, change, shut down, and start up services across the production network.
Don't let anyone tell you they don't use scripts to eliminate the yak shaving that exists in networking and infrastructure too. They do. But those scripts aren't necessarily reviewed, they certainly aren't versioned like the code artifacts they are, and they are rarely reused. The other problem is that there's simply no governed process; it's tribal knowledge. To reduce risk from a lack of process via DevOps: define the deployment process clearly; understand prerequisites and dependencies, and eliminate redundant or unnecessary steps; move toward orchestration as the ultimate executor of the deployment process, employing manual steps only when necessary; and review and manage any scripts used to assist in the process.

External threats

At first glance, this seems the least likely candidate for being addressed with DevOps. Given that malware and multi-layered DDoS attacks are the most existential threats to businesses today, that's understandable: there are entire classes of vulnerabilities that can only be detected manually by developers or experts reviewing the code, and such review doesn't really extend to production, where risk becomes reality when it's exploited. One way DevOps can reduce potential risk is through more extensive testing and development of web app security policies during development, policies that can then be deployed in production. Adopting a DevOps approach to developing those policies, and treating them like code too, produces faster and likely more thorough policies that do a better job of preventing existential threats from becoming all-too-real nightmares. To reduce the risk of threats becoming reality via DevOps: shift web app security policy development and testing left, into the app development life cycle; treat web app security policies like code; review and standardize; and test often, even in production.
Automate using technology such as dynamic application security testing (DAST) and, when possible, integrate results into the development life cycle for faster remediation that reduces risk earlier.

Best DevOps practices

Below are the top five DevOps practices and tooling choices that can improve overall security when incorporated directly into your end-to-end continuous integration/continuous delivery (CI/CD) pipeline: collaboration, security test automation, configuration and patch management, continuous monitoring, and identity management.

Collaboration and understanding your security requirements

Many of us are required to follow a security policy. It may be a corporate security policy, a customer security policy, and/or a set of compliance standards (e.g. SOX, HIPAA). Even if you are not mandated to use a specific policy or regulating standard, you still want to follow best practices in securing your systems and applications. The key is to identify your sources of security expertise, collaborate early, and understand your security requirements early so they can be incorporated into the overall solution.

Security test automation

Whether you're building a brand new solution or upgrading an existing one, there are likely several security considerations to incorporate. Given the quick, iterative nature of agile development, tackling all security at once in a big-bang approach will likely delay the project. To keep projects moving, a layered approach is often helpful, continuously building additional security layers into your pipeline as you progress from development to a live product. Security test automation gives you quality gates throughout your deployment pipeline, providing immediate feedback to stakeholders on security posture and allowing quick remediation early in the pipeline.
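As a tiny example of the kind of automated quality gate described above, a pipeline step might fail the build when a deployed endpoint is missing standard HTTP security headers. The header list and function below are illustrative, not a complete policy:

```python
# A minimal required set; real policies check values too
# (max-age, nosniff, CSP directives), not just presence.
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
}

def missing_security_headers(response_headers):
    """Return the required security headers absent from a response,
    sorted for stable reporting. A CI job can fail the build when
    this list is non-empty."""
    present = {h.title() for h in response_headers}  # case-insensitive match
    return sorted(h for h in REQUIRED_HEADERS if h.title() not in present)

print(missing_security_headers({"Content-Type": "text/html"}))
```

Wired into the pipeline, a non-empty result stops the deployment, giving developers immediate, automated feedback rather than a finding in a quarterly audit.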
Configuration management

In traditional development, servers or instances are provisioned and developers work directly on those systems. To ensure servers are provisioned and managed using consistent, repeatable and reliable patterns, it's critical to have a strategy for configuration management. The key is being able to reliably guarantee and manage consistent settings across your environments.

Patch management

Similar to configuration management, you need a method to quickly and reliably patch your systems. Missing patches are a common cause of exploited vulnerabilities, including malware attacks. Being able to quickly deliver a patch across a large number of systems can drastically reduce your overall security exposure.

Continuous monitoring

Monitoring across all environments, with transparent feedback, is vital so you are alerted quickly to potential breaches or security issues. Identify your monitoring needs across the infrastructure and application, then take advantage of existing tooling to quickly identify, isolate, shut down, and remediate potential issues before they happen or before they are exploited. Your monitoring strategy should also include automatically collecting and analyzing logs, since log analysis can surface exposures quickly; compliance activities become extremely expensive if they are not automated early.

Identity management

DevOps practices let us collaborate early with security experts, increase the level of security testing and automation to enforce quality gates, and provide better mechanisms for ongoing security management and compliance activities. While painful to some, security has to be important to all if we don't want to make headlines.
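Returning to the configuration management practice above, the "consistent settings across environments" goal is often enforced with a drift check: compare each server's live settings against an approved baseline and report deviations. The baseline values below (OpenSSH-style settings) are just an example:

```python
# Approved baseline (illustrative OpenSSH-style hardening values).
BASELINE = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
    "MaxAuthTries": "3",
}

def config_drift(actual):
    """Report settings that drifted from the approved baseline as
    {setting: (expected, actual)}. Missing keys show up with an
    actual value of None. The same check runs identically on every
    server, which is the point of configuration management."""
    return {
        key: (expected, actual.get(key))
        for key, expected in BASELINE.items()
        if actual.get(key) != expected
    }

print(config_drift({"PermitRootLogin": "yes", "PasswordAuthentication": "no"}))
```

Tools like Puppet, Chef, or Ansible generalize exactly this idea — declare the desired state once, then detect and correct drift automatically.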
About the author

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.
Why is Pentaho 8.3 great for DataOps?

Guest Contributor
07 Oct 2019
6 min read
Announced in July, Pentaho 8.3 is the latest version of the data integration and analytics platform from Hitachi Vantara. Along with new and improved features, this version supports DataOps, a collaborative data management practice that helps customers access the full potential of their data. "DataOps is about having the right data, in the right place, at the right time, and the new features in Pentaho 8.3 ensure just that," said John Magee, vice president, Portfolio Marketing, Hitachi Vantara. "Not only do we want to ensure that data is stored at the lowest cost at the right service level, but that data is searchable, accessible and properly governed so actionable insights can be generated and the full economic value of the data is captured."

How Pentaho prevents the loss of data

According to Stewart Bond, research director, Data Integration and Integrity Software, and Chandana Gopal, research director, Business Analytics Solutions, at IDC: "A vast majority of data that is generated today is lost. In fact, only about 2.5% of all data is actually analyzed. The biggest challenge to unlocking the potential that is hidden within data is that it is complicated, siloed and distributed. To be effective, decision makers need to have access to the right data at the right time and with context." The struggle is how to manage all the incoming data in a way that exposes everyone to what's coming down the pipeline: when data is siloed, there's no guarantee the right people are seeing it and analyzing it. Pentaho offers a single platform to help businesses keep up with data growth in a way that enables real-time data ingestion. With the available data services, you can: make data sets immediately available for reports and applications; reduce the time needed to create data models; improve collaboration between business and IT teams; and analyze results with embedded machine learning and deep learning models without knowing how to code them into data pipelines.
Prepare and blend traditional data with big data. Making all the data more accessible across the board is a key feature of Pentaho that this latest release continues to strengthen. What’s new in Pentaho 8.3? Latest version of Pentaho includes new features to support DataOps DataOps limits the overall cycle time of big data analytics. Starting from the initial origin of the ideas to the making of the visualization, the overall data analytics process is transformed with DataOps. Pentaho 8.3 is conceptualized to promote the easy management and collaboration of the data. The data analytics process is much more agile. Therefore, the data teams are able to work in sync. Also, efficiency and effectiveness are increased with DataOps. Businesses are looking for ways to transform the data digitally. They want to get more value from the massive pool of information. And, as data is almost everywhere, and it is distributed more than ever before, therefore, the businesses are looking for ways to get the key insights from the data quickly and easily. This is exactly where the role of Pentaho 8.3 comes into the picture. It accelerates the businesses’ innovation and agility. Plenty of new and exciting time-saving enhancements have been done to make Pentaho a better and more advanced solution for the corporates. It helps the companies to automate their data management techniques.  Key enhancements in Pentaho 8.3 Each enhancement included with Pentaho 8.3 in some way helps organizations modernize their data management practices in ways that assist with removing friction between data and insight, including: Improved drag and drop pipeline capabilities These help access and blend data that are hard to reach to provide deeper insights into the greater analytic value from enterprise integration. Amazon Web Services (AWS) developers can also now ingest and process streaming data through a visual environment rather than having to write code that must blend with other data. 
Enhanced data visibility

Improved integration with Hitachi Content Platform (HCP), a distributed, object storage system designed to support large repositories of content, makes it easier for users to read, write and update HCP custom metadata. They can also more easily query objects with their system metadata, making data more searchable, governable, and usable for analytics. It’s also now easier to trace real-time data from popular protocols like AMQP, JMS, Kafka, and MQTT. Users can also view lineage data from Pentaho within IBM’s Information Governance Catalog (IGC) to reduce the amount of effort required to govern data.

Expanded multi-cloud support

AWS Redshift bulk load capabilities now automate the process of loading Redshift. This removes the repetitive SQL scripting needed to complete bulk loads and allows users to boost productivity and apply policies and schedules for data onboarding. Also included in this category are updates that address Snowflake connectivity. As one of the leading destinations for cloud warehousing, Snowflake’s primary hiccup arises when an analytics project wants to include data from other sources. Pentaho 8.3 allows blending, enriching and analyzing Snowflake data in conjunction with other sources, including other cloud sources. These include existing Pentaho-supported cloud platforms like AWS and Google Cloud.

Pentaho and DataOps

Each of the new capabilities and enhancements in this release of Pentaho is important for current users, but the larger benefit to businesses is its association with DataOps. Emerging as a collaborative data management discipline focused on better communication, integration, and automation of how data flows across an organization, DataOps is being embraced more often, yet not without its own setbacks. Pentaho 8.3 helps businesses make DataOps a reality without facing the common challenges often associated with data management.
According to John Magee, Vice President Portfolio Marketing at Hitachi, “The new Pentaho 8.3 release provides key capabilities for customers looking to begin their DataOps journey.”

Beyond feature enhancements

Looking past the improvements and new features of the latest Pentaho release, it’s a good product because of the support it offers its community of users. From forums to webinars to 24/7 support, it not only caters to huge volumes of data on a practical level, but it doesn’t ignore the actual people using the product outside of the data.

Author Bio

James Warner is a Business Intelligence Analyst with excellent knowledge of Hadoop/big data analysis at NexSoftSys.com
Melisha Dsouza
31 Oct 2018
2 min read

Service mesh - Trick or Treat?

‘Service mesh’ is a term that is relatively new and has gained visibility in the past year. A service mesh is a configurable infrastructure layer for a microservices application that makes communication between service instances flexible, reliable, and fast.

Why are people talking about ‘service meshes’?

Modern applications contain a range of (micro)services that allow them to run effectively. Load balancing, traffic management, routing, security, user authentication - all of these things need to work together properly if the application is going to function as intended. Managing these various services across a whole deployment of containers poses a challenge for those responsible for updating and maintaining them.

How does a service mesh work?

Enter the service mesh. It works by delivering these services from within the compute cluster through a set of APIs. These APIs, when brought together, form the ‘mesh’. This makes it much easier to manage software infrastructures of particular complexity - hence why organizations like Netflix and Lyft have used them.

Trick or treat?

With service meshes addressing some of the key challenges when it comes to microservices, this is definitely a treat for 2018 and beyond.
Martijn Woudstra
18 Mar 2015
6 min read

Minecraft Modding Experiences and Starter Advice

For three years now, I have tried to help a lot of people enjoy Minecraft in a different way. One specific thing I want to talk about today is add-ons for Minecraft. This article covers my personal experience, as well as some advice and tips for people who want to start developing and bring their own creativity to life in Minecraft. We all know the game Minecraft, where you live in a blocky world and you can create literally everything you can imagine. So what could possibly be more fun than making the Empire State Building? Inventing new machines, formulating new spells, designing factories and automatic builders, for starters! However, as most of you probably know already, these features are not present in the Minecraft world, and this is where people like me jump in. We are mod developers. We are the people who bring these awesome things to life and change the game in ways that make Minecraft more enjoyable. Although all of this might seem easy to do, it actually takes a lot of effort and creativity. Let me walk you through the process. Community feedback is priority number one. You can’t have a fun mod if nobody else enjoys it. Sometimes I read an article on a forum about a really good idea. I then get to work! However, just like traditional game development, a small idea that is posted on a forum must be fully thought through. People who come up with ideas usually don’t think of everything when they post them. You must think about things such as how to balance a given idea with vanilla Minecraft. What do you want your creation to look like? Do you need to ask for help from other authors? All of these things are essential steps for making a good modification to the amazing Minecraft experience that you crave. You should start by writing down all of your ideas and concepts. A workflow chart helps to make sure the most essential things are done first and the details happen later.
Usually I keep all of my files in Google Drive, so I can share them with others. In my opinion, the actual modding phase is the coolest part, but it takes the longest amount of time. If you want to create something that is new and innovative, you might soon realize it is something you’ve never worked with before, which can be hard to create. For example, for a simple feature such as making a block spin automatically, you could easily work for two hours just to create the basic movements. This is where experience kicks in. When you make your first modification, you might bump into the smallest problems. These little problems kept me down for quite a long time. It can be a stressful process, but don’t give up! Luckily for me, there were a lot of people in the Minecraft modding community who were kind enough to help me through the early stages of my development career. At this moment I have reached a point where my experience allows most problems to be easily solved. Therefore, mod development has become a lot more fun. I even decided to join a modding team, and I took over as lead on that project. Our final mod turned out to be amazing. A little later, I started a tutorial series together with a good friend of mine, for people who wanted to get started with the amazing art of making Minecraft mods. This tutorial series was quite a success, with 7,000 views on the website and almost 2,000 views on YouTube. I do my best to help people take their first steps into this amazing community by making tutorials, writing articles about my experiences, and describing my ideas on how to get into modding. What I noticed right away is that people tend to go too fast in the beginning. Minecraft is written in Java, a programming language. I have spoken to some people who didn’t even know this and yet were trying to make a mod. Unfortunately, life doesn’t work like that. You need to know the basic language before you can use it properly.
Therefore, my first piece of advice to you is to learn the basics of Java. There are hundreds of tutorials online that can teach you what you need to know. Personally, that’s how I learned Java too! Next up is to get involved in the community: Minecraft Forge is basically a bridge between the standard Minecraft experience and the limitless possibilities of a modded Minecraft game. Minecraft Forge has a wide range of modders who definitely do not mind giving you some advice or helping out with problems. Another good way to learn quickly is to team up with someone. Ask around on the forums for a teacher, or someone just as dedicated as you, and work together on a project you both want to truly bring to life. Start making a dummy mod, and help each other when you get stuck. No two people tackle a task the same way, and perhaps you can absorb some good habits from your teammate. When I did this, I learned a thousand new ways to write pieces of code I would have never thought of on my own. The last and most important thing I want to mention in this post is to always have fun doing what you’re doing. If you’re having a hard time enjoying modding, take a break. Modding will pull you back if you really want it again. And I am speaking from personal experience.

About this author

Martijn Woudstra lives in a city called Almelo in the Netherlands. Right now he is studying Clinical Medicine at the University of Twente. He learned Java programming about 3 years ago. For over a year he has been making Minecraft mods, which required him to learn how to translate Java into the API used to make the mods. He enjoys teaching others how to mod in Minecraft, and along with his friend Wuppy, has created the Orange Tutorial site (http://orangetutorial.com). The site contains tutorials with high quality videos and understandable code. This is a must-see resource if you are interested in crafting Minecraft mods.
Aaron Lazar
27 Dec 2017
14 min read

15 ways to make Blockchains scalable, secure and safe!

[box type="note" align="" class="" width=""]This article is a book extract from Mastering Blockchain, written by Imran Bashir. In the book he has discussed distributed ledgers, decentralization and smart contracts for implementation in the real world.[/box] Today we will look at various challenges that need to be addressed before blockchain becomes a mainstream technology. Even though various use cases and proof of concept systems have been developed and the technology works well for most scenarios, some fundamental limitations continue to act as barriers to mainstream adoption of blockchains in key industries. We’ll start by listing the main challenges we face while working on blockchain and propose ways to overcome each of them:

Scalability

This problem has been a focus of intense debate, rigorous research, and media attention for the last few years. This is the single most important problem that could mean the difference between wider adoption of blockchains or limited private use only by consortiums. As a result of substantial research in this area, many solutions have been proposed, which are discussed here:

Block size increase: This is the most debated proposal for increasing blockchain performance (transaction processing throughput). Currently, bitcoin can process only about three to seven transactions per second, which is a major inhibiting factor in adapting the bitcoin blockchain for processing micro-transactions. Block size in bitcoin is hardcoded to be 1 MB, but if block size is increased, it can hold more transactions and can result in faster confirmation times. There are several Bitcoin Improvement Proposals (BIPs) made in favor of a block size increase, including BIP 100, BIP 101, BIP 102, BIP 103, and BIP 109. In Ethereum, the block size is not limited by hardcoding; instead, it is controlled by the gas limit.
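The bitcoin figures quoted above (1 MB blocks, roughly one block every 10 minutes, three to seven transactions per second) follow from simple arithmetic. The sketch below illustrates this; the 250-byte average transaction size is an illustrative assumption, not a protocol constant:

```python
# Back-of-the-envelope throughput for a bitcoin-like chain.
BLOCK_SIZE_BYTES = 1_000_000      # bitcoin's hardcoded 1 MB block size limit
AVG_TX_SIZE_BYTES = 250           # assumed average transaction size (illustrative)
BLOCK_INTERVAL_SECONDS = 600      # one block roughly every 10 minutes

def tx_per_second(block_size, avg_tx_size, interval):
    """Transactions per second = transactions per block / seconds per block."""
    return (block_size / avg_tx_size) / interval

print(round(tx_per_second(BLOCK_SIZE_BYTES, AVG_TX_SIZE_BYTES, BLOCK_INTERVAL_SECONDS), 2))
# ~6.67 tx/s with these assumptions, consistent with the
# "three to seven transactions per second" figure quoted above.

# Either lever raises throughput: a bigger block or a shorter block interval.
print(round(tx_per_second(2 * BLOCK_SIZE_BYTES, AVG_TX_SIZE_BYTES, BLOCK_INTERVAL_SECONDS), 2))   # 13.33
print(round(tx_per_second(BLOCK_SIZE_BYTES, AVG_TX_SIZE_BYTES, BLOCK_INTERVAL_SECONDS / 2), 2))   # 13.33
```

This is exactly why both block size increases and block interval reduction (discussed next) are proposed as scalability levers.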
In theory, there is no limit on the size of a block in Ethereum because it's dependent on the amount of gas, which can increase over time. This is possible because miners are allowed to increase the gas limit for subsequent blocks if the limit has been reached in the previous block. Block interval reduction: Another proposal is to reduce the time between each block generation. The time between blocks can be decreased to achieve faster finalization of blocks but may result in less security due to the increased number of forks. Ethereum has achieved a block time of approximately 14 seconds and, at times, it can increase. This is a significant improvement from the bitcoin blockchain, which takes 10 minutes to generate a new block. In Ethereum, the issue of high orphaned blocks resulting from smaller times between blocks is mitigated by using the Greedy Heaviest Observed Subtree (GHOST) protocol whereby orphaned blocks (uncles) are also included in determining the valid chain. Once Ethereum moves to Proof of Stake, this will become irrelevant as no mining will be required and almost immediate finality of transactions can be achieved. Invertible Bloom lookup tables: This is another approach that has been proposed to reduce the amount of data required to be transferred between bitcoin nodes. Invertible Bloom lookup tables (IBLTs) were originally proposed by Gavin Andresen, and the key attraction in this approach is that it does not result in a hard fork of bitcoin if implemented. The key idea is based on the fact that there is no need to transfer all transactions between nodes; instead, only those that are not already available in the transaction pool of the synching node are transferred. This allows quicker transaction pool synchronization between nodes, thus increasing the overall scalability and speed of the bitcoin network. Sharding: Sharding is not a new technique and has been used in distributed databases for scalability such as MongoDB and MySQL. 
The key idea behind sharding is to split up the tasks into multiple chunks that are then processed by multiple nodes. This results in improved throughput and reduced storage requirements. In blockchains, a similar scheme is employed whereby the state of the network is partitioned into multiple shards. The state usually includes balances, code, nonce, and storage. Shards are loosely coupled partitions of a blockchain that run on the same network. There are a few challenges related to inter-shard communication and consensus on the history of each shard. This is an open area for research.

State channels: This is another approach proposed for speeding up transactions on a blockchain network. The basic idea is to use side channels for state updating and to process transactions off the main chain; once the state is finalized, it is written back to the main chain, thus offloading the time-consuming operations from the main blockchain. State channels work by performing the following three steps:

1. A part of the blockchain state is locked under a smart contract, ensuring the agreement and business logic between participants.
2. Off-chain transaction processing and interaction is started between the participants, who update the state only between themselves for now. In this step, almost any number of transactions can be performed without requiring the blockchain, and this is what makes the process fast and the best candidate for solving blockchain scalability issues. However, it could be argued that this is not a real on-blockchain solution such as, for example, sharding, but the end result is a faster, lighter, and robust network which can prove very useful in micropayment networks, IoT networks, and many other applications.
3. Once the final state is achieved, the state channel is closed and the final state is written back to the main blockchain. At this stage, the locked part of the blockchain is also unlocked.
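The three state-channel steps just described can be sketched as a toy simulation. The class names and the in-memory "chain" below are illustrative assumptions, not any real channel protocol:

```python
# Toy state-channel lifecycle: lock on-chain, update off-chain, settle on-chain.

class MainChain:
    def __init__(self):
        self.balances = {}
        self.tx_count = 0          # every on-chain write is "expensive"

    def write(self, balances):
        self.balances = dict(balances)
        self.tx_count += 1

class StateChannel:
    def __init__(self, chain, deposits):
        # Step 1: lock part of the on-chain state under the channel contract.
        self.chain = chain
        self.state = dict(deposits)
        chain.write(deposits)      # opening transaction

    def pay(self, sender, receiver, amount):
        # Step 2: any number of off-chain updates between the participants.
        assert self.state[sender] >= amount, "insufficient channel balance"
        self.state[sender] -= amount
        self.state[receiver] += amount

    def close(self):
        # Step 3: only the final state is written back to the main chain.
        self.chain.write(self.state)

chain = MainChain()
channel = StateChannel(chain, {"alice": 100, "bob": 0})
for _ in range(50):                # fifty micropayments, zero on-chain cost
    channel.pay("alice", "bob", 1)
channel.close()

print(chain.balances)   # {'alice': 50, 'bob': 50}
print(chain.tx_count)   # 2 -- open + close, regardless of the payment count
```

The point of the design is visible in `tx_count`: fifty payments cost only two on-chain transactions, which is why state channels suit micropayment and IoT workloads.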
This technique has been used in the bitcoin lightning network and Ethereum's Raiden.

Subchains: This is a relatively new technique recently proposed by Peter R. Rizun, based on the idea of weak blocks that are created in layers until a strong block is found. Weak blocks can be defined as those blocks that have not been mined by meeting the standard network difficulty criteria but have done enough work to meet another, weaker difficulty target. Miners can build subchains by layering weak blocks on top of each other until a block is found that meets the standard difficulty target. At this point, the subchain is closed and becomes the strong block. Advantages of this approach include reduced waiting time for the first verification of a transaction. This technique also results in a reduced chance of orphaning blocks and speeds up transaction processing. This is also an indirect way of addressing the scalability issue. Subchains do not require any soft fork or hard fork to implement but need acceptance by the community.

Tree chains: There are also other proposals to increase bitcoin scalability, such as tree chains, which change the blockchain layout from a linearly sequential model to a tree. This tree is basically a binary tree which descends from the main bitcoin chain. This approach is similar to sidechain implementation, eliminating the need for major protocol change or block size increase. It allows improved transaction throughput. In this scheme, the blockchains themselves are fragmented and distributed across the network in order to achieve scalability. Moreover, mining is not required to validate the blocks on the tree chains; instead, users can independently verify the block header. However, this idea is not ready for production yet and further research is required in order to make it practical.

Privacy

Privacy of transactions is a much desired property of blockchains.
However, due to its very nature, especially in public blockchains, everything is transparent, thus inhibiting its usage in various industries where privacy is of paramount importance, such as finance, health, and many others. Different proposals have been made to address the privacy issue, and some progress has already been made using several techniques, such as indistinguishability obfuscation, homomorphic encryption, zero knowledge proofs, and ring signatures. All these techniques have their merits and demerits and are discussed in the following sections.

Indistinguishability obfuscation: This cryptographic technique may serve as a silver bullet to all privacy and confidentiality issues in blockchains, but the technology is not yet ready for production deployments. Indistinguishability obfuscation (IO) allows for code obfuscation, which is a very ripe research topic in cryptography and, if applied to blockchains, can serve as an unbreakable obfuscation mechanism that will turn smart contracts into a black box. The key idea behind IO is what researchers call a multilinear jigsaw puzzle, which basically obfuscates program code by mixing it with random elements: if the program is run as intended, it will produce the expected output, but any other way of executing it will make the program look like random garbage. This idea was first proposed by Sahai and others in their research paper Candidate Indistinguishability Obfuscation and Functional Encryption for All Circuits.

Homomorphic encryption: This type of encryption allows operations to be performed on encrypted data. Imagine a scenario where the data is sent to a cloud server for processing. The server processes it and returns the output without knowing anything about the data that it has processed.
This is also an area ripe for research; fully homomorphic encryption, which allows all operations on encrypted data, is still not deployable in production, but major progress in this field has already been made. Once implemented on blockchains, it can allow processing on ciphertext, which will inherently provide privacy and confidentiality of transactions. For example, the data stored on the blockchain can be encrypted using homomorphic encryption and computations can be performed on that data without the need for decryption, thus providing a privacy service on blockchains. This concept has also been implemented in a project named Enigma by MIT's Media Lab. Enigma is a peer-to-peer network which allows multiple parties to perform computations on encrypted data without revealing anything about the data.

Zero knowledge proofs: Zero knowledge proofs have recently been implemented in Zcash successfully, as seen in previous chapters. More specifically, SNARKs have been implemented in order to ensure privacy on the blockchain. The same idea can be implemented in Ethereum and other blockchains also. Integrating Zcash on Ethereum is already a very active research project being run by the Ethereum R&D team and the Zcash Company.

State channels: Privacy using state channels is also possible, simply due to the fact that all transactions are run off-chain and the main blockchain does not see the transaction at all, except the final state output, thus ensuring privacy and confidentiality.

Secure multiparty computation: The concept of secure multiparty computation is not new and is based on the notion that data is split into multiple partitions between participating parties under a secret sharing mechanism; the parties then do the actual processing on the data without the need to reconstruct the data on a single machine. The output produced after processing is also shared between the parties.
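The homomorphic property described above can be demonstrated with textbook RSA, which is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the two plaintexts. The tiny parameters below are an illustrative toy, not a secure or production scheme:

```python
# Textbook RSA is multiplicatively homomorphic: E(a) * E(b) mod n = E(a * b).
# Tiny, insecure parameters for illustration only.

p, q = 61, 53
n = p * q                 # 3233
e = 17                    # public exponent
d = 2753                  # private exponent: (e * d) % ((p-1) * (q-1)) == 1

def encrypt(m):
    return pow(m, e, n)   # modular exponentiation: m^e mod n

def decrypt(c):
    return pow(c, d, n)   # c^d mod n recovers the plaintext

a, b = 7, 6
# Multiply the two ciphertexts without ever decrypting them...
product_cipher = (encrypt(a) * encrypt(b)) % n
# ...and the decrypted result is the product of the plaintexts.
print(decrypt(product_cipher))   # 42
```

A server holding only `n`, `e` and the ciphertexts can compute `product_cipher` without learning `a` or `b`, which is exactly the cloud-processing scenario sketched earlier; fully homomorphic schemes extend this idea to arbitrary computations.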
MimbleWimble: The MimbleWimble scheme was proposed somewhat mysteriously on the bitcoin IRC channel and has since gained a lot of popularity. MimbleWimble extends the idea of confidential transactions and Coinjoin, allowing aggregation of transactions without requiring any interactivity. However, it does not support the use of the bitcoin scripting language, along with various other features of the standard Bitcoin protocol. This makes it incompatible with the existing Bitcoin protocol. Therefore, it can either be implemented as a sidechain to bitcoin or on its own as an alternative cryptocurrency. This scheme can address privacy and scalability issues both at once. The blocks created using the MimbleWimble technique do not contain transactions as in traditional bitcoin blockchains; instead, these blocks are composed of three lists: an input list, an output list, and something called excesses, which are lists of signatures and differences between outputs and inputs. The input list is basically references to the old outputs, and the output list contains confidential transaction outputs. These blocks are verifiable by nodes by using signatures, inputs, and outputs to ensure the legitimacy of the block. In contrast to bitcoin, MimbleWimble transaction outputs only contain pubkeys, and the difference between old and new outputs is signed by all participants involved in the transactions.

Coinjoin: Coinjoin is a technique used to anonymize bitcoin transactions by mixing them interactively. The idea is based on forming a single transaction from multiple entities without causing any change in inputs and outputs. It removes the direct link between senders and receivers, meaning that a single address can no longer be associated with transactions in a way that could lead to identification of the users. Coinjoin needs cooperation between multiple parties that are willing to create a single transaction by mixing payments.
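The Coinjoin idea just described can be sketched as a toy transaction builder. The names and transaction structure below are illustrative assumptions, not the actual Bitcoin wire format:

```python
# Toy Coinjoin: several participants merge their payments into one
# transaction so no single input can be linked to a single output.
import random

def coinjoin(payments, seed=0):
    """Each payment is (input_addr, output_addr, amount).
    Returns one combined transaction with shuffled outputs."""
    inputs = [(src, amt) for src, _, amt in payments]
    outputs = [(dst, amt) for _, dst, amt in payments]
    random.Random(seed).shuffle(outputs)   # break the input -> output ordering
    return {"inputs": inputs, "outputs": outputs}

tx = coinjoin([
    ("alice_in", "merchant_1", 10),
    ("bob_in",   "merchant_2", 10),
    ("carol_in", "merchant_3", 10),
])

# One transaction, three senders, three receivers: with equal amounts,
# an outside observer cannot tell which input paid which output.
assert sum(a for _, a in tx["inputs"]) == sum(a for _, a in tx["outputs"])
print(len(tx["inputs"]), len(tx["outputs"]))   # 3 3
```

Note that equal amounts are what make the mixing effective here: distinct amounts would let an observer re-link inputs to outputs by value, which is why real mixing schemes favor standard denominations.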
Therefore, it should be noted that if any single participant in the Coinjoin scheme does not keep up with the commitment made to cooperate in creating a single transaction, by not signing the transactions as required, it can result in a denial of service attack. In this protocol, there is no need for a single trusted third party. This concept is different from a mixing service, which acts as a trusted third party or intermediary between bitcoin users and allows shuffling of transactions. This shuffling of transactions results in the prevention of tracing and the linking of payments to a particular user.

Security

Even though blockchains are generally secure and make use of asymmetric and symmetric cryptography as required throughout the blockchain network, there are still a few caveats that can result in compromising the security of the blockchain. Two types of contract verification can help address the issue of security.

Why3 formal verification: Formal verification of Solidity code is now available as a feature in the Solidity browser. First, the code is converted into the Why3 language that the verifier can understand. In the example below, a simple Solidity contract that sets the variable z to the maximum limit of uint is shown. When this code runs, it will return 0, because uint z will overflow and wrap around to 0. This can also be verified using Why3: once the Solidity is compiled and available in the formal verification tab, it can be copied into the Why3 online IDE available at http://why3.lri.fr/try/. The example shows that it successfully checks and reports integer overflow errors. This tool is under heavy development but is still quite useful. Also, this tool, or any other similar tool, is not a silver bullet.
Even formal verification generally should not be considered a panacea, because the specifications themselves should be defined appropriately in the first place.

Oyente tool: Currently, Oyente is available as a Docker image for easy testing and installation. It is available at https://github.com/ethereum/oyente, and can be quickly downloaded and tested. In the example below, a simple contract taken from the Solidity documentation that contains a re-entrancy bug has been tested, and Oyente successfully analyzes the code and finds the bug. A re-entrancy bug basically means that when a contract is interacting with another contract or transferring ether, it is effectively handing over control to that other contract. This allows the called contract to call back into a function of the calling contract without waiting for the original call to complete. For example, this bug can allow calling back into the withdraw function shown in the preceding example again and again, resulting in withdrawing Ether multiple times. This is possible because the share value is not set to 0 until the end of the function, which means that any later invocations will be successful, resulting in withdrawing again and again. As can be seen in Oyente's output, the analysis successfully finds the re-entrancy bug. The bug is proposed to be handled by applying the Checks-Effects-Interactions pattern described in the Solidity documentation.

We discussed the scalability, security, confidentiality, and privacy aspects of blockchain technology in this article. If you found it useful, go ahead and buy the book, Mastering Blockchain by Imran Bashir, to become a professional Blockchain developer.
Melisha Dsouza
31 Oct 2018
4 min read

Edge computing - Trick or Treat?

According to IDC’s Digital Universe update, the number of connected devices is projected to expand to 30 billion by 2020 and 80 billion by 2025. IDC also estimates that the amount of data created and copied annually will reach 180 Zettabytes (180 trillion gigabytes) in 2025, up from less than 10 Zettabytes in 2015. Thomas Bittman, vice president and distinguished analyst at Gartner Research, in a session on edge computing at the recent Gartner IT Infrastructure, Operations Management and Data Center Conference, predicted: “In the next few years, you will have edge strategies - you’ll have to.” This prediction was consistent with a real-time poll conducted at the conference, which found that 25% of the audience uses edge computing technology and more than 50% plan to implement it within two years.

How does Edge computing work?

2018 marked the era of edge computing, with the increase in the number of smart devices and the massive amounts of data generated by them. Edge computing allows data produced by internet of things (IoT) devices to be processed near the edge of a user’s network. Instead of relying on the shared resources of large data centers in a cloud-based environment, edge computing places more demands on endpoint devices and intermediary devices like gateways, edge servers and other new computing elements to create a complete edge computing environment.

Some use cases of Edge computing

The complex architecture of devices today demands a more comprehensive computing model to support its infrastructure. Edge computing caters to this need and reduces the latency, overhead and cost issues associated with centralized computing options like the cloud. A good example of this is the launch of the world’s first digital drilling vessel, the Noble Globetrotter I, by London-based offshore drilling company Noble Drilling. The vessel uses data to create virtual versions of some of the key equipment on board.
If the drawworks on this digitized rig begins to fail prematurely, information based on a “digital twin” of that asset will notify a team of experts onshore. The digital twin is a virtual model of the device that lives inside the edge processor and can point out tiny performance discrepancies that human operators may easily miss. Keeping watch over all pertinent data on a dashboard, the onshore team can collaborate with the rig’s crew to plan repairs before a failure occurs. Noble believes this move towards edge computing will lead to more efficient, cost-effective offshore drilling: by predicting potential failures in advance, Noble can avert breakdowns and spare the expense of replacing or repairing equipment.

Another piece of news that caught our attention was Microsoft’s $5 billion investment in IoT to empower the intelligent cloud and the intelligent edge. Azure Sphere is one of Microsoft’s intelligent edge solutions to power and protect connected microcontroller unit (MCU)-powered devices. MCUs power everything from household stoves and refrigerators to industrial equipment, and considering that 9 billion MCU-powered devices ship every year, we need all the help we can get on the security front! That’s intelligent edge for you on the consumer end of the application spectrum.

2018 also saw progress in the development of edge computing tools and solutions across the spectrum, from hardware to software. Take, for instance, OpenStack Rocky, one of the most widely deployed open source cloud infrastructure software releases. It is designed to accommodate edge computing requirements by deploying containers directly on bare metal. OpenStack Ironic brings improved management and automation capabilities to bare-metal infrastructure: users can manage physical infrastructure just like they manage VMs, especially with the new Ironic features introduced in Rocky.
Intel’s OpenVINO computer vision toolkit is yet another example of using edge computing to help developers streamline their deep learning inference and deploy high-performance computer vision solutions across a wide range of use cases. Baidu, Inc. released the Kunlun AI chip, built to handle AI models both at the edge on devices and in the cloud via data centers.

Edge computing - Trick or Treat?

That said, edge computing does come with disadvantages, such as the steep cost of deploying and managing an edge network and the security concerns of pushing computation out to numerous distributed devices. The final verdict: edge computing is definitely a treat, especially when complemented by embedded AI that enhances networks to promote efficient analysis and improved security for business systems.
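The core pattern described above, keeping a digital-twin baseline at the edge and forwarding only summaries and discrepancies instead of every raw reading, can be sketched in a few lines. This is a minimal illustrative toy of our own; the class and field names (EdgeNode, expected, tolerance) are assumptions, not any real rig or vendor API.

```python
# Sketch of edge-side processing: raw readings stay local; only alerts
# (digital-twin discrepancies) and compact aggregates go upstream.

class EdgeNode:
    def __init__(self, expected, tolerance):
        self.expected = expected      # the digital twin's predicted value
        self.tolerance = tolerance    # allowed deviation before alerting
        self.readings = []

    def ingest(self, value):
        """Process a reading locally; return an alert only on discrepancy."""
        self.readings.append(value)
        if abs(value - self.expected) > self.tolerance:
            return {"alert": True, "value": value, "expected": self.expected}
        return None

    def summary(self):
        """The only thing routinely sent upstream: a compact aggregate."""
        n = len(self.readings)
        return {"count": n, "mean": sum(self.readings) / n if n else 0.0}


node = EdgeNode(expected=1200.0, tolerance=50.0)
alerts = [a for v in (1210, 1195, 1320, 1205) if (a := node.ingest(v))]
print(alerts)          # only the 1320 reading deviates by more than 50
print(node.summary())  # four readings aggregated locally
```

Out of four readings, only one crosses the threshold, so the cloud sees one alert plus a two-field summary instead of the full stream — the latency and bandwidth saving that motivates edge architectures.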

Tim Berners-Lee’s Solid - Trick or Treat?

Natasha Mathur
31 Oct 2018
2 min read
Solid is a set of conventions and tools developed by Tim Berners-Lee for building decentralized social applications based on Linked Data principles. It is modular and extensible, and it relies as much as possible on existing W3C standards and protocols. The open-source project was launched earlier this month for “personal empowerment through data”.

Why are people excited about Solid?

Solid aims to radically transform the way web applications work today, resulting in true data ownership as well as improved privacy. It hopes to empower individuals, developers, and businesses across the globe with completely new ways to build innovative and trusted applications. It gives users the freedom to choose where their data resides and who is allowed to access it. Solid collects all the data you want to share with advertisers or apps into a “Solid POD”, a personal online data repository. You decide which app gets your data and which does not. Best of all, you don’t need to enter any data into apps that support Solid: you simply allow or disallow access to the Solid POD, and the app takes care of the rest on its own. Moreover, Solid offers every user a choice of where their data gets stored and which specific people or groups can access selected elements of it. Additionally, you can link to and share the data with anyone, be it family, friends, or colleagues.

Is Solid a trick or a treat?

That said, a majority of companies on the web are extremely sensitive about their data and might not be interested in losing control over it, so wide adoption seems to be a hurdle as of now. Also, since it only launched this month, there isn’t much community support around it yet. However, Solid is surely taking us a step ahead, towards a more free and open Internet, and seems to be a solid TREAT (pun intended) for all of us. For more information on Solid, check out the official Inrupt blog.
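The POD idea above — data lives in one user-owned store, and apps are granted or denied permission rather than holding their own copies — can be modeled in a toy sketch. To be clear, this is our own illustration of the access-control concept, not the actual Solid protocol, which uses Linked Data resources and W3C access-control lists.

```python
# Toy model of a Solid-style POD: one user-owned store, per-resource
# app permissions, and no app-side copies of the data.

class SolidPod:
    def __init__(self):
        self._data = {}     # resource path -> value
        self._grants = {}   # resource path -> set of allowed app ids

    def write(self, path, value):
        self._data[path] = value
        self._grants.setdefault(path, set())

    def allow(self, path, app):
        self._grants[path].add(app)

    def revoke(self, path, app):
        self._grants[path].discard(app)

    def read(self, path, app):
        """Apps never hold a copy; every access goes through the POD."""
        if app not in self._grants.get(path, set()):
            raise PermissionError(f"{app} may not read {path}")
        return self._data[path]


pod = SolidPod()
pod.write("/profile/name", "Alice")
pod.allow("/profile/name", "photo-app")
print(pod.read("/profile/name", "photo-app"))  # granted: prints Alice
pod.revoke("/profile/name", "photo-app")       # user changes their mind
# pod.read("/profile/name", "photo-app") would now raise PermissionError
```

The key design point is that revocation takes effect immediately: because the app reads through the POD on every access, the user never has to chase down copies the app already exported.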

Five Benefits of .NET Going Open Source

Ed Bowkett
12 Dec 2014
2 min read
By this point, I’m sure almost everyone has heard the news about Microsoft’s decision to open source the .NET framework. This blog will cover what the benefits of this decision are for developers and what it means. Remember, this is just an opinion, and I’m sure there are differing views out there in the wider community.

More variety

People no longer have to stick with Windows to develop .NET applications. They can choose between operating systems, which doesn’t lock developers down; it makes the marketplace more competitive and ultimately opens .NET up to a wider audience. The primary advantage of this announcement is that .NET developers can build more apps to run in more places, on more platforms. It also opens developers up to one of the fastest growing operating systems in the world, Linux.

Innovate .NET

Making .NET open source allows the code to be revised and rewritten. This will have dramatic outcomes for .NET, and it will be interesting to see what developers do with the code as they continually look for new functionality.

Cross-platform development

The ability to develop across different operating systems is now massive. Previously, this was only available through the Mono project, Xamarin. With Microsoft looking to add more Xamarin tech to Visual Studio, this will be an interesting development to watch moving into 2015.

A new direction for Microsoft

By opening up .NET as open source software, Microsoft seems to have adopted a more “developer-friendly” approach under its new CEO, Satya Nadella. That’s not to say the previous CEO ignored developers, but being more open as a company, and changing its view on open source, has allowed Microsoft to reach out to communities more easily and quickly. Take the recent deal Microsoft made with Docker, and it looks like Microsoft is heading in the right direction in terms of closing the gap between the company and developers.
Acknowledgement of other operating systems

When .NET first came around, in 2002, the entire world ran on Windows; it was the dominant operating system, certainly for the mass audience. Today, that simply isn’t the case: you have Mac OS X, you have Linux, there is much more variety. By going open source, .NET has acknowledged that Windows is no longer the number one option in workplaces.
Data science folks have 12 reasons to be thankful for this Thanksgiving

Savia Lobo
21 Nov 2017
8 min read
We are nearing the end of 2017, but with each ending chapter there are remarkable achievements to be thankful for. For the data science community, this year was filled with new technologies, tools, version updates, and more. 2017 saw blockbuster releases such as PyTorch, TensorFlow 1.0, and Caffe 2, among many others. We invite data scientists, machine learning experts, and other data science professionals to come together this Thanksgiving Day and thank the organizations which made our interactions with AI easier, faster, better, and generally more fun. Let us recall our blessings in 2017, one month at a time...

January: Thank you, Facebook and friends, for handing us PyTorch

Hola 2017! While the world was still in the New Year mood, a brand new deep learning framework was released. Facebook, along with a few other partners, launched PyTorch. PyTorch came as an improvement to the popular Torch framework: it now supported the Python language over the less popular Lua. Because PyTorch worked just like Python, it was easier to debug and to create unique extensions for. Another notable change was the adoption of a dynamic computational graph, used to create graphs on the fly with high speed and flexibility.

February: Thanks, Google, for TensorFlow 1.0

The month of February brought data scientists a Valentine’s gift with the release of TensorFlow 1.0. Announced at the first annual TensorFlow Developer Summit, TensorFlow 1.0 was faster, more flexible, and production-ready. Here’s what the TensorFlow box of chocolates contained:

- Full compatibility with Keras
- Experimental APIs for Java and Go
- New Android demos for object and image detection, localization, and stylization
- A brand new TensorFlow debugger
- An introductory glance at XLA, a domain-specific compiler for TensorFlow graphs

March: We thank Francois Chollet for making Keras 2 a production-ready API

Congratulations! Keras 2 is here.
This was great news for data science developers, as Keras 2, a high-level neural network API, allowed faster prototyping. It provided support for both CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks). Keras has an API designed specifically for humans, hence user-friendly. It also allowed easy creation of modules, which made it perfect for carrying out advanced research. Developers could now code in Python, a compact, easy-to-debug language.

April: We like Facebook for brewing us Caffe 2

Data scientists were greeted by a fresh aroma of coffee this April, as Facebook released the second version of its popular deep learning framework, Caffe. Caffe 2 came up as an easy-to-use deep learning framework for building DL applications and leveraging community contributions of new models and algorithms. It arrived with first-class support for large-scale distributed training, new hardware support, mobile deployment, and the flexibility for future high-level computational approaches. It also provided easy methods to convert DL models built in the original Caffe to the new Caffe version, and it came with over 400 different operators, the basic units of computation in Caffe 2.

May: Thank you, Amazon, for supporting Apache MXNet on AWS, and Google for your TPU

The month of May brought some exciting launches from two tech giants, Amazon and Google: Amazon Web Services brought Apache MXNet on board, and Google’s second-generation TPU chips were announced. Apache MXNet, now available on AWS, allowed developers to build machine learning applications that train quickly and run anywhere, a scalable approach for developers. Next up were Google’s second-generation TPU (Tensor Processing Unit) chips, designed to speed up machine learning tasks. These chips were supposed to be (and are) more capable than CPUs and even GPUs.
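A quick aside on January’s PyTorch entry: a “dynamic computational graph” is one built as ordinary Python code executes, so normal control flow just works. The following tiny scalar autodiff toy is entirely our own and vastly simplified compared to PyTorch’s autograd, but it shows the idea:

```python
class Value:
    """A scalar that records the operations applied to it, building the
    computation graph on the fly as normal Python code runs."""
    def __init__(self, data, parents=(), grad_fns=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents      # nodes this value was computed from
        self._grad_fns = grad_fns    # local derivative w.r.t. each parent

    def __add__(self, other):
        return Value(self.data + other.data, (self, other),
                     (lambda g: g, lambda g: g))

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other),
                     (lambda g: g * other.data, lambda g: g * self.data))

    def backward(self, grad=1.0):
        # Accumulate the incoming gradient, then push it to the parents
        # scaled by each edge's local derivative (the chain rule).
        self.grad += grad
        for parent, fn in zip(self._parents, self._grad_fns):
            parent.backward(fn(grad))


x, y = Value(3.0), Value(4.0)
z = x * y + x        # the graph is defined simply by running this line
z.backward()
print(z.data, x.grad, y.grad)  # 15.0, dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

Because the graph only exists as a trace of what actually ran, a Python `if` or loop changes the graph from one iteration to the next for free — the flexibility the PyTorch release notes emphasized.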
June: We thank Microsoft for CNTK v2

The middle of the month arrived with Microsoft’s announcement of version 2 of its Cognitive Toolkit. The new Cognitive Toolkit was enterprise-ready, offered production-grade AI, and allowed users to create, train, and evaluate their own neural networks, scalable across multiple GPUs. It also included Keras API support, faster model compression, Java bindings, and Spark support, and featured a number of new tools to run trained models on low-powered devices such as smartphones.

July: Thank you, Elastic.co, for bringing ML to the Elastic Stack

July made machine learning generally available to Elastic Stack users with version 5.5. With ML, anomaly detection on Elasticsearch time-series data became possible, allowing users to analyze the root cause of problems in their workflows and thus reduce false positives.

August: Thank you, Google, for your Deeplearn.js

August announced the arrival of Google’s Deeplearn.js, an initiative that allowed machine learning models to run entirely in a browser. Deeplearn.js is an open source, WebGL-accelerated JS library. It offered an interactive client-side platform that helped developers carry out rapid prototyping and visualization. Developers were now able to use a hardware accelerator such as the GPU via WebGL and perform faster computations with 2D and 3D graphics. Deeplearn.js also allowed TensorFlow models’ capabilities to be imported into the browser. Surely something to be thankful for!

September: Thanks, Splunk and MySQL, for your upgrades

September’s surprises came with the release of Splunk 7.0, which brings machine learning to the masses with an added Machine Learning Toolkit that is scalable, extensible, and accessible. It includes native support for metrics, which speeds up query processing performance by 200x.
Other features include seamless event annotations, improved visualization, faster data model acceleration, and a cloud-based self-service application. September also brought along the release of MySQL 8.0, which included first-class support for Unicode 9.0. Other features included:

- Extended support for native JSON data
- Window functions and recursive SQL syntax for queries that were previously impossible or difficult to write
- Added document-store functionality

So, big thanks to the Splunk and MySQL upgrades.

October: Thank you, Oracle, for the Autonomous Database Cloud, and Microsoft for SQL Server 2017

As fall arrived, Oracle unveiled the world’s first Autonomous Database Cloud. It provides full automation of tuning, patching, updating, and maintaining the database. It is self-scaling, instantly resizing compute and storage without downtime and with low manual administration costs. It is also self-repairing, with guaranteed 99.995 percent reliability and availability. That’s a lot of reduction in workload! Next, developers were greeted with the release of SQL Server 2017, a major step towards making SQL Server a platform. It included multiple enhancements to the Database Engine, such as adaptive query processing, automatic database tuning, graph database capabilities, new Availability Groups, and the Database Tuning Advisor (DTA). It also added a new Scale Out feature in SQL Server 2017 Integration Services (SSIS), and SQL Server Machine Learning Services now supports the Python language.
November: A humble thank you to Google for TensorFlow Lite and Elastic.co for Elasticsearch 6.0

Just a month more for the year to end! The data science community has had a busy November, with too many releases to keep an eye on and Microsoft Connect(); spilling the beans. So, November, thank you for TensorFlow Lite and Elasticsearch 6.0. TensorFlow Lite, a lightweight product for mobile and embedded devices, is designed to be:

- Lightweight: It allows inference of on-device machine learning models with a small binary size, enabling faster initialization and startup.
- Speedy: Model loading time is dramatically improved, with accelerated hardware support.
- Cross-platform: It includes a runtime tailor-made to run on various platforms, starting with Android and iOS.

And now for Elasticsearch 6.0, which has been made generally available, with features such as easy upgrades, index sorting, better shard recovery, and support for sparse doc values. There are other new features spread out across the Elastic Stack, comprising Kibana, Beats, and Logstash: Elasticsearch’s solutions for visualization and dashboards, data ingestion, and log storage.

December: Thanks in advance, Apache, for Hadoop 3.0

Christmas gifts may arrive for data scientists in the form of the general availability of Hadoop 3.0. The new version is expected to include support for erasure coding in HDFS, version 2 of the YARN Timeline Service, shaded client jars, support for more than two NameNodes, MapReduce task-level native optimization, and support for opportunistic containers and distributed scheduling, to name a few. It will also include a rewritten version of the Hadoop shell scripts, with bug fixes, improved compatibility, and changes to some existing installation procedures.

Phew! That was a large list of tools for data scientists and developers to be thankful for this year. Whether new frameworks, libraries, or new software, each one is unique and helpful for creating data-driven applications. Hopefully, you have used some of them in your projects. If not, be sure to give them a try, because 2018 is all set to overload you with new, and even more amazing, tools, frameworks, libraries, and releases.
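Looking back at July’s Elastic entry, the core idea behind time-series anomaly detection can be sketched very simply: flag points that deviate far from a rolling baseline. This is our own minimal illustration, not Elastic’s actual (far more sophisticated) model:

```python
from statistics import mean, stdev

def anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates from the trailing window's mean
    by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# A steady request-latency series with one spike at index 8:
latencies = [100, 102, 99, 101, 100, 103, 98, 101, 250, 100, 102]
print(anomalies(latencies))  # only index 8 is flagged
```

Even this naive baseline shows why automated detection cuts false positives: the threshold adapts to the series’ own recent variance rather than a fixed limit someone guessed.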

Introducing Gluon- a powerful and intuitive deep learning interface

Sugandha Lahoti
21 Nov 2017
6 min read
Amazon and Microsoft, two pioneering tech giants, have combined their efforts to bring out a compelling, easy, and powerful deep learning interface known as Gluon. If you are into physics, you will be aware of the term: a gluon is the elementary particle exchanged between quarks that binds them together. Taken literally, a gluon, like glue, works as a binding agent. Drawing inspiration from this, Amazon and Microsoft have glued their efforts together to bring deep learning to a wider developer audience with the launch of Gluon, a simple, efficient, and compact API for deep learning.

Why is Gluon essential?

Any neural network has three important phases. First comes the manual coding, where the developer specifies the behaviour of the network. Then comes the training phase, where the error of the output is calculated and the weights are adjusted accordingly; this activity requires memory and is computationally exhaustive. After the training phase, the network is used to make predictions. Building a neural network is labor-intensive as well as time-consuming. These networks have to be trained to parse large and complex data sets, so they are usually constructed manually, which makes them difficult to debug and reuse. Manual construction also requires the expertise and advanced skill sets of experienced data scientists. However, machine learning now reaches every doorstep, and a large number of developers are looking for solutions that help them build deep learning models with ease and practicality without compromising on power. Gluon is a flexible and approachable way to construct and train neural networks: a concise, easy-to-use programming interface that gives developers the ability to quickly prototype, build, and train deep learning models without sacrificing performance.
The API works with MXNet to reduce the complexity of deep learning, making it reachable to a large number of developers.

How is it different?

A few compelling advantages make Gluon stand out:

An easy-to-use API. A strong differentiating feature of Gluon is that it provides its interface in the form of an API, making it easier for developers to grasp and develop DL models with the help of modular components. This is simpler to comprehend than formal neural net definition methods.

A data structure approach. Deep learning models in Gluon can be defined, flexed, and modified much like a data structure, which makes the interface familiar, especially for developers who have only recently stepped into the machine learning world. Dynamic networks can be easily managed with Gluon, as it mixes the programming models of TensorFlow (symbolic representations) and PyTorch (imperative definitions of networks).

Network defining ability. Gluon provides the ability to define the network dynamically, so the network can be adjusted during both definition and training. This essentially means that the training algorithm and the neural model can inform one another. Developers can use standard programming structures to build networks, and can also use sophisticated algorithms and models to advance neural nets.

High-speed training. Friendly APIs and flexible approaches are all great, but they shouldn’t come at the cost of training speed. Gluon is better than the manual approach because it performs all of these tasks without compromising performance, providing abstractions without losing training speed. This is because Gluon blends the formal definitions and specific details of the network under the hood of a concise API, letting users implement models rather than perform tasks like compiler optimizations manually.
Easy algorithmic implementations using Gluon

Gluon supports a wide range of prebuilt and optimized components for building neural networks; developers can build deep learning models using the MXNet framework through the Gluon interface. Gluon allows building neural nets from predefined layers, can keep track of when to record the computation graph and when not to, and can invoke highly optimized layers written in C++. Data-parallel training can also be accomplished easily. Compared to other interfaces, Gluon can run code faster on both CPUs and GPUs, and moving from one device to multiple devices and initializing network parameters across them is pretty easy. Even for a simple problem like a linear regression model, Gluon helps in writing quick, clean code: it eliminates the need to allocate parameters individually, implement stochastic gradient descent, or define a loss function. It similarly reduces the workload required for multiclass logistic regression. On the same terms, Gluon can be used to transform the logic of a logistic regression model into a multilayer perceptron with a few additional lines of code, and a convolutional neural network can also be designed easily and concisely.

Limitations: On the flip side

In spite of Gluon being easy, compact, and efficient, it has certain limitations. Currently it is available on Apache MXNet and is awaiting a release in the Microsoft Cognitive Toolkit; not much is known about other frameworks. For instance, it currently lacks support for two of the most widely used deep learning frameworks, Caffe2 and TensorFlow. This could pose an issue for Gluon, because most interfaces released provide integration with multiple frameworks. Ultimately, it boils down to project requirements, including the model requirements and the difficulty of building networks in a particular tool. For a computer vision project, for example, people might prefer Caffe.
While TensorFlow is popular among developers because of its existing community, the complex nature of the platform makes a digestible deep learning interface like Gluon highly appreciated. Each framework comes with its own tradeoffs.

Conclusion

Gluon comes as a boon for experienced data scientists and nascent developers alike. For developers, the interface models networks like a data structure, providing familiarity. For researchers and data scientists, it provides a way to build prototypes quickly and easily for complex neural networks, without sacrificing training speed. Overall, Gluon will accelerate the development of advanced neural networks and models, resulting in robust artificial intelligence based applications.
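To make the linear-regression claim above concrete, here is roughly the boilerplate that an interface like Gluon condenses into a few calls: allocating parameters by hand, deriving the loss gradient, and writing the SGD update yourself. This is a plain-Python sketch of the general recipe under our own assumptions, not actual Gluon/MXNet code.

```python
import random

# Synthetic data from y = 2x + 1 (the "true" model the fit should recover).
random.seed(0)
data = [(x, 2.0 * x + 1.0) for x in [random.uniform(-1, 1) for _ in range(200)]]

# What Gluon's predefined layers, loss objects, and Trainer would handle:
w, b = 0.0, 0.0            # manually allocated parameters
lr = 0.1                   # learning rate

for epoch in range(100):
    for x, y in data:
        pred = w * x + b   # forward pass
        err = pred - y     # gradient of 0.5 * squared error w.r.t. pred
        w -= lr * err * x  # hand-derived SGD updates
        b -= lr * err

print(round(w, 2), round(b, 2))  # converges near the true 2.0 and 1.0
```

In Gluon, roughly the same fit would use a predefined `Dense` layer, a built-in L2 loss, and a `Trainer` object owning the update step, which is exactly the boilerplate reduction the article describes.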