
Tech Guides


How to move from server to serverless in 10 steps

Erik Kappelman
27 Sep 2017
7 min read
If serverless computing sounds a little contrived to you, you're right, it is. Serverless computing isn't really serverless, well not yet anyway. It would be more accurate to call it serverless development. If you are a backend boffin, or you spend most of your time writing Dockerfiles, you are probably not going to be super into serverless computing. This is because serverless computing allows applications to consist of chunks of code that do things in response to stimulus. What makes this different from other development is that the chunks of code don't need to be woven into a traditional frontend-backend setup. Instead, serverless computing allows code to execute without the need for complicated backend configurations. Additionally, the services that provide serverless computing can easily scale an application as necessary, based on the activity the application is receiving.

How AWS Lambda supports serverless computing

We will discuss Amazon Web Services (AWS) Lambda, Amazon's serverless computing offering. We are going to go over one of Amazon's use cases to better understand the value of serverless computing, and how someone can get started.

1. Have an application, build an application, or have an idea for an application. This could also be step zero, but you can't really have a serverless application without an application. We are going to be looking at a simple abstraction of an app, but if you want to put this into practice, you'll need a project.

2. Create an AWS account, if you don't already have one, and set up the AWS Command Line Interface on your machine. Quick note: I am on OSX and I had a lot of trouble getting the AWS Command Line Interface installed and working. AWS recommends using pip to install, but the bash command never seemed to end up in the right place. Instead I used Homebrew, and then it worked fine.

3. Navigate to S3 on AWS and create two buckets for testing purposes. One is going to be used for uploading, and the other is going to receive the uploaded pictures once they have been transformed. The bucket that receives the transformed pictures must have a name of the form "other bucket's name" + "resized". The code we are using requires this format in order to work; if you really don't like that, you can modify the code to use a different format.

4. Navigate to the AWS Lambda Management Console, choose the Create Function option, choose Author from scratch, and click the empty box next to the Lambda symbol in order to create a trigger. Choose S3, specify the bucket that the pictures are going to be initially uploaded into, and under the event type choose Object Created (All). Leave the trigger disabled and press the Next button. Give your function a name; for now, we are done with the console.

5. On your local machine, set up a workspace by creating a root directory for the project with a node_modules folder, then install the async and gm libraries.

6. Create a JavaScript file named index.js and copy and paste the code from the end of this post into it. It needs to be named index.js for this example to work: there are settings that determine the function's entry point, and they can be changed to look for a different filename. The code we are using comes from an example on AWS located here; I recommend you check out their documentation.
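Quick aside: if you would rather script step 3 than click through the console, here is a minimal sketch that creates the bucket pair using the same aws-sdk dependency. This script is my own illustration, not part of the original example; the bucket names are placeholders, and it assumes your AWS credentials and default region are already configured.

```javascript
// create-buckets.js -- hypothetical one-off setup script.
var AWS = require('aws-sdk');

var s3 = new AWS.S3();
var srcBucket = 'my-lambda-demo-uploads'; // placeholder name

// The destination bucket must be the source bucket's name plus "resized",
// because the Lambda function below derives it that way.
s3.createBucket({ Bucket: srcBucket }, function (err) {
  if (err) return console.error('Could not create source bucket:', err);
  s3.createBucket({ Bucket: srcBucket + 'resized' }, function (err) {
    if (err) return console.error('Could not create destination bucket:', err);
    console.log('Both buckets created');
  });
});
```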
If we look at the code that we are pasting into our editor, we can learn a few things about using Lambda. We can see that the aws-sdk is in use, and that we use that dependency to create an S3 object. We get the information about the source bucket from the event object that is passed into the main function; this is why we named our buckets the way we did. We can get the uploaded picture using the getObject method of our S3 object, since the S3 file information we want is in the event object passed into the main function. The code grabs that file, puts it into a buffer, uses the gm library to resize the image, and then uses the same S3 object, this time specifying the destination bucket, to upload the file.

7. ZIP up your root folder and deploy the function to the new Lambda instance we created. Quick note: while using OSX, I had to zip my JS file and node_modules folder directly into a ZIP archive instead of recursively zipping the root folder; for some reason the upload doesn't work unless the zipping is done this way.

8. Upload using the Lambda Management Console (if you're fancy, you can use the AWS Command Line Interface instead). Get to the management console, choose Upload a .ZIP File, click the upload button, specify your ZIP file, and then press the Save button.

9. Test your work. Click the Actions dropdown and choose the Configure test event option, then choose the S3 PUT test event and specify the bucket that images will be uploaded to. This creates a test that simulates an upload, and if everything goes according to plan, your function should pass.

10. Profit!

I hope this introduction to AWS Lambda serves as a primer on serverless development in general. The goal here is to get you started. Serverless computing has some real promise. As a primarily front-end developer, I revel in the idea of serverless anything; I find that the absolute worst part of any development project is the back-end. That being said, I don't think that sysadmins will be lining up for unemployment checks tomorrow. Once serverless computing catches on, and maybe grows and matures a little bit, we're going to have a real juggernaut on our hands.

The code below is used in this example and comes from AWS:

```javascript
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm').subClass({ imageMagick: true }); // Enable ImageMagick integration.
var util = require('util');

// constants
var MAX_WIDTH = 100;
var MAX_HEIGHT = 100;

// get reference to S3 client
var s3 = new AWS.S3();

exports.handler = function(event, context, callback) {
    // Read options from the event.
    console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
    var srcBucket = event.Records[0].s3.bucket.name;
    // Object key may have spaces or unicode non-ASCII characters.
    var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    var dstBucket = srcBucket + "resized";
    var dstKey = "resized-" + srcKey;

    // Sanity check: validate that source and destination are different buckets.
    if (srcBucket == dstBucket) {
        callback("Source and destination buckets are the same.");
        return;
    }

    // Infer the image type.
    var typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
        callback("Could not determine the image type.");
        return;
    }
    var imageType = typeMatch[1];
    if (imageType != "jpg" && imageType != "png") {
        callback(`Unsupported image type: ${imageType}`);
        return;
    }

    // Download the image from S3, transform, and upload to a different S3 bucket.
    async.waterfall([
        function download(next) {
            // Download the image from S3 into a buffer.
            s3.getObject({
                Bucket: srcBucket,
                Key: srcKey
            }, next);
        },
        function transform(response, next) {
            gm(response.Body).size(function(err, size) {
                // Infer the scaling factor to avoid stretching the image unnaturally.
                var scalingFactor = Math.min(
                    MAX_WIDTH / size.width,
                    MAX_HEIGHT / size.height
                );
                var width = scalingFactor * size.width;
                var height = scalingFactor * size.height;

                // Transform the image buffer in memory.
                this.resize(width, height)
                    .toBuffer(imageType, function(err, buffer) {
                        if (err) {
                            next(err);
                        } else {
                            next(null, response.ContentType, buffer);
                        }
                    });
            });
        },
        function upload(contentType, data, next) {
            // Stream the transformed image to a different S3 bucket.
            s3.putObject({
                Bucket: dstBucket,
                Key: dstKey,
                Body: data,
                ContentType: contentType
            }, next);
        }
    ], function (err) {
        if (err) {
            console.error(
                'Unable to resize ' + srcBucket + '/' + srcKey +
                ' and upload to ' + dstBucket + '/' + dstKey +
                ' due to an error: ' + err
            );
        } else {
            console.log(
                'Successfully resized ' + srcBucket + '/' + srcKey +
                ' and uploaded to ' + dstBucket + '/' + dstKey
            );
        }
        callback(null, "message");
    });
};
```

Erik Kappelman wears many hats, including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.


Top 4 Business Intelligence Tools

Ed Bowkett
04 Dec 2014
4 min read
With the boom of data analytics, Business Intelligence (BI) has taken something of a front stage in recent years, and as a result, a number of BI tools have appeared. They allow a business to obtain a reliable set of data faster and more easily, and to set business objectives. This is a list of the more prominent tools, with the advantages and disadvantages of each.

Pentaho

Pentaho was founded in 2004 and offers, among others, a suite of open source BI applications under the name Pentaho Business Analytics. It comes in two editions, enterprise and community. It allows easy access to data and even easier ways of visualizing that data from a variety of sources, including Excel and Hadoop, and it covers almost every platform, from mobile (Android and iPhone) through to Windows and even web-based. With the pros, however, come cons: the Pentaho Metadata Editor is difficult to understand, and the documentation offers few solutions for this tool (which is a key component). Also, compared to the other tools mentioned below, the advanced analytics in Pentaho need improving. That said, given that it is open source, there is continual improvement.

Tableau

Founded in 2003, Tableau also offers a range of suites, focusing on three products: Desktop, Server, and Public. Some benefits of using Tableau over other products include its ease of use and a simple UI built around drag-and-drop tools, which allows pretty much everyone to use it. Creating a highly interactive dashboard that draws on various data sources is simple and quick. To sum up, Tableau is fast. Incredibly fast! There are relatively few cons when it comes to Tableau, but some automated features you would usually expect in other suites aren't offered for most of the processes and uses here.

Jaspersoft

As well as being another open source suite, Jaspersoft ships with a number of data visualization, data integration, and reporting tools. Added to the small licensing cost, Jaspersoft is justifiably one of the leaders in this area. It can be used with a variety of databases, including Cassandra, CouchDB, MongoDB, Neo4j, and Riak. Other benefits include ease of installation, and the functionality of Jaspersoft's tools is better than most competitors on the market. However, the documentation has been said to be lacking in helping customers dive deeper into Jaspersoft, and if you customize it, customer service can no longer assist you if it breaks. Given its functionality and extensibility, though, these cons seem minor.

Qlikview

Qlikview is one of the oldest BI software tools on the market, having been around since 1993. It has multiple features and, as a result, many pros and cons, including some already mentioned for the previous suites. Some advantages of Qlikview are that it takes very little time to implement and it's incredibly quick: quicker than Tableau in this regard! It also has 64-bit in-memory processing, which is among the best in the market. Qlikview also has good data mining tools, good features (having been on the market for a long time), and a visualization function; these aspects make it much easier to deal with than others on the market, and the learning curve is relatively small. On the cons side, while Qlikview is easy to use, Tableau is seen as the better suite for analyzing data in depth, and Qlikview also has difficulties integrating map data, which other BI tools are better at doing.

This list is not definitive! It lays out some open source tools that companies and individuals can use to help them analyze data and prepare business performance KPIs. There are other tools used by businesses, including Microsoft BI tools, Cognos, MicroStrategy, and Oracle Hyperion. I've chosen to explore some BI tools that are quick to use out of the box and are incredibly popular and expanding in usage.


So you want to be a DevOps engineer

Darrell Pratt
20 Oct 2016
5 min read
The DevOps movement has come about to accomplish the long sought-after goal of removing the barriers between the traditional development and operations organizations. Historically, development teams have written code for an application and passed that code over to the operations team to both test and deploy onto the company's servers. This practice generates many mistakes and misunderstandings in the software development lifecycle, in addition to the lack of ownership amongst developers that grows as a result of them not owning more of the deployment pipeline and production responsibilities.

The new DevOps teams that are appearing now start as blended groups of developers, system administrators, and release engineers. The thought is that the developers can assist the operations team members in the process of building and more deeply understanding the applications, and the operations team members can shed light on the environments and deployment processes that they must master to keep the applications running. As these teams evolve, we are seeing a trend to specifically hire people into the role of the DevOps engineer. What this role is, and what type of skills you might need to succeed as a DevOps engineer, is what we will cover in this article.

The Basics

Almost every job description you will find for a DevOps engineer is going to require some level of proficiency in the desired production operating systems. Linux is probably the most common. You will need a very good understanding of how to administer and use a Linux-based machine. Words like grep, sed, awk, chmod, chown, ifconfig, and netstat should not scare you. In the role of DevOps engineer, you are the go-to person for developers when they have issues with the server or cloud. Make sure that you have a good understanding of where the failure points can be in these systems and the commands that can be used to pinpoint the issues. Learn the package manager systems for the various distributions of Linux to better understand the underpinnings of how they work. From RPM and Yum to Apt and Apk, the managers vary widely, but the common ideas are very similar in each. You should understand how to use the managers to script machine configurations and understand how modern containers are built.

Coding

The type of language you need for a DevOps role is going to depend quite a bit on the particular company. Java, C#, JavaScript, Ruby, and Python are all popular languages. If you are a devout Java follower, then choosing a .NET shop might not be your best choice. Use your discretion here, but the job is going to require a working knowledge of coding in one or more focused languages. At a minimum, you will need to understand how the build chain of the language works, and you should be comfortable reading the error logging of the system and understanding what those logs are telling you.

Cloud Management

Gone are the days of uploading a WAR file to a directory on the server. It's very likely that you are going to be responsible for getting applications up and running on a cloud provider. Amazon Web Services is the gorilla in the space, and a good level of hands-on experience with the various services that make up a standard AWS deployment is a much sought-after skill set. From standard AMIs to load balancing, CloudFormation, and security groups, AWS can be complicated, but luckily it is very inexpensive to experiment, and there are many training classes on the different components.

Source Code Control

Git is currently the tool of choice for source code control. Git gives a team a decentralized SCM system that is built to handle branching and merging operations with ease. The workflows that teams use vary, but a good understanding of how to merge branches, rebase, and fix commit issues is required in the role. DevOps engineers are usually looked to for help on addressing "interesting" Git issues, so good hands-on experience is vital.

Automation Tooling

A new automation tool has probably been released in the time it takes to read this article. There will always be new tools and platforms in this part of the DevOps space, but the most common are Chef, Puppet, and Ansible. Each system provides a framework for treating the setup and maintenance of your infrastructure as code. Each has a slightly different take on the method for writing configurations and deploying them, but the concepts are similar, and a good background in any one of these is more often than not a requirement for any DevOps role. Each of these systems requires a good understanding of either Ruby or Python, and these languages appear quite a bit in the various tools used in the DevOps space.

A desire to improve systems and processes

While not an exhaustive list, mastering this set of skills will accelerate anyone's journey towards becoming a DevOps engineer. If you can augment these skills with a strong desire to improve upon the systems and processes that are used in the development lifecycle, you will be an excellent DevOps engineer.

About the author

Darrell Pratt is the director of software development and delivery at Cars.com, where he is responsible for a wide range of technologies that drive the Cars.com website and mobile applications. He is passionate about technology and still finds time to write a bit of code and hack on hardware projects. You can find him on Twitter here: @darrellpratt.


Will Ethereum eclipse Bitcoin?

Ashwin Nair
24 Oct 2017
8 min read
Unless you have been living under a rock, you have most likely heard about Bitcoin, the world's most popular cryptocurrency, which is growing by leaps and bounds. In fact, Bitcoin recently broke the threshold of $6,000 and is now priced at an all-time high. Bitcoin is not alone in this race, as another cryptocurrency named Ethereum is hot on its heels. Despite being only three years old, Ethereum is quickly emerging as a popular choice, especially among enterprise users.

Ethereum's YTD price growth has been more than a whopping 3000%. In terms of market cap as well, Ethereum has shown a significant increase: its overall share of the total cryptocurrency market rose from 5% at the beginning of the year to 30% YTD, and in absolute terms it stands today at around $28 billion. Bitcoin's market cap, on the other hand, has shrunk from 85% of the market at the start of the year to 55%, and is valued at around $90 billion.

Bitcoin played a huge role in bringing Ethereum into existence. The co-creator and inventor of Ethereum, Vitalik Buterin, was only 19 when his father introduced him to Bitcoin and, by extension, to the fascinating world of cryptocurrency. Within a span of three years, Vitalik had written several blogs on the topic and also co-founded Bitcoin Magazine in 2011. Though Bitcoin served as an excellent tool for money transactions, eliminating the need for banks, fees, or third parties, its scripting language had limitations. This led Vitalik, along with other developers, to found Ethereum: a platform that aimed to extend beyond Bitcoin's scope and make the internet decentralized.

How Ethereum differs from the reigning cryptocurrency, Bitcoin

Both Bitcoin and Ethereum are built on top of blockchain technology, allowing them to build a decentralized public network. However, Ethereum's capability extends beyond being a cryptocurrency, and it differs from Bitcoin substantially in terms of scope and potential.

Exploiting the full spectrum of the blockchain platform

Bitcoin leverages blockchain's distributed ledger technology to perform secure peer-to-peer cash transactions, disrupting traditional financial transaction instruments such as PayPal. Ethereum, meanwhile, aims to offer much more than digital currency by helping developers build and deploy any kind of decentralized application on top of blockchain. The following are some Ethereum-based features and applications that make it superior to Bitcoin.

DApps

A decentralized app, or DApp, is a program running on the internet through a network that is not under the control of any single entity. A white paper on DApps highlights four conditions that need to be satisfied to call an application a DApp:

- It must be completely open source
- Data and records of operation must be cryptographically stored
- It should utilize a cryptographic token
- It must generate tokens

The white paper also goes on to suggest that DApps are the future: "decentralized applications will someday surpass the world's largest software corporations in utility, user-base, and network valuation due to their superior incentivization structure, flexibility, transparency, resiliency, and distributed nature."

Smart Contracts and EVM

Another feature that Ethereum boasts over Bitcoin is the smart contract. A smart contract works like a traditional contract: you can use it to perform a task or transfer money in return for any asset or task, in an efficient manner, without needing interference from a middleman. Though Bitcoin is fast, secure, and saves cost, it is limited in the kinds of operations it can run. Ethereum solves this problem by allowing operations to work as a contract, converting them to pieces of code that are supervised by a network of computers. A tool that helps Ethereum developers build and experiment with different contracts is the Ethereum Virtual Machine (EVM). It acts as a testing environment for building blockchain operations and is isolated from the main network, giving developers a perfect platform to build and test smart as well as robust contracts across different industries.

DAOs

One can also create Decentralized Autonomous Organizations (DAOs) using Ethereum. A DAO eliminates the need for human managerial involvement: the organization runs through smart contracts that convert the rules, core tasks, and structure of the organization into code monitored by a fault-tolerant network. An example of a DAO is Slock.it, a DAO version of Airbnb.

Performance

An important factor for cryptocurrency transactions is the amount of time it takes to finalize a transaction, known as the block time. In terms of performance, the Bitcoin network takes 10 minutes to make a transaction, whereas Ethereum is much more efficient and boasts a block time of just 14-15 seconds.

Development

Ethereum's programming language, Solidity, is based on JavaScript. This is great for web developers who want to use their knowledge of JavaScript to build cool DApps and extend the Ethereum platform. Moreover, Ethereum is Turing complete, meaning it can compute anything that is computable, provided enough resources are available. Bitcoin, on the other hand, is based on C++, which comparatively is not a popular choice among the new generation of app developers.

Community and Vision

One could say Bitcoin works like a DAO: no individuals are involved in managing the cryptocurrency, and it is completely decentralized and owned by the community. Satoshi Nakamoto, who prefers to stay behind the curtains, is the only name that comes up when it comes to relating an individual with Bitcoin. The community therefore lacks a figurehead when it comes to seeking future direction. Vitalik Buterin, meanwhile, is hugely popular amongst Ethereum enthusiasts and is very much involved in designing the future roadmap with the other co-founders.

Cryptocurrency Supply

Similar to Bitcoin, Ethereum has Ether, a digital asset that fuels the network and the transactions performed on the platform. Bitcoin has a fixed supply cap of around 21 million coins; it's going to take more than 100 years to mine the last Bitcoin, after which Bitcoin will behave as a deflationary cryptocurrency. Ethereum, on the other hand, has no fixed supply cap but has restricted its annual supply to 18 million Ether. With no upper cap on the number of Ether that can be mined, Ethereum behaves as an inflationary currency and may lose value with time. However, the Ethereum community is now planning to move from a proof-of-work to a proof-of-stake model, which should limit the number of Ether being mined and also offer benefits such as energy efficiency and security.

Some real-world applications using Ethereum

The growth of decentralized applications has been on the rise, with people starting to recognize the value offered by blockchain and decentralization, such as security, immutability, tamper-proofing, and much more. While Bitcoin uses blockchain purely as a list of transactions, Ethereum manages to transfer both value and information through its platform. This allows for immense possibilities when it comes to building different DApps across a wide range of industries. The financial domain is obviously where Ethereum is finding a lot of traction. Projects such as Branche, a decentralized consumer micro-credit and financial services platform, and Augur, a decentralized prediction market that has raised more than $5 million, are some prominent examples. But financial applications are only the tip of the iceberg when it comes to the possibilities Ethereum offers and the potential it holds for disrupting industries across various sectors. Some other sectors where Ethereum is making its presence felt are:

- Firstblood, a decentralized eSports platform that has raised more than $5.5 million. It allows players to test their skills and bet using Ethereum, while tournaments are tracked on smart contracts and the blockchain.
- Alice.si, a charitable trust that lets donors invest in noble causes, knowing that they only pay for causes where the charity makes an impact.
- Chainy, an Ethereum-based authentication and verification system that permanently stores records on the blockchain using timestamping.

Flippening is happening!

If you haven't heard of Flippening, it's a term coined by cryptocurrency enthusiasts for the chance of Ethereum beating Bitcoin to claim the number one spot as the largest capitalized blockchain. Comparing Ethereum to Bitcoin may not be right, as both serve different purposes. Bitcoin will continue to dominate cryptocurrency, but as more industries adopt Ethereum to build the Smart Contracts, DApps, or DAOs of their choice, its popularity is only going to grow, subsequently making Ether more valuable. Thus, the possibility of Ether displacing Bitcoin is strong. With the pace at which Ethereum is growing, and the potential it holds in terms of unleashing blockchain's power to transform industries, it is definitely a question of when rather than if the Flippening will happen!


Is Blockchain a failing trend or can it build a better world? Harish Garg provides his insight [Interview]

Packt Editorial Staff
02 Jan 2019
4 min read
In 2018, blockchain and cryptocurrency exploded across tech. We spoke to Packt author Harish Garg on what he sees as the future of blockchain in 2019 and beyond.

Harish Garg, founder of BignumWorks Software LLP, is a data scientist and lead software developer with 17 years' software industry experience. BignumWorks is an India-based software consultancy that provides consultancy services in software development and technical training. Harish has worked for McAfee/Intel for 11+ years. He is an expert in creating data visualizations using R, Python, and web-based visualization libraries. Find all of Harish Garg's books for Packt here.

From early adopters to the enterprise

What do you think was the biggest development in blockchain during 2018?

The biggest development in blockchain during 2018 was the explosion of blockchain-based digital currencies. We now have thousands of different coins and projects supported by these coins. 2018 was also the year when blockchain really captured the imagination of the public at large, beyond just technically savvy early adopters. 2018 also saw first a dramatic rise in the price of digital currencies, especially Bitcoin, and then a similarly dramatic fall in the last half of the year.

Do you think 2019 is the year that enterprise embraces blockchain? Why?

Absolutely. Early adoption of enterprise blockchain was already underway in 2018. Companies like IBM have already released and matured their blockchain offerings for enterprises. 2018 also saw the big behemoth of cloud services, Amazon Web Services, launch its own blockchain solutions. We are on the cusp of wider adoption of blockchain in enterprises in 2019.

Key blockchain challenges in 2019

What do you think the principal challenges in deploying blockchain technology are, and how might developers address them in 2019?

Two schools of thought have been emerging about the way blockchain is perceived. On one side, there are people who pitch blockchain as some kind of ultimate utopia, the last solution to solve all of humanity's problems. On the other end of the spectrum are people who dismiss blockchain as another fading trend with nothing substantial to offer. These two schools pose the biggest challenge to the success of blockchain technology. The truth lies somewhere in between. Developers need to take the job of blockchain evangelism into their own hands and make sure the right kind of expectations are set for policy makers and customers.

Have the Bitcoin bubble and greater scrutiny from regulators made blockchain projects less feasible, or do they provide a more solid market footing for the technology? Why?

Bitcoin would have invited a lot of scrutiny from regulators and governments even without the bubble. Bitcoin upends the notion of a nation state controlling the supply of money, so different governments are reacting to it with a wide range of actions, ranging from outright bans on using the existing banking systems to buy and sell Bitcoin and other digital currencies, to putting a legal framework in place to let citizens trade in them securely. The biggest fear they have is black money being pumped into digital currencies. With proper KYC procedures, these fears can be addressed. However, governments and financial institutions are also realizing the advantages blockchain offers in streamlining their banking and financial markets, and are launching pilot projects to adopt blockchain.

Blockchain and disruption in 2019

Will Ethereum continue to dominate the industry or are there new platforms that you think present a serious challenge? Why?

Ethereum does have an early-mover advantage. However, we know that an early-mover advantage is not such a big moat for new competitors to cross. There are likely to be competing, bigger platforms emerging from the likes of Facebook, Amazon, and IBM that will solve the scalability issues Ethereum faces.

What industries do you think blockchain technology is most likely to disrupt in 2019, and why?

Finance and banking are still the biggest industries that will see an explosion of creative products coming out of the adoption of blockchain technology. Products for government use are going to be big, especially wherever there is a need for an immutable source of truth, as in the case of land records.

Do you have any other thoughts on the future of blockchain you'd like to share?

We are at a very early stage of blockchain adoption. It's very hard to predict right now what kind of killer apps will emerge a few years down the line. Nobody predicted in 2007 that smartphones would give rise to apps like Uber. The important thing is to have the right mix of optimism and skepticism.


DevOps is not continuous delivery

Xavier Bruhiere
11 Apr 2017
5 min read
What is the difference between DevOps and continuous delivery? The tech world is full of buzzwords, and DevOps is undoubtedly one of the best known of the last few years. Essentially, DevOps is a concept that attempts to solve two key problems for modern IT departments and development teams: the complexity of a given infrastructure or service topology, and market agility. Or, to put it simply, there are lots of moving parts in modern software infrastructures, which make changing and fixing things hard.

Project managers who know their agile inside out, not to mention customers too, need developers to:

- Quickly release new features based on client feedback
- Keep the service available, even during large deployments
- Have no lasting crashes, regressions, or interruptions

How do you do that? You cultivate a DevOps philosophy and build a continuous integration pipeline. The key thing to notice there are the italicised verbs: DevOps is cultural; continuous delivery is a process you construct as a team. But there's also more to it than that.

Why do people confuse DevOps and continuous delivery?

So, we've established that there's a lot of confusion around DevOps and continuous delivery (CD). Let's take a look at what the experts say.

DevOps is defined on AWS as: "The combination of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services at high velocity."

Continuous delivery, as stated by Carl Caum from Puppet, "…is a series of practices designed to ensure that code can be rapidly and safely deployed to production by delivering every change to a production-like environment and ensuring that business applications and services function as expected through rigorous automated testing."

So yes, both are about delivering code. Both try to enforce practices and tools to improve the velocity and reliability of software in production. Both want the IT release pipeline to be as cost-effective and agile as possible. But if we're getting into the details, DevOps is focused on managing challenging time-to-market expectations, while continuous delivery is a process to manage greater service complexity: making sure the code you ship is solid, basically.

Human problems and coding problems

In its definition of DevOps, Atlassian puts forward a neat formulation: "DevOps doesn't solve tooling problems. It solves human problems." DevOps, according to this understanding, promotes the idea that development and operational teams should work seamlessly together, and argues that they should design tools and processes to ensure rapid and efficient development-to-production cycles. Continuous delivery, on the other hand, narrows this scope to a single mantra: your code should always be able to be safely released. It means that any change goes through an automated pipeline of tests (unit, integration, end-to-end) before being promoted to production. Martin Fowler nicely sums up the immediate benefits of this sophisticated deployment routine: "reduced deployment risk, believable progress, and user feedback."

You can't have continuous delivery without DevOps

Applying CD is difficult and requires advanced operational knowledge and enough resources to set up a pipeline that works for the team. Without a DevOps culture, your team won't communicate properly, and technical resources won't be properly designed. It will certainly hurt the most critical IT pipeline: longer release cycles, more unexpected behavior in production, and a slow feedback loop. Developers and management might come to fear the deployment step and become less agile.

You can have DevOps without continuous delivery... but it's a waste of time

The reverse situation, DevOps without CD, is slightly less dangerous, but it is, unfortunately, pretty inefficient. While DevOps is a culture, or a philosophy, it is by no means supposed to remain theoretical; it's supposed to be put into practice. After all, the main aim isn't chin-stroking intellectualism, it's to help teams build better tools and develop processes to deliver code. The time (that is, money) spent to bootstrap such a culture shouldn't be zeroed out by a lack of concrete action. CD is a powerful asset for projects trying to conquer a market in a lean fashion. It recoups the investment by letting teams of developers focus on business problems and deliver tested solutions to clients as fast as they are ready.

Take DevOps and continuous delivery seriously

What we have, then, are two different, but related, concepts in how modern development teams understand operations. Ignoring one of them induces wasted resources and poor engineering efficiency. However, it is important to remember that the scope of DevOps, potentially involving an entire organizational culture, and the complexity of continuous delivery mean that adoption shouldn't be rushed or taken lightly. You need to make an effort to do it properly. It might need a mid- or long-term roadmap, and it will undoubtedly require buy-in from a range of stakeholders. So, keep communication channels open, consider using pre-built cloud services if required, understand the value of automated tests and feedback loops, and, most importantly, hire awesome people to take responsibility.

A final word of caution. As sophisticated as they are, DevOps and continuous delivery aren't magical methodologies. A service as critical as AWS S3 claims 99.999999999% durability thanks to rigorous engineering methods, and yet, on February 28, it suffered a large service disruption. Code delivery is hard, so keep your processes sharp!

About the author

Xavier Bruhiere is a Senior Data Engineer at Kpler. He is a curious, sharp entrepreneur and engineer who has built many projects, broken most of them, and launched and scaled what was left, learning from them all.

What is JAMstack and why should I care?

Antonio Cucciniello
07 May 2017
4 min read
What is JAMstack?

JAMstack, according to the project site, offers you a "modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup." As the acronym suggests, it uses JavaScript, APIs, and Markup as the core components of the development stack. It can be used in any website or web application that does not depend on tight coupling between the client and the server. It sounds simple, but let's dive a little deeper into the three parts.

JavaScript

The JavaScript is basically any form of client-side JavaScript. It can be used to handle requests and responses, front-end frameworks such as React and Angular, any client-side libraries, or plain old JavaScript.

APIs

The APIs consist of any and all server-side processes or database commands that your web app needs. These APIs can be third-party APIs, or a custom API that you created for this application. The APIs communicate with the JavaScript through HTTP calls.

Markup

This is Markup that is templated and built at deploy time, using a build tool such as Grunt or a static site generator.

Now that you know the individual parts of how this works, let's discuss how you could optimize this stack with a few best practices.

JAMstack best practices

Hosted on a Content Delivery Network: It is a good idea to distribute all the code to CDNs to reduce the load time of each page. JAMstack websites do not rely on server-side code, so they can be distributed on CDNs much more easily.

All code in Git: In order to increase development speed and have others contribute to your site, all of the code should be in source control. If using Git, anyone should be able to simply clone the repository and install the third-party packages that the project requires. From there, developers should be smooth sailing to making changes.

Build tools to automate builds: Use tools like Babel, Webpack, and Browserify to automate repetitive tasks and reduce development time. You want your builds to be automatic in order for users to see your changes.

Use atomic deploys: Atomic deploying allows you to deploy all of your changes at once, when all of your files are built, so the changes are only displayed after every file has been uploaded and built.

Instant cache purge: The cache on your CDN may hold old assets after you create a new build and deploy your changes. To make sure the client sees the changes that you implemented, you want to be able to clear the cache of the CDNs that host your web application.

Enough about best practices already, how about the benefits? Why should YOU care?

There are a couple of benefits for you as a developer in building your next application using JAMstack. Let's discuss them.

Security: Here we are removing server-side parts that would normally work closely with the client-side components. Removing the server-side components reduces the complexity of the application, which makes the client-side components easier to build and maintain. That makes for an easier development process, and therefore increases the security and reliability of your website.

Cost: Since we are removing the server-side parts, we do not need as many servers to host the application, and we do not need as many backend engineers to handle the server-side functionality, reducing your overall cost significantly.

Speed: Since we are using prebuilt Markup that is generated at deploy time, we reduce the amount of work that needs to happen at runtime, which in turn increases the speed of your site, because each page is already built.
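To make the "J" and "A" parts concrete, here is a minimal sketch of client-side JavaScript talking to a reusable API over HTTP. This is my own illustration, not code from the JAMstack project; the endpoint URL and element ID are made-up placeholders, and it assumes the API returns a JSON array of objects with a text field.

```javascript
// Client-side JavaScript (the "J") calling an API (the "A") over HTTP.
// The endpoint URL and element ID below are hypothetical placeholders.
fetch('https://api.example.com/comments')
  .then(function (response) { return response.json(); })
  .then(function (comments) {
    // Render the API data into markup that was prebuilt at deploy time.
    var list = document.getElementById('comments');
    comments.forEach(function (comment) {
      var item = document.createElement('li');
      item.textContent = comment.text;
      list.appendChild(item);
    });
  })
  .catch(function (error) { console.error('API call failed:', error); });
```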
JAMstack - key takeaways

In the end, JAMstack is just a web development architecture that builds web apps using JavaScript, APIs, and prebuilt Markup. It has several advantages, such as increased security, reduced cost, and faster speed. Here is a link to some examples of web apps that are built using JAMstack. Under each one, they list the tools that were used to make the app, including the front-end frameworks, static site builders, build tools, and various APIs that were utilized.

If you enjoyed this post, share it on Twitter! Leave a comment down below and let me know your thoughts on JAMstack and how you will use it in your future applications!

Possible resources

- Check out my GitHub
- View my personal blog
- Check out my YouTube Channel
- This is a great talk on JAMstack


Redis Cluster Features Overview

Zhe Lin
15 Jan 2016
4 min read
After months of development and testing, Redis 3.0 Cluster was released on April 1st, 2015. A Redis Cluster is a set of Redis instances connected to each other with the gossip protocol, with each instance serving a nonoverlapping subset of all the cached data. In this post, I'd like to talk about how users can benefit from it, and also what those benefits cost.

The essence of Redis, as you may already know, is that no matter what kinds of structures Redis supports, it is simply a key-value caching utility. Things are the same with Redis Cluster. A Redis Cluster is not something that magically shards your data across different Redis instances. The keys are still the unit and are not splittable: if you have a list of 100 elements, it will still be stored under one key, in one Redis instance, no matter how many instances are in the cluster. More precisely, Redis Cluster uses the CRC16 of a key string mod 16384 as the slot number of the key, and each master Redis instance serves some of the 16384 slots, so that each instance takes responsibility only for keys in the slots it owns.

Knowing this, you may soon realize that Redis Cluster finally catches up with the multi-core fashion. As we know, Redis is designed as an asynchronous single-threaded program; although it behaves as non-blocking, it can use at most one CPU. Since Redis Cluster splits keys across instances by hash, and those instances can serve data simultaneously, as many CPUs as there are instances in the cluster can be used, so Redis QPS can become much higher than with a standalone Redis.

Another piece of good news is that Redis instances on different hosts can be joined into one cluster, which means the memory a Redis service can use is no longer limited to one host machine, and you won't always have to worry about how much memory Redis may consume three months later: if memory is about to run out, we can extend Redis capacity by starting some more cluster-mode instances, joining them into the cluster, and doing a reshard.

There is also great news for those who turn on persistence options (RDB or AOF). When Redis does persistence, it forks before writing data, which can cause noticeable latency if your dataset is really large. But there is no large dataset in a cluster, since everything is sharded and each instance persists only its own subset.

The next advantage you should know about is the availability improvement. A Redis Cluster will be much more robust than a standalone Redis if you deploy a slave for each master. The slaves in cluster mode are different from those in standalone mode, as they can automatically fail over their master if it is disconnected (accidentally killed, network fault, etc.). And "the gossip protocol" we mentioned before means there is no central controller in a Redis Cluster, so if one master goes down and is replaced by its slave, the other masters will tell you who the new guy to access is.

Besides the good things Redis Cluster offers us, we should also take a look at what a cluster cannot do, or cannot do well. The cluster model Redis chose sacrifices consistency for availability, which is good enough for a data caching solution. But as a consequence, you may soon run into problems with multi-key commands like MGET, since Redis Cluster requires that all keys manipulated in each operation be in one slot (otherwise you'll get a CROSSSLOT error).
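You can check the slot rule for yourself. Below is a rough sketch of mine, not code from Redis itself, of the key-to-slot mapping described above, assuming the XMODEM variant of CRC16 (polynomial 0x1021, zero initial value) and plain ASCII keys:

```javascript
// CRC16/XMODEM over an ASCII key string, bit by bit.
function crc16(key) {
  var crc = 0;
  for (var i = 0; i < key.length; i++) {
    crc ^= key.charCodeAt(i) << 8;
    for (var bit = 0; bit < 8; bit++) {
      crc = (crc & 0x8000) ? ((crc << 1) ^ 0x1021) : (crc << 1);
      crc &= 0xFFFF; // keep it a 16-bit value
    }
  }
  return crc;
}

// The slot rule described above: CRC16 of the key string mod 16384.
function slotOf(key) {
  return crc16(key) % 16384;
}

console.log(slotOf('user:1'), slotOf('user:2'));
```

If the two numbers printed differ, an MGET touching both keys would be rejected with exactly the CROSSSLOT error mentioned above.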
This restriction is so strong that those operations, not only MGET and MSET, but also EVAL, SUNION, BRPOPLPUSH, etc., are generally unavailable in a cluster. However, if you deliberately store all your keys in one slot, the cluster loses its meaning.

Another practice to avoid is storing large objects intensively: overwhelmingly huge lists, hashes, and sets cannot be sharded. You may break hashes down into individual keys, but then you cannot do an HGETALL. You should also think about how to split lists or sets if you want to take advantage of the cluster.

Those are the things you should know about Redis Cluster if you decide to use it. We must say it's a great improvement in availability and performance, as long as you don't use the restricted multi-key commands frequently. So, stay with standalone Redis, or proceed to Redis Cluster: it's time to make your choice.


5 application development tools that will matter in 2018

Richard Gall
13 Dec 2017
3 min read
2017 has been a hectic year, not least in application development. But it's time to look ahead to 2018. You can read what 'things' we think are going to matter here, but below are the key tools we think are going to define the next 12 months in the area.

1. Kotlin

Kotlin has been one of the most notable languages of 2017. Its adoption has been dramatic over the last 12 months, and it signals significant changes in what engineers want and need from a programming language. We think it's likely to challenge Java's dominance throughout 2018 as more and more people adopt it. If you want a rundown of the key reasons why you should start using Kotlin, you could do a lot worse than this post on Medium. Learn Kotlin. Explore Kotlin eBooks and videos.

2. Kubernetes

Kubernetes is a tool that has been following in Docker's slipstream. It has been a core part of the growth of containerization, and we're likely to see it move from strength to strength in 2018 as the technology matures and container deployments continue to grow in size and complexity. Kubernetes' success and importance were underlined earlier this year when Docker announced that its enterprise edition would support Kubernetes. Clearly, if Docker paved the way for the container revolution, Kubernetes is consolidating it and helping teams take the next step with containerization. Find Packt's Kubernetes eBooks and videos here.

3. Spring Cloud

This isn't a hugely well-known tool, but 2018 might just be the year the world starts to pay it more attention. In many respects, Spring Cloud is a thoroughly modern software project, perfect for a world where microservices reign supreme. Following the core principles of Spring Boot, it essentially enables you to develop distributed systems in a really efficient and clean way. Spring is interesting because it represents the way Java is responding to the growth of open source software and the decline of the more traditional enterprise system.

4. Java 9

This nicely leads us on to the new version of Java, Java 9. Here we have a language that is thinking differently about itself, moving in a direction heavily influenced by a software culture distinct from where it belonged 5-10 years ago. The new features are enough to excite anyone who has worked with Java before. They have all been developed to help reduce the complexity of modern development, modeled around the needs of developers in 2017 and 2018. And they all help to radically improve the development experience, which, if you've been reading up, you'll know is going to really matter for everyone in 2018. Explore Java 9 eBooks and videos here.

5. ASP.NET Core

Microsoft doesn't always get enough attention, but it should, because a lot has changed over the last two years. Similar to Java, the organization and its wider ecosystem of software have developed in a way that moves quickly and responds to developer and market needs impressively. ASP.NET Core is evidence of that. A step forward from the formidable ASP.NET, this cross-platform framework has been created to fully meet the needs of today's cloud-based, fully connected applications that run on microservices. It's worth comparing it with Spring Cloud above: both will help developers build a new generation of applications, and both represent two of software's old-guard establishment embracing the future and pushing things forward. Discover ASP.NET Core eBooks and videos.


Why an algorithm will never win a Pulitzer

Richard Gall
21 Jan 2016
6 min read
In 2012, a year which feels a lot like the very early years of the era of data, Wired published this article on Narrative Science, an organization based in Chicago that uses machine learning algorithms to write news articles. Its founder and CEO, Kris Hammond, is a man whose enthusiasm for algorithmic possibilities is unparalleled. When asked whether an algorithm would win a Pulitzer in the next 20 years, he went further, claiming that it could happen in the next 5 years.

Hammond's excitement at what his organization is doing is not unwarranted. But his optimism certainly is. Unless 2017 is a particularly poor year for journalism and literary nonfiction, a Pulitzer for one of Narrative Science's algorithms looks unlikely, to say the least. There are a couple of problems with Hammond's enthusiasm. He fails to recognise the limitations of algorithms: the job of even the most intricate and complex deep learning algorithm is very specific, and quite literally determined by the people who create it. "We are humanising the machine," he's quoted as saying in a Guardian interview from June 2015. "Based on general ideas of what is important and a close understanding of who the audience is, we are giving it the tools to know how to tell us stories." It's important to notice how he talks: it's all about what 'we're' doing. The algorithms that are central to Narrative Science's mission are things created by people, by data scientists. It's easy to read what's going on as a simple case of the machines taking over. True, perhaps there is cause for concern among writers when he suggests that in 25 years 90% of news stories will be created by algorithms, but in actual fact there's just a simple shift in where labour is focused.

It's time to rethink algorithms

We need to rethink how we view and talk about data science, machine learning, and algorithms. We see algorithms as impersonal, blandly futuristic things. Although they might be crucial to our personalized online experiences, they are regarded as the hypermodern equivalent of the inauthentic handshake of a door-to-door salesman. Similarly, at the other end, the process of creating them is viewed as a feat of engineering: maths and statistics nerds tackling the complex interplay of statistics and machinery. Instead, we should think of algorithms as something creative, things that organize and present the world in a specific way, like a well-designed building. If an algorithm did indeed win a Pulitzer, wouldn't it really be the team behind it that deserved it?

When Hammond talks, for example, about "general ideas of what is important and a close understanding of who the audience is," he is referring very much to a creative process. Sure, it's the algorithm that learns this, but it nevertheless requires the insight of a scientist, an analyst, to consider these factors, and to consider how their algorithm will interact with the irritating complexity and unpredictability of reality. Machine learning projects, then, are as much about designing algorithms as they are about programming them. There's a certain architecture, a politics, that informs them. It's all about prioritization and organization, and those two things aren't just obvious; they're certainly not things that can simply be identified and quantified. They are instead things that inform the way we quantify, the way we label. The very real fingerprints of human imagination, and indeed fallibility, are in the algorithms we experience every single day.

Algorithms are made by people

Perhaps we've all fallen for Hammond's enthusiasm. It's easy to see algorithms as the key to the future and forget that really they're just things made by people. Indeed, it might well be that they're so successful that we forget they've been made by anyone; it's usually only when algorithms don't work that the human aspect emerges. The data team have done their job when no one realises they are there.

An obvious example: you can see it when Spotify recommends some bizarre songs that you would never even consider listening to. The problem here isn't simply a technical one; it's how different tracks or artists are tagged and grouped, how they are made to fit within a particular dataset. It's an issue of context: to build a great machine learning system you need to be alive to the stories and ideas that permeate the world in which your algorithm operates. If you, as the data scientist, lack this awareness, so will your machine learning project.

But there have been more problematic and disturbing incidents, such as when Flickr auto-tagged people of color in pictures as apes, due to the way a visual recognition algorithm had been trained. In this case, the issue was a lack of sensitivity about the way in which an algorithm may work: the things it might run up against when faced with the messiness of the real world, with its conflicts, its identities, its ideas and stories. The story of Solid Gold Bomb, too, is a reminder of the unintended consequences of algorithms. It's a reminder that we can be lazy with algorithms: instead of being designed with thought and care, they become a surrogate for it. What's more, they always give us a get-out clause; we can blame the machine if something goes wrong.

If this all sounds like I'm simply down on algorithms, that I'm a technological pessimist, you're wrong. What I'm trying to say is that it's humans that are really in control. If an algorithm won a Pulitzer, what would that imply? It would mean the machines have won. It would mean we're no longer the ones doing the thinking, solving problems, finding new ones.

Data scientists are designers

As the economy becomes reliant on technological innovation, it's easy to remove ourselves, to underplay the creative thinking that drives what we do. That's what Hammond is doing in his frenzied excitement about his company: he's forgetting that it's him and his team that are finding their way through today's stories. It might be easier to see creativity at work when we cast our eyes towards game development and web design, but data scientists are designers and creators too. We're often so keen to stress the technical aspects of these sorts of roles that we forget this important part of the data scientist's skillset.

5 Alternative Microboards to Raspberry Pi

Ed Bowkett
30 Oct 2014
4 min read
This blog will show you five alternative boards to the Raspberry Pi that are currently on the market and which you might not have considered before, as they aren't as well known. There are others out there, but these are the ones I've either dabbled in or have researched and am excited about.

Hummingboard

Figure 1: Hummingboard

The Hummingboard has been argued to be more powerful than a Raspberry Pi, and the numbers do seem to support this: 1 GHz vs 700 MHz, and more RAM in the available models, varying from 512 MB to 1 GB. What's even better with the Hummingboard is the ability to take out the CPU and memory module should you need to upgrade them in the future. It also runs many open source operating systems, such as Debian, XBMC, and Android. However, it is more costly than a Raspberry Pi, coming in at $55 for the 512 MB model and a pricey $100 for the 1 GB model. That said, I feel the performance per cost is worth it, and it will be interesting to see what the community does with the Hummingboard.

Banana Pi

Figure 2: Banana Pi

While some people might look at the name of the Banana Pi and assume it's a clone of the famous Raspberry Pi, it's actually even better. With 1 GB of RAM and a dual-core processor running at 1 GHz, it's more powerful than its namesake (albeit still a fruit). It includes an Ethernet port, a micro-USB port, and a DSI for graphics, and can run Android, Ubuntu, and Debian, as well as Raspberry Pi and Cubieboard images. If you are seeking to upgrade from a Raspberry Pi, this is quite possibly the board to go for. It will set you back around $50, but again, considering the performance you get for the price, this is a great deal.

Cubieboard

Figure 3: Cubieboard

The Cubieboard has been around for a couple of years now, so it can be considered an early-adoption board. Nonetheless, the Cubieboard is very powerful: it runs a 1 GHz processor, has an extra infrared sensor, which is good for use as a media center, and also comes with a SATA port. One compelling point in the Cubieboard's favour, alongside its performance, is its cost: it comes in at just $49. Considering the Raspberry Pi sells at $35, this is not much of a price leap and gives you much more zing for your bucks. Initially, of course, Arduino and Raspberry Pi had huge communities, whereas the Cubieboard didn't. However, this is changing, and hence the Cubieboard deserves a mention.

Intel Galileo

Figure 4: Intel Galileo

The Arduino was one of the first boards to be sold to the mass market. Intel took this and developed their own board, which led to the birth of the Intel Galileo. Arduino-certified, this board combines Intel technology with ready-made expansion cards (shields) as well as Arduino libraries. The Galileo can be programmed from OS X, Windows, and Linux. However, a real negative for the Galileo is the performance, coming in at just 400 MHz. This, combined with the $70 cost, makes it one of the weakest in terms of price-performance on this list. However, if you want to develop on Windows with the relative safety of Arduino libraries, this is probably the board for you.

Raspberry Pi Pad

OK, OK. I know this isn't strictly a microboard. However, the Raspberry Pi Pad was announced on the 21st of October, and it's a pretty big deal. Essentially, it's a touchscreen display that will run on a Raspberry Pi, so you can build a Raspberry Pi tablet. That's pretty impressive, and awesome at the same time.
I think this will be the thing to watch out for in 2015, and it will be cool to see what the community makes of it. This blog covered alternative microboards that you might not have considered before. It threw a curveball at the end and generally tried to suggest boards other than the usual Raspberry Pi, Beaglebone, and Arduino.

About the author

Ed Bowkett is Category Manager of Game Development and Hardware at Packt Publishing. When not imagining what the future of games will be in 5 years' time, he is usually researching how to further automate his home using the latest ARM boards.

What Blockchain Means for Security

Lauren Stephanian
02 Oct 2017
5 min read
It is estimated that hacks and security flaws cost the US over $445 billion every year. It is clear at this point that the cost of hacking attacks and ransomware has increased and will continue to increase year on year. Therefore, industries - especially those that handle large amounts of important data - will need to invest in technologies to remain secure.

By design, Blockchain is theoretically a secure means of storing data. Each transaction is recorded on an immutable ledger, which serves to prevent and detect any form of tampering. Beyond this, Blockchain also eliminates the need for verification from trusted third parties, which can come at high cost. But is this a promise the technology has yet to fulfill, or is it part of the security revolution of the future we so desperately need?

How Blockchain is resolving security issues

One security issue that can be resolved by Blockchain relates to the fact that many industries rely heavily on "cloud and on-demand services, where our data is accessed and processed by untrusted third parties." There are also many situations where users may want to work jointly on data without revealing their portion to untrusted entities. Blockchain can be used to create a system where users jointly store data while remaining anonymous. In this case, Blockchain records time-stamped events that can't be removed - so in the case of a cyber attack, it is easy to see where it came from. The Enigma Project, originally developed at MIT, is a good example of this use case.

Another issue that Blockchain can improve is data tampering. There have been a number of cyber attacks where the attackers don't delete or steal data, but alter it. One infamous example is the Stuxnet malware, which severely and physically damaged Iran's nuclear program. If such data were altered on the Blockchain, the transactions would be marked and could not be altered or covered up, so hackers would not be able to hide their tracks.

Blockchain's security vulnerabilities

The inalterability of Blockchain and its decentralization clearly have many advantages; however, they do not entirely remove the possibility of data being altered. It is possible to introduce data unrelated to transactions to the Blockchain, and this data could expose the Blockchain to malware. The extent to which malware could impact the entire Blockchain and all its data is not yet known; however, there have been some proven vulnerabilities. One such proof is Vitaly Kamluk's proof-of-concept software, which could take information from a hacker's Bitcoin address and essentially pull malicious data and store it on the Blockchain.

Private vs. public Blockchain implementations

When weighing security risks in Blockchain technology, it is also important to understand the difference between private and public implementations. On public Blockchains, anyone can read or write transactions, and anyone can aggregate those transactions and publish them if they are able to solve a cryptographic puzzle. Solving these puzzles takes a lot of computing power, and therefore a large amount of energy, which leads to a market where most of the transaction aggregation and puzzle solving is done in countries where energy is cheapest. This, in turn, leads to centralization and potential collusion. Private Blockchains, in comparison, give the network operator control over who can read and write to the ledger.
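To make the tamper-evidence idea concrete, here is a minimal sketch in Node.js of a hash-chained ledger. It is an illustration of the principle only - not any real Blockchain's implementation - and it deliberately omits the distributed consensus and proof-of-work puzzles discussed above. Altering any recorded transaction invalidates every block that follows it:

```javascript
const crypto = require('crypto');

// Hash a block's contents together with the previous block's hash.
function hashBlock(prevHash, data) {
  return crypto.createHash('sha256')
    .update(prevHash + JSON.stringify(data))
    .digest('hex');
}

// Append a transaction, linking it to the block before it.
function addBlock(chain, data) {
  const prevHash = chain.length ? chain[chain.length - 1].hash : '0';
  chain.push({ data, prevHash, hash: hashBlock(prevHash, data) });
}

// Verify every link; a tampered block breaks the chain from that point on.
function isValid(chain) {
  return chain.every((block, i) => {
    const prevHash = i === 0 ? '0' : chain[i - 1].hash;
    return block.prevHash === prevHash &&
           block.hash === hashBlock(prevHash, block.data);
  });
}

const ledger = [];
addBlock(ledger, { from: 'alice', to: 'bob', amount: 5 });
addBlock(ledger, { from: 'bob', to: 'carol', amount: 2 });
console.log(isValid(ledger)); // true

ledger[0].data.amount = 500;  // tamper with a past transaction
console.log(isValid(ledger)); // false -- the alteration is detectable
```

This is exactly why attackers can't quietly rewrite history on a well-replicated chain: any change to old data produces hashes that no longer match what every other participant has recorded.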
In the case of Bitcoin in particular, ownership is proven through a private key linked to a transaction, and just like physical money, these keys can easily be lost or stolen. One estimate puts the value of lost Bitcoins at $950M.

There are many pros and cons to consider when deciding whether or not to use Blockchain. The most important thing Blockchain provides is the ability to track who committed a particular transaction - for good or for bad - and when. It would certainly help a great deal with some security measures, especially when it comes to tracking what information was breached, altered, or stolen. However, it is not an end-all-be-all for keeping data secure. If Blockchain is to be used to store important data, such as financial information or client health records, it should be wrapped in a layer of other cyber security software.

About the author

Lauren Stephanian is a software developer by training and an analyst for the structured notes trading desk at Bank of America Merrill Lynch. She is passionate about staying on top of the latest technologies and understanding their place in society. When she is not working, programming, or writing, she is playing tennis, traveling, or hanging out with her good friends in Manhattan or Brooklyn. You can follow her on Twitter or Medium at @lstephanian or via her website.

What is a progressive web app?

Antonio Cucciniello
09 May 2017
4 min read
You've probably heard plenty of buzz about something called progressive web apps over the past couple of years - you might even have had the opportunity to use some of them on your devices. You're also here reading this article, so it's probably safe to say you're at least somewhat interested in learning more about progressive web apps. Let's dive into what they are, the characteristics that define them, and how progressive web apps affect you as a developer.

What's this all about then?

A progressive web app is a program that is stored on a server somewhere and delivered to the user through a web browser, but which is experienced like a native application. Stated more simply, it is a web application that feels like a native application to the user. It is built using web development technologies (browser, server, database, and so on), but it is designed with the look and feel of a native application for the end user. It is a great attempt at creating an application that combines the benefits of a web-based application and a native application.

Progressive web apps have some defining characteristics. They are:

- Reliable: The app should load instantly, even under poor network conditions.
- Lightning fast and app-like: The app should respond to the user's actions with speed and smooth interaction.
- Engaging and responsive: The app should give the feeling that it was made specifically for that device, but it should work across all platforms.
- Protected and secure: Since it is still a web app, it is served over HTTPS to make sure the contents of the app are not tampered with.
- Installable: The app can be saved to a device's home screen for offline usage.
- Linkable: The app can be shared and accessed through a URL.
- Up-to-date: The application is always up to date, using service workers.

Why should you care?

Now let's dive into why application developers should be interested in progressive web apps. As you probably noticed when reading the list above, there are plenty of benefits for the user.

First off, it keeps the simplicity and speed of developing a web application. It is built using the same web technology you have been building your web applications with, which tends to be easier and cheaper to develop than a native application, because a native application is device-specific and involves learning more technologies.

Second, it has service workers that allow users to use the application with some offline functionality. The service workers usually cache application resources so they can be used offline (see the sketch after this section). In a standard web app, you would not be able to access anything offline, but a progressive web app gives that added benefit to the user.

Third, it allows for fluidity between all of your devices. Because the user interface and the interactions with it are the same on all devices, it is easy for the user to use the progressive web app on multiple platforms.

Fourth, learning to build a progressive web application does not require learning a new technology if you have already been developing web applications for some time. All you need to do as a developer is build the web application with the correct principles in mind from the start.
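To make the offline-caching point concrete, here is a minimal, illustrative service worker sketch. The file name (sw.js), cache name, and asset list are hypothetical placeholders that would vary from app to app; this shows the cache-first pattern, not production-ready code:

```javascript
// sw.js -- a minimal, illustrative service worker (hypothetical names).
const CACHE = 'pwa-demo-v1';
const ASSETS = ['/', '/index.html', '/styles.css', '/app.js'];

// On install, pre-cache the application shell.
self.addEventListener('install', (event) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

// On fetch, answer from the cache first and fall back to the network.
// This is what keeps the app partially usable when offline.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```

The page itself would register the worker with something like `navigator.serviceWorker.register('/sw.js')`, typically after checking that `'serviceWorker' in navigator` so older browsers simply fall back to normal network behavior.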
Looking ahead

Progressive web apps are an awesome combination of a web app and a native app, bringing the benefits of both to the user in one application. You can build the application more easily, it can be used at least partially offline, it allows for a nice fluidity between all of your devices, and it does not require much extra learning on your part. I would highly suggest you take this approach into consideration when building your next application. If you want to take a look at some of the progressive web apps that are out today, check out this link. It points you to some of the better progressive web applications to date.

About the author

Antonio Cucciniello is a Software Engineer from New Jersey with a background in C, C++, and JavaScript (Node.js). His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files with their voice. He loves building cool things with software, and reading books on self-help and improvement, finance, and entrepreneurship. Follow him on Twitter @antocucciniello, and follow him on GitHub here: https://github.com/acucciniello.

Quantum computing - Trick or treat?

Prasad Ramesh
01 Nov 2018
1 min read
Quantum computing applies quantum mechanics, in quantum computers, to solve a diverse set of complex problems. It uses qubits, which, unlike classical bits, can hold superpositions of states. Quantum computers can work through certain problems involving large parameter spaces with far fewer operations than a standard computer.

What is so special about Quantum Computing?

Because they have the potential to work through and solve the complex problems of tomorrow, research and work in this area is attracting funding from everywhere. But these computers need a lot of physical space right now, rather like the very first computers in the twentieth century. Quantum computers also pose a security threat, since they are well suited to problems like factoring the very large numbers that underpin much of today's encryption. Quantum encryption, anyone? Quantum computing is even available on the cloud from different companies, and there is even a dedicated language, Q#, from Microsoft.

Using concepts like entanglement to speed up computation, quantum computing can solve complex problems. It's a tricky one, but I call it a treat. What about the security threat? Well, Dr. Alan Turing built a better machine to decrypt the messages of another machine - we'll let you think on that.

How much does it cost to build an IoT app?

Guest Contributor
13 May 2019
6 min read
According to a Gartner study, spending on connected things (IoT-related services) for 2017 was estimated at $235 billion, and the number of connected things in use is predicted to reach 14.2 billion by the end of 2019. The number of connected devices across the globe is also expected to grow by around 10 billion by the end of 2020. Research by IDC (International Data Corporation) shows that market transformation due to the escalation of IoT scaled up to approximately $1.9 trillion in 2013 and will reach $7.1 trillion by 2020.

These stats draw a clear picture: the Internet of Things is making businesses agile, fast, user-engaging and, most importantly, connected with each other. The areas where IoT is predicted to be used are growing exponentially. With that expansion, however, comes a burgeoning question: "What is the cost of building an IoT solution?"

Before estimating the cost of developing an IoT app, you should have a clear answer to the following questions:

- What is the main idea or goal of your IoT app?
- Who will be the users of your upcoming IoT app?
- What benefits will you provide to the users through the app?
- What hardware are you going to use for the app development?
- What type of features will your IoT app have?
- What might be the possible challenges and issues of your IoT app?

It's important to answer these questions: the more detail you provide to your IoT development partner, the better your app will turn out. Getting an insight into each IoT app development phase gives the developer a clear picture of the future app. It also saves a lot of time by eliminating the chances of unnecessary corrections. So, give significant consideration to the questions above. Next, let's move to the various factors that help in estimating the cost of developing an IoT app.

The time required to develop an IoT app

The development phase eats most of the time when it comes to creating an IoT app for business purposes. The process starts with app information analysis and proceeds to prototype development and visual design creation. The phases include feature and functionality research, UI/UX design, interface design, and logo and icon selection. Your IoT app development time also depends on the project size, the use of new technologies and tools, uncertain integration requirements, a growing number of visual elements, and complex UI and UX feature integration. Every aspect that consumes time pushes the app towards a higher cost. Thus, you can expect a high cost for your IoT app if you wish to incorporate all the above features in your connected environment.

Integrating advanced features in your IoT app

Often your app may require advanced feature integration, such as a payment gateway, geo-location, data encryption, third-party API integration, across-device synchronization, an auto-learning feed, or CMS integration. Integrating advanced features like social media and geo-location functionality takes much more effort and time than simpler features, which ultimately increases the app's cost. You can hire programmers to integrate these advanced features. Generally, the hourly rates of professional designers and programmers depend on the region where the developers reside:

- Eastern Europe: $30-50/hour
- Western Europe: $60-130/hour
- North America: $50-150/hour
- India: $20-50/hour

Choose IoT developers accordingly, knowing the development cost for your region.
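As a purely illustrative back-of-the-envelope calculation - the roles, hour counts, and rates below are hypothetical assumptions, not figures from any real project - you can turn hourly rates like these into a rough estimate:

```javascript
// Hypothetical estimate: hours and rates are invented for illustration only.
const team = [
  { role: 'Front-end developer', hours: 300, rate: 40 },
  { role: 'Back-end developer',  hours: 350, rate: 45 },
  { role: 'UI designer',         hours: 120, rate: 42 },
  { role: 'QA engineer',         hours: 150, rate: 45 },
];

// Total cost is simply the sum of hours multiplied by rate for each role.
const total = team.reduce((sum, member) => sum + member.hours * member.rate, 0);
console.log(`Rough development estimate: $${total.toLocaleString()}`); // $39,540
```

Swapping in your own region's rates and your project's estimated hours gives a first-order figure to sanity-check any quote you receive.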
Remember, the cost is just a rough idea and may vary with the app development requisites.

The team required for building an IoT app

Like any normal app, an IoT app requires a team of diligent and skilled developers who possess ample know-how of the latest technologies and development trends. Hiring experienced developers unquestionably costs more and will push your IoT app development towards higher prices. Your IoT app development team (with costs) may consist of:

- Front-end developer: $29.20 per hour
- Back-end developer: $29.59 per hour
- UI designer: $41.93 per hour
- QA engineer: $45 per hour
- Project manager: $53.85 per hour
- Business analyst: $39 per hour

The costs mentioned above are averages; totalling them up for your planned hours gives you the overall development cost. Don't consider this the final app investment, as it may vary according to project size, requisites, and other parameters.

Post app development support and maintenance

The development of an IoT app doesn't end at deployment; rather, the real phase starts just after it. This is the post-production phase, where the development company is supposed to provide support for the delivered project. If you have hired developers for your IoT app development, make sure they are ready to offer the best post-deployment support for your app. This can involve adding new features to the app or resolving issues found in the app's performance. Also, make sure they deliver clean code, so that anyone with the same skills can easily interpret and modify it to make future changes.

Cost based on the size of project or app

Generally, projects are categorized in three sizes: small, medium, and large. As you'd expect, a small project or less complicated app costs less than a complex one. For example, developing IoT applications for modern home appliances like a refrigerator or a home theatre is relatively easy and cost-effective. On the contrary, if you wish to develop a self-driving vehicle, it would be an expensive plan to pursue. Similarly, developing an IoT application for ECG monitors costs less, approximately $3,000-$4,000, whereas an IoT system for fitness machines requires around $30,000-$35,000. This might not be the final cost, and you may also discover some hidden costs later on.

Conclusion

It is recommended to take the assistance of an IoT app development company that has talented professionals able to establish an in-depth IoT app development cost structure. Remember, the more complex your app is, the more it will cost. So make a clear plan by understanding the needs of your customers, while also thinking about the type of features your IoT app will have.

About the author

Tom Hardy is a senior technology developer at Sparx IT Solutions. He always stays up to date with growing technology trends and keeps others apprised through his detailed and informative technology write-ups.