
Tech Guides

6 new eBooks for programmers to watch out for in March

Richard Gall
20 Feb 2019
6 min read
The biggest challenge for anyone working in tech is that you need multiple sets of eyes. Yes, you need to commit to regular, almost continuous learning, but you also need to look forward to what's coming next. From slowly emerging trends that might not even come to fruition (we're looking at you, DataOps) to version updates and product releases, for tech professionals the horizon always looms and shapes the present.

But it's not just about the big trends or releases that get coverage - it's also about planning your next (career) move, or even your next mini-project. That could be learning a new language (not necessarily new, but one you haven't yet got round to learning), trying a new paradigm, exploring a new library, or getting to grips with cloud native approaches to software development. This sort of learning is easy to overlook, but it's vital to any developer's development.

While the Packt library has a wealth of content for you to dig your proverbial claws into, if you're looking forward, Packt has some new titles available for pre-order that could help you plan your learning for the months to come. We've put together a list of some of our own top picks of our pre-order titles available this month, due to be released in late February or March. Take a look and take some time to consider your next learning journey...

Hands-On Deep Learning with PyTorch

TensorFlow might have set the pace when it comes to artificial intelligence, but PyTorch is giving it a run for its money. It's impossible to describe one as 'better' than the other - ultimately they both have valid use cases, and can both help you do some pretty impressive things with data.

Read next: Can a production ready PyTorch 1.0 give TensorFlow a tough time?

The key difference is really in the level of abstraction and the learning curve - TensorFlow is more like a library, which gives you more control, but also makes things a little more difficult. PyTorch, then, is a great place to start if you already know some Python and want to try your hand at deep learning. Or, if you have already worked with TensorFlow and simply want to explore new options, PyTorch is the obvious next step.

Order Hands-On Deep Learning with PyTorch here.

Hands-On DevOps for Architects

Distributed systems have made the software architect role incredibly valuable. This person is not only responsible for deciding what should be developed and deployed, but also the means through which it should be done and maintained. But it's also made the question of architecture relevant to just about everyone who builds and manages software. That's why Hands-On DevOps for Architects is such an important book for 2019. It isn't just for those who typically describe themselves as software architects - it's for anyone interested in infrastructure, how things are put together, and how they can be made more reliable, scalable and secure. With site reliability engineering finding increasing usage outside of Silicon Valley, this book could be an important piece in the next step of your career.

Order Hands-On DevOps for Architects here.

Hands-On Full Stack Development with Go

Go has been cursed with a hell of a lot of hype. This is a shame - it means it's easy to dismiss as a fad or fashion that will quickly disappear. In truth, Go's popularity is only going to grow as more people experience its speed and flexibility. Indeed, in today's full-stack, cloud native world, Go is only going to go from strength to strength.
In Hands-On Full Stack Development with Go you'll not only get to grips with the fundamentals of Go, you'll also learn how to build a complete full stack application built on microservices, using tools such as Gin and ReactJS.

Order Hands-On Full Stack Development with Go here.

C++ Fundamentals

C++ is a language that often gets a bad rap. You don't have to search the internet that deeply to find someone telling you that there's no point learning C++ right now. And while it's true that C++ might not be as eye-catching as languages like, say, Go or Rust, it nevertheless still plays a very important role in the software engineering landscape. If you want to build performance-intensive desktop apps, C++ is likely going to be your go-to language.

Read next: Will Rust replace C++?

One of the sticks often used to beat C++ is that it's a fairly complex language to learn. But rather than being a reason not to learn it, if anything the challenge it presents to even relatively experienced developers is one well worth taking on. At a time when many aspects of software development seem to be getting easier, as new layers of abstraction remove problems we previously might have had to contend with, C++ bucks that trend, forcing you to take a very different approach. And although this approach might not be one many developers want to face, if you want to strengthen your skillset, C++ could certainly be a valuable language to learn.

The stats don't lie - C++ is placed 4th on the TIOBE index (as of February 2019), beating JavaScript, and commands a considerably high salary - indeed.com data from 2018 suggests that C++ was the second highest earning programming language in the U.S., after Python, with a salary of $115K. If you want to give C++ a serious go, then C++ Fundamentals could be a great place to begin.

Order C++ Fundamentals here.

Data Wrangling with Python & Data Visualization with Python

Finally, we're grouping two books together - Data Wrangling with Python and Data Visualization with Python. This is because they both help you to really dig deep into Python's power, and better understand how it has grown to become the definitive language of data. Of course, R might have something to say about this - but it's a fact that over the last 12-18 months Python has really grown in popularity in a way that R has been unable to match. So, if you're new to any aspect of the data science and analysis pipeline, or you've used R and you're now looking for a faster, more flexible alternative, both titles could offer you the insight and guidance you need.

Order Data Wrangling with Python here. Order Data Visualization with Python here.
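As a purely illustrative sketch of the wrangle-then-visualize workflow both titles cover (this is not taken from either book; it assumes pandas and matplotlib are installed, and the sales.csv file and its column names are made up for the example):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Load a small, messy dataset (hypothetical file and columns).
    df = pd.read_csv("sales.csv", parse_dates=["date"])

    # Wrangling: drop incomplete rows, derive a month column, aggregate.
    df = df.dropna(subset=["region", "revenue"])
    df["month"] = df["date"].dt.to_period("M")
    monthly = df.groupby(["month", "region"])["revenue"].sum().unstack()

    # Visualization: one line per region.
    monthly.plot(marker="o", title="Monthly revenue by region")
    plt.ylabel("Revenue")
    plt.tight_layout()
    plt.show()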

How to migrate from Magento 1 to Magento 2: a comprehensive guide

Guest Contributor
15 Aug 2019
8 min read
Migrating from Magento 1 to Magento 2 has been one of the most commonly discussed topics in the world of eCommerce. Magento 2 was made available in 2015, and Magento subsequently declared that it will end official support for Magento 1 in 2020. This makes the migration to Magento 2 not only desirable but also necessary.

Why you should migrate to Magento 2

As mentioned above, support for Magento 1 ends in 2020. Here's a list of the six most important reasons why migration from Magento 1.x to Magento 2 is important for your Magento store.

Security: Once official support for Magento 1 ends, security patches for the different versions of Magento 1.x will no longer be offered. That means that if you continue running your website on Magento 1.x, you'll be exposed to a variety of risks and threats, many of which may have no official solution.

Competition: When your store is practically the only store that hasn't migrated to Magento 2, you are at a severe competitive disadvantage. While your competitors enjoy all the innovations that will continue happening on Magento 2, your Magento 1 website will be left out.

Mobile friendly: From regular shopping to special holiday purchases, an increasingly large proportion of e-commerce business comes from mobile devices. Magento 2 is better optimized for mobile phones than Magento 1.

Performance: In the e-commerce industry, better performance leads to better business, increased revenue and higher conversions. Magento 2 enables up to 66% faster add-to-cart server response times than Magento 1, which makes it your best bet for growth.

Checkout: The number of steps for checkout has been slashed in Magento 2, marking a significant improvement in the buying process. Magento 2 also offers the Instant Purchase feature, which lets repeat customers purchase faster.

Interface: Magento 1 had an interface that wasn't always friendly. Magento 2 has delved deeper to find the exact pain points and made the new interface far more user-friendly. Adding new products, editing product features or simply looking for tools has become easier with Magento 2.

FAQs for Magento migration

By when should I migrate my store? All forms of official support for Magento 1 will be discontinued in June 2020, so you should migrate your store before then. Your Magento e-commerce store should be ready well before the deadline, so it's highly recommended you start working towards the migration right away.

How long will the migration take? It's difficult to answer that question without further information about your store. The size of your store, its database and the kind of customization you need are some of the factors that influence the time horizon.

Should I hire a Magento developer for the migration or should I let my in-house team deal with it? As with the earlier question, this one too needs further information. If you're having your own team do it, allow them a good deal of time to learn a number of things and factor in a few false starts as well. Doing the migration all by yourself also means you'll have to divert a lot of in-house resources to it, which can negatively impact your ongoing business and put undue pressure on your revenue streams. Nearly all Magento stores have found that they get better outcomes if they instead hire an experienced Magento 2 developer.

Pre-migration checklist for moving from Magento 1 to Magento 2

Before you carry out the actual migration, you'll want to prepare your site for it.
Here's your pre-migration checklist for Magento 1 to Magento 2:

Filter your data. As you move to a better, more sophisticated technology, you don't want to carry outdated data or data that's in no way relevant to your business needs. There's no point loading the new system with stuff that will only hog resources without ever being useful, so begin by removing data that's not going to be useful.

Critique your site. This is perhaps the best time to have a close look at your site and seriously consider upgrading it. Advanced technology like Magento 2 will produce even better results if your site reflects the current trends in e-commerce store design. Magento 2 offers better opportunities, and you don't want to be left out just because your site isn't equipped to capitalize on them.

Build redundancy. Despite all your planning, there's always a small risk of some kind of data loss. To safeguard against it, make sure you replicate your Magento 1.x database. When you actually implement the migration, use this replicated database as your source, without disturbing the original.

Prepare to freeze admin activities. When you begin the dry run or the actual migration, continuing your administrative activities can alter your database. That would result in a patchy migration with some loose ends. To prevent this, go through a drill to prepare your business to stop all admin activities during both the dry run and the actual implementation of the migration from Magento 1 to Magento 2.

Finalize your blueprints. Unless absolutely critical, don't waver from your original plans. Sticking to what you had planned will produce the best results. Changes that have not been factored in can slow down or weaken your migration and even make it more expensive.

Steps for migration from Magento 1 to Magento 2

Migration from Magento 1 to Magento 2 isn't a single activity; it is made up of multiple interdependent activities: Data Migration, Theme Migration, Customization Migration, and Extension Migration. Let's look at each of them separately.

Data Migration

Step 1: Download Magento 2 without taking in the sample data. Follow the steps given for the setup and install the platform.

Step 2: You will need the Data Migration Tool to transfer your data. You can download it from the official website. Remember, the Data Migration Tool version should be the same as the Magento 2 codebase version.

Step 3: Feed in the public and private keys for authorization. The keys too are available from the Magento site.

Step 4: Configure the Data Migration Tool. How you configure it depends on which Magento 2 edition (Community Edition or Enterprise Edition) you will be using. Note that you may not migrate from Enterprise Edition to Community Edition.

Step 5: The next step is mapping between the Magento 1 and Magento 2 databases.

Step 6: Go into maintenance mode to prepare for the actual migration. This will stop all administrative activities.

Step 7: In the final step, you migrate the Magento site, along with system configuration such as shipping and payments.

Theme Migration

Unlike Data Migration, Theme Migration in Magento doesn't have standard tools to take care of it. That's because the frontend templates and their code are hugely different in Magento 1.x and Magento 2.x. So instead of looking for a tool, the best way out is to get a new theme. You could either buy a Magento 2 theme that suits your style and requirements and customize it, or develop one.
This is one of the reasons why we suggested upgrading your entire Magento store.

Customization Migration

The name customization itself suggests that what works for one online store won't fit another, which is why there's no single way of migrating any of the customizations you might have done for your Magento 1 store. You'll be required to design all the customizations you need. However, there's an important point to remember: because of its efficiency and versatility, your store on Magento 2 may need less customization than you believe. So before you hurry into re-designing everything, take time to study what exactly you need and to what degree Magento 2 satisfies those needs. As you migrate from Magento 1.x to Magento 2.x, the number of customizations will possibly turn out to be considerably fewer than what you originally planned.

Extension Migration

The same rule applies to extensions and plugins. The plugins that worked for Magento 1 will likely not work for Magento 2, and you will have to build them again. Instead of interpreting this as something frustrating, you can take it as an opportunity to correct minor errors and improve the overall experience. A dedicated Magento developer who specializes in Magento migration services can be of great help here.

Final remarks on Magento migration

If all this sounds a little overwhelming, relax, you're not alone. Because Magento 2 is considerably superior to Magento 1, the migration may appear more challenging than what you had originally bargained for. In any case, the migration is compulsory; otherwise, you'll face security threats and won't be able to handle the competition. From 2020, this migration will no longer be a choice, so you might as well begin early and give yourself more time to plan things out. If you need help, a competent Magento web development company can make the migration more efficient and easier for you.

Author Bio

Kaartik Iyer is the Founder & CEO at Infigic Technologies, a web and mobile app development company. Kaartik has contributed to sites like Huffington Post, Yourstory and Tamebay, to name a few. He's passionate about fitness, entrepreneurship, startups and all things digital. You can connect with him on LinkedIn for a quick chat on any of these topics.

Read next:
Why should your e-commerce site opt for Headless Magento 2?
Adobe is going to acquire Magento for $1.68 Billion
5 things to consider when developing an eCommerce website

Is data science getting easier?

Erik Kappelman
10 Sep 2017
5 min read
The answer is yes, and no. This is a question that could've easily been applied to textile manufacturing in the 1890s, and could've received a similar answer. By this I mean that textile manufacturing improved leaps and bounds throughout the industrial revolution; however, despite their productivity, textile mills were some of the most dangerous places to work. Before I further explain my answer, let's agree on a definition of data science. Wikipedia defines data science as "an interdisciplinary field about scientific methods, processes, and systems to extract knowledge or insights from data in various forms, either structured or unstructured." I see this as the process of acquiring, managing, and analyzing data.

Advances in data science

First, let's discuss why data science is definitely getting easier. Advances in technology and data collection have made data science easier. For one thing, data science as we know it wasn't even possible 40 years ago, but due to advanced technology we can now analyze, gather, and manage data in completely new ways. Scripting languages like R and Python have mostly replaced more convoluted languages like Haskell and Fortran in the realm of data analysis. Tools like Hadoop bring together a lot of different functionality to expedite every element of data science. Smartphones and wearable tech collect data more effectively and efficiently than older data collection methods, which gives data scientists more data of higher quality to work with. Perhaps most importantly, the utility of data science has become more and more recognized throughout the broader world. This helps provide data scientists the support they need to be truly effective. These are just some of the reasons why data science is getting easier.

Unintended consequences

While many of these tools make data science easier in some respects, there are also some unintended consequences that might actually make data science harder. Improved data collection has been a boon for the data science industry, but using the data that is streaming in is similar to drinking out of a firehose. Data scientists are continually required to come up with more complicated ways of taking data in, because the stream of data has become incredibly strong. While R and Python are definitely easier to learn than older alternatives, neither language is usually accused of being parsimonious. What a skilled Haskell programmer might be able to do in 100 lines might take a less skilled Python scripter 500 lines. Hadoop, and tools like it, simplify the data science process, but it seems like there are 50 new tools like Hadoop a day. While these tools are powerful and useful, sometimes data scientists spend more time learning about tools and less time doing data science, just to keep up with the industry's landscape. So, like many other fields related to computer science and programming, new tech is simultaneously making things easier and harder.

Golden age of data science

Let me rephrase the title question in an effort to provide even more illumination: is now the best time to be a data scientist or to become one? The answer to this question is a resounding yes. While all of the drawbacks I brought up remain true, I believe that we are in a golden age of data science, for all of the reasons already mentioned, and more. We have more data than ever before and our data collection abilities are improving at an exponential rate.
The current situation has gone so far as to create the necessity for a whole new field of data analysis: Big Data. Data science is one of the vastest and most quickly expanding human frontiers at present. Part of the reason for this is what data science can be used for. Data science can effectively answer questions that were previously unanswered, which of course makes for an attractive field of study from a research standpoint.

One final note on whether or not data science is getting easier. If you are a person who actually creates new methods or techniques in data science, especially if you need to support these methods and techniques with formal mathematical and scientific reasoning, data science is definitely not getting easier for you. As I just mentioned, Big Data is a whole new field of data science created to deal with new problems caused by the efficacy of new data collection techniques. If you are a researcher or academic, all of this means a lot of work. Bootstrapped standard errors were used in data analysis before a formal proof of their legitimacy was created. Data science techniques might move at the speed of light, but formalizing and proving these techniques can literally take lifetimes. So if you are a researcher or academic, things will only get harder. If you are more of a practical data scientist, it may be slightly easier for now, but there's always something!

About the Author

Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.

The Tao of DevOps

Joakim Verona
21 Apr 2016
4 min read
What is Tao? It's a complex idea - but one method of thinking about it is to find the natural way of doing things, and then make sure you do things that way. It is intuitive knowing, an approach that can't be grasped just in principle but only by putting it into practice in daily life. The principles of Tao can be applied to almost anything. We've seen the Tao of Physics, the Tao of Pooh, even the Tao of Dating. Tao principles apply just as well to DevOps - because who can know fully what DevOps actually is? It is an idiom as hard to define as "quality" - and good DevOps is closely tied to the good quality of a software product.

Want a simple example? A recipe for cooking a dish normally starts with a list of ingredients, because that's the most efficient way of describing cooking. When making a simple dessert, the recipe starts with a title: "Strawberries And Cream". Already we can infer a number of steps in making the dish. We must acquire strawberries and cream, and probably put them together on a plate. The recipe will continue to describe the preparation of the dish in more detail, but even if we read only the heading, we will make few mistakes.

So what does this mean for DevOps and product creation? When you are putting things together and building things, the intuitive and natural way to describe the process is to do it declaratively. Describe the "whats" rather than the "hows", and then the "hows" can be inferred.

The Tao of Building Software

Most build tools have at their core a way of declaring relationships between software components. Here's a Make snippet:

    a : b
        cc b

And here's an Ant snippet (abbreviated):

    cc b
    </build>

And a Maven snippet (also abbreviated):

    <dependency>
    lala
    </dep>

Many people think they wound up in a Lovecraftian hell when they see XML, even though the brackets are perfectly euclidean. But if you squint hard enough, you will see that most tools at their core describe dependency trees. The Apache Maven tool is well-known, and very explicit about the declarative approach. So, let's focus on that and try to find the Tao of Maven.

When we are having a good day with Maven and we are following the ways of Tao, we describe what type of software artifact we want to build and the components we are going to use to put it together. That's all. The concrete building steps are inferred. Of course, since life is interesting and complex, we will often encounter situations where the way of Tao eludes us. Consider this example:

    type: pom
    antcall tar together ../*/target/*.jar

Although abbreviated, I have observed this antipattern several times in real-world projects. What's wrong with it? After all, this antipattern occurs because the alternatives are non-obvious, or more verbose. You might think it's fine. But first of all, notice that we are not describing whats (at least not in a way that Maven can interpret). We are describing hows. Fixing this will probably require a lot of work, but any larger build will ensure that it eventually becomes mandatory to find a fix.

Pause (perhaps in your Zen Garden) and consider that dependency trees are already described within the code of most programming languages. Isn't the "import" statement of Java, Python and the like enough? In theory this is adequate - if we disregard the dynamism afforded by Java, where it is possible to construct a class name as a string and load it. In practice, there are a lot of different artifact types that might contain various resources.
Even so, it is clearly possible in theory to package all required code if the language just supported it. JSR 294 - "Modularity in Java" - is an effort to provide such support at the language level.

In Summary

So what have we learned? The two most important lessons are simple: when building software (or indeed, any product), focus on the "whats" before the "hows". And when you're empowered with build tools such as Maven, make sure you work with the tool rather than around it.

About the Author

Joakim Verona is a consultant with a specialty in Continuous Delivery and DevOps, and the author of Practical DevOps. He has worked as the lead implementer of complex multilayered systems such as web systems, multimedia systems, and mixed software/hardware systems. His wide-ranging technical interests led him to the emerging field of DevOps in 2004, where he has stayed ever since. Joakim completed his masters in computer science at Linköping Institute of Technology. He is a certified Scrum master, Scrum product owner, and Java professional.

Hierarchical Data Format

Janu Verma
10 Jan 2017
6 min read
Hierarchical Data Format (HDF) is an open source file format for storing huge amounts of numerical data. It's typically used in research applications to distribute and access very large datasets in a reasonable way, without centralizing everything through a database. We can use the HDF5 data format for pretty fast serialization of, or random access to, fairly large datasets in a local/development environment. The Million Song Dataset, for example, is distributed this way. HDF was developed by the National Center for Supercomputing Applications.

Think of HDF as a file system within a file. It lets you organize the data hierarchically and manage a large amount of data very efficiently. Every object in an HDF5 file has a name, and they are arranged in a POSIX-style hierarchy with / separators, e.g. /path/to/resource.

HDF5 has two kinds of objects: Groups and Datasets. Groups are folder-like objects which contain datasets and other groups. Datasets contain the actual data in the form of arrays.

HDF in Python

For my work, I had to study the data stored in HDF5 files. These files are not human readable, and so I had to write some code in Python to access the data. Luckily, there is the PyTables package, which has a framework to parse HDF5 files. The PyTables package does much more than that. PyTables can be used in any scenario where you need to save and retrieve large amounts of multidimensional data and provide metadata for it. PyTables can also be employed if you need to structure some portions of your cluttered RDBMS. For example, if you have very large tables in your existing relational database, then you can move those tables to PyTables so as to reduce the burden of your existing database while efficiently keeping those huge tables on disk.

Reading an HDF5 file in Python:

    from tables import *
    h5file = open_file("myHDF5file.h5", "a")

All the nodes in the file:

    for node in h5file:
        print node

This will print all the nodes in the file. This is of little use, as it is like listing all the files in my filesystem. The main advantage of a hierarchical framework is that you want to retrieve data in a hierarchical fashion. So the first step would be to look at all the groups (folders):

    for group in h5file.walk_groups():
        print group

    > / (RootGroup) ''
      /group1 (Group)
      /group2 (Group)

We have 3 groups in this file: the root, group1, and group2. Everything is either a direct or indirect child of the root, as in a tree. Think of the home folder on your computer. Now, we would want to look at the contents of the groups (which will be either subgroups or datasets):

    print h5file.root._v_children

    > {'group1': /group1 (Group) ''
         children := ['group2' (Group), '/someList' (Array(40000,))],
       'list2': /list2 (Array(2500,)) ''
         atom := Int8Atom(shape=(), dflt=0)
         maindim := 0
         flavor := 'numpy'
         byteorder := 'irrelevant'
         chunkshape := None,
       'tags': /tags (Array(2, 19853)) ''
         atom := Int64Atom(shape=(), dflt=0)
         maindim := 0
         flavor := 'numpy'
         byteorder := 'little'
         chunkshape := None}

_v_children gives a dictionary of the children of a group - the root in the above example. Now we can see that there are 3 children hanging from the root node: a group and two arrays. We can also read that group1 has two children - a group and an array. We saw earlier that h5file.walk_groups() is a way to iterate through all the groups of the HDF5 file; this can be used to loop over the groups:

    for group in h5file.walk_groups():
        nodes = group._v_children
        namesOfNodes = nodes.keys()
        print namesOfNodes

This will print the names of the children for each group.
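If you don't have an HDF5 file handy, here is a small self-contained sketch you can use to try the snippets above. It is not from the original article; it assumes PyTables is installed, uses made-up file, group and array names, and uses Python 3 print calls (the article's own snippets are written in Python 2 style):

    from tables import open_file

    # Create a small HDF5 file with one group and two arrays (hypothetical names).
    h5file = open_file("demo.h5", mode="w", title="Demo file")
    group1 = h5file.create_group("/", "group1", "First group")
    h5file.create_array(group1, "array1", [1, 2, 4, 9], "A small integer array")
    h5file.create_array("/", "list2", list(range(10)), "An array attached to the root")

    # Walk the hierarchy and print each group's children, as in the examples above.
    for group in h5file.walk_groups():
        print(group._v_name, "->", list(group._v_children.keys()))

    h5file.close()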
One can do more interesting things using .walk_groups(). A very important procedure one can run on a group is:

    x = group._v_name
    for array in h5file.list_nodes(x, classname="Array"):
        array_name = array._v_name
        array_contents = array.read()
        print array_contents

This will print the contents of all the arrays that are children of the group. The supported classes in classname are 'Group', 'Leaf', 'Table', and 'Array'. Recall that array.read() for each array gives a Numpy array, so all the Numpy operations like ndim, shape, etc., work for these objects. With these operations, you can start exploring an HDF5 file. For more procedures and methods, check out the tutorials on PyTables.

Converting HDF to JSON

I wrote a class to convert the contents of the HDF5 file into a JSON object. The code can be found here. Feel free to use and comment. The motivation for this is two-fold:

JSON format provides a very easy tool for data serialization, and it has always been my first choice for serialization/deserialization. The JSON schema is used in many NoSQL databases, e.g. Membase and MongoDB. We can store information in JSON schema in relational databases also. In fact, there are claims that PostgreSQL 9.4 is now faster than MongoDB for storing JSON documents.

We know that the HDF5 files are not human-readable. This class renders them into human-readable data objects consisting of key-value pairs.

The class creates a JSON file of the same name as the input HDF5 file, with the .json extension. When decoded, the file contains a nested Python dictionary:

    HDF5toJSON.py hdf2file.h5

    json_data = converter(h5file)
    contents = json_data.jsonOutput()
    > 'hdf2file.json'

Recall that every object in an HDF5 file has a name and is arranged in a POSIX-style hierarchy with / separators, e.g. /group1/group2/dataArray. I wanted to maintain the same hierarchy in the JSON file also. So, if you want to access the contents of dataArray in the JSON file:

    json_file = open('createdJSONfile.json')
    for line in json_file:
        record = json.loads(line)
        print record['/']['group1']['group2']['dataArray']

The main key is always going to be the root key, '/'. This class also has methods to access the contents of a group directly without following the hierarchy. If you want to get a list of all the groups in the HDF5 file:

    json_data = converter(h5file)
    groups = json_data.Groups()
    print groups
    > ['/', 'group1', 'group2']

One can also directly look at the contents of group1:

    json_data = converter(h5file)
    contents = json_data.groupContents('group1')
    print contents
    > {'group2': {'dataArray': [12, 24, 36]}, 'array1': [1, 2, 4, 9]}

Or, if you are interested in the group objects hanging from group1:

    json_data = converter(h5file)
    groups = json_data.subgroups('group1')
    > ['group2']

About the author

Janu Verma is a Researcher at IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology, and healthcare analytics. He has held research positions at Cornell University, Kansas State University, Tata Institute of Fundamental Research, Indian Institute of Science, and Indian Statistical Institute. He has written papers for IEEE Vis, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors Journal, etc. His current focus is on the development of visual analytics systems for prediction and understanding. He advises startups and companies on data science and machine learning in the Delhi-NCR area.

Cognitive IoT: How Artificial Intelligence is remoulding Industrial and Consumer IoT

Savia Lobo
15 May 2018
8 min read
The Internet of Things (IoT) has gained huge traction due to the ability to gather data from sensors embedded within a variety of IoT devices, including closed-circuit cameras, vehicles, smart homes, smart appliances, and many more. Think of IoT as a network of devices which gathers raw and real-time data, analyzes it, and provides desired outputs that benefit the users. But what happens after the data is analyzed? What is done with the analyzed report? The data has to be acted upon, and this is where Artificial Intelligence comes in. AI can take hold of all that data crunched by IoT devices and act on it in a successful and organized manner. Industries that already use IoT devices can, when powered by AI, automate certain mundane workflows such as documentation, machine maintenance notification alerts, and so on.

Intelligent things with AI-backed IoT

The saying 'with great power comes great responsibility' is true for AI-powered IoT. AI-backed IoT devices can make complex decisions, perform self-learning, and carry out autonomous decision making. One can group IoT applications broadly into two categories based on who the end user is, i.e. Industrial IoT for enterprises and Consumer IoT for individual consumers. Let's look into some of the major domains that AI has enhanced.

1. Industrial IoT

Also known as IIoT, IoT has impacted industries by bringing in unprecedented opportunities. However, it has also brought in a wave of new risks to businesses. IIoT provides the internet with a new ability to control machines, factories and the industrial infrastructure. Some of the characteristics of IIoT include:

Improved interoperability, where the machines and sensors communicate via IoT.

Availability of transparent information, with the presence of more sensors meaning an abundance of information.

Autonomous decision making, which now lies in the hands of the IoT devices; they can detect emergency situations, for instance when a machine needs servicing, and act on them immediately.

Manufacturing

Manufacturing is by far the biggest industry affected by the IoT wave. According to a report, 'global manufacturers will invest $70 billion on IoT solutions in 2020, which is up from the $29 billion they spent in 2015'. Let's see how some of the processes in manufacturing get a lift with AI-enabled IoT:

Detection of machine health using predictive maintenance: Predictive maintenance involves the collection and evaluation of data from machines in order to increase efficiency and optimize the maintenance processes. With predictive maintenance, manufacturers can determine the condition of their equipment and also predict when machines need maintenance. A startup named Konux, based in Munich, Germany, has developed a machine-learning powered monitoring system for train switches. The Konux switch sensor can be retrofitted onto existing train networks, providing real-time monitoring of track conditions and rolling stock. Data is transmitted wirelessly to the Konux Kora platform, which uses predictive algorithms based on machine learning to alert staff to specific problems as well as drive recommendations for maintenance.

Supply Chain Optimization: With an IoT-optimized supply chain, manufacturers can get hold of real-time data and analyze issues to act upon them before the onset of any major problem. This in turn reduces inventory and capital requirements. In order to track a product, companies have set up smart shelves, which keep a record of when a product has been removed, the total number of products, and so on. The smart shelf is connected to the entire network, which is linked to the planning and demand sensing engine. Here, AI-powered decision support systems help translate those demand signals into production and order processes.

Read next: 'How AI is transforming the manufacturing industry' for a more in-depth look at AI's impact on the manufacturing industry.

Retail

Adoption of IIoT in retail has upped the game for online retailers. Retail stores now feature in-store advertising and gesture walls. These walls help customers search merchandise and offers, and buy products with simple gestures. Retailers also have automated checkouts, or more simply self-checkout kiosks. This enables customers to avoid long queues and pay for products using a mobile app based payments system which scans the QR code embedded on the products, contactless payments or other means. With IoT-enabled sensors, retailers can now extract insights about the most popular areas people pass by and where they stop to see the merchandise. Retailers can then send promotional text messages and discount coupons directly to the customer's phone while they are in the store's vicinity. For instance, Apple's iBeacon enables devices to alert apps and websites about customer location. Retailers have also adopted inventory optimization, using digital shelf and RFID techniques for managing their inventories effectively.

Healthcare

IoT in healthcare is proving to be a boon for patients by decreasing costs and reducing multiple visits to doctors. With these healthcare solutions, patient monitoring can be done in real time. Thanks to this real-time data, diseases can be treated well in advance, before they reach a malignant stage. These IoT-enabled healthcare solutions provide accurate collection of data and automated workflows combined with data-driven decisions. This cuts down on waste, reduces system costs and, most importantly, minimizes errors. Also, the creation and management of drugs is a major expenditure in the healthcare industry. With IoT processes and devices, it is possible to manage these costs better. A new generation of "smart pills" is allowing healthcare organizations to ensure that a patient takes his or her medication, while also collecting other vital data.

Apart from these major applications of IoT in the industrial sectors, it has also affected sectors such as telecommunications, energy, and government. Next up, we move on to explaining how AI-backed IoT can affect and enhance the consumer domain.

2. Consumer IoT

Consumers go for services that provide them with an easy way to do mundane tasks. Let us have a look at some examples where AI has intelligently assisted IoT for consumers' benefit.

Connected Vehicles

Connected vehicles use a number of different communication technologies to communicate with the driver, with other cars on the road, with roadside infrastructure, and with the cloud:

Vehicle-to-vehicle (V2V): This tech helps wirelessly exchange information about the speed and position of surrounding vehicles. It helps in avoiding crashes, easing traffic congestion, and improving the environment.

Vehicle-to-infrastructure (V2I): These technologies capture vehicle-generated traffic data wirelessly and provide information such as warnings from the infrastructure to the vehicle that inform the driver of safety, mobility, or environment-related conditions.
Vehicle-to-cloud (V2C): A vehicle-to-cloud infrastructure integrates NaaS (Network as a Service) into the automotive ecosystem and allows the provisioning of vehicle-based services for the automobile user.

Connected homes

These AI-enabled IoT devices and services can automatically respond to preset rules, be remotely accessed and managed by mobile apps or a browser, and send alerts or messages to the user. For instance, Google Home, with a built-in Google Assistant, controls the home and helps people with lists, translation, news, music, calendar and much more. Google Home can also answer any questions asked of it, thanks to the huge Google Knowledge Graph it is connected to. Similarly, Amazon's Echo, a voice-controlled speaker, and Apple's HomePod also assist by collecting the data they get via voice. The AI can also get all devices within the home connected, with the help of Wi-Fi. With the latest IFTTT technology, your Google Home can talk to Nest and adjust the temperature of your home as per your requirement or the external temperature change.

Health and lifestyle

AI integrated with predictive analytics within embedded devices such as fitness apps, health trackers, diet planners, and so on makes them intelligent and personalized. For instance, the Fitbit Coach app paired with a Fitbit has a huge database. The app uses complex algorithms to extract meaningful information from the user data, which is further used to recommend highly tailored workout plans. Also, AthGene uses ML algorithms to convert genetic information into valuable insights for customizing fitness regimens, diet plans, and lifestyle changes for users.

IoT used to be only about devices monitoring data and giving insights in real time. AI added the efficiency factor and gave these systems the power to make decisions. AI with IoT has a bright future; one can expect smart machines managed via Echo or Google Home in the years to come.

Read next:
How Google's DeepMind is creating images with artificial intelligence
Customer Relationship Management just got better with Artificial Intelligence
5 Ways Artificial Intelligence is Transforming the Gaming Industry
What can Google Duplex do for businesses?

Natasha Mathur
16 May 2018
9 min read
When talking about the capabilities of AI-driven digital assistants, the most talked about issue is their inability to converse the way a real human does. The robotic tone of virtual assistants has long limited them from imitating real humans. And it's not just the flat monotone; it's about understanding the nuances of the language, pitches, intonations, sarcasm, and a lot more. Now, what if there emerges a technology that is capable of sounding and behaving almost human? Well, look no further: Google Duplex is here to dominate the world of digital assistants.

Google introduced the new Duplex at Google I/O 2018, their annual developer conference, last week. But what exactly is it? Google Duplex is a newly added feature to the famed Google Assistant. Adding to the capabilities of Google Assistant, it is able to make phone calls for users and imitate natural human conversation almost perfectly to get day-to-day tasks (such as booking table reservations, hair salon appointments, etc.) done in an easy manner. It includes pause-fillers and phrases such as "um", "uh-huh", and "erm" to make the conversation sound as natural as possible. Don't believe me? Check out the audio yourself!

Audio: Google Duplex booking appointments at a hair salon (https://hub.packtpub.com/wp-content/uploads/2018/05/Google-Duplex-hair-salon.mp3)

Audio: Google Duplex making table reservations at a restaurant (https://hub.packtpub.com/wp-content/uploads/2018/05/Google-Duplex-table-reservation.mp3)

The demo call recording of the assistant and the business employee, presented by Sundar Pichai, Google's CEO, during the opening keynote, befuddled the entire world about who's the assistant and who's the human, making it go noticeably viral. A lot of questions are buzzing around whether Google Duplex just passed the Turing Test. The Turing Test assesses a machine's ability to present intelligence closer or equivalent to that of a human being. Did the new human-sounding robot assistant pass the Turing Test yet? No, but it's certainly the voice AI that has come closest to passing it.

Now how does Google Duplex work?

It's quite simple. Google Duplex finds out the information (you need) that isn't out there on the internet by making a direct phone call. For instance, a restaurant has shifted location and the new address is nowhere to be found online. Google Duplex will call the restaurant and check on their new address for you. The system comes with a self-monitoring capability, helping it recognize complex tasks that it cannot accomplish on its own. Such cases are signaled to a human operator, who then takes care of the task.

To get a bit technical, Google Duplex makes use of Recurrent Neural Networks (RNNs), which are created using TensorFlow Extended (TFX), a machine learning platform. Duplex's RNNs are trained on phone conversation data that has been through data anonymization, a technique that helps protect the identity of a company or an individual by removing the data sets related to them. The output of Google's automatic speech recognition technology, the conversation history and different parameters of the conversation are used by the network. The model also makes use of hyperparameter optimization from TFX, which further enhances it.

But how does it sound natural?
Google uses concatenative text-to-speech (TTS) along with a synthesis TTS engine (using Tacotron and WaveNet) to control the intonation depending on the circumstances. Concatenative TTS is a technique that converts normal text into speech by concatenating, or linking together, recorded speech pieces. The synthesis TTS engine helps developers modify the speech rate, volume, and pitch of the synthesized output. Including speech disfluencies ("hmm"s, "erm"s, and "uh"s) makes Duplex sound more human. These speech disfluencies are added when very different sound units are combined in the concatenative TTS, or by adding synthetic waits. This allows the system to signal in a natural way that it is still processing (equivalent to what humans do when trying to sort out their thoughts). Also, the delay or latency should match people's expectations. Duplex is capable of figuring out when to give slow or fast responses using low-confidence models or faster approximations. Google also found that including more latency helps make the conversation sound more natural.

Some potential applications of Google Duplex for businesses

Now that we've covered the what and how of this new technology, let's look at five potential applications of Google Duplex in the immediate future.

Customer Service

Basic forms of AI using natural language processing (NLP), such as chatbots and the existing voice assistants such as Siri and Alexa, are already in use within the customer care industry. Google Duplex paves the way for an even more interactive form of engaging customers and gaining information, given its spectacular human-sounding capability. According to Gartner, "By 2018, 30% of our interactions with technology will be through 'conversations' with smart machines". With Google Duplex being the latest smart machine introduced to the world, the basic operations of the customer service industry will become easier, more manageable and more efficient. From providing quick solutions to initial customer support problems to delivering internal services to employees, Google Duplex perfectly fills the bill. And it will only get better with further advances in NLP.

So far, chatbots and digital assistants have been miserable at handling irate customers. I can imagine Google Duplex in John Legend's smooth voice calming down an angry customer or even making successful sales pitches to potential leads with all its charm and suavity! Of course, Duplex must undergo the right customer management training with a massive amount of quality data on what good and bad handling look like before it is ready for such a challenge. Another area of customer service where Google Duplex can play a major role is IT support. Instead of connecting with a human operator, the user will first get connected to Google Duplex, making the entire experience friendly and personalized from the user's perspective and saving major costs for organizations.

HR Department

Google Duplex can also extend a helping hand to the HR department. The preliminary rounds of talent acquisition, where hiring executives make phone calls to their respective candidates, could be handled by Google Duplex, provided it gets the right training. Making note of the basic qualifications and candidate details, and scheduling interviews, are all functions that Google Duplex should be able to perform effectively. The Google Assistant can collect the information and then further rounds can be conducted by the human HR personnel.
This could greatly cut down on the time HR executives spend on the first few rounds of shortlisting, leaving them free to focus their time on other strategically important areas of hiring.

Personal assistants and productivity

As presented at Google I/O 2018, Google Duplex is capable of booking appointments at hair salons, booking table reservations and finding out holiday hours over the phone. It is not a stretch to assume that it can therefore also order takeaway food over a phone call, check with the delivery person regarding the order, cancel appointments, make business inquiries, and so on. Apart from that, it's a great aid for people with hearing loss, as well as people who do not speak the local language, by allowing them to carry out tasks over the phone.

Healthcare Industry

There is already enough talk surrounding the use of Alexa, Siri, and other voice assistants in healthcare, and Google Duplex is another new addition to the family. With its natural way of conversing, Duplex can: let patients know their wait time for emergency rooms; check with the hospital regarding their health appointments; and order the necessary equipment for hospital use. Another allied area is elder care. Google Duplex could help reduce ailments related to loneliness by engaging with users at a more human level. It could also assist with preventive care and with the management of lifestyle diseases such as diabetes by ensuring patients continue their med intake, keep their appointments, provide emergency first aid help, call 911, etc.

Real Estate Industry

Duplex-enabled Google Assistants will help make realtors' tasks easy. Duplex can help call potential sellers and buyers, thereby making it easy for realtors to select the respective customers. The conversation between Google Duplex (helping a realtor) and a customer wanting to buy a house could look something like this:

Google Duplex: Hi! I heard you are house hunting. Are you looking to buy or sell a property?
Customer: Hey, I'm looking to buy a home in the Washington area.
Google Duplex: That's great! What part of Washington are you looking in?
Customer: I'm looking for a house in Seattle. 3 bedrooms and 3 baths would be fine.
Google Duplex: Sure, umm, may I know your budget?
Customer: Somewhere between $749,000 and $850,000, is that fine?
Google Duplex: Ahh, okay, sure. I've made a note and I'll call you once I find the right matches.
Customer: Yeah, sure.
Google Duplex: Okay, thanks.
Customer: Thanks, bye!

Google Duplex then makes a note of the details on the realtor's phone, thereby greatly narrowing down the effort realtors spend cold calling potential sellers. At the same time, the broker will also receive an email with the consumer's details and contact information for a follow-up.

Every rose has its thorns. What's Duplex's thorny issue?

With all the good hype surrounding Google Duplex, there have been some controversies regarding the ethics of Google Duplex. Some people have questions and mixed reactions about Google Duplex fooling people about its identity, since its voice differs significantly from that of a robot. A lot of talk surrounding this issue is trending on several Twitter threads. Google has hushed away these questions by saying that 'transparency in technology' is important and that it is 'designing this feature with disclosure built-in', which will help in identifying the system. Google also mentioned that it welcomes any feedback people have regarding its new product.
Google successfully managed to awe people across the globe with its new and innovative Google Duplex. But there is still a long way to go, even though Google has already taken a step ahead in the effort to improve the relationship between humans and machines. If you enjoyed reading this article and want to know more, check out the official Google Duplex blog post.

Read next:
Google's Android Things, developer preview 8: First look
Google News' AI revolution strikes balance between personalization and the bigger picture
Android P new features: artificial intelligence, digital wellbeing, and simplicity

How IoT is going to change tech teams

Raka Mahesa
31 Jan 2018
5 min read
The Internet of Things is going to transform the way we live in the future. It will change how we commute, how we work, even simple day-to-day activities. But one thing that's often overlooked when we talk about the Internet of Things is how it will impact IT teams. We've seen a lot of change in the shape of the modern IT team over the last 10 years thanks to things like DevOps, but IoT is going to shape things further in the near future. To better understand how the Internet of Things will shape IT teams in the future, we first need to understand its applications, especially in the sector closest to IT teams: the enterprise sector.

IoT in the enterprise sector

If you look at consumer media, the most common applications of the Internet of Things are small-scale ones like smart gadgets and smart home systems. Unfortunately, this class of IoT products hasn't really caught on with mainstream consumers; its audience is limited to hobbyists and people in the tech industry. However, it's a whole different story with the enterprise sector, because companies all over the world are starting to realize the benefit of applying IoT in their line of business.

Different industries have different applications of IoT. Usually though, IoT is used to either increase efficiency or reduce cost. For example, a shipping service may apply a monitoring system to its vehicles to track their speed and mileage to find ways to reduce fuel usage. Similarly, an airline company could apply sensors to its fleet of airplanes to monitor engine conditions and maintain them properly. A company may also apply IoT to manage its energy consumption so that it can reduce unneeded expenses.

What new skills does IoT demand of tech pros?

All of these applications of IoT are going to require new skills and maybe even new job roles. So while we'll see efficiencies thanks to these innovations, to really make an impact it's still going to need both personal and organizational investment in skills and knowledge to ensure IoT is really helping to drive positive change.

IoT and the second data explosion

Let's start with the most obvious change: the growth of data. Yes, the big data explosion has been happening all around us for the last decade, but IoT is bringing with it a second explosion that will be even bigger. This means everyone is going to have to become more data-savvy. That's not to say that everyone will need to moonlight as a data scientist, but they will need an awareness of how data is stored and processed, who needs access to it and who needs to act on it.

Device management will become more important than ever

IoT isn't just about data. It's also about devices. With more gadgets and sensors connected to a given network, device management and maintenance will be an essential part of the IT team's work. To tackle this problem, the team will need to grow bigger to handle more work, or they will need to use a more powerful device management tool that can handle a large number of connected devices.

New security risks presented by IoT

An increase in the number of connected devices also presents increased security risks. This means pressure will be on IT departments to tighten up security. Managing networks is one part of that, but a further challenge will be managing the human side of security: ensuring good practice is followed by staff and taking steps to minimize social engineering threats.
IT teams will have to customize IoT solutions to meet their needs
IoT doesn't yet have many standards. That means today's organizations face both opportunities and challenges in how they customize solutions and tools for their own needs. This can be daunting, but for people working in IT teams it's also really exciting – it gives them more control and ownership of the work they are doing. Third-party solutions will no doubt remain, but they won't be quite so important when it comes to IoT. True, companies like IBM are working on IoT solutions right now to capture the market; however, because these innovations are in their infancy, there's a limit on traditional technology corporations' ability to shape and define the IoT landscape in the way they have done with innovations in the past.

And that's just a small part of how the Internet of Things will affect the IT team. When IoT takes off, it will change our lives in ways we can barely imagine, so of course there will be even more changes for the IT teams in charge of it. But then again, the world of technology is ripe with change and disruption, so I'm sure we'll all be able to adapt.

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work, he enjoys working on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets @legacy99.

Top 7 DevOps tools in 2018

Vijin Boricha
25 Apr 2018
5 min read
DevOps is a methodology or a philosophy. It's a way of reducing the friction between development and operations. But while we could talk about what DevOps is and isn't for decades (and people probably will), there are a range of DevOps tools that are integral to putting its principles into practice. So, while it's true that adopting a DevOps mindset will make the way you build software more efficient, it's pretty hard to put DevOps into practice without the right tools. Let's take a look at some of the best DevOps tools out there in 2018. You might not use all of them, but you're sure to find something useful in at least one of them - probably a combination of them.

DevOps tools that help put the DevOps mindset into practice

Docker
Docker is software that performs OS-level virtualization, also known as containerization. Docker uses containers to package up all the requirements and dependencies of an application, making it shippable to on-premises devices, data center VMs, or even the cloud. It was developed by Docker, Inc. back in 2013 with complete support for Linux and limited support for Windows. By 2016, Microsoft had already announced integration of Docker with Windows 10 and Windows Server 2016. As a result, Docker enables developers to easily pack, ship, and run any application as a lightweight, portable container that can run virtually anywhere.

Jenkins
Jenkins is an open source continuous integration server written in Java. When it comes to integrating DevOps processes, continuous integration plays the most important part, and this is where Jenkins comes into the picture. It was released in 2011 to help developers integrate DevOps stages with a variety of built-in plugins. Jenkins is one of those prominent tools that helps developers find and solve code bugs quickly and also automates the testing of their builds.

Ansible
Ansible was developed by the Ansible community back in 2012 to automate network configuration, software provisioning, development environments, and application deployment. In a nutshell, it delivers simple IT automation that puts a stop to repetitive tasks, which eventually helps DevOps teams focus on more strategic work. Ansible is completely agentless; its playbooks are written in YAML, and it follows a master-slave architecture.

Puppet
Puppet is an open source software configuration management tool written in C++ and Clojure. It was released back in 2005, licensed under the GNU General Public License (GPL) until version 2.7.0 and later under Apache License 2.0. Puppet is used to deploy, configure, and manage servers. It uses a master-slave architecture in which the master and slaves communicate over secure, encrypted channels. Puppet runs on any platform that supports Ruby, for example CentOS, Windows Server, Oracle Enterprise Linux, and more.

Git
Git is a version control system that allows you to track file changes, which in turn helps you coordinate with team members working on those files. Git was released in 2005, initially to support Linux kernel development. Its primary use case is source code management in software development. Git is a distributed version control system where every contributor can create a local repository by cloning the entire main repository. The main advantage of this system is that contributors can update their local repository without any interference to the main repository.
Vagrant
Vagrant is an open source tool released in 2010 by HashiCorp and is used to build and maintain virtual environments. It provides a simple command-line interface to manage virtual machines with custom configurations so that DevOps team members have an identical development environment. While Vagrant is written in Ruby, it supports development in all major languages. It works seamlessly on Mac, Windows, and all popular Linux distributions. If you are considering building and configuring a portable, scalable, and lightweight environment, Vagrant is your solution.

Chef
Chef is a powerful configuration management tool used to transform infrastructure into code. It was released back in 2009 and is written in Ruby and Erlang. Chef uses a pure-Ruby domain-specific language (DSL) to write system configuration 'recipes', which are put together as cookbooks for easier management. Unlike Puppet's master-slave architecture, Chef uses a client-server architecture. Chef supports multiple cloud environments, which makes it easy to manage data centers and maintain highly available infrastructure.

Think carefully about the DevOps tools you use
To increase efficiency and productivity, the right tool is key. In a fast-paced world where DevOps engineers and their entire teams do all the extensive work, it is really hard to find the right tool that fits your environment perfectly. Your best bet is to choose your tool based on the methodology you are going to adopt. Before making a hard decision, it is worth taking a step back to analyze what would work best to increase your team's productivity and efficiency. The above tools have been shortlisted based on current market adoption. We hope you find a tool in this list that eventually saves you a lot of time in choosing the right one.

Learning resources
Here is a small selection of books and videos from our DevOps portfolio to help you and your team master the DevOps tools that fit your requirements:
Mastering Docker (Second Edition)
Mastering DevOps [Video]
Mastering Docker [Video]
Ansible 2 for Beginners [Video]
Learning Continuous Integration with Jenkins (Second Edition)
Mastering Ansible (Second Edition)
Puppet 5 Beginner's Guide (Third Edition)
Effective DevOps with AWS

Defending your business from the next wave of cyberwar: IoT Threats

Guest Contributor
15 Sep 2018
6 min read
There’s no other word for the destabilization of another nation through state action other than war -- even if it’s done with ones and zeros. Recent indictments of thirteen Russians and three Russian companies for tampering with US elections are a stark reminder. Without hyperbole, it is safe to say that we are in the throes of an international cyber war, and the damage is spreading massively across the corporate economy. Reports have reached a fever pitch, and the costs globally are astronomical. According to Cybersecurity Ventures, damage related to cybercrime in general is projected to hit $6 trillion annually by 2021.

Over the past year, journalists for many news agencies have reported credible studies regarding the epidemic of state-sponsored cyber attacks. Wired and The Washington Post, among many others, have outlined threats that have reached the US energy grid and other elements of US infrastructure. However, the cost to businesses is just as devastating. While many attacks have targeted governments, businesses are increasingly at risk from state-sponsored cyber campaigns. A recent worldwide threat assessment from the US Department of Justice discusses several examples of state-sponsored cyber attacks that affect commercial entities, including diminishing trust from consumers, ransomware proliferation, IoT threats, the collateral damage from disruptions of critical infrastructure, and the disruption of shipping lanes.

How Cyberwar Affects Us on a Personal Level
One outcome of cyberwarfare that isn’t usually considered is the damage to human capital. This can be seen in the undermining of consumer and employee confidence in a company's ability to protect data. According to a recent study examining how Americans feel about internet privacy in 2018, 51% of respondents said their main concern was online threats stealing their information, and over a quarter said they were particularly concerned about companies collecting or sharing their personal data. This kind of consumer fear is justified by a seeming inability of companies to protect the data of individuals.

Computing and quantitative business expert Dr. Benjamin Silverstone points out that recent cyber-attacks focus on the information of consumers (rather than other confidential documentation or state secrets which may have greater protection). Silverstone says, “Rather than blaming the faceless cyber-criminals, consumers will increasingly turn to the company that is being impersonated to ask how this sort of thing could happen in the first place. The readiness to share details online, even with legitimate companies, is being affected and this will damage their business in the long term.”

So, how can businesses help restore consumer confidence? You should:
Increase your budget toward better cybercrime solutions and tell your consumers about it liberally. Proven methods include investing in firewalls with intrusion prevention tools, teaching staff how to detect and avoid malware, and enforcing strict password protocols to bolster security.
Invest in two-factor authorization so that consumers feel safer when accessing your product.
Educate your consumer base -- it is equally important that everyone be more aware when it comes to cyber attacks. Give your consumers regular updates about suspected scams and send tips and tricks on password safety.

Ransomware and Malware Attacks
CSO Online reports that ransomware damage costs exceeded $5 billion in 2017, 15 times the cost in 2015.
Accordingly, Cybersecurity Ventures says that costs from ransomware attacks will rise to $11.5 billion next year. In 2019, they posit, a business will fall victim to a ransomware attack every 14 seconds.

But is This International Warfare?
The North Korean government’s botnet has been shown to be able to pull off DDoS attacks and is linked to the WannaCry ransomware attack. In 2017, over 400,000 machines were infected by the WannaCry virus, costing companies over $4 billion across more than 150 countries.

To protect yourself from ransomware attacks:
Back up your data often and store it in non-networked spaces or on the cloud. Ransomware only works if there is a great deal of data that is at risk.
Encrypt whatever you can and keep firewalls and two-factor authorization in place wherever possible.
Keep what cyber experts call the “crown jewels” (the top 5% most important and confidential documents) on a dedicated computer with very limited access.

The Next Wave of Threat - IoT
IoT devices make mundane tasks like scheduling or coordination more convenient. However, the proliferation of these devices creates cybersecurity risk. Companies are bringing in devices like printers and coffee makers that are avenues for hackers to enter a network. Many experts point to IoT as their primary concern. A study from Shared Assessments found that 97% of IT respondents felt that unsecured IoT devices could cause catastrophic levels of damage to their company. However, less than a third of the companies represented reported thorough monitoring of the risks associated with third-party technology.

Here’s how to protect yourself from IoT threats:
Evaluate what data IoT devices are accumulating and limit raw storage.
Create policies regarding anonymizing user data as much as possible.
Apply security patches to any installed IoT device. This can be as simple as making sure you change the default password.
Vet your devices - make sure you are buying from sources that (you believe) will be around a long time. If the business you purchase your IoT device from goes under, it will stop updating safety protocols.
Make a diversified plan, just in case major components of your software setup are compromised.

While we may not be soldiers, a war is currently on that affects us all, and everyone must be vigilant. Ultimately, communication is key. Consumers rely on businesses to protect them from individual attack. Those individuals are more likely to remain your customers if you can demonstrate how you are maneuvering to respond to global threats.

About the author
Zach is a freelance writer who likes to cover all things tech. In particular, he enjoys writing about the influence of emerging technologies on both businesses and consumers. When he's not blogging or reading up on the latest tech trend, you can find him in a quiet corner reading a good book, or out on the track enjoying a run.

New cybersecurity threats posed by artificial intelligence
Top 5 cybersecurity trends you should be aware of in 2018
Top 5 cybersecurity myths debunked

Essential Tools for Go Programming

Nicholas Maccharoli
14 Jan 2016
5 min read
Golang as a programming language is a pleasure to work with, but the reason for this comes in large part from the great community around the language and its modern tool set, both in the standard distribution and from third-party tools.

The go command
On a system with go installed, type go with no arguments to see its quick help menu. Here, you will see the basic go commands, such as build, run, get, install, fmt, and so on. Go ahead and take a minute to run go help on some verbs that look interesting; I promise I'll be here when you get back.

Basic side options
The go build and go run commands do what you think they do, as is also the case with go test, which runs any test files in the directory it is passed. The go clean command wipes out all the compiled and executable files from the directory in which it is run. Run this command when you want to force a build to be made entirely from source again. The go version command prints out the version and build info, as you might expect. The go env command is very useful when you want to see exactly how your environment is set up. Running it will show where all your environment variables point and will also make you aware of which ones are still not properly set.

go doc: Which arguments did this take again?
Whenever in doubt, just give go doc a call. Running go doc [Package Name] will give you a high-level readout of the types, interfaces, and behavior defined in that package; that is, go doc net/http will give you all the function stubs and types defined. If you just need to check the order or types of arguments that a function takes, run go doc on the package and use a tool like grep to grab the relevant line, such as:

go doc net/http | grep -i servecontent

This will produce just what we need!

func ServeContent(w ResponseWriter, req *Request, name string, modtime time.Time, content io.ReadSeeker)

If you need more detail on the function or type, just run the go doc command with the package and function name, and you will get a quick description of this function or type.

gofmt
This little tool is quite a time-saver. I mainly use it to ensure that my source files are stylistically correct, and I also use the -s flag to let gofmt simplify my code. Just run gofmt -w on a file or an entire directory to fix up the files in place. After running this command, you should see the proper use of whitespace, with indentation corrected to tabs (typically displayed as eight spaces). Here is a diff of a file with poor formatting that I ran through gofmt:

Original

package main
import "fmt"
func main() {
hello_to := []string{"Dust", "Trees", "Plants", "Carnivorous plants"}
for _, value := range hello_to {
fmt.Printf("Hello %v!\n",value)
}
}

After running gofmt -w Hello.go

package main

import "fmt"

func main() {
        hello_to := []string{"Dust", "Trees", "Plants", "Carnivorous plants"}
        for _, value := range hello_to {
                fmt.Printf("Hello %v!\n", value)
        }
}

As you can see, the indentation looks much better and reads way easier!

The magic of gofmt -s
The -s flag to gofmt helps clean up unnecessary code; so, the intentionally ignored value in the following code:

hello_to := []int{1, 2, 3, 4, 5, 6}
for count, _ := range hello_to {
        fmt.Printf("%v: Hello!\n", count)
}

would get converted to the following after running -s:

hello_to := []int{1, 2, 3, 4, 5, 6}
for count := range hello_to {
        fmt.Printf("%v: Hello!\n", count)
}

The awesomeness of go get
One of the really cool features of the go command is that go get works seamlessly with code hosted on GitHub as well as repositories hosted elsewhere.
A note of warning
Make sure that $GOPATH is properly set (this is usually exported as a variable in your shell). You may have a line such as “export GOPATH=$HOME” in your shell's profile file.

Nabbing a library off of GitHub
Say we see this really neat library we want to use called fasthttp. Using only the go tool, we can fetch the library and get it ready for use with just:

go get github.com/valyala/fasthttp

Now, all we have to do is import it with the exact same path, and we can start using the library right away! Just type this and it should do the trick:

import "github.com/valyala/fasthttp"

In the event that you want to have a look around in the library you just downloaded with go get, just cd into $GOPATH/src/[Path that was provided to get command]—in this case, $GOPATH/src/github.com/valyala/fasthttp—and feel free to inspect the source files. I am also happy to inform you that you can use go doc with the libraries you download in exactly the same way as you use go doc with the standard library! Try it: type go doc fasthttp (you might want to tack on less, since the output is a little long: go doc fasthttp | less).

Those are only the stock features and options! The go tool is great and gets the job done, but there are also great alternatives to some of the go tool's features, such as the godep package manager. If you have some time, I think it's worth the investment to learn!

About the author
Nick Maccharoli is an iOS/backend developer and an open source enthusiast working at a start-up in Tokyo and enjoying the current development scene. You can see what he is up to at @din0sr or github.com/nirma.

How to assess your tech team’s skills

Hari Vignesh
20 Sep 2017
5 min read
For those of us that manage others, effectiveness is largely driven by the skills and motivation of those that report to us. So whether you are a CIO, IT division leader, or a front-line manager, you need to spend the time to assess the current skills, abilities, and career aspirations of your staff and help them put in place the plans that can support their development. And yet, you need to do this in such a way that still supports the overall near-term objectives of the organization and properly balances the need for professional development against the day-to-day needs of the organization.

There are certifications for competence in many different products. Having such certifications is very valuable and gives one a sense of the skill set of an individual. But how do you assess someone as a journeyman programmer, tester, or systems engineer, or perhaps as a master in one’s chosen discipline? That kind of evaluation is often overly subjective and places too much emphasis on “book knowledge” rather than on the practical application of that knowledge to develop new, innovative solutions or approaches that the organization truly needs. In other words, how do you assess the knowledge, skills, and abilities (KSAs) of a person to perform their job role?

This assessment problem is two-fold:
For a specific IT discipline, you need a comprehensive framework by which to understand the types of skills and knowledge required at each level — from novice to expert.
For each discipline, you also need a way to accurately assess the current level of ability of your technical staff members, to create the baseline from which you can develop their skills to higher levels of proficiency. This not only helps the individual develop a realistic and achievable plan, but also gives you insights into where you have significant skills gaps in your organization.

Skills Framework for the Information Age (SFIA)
In 2003, a non-profit organization was founded called the Skills Framework for the Information Age (SFIA), which provides a comprehensive framework of skills in IT technologies and disciplines based on a broad industry “body of knowledge.” SFIA currently covers 97 professional skills required by professionals in roles involving information and communications technology. These skills are organized into six categories, as follows:
Strategy and Architecture
Change and Transformation
Development and Implementation
Delivery and Operation
Skills and Quality
Relationships and Engagement
Each skill is described at one or more of SFIA’s seven levels of attainment — from novice to expert. Find out more about this framework here. Although the framework helps define your needed competencies, it doesn’t tell you whether your workers have the skills that match them.

Building your own effective framework
To accurately assess the current ability level of your technical staff members, you need to create the baseline from which you can develop their skills to higher levels of proficiency. The best way to proceed is to identify the goals of the team or organization and then build your own framework. So, how do we proceed?

List the roles within your team
To start with, you need a list of the role types within your team. This isn’t the same thing as having a listing of every position on your org chart. You want to simplify the process by grouping together like roles.

List the skills needed for each role
Now that you’ve created a list of role types, the next step is to list the skills needed for each of these roles.
What do the skills look like? They could be behavioral, like “Listens to customer needs carefully to determine requirements,” or they could be more technical, like this sample list of engineering skills:
Writing quality code
Design skills
Writing optimal code
Programming patterns
Once you have this list, it’s a valuable resource in itself.

Create a survey
It’s ideal if you can find out all of the relevant skills a person has, not just those for their current role. To do this, create a survey that makes it easy for your people to respond. This essentially means you need to keep it short and not ask the same question twice. To achieve this, the survey should group together each of the major role types. Use the list you created in step 2 as your starting point. Let’s say you have an engineering group within your organization. It may have a number of different role types within it, but there are probably common skills across many of them. For example, many of the role types may require people to be skilled at “Programming.” Rather than listing such skills more than once under each relevant role type, list them once under a common group heading.

Survey your workforce
With the survey designed, you are now ready to ask your workforce to respond to it. The size of your team and the number of roles will determine how you go about doing this. It’s good practice to communicate with survey participants to explain why you are asking for their response and what will happen with the information.

Analyze the data
You can now reap the rewards of your skills audit process. You can analyze:
The skill gaps in specific roles
Skill gaps within teams or organization groups
Potential successors for certain roles
The number of people who have critical skills
Future skill requirements
This assessment not only helps employees create realistic and achievable individual development plans, but also gives you insight into where you have significant skills gaps in your team or in your organization.

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

An introduction to React - Part 2 (video)

Simon Højberg
14 Jan 2015
1 min read
Sample Code
You can find the sample code on Simon's Github repository.

About the author
Simon Højberg is a Senior UI Engineer at Swipely in Providence, RI. He is the co-organizer of the Providence JS Meetup group and former JavaScript instructor at Startup Institute Boston. He spends his time building functional User Interfaces with JavaScript, and hacking on side projects like cssarrowplease.com. Simon recently co-authored "Developing a React Edge."

Introduction to Redux

Soham Kamani
01 Jun 2016
6 min read
There's this great new library called redux that has been making the rounds recently, for all of the right reasons. But why are developers so crazy about redux, and why should you consider using redux in your next application? This post will explain what makes redux a good choice as we create a small application using redux and its principles.

Redux is based on three main principles; they are as follows:

Every app has a single store: The "store" is where the state of your application is stored. Your app must have its entire state in only one store. This is because there has to be a single source of truth that renders the rest of your application. What this means is that if there’s something that doesn't look right in your app, it's easier to track the source of the bug because you know exactly where the state of each component is coming from.

The application state is immutable: Once you have set your application state, it cannot be mutated again. This doesn't mean your application can't be dynamic. State can only be changed through actions and reducers (explained next), in which case you recompute the new state of the application each time rather than changing the existing state. Immutable state is better for your application because every time there is a new state, your application gets notified and can re-render accordingly. In this way, you are guaranteed to have your application show a visual representation of your state at any point in time.

All reducers are pure functions: As stated before, the only way you can change the state of your application is by recomputing the new state from the old state. This is done with a reducer, which is nothing more than a function that takes two arguments (that is, the previous state and the action) and returns the new application state. The most important concept here is that all reducers must be pure functions. If your reducer is doing anything outside the function's scope, it's possible that you're doing it wrong.

Writing reducers
The standard format of a reducer is as follows:

const myReducer = (state = defaultValue, action) => {
  /* perform some calculations based on the action and the old state.
     newState !== state */
  return newState;
};

Using this format, let’s write a reducer for a simple counter:

const counter = (state = 0, action) => {
  const { type } = action;
  switch (type) {
    case 'INCREMENT':
      return state + 1;
    case 'DECREMENT':
      return state - 1;
    default:
      return state;
  }
};

The state variable is the original state passed to the reducer—we give it a default value of 0 just in case (because this is the first time the reducer is called). The action argument is an object that contains a type attribute describing the kind of change we want in our state. If the action is of the INCREMENT type, we return the state increased by one; for the DECREMENT type, we return the state decreased by one. If the type of the action passed is unrecognized, we just return the state as it is. This is an important concept to remember because it will become very important once the application grows in size.

Writing an application using the reducer
So far, there has been no mention of redux, only of reducers. We now need redux as the glue to bridge our business logic (the counter reducer) to the store and the application state. In order to make our application, we will use npm and ES6 modules.
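As a quick aside before wiring anything up (this snippet is not from the original post, just a minimal sketch that assumes the counter reducer defined above is in scope): because the reducer is a pure function, you can sanity-check it directly, without any Redux machinery at all.

// Minimal sketch: exercising the pure counter reducer directly.
// Assumes `counter` is defined as shown above.
console.log(counter(0, { type: 'INCREMENT' })); // 1
console.log(counter(1, { type: 'INCREMENT' })); // 2
console.log(counter(2, { type: 'DECREMENT' })); // 1
console.log(counter(1, { type: 'UNKNOWN' }));   // 1 - unrecognized actions return the state unchanged

The same inputs always produce the same output, and the existing state is never mutated, which is exactly what the store relies on when it recomputes state. With that sanity check out of the way, let's build the application itself.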
You can bootstrap a project easily using a yeoman generator like this one.

Install redux using the following:

npm install --save redux

Create the counter store:

import { createStore } from 'redux';
const store = createStore(counter);

In our HTML file, we will add a simple interface for our counter:

<html>
  <head>
    <title>My App</title>
  </head>
  <body>
    <div id="counter-display">
    </div>
    <button id="counter-increment"> + </button>
    <button id="counter-decrement"> - </button>
    <script src="bundle.min.js"></script>
  </body>
</html>

Next, let’s create a render method and subscribe our store to it so that it is called every time the state of the store changes:

const counterDisplay = document.getElementById('counter-display');
const render = () => {
  counterDisplay.innerHTML = store.getState();
};
store.subscribe(render);
render();

We also call the render method once at the beginning to render the app initially. Now, we will add event listeners to the increment and decrement buttons to dispatch actions every time they are clicked:

const incrementButton = document.getElementById('counter-increment');
const decrementButton = document.getElementById('counter-decrement');
incrementButton.addEventListener('click', () => {
  store.dispatch({ type: 'INCREMENT' });
});
decrementButton.addEventListener('click', () => {
  store.dispatch({ type: 'DECREMENT' });
});

Now we have a fully functioning counter. The data flow in and out of the counter is as follows:
The user clicks a button (increment or decrement).
The event listener dispatches an action, with a type of either INCREMENT or DECREMENT, based on the button clicked.
The reducer re-computes the state of the store depending on the action type.
Since there is a new state, the render function, which was subscribed to the state, is called.
The render method gets the current state from the store and changes the contents of the DOM.
The source code for this application can be found here, and the working example can be seen here.

Redux developer tools
Redux is so great because of the many developer tools that it makes available. This one is written by the creator of redux himself. A few of the reasons you should consider incorporating developer tools into your development are as follows:
They provide a way to constantly monitor your state. No more pesky console.logs to check what your current state is.
You can see exactly which action changed the state. There’s no more guesswork. If the state was changed, you now know exactly when and why it was changed.
You can change the past. Yes, you read that right! The redux developer tools give you the option of removing an action you may have performed some time ago and re-computing the state to show you the current state of the application, as if that action had never been performed at all.
For small-scale applications, redux developer tools provide an easy and convenient way to debug and inspect your application, and for larger applications, I would go so far as to say that they are required.

About the author
Soham Kamani is a Full Stack web developer and electronics hobbyist. He is especially interested in JavaScript, Python, and IoT. His Twitter handle is @sohamkamani, and he can also be found here.

Do you need artificial intelligence and machine learning expertise in house?

Guest Contributor
22 Jan 2019
7 min read
Developing artificial intelligence expertise is a challenge. There’s a huge global demand for practitioners with the right skills and knowledge, and a lack of people who can actually deliver what’s needed. It’s difficult because many of the most talented engineers are being hired by the planet’s leading tech companies on salaries that simply aren’t realistic for many organizations. Ultimately, you have two options: form an in-house artificial intelligence development team, or choose an external software development team or consultant with proven artificial intelligence expertise. Let’s take a closer look at each strategy.

Building an in-house AI development team
If you want to develop your own AI capabilities, you will need to bring in strong technical skills in machine learning. Since recruiting experts in this area isn’t an easy task, upskilling your current in-house development team may be an option. However, you will need to be confident that your team has the knowledge and attitude to develop those skills. Of course, it’s also important to remember that a team building artificial intelligence draws on a range of skills and areas of expertise. If you can see how your team could evolve in that way, you’re halfway to solving your problem.

AI experts you need for building a project
Big Data engineers: Before analyzing data, you need to collect, organize, and process it. AI is usually based on big data, so you need engineers who have experience working with structured and unstructured data and can build a secure data platform. They should have sound knowledge of Hadoop, Spark, R, Hive, Pig, and other Big Data technologies.
Data scientists: Data scientists are a vital part of your AI team. They work their magic with data, building models, and investigating, analyzing, and interpreting it. They leverage data mining and other techniques to surface hidden insights and solve business problems.
NLP specialists: A lot of AI projects involve Natural Language Processing, so you will probably need NLP specialists. NLP allows computers to understand and translate human language, serving as a bridge between human communication and machine interpretation.
Machine learning engineers: These specialists utilize machine learning libraries and deploy ML solutions into production. They take care of the maintainability and scalability of data science code.
Computer vision engineers: They specialize in image recognition, correlating images to particular metrics instead of correlating metrics to metrics. For example, computer vision is used for modeling objects or environments (medical image analysis), identification tasks (a species identification system), and process control (industrial robots).
Speech recognition engineers: You will need these experts if you want to build a speech recognition system. Speech recognition can be very useful in telecommunication services, in-car systems, medical documentation, and education. For instance, it is used in language learning for practicing pronunciation.

Partnering with an AI solution provider
If you realize that recruiting and building your own in-house AI team is too difficult and expensive, you can engage with an external AI provider. Such an approach helps companies keep the focus on their core expertise and avoid the headache of recruiting engineers and setting up the team. It also allows them to kick off the project much faster and thus gain a competitive advantage.
Factors to consider when choosing an artificial intelligence solution provider

AI engineering experience
Due to the huge popularity of AI these days, many companies claim to be professional AI development providers without practical experience. Hence, it’s extremely important to do extensive research. Firstly, you should study the portfolio and case studies of the company. Find out which AI, machine learning, or data science projects your potential vendor has worked on and what kind of artificial intelligence solutions the company has delivered. For instance, you may check out these European AI development companies and the products they developed. Also, make sure a provider has experience in the types of machine learning algorithms (supervised, unsupervised, and reinforcement), data structures and algorithms, computer vision, NLP, etc. that are relevant to your project needs.

Expertise in AI technologies
Artificial intelligence covers a multitude of different technologies, frameworks, and tools. Make sure your external engineering team consists of professional data scientists and data engineers who can solve your business problems. Building the AI team and selecting the necessary skill set might be challenging for businesses that have no internal AI expertise. Therefore, ask a vendor to provide tech experts or delivery managers who will advise you on the team composition and help you hire the right people.

Capacity to scale a team
When choosing a team, you should consider not only your primary needs but also the potential growth of your business. If you expect your company to scale up, you’ll need more engineering capacity. Therefore, take into account your partner’s ability to ramp up the team in the future. Also, consider factors such as the vendor’s employer image and retention rate, since your ability to attract top AI talent and keep them on your project will largely depend on it.

Suitable cooperation model
It is essential to choose an AI company with a cooperation model that fits your business requirements. The most popular cooperation models are Fixed Price, Time and Material, and Dedicated Development Team. Within the fixed price model, all the requirements and the scope of work are set from the start, and you as a customer need to have them described down to the smallest detail, as it will be extremely difficult to make change requests during the project. It is not the best option for AI projects, since they involve a lot of R&D and it is difficult to define everything at the initial stage. The time and material model is best for small projects where you don’t need specialists to be fully dedicated to your project. This is not the best choice for AI development either, as the hourly rates of AI engineers are extremely high and the whole project would cost you a fortune with this type of contract. In order to add more flexibility yet keep control over the project budget, it is better to choose a dedicated development team model or staff augmentation. It will allow you to change the requirements when needed and retain control over your team. With this type of engagement, you will be able to keep the knowledge within your team and develop your AI expertise, as the developers will work exclusively for you.

Conclusion
If you have to deal with the challenge of building AI expertise in your company, there are two possible ways to go. First, you can attract local AI talent and build the expertise in-house.
In that case, you have to assemble a team of data scientists, data engineers, and other specialists depending on your needs. However, developing AI expertise in-house is always time- and cost-consuming, given the shortage of well-qualified machine learning specialists and their high salary expectations. The other option is to partner with an AI development vendor and hire an extended team of engineers. In this case, you have to consider a number of factors, such as the company’s experience in delivering AI solutions, its ability to allocate the necessary resources, its technological expertise, and its capability to satisfy your business requirements.

Author Bio
Romana Gnatyk is Content Marketing Manager at N-IX, passionate about software development. She writes insightful content on various IT topics, including software product development, mobile app development, artificial intelligence, the blockchain, and different technologies.

Researchers introduce a machine learning model where the learning cannot be proved
“All of my engineering teams have a machine learning feature on their roadmap” – Will Ballard talks artificial intelligence in 2019 [Interview]
Apple ups its AI game; promotes John Giannandrea as SVP of machine learning