How-To Tutorials - Cloud & Networking

Rookout and AppDynamics team up to help enterprise engineering teams debug at speed with Deep Code Insights

Richard Gall
20 Feb 2020
3 min read
It's not acknowledged enough that the real headache when it comes to software faults and performance problems isn't so much the problems themselves, but the process of actually identifying those problems. Sure, problems might slow you down, but wading through your application code to understand what's happened can sometimes grind engineering teams to a halt. For enterprise engineering teams, this can be particularly fatal. Agility is hard enough when you're dealing with complex applications and the burden of legacy software; but when things go wrong, any notion of velocity can be summarily discarded.

However, a new partnership between debugging platform Rookout and APM company AppDynamics, announced at AppDynamics' Transform 2020 event, might just change that. The two organizations have teamed up, with Rookout's impressive debugging capabilities now available to AppDynamics customers in the form of a new product called Deep Code Insights.

[Image caption: Live debugging of an application in production in Deep Code Insights]

What is Deep Code Insights?

Deep Code Insights is a new product for AppDynamics customers that combines the live-code debugging capabilities offered by Rookout with AppDynamics' APM platform. The advantage for developers could be substantial. Jerrie Pineda, Enterprise Software Architect at Maverik, says that "Rookout helps me get the debugging data I need in seconds instead of waiting for several hours." This means, he explains, "[Maverik's] mean time to resolution (MTTR) for most issues is slashed up to 80%."

What does Deep Code Insights mean for AppDynamics?

For AppDynamics, Deep Code Insights allows the organization to go one step further in its mission to "make it easier for businesses to understand their own software." At least that's how AppDynamics' VP of corporate development and strategy, Kevin Wagner, puts it. "Together [with Rookout], we are narrowing the gaps between indicating a code-related problem impacting performance, pinpointing the direct issue within the line of code, and deploying a solution quickly for a seamless customer experience," he says.

What does Deep Code Insights mean for Rookout?

For Rookout, meanwhile, the partnership with AppDynamics is a great way for the company to reach a wider audience of users working at large enterprise organizations. The company received $8,000,000 in Series A funding back in August, which has provided a solid platform on which it is clearly looking to build and grow. Rookout's Co-Founder and CEO Or Weis describes the partnership as "obvious." "We want to bring the next-gen developer workflow to enterprise customers and help them increase product velocity," he says.

Learn more about Rookout: www.rookout.com
Learn more about AppDynamics: www.appdynamics.com
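Rookout's pitch, and Deep Code Insights by extension, is about fetching state from running code without stopping it or redeploying. As a rough conceptual sketch of that idea - not Rookout's or AppDynamics' actual API - the snippet below captures a function's local variables at a chosen point and writes them to a log, which is the kind of data a live-debugging tool surfaces without pausing production:

```python
import json
import logging
import sys

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("debug-snapshots")

def snapshot(label):
    """Capture the caller's local variables without pausing execution."""
    frame = sys._getframe(1)                      # the caller's stack frame
    safe_locals = {k: repr(v) for k, v in frame.f_locals.items()}
    logger.info(json.dumps({"snapshot": label, "locals": safe_locals}))

def apply_discount(order_total, customer_tier):
    discount = 0.1 if customer_tier == "gold" else 0.0
    snapshot("before-discount")                   # a non-breaking "breakpoint"
    return order_total * (1 - discount)

if __name__ == "__main__":
    print(apply_discount(120.0, "gold"))
```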

5 reasons why you should use an open-source data analytics stack in 2020

Amey Varangaonkar
28 Jan 2020
7 min read
Today, almost every company is trying to be data-driven in some sense or other. Businesses across all the major verticals, such as healthcare, telecommunications, banking, insurance, retail, and education, make use of data to better understand their customers, optimize their business processes and, ultimately, maximize their profits.

This is a guest post sponsored by our friends at RudderStack.

When it comes to using data for analytics, companies face two major challenges:

- Data tracking: Tracking the required data from a multitude of sources in order to get insights out of it. As an example, tracking customer activity data such as logins, signups, purchases, and even clicks such as bookmarks from platforms such as mobile apps and websites becomes an issue for many eCommerce businesses.
- Building a link between the data and business intelligence: Once data is acquired, transforming it and making it compatible with a BI tool can often prove to be a substantial challenge.

A well-designed data analytics stack is essential in combating these challenges. It will ensure you're well-placed to use the data at your disposal in more intelligent ways, and it will help you drive more value.

What does a data analytics stack do?

A data analytics stack is a combination of tools which, when put together, allows you to bring together all of your data in one platform and use it to get actionable insights that help in better decision-making. A data analytics stack is built upon three fundamental steps:

- Data integration: This step involves collecting and blending data from multiple sources and transforming it into a compatible format for storage. The sources could be as varied as a database (e.g. MySQL), an organization's log files, or event data such as clicks, logins, and bookmarks from mobile apps or websites. A data analytics stack allows you to bring all of this data together and use it to perform meaningful analytics.
- Data warehousing: This next step involves storing the data for the purpose of analytics. As the complexity of data grows, it makes sense to consolidate all the data in a single data warehouse. Some of the popular modern data warehouses include Amazon's Redshift, Google BigQuery and platforms such as Snowflake and MarkLogic.
- Data analytics: In this final step, we use a visualization tool to load the data from the warehouse and extract meaningful insights and patterns from it, in the form of charts, graphs and reports.

Choosing a data analytics stack: proprietary or open-source?

When it comes to choosing a data analytics stack, businesses are often left with two choices: buy it or build it. On one hand, there are proprietary tools such as Google Analytics, Amplitude, Mixpanel, etc., where the vendors alone are responsible for their configuration and management to suit your needs. With the best-in-class features and services that come along with these tools, your primary focus can be project management rather than technology management. While proprietary tools have their advantages, there are also some major cons that revolve mainly around cost, data sharing, privacy concerns, and more. As a result, businesses today are increasingly exploring open-source alternatives to build their data analytics stack.

The advantages of open source analytics tools

Let's now look at the five main advantages that open-source tools have over these proprietary tools.
Open source analytics tools are cost effective

Proprietary analytics products can cost hundreds of thousands of dollars beyond their free tier. For small to medium-sized businesses, the return on investment does not often justify these costs. Open-source tools are free to use, and even their enterprise versions are reasonably priced compared to their proprietary counterparts. So, with lower up-front costs, reasonable expenses for training, maintenance and support, and no licensing fees, open-source analytics tools are much more affordable. More importantly, they're better value for money.

Open source analytics tools provide flexibility

Proprietary SaaS analytics products will invariably set restrictions on the ways in which they can be used. This is especially the case with the trial or lite versions of the tools, which are free. For example, full SQL is not supported by some tools, which makes it hard to combine and query external data alongside internal data (see the short query sketch after this section). You'll also often find that warehouse dumps provide no support either, and when they do, they'll probably cost more and still have limited functionality. Data dumps from Google Analytics, for instance, can only be loaded into Google BigQuery. These dumps are also time-delayed, which means the loading process can be very slow. With open-source software, you get complete flexibility: in the way you use your tools, how you combine them to build your stack, and even how you use your data. If your requirements change - which, let's face it, they probably will - you can make the necessary changes without paying extra for customized solutions.

Avoid vendor lock-in

Vendor lock-in, also known as proprietary lock-in, is essentially a state where a customer becomes completely dependent on the vendor for their products and services. The customer is unable to switch to another vendor without paying a significant switching cost. Some organizations spend a considerable amount of money on proprietary tools and services that they heavily rely on. If these tools aren't updated and properly maintained, the organization using them is putting itself at a real competitive disadvantage. This is almost never the case with open-source tools, where constant innovation and change is the norm. Even if the individual or the organization handling the tool moves on, the community can take over the project and maintain it. With open-source, you can rest assured that your tools will always be up-to-date without heavy reliance on anyone.

Improved data security and privacy

Privacy has become a talking point in many data-related discussions of late. This is thanks, in part, to data protection laws such as the GDPR and CCPA coming into force. High-profile data leaks have also kept the issue high on the agenda. An open-source analytics stack running inside your cloud or on-prem environment gives you complete control of your data. This lets you decide which data is to be used when, and how. It lets you dictate how third parties can access and use your data, if at all.

Open-source is the present

It's hard to counter the fact that open-source is now mainstream. Companies like Microsoft, Apple, and IBM are now not only actively participating in the open-source community, they're also contributing to it. Open-source puts you on the front foot when it comes to innovation. With it, you'll be able to leverage the power of a vibrant developer community to develop better products in more efficient ways.
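The flexibility point about full SQL access is easier to see with a concrete, if simplified, query. The sketch below uses Python's built-in sqlite3 module as a stand-in for a warehouse, joining "internal" CRM data against "external" clickstream data - exactly the kind of ad hoc query a restricted proprietary tool may not let you run. Table and column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for a real warehouse connection
conn.executescript("""
    CREATE TABLE crm_customers (customer_id TEXT, plan TEXT);
    CREATE TABLE web_events   (customer_id TEXT, event TEXT);
    INSERT INTO crm_customers VALUES ('c1', 'enterprise'), ('c2', 'free');
    INSERT INTO web_events    VALUES ('c1', 'signup'), ('c1', 'purchase'),
                                     ('c2', 'signup');
""")

# Combine internal (CRM) and external (clickstream) data in a single query.
rows = conn.execute("""
    SELECT c.plan, COUNT(*) AS events
    FROM web_events e
    JOIN crm_customers c ON c.customer_id = e.customer_id
    GROUP BY c.plan
""").fetchall()

for plan, events in rows:
    print(plan, events)
```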
How RudderStack helps you build an ideal open-source data analytics stack

RudderStack is a completely open-source, enterprise-ready platform that simplifies data management in a secure and reliable way. It works as a data integration platform by routing your event data from sources such as websites, mobile apps and servers to multiple destinations of your choice, helping you save time and effort. RudderStack integrates effortlessly with a multitude of destinations such as Google Analytics, Amplitude, Mixpanel, Salesforce, HubSpot, Facebook Ads, and more, as well as popular data warehouses such as Amazon Redshift or S3. If performing efficient clickstream analytics is your goal, RudderStack offers you the perfect data pipeline to collect and route your data securely.

Learn more about RudderStack by visiting the RudderStack website, or check out its GitHub page to find out how it works.
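To make the "routing event data" idea concrete, here is a minimal, hypothetical sketch of what firing a tracking event from a backend service might look like. It deliberately uses a plain HTTP POST to a made-up collection endpoint rather than RudderStack's actual SDKs, which handle batching, retries and configuration for you:

```python
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone

# Hypothetical collection endpoint; a real pipeline would use the vendor's SDK.
COLLECTOR_URL = "https://events.example.com/v1/track"

def track(user_id, event, properties=None):
    """Send a single clickstream-style event to the collection endpoint."""
    payload = {
        "userId": user_id,
        "event": event,
        "properties": properties or {},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status
    except urllib.error.URLError as exc:
        print(f"event not delivered (endpoint is hypothetical): {exc}")
        return None

if __name__ == "__main__":
    track("user-42", "Product Added", {"sku": "A-1001", "price": 29.99})
```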

10 tech startups for 2020 that will help the world build more resilient, secure, and observable software

Richard Gall
30 Dec 2019
10 min read
The Datadog IPO in September marked an important moment for the tech industry. This wasn't just because the company was the fourth tech startup to reach a $10 billion market cap in 2019, but also because it confirmed something that many people, particularly those in and around Silicon Valley, have been aware of for some time: the most valuable software products in the world aren't just those that offer speed and efficiency, they're those that provide visibility and security across our software systems.

It shouldn't come as a surprise. As software infrastructure becomes more complex, constantly shifting and changing according to the needs of users and businesses, the ability to assume some degree of control emerges as particularly precious. Indeed, the idea of control and stability might feel at odds with a decade or so that has prized innovation at speed. The mantra 'move fast and break things' is arguably one of the defining ones of the last decade. And while that lust for change might never disappear, it's nevertheless the case that we're starting to see a mindset shift in how business leaders think about technology. If everyone really is a tech company now, there's a growing acceptance that software needs to be treated with more respect and care.

The Datadog IPO, then, is just the tip of an iceberg in which monitoring, observability, security, and resiliency tools have started to capture the imagination of technology leaders. While what follows is far from exhaustive, it does underline some of the key players in a growing field. Whether you're an investor or technology decision maker, here are ten tech startups you should watch out for in 2020 from across the cloud and DevOps space.

Honeycomb

Honeycomb has been at the center of the growing conversation around observability. Designed to help you "own production in hi-res," what makes it unique in the market is that it allows you to understand and visualize your systems through high-cardinality dimensions (e.g. at a user-by-user level, rather than, say, browser type or continent). The driving force behind Honeycomb is Charity Majors, its co-founder and former CEO. I was lucky enough to speak to her at the start of the year, and it was clear that she has an acute understanding of the challenges facing engineering teams. What was particularly striking in our conversation is how she sees Honeycomb as a tool for empowering developers: it gives them ownership over the code they write and the systems they build. "Ownership gives you the power to fix the thing you know you need to fix and the power to do a good job…" she told me. "People who find ownership is something to be avoided - that's a terrible sign of a toxic culture."

Honeycomb's investment status

At the time of writing, Honeycomb has received $26.9 million in funding, including an $11.4 million Series A back in September.

FireHydrant

"You just got paged. Now what?" That's the first line that greets you on the FireHydrant website. We think it sums up many of the companies on this list pretty well; many of the best tools in the DevOps space are designed to help tackle the challenges on-call developers face. FireHydrant isn't a tech startup with the profile of Honeycomb. However, as an incident management tool that integrates very neatly into a massive range of workflow tools, we're likely to see it gain traction in 2020.
We particularly like the one-click post-mortem feature - it's clear the product has been built in a way that allows developers to focus on the hard stuff and minimize the things that can just suck up time.

FireHydrant's investment status

FireHydrant has raised $1.5 million in seed funding.

NS1

Managing application traffic can be business-critical. That's why NS1 exists; with DNS, DHCP and IP address management capabilities, it's arguably one of the leading tools on the planet for dealing with the diverse and extensive challenges that come with managing massive amounts of traffic across complex, interlocking software applications and systems. The company boasts an impressive roster of clients, including Dropbox, The Guardian and LinkedIn, which makes it hard to bet against NS1 going from strength to strength in 2020. Like all software adoption, it might take some time to move beyond the realms of the largest and most technically forward-thinking organizations, but it's surely only a matter of time until the importance of smarter, more efficient traffic management becomes clear to even the smallest businesses.

NS1's investment status

NS1 has raised an impressive $78.4 million in funding from investors (although it's important to note that it's one of the oldest companies on this list, founded back in 2013). It received $33 million in Series C funding at the beginning of October.

Rookout

"It's time to liberate your data," Rookout implores us. For too long, the startup's argument goes, data has been buried inside our applications where it's useless for developers and engineers. Once it has been freed, it can help inform how we go about debugging and monitoring our systems. Designed to work with modern architectural and deployment patterns such as Kubernetes and serverless, Rookout is a tool that not only brings simplicity in the midst of complexity, it can also save engineering teams a serious amount of time when it comes to debugging and logging - by 80%, the company claims. Like FireHydrant, this means engineers can focus on other areas of application performance and resilience.

Rookout's investment status

Back in August, Rookout raised $8 million in Series A funding, taking its total funding to $12.2 million.

LaunchDarkly

Feature flags or toggles are a concept that has started to gain traction in engineering teams in the last couple of years or so. They allow engineering teams to "modify system behavior without changing code" (thank you, Martin Fowler). LaunchDarkly is a platform specifically built to allow engineers to use feature flags. At a fundamental level, the product allows DevOps teams to deploy code (i.e. change features) quickly and with minimal risk, which allows for testing in production and experimentation on a large scale. With support for just about every programming language, it's not surprising to see LaunchDarkly boast a wealth of global enterprises on its list of customers, including IBM and NBC.

LaunchDarkly's investment status

LaunchDarkly raised $44 million in Series C funding early in 2019. To date, it has raised $76.3 million. It's certainly one to watch closely in 2020; its ability to help teams walk the delicate line between innovation and instability is well-suited to the reality of engineering today.
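The "modify system behavior without changing code" idea is easy to illustrate. The sketch below is a deliberately hand-rolled flag check with a sticky percentage rollout - not LaunchDarkly's SDK, which handles targeting, streaming updates and analytics for you - but it shows the control-flow pattern a flag-driven codebase relies on:

```python
import hashlib

# Flag state would normally come from a flag service or config store,
# not a hard-coded dict; this is purely illustrative.
FLAGS = {
    "new-checkout-flow": {"enabled": True, "rollout_percent": 25},
}

def flag_enabled(flag_name, user_id):
    """Return True if the flag is on for this user (sticky percentage rollout)."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    # Hash the user id so each user consistently falls in or out of the rollout.
    bucket = int(hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

def checkout(user_id):
    if flag_enabled("new-checkout-flow", user_id):
        return "new checkout flow"      # code path behind the flag
    return "legacy checkout flow"       # safe default path

if __name__ == "__main__":
    print(checkout("user-42"))
```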
Gremlin

Gremlin is a chaos engineering platform designed to help engineers 'stress test' their software systems. This matters in today's technology landscape: with system complexity making unpredictability a day-to-day reality, Gremlin lets you identify weaknesses before they impact customers and revenue. Gremlin's mission is to "help build a more reliable internet." That's not just a noble aim, it's an urgent one too. What's more, you can see that the business is really living out its mission. With Gremlin Free launching at the start of 2019, and the second ChaosConf taking place in the fall, it's clear that the company is thinking beyond the core product: they want to make chaos engineering more accessible to a world where resilience can feel impossible in the face of increasing complexity.

Gremlin's investment status

Since being founded back in 2016 by CTO Matt Fornaciari and CEO Kolton Andrus, Gremlin has raised $26.8 million in funding from Redpoint Ventures, Index Ventures, and Amplify Partners.

Cockroach Labs

Cockroach Labs is the organization behind CockroachDB, the cloud-native distributed SQL database. CockroachDB's popularity comes from two things: its ability to scale from a single instance to thousands, and its impressive resilience. Indeed, its resilience is where it takes its name from. Like a cockroach, CockroachDB is built to keep going even after everything else has burned to the ground. It's been an interesting year for Cockroach Labs and CockroachDB - in June the company changed the CockroachDB core licence from the open source Apache license to the Business Source License (BSL), developed by the MariaDB team. The reason for this was ultimately to protect the product as it seeks to grow. The BSL still means the source code is accessible for any use other than a DBaaS (you'll need an enterprise license for that). A few months later, the company took another step forward in the market with $55 million in Series C funding. Both stories were evidence that Cockroach Labs is setting itself up for a big 2020. Although the database market will always be seriously competitive, with resilience as a core USP it's hard to bet against Cockroach Labs. Cockroaches find a way, right?

Cockroach Labs' investment status

Cockroach Labs' total investment, following on from that impressive round of Series C funding, is now $108.5 million.

Logz.io

Logz.io is another platform in the observability space that you really need to watch out for in 2020. Built on the ELK stack (Elasticsearch, Logstash, and Kibana), what makes Logz.io really stand out is the use of machine learning to help identify issues across thousands and thousands of logs. Logz.io has been on 'ones to watch' lists for a number of years now. This was, we think, largely down to the rising wave of AI hype. And while we wouldn't want to underplay its machine learning capabilities, it's perhaps with the increasing awareness of the need for more observable software systems that we'll see it really pack a punch across the tech industry.

Logz.io's investment status

To date, Logz.io has raised $98.9 million.

FaunaDB

Fauna is the organization behind FaunaDB. It describes itself as "a global serverless database that gives you ubiquitous, low latency access to app data, without sacrificing data correctness and scale." The database could be big in 2020. With serverless likely to go from strength to strength, and JAMstack growing as a dominant approach for web developers, everything the Fauna team has been doing looks as though it will be a great fit for the shape of the engineering landscape in the future.
Fauna's investment status

In total, Fauna has raised $32.6 million in funding from investors.

Clubhouse

One thing that gets overlooked when talking about DevOps and other issues in software development processes is simple project management. That's why Clubhouse is such a welcome entry on this list. Of course, there is a massive range of project management tools available at the moment. But one of the reasons Clubhouse is such an interesting product is that it's very deliberately built with engineers in mind. And more importantly, it appears to have been built with an acute sense of the importance of enjoyment in a project management product.

Clubhouse's investment status

Clubhouse has, to date, raised $16 million. As we see a continuing emphasis on developer experience, the tool is definitely one to watch in a tough marketplace.

Conclusion: embrace the unpredictable

The tech industry feels as unpredictable as the software systems we're building and managing. But while there will undoubtedly be some surprises in 2020, the need for greater security and resilience are themes that no one should overlook. Similarly, the need to gain more transparency and build for observability is critical. Whether you're an investor, business leader, or even an engineer, exploring the products that are shaping and defining the space is vital.

Beyond Kubernetes: Key skills for infrastructure and ops engineers in 2020

Richard Gall
20 Dec 2019
5 min read
For systems engineers and those working in operations, the move to cloud and the rise of containers in recent years have drastically changed working practices and even the nature of job roles. But that doesn't mean you can just learn Kubernetes and then rest on your laurels. To a certain extent, the broad industry changes we've seen haven't stabilised into some sort of consensus but rather created a field where change is only more likely - and where things are arguably even less stable. This isn't to say that you have anything to fear as an engineer. But you should be open minded about the skills you learn in 2020. Here's a list of five skills you should consider spending some time developing in the new year.

Scripting and scripting languages

Scripting is a well-established part of many engineers' skill set. The reasons for this are obvious: scripts allow you to automate tasks and get things done quickly. If you don't know scripting, then of course you should learn it. But even if you do, it's worth exploring some new programming languages. You might find that a fresh approach - like learning, for example, Go if you mainly use Python - will make you more productive, or will help you to tackle problems with greater ease than you have in the past. Learn Linux shell scripting with Learn Linux Shell Scripting: the Fundamentals of Bash 4.4. Find out how to script with Python in Mastering Python Scripting for System Administrators.
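As a small, hypothetical example of the kind of task worth scripting, the snippet below walks a log directory and flags files above a size threshold - the sort of ten-minute automation that saves hours of manual checking. The directory and threshold are made up:

```python
from pathlib import Path

LOG_DIR = Path("./logs")            # hypothetical directory to audit
THRESHOLD_MB = 100                  # flag anything larger than this

def oversized_files(directory, threshold_mb):
    """Yield (path, size_mb) for files larger than the threshold."""
    for path in directory.rglob("*"):
        if path.is_file():
            size_mb = path.stat().st_size / (1024 * 1024)
            if size_mb > threshold_mb:
                yield path, round(size_mb, 1)

if __name__ == "__main__":
    for path, size_mb in oversized_files(LOG_DIR, THRESHOLD_MB):
        print(f"{path} is {size_mb} MB - consider rotating or archiving it")
```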
Infrastructure automation tools and platforms

With the rise of hybrid and multi-cloud, infrastructure automation platforms like Ansible and Puppet have been growing more and more important to many companies. While Kubernetes has perhaps dented their position in the wider DevOps tooling marketplace (if, indeed, that's such a thing), they nevertheless remain relevant in a world where managing complexity appears to be a key engineering theme. With Puppet looking to continually evolve and Ansible retaining a strong position in the market, they remain two of the most important platforms to explore and learn. However, there is a wealth of other options too - Terraform in particular appears to be growing at a remarkable pace even if it hasn't yet reached critical mass, and Salt and Chef are also well worth learning. Get started with Ansible, fast - learn with Ansible Quick Start Guide.

Cloud architecture and design

Gone are the days when cloud was just a rented server. Gone are the days when it offered a simple (or relatively simple, at least) solution to storage and compute problems. With trends like multi- and hybrid cloud becoming the norm, and serverless starting to gain traction at the cutting edge of software development, being able to piece together various different elements is absolutely crucial. Indeed, this isn't a straightforward skill that you can just learn with some documentation and training materials. Of course those help, but it also requires sensitivity to business needs, an awareness of how developers work, and an eye for financial management. However, if you can develop the broad range of skills needed to architect cloud solutions, you will be a very valuable asset to a business. Become a certified cloud architect with Packt's new Professional Cloud Architect – Google Cloud Certification Guide.

Security and resilience

With the increase in architectural complexity, the ability to ensure security and resilience is now both vital and incredibly challenging. Fortunately, there are many different tools and techniques available for doing this, each one relevant to different job roles - from service meshes to monitoring platforms to chaos engineering, there are many ways that engineers can take on stability and security challenges head on. Whatever platforms you're working with, make it your mission to learn what you need to improve the security and resilience of your systems. Learn how to automate cloud security with Cloud Security Automation.

Pushing DevOps forward

No one wants to hear about the growth of DevOps - we get that. It's been growing for almost a decade now; it certainly doesn't need to be treated to another wave of platitudes as the year ends. So, instead of telling you to simply embrace DevOps, a smarter thing to do would be to think about how you can do DevOps better. What do your development teams need in terms of support? And how could they help you? In theory the divide between dev and ops should now be well and truly broken - the question that remains is how things should evolve once that silo has been broken down. Okay, so maybe this isn't a single skill you can learn. But it's something that starts with conversation - so make sure you and those around you are having better conversations in 2020. Search the latest DevOps eBooks and videos on the Packt store.

New for 2020 in operations and infrastructure engineering

Richard Gall
19 Dec 2019
5 min read
It's an exciting time if you work in operations and software infrastructure. Indeed, you could even say that as the pace of change and innovation increases, your role only becomes more important. Operations and systems engineers, solution architects - your jobs are all about bringing stability, order and control into what can sometimes feel like chaos. As anyone that's been working in the industry knows, managing change, from a personal perspective, requires a lot of effort. To keep on top of what's happening in the industry - what tools are being released and updated, what approaches are gaining traction - you need to have one eye on the future and the wider industry. To help you with that challenge and get you ready for 2020, we've put together a list of what's new for 2020 - and what you should start learning.

Learn how to make Kubernetes work for you

It goes without saying that Kubernetes was huge in 2019. But there are plenty of murmurs and grumblings that it's too complicated and adds an additional burden for engineering and operations teams. To a certain extent there's some truth in this - and arguably now would be a good time to accept that just because it seems like everyone is using Kubernetes, it doesn't mean it's the right solution for you. However, 2020 will be all about understanding how to make Kubernetes relevant to you. This doesn't mean you should just drop the way you work and start using Kubernetes, but it does mean that spending some time with the platform and getting a better sense of how it could be used in the future is a useful way to spend your learning time in 2020. Explore Packt's extensive range of Kubernetes eBooks and videos on the Packt store.

Learn how to architect

If software has eaten the world, then by the same token perhaps complexity has well and truly eaten software as we know it. Indeed, Kubernetes is arguably just one of the symptoms and causes of this complexity. Another is the growing demand for architects in engineering and IT teams. There are a number of different 'architecture' job roles circulating across the industry, from solutions architect to application architect. While they each have their own subtle differences, and will even vary from company to company, they're all roles that are about organizing and managing different pieces into something that is both stable and value-driving. Cloud has been particularly instrumental in making architect roles more prominent in the industry. As organizations look to resist the pitfalls of lock-in and better manage resources (financial and otherwise), it will be down to architects to balance business and technology concerns carefully. Learn how to architect cloud native applications: read Architecting Cloud Computing Solutions. Get to grips with everything you need to know to be a software architect: pick up Software Architect's Handbook.

Artificial intelligence

It's strange that the hype around AI doesn't seem to have reached the world of ops. Perhaps this is because the area is more resistant to the spin that comes with AI, preferring instead to focus on the technical capabilities of tools and platforms. Whatever the case, it's nevertheless true that AI will play an important part in how we manage and secure infrastructure. From monitoring system health, to automating infrastructure deployments and configuration, and even identifying security threats, artificial intelligence is already an important component for operations engineers and others. Indeed, artificial intelligence is being embedded inside products and platforms that ops teams are already using, which means the need to 'learn' artificial intelligence is somewhat reduced. But it would be wrong to think it's something that can just be managed from a dashboard. In 2020 it will be essential to better understand where and how artificial intelligence can fit into your operations and architectural toolchain. Find artificial intelligence eBooks and videos in Packt's collection of curated data science bundles.
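The "monitoring system health" use case doesn't need deep learning to be useful. Here is a minimal, hypothetical sketch of the idea - flagging metric values that deviate sharply from recent history using a simple z-score. Real AIOps products are far more sophisticated, but the principle is similar:

```python
import statistics

def anomalies(values, threshold=2.5):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1e-9   # avoid division by zero
    return [
        (i, v) for i, v in enumerate(values)
        if abs(v - mean) / stdev > threshold
    ]

if __name__ == "__main__":
    # Hypothetical per-minute request latencies in milliseconds.
    latencies = [120, 118, 125, 119, 121, 117, 122, 480, 120, 119]
    for minute, value in anomalies(latencies):
        print(f"minute {minute}: latency {value} ms looks anomalous")
```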
Observability, monitoring, tracing, and logging

One of the challenges of software complexity is understanding exactly what's going on under the hood. Yes, the network might be unreliable, as the saying goes, but what makes things even worse is that we're not even sure why. This is where observability and the next generation of monitoring, logging and tracing all come into play. Having detailed insights into how applications and infrastructures are performing, how resources are being managed, and what is actually causing problems is vitally important from a team perspective. Without the ability to understand these things, pressure builds on teams as knowledge becomes siloed inside the brains of specific engineers, and you become vulnerable to failure as points of failure appear at a personnel level. There are, of course, a wide range of tools and products available that can make monitoring and tracing easy (or easier, at least). But understanding which ones are right for your needs still requires some time learning and exploring the options out there. Make sure you do exactly that in 2020. Learn how to monitor distributed systems with Learn Centralized Logging and Monitoring with Kubernetes.

Making serverless a reality

We've talked about serverless a lot this year. But as a concept there's still considerable confusion about what role it should play in modern DevOps processes. Indeed, even the nomenclature is a little confusing. Platforms using their own terminology, such as 'lambdas' and 'functions', only add to the sense that serverless is something amorphous and hard to pin down. So, in 2020, we need to work out how to make serverless work for us. Just as we need to consider how Kubernetes might be relevant to our needs, we need to consider in what ways serverless represents both a technical and business opportunity. Search Packt's library for the latest serverless eBooks and videos. Explore more technology eBooks and videos on the Packt store.
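To make the 'functions' terminology a little less amorphous, here is a minimal sketch of what a serverless function typically looks like: a stateless handler that receives an event and a context object and returns a response. The signature below follows the common AWS Lambda convention, but the exact shape varies by provider, and the event fields here are hypothetical:

```python
import json

def handler(event, context):
    """Entry point invoked by the platform for each incoming event."""
    # Hypothetical event shape: an API-gateway-style request with a JSON body.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

if __name__ == "__main__":
    # Local smoke test; in production the platform calls handler() for you.
    fake_event = {"body": json.dumps({"name": "ops team"})}
    print(handler(fake_event, context=None))
```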

Operations and infrastructure engineering in 2019: what really mattered

Richard Gall
18 Dec 2019
6 min read
Everything is unreliable, right? If we didn't realise it before, 2019 was the year when we fully had to accept the reality of the systems we're building and managing. That was scary, sure, but it was also liberating. Still, we shouldn't get carried away: given how highly distributed software systems are now part and parcel of a range of different industries, the issue of reliability and resilience isn't purely an academic one - in many instances, it's urgent and critical. That makes the work of managing and building software infrastructure an incredibly vital role. Back in 2015 I wrote that Docker had turned us all into SysAdmins, but on reflection it may be more accurate to say that we've now entered a world where cloud and the infrastructure-as-code revolution have turned everyone into a software developer.

Kubernetes is everywhere

Kubernetes is arguably the definitive technology of 2019. With the move to containers now fully mainstream, Kubernetes is integral in helping engineers to deploy and manage containers at scale. The other important thing about Kubernetes is that it all but kills off dreaded infrastructure lock-in. It gives you the freedom to build across different environments, and inside a more heterogeneous software infrastructure. From a tooling and skill-set perspective that's a massive win. Although conversations about flexibility and agility have been ongoing in the tech industry for years, with Kubernetes we are finally getting to a place where that's a reality. This isn't to say it's all plain sailing - Kubernetes' complexity is a point of complaint for many, with many people suggesting that, compared to, say, Docker, the developer experience leaves a lot to be desired. But insofar as DevOps and cloud-native have almost become the norm for many engineering teams, Kubernetes casts a huge shadow. Indeed, even if it's not the right option for you right now, it's hard to escape the fact that understanding it, and being open to using it in the future, is crucial. Find an extensive range of Kubernetes content in our new cloud bundles.

Serverless and NoOps

This year serverless has really come into its own. Although it was certainly gaining traction in 2018, the last 12 months have demonstrated its value as more and more teams have been opting to forgo servers completely. There have been a few arguments about whether serverless is going to kill off containers. It's not hard to see where this comes from, but in reality there's no chance that this is going to happen. The way to think of serverless is to see it as an additional option that can be used when speed and agility are particularly important. For large-scale application development and deployment, containers running on 'traditional' cloud servers will remain the dominant architectural approach. The companion trend to serverless is NoOps. Given the level of automation and abstraction that serverless can give you, the need to configure environments to ensure code runs properly all but disappears - code runs through 'functions' that get fired when needed. So, the thinking goes, the need for operations becomes very small indeed. But before anyone starts worrying about their jobs, the death of operations is greatly exaggerated. As noted above, serverless is just one option - it's not redefining the architectural landscape. It might mean that the way we understand 'ops' evolves (just as 'dev' has), but it certainly won't kill it off. Discover and search serverless eBooks and videos on the Packt store.
Chaos engineering

In the introduction I mentioned that one of the strange quandaries of our contemporary distributed software world is that we've essentially made things more unreliable at a time when software systems are being used in ever more critical applications. From healthcare to self-driving cars, we're entering a world where unreliability is both more common and potentially more damaging. This is where chaos engineering comes in. Although it first appeared on the ThoughtWorks Radar back in November 2017 and hasn't yet moved out of its 'Trial' quadrant, in reality chaos engineering has been manifesting itself in a whole host of ways in 2019. Indeed, it's possible that the term itself is misleading. While it suggests a wholesale methodology, in truth the core principles behind it - essentially stress-testing your software in order to manage unpredictability and improve resilience - are being applied in different ways for both testing and security purposes. Tools like Gremlin have done a lot to help promote chaos engineering and make it more accessible to organizations that maybe wouldn't see themselves as having the resources to perform cutting-edge approaches. It appears the groundwork has been done, which means it will be interesting to see how it evolves in 2020.

Observability: service meshes and tracing

One of the biggest challenges when dealing with complex software systems - and one of the reasons why they are necessarily unreliable - is that it can be difficult (sometimes impossible) to get an understanding of what's actually going on. This is why the debate around observability and monitoring has moved on. It's no longer enough to have a set of discrete logs and metrics. Chances are that they won't capture the subtleties of what's happening, or won't be able to provide you with the context that helps you to actually understand where errors are coming from. What's more, a lack of observability and the wrong monitoring set-up can cause all sorts of issues inside a team. At a time when the role of the on-call developer has never been more discussed and, indeed, important, ensuring there's a level of transparency is the only way to guarantee that all developers are able to support each other and solve problems as they emerge. From this perspective, then, observability has a cultural impact as much as it does a technical one. Learn distributed tracing with Yuri Shkuro from Uber's observability engineering team: find Mastering Distributed Tracing on the Packt store.

Not sure what to learn for 2020? Start exploring thousands of tech eBooks and videos on the Packt store.
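The point about logs lacking context is easier to see in code. The sketch below emits structured, JSON-formatted log lines that carry a trace ID and request metadata, so events from one request can be correlated across services. It is a minimal illustration of the idea rather than a full tracing setup, and the field names are illustrative:

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("checkout-service")

def log_event(trace_id, event, **fields):
    """Emit one structured log line carrying trace context."""
    record = {"ts": time.time(), "trace_id": trace_id, "event": event, **fields}
    logger.info(json.dumps(record))

def handle_request(user_id):
    trace_id = str(uuid.uuid4())        # normally propagated via request headers
    log_event(trace_id, "request.received", user_id=user_id)
    log_event(trace_id, "payment.charged", user_id=user_id, amount=49.99)
    log_event(trace_id, "request.completed", user_id=user_id, status=200)

if __name__ == "__main__":
    handle_request("user-42")
```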

Was 2019 the year the world caught the Kubernetes fever?

Guest Contributor
17 Dec 2019
8 min read
In the current IT landscape, phrases such as "containerized applications" and "container deployment" are thrown around so often that the meanings and connotations behind them often get diluted, and ultimately forgotten. In the case of Kubernetes, however, the opposite seems to be coming true. Although it might seem hyperbolic to describe the modern approach to software management as being shaped by an "Age of Kubernetes," the accelerating growth of Kubernetes as one of the most widely adopted open-source projects - with over 2,300 active contributors to Kubernetes's repository on GitHub - bears witness to the massive influence that the orchestration platform has had.

Originally developed by Google and launched in 2014, Kubernetes has come a long way since its advent. Although there are other similar container orchestration platforms available on the market, the most notable ones being Docker Swarm and Apache Mesos, Kubernetes has established itself as the de facto orchestration platform in use today. Having said that, as a quick Google search might reveal - with a whopping 26,400,000 results - Kubernetes has risen to the top of the totem pole over the course of the year. However, before we get into rationalizing the reasons that drive the world's obsession with the container orchestration platform, we'd like to provide our readers with a quick snapshot of everything Kubernetes is, and everything that it is not.

Kubernetes: a brief overview

The transition from the traditional deployment era, where organizations relied on applications running on physical servers, to the virtual deployment era, in which the highly popular concept of virtualization was introduced, and then to the container deployment era, which saw the adoption of 'containers' that are significantly lighter in weight than virtual machines (VMs) - these changes ultimately led to the creation of a container orchestration market, which is a huge contributing factor to the growing popularity of Kubernetes and other similar platforms. Having said that, as we've already mentioned above, the features that Kubernetes offers give it a certain edge over its competition.

Originally developed by Google in 2014 and descended from Google's internal cluster manager, 'Borg,' Kubernetes is an open-source container orchestration platform that reduces the workload for both large and small companies by automating the deployment, scaling and management of containerized applications. Bearing witness to the effectiveness and reliability of the platform is the fact that it is backed by gigantic digital entities such as Google, Microsoft, Cisco, Intel, and Red Hat. Furthermore, on its website, Kubernetes cites testimonials from corporations such as Spotify, Nav, Capital One, and Comcast, which further demonstrates the reliability of the benefits offered by the container orchestration platform.

What functions does Kubernetes perform?

Taking into consideration the fact that most organizations, regardless of how large or small they might be, are deploying hundreds and thousands of containerized instances daily, the complexity of the situation requires platforms such as Kubernetes to step in and help organizations manage and automate containerized processes, while taking the context of the microservice architecture into account as well.
Kubernetes aids development teams by deploying applications and helping in the management of containerized applications through the following functions:

- Deployment: Perhaps the most significant function Kubernetes performs is the deployment of a specified number of containers to a host, along with ensuring that the containers are functioning as they are supposed to, that is, without any malfunctions.
- Rollouts: A rollout refers to a change in the original deployment of a container. Kubernetes allows development teams to take the management of their containerized tasks to the next level by automating the initiation of container deployments, along with offering the option of pausing, resuming or rolling back any rollouts.
- Service discovery: Kubernetes automates the exposure of a specified container to the internet, or to other containers, by allotting containers a DNS name or an IP address. Given the increasing threat of cyber-attacks, it has become essential to protect your IP address; one way to do so is to use a VPN, which not only hides the IP address but also provides protection against IP spoofing.
- Managing storage: A monumental advantage that Kubernetes offers organizations is the liberty to allocate persistent local or cloud storage to specified containers as needed.
- Load balancing and scaling: Kubernetes allows organizations to maintain stability across the network by automatically load balancing and scaling when traffic to a certain container increases.
- Self-healing: A feature unique to Kubernetes, the platform seeks to improve availability by restarting or replacing a failed container. Moreover, Kubernetes can also automate the removal of containers that appear to be damaged or fail to meet health-check requirements.

Are there any limitations to Kubernetes's power?

Up till now, we've done nothing but present facts regarding Kubernetes. Often, however, organizations tend to overlook the limitations of an effective management tool. Despite the numerous advantages that organizations get to reap by integrating Kubernetes, it should always be kept in mind that Kubernetes is not a traditional piece of software and functions at the container level, rather than at the hardware level. In order to make the most effective use of the container orchestration platform, it is essential that companies take into account the limitations of Kubernetes, which consist of the following:

- Kubernetes does not build applications, nor does it deploy source code.
- Kubernetes is not responsible for providing organizations with application-centric services. Examples of these application-level services include middleware (message buses), data-processing frameworks such as Spark, caches, and many others.
- Kubernetes does not offer organizations logging, monitoring, and alerting solutions; instead it provides integrations and mechanisms which enable organizations to collect and export metrics.

In addition to these limitations, it should also be mentioned that despite the constant referral to Kubernetes as an orchestration tool, it is not just that. Instead of simply orchestrating or managing containerized applications by propagating a defined workflow, Kubernetes eliminates the need for orchestration altogether and consists of components that constantly drive the current state of the cluster towards the desired state specified by the organization. Furthermore, Kubernetes also gives rise to a system without any centralized control, which makes it much easier to use.
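That "drive the current state towards the desired state" behaviour is the heart of how Kubernetes controllers work. The toy loop below is a highly simplified, hypothetical sketch of the pattern; a real controller watches the Kubernetes API and manages pods, not a Python list:

```python
import time

DESIRED_REPLICAS = 3
running_pods = ["pod-1"]                 # pretend current cluster state

def reconcile(pods, desired):
    """One pass of a control loop: converge current state towards desired state."""
    while len(pods) < desired:           # scale up
        new_pod = f"pod-{len(pods) + 1}"
        pods.append(new_pod)
        print(f"started {new_pod}")
    while len(pods) > desired:           # scale down
        print(f"stopped {pods.pop()}")

if __name__ == "__main__":
    for _ in range(3):                   # real controllers loop forever; three passes here
        reconcile(running_pods, DESIRED_REPLICAS)
        time.sleep(0.1)
    print("current state:", running_pods)
```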
Explaining Kubernetes's popularity

Now that we've hopefully jogged our readers' memories by providing a rundown of everything Kubernetes, let's get down to business. Taking into consideration the ever-increasing growth and popularity of the container orchestration platform, particularly its spike in 2019, readers might be left wondering: why is Kubernetes so popular? Well, the short explanation is simple - it's highly effective. The longer explanation can be broken down into the following main reasons:

1. Kubernetes saves time: In the digital age, time is more crucial than ever. As more and more organizations get digitized, time plays a monumental role in routine operations, especially where development teams are concerned. The staggering popularity of Kubernetes is deeply rooted in how time-effective the platform is, since it allows organizations to handle all facets of container orchestration without having to fill out forms or send emails to request new machines to run applications.

2. Kubernetes is highly cost-effective: For most enterprises, the driving force behind their operations is the knowledge that their business goal is being fulfilled. Kubernetes can contribute to that, since it allows organizations to achieve better resource utilization. As we've already mentioned above, Kubernetes is a much-improved alternative to VMs, since it focuses solely on containers, which are lightweight and thus require less CPU and memory.

3. Kubernetes can run on the cloud, as well as on-premise: An unprecedented, but widely welcomed, feature that Kubernetes offers is that it is cloud-agnostic. The term 'cloud-agnostic' implies that Kubernetes can run on cloud-based services as well as on-premise. This offers organizations the luxury of not having to redesign or alter their infrastructure or applications to accommodate Kubernetes. Additionally, companies now provide software that helps organizations manage the running of Kubernetes, whether on a cloud-based server or on-premise.

Final words

We hope that we've made it clear what Kubernetes does, and the reasons that led to its rise in popularity. Having said that, it is still equally important that organizations take into consideration the limitations of the container orchestration system, and integrate it within their companies smartly - which ultimately enables organizations to leverage better benefits!

Author Bio

Rebecca James is an enthusiastic cybersecurity journalist. A creative team leader, editor of PrivacyCrypts.

Related reading:
- DevOps mistakes which developers should avoid!
- Chaos engineering comes to Kubernetes thanks to Gremlin
- Understanding the role AIOps plays in the present-day IT environment

DevOps mistakes which developers should avoid!

Guest Contributor
16 Dec 2019
9 min read
DevOps is becoming recognized as a vital pillar of digital transformation. Because of this, CIOs are becoming enthusiastic about how DevOps and open source can completely transform enterprise culture. All organizations want to succeed and reach their development goals across all projects. In reality, however, the journey is not at all easy, and it often requires collective effort and time. Along the way, there are some common failures which teams are likely to come across. In this post, we'll discuss DevOps mistakes which everyone should know about and must avoid. Before that, it is necessary to understand the importance of DevOps in today's world.

Importance of DevOps in today's world

DevOps describes a culture and a set of processes which bring development and operations together to complete software development. It enables organizations not just to create but also to improve products at a faster pace than they can with some other approaches to software development. DevOps adoption is increasing by the day: according to Statista, many business organizations are shifting towards the DevOps culture, with adoption up 17% in 2018 from the previous year. DevOps culture is instrumental in today's world. The following points briefly highlight the need for DevOps in this era:

- It reduces costs and other IT overheads.
- It results in greater competencies.
- It provides better communication and cooperation opportunities.
- The development cycle is fast and innovative.
- Deployment failures are reduced to a great extent.

Eight DevOps mistakes

Many people still don't fully understand what DevOps means. Without prior knowledge and understanding, many DevOps initiatives fail to get off the ground successfully. What follows is a brief description of common DevOps mistakes, and how they can be avoided to start a successful DevOps journey.

1. Rigid DevOps process

Compliance with core DevOps tenets is vital for DevOps success, but organizations have to make adjustments in active response to meet organizational demands. Enterprises have to make sure that, while the main DevOps pillars remain stable during implementation, they make the internal adjustments needed when benchmarking against expected outcomes. Instrumenting codebases in a granular manner and making them more partitioned results in more flexibility and gives the DevOps team the power to backtrack and recognize the root cause of divergence in the event of failed outcomes. But all adjustments have to be made while staying within the boundaries defined by DevOps.

2. Oversimplification of the process

DevOps is a complex process. To implement it, enterprises often go on a DevOps engineer hiring spree or, at times, create a new and isolated department. The DevOps department is then responsible for managing the DevOps framework and strategy, and it needlessly adds new processes that are often lengthy and complicated. Instead of creating an isolated DevOps department, organizations should focus on optimizing their processes to make operational products that leverage the right set of resources. For successful implementation of DevOps, organizations must be capable of managing the DevOps framework and leveraging functional experts and other resources that can manage DevOps-related tasks like budgeting goals, resource management, and process tracking. DevOps requires a cultural overhaul.
Organizations must consider a phased and measured transition to DevOps implementation by educating and training employees on these new processes. They should also have the right frameworks in place to enable careful collaboration.

3. Not preparing for a cultural change

When you have the right tools for DevOps practices, you will likely come across a new challenge: getting your teams to use those tools for fast development, continuous delivery, automated testing, and monitoring. Is your DevOps culture ready for all of this? For example, agile methodologies usually mandate that you ship new code once a week, or once a day; if a team isn't prepared for that cadence, agile methods fail. You might face the same conceptual issues with DevOps. It can be like trying to drive down a smooth road in a car with no fuel. To prevent this situation, plan for a transition period. Leave enough time for the development and operations teams to get used to the new practices, and make sure that they have a chance to gain experience with the new processes and tools. Ensure that before adopting DevOps, you've got a mature dev and ops culture.

4. Creating a single DevOps team

The most common mistake which organizations and enterprises make is to create a brand-new team and task it with addressing all the burdens of a DevOps initiative. It is challenging and complicated for both development and operations to deal with a new group that coordinates with everyone. DevOps started with the idea of enhancing collaboration between the teams involved in the development of software, like security, DBMS, and QA. However, it is not only about development and operations. If you create a new silo to address DevOps, you're making things more complicated. The secret ingredient here is simplicity. Focus on culture by encouraging a mindset of automation, quality, and stability. For instance, you might involve everyone in a conversation regarding your architecture, or about problems found in production environments, where all the relevant players need to be well aware of how their work influences others. DevOps is not about a single dedicated team but about organizations that progress together as a DevOps team.

5. Not including the security team

DevOps is about more than merely putting the development and operations teams together. It is a continuous process of automation and software development that includes audit, compliance, and security. Many organizations make the mistake of not planning their security practices in advance. According to a CA Technologies survey, security concerns were the number-one obstacle to DevOps, as cited by 38% of the respondents. Similarly, the Puppet survey found that high-performing DevOps teams spend 50% less time remediating security issues than low performers. These high-performing teams found different ways to communicate their security objectives and to establish security in the early phases of their development process. All DevOps practitioners should evaluate the controls, recognize the risks, and understand the processes. In the end, security is always an integral part of DevOps practices, as in DevSecOps (a practice in which development and operations are integrated with security). For example, if you have security issues in production, you can address them within your DevOps pipeline through the tools which the security team already uses. DevOps and security practices should be followed strictly, and there should be no compromises.
6. Incorrect use of incident management

DevOps teams must have a robust incident management process in place. Incident management needs to be proactive and ongoing, which means having a documented incident management process that defines the responses: a total downtime event, for example, will have a different response workflow than a minor latency problem. Failing to do so often leads to missed timelines and preventable project delays.

7. Not utilizing purposeful automation

DevOps needs organizations to adopt and implement purposeful automation across the complete development lifecycle, including continuous integration, continuous delivery, and deployment, to achieve velocity and quality outcomes. Purposeful end-to-end automation is crucial to a successful DevOps implementation, so organizations should look at automating the complete CI/CD pipeline. At the same time, organizations need to identify opportunities for automation across functions and processes; this helps reduce the need for manual handoffs in complicated integrations and across multiple deployment formats.

Editor's Note: Do you use or plan to use Azure for DevOps? If you want to know all about Azure DevOps services, we recommend our latest cookbook, 'Azure DevOps Server 2019 Cookbook - Second Edition' written by Tarun Arora and Utkarsh Shigihalli. The recipes in this book will help you achieve the skills you need to break down the invisible silos between your software development teams and transform them into a modern cross-functional software development team.

8. Wrong way to measure project success

DevOps promises faster delivery, but if that acceleration comes at the cost of quality, the DevOps program is a failure. Enterprises deploying DevOps should use the right metrics to understand project growth and success. For this reason, it is imperative to choose metrics that align velocity with success, and to focus on the right parameters when driving intelligent automation decisions.

Conclusion

Organizations are rushing towards DevOps to stay competitive and become successful, but they often make big mistakes along the way. All of these mistakes are avoidable, and hopefully the points mentioned above have clarified them for you. Once you overcome these mistakes and adopt DevOps practices, your organization will enjoy improved client satisfaction and employee morale, increased productivity, and agility, all of which help grow your business. If you plan to accelerate the deployment of high-quality software by automating builds and releases using CI/CD pipelines in Azure, we suggest you check out Azure DevOps Server 2019 Cookbook - Second Edition, which will help you create and release extensions to the Azure DevOps marketplace and reach the million-strong developer ecosystem for feedback.

Author Bio

Rebecca James is an enthusiastic cybersecurity journalist.
A creative team leader, editor of PrivacyCrypts.

Abel Wang explains the relationship between DevOps and Cloud-Native
Can DevOps promote empathy in software engineering?
7 crucial DevOps metrics that you need to track

Ansible role patterns and anti-patterns by Lee Garrett, its Debian maintainer

Vincy Davis
16 Dec 2019
6 min read
At DebConf held last year, Lee Garrett, the Debian maintainer for Ansible, talked about some of the best practices in the open-source configuration management tool. Ansible runs on Unix-like systems and configures both Unix-like and Microsoft Windows machines. It uses a simple syntax written in YAML, a human-readable data serialization language, and uses SSH to connect to the node machines. Ansible is a helpful tool for describing a group of machines, their configuration, and the actions to run on them, and it is used to implement software provisioning, application deployment, security compliance, and orchestration solutions.

Compared to other configuration management tools like Puppet, Chef, and SaltStack, Ansible is very easy to set up. Garrett says that, thanks to its agentless nature, users can control any machine that runs an SSH daemon, which makes it straightforward to manage any Debian-installed machine with Ansible. It also supports configuring many other things, such as networking equipment and Windows machines.

Interested in more of Ansible? Get an insightful understanding of the design and development of Ansible from our book 'Mastering Ansible' written by James Freeman and Jesse Keating. This book will help you grasp the true power of the Ansible automation engine by tackling complex, real-world actions with ease. The book also presents fully automated Ansible playbook executions with encrypted data.

What are Ansible role patterns?

Ansible uses a playbook as the entry point for provisioning and defines automation in the YAML format. A playbook needs a predefined pattern to organize it, and other files to facilitate the sharing and reuse of provisioning logic. This is where a 'role' comes into the picture. An Ansible role is an independent component that allows the reuse of common configuration steps. It contains a set of tasks that can be used to configure a host so that it serves a certain function, such as running a service. Roles are defined using YAML files with a predefined directory structure, containing directories such as defaults, vars, tasks, files, templates, meta, and handlers.

Some tips for creating good Ansible role patterns

An ideal role follows the 'roles/<role>/tasks/main.yml' layout, which specifies the name of the role, its tasks, and the main.yml entry point. At the beginning of each role, users are advised to check necessary preconditions, for example with 'assert' tasks that verify the required variables are defined. Other common first steps include installing packages, for example with apt or with yum (the default package manager on CentOS), or checking out code with git.

Templating files with abstraction is another important factor: variables are defined and rendered into templates to create the actual config file. Garrett also points out that the template module has a validate parameter, which checks the generated config file for syntax errors; a syntax error fails the playbook before the broken config file is ever deployed. For example, he says, "use Apache with the right parameters to do a config check on the syntax of the file, so that way you never end up with a state where there's a broken config sitting there."

Garrett also recommends putting sensible defaults in 'roles/<role>/defaults/main.yml'; because defaults have the lowest variable precedence, they can be overridden for specific cases. He further adds that a role should ideally run in check mode: ansible-playbook has a --check flag, which is basically "just a dry run" of the complete playbook, and --diff will display file or file-mode changes. A variable can be defined both in the defaults and in the vars folder; however, the latter is hard to override and should be avoided, warns Garrett.
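To make these tips concrete, here is a minimal sketch of a role that follows this pattern. The role name, variable names, and file contents are illustrative assumptions; the assert, apt, template (with validate), and handler constructs are standard Ansible features, and the validate command follows the pattern documented for the template module.

```yaml
# roles/sshd/defaults/main.yml -- sensible defaults (lowest precedence, easy to override)
---
ssh_port: 22

# roles/sshd/tasks/main.yml
---
- name: Assert that required variables are defined
  assert:
    that:
      - ssh_port is defined

- name: Install the OpenSSH server package
  apt:
    name: openssh-server
    state: present

- name: Render sshd_config from a template and validate it before deploying
  template:
    src: sshd_config.j2
    dest: /etc/ssh/sshd_config
    validate: /usr/sbin/sshd -t -f %s   # a broken config fails the play before it ships
  notify: restart sshd

# roles/sshd/handlers/main.yml
---
- name: restart sshd
  service:
    name: ssh
    state: restarted
```

Because the tasks rely on idempotent modules rather than shell commands, the whole role also behaves sensibly under --check and --diff.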
What are some typical anti-patterns in Ansible?

The shell and command modules are used in Ansible to execute commands on remote servers; both take a command name followed by a list of arguments. The shell module is used when a command has to be executed on the remote servers through a particular shell. Garrett says that new Ansible users often end up reaching for the shell or command module to run tools such as wget. According to him, this practice is wrong: Ansible currently ships thousands of modules, so there is a very good chance that a module already exists for whatever you are trying to do.

He also points out several problems with these two modules. The shell module's command string is interpreted by the actual shell, so special characters and variables in the string can behave unexpectedly, and when a playbook runs in check mode the shell and command modules won't run at all. Another drawback is that they always report a 'changed' state whenever the command exits with a zero value, which means the user probably has to register the output and check whether any error text is present in it.

Next, Garrett explored some alternatives to the shell/command modules, such as the 'slurp' module. The slurp module reads the whole file and returns it base64-encoded, and the actual content can then be accessed from the registered result. The best thing about this module is that it never reports a change and works well in check mode. In another example, Garrett showed that fetching a URL with a shell command downloads the file every time the playbook runs, reporting a change (or an error) on every run. This can be avoided by using the 'uri' module instead of the shell module: it declares the URL to retrieve and provides parameters that let the task run only when the file actually needs to be fetched. At the end of the talk, Garrett also threw light on the problems of overusing the set_fact module.

Watch the full video on YouTube. You can also learn all about custom modules, plugins, and dynamic inventory sources in our book 'Mastering Ansible' written by James Freeman and Jesse Keating.

Read More

Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
Automating OpenStack Networking and Security with Ansible 2 [Tutorial]
Why choose Ansible for your automation and configuration management needs?
Ten tips to successfully migrate from on-premise to Microsoft Azure
Why should you consider becoming 'AWS Developer Associate' certified?

Ten tips to successfully migrate from on-premise to Microsoft Azure 

Savia Lobo
13 Dec 2019
11 min read
The decision to start using Azure Cloud Services for your IT infrastructure seems simple. However, a successful cloud migration requires hard work and good planning. At Microsoft Ignite 2018, Eric Berg, an Azure Lead Architect at COMPAREX and a Microsoft MVP for Azure and Cloud and Data Center Management, shared 'Ten tips for a successful migration from on-premises to Azure', based on his day-to-day learnings. Eric covers known issues, common pitfalls, and best practices to get started.

Further Reading

To gain a deep understanding of the various Azure services related to infrastructure, applications, and environments, you can check out our book Microsoft Azure Administrator – Exam Guide AZ-103 by Sjoukje Zaal. This book is also an effective guide for acquiring the skills needed to pass Exam AZ-103, with effective mock tests and solutions so that you can confidently crack this exam.

Tip #1: Have your Azure Governance Set

You need a basic plan for what you are going to do with Azure; consider Azure Governance the basis for cloud adoption. Berg says, "if you don't have a plan for what you do with Azure, it will hurt you." Running something on Azure is good, but keeping it secure is the key thing, and governance rule sets help you audit and figure out whether everything is running as expected. One of the key parts of Azure Governance is networking, so you should choose a networking concept that suits both the company and the business. Microsoft is moving really fast: in 2018, to connect the US and Europe you had to use a VPN, then came global VNet peering, and now there is Azure Virtual WAN. Such advancements allow a networking concept to keep evolving and to always use cutting-edge technologies, while adopting such a rule set lets customers safely try a lot of things on their own.

Tip #2: Think about different requirements

From an IT perspective, every organization wants control, focus on its IT, and assurance that everything is compliant; many organizations also want written policies in place. On the other hand, the human resources department wants to be totally agile and innovative and wants to consume services and self-service without feeling the need to communicate with IT. "I've seen so many human resource departments doing their own contracts with external partners, building some fancy new hiring platforms, and IT didn't know anything about it," Berg points out. When it comes to the cloud, each and every member of the company should be aware and involved. It is not just an IT decision; it is a company-wide decision.

Tip #3: Assess your infrastructure

Berg says organizations should assess their environment, because migrating your servers to Azure exactly as they are is not the right thing to do. In Azure, the decision between 8 and 16 gigabytes of RAM is a decision between 100 and 200 percent of the cost. Right-sizing based on a good assessment is therefore extremely important, and it cannot be achieved by running a script once for 10 minutes to see what your VMs are doing. Instead, you should run an assessment for at least one month, or even three months, to capture both peaks and quiet times; that is the kind of assessment that tells you what you really need in order to migrate your systems well. Keep a check on your inventory and also on your contracts, to verify that you are even allowed to migrate your ERP or CRM system to Azure.
Some contracts state that deploying a solution outside the company's premises requires an extra contract and extra cost, Berg warns. Migrating to Azure is technically easy, but it can be difficult from a contract perspective. You should also define your needs for migrating to a cloud platform: if you don't get value out of your migration, don't do it. Berg advises not to migrate to Azure just because everybody does, or because it's cool or fancy.

Tip #4: Do not rebuild your on-premises structures on Cloud

Cloud needs trust. Organizations often try to recreate their old on-premises structures in the cloud, such as the external DMZ, the internal DMZ, and 15 layers of security. Berg said they use Intune, a cloud-based service in the enterprise mobility management (EMM) space that helps your workforce stay productive while keeping corporate data protected, along with Office 365 in the cloud. Intune doesn't sit behind a DMZ, and if you want to deploy your application or use the latest tech such as bots or cognitive services, it may not fit neatly into a traditionally structured network design in the cloud. On the other hand, there will be disconnected subscriptions, that is, subscriptions with no connection to your on-premises network; this has to be dealt with at the security level. New services need new ways. If you are not agile, your IT won't be agile. If you need 16 days or six weeks to deploy a server and you insist on sticking to those rules and processes, Azure won't be beneficial for you, because there will be no value in it for you.

Tip #5: Azure consumption is billed

If you spin up a VM that costs $25,000 a month, you have to pay for it. The M-series VMs have 128 cores and 4 terabytes of RAM and are simply amazing, but deployed with Windows Server and SQL Server Enterprise the cost goes up to $58,000 a month for just one VM. When you migrate to Azure and start integrating new things, such as facial recognition, you probably have to change your own business model, and you have to set up a cost management tool for usage tracking. There are many usage APIs and third-party tools available, and proper cost management of your Azure infrastructure helps you divide costs. If you put everything into one subscription and one resource group where everyone is an owner, the problem won't be whether things work, but that you will not be able to figure out who is responsible for what. A good structure of subscriptions, good role-based access control, and a good tagging policy will help you attribute costs much better.

Tip #6: Identity is the new perimeter

Azure AD is the center of everything. Getting into a traditional data center is not easy these days: you need access to the premises, then to the data center, and then a login to the on-premises infrastructure. But if anyone has a user's credentials, they are inside that user's Azure AD, on the VPN, and potentially inside the on-premises data center as well. Identity is therefore a key part of security. "So, don't think about using MFA, use MFA. Don't think about using Privileged Identity Management, use it, because that's the only way to secure your infrastructure properly and get insight into who is using what in my infrastructure and how it is going," Berg warns. In the modern workplace you can work from anywhere, but you need proper security levels in place: secure devices, secure identity, secure access via MFA, and so on. Stay cautious.
Tip #7: Include your users

Users are the most important part of any ecosystem, so when you migrate servers or your entire on-premises architecture, inform them. What if you have a CRM system fully in the cloud and there is no local cache on the system anymore? That might not fit the needs of your customers or internal users, which is why organizations should inform them of their plans. They should also ask users what they really need, which will in turn help the organization.

Berg illustrated this point with a customer project in Germany whose goal was to decrease response times. The client needed up to two days to answer a customer's email, because the product is very complex and the documentation is spread across a large library. The internal goal was to bring the response time down from two days to ten minutes. Berg said they considered using a bot, some cognitive services, Azure Search, and an Outlook plug-in: you get the mail, search for your product, and everything is pulled together for you, the documentation, the fact sheets, and the standard email template for answering such a request. The proposed solution was good; both Berg and the IT team liked it. However, when the sales team was asked, they said such a solution would steal their jobs. The mistake here was that Sales was not included in the process of finding this solution.

To rectify this, organizations should include all stakeholders. Focus on benefits and win over some key users, because they will help you spread the word. In the case above, explain and evangelize to the sales teams: they are afraid because they don't understand what happens when a bot and some cognitive services figure out which document is right. It won't steal their jobs; it will help them do their jobs better with improved efficiency. Train and educate people so they are able to use the new tools, check your processes, and consider changes. Managed services can also help you focus: backup, monitoring, and patching are things somebody else can do for you, so that after the migration you can focus on integrating new services, improving right-sizing, optimizing cost and performance, and staying up to date with all the changes in Azure.

Tip #8: Consider Transformation instead of Migration

Consider a transformation instead of a migration. Build logical blocks; don't move an ERP system without its database, or the other way around. Berg suggests that you:

- identify technical and licensing showstoppers
- define your infrastructure requirements
- check your compatibility to migrate
- update the helpdesk about SLAs
- ask whether Azure is really helping you cover your assets, and whether things are getting better or worse

Tip #9: Keep up to date

Continuous learning and continuous knowledge are key to growth. Azure releases a lot of changes very often, and users are notified of the latest updates via email or Azure news. Organizations should review their architecture on a regular basis, Berg says: things have moved from VPN to global VNet peering to Virtual WAN, so you should be able to change your infrastructure quite fast. Audit your governance not on a yearly basis but maybe monthly or quarterly. Consider changes fast; don't spend two years thinking about a change, because by then it will no longer be interesting. If there's a new opportunity, grab it, use it, and perhaps drop it again three weeks later; but avoid thinking about it for two months or more, or it will be too late.
Tip #10: Plan for the future

Do some end-to-end planning and think about the end-to-end solution: who is using it, what the back end is, and so on. Save money and forecast your costs, and keep an eye on resources that sprawl because someone ran a script without knowing what they were doing. Simply migrating an IIS server with a static website to Azure is not an actual cloud migration; customers should instead consider moving the site to static website hosting on storage, to a web app, and so on, rather than keeping it in a Windows VM.

Berg concludes by saying that an important step is to move beyond infrastructure. Everybody migrates infrastructure to Azure because that's easy; it's just moving from one VM to another VM. Customers should not 'only' migrate: they should also start optimizing, move forward to platform services, be more agile, think about new ways of working and, most importantly, get rid of all the old on-premises stuff. Berg adds, "In five years probably nobody will talk about infrastructure as a service anymore because everybody has migrated and optimized it already."

To stay more compliant with corporate standards and SLAs, learn how to configure Azure subscription policies with "Microsoft Azure Administrator – Exam Guide AZ-103" by Packt Publishing.

5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Azure Functions 3.0 released with support for .NET Core 3.1!
Microsoft announces Azure Quantum, an open cloud ecosystem to learn and build scalable quantum solutions

Why should you consider becoming ‘AWS Developer Associate’ certified?

Savia Lobo
12 Dec 2019
5 min read
Organizations both large and small are looking to automate their day-to-day processes, and the best option they consider is moving to the cloud. However, they also fear certain challenges that can make cloud adoption difficult. The biggest challenge is the lack of resources or expertise to understand how different cloud services function or how they are built, which is needed to leverage their advantages to the fullest. Many developers use cloud computing services, either through the companies they work with or through their own subscriptions, without really knowing the intricacies, so their knowledge of how the internal processes work remains limited. Certifications can, in fact, help you understand how the cloud functions and what goes on within these gigantic data centers. To start with, enroll in a basic certification from any of the popular cloud service providers. Once you know the basics, you can go ahead and master the other certifications available, based on your job role or career aspirations.

Why choose an AWS certification

Amazon Web Services (AWS) is currently considered one of the top providers in the cloud computing market. According to Gartner's Magic Quadrant 2019, AWS continues to lead in public cloud adoption. AWS also offers eleven certifications that cover foundational and specialty cloud computing topics. If you are a developer or a professional who wants to pursue a career in cloud computing, you should consider taking the 'AWS Certified Developer - Associate' certification.

Do you wish to learn from AWS subject-matter experts, explore real-world scenarios, and pass the AWS Certified Developer – Associate exam? We recommend you explore the book AWS Certified Developer - Associate Guide - Second Edition by Vipul Tankariya and Bhavin Parmar.

Many organizations use AWS services, and being certified can open various options for improved learning. Along with being popular among companies, AWS includes a larger set of cloud service options than other cloud service providers. While hands-on experience holds great value for developers, getting certified by one of the most popular cloud providers only adds to it. From web developers and database admins to IoT and AI developers, AWS offers certification options that delve into almost every aspect of technology, and it constantly adds new offerings that keep you up to date with cutting-edge technologies.

Getting an AWS certification is definitely a difficult task, but you do not have to quit your current job for this one. Unlike other vendors, Amazon offers a realistic certification path that does not require highly specialized (and expensive) training to start. AWS certifications validate a candidate's familiarity and knowledge of best practices in cloud architecture, management, and security.

Prerequisites for this certification

The AWS Developer Associate certification will help you enhance skills that impact your career growth. However, you need to keep certain prerequisites in mind. A developer should have:

- attended the AWS Essentials course, or equivalent experience
- knowledge of developing applications with API interfaces
- a basic understanding of relational and non-relational databases
How the AWS Certified Developer - Associate level certification course helps a developer

AWS Certified Developer Associate certification training gives you hands-on exposure to core AWS services through guided lectures, videos, labs, and quizzes. You'll get trained in compute and storage fundamentals, and in the architecture and security best practices that are relevant to the AWS Certified Developer exam. This associate-level course helps developers identify the appropriate AWS architecture and teaches them to design, develop, and deploy optimal AWS cloud solutions. If you already have some knowledge of AWS, the course will help you identify and deploy secure procedures for optimal cloud deployment and maintenance. Developers will also learn to develop and maintain applications written for Amazon S3, DynamoDB, SQS, SNS, SWF, AWS Elastic Beanstalk, and AWS CloudFormation.

After achieving this certification, you will be an asset to any organization. You can help them leverage best practices around advanced cloud-based solutions and migrate existing workloads to the cloud. This indirectly means a rise in your annual income as well as career growth. However, getting certified alone is not enough; other factors such as skills, experience, and geographic location are also important. This certification will help you become competent in using Amazon's cloud services. The course is part of the Associate tier of certifications that AWS offers; you can further improve your cloud computing skills by taking up certifications from the Professional tier and later from the Specialty tier, whichever suits you best.

New AWS services and features are added every year. Certification alone is not enough; staying relevant is the key. To continually demonstrate expertise and knowledge of best practices for the most up-to-date AWS services, certification holders are required to re-certify every two years. You can either take a professional-level exam for the same certification or pass the re-certification exam for your existing certification.

To further gain valuable insights on how to design, develop, and deploy cloud-based solutions using AWS, and to get familiar with Identity and Access Management (IAM) along with Virtual Private Cloud (VPC), you can check out the book AWS Certified Developer - Associate Guide - Second Edition by Vipul Tankariya and Bhavin Parmar.

How do AWS developers manage Web apps?
Why AWS is the preferred cloud platform for developers working with big data
How do you become a developer advocate?

Abel Wang explains the relationship between DevOps and Cloud-Native

Savia Lobo
12 Dec 2019
5 min read
Cloud-native means microservices, containers, and serverless apps that run in multi-cloud environments and are managed through DevOps processes. However, the relationship between these is not always clearly defined. Shayne Boyer, Principal Cloud Advocate, in a conversation with Abel Wang, Principal Cloud Advocate and DevOps lead, discussed the relationship between DevOps and cloud-native on the Microsoft developer channel.

If you wish to learn how to implement DevOps using Azure DevOps, and also want to learn about the entire serverless stack available in Azure, including Azure Event Grid, Azure Functions, and Azure Logic Apps, you should check out the book Azure for Architects - Second Edition to know more.

What is DevOps?

Abel starts off by saying that DevOps at Microsoft is the union of people, processes, and products to enable the continuous delivery of value to end users. The reason you should care about DevOps when it comes to cloud-native apps is exactly that: continuously delivering value. One of the powers of cloud-native is that, because all of your infrastructure is out in the cloud, you are able to iterate on it very quickly, which is also what DevOps helps you do. These days the speed of business is so fast that if we can't iterate quickly and deliver value quickly, our competitors will, and once they do, the rest become obsolete. Hence, it is extremely important to iterate quickly, which cloud-native helps enable.

The concept of continuously delivering value remains similar to what we do on a local machine during a standard deployment. Where cloud-native becomes unique is that all your infrastructure is out in the cloud, so deploying to the cloud is easier than, say, deploying a mobile app. One of the most powerful things about cloud-native is its microservice-based architecture. Instead of deploying a massive monolith when we make one tiny change, we can deploy just that one service, which simplifies and speeds up the process. With this, every developer check-in can go through our gates and our pipeline and reach production at a quicker rate, so we are able to deliver value even faster.

Key DevOps and Cloud-Native Apps concepts

Wang says to aim for a CI/CD pipeline that can process code as soon as somebody checks it in; the pipeline should make it easy to build the code and then deploy it to the infrastructure in place. Wang demonstrates a cloud-native application with a slightly complicated infrastructure: the application consists of a static website held in Azure Storage and a back end written in .NET Core that runs in an Azure Function, and both connect to a Cosmos DB. If microservices are deployed independently, the services need to be smart enough to know which versions the other services are on, so that the entire application is not disturbed when an additional service is rolled out. He further demonstrates deploying the entire infrastructure all at once. You can check out the video to know more about the demonstration in detail.
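For readers who want to picture what such a pipeline can look like, below is a minimal, hypothetical multi-stage Azure Pipelines sketch for an app of this shape (static front end plus an Azure Functions back end). The stage names, paths, and deployment scripts are illustrative assumptions, not the pipeline Wang shows in the video.

```yaml
# Hypothetical multi-stage pipeline: every check-in flows through build, then infrastructure, then app deploy.
trigger:
  - main

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: dotnet build && dotnet test
            displayName: 'Build and test the .NET Core back end'
          - publish: $(System.DefaultWorkingDirectory)/frontend
            artifact: site

  - stage: DeployInfrastructure
    dependsOn: Build
    jobs:
      - job: Provision
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          # Placeholder: provision the storage account, function app, and Cosmos DB
          # (for example with an ARM or Bicep template).
          - script: ./deploy-infra.sh
            displayName: 'Provision cloud infrastructure'

  - stage: DeployApp
    dependsOn: DeployInfrastructure
    jobs:
      - job: Release
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - script: ./deploy-app.sh
            displayName: 'Deploy static site and function code'
```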
How to ensure quality across environments in a DevOps practice?

In a cloud-native application, we do not need to worry about reproducing similar infrastructure when moving from one environment to the next, because the dev environment will be exactly the same as the QA environment, and the same all the way out into production. This is also cost-effective: we can spin an environment up to run whatever we need and tear it down as soon as it is done, so we are not paying for anything idle.

Automating cloud-native processes still includes a few manual steps, such as approving via an email. Wang says one could technically automate everything; however, he prefers having manual approvers. Within Azure DevOps there is also the concept of an automated approval gate, so automation can help decide whether an approval should be postponed or not. Wang says he uses an automated approval gate to run DNS checks that tell him whether or not the DNS has propagated. Keeping quality in your pipelines is really difficult to do, Wang notes, but you can do things like run all of the automated UI tests for a particular environment. "So by the time, let's say, I deploy this into a QA environment, by the time my QA testers even look at it, it could have run through hundreds of thousands of automated UI tests already. So there's a lot less that a human needs to do," Wang adds.

To learn comprehensively how to develop Azure cloud architecture and a pipeline management system, and to know about some security best practices for your Azure deployment, you can check out the book Azure for Architects - Second Edition by Ritesh Modi.

Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes
Can DevOps promote empathy in software engineering?
Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]

AWS re:Invent 2019 Day 2 highlights: AWS Wavelength, Provisioned Concurrency for Lambda functions, and more!

Savia Lobo
04 Dec 2019
6 min read
Day 2 of the ongoing AWS re:Invent 2019 conference at Las Vegas included a lot of new announcements, such as AWS Wavelength, Provisioned Concurrency for Lambda functions, Amazon SageMaker Autopilot, and much more. The Day 1 highlights included plenty of exciting releases too, such as the preview of AWS' new quantum service, Braket, and Amazon SageMaker Operators for Kubernetes, among others.

Day Two announcements at AWS re:Invent 2019

AWS Wavelength to deliver ultra-low latency applications for 5G devices

With AWS Wavelength, developers can build applications that deliver single-digit millisecond latencies to mobile devices and end users. AWS developers can deploy their applications to Wavelength Zones, AWS infrastructure deployments that embed AWS compute and storage services within the telecommunications providers' datacenters at the edge of the 5G networks, and seamlessly access the breadth of AWS services in the region. This enables developers to deliver applications that require single-digit millisecond latencies, such as game and live video streaming, machine learning inference at the edge, and augmented and virtual reality (AR/VR).

AWS Wavelength brings AWS services to the edge of the 5G network, minimizing the latency to connect to an application from a mobile device. Application traffic can reach application servers running in Wavelength Zones without leaving the mobile provider's network. This avoids the extra network hops to the Internet that can result in latencies of more than 100 milliseconds and prevent customers from taking full advantage of the bandwidth and latency advancements of 5G. To know more about AWS Wavelength, read the official post.

Provisioned Concurrency for Lambda Functions

To provide customers with improved control over the performance of their mission-critical serverless applications, AWS introduced Provisioned Concurrency, a Lambda feature that works with any trigger; for example, you can use it with WebSocket APIs, GraphQL resolvers, or IoT Rules. The feature keeps functions initialized and hyper-ready to respond in double-digit milliseconds, which gives you more control when building serverless applications that require low latency, such as web and mobile apps, games, or any service that is part of a complex transaction. It is helpful for implementing interactive services, such as web and mobile backends, latency-sensitive microservices, or synchronous APIs. When you enable Provisioned Concurrency for a function, the Lambda service initializes the requested number of execution environments so they are ready to respond to invocations. To know more about Provisioned Concurrency in detail, read the official document.
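As a rough illustration of what enabling this looks like in infrastructure-as-code, the AWS SAM template fragment below keeps a number of execution environments warm for a function's published alias. The function name, handler, runtime, and concurrency value are placeholder assumptions; treat it as a sketch rather than a definitive configuration.

```yaml
# Hypothetical SAM template fragment: keep five execution environments warm for the 'live' alias.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  InteractiveApi:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.8
      CodeUri: ./src
      AutoPublishAlias: live                 # provisioned concurrency applies to a version/alias
      ProvisionedConcurrencyConfig:
        ProvisionedConcurrentExecutions: 5   # environments initialized ahead of traffic
```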
Amazon Managed Cassandra Service open preview launched

Amazon Managed Apache Cassandra Service (MCS) is a scalable, highly available, and managed Apache Cassandra-compatible database service. Since Amazon MCS is serverless, you pay only for the resources you use, and the service automatically scales tables up and down in response to application traffic. You can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. With Amazon MCS, it becomes easy to run Cassandra workloads on AWS using the same Cassandra application code and developer tools that you use today. Amazon MCS implements the Apache Cassandra version 3.11 CQL API, allowing you to use the code and drivers that you already have in your applications; updating your application is as easy as changing the endpoint to the one in the Amazon MCS service table. To know more about Amazon MCS in detail, read the AWS official blog post.

Introducing Amazon SageMaker Autopilot to auto-create high-quality Machine Learning models with full control and visibility

The AWS team launched Amazon SageMaker Autopilot to automatically create classification and regression machine learning models with full control and visibility. SageMaker Autopilot first checks the dataset and then runs a number of candidates to figure out the optimal combination of data preprocessing steps, machine learning algorithms, and hyperparameters, all with a single API call or a few clicks in Amazon SageMaker Studio. It then uses this combination to train an inference pipeline, which can be easily deployed either on a real-time endpoint or for batch processing; all of this takes place on fully managed infrastructure. SageMaker Autopilot also generates Python code showing exactly how data was preprocessed: not only can you understand what SageMaker Autopilot does, you can also reuse that code for further manual tuning if you're so inclined.

SageMaker Autopilot supports:

- input data in tabular format, with automatic data cleaning and preprocessing
- automatic algorithm selection for linear regression, binary classification, and multi-class classification
- automatic hyperparameter optimization
- distributed training
- automatic instance and cluster size selection

To know more about Amazon SageMaker Autopilot, read the official document.

Announcing ML-powered Amazon Kendra

Amazon Kendra is an ML-powered, highly accurate enterprise search service. It provides powerful natural language search capabilities for your websites and applications so that end users can easily find the information they need within the vast amount of content spread across the organization. Key benefits of Kendra include:

- Users get immediate answers to questions asked in natural language, eliminating the need to sift through long lists of links hoping one has the information you need.
- Kendra lets you easily add content from file systems, SharePoint, intranet sites, file-sharing services, and more into a centralized location, so you can quickly search all of your information to find the best answer.
- Search results get better over time, as Kendra's machine learning algorithms learn which results users find most valuable.

To know more about Amazon Kendra in detail, read the official document.

Introducing the preview of Amazon CodeGuru

Amazon CodeGuru is a machine learning service for automated code reviews and application performance recommendations. It helps developers find the most expensive lines of code, the ones that hurt application performance and cause difficulty while troubleshooting. CodeGuru is powered by machine learning, best practices, and hard-learned lessons across millions of code reviews and thousands of applications profiled on open source projects and internally at Amazon. It helps developers find and fix code issues such as resource leaks, potential concurrency race conditions, and wasted CPU cycles. To know more about Amazon CodeGuru in detail, read the official blog post.

A few other highlights of Day Two at AWS re:Invent 2019 include:

- General availability of Amazon EKS on AWS Fargate, AWS Fargate Spot, and ECS Cluster Auto Scaling.
- The Deep Graph Library, an open source library built for easy implementation of graph neural networks, is now available on Amazon SageMaker.
Amazon re:Invent will continue throughout this week, until the 6th of December. You can access the Livestream. Keep checking this space for further updates and releases.

Amazon re:Invent 2019 Day One: AWS launches Braket, its new quantum service and releases SageMaker Operators for Kubernetes
Amazon's hardware event 2019 highlights: a high-end Echo Studio, the new Echo Show 8, and more
10 key announcements from Microsoft Ignite 2019 you should know about

Amazon re:Invent 2019 Day One: AWS launches Braket, its new quantum service and releases SageMaker Operators for Kubernetes

Sugandha Lahoti
03 Dec 2019
6 min read
At day one of the ongoing Amazon re:Invent 2019, there was a flurry of announcements made for AWS. Most importantly, AWS announced the preview launch of Braket, its own quantum computing service, following the likes of IBM, Microsoft, and Google. Amazon also released Amazon SageMaker Operators for Kubernetes to help data scientists using Kubernetes to train, tune, and deploy machine learning models in Amazon SageMaker. re:Invent is Amazon's flagship conference hosted by Amazon Web Services for the global cloud computing community. This year, re:Invent is taking place in Las Vegas, December 2-6, 2019.

re:Invent 2019 Day One announcements

Braket: AWS' new quantum service, in preview now

Amazon Braket (named after the common notation for quantum states) is a fully managed service that helps you get started with quantum computing. Braket consists of a full development environment that helps data scientists to:

- design quantum algorithms from scratch or choose from a set of pre-built algorithms,
- test these algorithms on simulated quantum computers, and
- run them on their choice of quantum hardware technologies, including gate-based and quantum annealing superconductors and ion-trap hardware (from providers such as D-Wave, IonQ, and Rigetti).

Once your tests are complete, you will be automatically notified and your results will be stored in Amazon S3. Amazon Braket publishes event logs and performance metrics, such as completion status and execution time, to Amazon CloudWatch. To make it easier to develop hybrid algorithms that combine classical and quantum tasks, Amazon Braket helps manage classical compute resources and establish low-latency connections to the quantum hardware.

At re:Invent 2019, AWS also launched the Amazon Quantum Solutions Lab, a collaborative research program that connects you with quantum computing experts from Amazon and its technology and consulting partners. They can help you identify potential uses of quantum computing, build internal expertise, and collaborate on programs to design and test quantum algorithms. Braket is available in preview now.

Amazon SageMaker Operators for Kubernetes

Developers and data scientists can now use Kubernetes to train, tune, and deploy machine learning models in Amazon SageMaker with the new Amazon SageMaker Operators for Kubernetes. Customers can install these operators on their Kubernetes cluster to create Amazon SageMaker jobs natively using the Kubernetes API and command-line Kubernetes tools such as 'kubectl'. The operators can be used to train machine learning models, optimize hyperparameters for a given model, run batch transform jobs over existing models, and set up inference endpoints. With these operators, users can manage their jobs in Amazon SageMaker from their Kubernetes cluster in Amazon Elastic Kubernetes Service (EKS). Amazon SageMaker Operators for Kubernetes are available in select AWS regions.
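To give a flavor of what "natively" means here, the sketch below shows a hypothetical TrainingJob custom resource that such an operator could manage via kubectl. The apiVersion, field names, image URI, and ARNs are illustrative assumptions modeled on SageMaker's CreateTrainingJob API rather than the operator's exact schema.

```yaml
# Hypothetical SageMaker TrainingJob custom resource, applied with `kubectl apply -f trainingjob.yaml`.
apiVersion: sagemaker.aws.amazon.com/v1
kind: TrainingJob
metadata:
  name: xgboost-example
spec:
  roleArn: arn:aws:iam::111122223333:role/sagemaker-execution-role          # placeholder ARN
  region: us-east-1
  algorithmSpecification:
    trainingImage: 111122223333.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest  # placeholder image
    trainingInputMode: File
  resourceConfig:
    instanceCount: 1
    instanceType: ml.m5.xlarge
    volumeSizeInGB: 10
  outputDataConfig:
    s3OutputPath: s3://example-bucket/output
  stoppingCondition:
    maxRuntimeInSeconds: 3600
```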
AWS DeepComposer, a creative way to learn Machine Learning

Amazon launched AWS DeepComposer, the world's first machine-learning-enabled musical keyboard, at re:Invent 2019. AWS DeepComposer is an educational tool that gives developers of all skill levels a creative way to experience machine learning through music.

https://youtu.be/XH2EbK9dQlg

You can input a melody by connecting the AWS DeepComposer keyboard to your computer, or by playing the virtual keyboard in the AWS DeepComposer console. You can generate an original music composition using the pre-trained genre models in the console and then publish your tracks to SoundCloud. DeepComposer is designed specifically to educate developers by means of tutorials, sample code, and training data, which can be used to get started with building generative AI models without having to write a single line of code. With AWS DeepComposer, you can train and optimize GAN models to create original music; GAN models pit two different neural networks against each other to produce new and original digital works based on sample inputs. AWS DeepComposer is available in preview now.

Amazon Transcribe now extended to healthcare patients

Amazon's automatic speech recognition service, Amazon Transcribe, is now available for medical speech, as announced at re:Invent 2019. Amazon Transcribe Medical allows physicians to easily and quickly dictate their clinical notes and see their speech converted to accurate text in real time, without any human intervention. Clinicians can use natural speech and do not have to explicitly call out punctuation like "comma" or "full stop". This text can then be automatically fed to downstream applications such as EHR systems, or to AWS language services such as Amazon Comprehend Medical for entity extraction. To make it work, you capture audio using your device's microphone and send PCM (pulse-code modulation) audio to a streaming API based on the popular WebSocket protocol. The API responds with a series of JSON blobs containing the transcribed text, as well as word-level timestamps, punctuation, and so on. Optionally, you can save this data to an Amazon Simple Storage Service (S3) bucket. Amazon Transcribe Medical is available in the US East (N. Virginia) and US West (Oregon) regions.

Updates to Microsoft Windows Server

AWS has released a bring-your-own-license (BYOL) experience as an easier way for customers to bring, and manage, their existing licenses for Microsoft Windows Server and SQL Server on AWS. The new BYOL experience enables customers who want to use their existing Windows Server or SQL Server licenses to seamlessly create virtual machines in EC2, while AWS takes care of managing the licenses to help ensure compliance with the licensing rules specified by the customer. Amazon is also providing the End-of-Support Migration Program (EMP) for Windows Server. On January 14, 2020, support for Windows Server 2008 and 2008 R2 will end. An application that can run only on an unsupported version of Windows Server is problematic, as you will no longer get free security patch updates, leaving you vulnerable to security and compliance risks. This new program combines technology with expert guidance to migrate legacy applications running on outdated versions of Windows Server to newer, supported versions on AWS.

Other updates announced at Amazon re:Invent 2019

- Amazon EventBridge Schema Registry is now in preview. The schema registry stores the structure (schema) of Amazon EventBridge events and maps them to Java, Python, and TypeScript bindings so that you can use the events as typed objects.
- The existing AWS IoT SiteWise preview adds new features such as creating a virtual representation of your facility and monitoring production performance metrics, together with AWS IoT SiteWise Monitor, a new SaaS application that lets you monitor, visualize in real time, and interact with the data collected and organized by AWS IoT SiteWise.
- The upcoming AWS DeepRacer Evo car will include a stereo camera and a Light Detection and Ranging (LIDAR) sensor.
- The DeepRacer League in 2020 will have 8 additional races in 5 countries.
- EC2 Image Builder is in preview, a service that makes it easier and faster to build and maintain secure OS images for Windows Server and Amazon Linux 2 using automated build pipelines.

Amazon re:Invent will continue throughout this week (the last day is the 6th of December). You can access the Livestream here. Keep checking this space for news on other updates and launches.

Amazon EKS Windows Container Support is now generally available
Amazon's hardware event 2019 highlights: a high-end Echo Studio, the new Echo Show 8, and more
10 key announcements from Microsoft Ignite 2019 you should know about

KubeCon + CloudNativeCon North America 2019 Highlights: Helm 3.0 release, CodeReady Workspaces 2.0, and more!

Savia Lobo
26 Nov 2019
6 min read
Update: On November 26, the OpenFaaS community released a post with a few of its highlights at KubeCon, San Diego. The post also includes highlights from OpenFaaS Cloud, Flux from Weaveworks, Okteto, Dive from Buoyant, and k3s going GA.

KubeCon + CloudNativeCon North America 2019, held in San Diego from November 18 to 21, brought together over 12,000 attendees to discuss and advance containers, Kubernetes, and cloud-native computing. The conference was home to many major announcements, including the release of Helm 3.0, Red Hat's CodeReady Workspaces 2.0, the GA of Managed Istio on IBM Cloud Kubernetes Service, and many more.

Major highlights at KubeCon + CloudNativeCon 2019

General availability of Managed Istio on IBM Cloud Kubernetes Service

IBM Cloud announced that Managed Istio on its Kubernetes service is generally available. This service provides a seamless installation of Istio, automatic updates, lifecycle management of the Istio control plane components, and integration with platform logging and monitoring tools. With Managed Istio, a user's service mesh is tuned for optimal performance in IBM Cloud Kubernetes Service. Istio is a service mesh that provides its features without developers having to make any modifications to their applications. The Istio installation is tuned to perform optimally on IBM Cloud Kubernetes Service and is pre-configured to work out of the box with IBM Log Analysis with LogDNA and IBM Cloud Monitoring with Sysdig.

Red Hat announces CodeReady Workspaces 2.0

CodeReady Workspaces 2.0 helps developers build applications and services in an environment similar to production, i.e., all apps run on Red Hat OpenShift. A few of the new services and tools in CodeReady Workspaces 2.0 include:

- Air-gapped installs: these enable CodeReady Workspaces to be downloaded, scanned and moved into more secure environments when access to the public internet is limited or unavailable; it doesn't "call back" to public internet services.
- An updated user interface: this brings an improved desktop-like experience to developers.
- Support for VSCode extensions: this gives developers access to thousands of IDE extensions.
- Devfile: a sharable workspace configuration that specifies everything a developer needs to work, including repositories, runtimes, build tools and IDE plugins, and is stored and versioned with the code in Git (see the sketch below).
- Production consistent containers for developers: this clones the sources where needed and adds development tools (such as debuggers, language servers, unit test tools and build tools) as sidecar containers, so that the running application container mirrors production.

Brad Micklea, vice president of Developer Tools, Developer Programs, and Advocacy, Red Hat, said, "Red Hat is working to make developing in cloud native environments easier offering the features developers need without requiring deep container knowledge. Red Hat CodeReady Workspaces 2 is well-suited for security-sensitive environments and those organizations that work with consultants and offshore development teams." To know more about CodeReady Workspaces 2.0, read the press release on the Red Hat official blog.
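For readers unfamiliar with devfiles, here is a minimal, hypothetical sketch of what such a workspace definition can look like. The project URL, component names, plugin ID, and container image are illustrative assumptions rather than an exact CodeReady Workspaces example.

```yaml
# Hypothetical devfile: one Git project, an IDE plugin, a build container, and a build command.
apiVersion: 1.0.0
metadata:
  name: petclinic-dev
projects:
  - name: petclinic
    source:
      type: git
      location: https://github.com/example-org/petclinic.git   # placeholder repository
components:
  - type: chePlugin
    id: redhat/java/latest                                      # an IDE plugin, e.g. Java support
  - type: dockerimage
    alias: maven
    image: quay.io/eclipse/che-java11-maven:latest              # placeholder build container
    memoryLimit: 512Mi
    mountSources: true
commands:
  - name: build
    actions:
      - type: exec
        component: maven
        command: mvn clean install
        workdir: /projects/petclinic
```

Because the devfile lives in Git next to the code, every developer (or consultant) who opens the workspace gets the same repositories, runtimes, and tools.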
Helm 3.0 released

Built upon the success of Helm 2, the internal implementation of Helm 3 has changed considerably. The most apparent change in Helm 3.0 is the removal of Tiller, and a rich set of new features have been added as a result of the community's input and requirements. A few of these features include:

- an improved upgrade strategy: Helm 3 uses three-way strategic merge patches
- Secrets as the default storage driver
- Go import path changes
- validating chart values with JSONSchema

Some features have been deprecated or refactored in ways that make them incompatible with Helm 2, and some new experimental features have been introduced, including OCI support. The Helm Go SDK has also been refactored for general use, with the goal of sharing and reusing code open sourced with the broader Go community. To know more about Helm 3.0 in detail, read the official blog post.

AWS, Intuit and WeaveWorks Collaborate on Argo Flux

Recently, Weaveworks announced a partnership with Intuit to create Argo Flux, a major open-source project to drive GitOps application delivery for Kubernetes via an industry-wide community. Argo Flux combines the Argo CD project led by Intuit with the Flux CD project driven by Weaveworks, two well-known open source tools with strong community support. At KubeCon, AWS announced that it is integrating the GitOps tooling based on Argo Flux in Elastic Kubernetes Service and Flagger for AWS App Mesh. The collaboration resulted in a new project called GitOps Engine to simplify application deployment in Kubernetes. The GitOps Engine will be responsible for the following functionality:

- access to Git repositories
- Kubernetes resource cache
- manifest generation
- resource reconciliation
- sync planning

To know more about this collaboration in detail, read the GitOps Engine page on GitHub.

Grafana Labs announces general availability of Loki 1.0

Grafana Labs, an open source analytics and monitoring solution provider, announced that Loki version 1.0 is generally available for production use. Loki is an open source logging platform that provides developers with an easy-to-use, highly efficient and cost-effective approach to log aggregation. With Loki 1.0, users can instantaneously switch between metrics and logs, preserving context and reducing MTTR. By storing compressed, unstructured logs and only indexing metadata, Loki is cost-effective and simple to operate by design. It includes a set of components that can be composed into a fully featured logging stack. Grafana Cloud offers a high-performance, hosted Loki service that allows users to store all logs together in a single place with usage-based pricing. Read about Loki 1.0 on GitHub to know more in detail.

Rancher Extends Kubernetes to the Edge with the general availability of K3s

Rancher, creator of the vendor-agnostic and cloud-agnostic Kubernetes management platform, announced the general availability of K3s, a lightweight, certified Kubernetes distribution purpose-built for small-footprint workloads. Rancher partnered with ARM to build a highly optimized version of Kubernetes for the edge. It is packaged as a single binary of less than 40 MB with a small footprint, which reduces the dependencies and steps needed to install and run Kubernetes in resource-constrained environments such as IoT and edge devices. To know more about this announcement in detail, read the official press release.

There were many additional announcements, including Portworx launching PX-Autopilot, Huawei presenting its latest advances on KubeEdge, and Diamanti announcing its Spektra hybrid cloud solution, among many more. To know more about all the keynotes and tutorials at KubeCon + CloudNativeCon North America 2019, visit its GitHub page.
Chaos engineering comes to Kubernetes thanks to Gremlin
"Don't break your users and create a community culture", says Linus Torvalds, Creator of Linux, at KubeCon + CloudNativeCon + Open Source Summit China 2019
KubeCon + CloudNativeCon EU 2019 highlights: Microsoft's Service Mesh Interface, Enhancements to GKE, Virtual Kubelet 1.0, and much more!