
Tech News - Cloud & Networking

376 Articles

Twilio acquires SendGrid, a leading Email API Platform, to bring email services to its customers

Natasha Mathur
16 Oct 2018
3 min read
Twilio Inc., the cloud communications platform, announced yesterday that it is acquiring SendGrid, a leading email API platform. Twilio has focused mainly on providing voice calling, text messaging, video, web, and mobile chat services; SendGrid, on the other hand, has focused purely on email. With this acquisition, Twilio aims to bring tremendous value to the combined customer bases by offering services spanning voice, video, chat, and email.

“Email is a vital communications channel for companies around the world, and so it was important to us to include this capability in our platform. The two companies share the same vision, the same model, and the same values,” said Jeff Lawson, Twilio's co-founder and chief executive officer.

The two companies will also focus on making it easy for developers to build communications by delivering a single, best-in-class developer platform, helping them better manage all of their important communication channels, including voice, messaging, video, and email.

As per the terms of the deal, SendGrid will become a wholly-owned subsidiary of Twilio. Once the deal is closed, SendGrid's common stock will be converted into Twilio stock. “At closing, each outstanding share of SendGrid common stock will be converted into the right to receive 0.485 shares of Twilio Class A common stock, which represents a per share price for SendGrid common stock of $36.92 based on the closing price of Twilio Class A common stock on October 15, 2018. The exchange ratio represents a 14% premium over the average exchange ratio for the ten calendar days ending October 15, 2018,” reads Twilio's press release. The boards of directors of both Twilio and SendGrid have approved the transaction.

“Our two companies have always shared a common goal - to create powerful communications experiences for businesses by enabling developers to easily embed communications into the software they are building. Our mission is to help our customers deliver communications that drive engagement and growth, and this combination will allow us to accelerate that mission for our customers”, said Sameer Dholakia, SendGrid's CEO.

The acquisition is expected to close in the first half of 2019, subject to the satisfaction of customary closing conditions, including approval by the shareholders of both SendGrid and Twilio. “We believe this is a once-in-a-lifetime opportunity to bring together the two leading developer-focused communications platforms to create the unquestioned platform of choice for all companies looking to transform their customer engagement”, said Lawson.

For more information, check out the official Twilio press release.
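As a quick check on the quoted figures, the implied Twilio closing price follows directly from the stated exchange ratio and per-share value; a minimal sketch (variable names are illustrative):

```python
# Worked example: recovering the implied Twilio closing price from the
# press release figures (0.485 exchange ratio, $36.92 per SendGrid share).
exchange_ratio = 0.485        # Twilio Class A shares per SendGrid share
sendgrid_per_share = 36.92    # stated per-share value of SendGrid, in USD

implied_twilio_close = sendgrid_per_share / exchange_ratio
print(f"Implied Twilio closing price on 2018-10-15: ${implied_twilio_close:.2f}")
# prints roughly $76.12, consistent with the stated closing-price basis
```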
Twilio WhatsApp API: A great tool to reach new businesses
Make phone calls and send SMS messages from your website using Twilio
Building a two-way interactive chatbot with Twilio: A step-by-step guide


Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework

Sugandha Lahoti
16 Oct 2018
3 min read
Platform9 has announced a new release of Fission.io, the open source, Kubernetes-native serverless framework. Its new features enable developers and IT operations teams to improve the quality and reliability of serverless applications. Fission now comes with built-in live-reload and record-replay capabilities to simplify testing and accelerate feedback loops. Other new features include automated canary deployments to reduce the risk of failed releases, Prometheus integration for automated monitoring and alerts, and fine-grained cost and performance optimization capabilities. With this latest release, Fission also allows dev and ops teams to safely adopt serverless and benefit from the speed, cost savings, and scalability of this cloud-native development pattern, on public cloud or on-premises. Let's look at the features in detail.

Live-reload: Test as you type

With live-reload, Fission automatically deploys code as it is written into a live Kubernetes test cluster. It allows developers to toggle between their development environment and the runtime of the function, to rapidly iterate through their coding and testing cycles.

Record-replay: Simplify testing and debugging

Record-replay automatically saves events that trigger serverless functions and allows these events to be replayed on demand. Record-replay can also reproduce complex failures during testing or debugging, simplify regression testing, and help troubleshoot issues. Operations teams can use recording on a subset of live production traffic to help engineers reproduce issues or verify application updates.

Automated Canary Deployments: Reduce the risk of failed releases

Fission provides fully automated canary deployments that are easy to configure. With automated canary deployments, Fission automatically increments traffic proportions to the newer version of the function as long as it succeeds, and rolls back to the old version if the new version fails.

Prometheus Integration: Easy metrics collection and alerts

Integration with Prometheus enables automatic aggregation of function metrics, including the number of function calls, function execution time, successes, failures, and more. Users can also define custom alerts for key events, such as when a function fails or takes too long to execute. Prometheus metrics can also feed monitoring dashboards to visualize application metrics.

Kenneth Lam, Director of Technology at Snapfish and a Fission user, said, “Fission allows our company to benefit from the speed, cost savings and scalability of a cloud-native development pattern on any environment we choose, whether it be the public cloud or on-prem.”

You can learn more about Fission on its website, where a quick demo of all the new features is also available.
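For a concrete sense of what deploying a function to Fission looks like, here is a minimal sketch for Fission's standard Python environment, which serves functions behind Flask and invokes main() on each request; the function name, query parameter, and the CLI invocations in the comments are illustrative and may differ by version:

```python
# hello.py - a minimal Fission function sketch for Fission's Python
# environment, which runs functions behind Flask and calls main() on
# each request. The query parameter and names here are illustrative.
from flask import request

def main():
    name = request.args.get("name", "world")
    return f"Hello, {name}!\n"

# Indicative CLI usage (flag names may differ across Fission versions):
#   fission env create --name python --image fission/python-env
#   fission function create --name hello --env python --code hello.py
#   fission function test --name hello
```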
How to deploy Serverless Applications in Go using AWS Lambda [Tutorial]
Azure Functions 2.0 launches with better workload support for serverless
How Serverless computing is making AI development easier


Microsoft fixing and testing the Windows 10 October update after file deletion bug

Prasad Ramesh
16 Oct 2018
2 min read
Microsoft started re-releasing the Windows 10 October update last week. The update had been halted earlier due to a bug that was deleting user files and folders. After data deletion was reported by multiple users, Microsoft pulled the update, investigated all of the data loss reports, and fixed all known issues. It also conducted internal validation and is providing free customer support for affected users.

Microsoft is currently rolling out the update to a small group of testers known as the Windows Insider community. It will carefully study the diagnostic data and the feedback from these insiders before a general public release.

What caused the issue?

The file deletion happened if Known Folder Redirection (KFR) was enabled before the update. KFR is the process of redirecting known Windows folders, such as Desktop, Documents, Pictures, Screenshots, and Videos, from their default location to a new location. In the Windows 10 April 2018 Update, users with KFR had reported an extra, empty copy of the Known Folders on their computers, so code was introduced in the October 2018 Update to remove these empty folders. That change, combined with another change to the update construction sequence, resulted in the deletion of the original "old" folder locations and their content, leaving PCs with only the new "active" folder. The files were deleted because they remained in the original "old" folder location instead of being moved to the new, redirected location.

Further actions

The team apologized for any impact these issues had on users. In a blog post, John Cable, Director of Program Management for Windows Servicing and Delivery, stated: “We will continue to closely monitor the update and all related feedback and diagnostic data from our Windows Insider community with the utmost vigilance. Once we have confirmation that there is no further impact we will move towards an official re-release of the Windows 10 October 2018 Update.”

For more details, visit the official Microsoft blog.

Microsoft pulls Windows 10 October update after it deletes user files
Microsoft Your Phone: Mirror your Android phone apps on Windows
.NET Core 2.0 reaches end of life, no longer supported by Microsoft


Microsoft announces ‘Decentralized Identity’ in partnership with DIF and W3C Credentials Community Group

Bhagyashree R
12 Oct 2018
3 min read
Yesterday, Microsoft published a white paper on its Decentralized Identity (DID) solution. These identities are user-generated, self-owned, globally unique identifiers rooted in decentralized systems. Over the past 18 months, Microsoft has been working towards building a digital identity system using blockchain and other distributed ledger technologies. With these identities, it aims to enhance personal privacy, security, and control.

Microsoft has been actively collaborating with members of the Decentralized Identity Foundation (DIF), the W3C Credentials Community Group, and the wider identity community to identify and develop critical standards. Together they plan to establish a unified, interoperable ecosystem that developers and businesses can rely on to build more user-centric products, applications, and services.

Why is decentralized identity (DID) needed?

Nowadays, people use digital identities at work, at home, and across every app, service, and device. Access to these identities, such as email addresses and social network IDs, can be revoked at any time by the email provider, social network provider, or other external parties. Users also grant permissions to numerous apps and devices, which demands a high degree of vigilance in tracking who has access to what information.

This standards-based decentralized identity system empowers users and organizations to have greater control over their data. It addresses the problem of users granting broad consent to countless apps and services by providing a secure, encrypted digital hub where they can store their identity data and easily control access to it.

What does it mean for users, developers, and organizations?

Benefits for users:
- It enables all users to own and control their identity
- It provides secure experiences that incorporate privacy by design
- It enables user-centric apps and services

Benefits for developers:
- It allows developers to provide users personalized experiences while respecting their privacy
- It enables developers to participate in a new kind of marketplace, where creators and consumers exchange directly

Benefits for organizations:
- Organizations can deeply engage with users while minimizing privacy and security risks
- It provides organizations a unified data protocol to transact with customers, partners, and suppliers
- It improves transparency and auditability of business operations

To know more about decentralized identity, read the white paper published by Microsoft.
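For a concrete sense of what such an identifier looks like, the W3C drafts define a simple three-part syntax, did:<method>:<method-specific-id>. A minimal parsing sketch follows; the example identifier is the illustrative one used in the W3C documents, not a real registered DID:

```python
# Minimal sketch: splitting a W3C-style decentralized identifier (DID)
# into its scheme, method, and method-specific identifier.
# The example identifier below is illustrative, not a real registered DID.

def parse_did(did: str) -> dict:
    scheme, method, method_id = did.split(":", 2)
    if scheme != "did":
        raise ValueError(f"not a DID: {did}")
    return {"scheme": scheme, "method": method, "id": method_id}

print(parse_did("did:example:123456789abcdefghi"))
# -> {'scheme': 'did', 'method': 'example', 'id': '123456789abcdefghi'}
```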
Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members
Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure’s Intelligent Cloud
Microsoft announces Project xCloud, a new Xbox game streaming service, on the heels of Google’s Stream news last week


Announcing the early release of Travis CI on Windows

Savia Lobo
12 Oct 2018
2 min read
Yesterday, Travis CI announced that its service is now available on Windows. Travis CI is a distributed continuous integration service used to test and deploy projects hosted on GitHub. This is an early release; a stable release is planned for Q2 next year. With this update, teams can run their tests on Linux, Mac, and Windows, all in the same build.

At present, users can use Windows with open source and private projects on either travis-ci.org or travis-ci.com; Travis CI plans to bring this to its Enterprise offering soon. The company says, “this is our very first full approach to Windows-support, so the tooling is light.”

Laurie Voss, Chief Operating Officer of npm, Inc, says, “Adding Windows support to Travis CI will provide a more stable development experience for a huge segment of the JavaScript community—32% of projects in the npm Registry use Travis CI. We look forward to continuing to work with Travis CI to reduce developer friction and empower over 10 million developers worldwide to build amazing things.”

Travis Windows CI environment

The Windows build environment for Travis CI launches with support for Node.js, Rust, and Bash languages. Travis Windows CI runs a Git Bash shell to maintain consistency with Travis's other Bash-based environments, while still allowing users to shell out to PowerShell as needed. In addition, Docker is available for Windows builds. Travis CI uses Chocolatey as a package manager, and Visual Studio 2017 Build Tools come pre-installed. The Windows build environment is currently based on Windows Server 1803 for containers, running Windows Server 2016 as the OS version.

Travis CI mentions in its blog post that it hosts its Windows virtual machines on Google Compute Engine and has seen some variation in boot times as a result; it plans to improve this alongside its other infrastructure-related work. The company expects to release Windows build environments for Enterprise before the stable release.

To know more about Travis CI on Windows in detail, visit the official Travis CI blog.

Creating a Continuous Integration commit pipeline using Docker [Tutorial]
How to master Continuous Integration: Tools and Strategies
Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner


Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members

Richard Gall
10 Oct 2018
3 min read
A decade ago, Microsoft typified the closed and aggressively protective technology company. Just a few years ago, the company was profiting heavily from the success of Android, so extensive was its patent portfolio; in 2013, for example, it's thought that Microsoft received a royalty payment from Samsung exceeding $1 billion. However, things are different now - by joining the Open Invention Network, as was revealed today, Microsoft is taking another big step towards embracing open source software and open source culture. With more than 2,000 OIN members, including Google, IBM, Sony, and Red Hat, Microsoft certainly isn't blazing a new trail; it's more a case of the company finally joining the club.

What is the Open Invention Network?

The Open Invention Network describes itself as "a shared defensive patent pool with the mission to protect Linux." In essence, it's an organization that was set up in 2005 to protect the open source world from growing patent aggression - a culture that, at the time, Microsoft would have been guilty of driving. Members of OIN have access to the patents of other members, royalty-free. This is what a 'patent non-aggression community' (a phrase the OIN likes to use) looks like in practice. Prior to Microsoft joining, the OIN held more than 1,300 patents and licenses; remarkably, Microsoft will add another 60,000 to that number. That should give you an indication of how important patents were to Microsoft over the last decade or so.

Why has Microsoft joined the Open Invention Network?

The news that Microsoft is joining the OIN is really just another step in the transformation of the company's culture and mission. From Steve Ballmer calling open source a 'cancer' back in 2001, to the acquisition of GitHub this year, the company seems to have done a complete u-turn on open source software. To further emphasize this trend, you only have to look back a couple of days, to when Microsoft open sourced its machine learning framework Infer.NET.

“Microsoft sees open source as a key innovation engine, and for the past several years we have increased our involvement in, and contributions to, the open source community,” said Microsoft's Corporate VP Erich Andersen in the OIN's press release. "The protection OIN offers the open source community helps increase global contributions to and adoption of open source technologies. We are honored to stand with OIN as an active participant in its program to protect against patent aggression in core Linux and other important OSS technologies."

Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12

Melisha Dsouza
10 Oct 2018
3 min read
Kubernetes v1.12 now offers alpha support for volume snapshotting. This allows users to create or delete volume snapshots, and natively create new volumes from a snapshot, using the Kubernetes API. A snapshot represents a copy of a volume at a particular instant in time. It can be used to provision a new volume pre-populated with the snapshot data, or to restore an existing volume to a previous state.

Importance of adding snapshots to Kubernetes

The main goal of the Kubernetes team is to create an abstraction layer between distributed applications and the underlying clusters, ensuring that application deployment requires no "cluster specific" knowledge. Snapshot operations are critical functionality for many stateful workloads; for instance, a database administrator may want to snapshot a database volume before starting a database operation. By providing a standard way to trigger snapshot operations in the Kubernetes API, users don't have to manually execute storage-system-specific operations around the Kubernetes API. They can instead incorporate snapshot operations into their tooling and policy in a cluster-agnostic way, assured that it will work against arbitrary Kubernetes clusters regardless of the underlying storage. These snapshot primitives also help build advanced, enterprise-grade storage administration features for Kubernetes, including data protection, data replication, and data migration.

3 new API objects introduced by Kubernetes volume snapshots

#1 VolumeSnapshot: Created and deleted by users to request the creation or deletion of a snapshot for a specified volume. It gives the user information about the snapshot operation, such as the timestamp at which the snapshot was taken and whether the snapshot is ready to use.

#2 VolumeSnapshotContent: Created by the CSI volume driver once a snapshot has been successfully created. It contains information about the snapshot, including its ID, and represents a provisioned resource on the cluster. Once a snapshot is created, the VolumeSnapshotContent object binds, with a one-to-one mapping, to the VolumeSnapshot for which it was created.

#3 VolumeSnapshotClass: Created by cluster administrators to describe how snapshots should be created, including the driver information, how to access the snapshot, and so on.

These snapshot objects are defined as CustomResourceDefinitions (CRDs). End users need to verify that a CSI driver supporting snapshots is deployed on their Kubernetes cluster; such drivers automatically install the required CRDs.

Limitations of the alpha implementation of snapshots

- It does not support reverting an existing volume to the earlier state represented by a snapshot.
- It does not support "in-place restore" of an existing PersistentVolumeClaim from a snapshot: users can provision a new volume from a snapshot, but updating an existing PVC to a new volume and reverting it back to an earlier state is not allowed.
- No snapshot consistency guarantees are given beyond those provided by the storage system.

An example of creating new snapshots and importing existing snapshots is explained on the Kubernetes blog. Head over to the team's Concepts page or GitHub for more official documentation of the snapshot feature.
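Because the snapshot objects are ordinary CRDs, they can be created through any Kubernetes client once a snapshot-capable CSI driver has installed the definitions. Below is a minimal sketch using the official Python client; the v1alpha1 spec shape, the snapshot class, and the PVC name are assumptions for illustration:

```python
# Minimal sketch: requesting a volume snapshot via the alpha CRD API
# using the official Kubernetes Python client. The spec fields follow
# the v1alpha1 shape; the snapshot class and PVC names are illustrative
# assumptions.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod
api = client.CustomObjectsApi()

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1alpha1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "db-snapshot-1"},
    "spec": {
        "snapshotClassName": "csi-snapshot-class",  # assumed class name
        "source": {"kind": "PersistentVolumeClaim", "name": "db-pvc"},
    },
}

api.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1alpha1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```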
‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl
Limited Availability of DigitalOcean Kubernetes announced!
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits


Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure’s Intelligent Cloud

Natasha Mathur
09 Oct 2018
2 min read
Microsoft announced yesterday that it is collaborating with Grab, the leading on-demand transportation, mobile payments, and online-to-offline services platform in Southeast Asia, as part of a strategic cloud partnership. The partnership aims to transform the delivery of digital services and mobility by using Microsoft's expertise in state-of-the-art machine learning and other artificial intelligence (AI) capabilities.

“Our partnership with Grab opens up new opportunities to innovate in both a rapidly evolving industry and growth region. We’re excited to team up to transform the customer experience as well as enhance the delivery of digital services for the millions of users who rely on Grab for safe and affordable transport, food and package delivery, mobile payments, and financial services”, mentioned Peggy Johnson, executive vice president at Microsoft.

Grab is a Singapore-based technology company delivering ride-hailing, ride-sharing, and logistics services via its app in Singapore and neighboring Southeast Asian nations. It currently operates in 235 cities across eight Southeast Asian countries, and its digital wallet, GrabPay, is the top player in Southeast Asia.

The partnership is expected to help both companies explore a wide range of innovative deep-technology projects, such as mobile facial recognition with built-in AI for drivers and customers, and using Microsoft Azure's fraud detection services to prevent fraudulent transactions on Grab's platform. These projects aim to transform the experience for Grab's users, driver-partners, merchants, and agents.

Grab will adopt Microsoft Azure as its preferred cloud platform, and Microsoft is set to make a strategic investment in Grab, the magnitude of which is currently undisclosed.

“As a global technology leader, Microsoft’s investment into Grab highlights our position as the leading homegrown technology player in the region. We look forward to collaborating with Microsoft in the pursuit of enhancing on-demand transportation and seamless online-to-offline experiences for users”, said Ming Maa, president of Grab.

There are a few other areas of collaboration between Grab and Microsoft, including Microsoft Outlook integration, Microsoft Kaizala, in-car solutions, and integration of Microsoft Rewards Gift Cards. For more information, check out the official Microsoft blog.

Microsoft open sources Infer.NET, it’s popular model-based machine learning framework
Microsoft announces Project xCloud, a new Xbox game streaming service, on the heels of Google’s Stream news last week
Microsoft’s new neural text-to-speech service lets machines speak like people


libp2p: the modular P2P network stack by IPFS for better decentralized computing

Melisha Dsouza
09 Oct 2018
4 min read
libp2p is a P2P network stack introduced by the IPFS community. It is capable of discovering other peers and networks without resorting to centralized registries, which enables apps to work offline.

In July 2018, David Dias explained that the design of a 'location-addressed web' is the reason for its fragility: small errors in its backbone can shut down all running applications, and firewalls, routing issues, roaming issues, and network unreliability interfere with users having a smooth experience on the web. Thus came a need to re-imagine the network stack.

To solve these problems, the InterPlanetary File System (IPFS) came into being. It is a decentralized web protocol based on content addressing, digital signatures, and peer-to-peer distribution. Today, IPFS is used to build completely distributed (and offline-capable!) web apps. IPFS saves and distributes valuable datasets and moves billions of files. IPFS spawned several other projects, and libp2p is one of them. It enables users to run network applications free from runtime and address services while being independent of their location.

libp2p tames the complexity of dealing with numerous protocols in a decentralized environment. It effectively helps users connect with multiple peers using only a single protocol, paving the way for the next generation of decentralized systems.

libp2p features

#1 Transport module: libp2p enables application developers to pick the modules needed to run their application; these modules vary depending on the runtime they execute in. A libp2p node uses one or more transports to dial and listen for connections. These transport modules offer a clean interface for dialing and listening, defined by the interface-transport specification.

#2 No prior assigning of ports: Before libp2p came into existence, users would assign a listener to a port and then assign ports to special protocols, so that other hosts would know in advance which port to dial. With libp2p, users do not have to assign ports beforehand.

#3 Encrypted communication: libp2p supports a set of modules that encrypt every communication established.

#4 Peer discovery and routing: A peer discovery module helps libp2p find peers to connect to. Peer routing finds other peers in the network by intentionally issuing queries, iterative or recursive, until a peer is found. A content routing mechanism is used to find where content lives in the network.

Using libp2p in IPFS

libp2p has now been refactored into its own project so that other users can take advantage of it and be part of its ecosystem as well. It is what provides IPFS and other projects with P2P connectivity, support for multiple platforms and browsers, and many other advantages. Users can utilize the libp2p module to create their own libp2p bundle, customizing it with the features and defaults their needs require. For example, the team has built a browser-ready version of libp2p that acts as the network layer of IPFS and leverages browser transports; you can head over to GitHub to check this example.

Keep Networks has also demonstrated the use of libp2p. Since participants need to know how to connect to each other, the team has come up with a simple example of peer-to-peer discovery, using a few pieces of the libp2p JS library to create nodes that discover and communicate with each other. You can head over to their blog to check out how the example works.

Another emerging use for libp2p is in blockchain applications. IPFS is used by blockchains and blockchain applications, and its subprotocols (libp2p, multihash, IPLD) can be extremely useful for blockchain standardization. A good example of this is getting the Ethereum blockchain in the browser, or in a Node.js process, using libp2p and running it through ethereum-vm. That said, there are multiple challenges that developers will encounter while using libp2p in their blockchain projects; Chris Pacia, the backend developer for OB1, explains how developers can tackle these challenges in his talk at QCon.

With all the buzz around blockchains and decentralized computing these days, libp2p is making its rounds on the internet. For more insights on libp2p, you can visit its official site.
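One concrete piece of the stack worth seeing is how libp2p addresses peers: it uses self-describing "multiaddrs" rather than bare host:port pairs, which is part of how it stays transport-agnostic. A simplified sketch of reading one follows; real multiaddr implementations handle a binary encoding and a much larger protocol table, and the peer ID below is made up:

```python
# Simplified sketch: decoding a textual libp2p multiaddr such as
# /ip4/127.0.0.1/tcp/4001/p2p/<peer-id> into (protocol, value) pairs.
# Real multiaddr implementations also handle a binary form and a much
# larger protocol table; this only illustrates the idea.

def parse_multiaddr(addr: str) -> list:
    parts = addr.strip("/").split("/")
    # Textual multiaddrs alternate protocol names and values.
    return list(zip(parts[0::2], parts[1::2]))

addr = "/ip4/127.0.0.1/tcp/4001/p2p/QmExamplePeerId"  # peer ID is made up
for proto, value in parse_multiaddr(addr):
    print(f"{proto:>4} -> {value}")
# ip4 -> 127.0.0.1, tcp -> 4001, p2p -> QmExamplePeerId
```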
Cloudflare’s decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Tim Berners-Lee plans to decentralize the web with ‘Solid’, an open-source project for “personal empowerment through data”
Introducing TLS 1.3, the first major overhaul of the TLS protocol with improved security and speed


Google opts out of Pentagon’s $10 billion JEDI cloud computing contract, as it doesn’t align with its ethical use of AI principles

Bhagyashree R
09 Oct 2018
3 min read
Yesterday, Google announced that it will not be competing for the Pentagon's cloud-computing contract, supposedly worth $10 billion. It opted out of bidding for the project, named Joint Enterprise Defense Infrastructure (JEDI), saying the project may conflict with its principles for the ethical use of AI.

The JEDI project involves moving massive amounts of Pentagon internal data to a commercially operated secure cloud system. The bidding for this contract began two months ago and closes this week (October 12). CNBC reported in July that Amazon is considered the number one choice for the contract because it already provides services for the cloud system used by U.S. intelligence agencies. Cloud providers such as IBM, Microsoft, and Oracle are also top contenders, as they have worked with government agencies for decades, which could help their chances of winning the decade-long JEDI contract.

Why has Google dropped out of the bidding?

One of Google's spokespersons told TechCrunch that the main reason for opting out is that the project doesn't align with the company's AI principles:

“While we are working to support the US government with our cloud in many areas, we are not bidding on the JEDI contract because first, we couldn’t be assured that it would align with our AI Principles and second, we determined that there were portions of the contract that were out of scope with our current government certifications.”

He further added: “Had the JEDI contract been open to multiple vendors, we would have submitted a compelling solution for portions of it. Google Cloud believes that a multi-cloud approach is in the best interest of government agencies, because it allows them to choose the right cloud for the right workload. At a time when new technology is constantly becoming available, customers should have the ability to take advantage of that innovation. We will continue to pursue strategic work to help state, local and federal customers modernize their infrastructure and meet their mission critical requirements.”

This decision also follows thousands of Google employees protesting against the company's involvement in another US government project, Project Maven. Earlier this year, some Google employees reportedly quit over the company's work on that project, believing that the U.S. military could weaponize AI and apply the technology towards refining drone strikes and other kinds of lethal attacks. An internal petition asking Google CEO Sundar Pichai to cancel Project Maven was signed by over 3,000 employees. After the protest, Google said it would not renew the contract or pursue similar military contracts, and went on to formulate its principles for the ethical use of AI.

You can read the full story on Bloomberg.

Bloomberg says Google, Mastercard covertly track customers’ offline retail habits via a secret million dollar ad deal
Ex-googler who quit Google on moral grounds writes to Senate about company’s “Unethical” China censorship plan
Google slams Trump’s accusations, asserts its search engine algorithms do not favor any political ideology

bpftrace, a DTrace like tool for Linux now open source

Prasad Ramesh
09 Oct 2018
2 min read
bpftrace is a DTrace-like tool for troubleshooting kernel problems. It was created about a year ago by Alastair Robertson, and the GitHub repository was recently made public. It has enough features to invite comparisons to a "DTrace 2.0".

bpftrace

bpftrace is an open source, high-level tracing tool for analyzing systems, built for the modern extended Berkeley Packet Filter (eBPF), which is part of the Linux kernel and popular in systems engineering. Robertson recently developed struct support and applied it to tracepoints and kprobes.

bpftrace uses existing Linux kernel facilities (eBPF, kprobes, uprobes, tracepoints, perf_events) as well as the bcc libraries. Internally, bpftrace uses a lex/yacc parser to convert programs into an abstract syntax tree (AST), lowers them to LLVM intermediate representation, and finally compiles them to BPF.

bpftrace and DTrace

bpftrace is a higher-level front end for custom ad-hoc tracing and can play a similar role to DTrace. There are some things eBPF can do that DTrace can't, one of them being the ability to save and retrieve stack traces as variables. Brendan Gregg, one of the contributors to bpftrace, states in his blog: “We've been adding bpftrace features as we need them, not just because DTrace had them. I can think of over a dozen things that DTrace can do that bpftrace currently cannot, including custom aggregation printing, shell arguments, translators, sizeof(), speculative tracing, and forced panics.”

A one-liner tutorial and a reference guide are available on GitHub for learning bpftrace. For more details, head to the GitHub repository and Brendan Gregg's blog.
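bpftrace programs are written in their own awk-like DSL and typically run as one-liners. As an illustration, the sketch below shells out to bpftrace from Python to trace openat() syscalls system-wide; the one-liner is adapted from the project's tutorial, and running it assumes bpftrace is installed and you have root privileges:

```python
# Illustrative sketch: invoking a canonical bpftrace one-liner from Python.
# Assumes bpftrace is installed and run as root; the program prints which
# processes open which files, system-wide, until interrupted.
import subprocess

ONE_LINER = (
    'tracepoint:syscalls:sys_enter_openat '
    '{ printf("%s %s\\n", comm, str(args->filename)); }'
)

subprocess.run(["bpftrace", "-e", ONE_LINER], check=True)
```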
NVTOP: An htop like monitoring tool for NVIDIA GPUs on Linux
LLVM 7.0.0 released with improved optimization and new tools for monitoring
Xamarin Test Cloud for API Monitoring [Tutorial]


Microsoft pulls Windows 10 October update after it deletes user files

Prasad Ramesh
08 Oct 2018
2 min read
The Windows 10 October update became available for download around the time of the Surface event last week. While the update brought features like the Your Phone app and Windows Timeline, users also experienced massive file deletion on their systems. Microsoft had already excluded the update from some devices due to compatibility issues with newer processors, and the file deletion issue was reported by users in the early stages, before the mass rollout. Users could manually download and install the Windows 10 October 2018 Update from October 2; the mass rollout was scheduled for October 9, Patch Tuesday.

Microsoft recommends contacting its customer support if the update has deleted your files. The support site advises: “If you have manually downloaded the Windows 10 October 2018 Update installation media, please don’t install it and wait until new media is available.”

As of now, it is not known how many users faced this issue. Windows updates are not known to be smooth, often causing issues and errors, but it is unusual that an issue of this magnitude was not detected in Microsoft's testing of the update. Earlier this year, Microsoft had delayed the Windows 10 April 2018 Update because of Blue Screen of Death issues, but those were rectified before the update reached regular users. Fortunately, this update wasn't mass rolled out either, and the issue was detected at an early stage.

This serves as a reminder to create a backup of important files before an OS update. When Microsoft resumes the mass rollout of this update, the issue will be fixed, but it is safe to back up your data in any case. The official support page states: “We have paused the rollout of the Windows 10 October 2018 Update (version 1809) for all users as we investigate isolated reports of users missing some files after updating.” There are comments on the support page where users describe the problem.

For more details, visit the Microsoft support website.

Microsoft Your Phone: Mirror your Android phone apps on Windows
What’s new in the Windows 10 SDK Preview Build 17704
Microsoft Cloud Services get GDPR Enhancements


GitHub’s new integration for Jira Software Cloud aims to provide teams a seamless project management experience

Bhagyashree R
08 Oct 2018
2 min read
Last week, GitHub announced that it has built a new integration that enables software teams to connect their code on GitHub.com to their projects on Jira Software Cloud. The integration updates Jira with data from GitHub, providing better visibility into the current status of a project.

What are the advantages of the new GitHub and Jira integration?

No need to constantly switch between GitHub and Jira: With your GitHub account linked to Jira, your team can see the branches, commit messages, and pull requests in the context of the Jira tickets they're working on. The integration provides a deeper connection by allowing you to view references to Jira in GitHub issues and pull requests.

Improved capabilities: The new GitHub-managed app provides improved security, along with the following capabilities:

- Smart commits: Use smart commits to update the status, leave a comment, or log time without having to leave your command line or GitHub. For example, a commit message like "PROJ-42 #comment fix login redirect #time 1h" both comments on and logs time against the matching Jira issue (the issue key here is illustrative).
- View from within a Jira ticket: View associated pull requests, commits, and branches from within a Jira ticket.
- Search Jira issues: Search for Jira issues based on related GitHub information, such as open pull requests.
- Check the status of development work: The status of development work can be seen from within Jira projects.
- Keep Jira issues up to date: Automatically keep your Jira issues up to date while working in GitHub.

Install the Jira Software and GitHub app to connect your GitHub repositories to your Jira instance. The previous version of the Jira integration will be deprecated in favor of this new GitHub-maintained integration; once the migration is complete, the legacy integration (DVCS connector) is disabled automatically.

Read the full announcement on the GitHub blog.

4 myths about Git and GitHub you should know about
GitHub addresses technical debt, now runs on Rails 5.2.1
GitLab raises $100 million, Alphabet backs it to surpass Microsoft’s GitHub

‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl

Melisha Dsouza
08 Oct 2018
3 min read
On October 5, the Amazon team announced the availability of the AWS Service Operator, an open source project in an alpha state that allows users to manage their AWS resources directly from Kubernetes using the standard Kubernetes CLI, kubectl.

What is an Operator?

Kubernetes is built on top of a 'controller pattern', which allows applications and tools to listen to a central state manager (etcd) and take action when something happens. The controller pattern lets users create decoupled experiences without having to worry about how other components are integrated. An operator is a purpose-built application that manages a specific type of component using this same pattern. You can check the entire list of operators at Awesome Operators.

All about the AWS Service Operator

Previously, users who needed to integrate Amazon DynamoDB with an application running in Kubernetes, or deploy an S3 bucket for their application to use, would reach for tools such as AWS CloudFormation or HashiCorp Terraform, and then create a way to deploy those resources. This requires the user to act as an operator, managing and maintaining the entire service lifecycle.

Users can now skip those steps and rely on Kubernetes' built-in control loop, which stores a desired state within the API server for both the Kubernetes components and the AWS services needed. The AWS Service Operator models AWS services as Custom Resource Definitions (CRDs) in Kubernetes and applies those definitions to a user's cluster. A developer can model their entire application architecture, from the container to ingress to AWS services, backing it all from a single YAML manifest. This reduces the time it takes to create new applications and helps keep applications in the desired state.

The AWS Service Operator exposes a way to manage DynamoDB tables, S3 buckets, Amazon Elastic Container Registry (Amazon ECR) repositories, SNS topics, SQS queues, and SNS subscriptions, with many more integrations coming soon. Early reactions on Hacker News suggest users are excited about this update.

You can learn more about the announcement on the AWS Service Operator project page on GitHub. Head over to the official blog to explore how to use the AWS Service Operator to create a DynamoDB table and deploy an application that uses the table after it has been created.
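Since the operator models AWS services as CRDs, asking for a DynamoDB table is just a matter of creating a Kubernetes object. The sketch below uses the official Kubernetes Python client; the apiVersion, kind, plural, and spec fields are assumptions based on the project's published alpha examples, so check the GitHub repo for the authoritative schema:

```python
# Sketch: asking the AWS Service Operator for a DynamoDB table by creating
# a custom resource. The apiVersion, plural, and spec fields below are
# assumptions based on the project's alpha examples; consult the GitHub
# repo for the authoritative schema.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

table = {
    "apiVersion": "service-operator.aws/v1alpha1",  # assumed group/version
    "kind": "DynamoDB",
    "metadata": {"name": "orders-table"},           # illustrative name
    "spec": {
        "hashAttribute": {"name": "order_id", "type": "S"},
        "readCapacityUnits": 5,
        "writeCapacityUnits": 5,
    },
}

api.create_namespaced_custom_object(
    group="service-operator.aws",
    version="v1alpha1",
    namespace="default",
    plural="dynamodbs",   # assumed plural
    body=table,
)
```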
Limited Availability of DigitalOcean Kubernetes announced!
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS


JFrog, a DevOps based artifact management platform, bags a $165 million Series D funding

Sugandha Lahoti
05 Oct 2018
2 min read
JFrog, the DevOps-based artifact management platform, announced a $165 million Series D funding round yesterday, led by Insight Venture Partners. The funding is expected to drive JFrog product innovation, support rapid expansion into new markets, and accelerate both organic and inorganic growth. Other new investors include Spark Capital and Geodesic Capital, alongside existing investors Battery Ventures, Sapphire Ventures, Scale Venture Partners, Dell Technologies Capital and Vintage Investment Partners. Additional JFrog investors include Gemini VC Israel, Qumra Capital and VMware.

JFrog transforms the way software is updated by offering an end-to-end, universal, highly available software release platform, used for storing, securing, monitoring and distributing binaries for all technologies, including Docker, Go, Helm, Maven, npm, NuGet, PyPI, and more. According to the company, more than 5 million developers currently use JFrog Artifactory as their system of record when they build and release software. It also supports multiple deployment options, with its products available in a hybrid model, on-premise, and across the major cloud platforms: Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

The announcement comes on the heels of Microsoft's $7.5 billion purchase of coding-collaboration site GitHub earlier this year. Since its Series C funding round in 2016, JFrog has seen more than 500% sales growth and expanded its reach to over 4,500 customers, including more than 70% of the Fortune 100. It continues to add 100 new commercial logos per month and supports the world's open source communities with its Bintray binary hub; Bintray powers 700K community projects distributing over 5.5M unique software releases that generate over 3 billion downloads a month.

Read more about the announcement in JFrog's official press release.

OmniSci, formerly MapD, gets $55 million in series C funding
Microsoft’s GitHub acquisition is good for the open source community
Chaos engineering platform Gremlin announces $18 million series B funding and new feature for “full-stack resiliency”