Tech News - Cloud & Networking

Introducing numpywren, a system for linear algebra built on a serverless architecture

Sugandha Lahoti
29 Oct 2018
3 min read
Last week, researchers from UC Berkeley and UW Madison published a research paper presenting a system for linear algebra built on a serverless framework. numpywren is a scientific computing framework built on top of pywren, a stateless computation framework that leverages AWS Lambda to execute Python functions remotely in parallel.

What is numpywren?

numpywren is a distributed system for executing large-scale dense linear algebra programs via stateless function executions. It runs computations as stateless functions while storing intermediate state in a distributed object store. Instead of dealing with individual machines, hostnames, and processor grids, numpywren works with the abstractions of "cores" and "memory". It currently uses Amazon EC2 and Lambda for computation and Amazon S3 as a distributed memory abstraction. numpywren can scale to run Cholesky decomposition (a linear algebra algorithm) on a 1M x 1M matrix within 36% of the completion time of ScaLAPACK running on dedicated instances, and can be tuned to use 33% fewer CPU-hours. The researchers also introduced LAmbdaPACK, a domain-specific language designed to implement highly parallel linear algebra algorithms in a serverless setting.

Why serverless for numpywren?

Per their research, the serverless computing model can be used for computationally intensive programs while providing ease of use and seamless fault tolerance. The elasticity of serverless computing also allows numpywren to dynamically adapt to the inherent parallelism of common linear algebra algorithms.

What's next for numpywren?

One of the main drawbacks of the serverless model is the high communication cost caused by the lack of locality and of efficient broadcast primitives. The researchers want to incorporate coarser serverless executions (e.g., 8 cores instead of 1) that process larger portions of the input data, and to develop services that provide efficient collective communication primitives such as broadcast. They also want modern convex optimization solvers such as CVXOPT to use numpywren to scale to much larger problems, and they are working on automatically translating numpy code directly into LAmbdaPACK instructions that can be executed in parallel.

As data centers continue their push towards disaggregation, the researchers point out that platforms like numpywren open up a fruitful area of research. For further explanation, go through the research paper.

Platform9 announces a new release of Fission.io, the open source, Kubernetes-native Serverless framework
Azure Functions 2.0 launches with better workload support for serverless
How Serverless computing is making AI development easier
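To make the programming model concrete, here is a minimal sketch of the pywren-style serverless map that numpywren builds on, using pywren's documented executor API. The worker function is an illustrative stand-in, not numpywren's actual kernels:

    # A minimal sketch of the pywren programming model that numpywren builds
    # on: map a plain Python function over inputs, with each invocation
    # running as a stateless AWS Lambda execution. The worker is a stand-in;
    # numpywren's real workers move matrix tiles through S3.
    import pywren

    def block_work(block_id):
        # In numpywren this would compute one tile of a dense linear algebra
        # operation (e.g., one update step of a blocked Cholesky).
        return block_id * block_id

    pwex = pywren.default_executor()           # backed by AWS Lambda
    futures = pwex.map(block_work, range(64))  # 64 invocations in parallel
    print(sum(f.result() for f in futures))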


Btrfs makes multiple performance improvements to be shipped in the next Linux Kernel release

Sugandha Lahoti
25 Oct 2018
2 min read
Multiple performance improvements have been made to the Btrfs file-system in preparation for Linux 4.20, and they will ship in the next Linux kernel release. Btrfs is a modern copy-on-write filesystem for Linux. It offers many features not readily available in other in-tree Linux filesystems, such as fault tolerance, repair, and easy administration. However, its performance has been degrading for some time, partly because copy-on-write by default hurts some workloads. With the work queued for Linux 4.20, Btrfs should see multiple speed-ups: more files/sec in fsmark, better performance on multi-threaded workloads (filebench, dbench), fewer context switches, and overall better memory allocation characteristics across multiple benchmarks. Apart from general performance, there is also an improvement for the qgroups + balance workload.

Performance improvements

Btrfs has deprecated the blocking mode of path locks; only the spinning mode is now used. The blocking mode was eliminated because it resulted in unnecessary wakeups and updates to the path locks. Improvements for the qgroups + balance workload include speeding up balancing with qgroups and skipping quota accounting on unchanged subtrees; the overall gain is about 30+% in runtime. A small improvement has been made to the rb-tree code to avoid pointer chasing: an rb-tree with a cached first node is now used for several structures. Btrfs also has better error reporting after processing block groups and whole devices, and it continues trimming block groups after an error is encountered. It also has less interaction with transaction commit, which improves latency on slower storage (e.g., image files over NFS).

Cleanups in Btrfs

- Unused struct members and variables are removed
- Function return type cleanups are performed
- Delayed refs code is refactored
- Protection is added against a deadlock that could be caused by a crafted image that tries to allocate from a tree that is already locked

These are just a select few updates. Read the full list of changes in a post by David Sterba.

Linux 4.19 kernel releases with open arms and AIO-based polling interface; Linus back to managing the Linux kernel
KUnit: A new unit testing framework for Linux Kernel
bpftrace, a DTrace like tool for Linux now open source


Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more

Melisha Dsouza
24 Oct 2018
3 min read
Earlier this month, the team at Google Cloud Storage announced new capabilities for improving the reliability and performance of users' data. They have now rolled out storage security updates that cater to data privacy and compliance with financial services regulations. With these new security upgrades, including the general availability of Cloud Storage Bucket Lock, UI changes for privacy management, Cloud KMS integration with Cloud Storage, and much more, users will be able to build reliable applications as well as ensure the safety of their data.

Storage security features on Google Cloud Storage:

#1 General availability of Cloud Storage Bucket Lock

Cloud Storage Bucket Lock is now generally available. This feature is especially useful for users that need Write Once Read Many (WORM) storage, as it prevents deletion or modification of content for a specified period of time. To help organizations meet compliance, legal, and regulatory requirements for retaining data for specific lengths of time, Bucket Lock provides retention lock capabilities as well as event holds for content. Bucket Lock works with all tiers of Cloud Storage, so both primary and archive data can use the same storage setup. Users can automatically move locked data into colder storage tiers and delete data once the retention period expires. Bucket Lock has been used in a diverse range of applications, from financial records compliance and healthcare records retention to media content archives and much more. Head over to the Bucket Lock documentation to learn more about this feature.

#2 New UI features for secure sharing of data

The new UI features in the Cloud Storage console enable users to securely share their data and gain insight into which data, buckets, and objects are publicly visible across their Cloud Storage environment. The public sharing option in the UI has been replaced with an Identity and Access Management (IAM) panel. This prevents users from publicly sharing their objects through an accidental click, lets administrators clearly understand which content is publicly available, and shows users how their data is being shared publicly.

#3 Use Cloud KMS keys with Cloud Storage data

Cloud Key Management Service (KMS) provides users with sophisticated encryption key management capabilities. Users can manage and control encryption keys for their Cloud Storage datasets through the Cloud Storage-KMS integration. This integration helps users manage active keys, authorize users or applications to use certain keys, monitor key use, and more. Cloud Storage users can also perform key rotation, revocation, and deletion. Head over to the Google Cloud Storage blog to learn more about the Cloud KMS integration.

#4 Access Transparency for Cloud Storage and Persistent Disk

This new transparency mechanism shows users who, when, where, and why Google support or engineering has accessed their Cloud Storage and Persistent Disk environment. Users can use Stackdriver APIs to monitor logs related to Cloud Storage actions programmatically, and can archive their logs if required for future auditing. This gives complete visibility into administrative actions for monitoring and compliance purposes. You can learn more about Access Transparency (AXT) on Google's blog post.

Head over to the Google Cloud Storage blog to understand how these new upgrades will add to the security and control of cloud resources.
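To make the Bucket Lock workflow concrete, here is a hedged sketch using the google-cloud-storage Python client. The bucket name and retention period are illustrative assumptions, and note that locking a retention policy is permanent:

    # Hedged sketch: set and lock a retention policy (Bucket Lock / WORM)
    # with the google-cloud-storage client. Bucket name and period are
    # illustrative assumptions, not values from Google's announcement.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("compliance-records")  # hypothetical bucket

    # Retain every object for 7 years (expressed in seconds).
    bucket.retention_period = 7 * 365 * 24 * 60 * 60
    bucket.patch()

    # Locking makes the policy irrevocable: objects cannot be deleted or
    # overwritten until their retention period expires.
    bucket.lock_retention_policy()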
What's new in Google Cloud Functions serverless platform
Google Cloud announces new Go 1.11 runtime for App Engine
Cloud Filestore: A new high performance storage option by Google Cloud Platform


Center for Democracy and Technology formulates ‘Signals of Trustworthy VPNs’ to improve transparency among VPN services

Bhagyashree R
22 Oct 2018
3 min read
Earlier this year in May, the Center for Democracy and Technology (CDT) held a discussion at RightsCon in Toronto with popular VPN service providers: IVPN, Mullvad, TunnelBear, VyprVPN, and ExpressVPN. Together they formulated a list of eight questions, called the 'Signals of Trustworthy VPNs', that describe the basic commitments VPNs can make to signal their trustworthiness and positive reputation. CDT is a Washington, D.C.-based non-profit organization which aims to strengthen individual rights and freedoms by defining, promoting, and influencing technology policy and the architecture of the internet.

What was the goal behind the discussion between CDT and VPN providers?

The goal of these questions is to improve transparency among VPN services and to help resources like That One Privacy Site and privacytools.io provide better comparisons between different services. Additionally, they give users a way to easily compare the privacy, security, and data use practices of VPNs. The initiative will also encourage VPNs to deploy measures that meaningfully improve the privacy and security of individuals using their services. The questions aim to provide users clarity in three areas:

- Corporate accountability and business models
- Privacy practices
- Data security protocols and protections

You can find the entire list of questions on CDT's official website.

What are the key recommendations by CDT for VPN providers?

The following are a few of the best practices recommended to VPN providers to build trust with their users:

- VPN providers should share information about the company's leadership team, which can help users learn the reputation of those they are trusting with their online activities.
- Any VPN provider should be able to share its place of legal incorporation and the laws it operates under.
- They should provide detailed information about their business model, specifically whether subscriptions are the sole source of the service's revenue.
- They should clearly define what exactly they mean by "logging". This should cover both connection and activity logging practices, as well as whether the VPN provider aggregates this information. Users should be aware of the approximate retention periods for any log data.
- VPN providers should put in place procedures for automatically deleting any retained information after an appropriate period of time. This period should be disclosed and its length justified.
- VPN providers can also implement bug bounty programs, encouraging third parties to identify and report vulnerabilities they come across when using the VPN service.
- Independent security audits should be conducted to identify technical vulnerabilities.

To know more about CDT's recommendations and the eight questions, check out their official website.

Apple bans Facebook's VPN app from the App Store for violating its data collection rules
What you need to know about VPNFilter Malware Attack
IBM launches Industry's first 'Cybersecurity Operations Center on Wheels' for on-demand cybersecurity support


Another bug in Windows 10 October update that can cause data loss

Prasad Ramesh
22 Oct 2018
2 min read
Earlier this month, the Windows 10 October update had problems with files being deleted off users' computers, after which Microsoft had to pause the mass rollout of the update. Once the issue was reported, Microsoft tested with the Windows Insider community to find the cause and fix the bug. But now there is another bug which can cause you to lose your files.

Many people who installed the Windows 10 October update have reported an issue where ZIP operations do not work as intended: Windows fails to ask which files should be overwritten. When unzipping files to a folder, if copies of those files already exist in that folder, Windows 10 usually asks whether the existing copies should be overwritten. After the update this no longer happens; Windows just overwrites the files without informing the user. Accidental overwrites are highly likely when the user gets no prompt, for example when a modified file is silently replaced by the original from the ZIP archive. However, this happens only with the built-in Windows file manager; with a third-party tool for compressed files the bug does not occur.

Wazhai, a Reddit user, sums up the issue nicely: "The issue is that in 1809, overwriting files by extracting from an archive using File Explorer doesn't result in an overwrite prompt dialogue and also doesn't replace any files at all; it just fails silently. There are also some reports that it did overwrite items, but did so silently without asking."

There is also another, less widely reported scenario where the file extraction seems to happen but no files are updated. The bug was discussed on Reddit over the weekend and can be read in the Reddit thread.

Microsoft pulls Windows 10 October update after it deletes user files
Microsoft fixing and testing the Windows 10 October update after file deletion bug
Microsoft Your Phone: Mirror your Android phone apps on Windows
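For readers who script their own extractions, the expected pre-1809 behaviour amounts to something like the following sketch with Python's zipfile module; the paths and prompt wording are illustrative, not Microsoft's implementation:

    # A sketch of the overwrite-prompt behaviour File Explorer lost:
    # detect collisions and ask before overwriting, instead of silently
    # replacing (or silently skipping) existing files.
    import os
    import zipfile

    def extract_with_prompt(archive_path, dest):
        with zipfile.ZipFile(archive_path) as zf:
            for name in zf.namelist():
                target = os.path.join(dest, name)
                if os.path.exists(target):
                    answer = input(f"Overwrite {name}? [y/N] ")
                    if answer.strip().lower() != "y":
                        continue  # keep the existing copy
                zf.extract(name, dest)

    # extract_with_prompt("photos.zip", "C:/Users/me/Pictures")  # hypothetical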


Sway 1.0 beta.1 released with the addition of third-party panels, auto-locking, and more

Savia Lobo
22 Oct 2018
4 min read
Last week, Sway, the i3-compatible Wayland compositor, released version 1.0-beta.1. The community says Sway 1.0-beta.1 is 100% compatible with the i3 X11 window manager: it works with existing i3 configurations and supports most of i3's features. The community also maintains the wlroots project to provide a modular basis for Sway and other Wayland compositors to build upon, and has published standards for interoperable Wayland desktops. This version includes many input and output features along with other features such as auto-locking, idle management, and more.

New features in Sway 1.0-beta.1

Output features

Users can get the names of their outputs for use in the config file with swaymsg -t get_outputs. Some examples of how outputs can be configured:

- To rotate a display by 90 degrees: output DP-1 transform 90
- To enable Sway's improved HiDPI support: output DP-1 scale 2
- To enable fractional scaling: output DP-1 scale 1.5

Users can now run sway on multiple GPUs. Sway picks a primary GPU automatically, and users can override this by specifying a list of card names at startup with WLR_DRM_DEVICES=card0:card1:... Other features include support for daisy-chained DisplayPort configurations and improved Redshift support. Users can now drag windows between outputs with the mouse.

Input features

Users can get a list of input identifiers with swaymsg -t get_inputs. Users can now have multiple mice with multiple cursors, and can link keyboards, mice, drawing tablets, and touchscreens to each other arbitrarily. For example, a user can have a dvorak keyboard for normal use and a second qwerty keyboard for a pair-programming session; the coworker can focus and type into separate windows from what the user is working on.

Addition of third-party panels, lockscreens, and more

This version includes a new layer-shell protocol which enables the use of more third-party software on sway. One of the main goals of sway 1.0 and wlroots is to break down the boundaries between Wayland compositors and encourage standard interoperable protocols. The community has also added two new protocols for capturing the screen, screencopy and dmabuf-export, which are useful for screenshots and real-time screen capture, for example to live stream on Twitch.

DPMS, auto-locking, and idle management

The new swayidle tool adds support for DPMS, auto-locking, and idle management, and it even works on other Wayland compositors. To configure it, start the daemon in the sway config file:

    exec swayidle \
        timeout 300 'swaylock -c 000000' \
        timeout 600 'swaymsg "output * dpms off"' \
            resume 'swaymsg "output * dpms on"' \
        before-sleep 'swaylock -c 000000'

This locks the screen after 300 seconds of inactivity. After 600 seconds, it turns off all outputs (and turns them back on when the user simply wiggles the mouse). The configuration also locks the screen before the system goes to sleep. However, none of this happens while a video is playing in a supported media player (mpv, for example).
Other features of Sway 1.0-beta.1

Additional features in this beta include:

- swaylock has a config file
- Drag and drop is supported
- Rich content (like images) is synced between the Wayland and X11 clipboards
- The layout is updated atomically, meaning the user will never see an in-progress frame when resizing windows
- Primary selection is implemented and synced with X11

To know more about Sway 1.0-beta.1 in detail, see the release notes.

Chrome 70 releases with support for Desktop Progressive Web Apps on Windows and Linux
Announcing the early release of Travis CI on Windows
Windows 10 IoT Core: What you need to know

Opus 1.3, a popular FOSS audio codec with machine learning and VR support, is now generally available

Amrata Joshi
22 Oct 2018
3 min read
Last week, the team at Opus announced the general availability of Opus Audio Codec version 1.3. Opus 1.3 brings a new set of features: a reliable speech/music detector based on a recurrent neural network, ambisonics support, efficient memory use, compatibility with RFC 6716, and a lot more. Opus is an open and royalty-free audio codec, highly useful for all audio applications, from music streaming and storage to high-quality video-conferencing and VoIP. Six years after its standardization by the IETF, Opus is included in all major browsers and mobile operating systems, is used for a wide range of applications, and is the default WebRTC codec.

New features in Opus Audio Codec 1.3

Reliable speech/music detector powered by machine learning

Opus 1.3 brings a new speech/music detector. Because it is based on a recurrent neural network, it is simpler and more reliable than the detector used in version 1.1, which was based on a simple (non-recurrent) neural network followed by an HMM-based layer to combine the neural network results over time. The new detector is built from a recurrent neuron, the Gated Recurrent Unit (GRU). The GRU does not just learn how to use its input and memory at a given time; it also learns how and when to update its memory, which helps it remember information for a longer period of time.

Mixed content encoding gets better

Mixed content encoding, especially at bit rates below 48 kb/s, becomes more convenient as the new detector improves the performance of Opus. Developers will see a significant improvement in speech encoding at lower bit rates, both for mono and stereo.

Encode 3D audio soundtracks for VR easily

This release ships with ambisonics support. Ambisonics can be used to encode 3D audio soundtracks for VR and 360 videos.

The Opus detector won't take much of your space

The Opus detector has just 4986 weights (which fit in less than 5 KB) and takes about 0.02% of a CPU to run in real time, instead of thousands of neurons and millions of weights running on a GPU.

Additional updates

Other improvements cover security/hardening and the RNN-based Voice Activity Detector (VAD) and speech/music classification. The major bug fixes in this release are CELT PLC and bandwidth detection fixes.

Read more about the release on Mozilla's official website, and check out the demo for more details.

YouTube starts testing AV1 video codec format, launches AV1 Beta Playlist
Google releases Oboe, a C++ library to build high-performance Android audio apps
How to perform Audio-Video-Image Scraping with Python
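For intuition about what a GRU computes, here is a small numpy sketch of a single GRU step. The weights below are random placeholders, not the trained detector weights that Opus ships:

    # One Gated Recurrent Unit (GRU) step, the recurrent neuron the new
    # detector is built from. Weights are random stand-ins; the shipped
    # Opus detector has 4986 trained weights (< 5 KB).
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
        z = sigmoid(Wz @ x + Uz @ h)              # update gate: keep vs. rewrite memory
        r = sigmoid(Wr @ x + Ur @ h)              # reset gate: how much memory to read
        h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
        return (1.0 - z) * h + z * h_cand         # new hidden state

    dim_in, dim_h = 8, 4
    rng = np.random.default_rng(0)
    Wz, Wr, Wh = (rng.standard_normal((dim_h, dim_in)) * 0.1 for _ in range(3))
    Uz, Ur, Uh = (rng.standard_normal((dim_h, dim_h)) * 0.1 for _ in range(3))

    h = np.zeros(dim_h)
    for frame in rng.standard_normal((10, dim_in)):  # 10 frames of audio features
        h = gru_step(frame, h, Wz, Uz, Wr, Ur, Wh, Uh)
    print(h)  # the state a classifier layer would map to speech vs. music

The gating is what the announcement alludes to: because z and r are learned, the unit decides per frame how long to hold on to past information, which a plain feed-forward detector cannot do.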


Azure DevOps outage root cause analysis starring greedy threads and rogue scale units

Prasad Ramesh
19 Oct 2018
4 min read
Azure DevOps suffered several outages earlier this month, and Microsoft has published a root cause analysis of what happened. This comes after severe weather took down the Azure cloud last month.

Incidents on October 3, 4 and 8

It started on October 3 with what looked like a networking issue in the North Central US region, lasting over an hour. It happened again the following day, again for about an hour. On following up with the Azure networking team, it was found that there were no networking issues when the outages happened. Another incident happened on October 8. The team realized that something was fundamentally wrong, and an analysis of telemetry was done; the issue was still not found.

After the third incident, it was noticed that the thread count on a machine continued to rise, an indication that some activity was going on even with no load coming to the machine. All 1202 threads had the same call stack, with the following key call:

    Server.DistributedTaskResourceService.SetAgentOnline

Agent machines send a heartbeat signal every minute to the service to notify that they are online. If no signal arrives from an agent for over a minute, it is marked offline and the agent needs to reconnect. In this case the agent machines were marked offline and eventually reconnected after retries; on success, each agent was stored in an in-memory list. Potentially thousands of agents were reconnecting at a time.

In addition, recently adopted asynchronous call patterns created a way for the threads to fill up with messages. The .NET message queue stores the messages to process and maintains a thread pool; as a thread becomes available, it services the next message in the queue.

The thread pool, in this case, was smaller than the queue. For N threads, N messages are processed simultaneously. When an async call is made, the same message queue is used: a new message is queued to complete the async call and read the value. That message sits at the end of the queue while all the threads are occupied processing other messages, so the call cannot complete until the earlier messages have completed, tying up one thread. The process comes to a standstill once N messages are being processed, where N equals the number of threads. At that point a machine can no longer process requests, causing the load balancer to take it out of rotation. Hence the outage. An immediate fix was to conditionalize this code so no more async calls were made; this was safe because the pool providers feature isn't in effect yet.

Incident on October 10

On October 10, an incident with a 15-minute impact took place. The initial problem was a spike in slow response times from SPS, ultimately caused by problems in one of the databases. A Team Foundation Server (TFS) deployment put pressure on SPS, their authentication service. When TFS is deployed, sets of scale units called deployment rings are also deployed, and when the deployment for a scale unit completes, it puts extra pressure on SPS. There are built-in delays between scale units to accommodate the extra load, and there is also sharding going on in SPS to break it into multiple scale units. Together, these factors tripped the circuit breakers in the database, which led to slow response times and failed calls. The incident was mitigated by manually recycling the unhealthy scale units.

For more details and the complete analysis, visit the Microsoft website.
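The queue-starvation mechanism described above is easy to reproduce in any fixed-size thread pool. Here is a self-contained Python illustration (not Microsoft's .NET code); the two-second timeout is added so the script terminates, whereas the real outage had an unbounded wait:

    # Illustration of the RCA's failure mode: a fixed-size pool starves when
    # every worker blocks on a continuation queued behind it on the same pool.
    import concurrent.futures
    from concurrent.futures import ThreadPoolExecutor

    pool = ThreadPoolExecutor(max_workers=2)  # stands in for the N message threads

    def inner():
        return "done"

    def outer():
        # "Async" continuation submitted to the same queue, then awaited.
        # When all workers sit here, inner() can never be scheduled.
        try:
            return pool.submit(inner).result(timeout=2)
        except concurrent.futures.TimeoutError:
            return "starved"  # the real outage had no timeout: a standstill

    # One task at a time is fine: the second worker runs inner().
    print(pool.submit(outer).result())

    # Saturating the pool reproduces the outage pattern: both workers wait
    # on inner() calls stuck behind them in the queue until the timeout.
    print([f.result() for f in [pool.submit(outer) for _ in range(2)]])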
Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary
Is your Enterprise Measuring the Right DevOps Metrics?


Ubuntu 18.10 ‘Cosmic Cuttlefish’ releases with focus on AI development, multi-cloud and edge deployments, and much more!

Melisha Dsouza
19 Oct 2018
3 min read
"Ubuntu is now the world's reference platform for AI engineering and analytics." -Mark Shuttleworth, CEO of Canonical

Yesterday (on 18th October), Canonical announced the release of Ubuntu 18.10, codenamed 'Cosmic Cuttlefish'. This new release focuses on multi-cloud deployments, AI software development, a new community desktop theme, and richer snap desktop integration. According to Shuttleworth, the new release will help accelerate developer productivity and help enterprises operate faster while scaling across multiple clouds and diverse edge appliances.

Fun fact: Ubuntu codenames advance in alphabetical order. Following Ubuntu 18.04 'Bionic Beaver', we now have the Cosmic Cuttlefish. Each codename comprises an adjective and an animal starting with the same letter.

5 major features of Ubuntu 18.10

#1 New compression algorithms for faster installation and boot

Ubuntu 18.10 uses the LZ4 and zstd compression algorithms, which support around 10% faster boot compared to the previous version. The algorithms also speed up installation, which takes around 5 minutes in offline mode. (A short sketch of the two codecs follows this feature rundown.)

#2 Optimised for multi-cloud computing

This version is designed especially with cloud-based deployments in mind. The Ubuntu Server 18.10 images are available on all major public clouds. For private clouds, the release supports OpenStack Rocky for AI and NFV hardware acceleration, and it comes with Ceph Mimic to reduce storage overhead. Including Kubernetes version 1.12, this version brings increased security and scalability by automating the provisioning of clusters with transport layer encryption, and it is more responsive to dynamic workloads through faster scaling.

#3 Improved gaming performance

The kernel has been updated to the 4.18-based Linux kernel. In addition, updates in Mesa and X.org significantly improve game performance. Graphics support expands to AMD VegaM in the latest Intel Kaby Lake-G CPUs, the Raspberry Pi 3 Model B and B+, and Qualcomm Snapdragon 845. Ubuntu 18.10 also introduces the recently released GNOME 3.30 desktop, contributing to an overall performance boost.

#4 Startup time boost and XDG Portals support for Snap applications

Canonical is bringing some useful improvements to its Snap packages. Snap applications will start in less time, and with XDG portal support, Snaps can be installed in a few clicks from the Snapcraft Store website. Major public cloud and server applications like Google Cloud SDK, AWS CLI, and Azure CLI are now available in the new version. The new release also allows Snaps to access files on the host system through native desktop controls.

#5 New default theme and icons

Ubuntu 18.10 uses the Yaru community theme, replacing the long-serving Ambiance and Radiance themes and giving the desktop a fresh new look and feel.

Other miscellaneous changes include:

- DLNA support connects Ubuntu with DLNA-capable smart TVs, tablets, and other devices
- Fingerprint scanners are now supported
- Ubuntu Software removes dependencies while uninstalling software
- The default toolchain has moved to gcc 8.2 with glibc 2.28
- Ubuntu 18.10 is also updated to openssl 1.1.1 and gnutls 3.6.4 with TLS 1.3 support

All these upgrades are causing waves in the Linux community. That said, users should check the release notes for issues encountered in this new version.
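As promised above, here is a hedged sketch of the two codecs using the Python 'lz4' and 'zstandard' packages (pip install lz4 zstandard). The payload is an arbitrary stand-in; real kernel and initramfs images will compress differently:

    # Compare the codecs Ubuntu 18.10 adopts for faster boot/install.
    # LZ4 favours speed; zstd favours ratio. Payload is illustrative only.
    import time
    import lz4.frame
    import zstandard as zstd

    data = b"Ubuntu 18.10 Cosmic Cuttlefish " * 100_000

    for name, compress in [
        ("lz4", lz4.frame.compress),
        ("zstd", zstd.ZstdCompressor(level=3).compress),
    ]:
        start = time.perf_counter()
        out = compress(data)
        elapsed = (time.perf_counter() - start) * 1000
        print(f"{name}: {len(out)} bytes in {elapsed:.1f} ms")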
You can head over to the official release page to download the new version of this OS. Alternatively, learn more about these new features at itsfloss.com.

KUnit: A new unit testing framework for Linux Kernel
Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable
The kernel community attempting to make Linux more secure


Atlassian overhauls its Jira software with customizable workflows, new tech stack, and roadmaps tool

Sugandha Lahoti
19 Oct 2018
3 min read
Atlassian has completely revamped its traditional Jira software, adding a simplified user experience, new third-party integrations, and a new product roadmaps tool. In the official blog post announcing the release yesterday, the team said they have "rolled out an entirely new project experience for the next generation", with a focus on making Jira "simply powerful". Sean Regan, head of growth for Software Teams at Atlassian, said that with a more streamlined and simplified application, Atlassian hopes to appeal to a wider range of business execs involved in the software-creation process.

What's new in the revamped Jira software?

Powerful tech stack: Jira Software is transformed into a modern cloud app, with an updated tech stack, permissions, and UX. Developers have more autonomy, administrators have more flexibility, and advanced users have more power. "Additionally, we've made Jira simpler to use across the board. Now, anyone who works with development teams can collaborate more easily."

Customizable workflow: To upgrade the user experience, Atlassian has introduced a new feature called build-your-own-boards. Users can customize their own workflow, issue types, and fields for the board, without administrator access and without jeopardizing other projects' customizations. This customizable workflow was inspired by Trello, the task management app acquired by Atlassian for $425 million in 2017. "What we tried to do in this new experience is mirror the power that people know and love about Jira, with the simplicity of an experience like Trello," said Regan.

Third-party integrations: The new Jira comes with almost 600 third-party integrations. These third-party applications, Atlassian said, should help appeal to a broader range of job roles that interact with developers. Integrations include Adobe, Sketch, and InVision, as well as Facebook's Workplace and updated integrations for Gmail and Slack.

Jira Cloud Mobile: Jira Cloud mobile lets developers access their projects from their smartphones. Developers can create, read, update, and delete issues and columns; groom their backlog; start and complete sprints; and respond to comments and tag relevant stakeholders, all from their mobile.

Roadmapping tool: Jira now features a brand new roadmaps tool that makes it easier for teams to see the big picture. "When you have multiple teams coordinating on multiple projects at the same time, shipping different features at different percentage releases, it's pretty easy for nobody to know what is going on," said Regan. "Roadmaps helps bring order to the chaos of software development."

Pricing for the Jira software varies by the number of users: $10 per user per month for teams of up to 10 people; $7 per user per month for teams of between 11 and 100 users; and varying prices for teams larger than 100. The company also offers a free 7-day trial.

Read more about the release on the Jira Blog. You can also have a look at their public roadmap.

Atlassian acquires OpsGenie, launches Jira Ops to make the incident response more powerful
GitHub's new integration for Jira Software Cloud aims to provide teams with a seamless project management experience
Atlassian open sources Escalator, a Kubernetes autoscaler project

KUnit: A new unit testing framework for Linux Kernel

Savia Lobo
18 Oct 2018
2 min read
On Tuesday, Google engineer Brendan Higgins announced an experimental set of 31 patches introducing KUnit, a new Linux kernel unit testing framework to help preserve and improve the quality of the kernel's code. KUnit is a lightweight unit testing and mocking framework designed for the Linux kernel. Unit tests necessarily have finer granularity: they can test all code paths easily, solving the classic problem of how hard it is to exercise error-handling code. KUnit is heavily inspired by JUnit, Python's unittest.mock, and Googletest/Googlemock for C++. It provides facilities for defining unit test cases, grouping related test cases into test suites, providing common infrastructure for running tests, mocking, spying, and much more.

Brendan writes, "It does not require installing the kernel on a test machine or in a VM and does not require tests to be written in userspace running on a host kernel. Additionally, KUnit is fast: From invocation to completion KUnit can run several dozen tests in under a second. Currently, the entire KUnit test suite for KUnit runs in under a second from the initial invocation (build time excluded)."

When asked if KUnit will replace the other testing frameworks for the Linux kernel, Brendan said it would not: "Most existing tests for the Linux kernel are end-to-end tests, which have their place. A well tested system has lots of unit tests, a reasonable number of integration tests, and some end-to-end tests. KUnit is just trying to address the unit test space which is currently not being addressed."

To know more about KUnit in detail, read Brendan Higgins' email threads.

What role does Linux play in securing Android devices?
bpftrace, a DTrace like tool for Linux now open source
Linux drops Code of Conflict and adopts new Code of Conduct


AWS announces more flexibility in its Certification Exams, drops exam prerequisites

Melisha Dsouza
18 Oct 2018
2 min read
Last week (on 11th October), the AWS team announced that they are removing exam prerequisites to give users more flexibility in the AWS Certification Program. Previously, a customer had to pass the Foundational or Associate level exam before appearing for a Professional or Specialty certification. AWS has now eliminated this prerequisite in response to customer requests for flexibility: customers are no longer required to hold an Associate certification before pursuing a Professional certification, nor a Foundational or Associate certification before pursuing a Specialty certification.

The Professional level exams are pretty tough to pass; without deep knowledge of the AWS platform, passing them is difficult. A customer who skips the Foundational or Associate level exams and sits the Professional exams directly will not have the practice and knowledge necessary to fare well in them, and failing and then backing up to the Associate level can be demotivating.

AWS Certification helps individuals demonstrate the expertise to design, deploy, and operate highly available, cost-effective, and secure applications on AWS, and the proficiency they gain brings tangible benefits. The exams also help employers identify skilled professionals who can use AWS technologies to lead IT initiatives, and reduce the risks and costs of implementing workloads and projects on the AWS platform.

AWS dominates the cloud computing market, and the AWS Certified Solutions Architect exams can help candidates secure a career in this exciting field. AWS offers digital and classroom training to build cloud skills and prepare for certification exams. To know more about this announcement, head over to the official blog.

'AWS Service Operator' for Kubernetes now available allowing the creation of AWS resources using kubectl
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
AWS machine learning: Learning AWS CLI to execute a simple Amazon ML workflow [Tutorial]


Twilio Flex, a fully-programmable contact center platform, is now generally available

Bhagyashree R
18 Oct 2018
3 min read
Yesterday, Twilio announced the general availability of Flex. Since its preview announcement in March, Flex has been used by thousands of contact center agents, including support and sales teams at Lyft, Scorpion, Shopify, and U-Haul. Twilio Flex is a fully-programmable contact center platform that aims to give businesses complete control over customer engagement; it is a cloud-based platform designed to put flexibility in your hands.

What functionalities does Flex provide to enterprises?

Answer user queries using Autopilot

Flex provides a conversational AI platform called Autopilot, with which businesses can build custom messaging bots, IVRs, and home assistant apps. These bots are trained on data pulled by Autopilot using Twilio's natural language processing engine. Companies can deploy the bots across multiple channels including voice, SMS, chat, Alexa, Slack, and Google Assistant. The bots can respond to frequently asked questions, and if a query becomes complex, they can hand the conversation over to a human agent.

Secure phone payments with Twilio Pay

With only one line of code, businesses can activate the Twilio Pay service, which provides the tools needed to process payments over the phone. It relies on secure payment methods such as tokenization to ensure that credit card information is handled safely.

Provide a true omnichannel experience

Flex gives enterprises access to a number of channels out of the box, including voice, SMS, email, chat, video, and Facebook Messenger, among others. Agents can switch from channel to channel without losing the conversation or its context.

Customize the user interface programmatically

Flex user interfaces are designed with customization in mind. Enterprises can customize customer-facing components like click-to-call or click-to-chat, add entirely new channels, or integrate new reporting dashboards to display agent performance or customer satisfaction.

Integrate any application

Enterprises can integrate their business-critical third-party applications with Flex, including systems such as customer relationship management (CRM), workforce management (WFM), reporting, analytics, or data stores.

Analytics and insights for better customer experience

Flex offers a real-time event stream, a supervisor desktop, and an admin desktop, giving supervisors and administrators complete visibility and control over interaction data. Using these analytics and insights, they can better monitor and manage an agent's performance.

To know more about Twilio Flex, check out the official announcement.

Twilio acquires SendGrid, a leading Email API Platform, to bring email services to its customers
Twilio WhatsApp API: A great tool to reach new businesses
Building a two-way interactive chatbot with Twilio: A step-by-step guide
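For a sense of what the "one line of code" for Twilio Pay looks like in practice, here is a hedged sketch of a voice webhook returning TwiML with the Pay verb. The Flask wiring, charge amount, and action URL are illustrative assumptions, not Twilio's exact recommended setup:

    # Hypothetical voice webhook handing a call to Twilio Pay via TwiML.
    # <Pay> prompts the caller for card details and tokenizes them so the
    # agent (and this server) never see the raw card number.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/voice", methods=["POST"])
    def voice():
        twiml = """<?xml version="1.0" encoding="UTF-8"?>
    <Response>
      <Say>Please enter your card number after the tone.</Say>
      <Pay chargeAmount="20.45" action="/payment-complete"/>
    </Response>"""
        return twiml, 200, {"Content-Type": "text/xml"}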

Satya Nadella reflects on Microsoft's progress in areas of data, AI, business applications, trust, privacy and more.

Sugandha Lahoti
17 Oct 2018
5 min read
Microsoft CEO Satya Nadella published his letter to shareholders from the company's 2018 annual report on LinkedIn yesterday. He talks about Microsoft's accomplishments in the past year and its progress across the modern workplace, business applications, infrastructure, data, AI, and gaming. He also describes the data and privacy rules adopted by Microsoft and the company's commitment to "instill trust in technology across everything they do."

Microsoft's results and progress

Data and AI

Azure Cosmos DB has already exceeded $100 million in annualized revenue. The company also saw rapid customer adoption of Azure Databricks for data preparation, advanced analytics, and machine learning scenarios. The Azure Bot Service has nearly 300,000 developers, and Microsoft is on the road to building the world's first AI supercomputer in Azure. Microsoft also acquired GitHub, recognizing the increasingly vital role developers will play in value creation and growth across every industry.

Business Applications

Microsoft's investments in Power BI have made it the leader in business analytics in the cloud. The Open Data Initiative with Adobe and SAP will help customers take control of their data and build new experiences that truly put people at the center. HoloLens and mixed reality are being applied to first-line workers, who account for 80 percent of the world's workforce. New solutions powered by LinkedIn and Microsoft Graph help companies manage talent, training, and sales and marketing.

Applications and Infrastructure

Azure revenue grew 91 percent year-over-year, and the company is investing aggressively to build Azure as the world's computer. It added nearly 500 new Azure capabilities in the past year, focused on both existing workloads and new workloads such as IoT and Edge AI. Microsoft expanded its global data center footprint to 54 regions and introduced Azure IoT, Azure Stack, and Azure Sphere.

Modern Workplace

More than 135 million people use Office 365 commercial every month. Outlook Mobile is employed on 100 million iOS and Android devices worldwide. Microsoft Teams is used by more than 300,000 organizations of all sizes, including 87 of the Fortune 100. Windows 10 is active on nearly 700 million devices around the world.

Gaming

The company surpassed $10 billion in gaming revenue this year. Xbox Live now has 57 million monthly active users, and Microsoft is investing in new services like Mixer and Game Pass. It also added five new gaming studios this year, including PlayFab, to build a cloud platform for the gaming industry across mobile, PC, and console.

Microsoft's impact around the globe

Nadella highlighted that companies such as Coca-Cola, Chevron Corporation, and ZF Group, a car parts manufacturer in Germany, are using Microsoft's technology to build their own digital capabilities. Walmart is using Azure and Microsoft 365 to transform the shopping experience for customers. In Kenya, M-KOPA Solar, one of Microsoft's partners, connected homes across sub-Saharan Africa to solar power using the Microsoft Cloud. Dynamics 365 was used in Arizona to improve outcomes among the state's 15,000 children in foster care. MedApp is using HoloLens in Poland to help cardiologists visualize a patient's heart as it beats in real time. In Cambodia, underserved children in rural communities are learning to code with Minecraft.
How Microsoft is handling trust and responsibility

Microsoft's motto is "instilling trust in technology across everything they do." Nadella says, "We believe that privacy is a fundamental human right, which is why compliance is deeply embedded in all our processes and practices."

Microsoft has extended the data subject rights of GDPR to all its customers around the world, not just those in the European Union, and advocated for the passage of the CLOUD Act in the U.S. It also led the Cybersecurity Tech Accord, which has been signed by 61 global organizations, and is calling on governments to do more to make the internet safe. It announced the Defending Democracy Program to work with governments around the world to help safeguard voting, and introduced AccountGuard to offer advanced cybersecurity protections to political campaigns in the U.S.

The company is also investing in tools for detecting and addressing bias in AI systems and is advocating government regulation. It is addressing society's most pressing challenges with new programs like AI for Earth, a five-year, $50M commitment to environmental sustainability, and AI for Accessibility to benefit people with disabilities.

Nadella further adds, "Over the past year, we have made progress in building a diverse and inclusive culture where everyone can do their best work." Microsoft has nearly doubled the number of women corporate vice presidents since FY16 and increased African American/Black and Hispanic/Latino representation by 33 percent.

He concludes, "I'm proud of our progress, and I'm proud of the more than 100,000 Microsoft employees around the world who are focused on our customers' success in this new era." Read the full letter on LinkedIn.

Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65
'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter
Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members


Jeff Bezos: Amazon will continue to support U.S. Defense Department

Richard Gall
16 Oct 2018
2 min read
Just days after Google announced that it was pulling out of the race to win the $10 billion JEDI contract from the Pentagon, Amazon's Jeff Bezos has stated that Amazon will continue to support Pentagon and Defense projects. Bezos went further, criticising tech companies that don't work with the military. Speaking at the Wired25 conference, the Amazon chief said: "If big tech companies are going to turn their back on U.S. Department of Defense (DoD), this country is going to be in trouble... One of the jobs of senior leadership is to make the right decision, even when it's unpopular."

Bezos remains unfazed by criticism

It would seem that Bezos isn't fazed by the criticism that other companies have faced. Google explained its withdrawal by saying "we couldn't be assured that it would align with our AI Principles," though it's likely that the significant internal debate about the ethical uses of AI, as well as a wave of protests against Project Maven earlier in the year, were critical components in the final decision.

Microsoft remains in the running for the JEDI contract, but there appears to be much more internal conflict over the issue. Anonymous Microsoft employees have, for example, published an open letter to senior management on Medium. The letter states: "What are Microsoft's AI Principles, especially regarding the violent application of powerful A.I. technology? How will workers, who build and maintain these services in the first place, know whether our work is being used to aid profiling, surveillance, or killing?"

Clearly, Jeff Bezos isn't too worried about upsetting his employees. Perhaps the story says something about the difference in the corporate structure of these huge companies: while they all have high-profile management teams, it's only at Amazon that the single figure of Bezos reigns supreme in the spotlight. With Blue Origin he's got his sights set on something far beyond ethical decision making: sending humans into space. Cynics might even say it's the logical extension of the implicit imperialism of his enthusiasm for the Pentagon.