
Tech News

3709 Articles

Keras 2.2.0 releases!

Sunith Shetty
08 Jun 2018
3 min read
The Keras team has announced version 2.2.0, with notable features that let developers perform deep learning with ease. The release brings new API changes, a new input mode, bug fixes, and performance improvements to the high-level neural network API. Keras is a popular neural network API that can run on top of TensorFlow, CNTK, or Theano. The Python API is developed with a focus on fast experimentation, minimizing the delay between idea and result during research. It is a highly efficient library that allows easy and fast prototyping, and it runs seamlessly on both CPU and GPU.

Some of the noteworthy changes in Keras 2.2.0:

New areas of improvement

- A new API called Model subclassing has been added for model definition.
- A new input mode provides the ability to call models on TensorFlow tensors directly (TensorFlow backend only).
- Improved feature coverage of Keras with the CNTK and Theano backends.
- Many bug fixes and performance improvements across the Keras API.
- The Keras engine now follows a much more modular structure, improving code structure and code health and reducing test time.
- The Keras modules applications and preprocessing have been externalized to their own repositories, keras-applications and keras-preprocessing respectively.

New API changes

- The MobileNetV2 application has been added and is available for all backends.
- CNTK and Theano support has been enabled for the Xception and MobileNet applications.
- Support has been extended to the SeparableConv1D and SeparableConv2D layers, as well as the backend methods separable_conv1d and separable_conv2d, which were previously available only for TensorFlow.
- Symbolic tensors can now be fed to models with the TensorFlow backend.
- Support for input masking in the TimeDistributed layer.
- ReLU activation is now easier to configure while retaining easy serialization, via a new advanced_activation layer, ReLU.
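To illustrate the style behind the new Model subclassing API, here is a framework-free Python sketch of the pattern: a model is a class that builds its layers in __init__ and composes them in call(). This is only an illustration of the idiom; real Keras code would subclass keras.Model, and the toy Dense layer here (a scalar weight and bias) is a hypothetical stand-in, not the Keras layer.

```python
# Framework-free sketch of the "model subclassing" pattern added in
# Keras 2.2.0. A real model would subclass keras.Model; the Dense class
# below is a toy stand-in that applies y = w * x + b elementwise.
class Dense:
    def __init__(self, weight, bias):
        self.weight, self.bias = weight, bias

    def __call__(self, xs):
        return [self.weight * x + self.bias for x in xs]

class MyModel:
    def __init__(self):
        # Layers are created once, in the constructor.
        self.hidden = Dense(2.0, 1.0)
        self.out = Dense(0.5, 0.0)

    def call(self, inputs):
        # The forward pass composes the layers imperatively.
        return self.out(self.hidden(inputs))

model = MyModel()
print(model.call([1.0, 2.0]))  # [1.5, 2.5]
```

The appeal of the pattern is that the forward pass is ordinary Python, so arbitrary control flow can be used inside call().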
For the complete list of new API changes, you can visit GitHub.

Breaking changes

- The legacy Merge layers and their related functionality, remnants of Keras 0, have been removed. These layers were deprecated in May 2016 and scheduled for full removal in August 2017. From now on, models from the Keras 0 API that use these layers cannot be loaded with Keras 2.2.0 and above.
- The base initializer truncated_normal now returns values scaled by ~0.9, providing the correct variance after truncation.

For the full list of updates, you can refer to the release notes.

Read more

- Why you should use Keras for deep learning
- Implementing Deep Learning with Keras
- 2 ways to customize your deep learning models with Keras
- How to build Deep convolutional GAN using TensorFlow and Keras
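The ~0.9 rescaling in truncated_normal can be motivated with a quick stdlib-only simulation (this is not Keras code, and the helper name is made up): truncating a unit normal at two standard deviations leaves a distribution whose standard deviation is roughly 0.88, so the initializer must scale back up to restore the intended variance.

```python
import random
import statistics

random.seed(42)

def sample_truncated_normal(n, mean=0.0, std=1.0):
    """Draw n normal samples, rejecting values beyond 2 standard deviations."""
    out = []
    while len(out) < n:
        x = random.gauss(mean, std)
        if abs(x - mean) <= 2 * std:
            out.append(x)
    return out

samples = sample_truncated_normal(100_000)
truncated_std = statistics.stdev(samples)
print(f"std after truncation: {truncated_std:.3f}")  # roughly 0.88, not 1.0
```

Dividing the truncated samples by this measured factor (about 0.88, hence the ~0.9 in the release notes) recovers the variance the user asked for.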


Plotly 4.0, popular Python data visualization framework, releases with Offline Only, Express First, and Displayable Anywhere features

Fatema Patrawala
23 Jul 2019
3 min read
Yesterday the Plotly team announced the release of Plotly.py 4.0, now available for download from PyPI. This version includes some exciting new features and changes: a switch to “offline” mode by default, the inclusion of Plotly Express as the recommended entry point into the library, and a new rendering framework compatible not only with Jupyter notebooks but also with other notebook systems such as Colab, Azure, and Kaggle notebooks, as well as popular IDEs such as PyCharm, VS Code, Spyder, and others.

To upgrade to the latest version, you can run pip install plotly==4.0.0 or conda install -c plotly plotly==4.0.0. More details can be found in the Getting Started and Migrating to Version 4 guides.

Let us check out the key features in Plotly 4.0.

Offline Only

Prior versions of plotly contained functionality for creating figures in both “online” and “offline” modes. In “online” mode, figures were uploaded to an instance of Plotly’s Chart Studio service and then displayed, whereas in “offline” mode figures were rendered locally. This duality was a common source of confusion for several years, so in version 4 the team made some important changes to clear it up. The only supported mode of operation in the plotly package is now “offline” mode, which requires no internet connection, no account, no authentication tokens, and no payment of any kind. Support for “online” mode has been moved into a separately installed package called chart-studio.

Express First

Earlier this year the team released a standalone library called Plotly Express, aimed at making it significantly easier and faster to create plotly figures from tidy data: as easy as a single line of Python. Plotly Express was extremely well received by the community, and starting with version 4, plotly includes Plotly Express built in, accessible as plotly.express.
Displayable Anywhere

In addition to “offline” mode, the plotly.offline package has been reimplemented on top of a new extensible renderers framework that enables Plotly figures to be displayed not only in Jupyter notebooks, but just about anywhere, including:

- JupyterLab and the classic Jupyter notebook
- Other notebooks such as Colab, nteract, Azure, and Kaggle
- IDEs and CLIs such as VS Code, PyCharm, QtConsole, and Spyder
- Other contexts such as sphinx-gallery
- Dash apps (with dash_core_components.Graph())
- Static raster and vector files (with fig.write_image())
- Standalone interactive HTML files (with fig.write_html())
- Embedded in any website (with fig.to_json() and Plotly.js)

In addition to the above new features, there are other changes, such as a new default theme in Plotly.py 4.0. The team has introduced a suite of new figure methods for updating figures after they have been constructed. The release also supports all subplot and trace types: 2D, 3D, polar, ternary, maps, pie charts, sunbursts, Sankey diagrams, and more. Plotly.py 4.0 is also supported by JupyterLab 1.0. To know about these feature updates in detail, check out the Medium post by the Plotly team.

Read more

- Plotly releases Dash DAQ: a UI component library for data acquisition in Python
- plotly.py 3.0 releases
- Python in Visual Studio Code released with enhanced Variable Explorer, Data Viewer, and more!


Unity announces a new automotive division and two-day Unity AutoTech Summit

Sugandha Lahoti
18 May 2018
3 min read
Unity Technologies has made a startling announcement: it is plunging into the automotive and transportation industry. With its newly formed automotive division, the company plans to bring its rendering technology to auto creators, and will show off this technology at its very first Unity AutoTech Summit at Unite Berlin, scheduled for June 19-21 this year.

As John Riccitiello, Chief Executive Officer of Unity Technologies, describes it: “The real-time revolution in automotive is here. Over the past 15 years, we’ve made great strides leading the game development industry – now, we’re bringing our real-time rendering technology to a new group of creators, equipping automakers with the tools that will allow them to iterate at the speed of thought.”

Unity Automotive Division

The automotive division will bring real-time 3D, VR, and AR technologies to the world’s automotive original equipment manufacturers (OEMs) and suppliers through the Unity engine. The division is led by experts from key automobile companies such as Volkswagen, Renault, GM, Delphi, and Denso. Unity has already been working alongside the world’s top OEMs, including Audi (VR design review), Volkswagen (interactive VR training for 10,000 employees), Cadillac (virtual showroom), and Mercedes-Benz (AMG Powerwall).

Unity AutoTech Summit

The Unity AutoTech Summit at Unite Berlin is a one-of-a-kind, two-day gathering of sessions, tech demos, and networking dedicated to the automotive industry. Featured sessions will include:

- Bringing the Lexus LC500 to Life Through the Magic of Unity by David Telfer, Joe DeMiero, and Carl Seibert from Lexus
- How to Drive VR/AR Use Cases for Enterprises Using the Example of Volkswagen by Torben Volkwein from Volkswagen
- Creating Powerful Mixed Reality Applications Across Auto by Jason Yim from Trigger Global for Nissan
- Next Level Rendering Quality for Automotive by Arisa Scott from Unity
- Unity Training Workshops Taster: Introduction to Automotive Design Visualization by Anuja Dharkar from Unity
- Unity in Automotive - The Road Ahead by Tim McDonough and Ed Martin from Unity

Unity for Enterprise

Unity and PiXYZ have also partnered to launch the enterprise-level Unity Industry Bundle. The bundle consists of PiXYZ products, training, and Unity Pro, and streamlines the preparation and import of CAD data for creating real-time experiences in Unity. It supports design and engineering, AR/VR training, and the creation of high-impact customer experiences, from data centers to individuals. Visit the Unity Automotive and Transportation website for the full list of Unity’s solutions.

Read more

- Put your game face on! Unity 2018.1 is now available
- Unity plugins for augmented reality application development
- Unity 2D & 3D game kits simplify Unity game development for beginners


Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud

Savia Lobo
30 May 2018
3 min read
VMware recently released its brand new Infrastructure-as-a-Service (IaaS) cloud, VMware Integrated OpenStack (VIO) 5.0. The release, announced at the OpenStack Summit in Vancouver, Canada, is fully based on the new OpenStack Queens release. VIO provides customers with a fast and efficient solution to deploy and operate OpenStack clouds, highly optimized for VMware's NFV and software-defined data center (SDDC) infrastructure, with advanced automation and onboarding. Existing VIO users can use OpenStack's built-in upgrade capability to upgrade seamlessly to VIO 5.0.

VMware Integrated OpenStack (VIO) 5.0 will be available in both Carrier and Data Center editions. The VIO Carrier Edition addresses specific requirements of communication service providers (CSPs). Its improvements include:

- Accelerated data plane performance: support for the NSX Managed Virtual Distributed Switch in Enhanced Data Path mode and DPDK provides customers with significant improvements in application response time, reduced network latencies, and breakthrough network performance through optimized data plane techniques in VMware vSphere.
- Scalable multi-tenant resources: this provides resource guarantees and resource isolation to each tenant. It also supports elastic resource scaling, which allows CSPs to add new resources dynamically across different vSphere clusters to adapt to traffic conditions or to transition from a pilot phase to production in place.
- OpenStack for 5G and edge computing: customers gain full control over micro data centers and apps at the edge via automated API-driven orchestration and lifecycle management. The solution helps tackle enterprise use cases such as utilities, oil and gas drilling platforms, point-of-sale applications, security cameras, and manufacturing plants.
Telco-oriented use cases such as Multi-Access Edge Computing (MEC), latency-sensitive VNF deployments, and operational support systems (OSS) will also be addressed.

VIO 5.0 also enables CSP and enterprise customers to utilize Queens advancements to support mission-critical workloads across container and cloud-native application environments. Some new features include:

- High scalability: one can run up to 500 hosts and 15,000 VMs in a single region with VIO 5.0. The release also introduces support for multiple regions at once, with monitoring and metrics at scale.
- High availability for mission-critical workloads: enhancements to the Cinder volume driver make it possible to create snapshots, clones, and backups of attached volumes, dramatically improving VM and application uptime.
- Unified virtualized environment: the ability to deploy and run both VM and container workloads on a single virtualized infrastructure manager (VIM) with a single network fabric based on VMware NSX-T Data Center. This architecture enables customers to seamlessly deploy hybrid workloads where some components run in containers while others run in VMs.
- Advanced security: consolidated and simplified user and role management based on enhancements to Keystone, including the use of application credentials as well as system role assignment. VMware Integrated OpenStack 5.0 takes security to new levels with encryption of internal API traffic, Keystone-to-Keystone federation, and support for major identity management providers, including VMware Identity Manager.
- Optimization and standardization of DNS services: scalable, on-demand DNS as a service via Designate. Customers can auto-register any VM or Virtual Network Function (VNF) to a corporate-approved DNS server instead of manually registering newly provisioned hosts.

To know more about the other features in detail, read VMware's official blog.
Read more

- How to create and configure an Azure Virtual Machine
- Introducing OpenStack Foundation's Kata Containers 1.0
- SDLC puts process at the center of software engineering


Google’s new Chrome extension ‘Password Checkup’ checks if your username or password has been exposed in a third-party breach

Melisha Dsouza
06 Feb 2019
2 min read
Google released a new Chrome extension on Tuesday called Password Checkup. The extension informs users if the username and password they are currently using were stolen in any data breach, and prompts them to reset the password.

If a user’s Google account credentials have been exposed in a third-party data breach, the company automatically resets their password. The new Chrome extension extends the same level of protection to all services on the web. On installation, Password Checkup appears in the browser bar as a green shield. The extension checks login details against a database of around four billion compromised usernames and passwords. If a match is found, a dialogue box prompting users to “Change your password” appears and the icon turns bright red.

Password Checkup was designed by Google together with cryptography experts at Stanford University, with the constraint that Google itself should never learn a user’s credentials, to prevent wider exposure. Google’s blog states: “We also designed Password Checkup to prevent an attacker from abusing Password Checkup to reveal unsafe usernames and passwords.” Password Checkup uses multiple rounds of hashing, k-anonymity, private information retrieval, and a technique called blinding to protect the user’s credentials. You can check out Google’s blog for technical details on the extension.

Read more

- Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations
- Meet Carlo, a web rendering surface for Node applications by the Google Chrome team
- Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications
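A minimal stdlib sketch of the k-anonymity idea mentioned above: the client hashes its credential and sends the server only a short hash prefix, which many unrelated credentials share; the full-hash comparison then happens locally. All names, the prefix length, and the single round of SHA-256 here are simplifying assumptions; the real extension adds multiple hashing rounds, blinding, and private information retrieval.

```python
import hashlib

def hash_credential(username: str, password: str) -> str:
    """Hash a username:password pair; only a short prefix ever leaves the client."""
    return hashlib.sha256(f"{username}:{password}".encode()).hexdigest()

# Server side: breached credential hashes, indexed by hash prefix.
# Many unrelated credentials share each short prefix (k-anonymity),
# so the prefix alone reveals little about the user's credential.
PREFIX_LEN = 4
breached_hashes = {
    hash_credential("alice", "hunter2"),
    hash_credential("bob", "letmein"),
}
server_index = {}
for h in breached_hashes:
    server_index.setdefault(h[:PREFIX_LEN], set()).add(h)

def credential_breached(username: str, password: str) -> bool:
    """Client sends only the prefix; the exact match is checked locally."""
    h = hash_credential(username, password)
    candidates = server_index.get(h[:PREFIX_LEN], set())  # returned by server
    return h in candidates

print(credential_breached("alice", "hunter2"))  # True  -> prompt a reset
print(credential_breached("carol", "s3cret"))   # False
```

The server never sees a full hash, and the client only downloads the candidate set for one prefix rather than the whole breach corpus.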


Vulkan memory model: Vulkan becomes the world’s first graphics API to include a formal memory model

Savia Lobo
14 Sep 2018
2 min read
Yesterday, the Khronos Group announced that its Vulkan API (a modern cross-platform graphics and compute API) has become the world’s first graphics API to include a formal memory model for its associated GLSL™ and SPIR-V™ programming languages. The announcement brings together a number of components that significantly boost the robustness of the Vulkan standard for programming correctness and sophisticated compiler optimizations.

The Vulkan memory model

Vulkan’s memory model is based on the C++ memory model, but adds valuable functionality such as scopes, storage classes, and memory availability and visibility operations. These capabilities can be exploited to reduce the cost of synchronization and thus increase performance:

- Scopes allow synchronization to be limited to threads in close proximity to each other.
- Storage classes allow synchronization to be limited to specific types of memory.
- Availability and visibility operations give control over when and how cache maintenance operations are performed in systems with non-coherent cache hierarchies.

Additional memory model materials

The Khronos Group has lined up additional memory model materials in provisional form to enable feedback from the C++ community, academics, compiler engineers, and software developers throughout the industry with experience in multi-threaded communication and memory usage. The additional materials include:

- A provisional Vulkan Memory Model Specification, which includes extensions for Vulkan, SPIR-V, and GLSL that give Vulkan developers additional control over how their shaders synchronize access to memory in a parallel execution environment.
- Memory model extension conformance tests to help shader compilers ensure that they implement the specified memory model synchronization functionality correctly.
- A formal description of the Vulkan memory model using Alloy, a language developed at MIT for describing logical structures, together with a tool for exploring them. This is the first time Khronos has used an Alloy model for one of its specifications: Alloy precisely documents the interactions of memory operations between multiple threads and devices, and enables formal modeling and experimentation.

To know more about the Vulkan memory model in detail, visit its GitHub page.

Read more

- macOS gets RPCS3 and Dolphin using Gfx-portability, the Vulkan portability implementation for non-Rust apps
- AMD open sources V-EZ, the Vulkan wrapper library
- Think Silicon open sources GLOVE: An OpenGL ES over Vulkan middleware

The Packt top 10 for $10

Packt Editorial Staff
19 Nov 2018
5 min read
Right now, every eBook and every video is just $10 each on the Packt store. Need somewhere to get started? Here’s our Black Friday top ten for just $10.

Deep Reinforcement Learning Hands-On

Reinforcement learning is the hottest topic in AI research. The technique allows a machine learning agent to grow through trial and error in an interactive environment. Just like a human, it builds its intelligence and understanding by learning from its experiences. In Deep Reinforcement Learning Hands-On, expert author Maxim Lapan reveals the reinforcement learning methods responsible for paradigm-shifting AI such as Google’s AlphaGo Zero. Filling the gaps between theory and practice, this book is focused on practical insight into how reinforcement learning works - hands-on! Find out more.

The Modern C++ Challenge

“I would recommend this to anyone” ★★★★ Amazon Review

Take on the modern C++ challenge! Designed to hone and test your C++ skills, The Modern C++ Challenge consists of a stack of programming problems for developers of all levels. These problems don’t just test your knowledge of the language, but your skill as a programmer. Think outside the box to come up with the answers, and don’t worry: if you’re ever stumped, we’ve got the best solutions to the problems right in the book. So are you up for the challenge? Learn more.

Angular 6 for Enterprise-Ready Web Applications

The demands of modern business for powerful and reliable web applications are huge. In Angular 6 for Enterprise-Ready Web Applications, software development expert and conference speaker Doguhan Uluca takes you through a hands-on and minimalist approach to designing and architecting high-quality Angular apps. More than just a technical manual, this book introduces enterprise-level project delivery methods. Use Kanban to focus on value delivery, communicate design ideas with mock-up tools, and build great-looking apps with Angular Material. Find out more.
Mastering Blockchain - Second Edition

“I love this book and have recommended it to everyone I know who is interested in Blockchain. I also teach Blockchain at the graduate school level and have used this book in my course development and teaching... quite simply, there is nothing better on the market.” ★★★★★ Amazon Review

2018 has been the year that blockchain and cryptocurrency hit the mainstream. Fully updated and revised from the bestselling first edition, Mastering Blockchain is dedicated to showing you how to put this revolutionary technology into practice in the real world. Develop Ethereum applications, discover blockchain-for-business frameworks, build Internet of Things apps using blockchain, and more. The possibilities are endless. Find out more.

Mastering Linux Security and Hardening

Network engineer or systems administrator? You need this book. In one 378-page volume, you’ll be equipped with everything you need to know to deliver a Linux system that’s resistant to being hacked. Fill your arsenal with security techniques including SSH hardening, network service detection, setting up firewalls, encrypting file systems, and protecting user accounts. When you’re done, you’ll have a fortress that will be much, much harder to compromise. Find out more.

Mastering Go

The CEO of Shopify famously said, “Go will be the server language of the future.” Mastering Go shows you how to deliver on that promise. Take your Go skills beyond the basics and learn how to integrate them with production code. Filled with details on the interplay of systems and networking code, Mastering Go will get you writing server-level code that plays well in all environments. Learn more.

Mastering Machine Learning Algorithms

From financial trading to your Netflix recommendations, machine learning algorithms rule modern life. But whilst each algorithm is often a highly prized secret, all are often built upon a core algorithmic theory.
Mastering Machine Learning Algorithms is your complete guide to quickly getting to grips with popular machine learning algorithms. You will be introduced to the most widely used algorithms in supervised, unsupervised, and semi-supervised machine learning, and will learn how to use them in the best possible manner. If you are looking for a single resource to study, implement, and solve end-to-end machine learning problems and use cases, this is the book you need. Find out more.

Learn Qt 5

Cross-platform development is a big promise. Qt goes beyond the basics of ‘runs on Android and iOS’ or ‘works on Windows and Linux’. If you build your app with Qt, it’s truly cross-platform, offering intuitive and easy GUIs for everything from mobile and desktop to Internet of Things, automotive devices, and embedded apps. Learn Qt 5 gives hands-on coverage of the suite of essential techniques that will empower you to progress from a blank page to a shipped Qt application. Write your Qt application once, then deploy it to multiple operating systems with ease. Learn more.

Microservice Patterns and Best Practices

Microservices empower your organization to deliver applications continuously and with agility. But the proper architecture of microservices-based applications can be tricky. Microservice Patterns and Best Practices shows you the absolute best way to build and structure your microservices. Start making the right choices at the application development stage, and learn how to cut your monolithic app down into manageable chunks. Find out more.

Natural Language Processing with TensorFlow

In Natural Language Processing with TensorFlow, chief data scientist Thushan Ganegedara unravels the complexities of natural language processing. An expert on working with untested data, Thushan gives you invaluable tools to tackle immense and unstructured data volumes. Processing your raw corpus is key to effective deep learning.
Let Thushan show you how with NLP and Python’s most popular deep learning library. Learn more.


OpenSSL 3.0 will have significant changes in architecture, will include FIPS module and more

Melisha Dsouza
14 Feb 2019
3 min read
On 13th February, the OpenSSL team published a blog post outlining the changes users can expect in the OpenSSL 3.0 architecture, along with plans for a new FIPS module.

Architecture changes in OpenSSL 3.0

This release will introduce ‘Providers’ as a possible replacement for the existing ENGINE interface, enabling more flexibility for implementers. There will be three types of Providers: the “default” Provider will implement all of the most commonly used algorithms available in OpenSSL; the “legacy” Provider will implement legacy cryptographic algorithms; and the “FIPS” Provider will implement FIPS-validated algorithms. Existing engines will have to be recompiled to work normally and will be made available via both the old ENGINE APIs and a Provider compatibility layer.

The architecture will include Core Services that form the building blocks usable by applications and Providers. Providers in the new architecture implement cryptographic algorithms and supporting services, and will contain implementations of one or more of the following:

- The cryptographic primitives for an algorithm (encrypt/decrypt/sign/hash, etc.)
- Serialisation for an algorithm
- Store loader back ends
- Protocol implementations, for instance TLS and DTLS

A Provider may be entirely self-contained, or it may use services provided by other Providers or by the Core Services. New EVP APIs will be provided to find the implementation of an algorithm in the Core to be used for any given EVP call, and an implementation-agnostic mechanism will be used to pass information between the core library and the Providers. Legacy APIs that do not go via the EVP layer will be deprecated. The OpenSSL FIPS Cryptographic Module will be self-contained and implemented as a dynamically loaded Provider. Other interfaces may also be transitioned to use the Core over time. A majority of existing well-behaved applications will just need to be recompiled.
No deprecated APIs will be removed in this release. You can head over to the draft documentation to know more about the features of the upgraded architecture.

FIPS module in OpenSSL 3.0

The updated architecture incorporates the FIPS module into mainline OpenSSL. The module is dynamically loadable, will no longer be a separate download, and its support periods will be aligned with OpenSSL’s. The module is a FIPS 140-2 validated cryptographic module that contains only FIPS validated/approved cryptographic algorithms, and its version number will be aligned with the main OpenSSL version number. New APIs will give applications greater flexibility in the selection of algorithm implementations.

The FIPS Provider will implement a set of FIPS-validated services made available to the Core, including:

- POST: Power-On Self Test
- KAT: Known Answer Tests
- Integrity Check
- Low-level implementations

Conceptual Component View of OpenSSL 3.0

Read the draft documentation to know more about the FIPS module in the upgraded architecture.

Read more

- Baidu Security Lab’s MesaLink, a cryptographic memory safe library alternative to OpenSSL
- OpenSSL 1.1.1 released with support for TLS 1.3, improved side channel security
- Transformer-XL: A Google architecture with 80% longer dependency than RNNs


Microsoft releases Open Service Broker for Azure (OSBA) version 1.0

Savia Lobo
29 Jun 2018
2 min read
Microsoft has released version 1.0 of the Open Service Broker for Azure (OSBA), with full support for Azure SQL, Azure Database for MySQL, and Azure Database for PostgreSQL. Microsoft first announced a preview of OSBA at KubeCon 2017. OSBA is the simplest way to connect apps running in cloud-native environments (such as Kubernetes, Cloud Foundry, and OpenShift) to the rich suite of managed services available on Azure.

OSBA 1.0 is designed to connect mission-critical applications to Azure’s enterprise-grade backing services, and it is ideal for running in a containerized environment like Kubernetes. In its recent announcement of a strategic partnership with Red Hat to provide an OpenShift service on Azure, Microsoft demonstrated the use of OSBA with an OpenShift project template. OSBA will enable customers to deploy Azure services directly from the OpenShift console and connect them to their containerized applications running on OpenShift. Microsoft also plans to collaborate with Bitnami to bring OSBA into KubeApps, so customers can deploy solutions like WordPress built on Azure Database for MySQL and Artifactory on Azure Database for PostgreSQL.

Microsoft plans three additional focus areas for OSBA and the Kubernetes service catalog:

- Expanding the set of Azure services available in OSBA by re-enabling services such as Azure Cosmos DB and Azure Redis. These services will progress to a stable state as Microsoft learns how customers intend to use them.
- Continuing to work with the Kubernetes community to align the capabilities of the service catalog with the behavior customers expect. With this, cluster operators will be able to choose which classes/plans are available to developers.
- Pursuing a longer-term vision for the Kubernetes service catalog and the Open Service Broker API that will enable developers to describe general requirements for a service, such as “a MySQL database of version 5.7 or higher”.
Read the full coverage on Microsoft’s official blog post.

Read more

- GitLab is moving from Azure to Google Cloud in July
- Microsoft announces general availability of Azure SQL Data Sync
- Build an IoT application with Azure IoT [Tutorial]

article-image-kubernetes-1-16-releases-with-endpoint-slices-general-availability-of-custom-resources-and-other-enhancements
Vincy Davis
19 Sep 2019
4 min read

Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements

Yesterday, the Kubernetes team announced the availability of Kubernetes 1.16, which consists of 31 enhancements: 8 moving to stable, 8 in beta, and 15 in alpha. This release contains a new feature called Endpoint Slices, in alpha, to be used as a scalable alternative to Endpoints resources. Kubernetes 1.16 also contains major enhancements like the general availability of custom resources, overhauled metrics, and volume extensions. The extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs are deprecated in this version. This is Kubernetes' third release this year. The previous version, Kubernetes 1.15, was released three months ago; it brought features like extensibility around core Kubernetes APIs and cluster lifecycle stability and usability improvements.

Introducing Endpoint Slices in Kubernetes 1.16

The main goal of Endpoint Slices is to increase the scalability of Kubernetes Services. With the existing Endpoints resource, a single resource had to include all the network endpoints for a Service, making the corresponding Endpoints resources large and costly. Also, when an Endpoints resource was updated, every piece of code watching it required a full copy of the resource, which became a tedious process when dealing with a big cluster. With Endpoint Slices, the network endpoints for a Service are split into multiple resources, decreasing the amount of data required for updates. Endpoint Slices are restricted to 100 endpoints each by default.

The other goal of Endpoint Slices is to provide extensible and useful resources for a variety of implementations. Endpoint Slices will also provide flexibility for address types. The blog post states, "An initial use case for multiple addresses would be to support dual stack endpoints with both IPv4 and IPv6 addresses." As the feature is available in alpha only, it is not enabled by default in Kubernetes 1.16.
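To illustrate the shape of the new resource, here is a sketch of an EndpointSlice manifest for the alpha API in 1.16; field names follow the Kubernetes documentation but details of the v1alpha1 schema may differ:

```yaml
apiVersion: discovery.k8s.io/v1alpha1
kind: EndpointSlice
metadata:
  name: example-abc
  labels:
    # Ties this slice back to the Service that owns it.
    kubernetes.io/service-name: example
addressType: IP
ports:
  - name: http
    protocol: TCP
    port: 80
endpoints:
  - addresses:
      - "10.1.2.3"
    conditions:
      ready: true
    topology:
      kubernetes.io/hostname: node-1
```

A Service with thousands of pods is represented by many such slices of at most 100 endpoints each, so an update only touches the slice that changed.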
Major enhancements in Kubernetes 1.16

General availability of Custom Resources

With Kubernetes 1.16, CustomResourceDefinitions (CRDs), previously available in beta, are generally available with apiextensions.k8s.io/v1, incorporating the lessons of API evolution in Kubernetes. CRDs are widely used as a Kubernetes extensibility mechanism. In CRD.v1, 'defaulting' support is enabled by default; when defaulting is combined with the CRD conversion mechanism, it becomes possible to build stable APIs over time. The blog post adds, "Updates to the CRD API won't end here. We have ideas for features like arbitrary subresources, API group migration, and maybe a more efficient serialization protocol, but the changes from here are expected to be optional and complementary in nature to what's already here in the GA API."

Overhauled metrics

In earlier versions, Kubernetes made extensive use of a global metrics registry to register the metrics it exposes. In this latest version, the metrics registry has been reworked, making Kubernetes metrics more stable and transparent.

Volume extension

This release contains many enhancements to volumes and volume modifications. Volume resizing support in the Container Storage Interface (CSI) specs has moved to beta, allowing CSI spec volume plugins to be resizable.

Additional Windows enhancements in Kubernetes 1.16

The workload identity option for Windows containers has moved to beta, so Windows workloads can now gain exclusive access to external resources. New alpha support has been added to kubeadm for preparing and adding a Windows node to a cluster, and new plugin support for CSI is introduced in alpha.

Interested users can download Kubernetes 1.16 on GitHub. Check out the Kubernetes blog page for more information.
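Defaulting in the v1 CRD API is declared in the OpenAPI schema. A minimal sketch, adapted from the common CronTab pattern in the Kubernetes docs (group and field names here are illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                replicas:
                  type: integer
                  default: 1   # the API server fills this in when omitted
```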
Other interesting news in Kubernetes

The Continuous Intelligence report by Sumo Logic highlights the rise of Multi-Cloud adoption and open source technologies like Kubernetes
Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more
CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed
Melisha Dsouza
08 Nov 2018
3 min read

Facebook GEneral Matrix Multiplication (FBGEMM), high performance kernel library, open sourced, to run deep learning models efficiently

Yesterday (on the 7th of November), Facebook open sourced its high-performance kernel library FBGEMM: Facebook GEneral Matrix Multiplication. The library offers optimized on-CPU performance for the reduced-precision calculations used to accelerate deep learning models, and it has delivered 2x performance gains when deployed at Facebook (in comparison to their current production baseline). Users can deploy it using the Caffe2 front end, and it will soon be callable directly by the PyTorch 1.0 Python front end.

Features of FBGEMM

FBGEMM is optimized for server-side inference. It delivers accuracy and efficiency when performing quantized inference using contemporary deep learning frameworks. It is a low-precision, high-performance matrix-matrix multiplication and convolution library that enables large-scale production servers to run the most powerful deep learning models efficiently. The library exploits opportunities to overcome the unique challenges of matrix multiplication at lower precision with bandwidth-bound pre- and post-GEMM operations.

At Facebook, FBGEMM has benefited many AI services: it increased the speed of English-to-Spanish translations by 1.3x, reduced DRAM bandwidth usage in the recommendation system used in feeds by 40%, and sped up character detection by 2.4x in Rosetta, the machine learning system for understanding text in images and videos.

FBGEMM supplies the modular building blocks needed to construct an overall GEMM pipeline by plugging and playing different front-end and back-end components. It combines small compute with bandwidth-bound operations and exploits cache locality by fusing post-GEMM operations with the macro kernel, while providing support for accuracy-loss-reducing operations.

Why does GEMM matter?

Floating point operations (FLOPs) are mostly consumed by fully connected (FC) operators in the deep learning models deployed in Facebook's data centers.
These FC operators are just plain GEMM, which means that their overall efficiency directly depends on GEMM efficiency. 19% of these deep learning frameworks at Facebook implement convolution as im2col followed by GEMM. However, straightforward im2col adds overhead from the copy and replication of input data, so some deep learning libraries implement direct (im2col-free) convolution for improved efficiency. FBGEMM provides a way to fuse im2col with the main GEMM kernel to minimize im2col overhead.

Facebook says that recent industry and research work has indicated that inference using mixed precision works well without adversely affecting accuracy, and FBGEMM uses this as an alternative strategy to improve inference performance with quantized models. Also, newer generations of GPUs, CPUs, and specialized tensor processors natively support lower-precision compute primitives, so the deep learning community is moving toward low-precision models. FBGEMM provides a way to perform efficient quantized inference on the current and upcoming generation of CPUs.

Head over to Facebook's official blog to understand more about this library and how it is implemented.

A new data breach on Facebook due to malicious browser extensions allowed almost 81,000 users' private data up for sale, reports BBC News
90% Google Play apps contain third-party trackers, share user data with Alphabet, Facebook, Twitter, etc: Oxford University Study
Facebook open sources a set of Linux kernel products including BPF, Btrfs, Cgroup2, and others to address production issues
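The low-precision idea behind quantized GEMM can be sketched in a few lines of NumPy: quantize float32 matrices to int8, multiply with int32 accumulation, then dequantize. This is a toy illustration of the concept, not FBGEMM's actual API:

```python
import numpy as np

np.random.seed(0)

def quantize(x, scale):
    # Map float32 values to int8 with a simple symmetric scheme.
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

a = np.random.rand(4, 8).astype(np.float32)
b = np.random.rand(8, 3).astype(np.float32)

a_scale = float(a.max()) / 127
b_scale = float(b.max()) / 127

qa = quantize(a, a_scale)
qb = quantize(b, b_scale)

# Integer GEMM: accumulate in int32 to avoid overflow, then dequantize.
acc = qa.astype(np.int32) @ qb.astype(np.int32)
approx = acc * (a_scale * b_scale)

exact = a @ b
print(np.abs(approx - exact).max())  # small quantization error
```

The int8 operands halve (or quarter) the memory traffic of float32, which is why the bandwidth-bound operators around the GEMM benefit so much.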
Melisha Dsouza
04 Oct 2018
3 min read

Oath's distributed network telemetry collector- 'Panoptes' is now Open source!

Yesterday, the Oath network automation team open sourced Panoptes, a distributed system for collecting, enriching, and distributing network telemetry. This pluggable, distributed, and high-performance data collection system supports multiple polling formats, including SNMP and vendor-specific APIs. It also supports emerging streaming telemetry standards, including gNMI. Panoptes is written primarily in Python and leverages multiple open-source technologies to provide the most value for the least development effort.

Panoptes Architecture (Source: Yahoo Developers)

The architecture is designed to enable easy data distribution and integration with other systems. The plugin that pushes metrics into InfluxDB allows Panoptes to evolve with industry standards, and the combination of Grafana and the InfluxData ecosystem lets teams quickly set up a fully featured monitoring environment.

Legacy polling systems had multiple inherent issues: overpolling due to multiple point solutions for metrics, and a lack of data normalization, consistent data enrichment, and integration with infrastructure discovery systems. Panoptes aims to overcome all of these.

Check scheduling is accomplished using Celery, a horizontally scalable, open-source scheduler that utilizes a Redis data store. Panoptes ships with a simple, CSV-based discovery system that can be integrated with a CMDB; from there, Panoptes will manage the task of scheduling polling for the desired devices. Users can also develop custom discovery plugins to integrate with their CMDB and other device inventory data sources.

Vendors are moving toward a more streamlined model of telemetry, and Panoptes' flexible architecture will minimize the effort required to adopt these new protocols. The metric bus at the center of the model is implemented on Kafka. All data plane transactions flow across this bus. Discovery plugins publish devices to the bus and polling plugins publish metrics to the bus.
Similarly, numerous clients read the data off the bus for additional processing and forwarding. The team at Oath has deployed Panoptes in a tiered, federated model and has developed numerous custom applications on the platform, including a load balancer monitor, a BGP session monitor, and a topology discovery application, all at a reduced cost thanks to Panoptes.

This open-source release is packaged for easy deployment into any Linux-based environment and is available on GitHub. You can head over to the Yahoo Developer Network for deeper insights into this news.

Performing Vehicle Telemetry job analysis with Azure Stream Analytics tools
Anaconda 5.3.0 released, takes advantage of Python's Speed and feature improvements
Arm releases free Cortex-M processor cores for FPGAs, includes measures to combat FOSSi threat
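The publish/subscribe flow around the metric bus can be sketched with a tiny in-memory stand-in for Kafka; the topic names and message shapes below are illustrative, not Panoptes' actual schema:

```python
from collections import defaultdict

class MetricBus:
    """A minimal in-memory stand-in for the Kafka metric bus."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Clients such as an InfluxDB forwarder register per-topic handlers.
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Plugins publish; every subscriber on the topic sees the message.
        for handler in self._subscribers[topic]:
            handler(message)

bus = MetricBus()
discovered, metrics = [], []
bus.subscribe("devices", discovered.append)   # e.g. the polling scheduler
bus.subscribe("metrics", metrics.append)      # e.g. the InfluxDB forwarder

# A discovery plugin publishes a device; a polling plugin publishes metrics.
bus.publish("devices", {"host": "switch1.example.com", "method": "snmp"})
bus.publish("metrics", {"host": "switch1.example.com", "ifInOctets": 123456})

print(len(discovered), len(metrics))  # 1 1
```

Because producers and consumers only share topics, new clients (alerting, long-term storage) can be attached without touching the plugins, which is the point of putting the bus at the center of the design.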
Savia Lobo
10 Jan 2019
3 min read

Black Hat hackers used IPMI cards to launch JungleSec Ransomware, affects most of the Linux servers

Unsecured IPMI (Intelligent Platform Management Interface) cards have opened a gateway for the JungleSec ransomware, which has affected multiple Linux servers. The ransomware attack was originally reported in early November 2018. Victims were seen on Windows, Linux, and Mac; however, there was no trace of how they were being infected. Black Hat hackers have been using IPMI cards to gain access and install the JungleSec ransomware, which encrypts data and demands a 0.3 bitcoin payment (about $1,100) for the unlock key.

IPMI is a management interface built into server motherboards or installed as an add-on card. It enables administrators to remotely manage a computer: power it on and off, get system information, and access a KVM for remote console control. IPMI is also useful for managing servers, especially when renting servers from another company at a remote colocation center. However, if the IPMI interface is not properly configured, it can allow attackers to remotely connect to and take control of servers using default credentials.

Bleeping Computer said they have "spoken to multiple victims whose Linux servers were infected with the JungleSec Ransomware and they all stated the same thing; they were infected through unsecured IPMI devices". Bleeping Computer first reported this story on Dec 26, indicating that the hack only affected Linux servers. The attackers installed the JungleSec ransomware through the server's IPMI interface. In conversations Bleeping Computer had with two of the victims, one said that the IPMI interface was using the default manufacturer passwords, while the other stated that the Admin user was disabled but the attacker was still able to gain access through possible vulnerabilities. Once the attackers were successful in gaining access to the servers, they would reboot the computer into single user mode in order to gain root access.
Once in single user mode, they downloaded and compiled the 'ccrypt' encryption program.

To secure the IPMI interface, the first step is to change the default password, as most of these cards ship with the default credentials Admin/Admin. "Administrators should also configure ACLs that allow only certain IP addresses to access the IPMI interface. In addition, IPMI interfaces should be configured to only listen on an internal IP address so that it is only accessible by local admins or through a VPN connection", Bleeping Computer reports. The report also includes a tip from Negulescu, not specific to IPMI interfaces, which suggests adding a password to the GRUB bootloader. Doing so will make it more difficult, if not impossible, to reboot into single user mode from the IPMI remote console.

To know more about this news in detail, head over to Bleeping Computer's complete coverage.

Go Phish! What do thieves get from stealing our data?
Hackers are our society's immune system – Keren Elazari on the future of Cybersecurity
Sennheiser opens up about its major blunder that let hackers easily carry out man-in-the-middle attacks
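Assuming a BMC reachable with ipmitool and still on factory credentials, the hardening steps above might look like the following; the user ID, channel number, and addresses are illustrative and vary by vendor:

```shell
# Replace the factory password (user ID 2 is often the Admin user;
# confirm with `ipmitool user list 1`).
ipmitool -I lanplus -H 192.168.100.50 -U ADMIN -P ADMIN \
    user set password 2 'a-long-unique-passphrase'

# Pin the BMC to a static, internal-only address reachable via VPN.
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.168.100.50

# Password-protect GRUB so single user mode requires authentication:
# generate a hash with grub-mkpasswd-pbkdf2, then add to /etc/grub.d/40_custom:
#   set superusers="root"
#   password_pbkdf2 root <hash from grub-mkpasswd-pbkdf2>
grub-mkpasswd-pbkdf2
update-grub
```

These are configuration commands against real hardware, so adapt them to the specific BMC and distribution before running.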
Richard Gall
25 Jul 2019
3 min read

GitHub has blocked an Iranian software developer's account

GitHub's importance to software developers can't be overstated. In the space of a decade it has become central to millions of people's professional lives. For it to be taken away must be incredibly hard to take: not only does it cut you off from your work, it also cuts you off from your identity as a developer. But that's what appears to have happened today to Hamed Saeedi, an Iranian software developer.

Writing on Medium, Saeedi revealed that he received an email from GitHub explaining that his account has been restricted "due to U.S. trade controls law restrictions." As Saeedi notes, he is not a paying GitHub customer and only uses the free services, which makes the fact that he has been blocked by the platform surprising.

Does GitHub really think a developer is developing dangerous software in a public repo?

Digging into the terms and conditions around U.S. trade laws, Saeedi found a paragraph stating that the platform cannot "...be used for services prohibited under applicable export control laws, including purposes related to the development, production, or use of nuclear, biological, or chemical weapons or long range missiles or unmanned aerial vehicles." The implication, in Saeedi's reading at least, is that he is using GitHub for precisely that.

The impact of this move is massive for Saeedi. The incident has echoes of Slack terminating Iranian users' accounts at the end of 2018, but, as one Twitter user noted, this is even more critical because "GitHub is hosting all the efforts of a programmer/engineer."

How have GitHub and the developer community responded?

GitHub hasn't, as of writing, responded publicly to the incident. However, it would be reasonable to assume that the organization would lean heavily on existing trade sanctions against Iran as an explanation for its actions. The ethical and moral implications of that notwithstanding, it's a move that would protect the company.
Given increased scrutiny of the geopolitical impact of technology, and current Iran/U.S. tensions, perhaps it isn't that surprising. But the move has received condemnation from a number of developers on Twitter. One commented on the need to break up GitHub's monopoly, while another suggested that the incident emphasised the importance of #deletegithub, a small movement that sees GitHub (and other ostensibly 'free' software) as compromised and failing to live up to the ideals of free and open source software.

Mikhail Novikov, a developer on the GatsbyJS team, had words of solidarity for Saeedi, reading the situation in the context of the U.S. President's rhetoric toward Iran: https://twitter.com/freiksenet/status/1154297497290006528?s=20

It appears that other Iranian users have been affected in the same way; however, it remains unclear to what extent GitHub has been restricting Iranian accounts.
Natasha Mathur
28 Sep 2018
2 min read

IPython 7.0 releases with AsyncIO Integration and new Async libraries

The IPython team released version 7.0 of IPython yesterday. IPython is a powerful interactive Python shell with features such as advanced tab completion, syntax coloring, and more. IPython 7.0 brings new features such as AsyncIO integration, support for new async libraries, and async support in notebooks. IPython (Interactive Python) provides a rich toolkit for interactive computing in multiple programming languages, and it's the Jupyter kernel for Python used by millions of users. Let's discuss the key features of the IPython 7.0 release.

AsyncIO Integration

IPython 7.0 integrates IPython with asyncio. This means you no longer have to import or learn about asyncio before running async code interactively. asyncio is a library that lets you write concurrent code using the async/await syntax, and it is used as a foundation for multiple Python asynchronous frameworks providing high-performance network and web servers, database connection libraries, distributed task queues, and more. Just remember that asyncio won't magically make your code faster; it makes concurrent code easier to write.

New Async Libraries (Curio and Trio integration)

Python's async and await keywords simplify asynchronous programming and have driven standardization around asyncio. They also allow experimentation with new paradigms for asynchronous libraries. IPython 7.0 adds integration with two such libraries, Curio and Trio. Both explore ways to write asynchronous programs, and how to use async, await, and coroutines when starting from a blank slate. Curio is a library for performing concurrent I/O and common system programming tasks, making use of Python coroutines and the explicit async/await syntax. Trio is an async/await-native I/O library for Python that lets you write programs doing multiple things at the same time with parallelized I/O.
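As a standalone-script sketch of the async/await style that IPython 7.0 now lets you type directly at the prompt (inside IPython you could simply `await` these coroutines without the `asyncio.run` boilerplate):

```python
import asyncio

async def fetch(name, delay):
    # Stand-in for an I/O-bound operation such as a network call.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # gather runs both coroutines concurrently, so the total wall time
    # is roughly max(delays) rather than their sum.
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))

results = asyncio.run(main())
print(results)  # ['a done', 'b done']
```

The concurrency comes from `gather` interleaving the two coroutines on one event loop; no threads are involved.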
Async support in Notebooks

For Jupyter users, async code will now work in a notebook when using ipykernel. With IPython 7.0, async works with all frontends that support the Jupyter protocol, including the classic Notebook, JupyterLab, Hydrogen, nteract desktop, and nteract web. By default, code runs in the existing asyncio/Tornado loop that runs the kernel.

For more information, check out the official release notes.

Make Your Presentation with IPython
How to connect your Vim editor to IPython
Increase your productivity with IPython