
Tech News


Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0

Vincy Davis
24 Jul 2019
5 min read
Yesterday, the Julia team announced the alpha release of v1.3.0, an early preview of Julia version 1.3.0, which is expected to be out in a couple of months. The alpha release includes a preview of a new threading interface for Julia programs called multi-threaded task parallelism. The task parallelism model allows pieces of a program to be marked for parallel execution as "tasks", which the runtime schedules to run simultaneously on the available threads. This works much like a garbage collection (GC) model: users can freely spawn millions of tasks without worrying about how the libraries they call are implemented. This portable model is available across all Julia packages.

Read Also: Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]

Jeff Bezanson and Jameson Nash from Julia Computing, and Kiran Pamnany from Intel, say the Julia task parallelism is "inspired by parallel programming systems like Cilk, Intel Threading Building Blocks (TBB) and Go". With multi-threaded task parallelism, the Julia model can schedule many parallel tasks that call library functions. This works smoothly because the CPUs are not oversubscribed with threads, an important property for high-level languages, which call library functions frequently.

Challenges faced while implementing task parallelism

Allocating and switching task stacks
Each task requires its own execution stack, distinct from the usual process or thread stacks provided by Unix operating systems. Julia has an alternate implementation of stack switching that trades time for memory when a task switches; however, it may not be compatible with foreign code that uses cfunction. This implementation is used when stacks would otherwise consume large amounts of address space.

Event loop thread issues an async signal
If a thread needs the event loop thread to wake up, it issues an async signal. This may be because another thread has scheduled new work, a thread is beginning to run garbage collection, or a thread wants to take the I/O lock to perform I/O.

Task migration across system threads
In general, a task may start running on one thread, block for a while, and then restart on another. Julia uses thread-local variables every time memory is allocated internally. Currently, a task always runs on the thread it started on. To support this, Julia uses the concept of a sticky task, which must run on a given thread, together with per-thread queues for the running tasks associated with each thread.

Sleeping idle threads
To avoid keeping CPUs at 100% usage all the time, idle threads are put to sleep. This can lead to a synchronization problem, as some threads might be scheduled for new work while others are asleep.

Overhead of a dedicated scheduler task
When a task blocks, the scheduler is called to pick another task to run. But on what stack does that code run? It is possible to have a dedicated scheduler task; however, there may be less overhead if the scheduler code runs in the context of the recently blocked task. One suggested measure is to pull a task out of the scheduler queue to avoid switching away.

Classic bugs
The Julia team faced many difficult bugs while implementing the multi-threaded functionality. One of the many bugs was a mysterious one on Windows that got fixed by flipping a single bit.
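The Julia blog has the authoritative examples of the new interface; purely as a loose conceptual analogy in Python (not Julia code), the idea of spawning many cheap tasks and letting a scheduler map them onto a small, fixed pool of threads looks roughly like this:

from concurrent.futures import ThreadPoolExecutor

def work(i):
    # Stand-in for an independent unit of work; in the task-parallel model you
    # spawn one task per item and let the scheduler pick which thread runs it.
    return i * i

# A small, fixed pool of worker threads...
with ThreadPoolExecutor(max_workers=4) as pool:
    # ...onto which a large number of independent tasks are scheduled.
    results = list(pool.map(work, range(100_000)))

print(sum(results))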
Future goals for Julia version 1.3.0
- Increase performance of task switches and reduce I/O latency
- Allow task migration
- Use multiple threads in the compiler
- Improve debugging tools
- Provide alternate schedulers

Developers are impressed with the new multithreaded parallelism functionality. A user on Hacker News comments, "Great to see this finally land - thanks for all the team's work. Looking forward to giving it a whirl. Threading is something of a prerequisite for acceptance as a serious language among many folks. So great to not just check that box, but to stick the pen right through it. The devil is always in the details, but from the doc the interface looks pretty nice." Another user says, "This is huge! I was testing out the master branch a few days ago and the parallelism improvements were amazing."

Many users expect Julia to challenge Python in the future. A comment on Hacker News reads, "Not only is this huge for Julia, but they've just thrown down the gauntlet. The status quo has been upset. I expect Julia to start eating everyone's lunch starting with Python. Every language can use good concurrency & parallelism support and this is the biggest news for all dynamic languages." Another user says, "I worked in a computational biophysics department with lots of python/bash/R and I was the only one who wrote lots of high-performance code in Julia. People were curious about the language but it was still very much unknown. Hope we will see a broader adoption of Julia in the future - it's just that it is much better for the stuff we do on a daily basis."

To learn how to implement task parallelism in Julia, head over to the Julia blog.

Mozilla is funding a project for bringing Julia to Firefox and the general browser environment
Announcing Julia v1.1 with better exception handling and other improvements
Julia for machine learning. Will the new language pick up pace?


Apple advanced talks with Intel to buy its smartphone modem chip business for $1 billion, reports WSJ

Bhagyashree R
24 Jul 2019
3 min read
On Monday, the Wall Street Journal reported that Apple is in advanced talks to buy Intel's smartphone-modem business for at least $1 billion, citing people familiar with the matter. The Apple-Intel deal, which would cover a portfolio of patents and staff, is expected to be confirmed in the coming week. According to the report, the companies started discussing the deal last summer, around the time Intel's former CEO Brian Krzanich resigned. However, the talks broke off when Apple signed a multiyear supply agreement for modems with Qualcomm in April to settle a longstanding legal dispute between the two companies over the royalties Qualcomm charges for its smartphone modems. After Apple's settlement with Qualcomm, Intel announced its plans to exit the 5G smartphone modem business. The company's new CEO, Bob Swan, said in a press release that there is no "path to profitability and positive returns" for Intel in the smartphone modem business. Intel then opened the offer to other companies but eventually resumed talks with Apple, which is seen as the "most-logical buyer" for its modem business.

How will this deal benefit Apple

This move will help Apple jumpstart its efforts to make modem chips in-house. In recent years, Apple has been expanding its presence in the components market to eliminate its dependence on other companies for hardware and software in its devices. It now designs its own application processors, graphics chips, Bluetooth chips, and security chips. Last year, Apple acquired patents, assets, and employees from Dialog Semiconductor, a British chipmaker, as part of a $600 million deal to bring power management designs in-house. With this deal, the tech giant will get access to Intel's engineering work and talent to help in the development of modem chips for the crucial next generation of wireless technology known as 5G, potentially saving years of development work.

How will this deal benefit Intel

The deal will allow Intel to part ways with a business that hasn't been very profitable for the company. "The smartphone operation had been losing about $1 billion annually, a person familiar with its performance has said, and has generally failed to live up to expectations," the report reads. After its exit from the 5G smartphone modem business, the company wants to focus on 5G network infrastructure.

Read the full story on the Wall Street Journal.

Apple patched vulnerability in Mac's Zoom Client; plans to address 'video on by default'
OpenID Foundation questions Apple's Sign In feature, says it has security and privacy risks
Apple gets into chip development and self-driving autonomous tech business


Dropbox walks back its own decision; brings back support for ZFS, XFS, Btrfs, and eCryptFS on Linux

Vincy Davis
23 Jul 2019
3 min read
Today, Dropbox notified users that it has brought back support for ZFS and XFS on 64-bit Linux systems, and Btrfs and eCryptFS on all Linux systems, in its Beta Build 77.3.127. The support note in the Dropbox forum reads, "Add support for zfs (on 64-bit systems only), eCryptFS, xfs (on 64-bit systems only), and btrfs filesystems in Linux."

Last year in November, Dropbox had notified users that it was "ending support for Dropbox syncing to drives with certain uncommon file systems. The supported file systems are Ext4 filesystem on Linux, NTFS for Windows, and HFS+ or APFS for Mac." Dropbox explained that a supported file system is necessary because Dropbox uses extended attributes (X-attrs) to identify files in the Dropbox folder and keep them in sync. The post also mentioned that Dropbox would support only the most common file systems that support X-attrs, to ensure stability and a consistent experience for its users.

After Dropbox discontinued support for these Linux file systems, many developers switched to other services such as Google Drive, Box, and others. This is speculated to be one of the reasons why Dropbox has reversed its earlier decision; however, Dropbox has not yet made an official statement about why support has been brought back.

Many users have expressed resentment over Dropbox's back-and-forth. A user on Hacker News says, "Too late. I have left Dropbox because of their stance on Linux filesystems, price bump with unnecessary features, and the continuous badgering to upgrade to its business. It's a great change though for those who are still on Dropbox. Their sync is top-notch." A Redditor comments, "So after I stopped using Dropbox they do care about me as a user after all? Linux users screamed about how nonsensical the original decision was. Maybe ignoring your users is not such a good idea after all? I moved to Cozy Drive - it's not perfect, but has native Linux client, is Europe based (so I am protected by EU privacy laws) and is pretty good as almost drop-in replacement." Another Redditor said, "Too late for me, I was a big dropbox user for years, they dropped support for modern file systems and I dropped them. I started using Syncthing to replace the functionality I lost with them."

A few developers are still happy to see Dropbox supporting these Linux file systems again. A user on Hacker News comments, "That's good news. Happy to see Dropbox thinking about the people who stuck with them from day 1. In the past few years they have been all over the place, trying to find their next big thing and in the process also neglecting their non-enterprise customers. Their core product is still the best in the market and an important alternative to Google."

Fedora announces the first preview release of Fedora CoreOS as an automatically updating Linux OS for containerized workloads
Linux Mint 19.2 beta releases with Update Manager, improved menu and much more!
Linux kernel announces a patch to allow 0.0.0.0/8 as a valid address range
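The extended attributes (X-attrs) mentioned above are small key-value pairs that a filesystem can attach to a file alongside its contents, which is why the underlying filesystem has to support them. Purely as an illustration of the mechanism, and not Dropbox's actual implementation, here is how extended attributes can be read and written from Python on Linux (the attribute name and value are made up):

import os

# Create a throwaway file to attach metadata to.
path = "xattr_demo.txt"
with open(path, "w") as f:
    f.write("hello")

# Attach a small piece of metadata; user-defined attributes live in the
# "user." namespace. The name and value here are purely illustrative.
os.setxattr(path, "user.example_sync_id", b"1234-abcd")

print(os.listxattr(path))                          # ['user.example_sync_id']
print(os.getxattr(path, "user.example_sync_id"))   # b'1234-abcd'

os.remove(path)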


TypeScript 3.6 beta is now available!

Amrata Joshi
23 Jul 2019
2 min read
Last week, the team behind TypeScript announced the availability of TypeScript 3.6 Beta. The full release of TypeScript 3.6 is scheduled for the end of next month, with a Release Candidate coming a few weeks prior.

What's new in TypeScript 3.6?

Stricter checking
TypeScript 3.6 comes with stricter checking for iterators and generator functions. Earlier versions didn't let users of generators differentiate whether a value was yielded or returned from a generator. With TypeScript 3.6, users can narrow down values from iterators while dealing with them.

Simpler emit
The emit for constructs like for/of loops and array spreads can be a bit heavy, so TypeScript opts for a simpler emit by default that supports array types, and supports iterating on other types using the --downlevelIteration flag. With this flag, the emitted code is more accurate, but larger.

Semicolon-aware code edits
Older versions of TypeScript added semicolons to the end of every statement, which was not appreciated by many users whose style guidelines said otherwise. TypeScript 3.6 can now detect whether a file uses semicolons when applying edits, and if a file lacks semicolons, TypeScript doesn't add them.

DOM updates
The following are a few of the declarations that have been removed or changed within lib.dom.d.ts:
- WindowOrWorkerGlobalScope is used instead of GlobalFetch.
- Non-standard properties on Navigator no longer exist.
- The webgl or webgl2 context is used instead of the experimental-webgl context.

To know more about this news, check out the official post.

Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more
TypeScript 3.5 releases with 'omit' helper, improved speed, excess property checks and more
Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more


International cybercriminals exploited Citrix internal systems for six months using password spraying technique

Savia Lobo
23 Jul 2019
4 min read
On March 8 this year, the American cloud computing firm Citrix revealed a data breach in which international cybercriminals gained access to its internal network. The FBI had informed the company about the incident on March 6. Soon after the incident was reported by the FBI, Citrix initiated a forensic investigation while securing its network. Today, the company announced it has concluded the investigation and shared a report of its findings and its future plan of action to improve security. Following the incident, Eric Armstrong, Citrix's Vice President of Corporate Communications, updated users on the investigation twice, on April 4 and May 24, before releasing the final report today.

Attackers used the 'password spraying' technique to exploit weak passwords

In both updates, Armstrong said the company had identified password spraying, a technique that exploits weak passwords, as the likely method used for the data breach. He said the company had also performed a forced password reset throughout the Citrix corporate network and improved internal password management protocols. Based on the ongoing investigation, Armstrong revealed they had found no evidence that the threat actors discovered or exploited any vulnerabilities within Citrix products or services to gain entry, and no evidence of compromise of the customer cloud service.

Investigation reveals criminals were lurking for "six months" within Citrix's internal systems

In its final report, Citrix revealed that the cybercriminals had access to its internal network between October 13, 2018, and March 8, 2019, and stole business documents and files from a company shared network drive used to store current and historical business documents, as well as from a drive associated with a web-based tool used by Citrix for consulting purposes. The investigation also speculates that the criminals may have "accessed the individual virtual drives and company email accounts of a very limited number of compromised users and launched without further exploitation a limited number of internal applications", David Henshall, President and CEO of Citrix, writes. "Importantly, we found no compromise or exfiltration beyond what has been previously disclosed," he added.

Citrix was also warned by Resecurity before the FBI

When the data breach was revealed on March 8 on Citrix's official website, security firm Resecurity wrote that it had warned Citrix of the attack on December 28, 2018. Resecurity also said the attack may have been carried out by the Iranian group called "IRIDIUM", and mentioned "at least 6 terabytes of sensitive data stored in the Citrix enterprise network, including e-mail correspondence, files in network shares and other services used for project management and procurement." On March 6, when the FBI contacted Citrix, "they had reason to believe that international cybercriminals gained access to the internal Citrix network", Stan Black, Citrix's chief security and information officer, wrote in a blog post. Henshall says, "The cybercriminals have been expelled from our systems". Experts are taking a close look at the documents that may have been accessed or stolen during the incident. "We have notified, or shortly will notify, the limited number of customers who may need to consider additional protective steps", Henshall said.
Along with performing a global password reset and improving internal password management, Citrix has:
- improved its firewall logging,
- extended its data exfiltration monitoring capabilities,
- removed internal access to non-essential web-based services, and
- disabled non-essential data transfer pathways.
The company has also deployed FireEye's endpoint agent technology across its systems for continuous monitoring. Although Resecurity claimed that 6TB of data might have been compromised, the company has not shared how many users were affected by the breach, but it has assured that it will notify those who need to take additional protective steps. To know more about this news in detail, read Citrix's official blog post.

Getting Started – Understanding Citrix XenDesktop and its Architecture
British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach
US Customs and Border Protection reveal data breach that exposed thousands of traveler photos and license plate images
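Password spraying, the technique named above, generally means trying a small set of common passwords against many different accounts, which keeps each individual account below lockout thresholds. Citrix has not described its detection logic; purely as an illustrative sketch, a naive heuristic over failed-login records could flag source addresses that fail against unusually many distinct accounts:

from collections import defaultdict

# Hypothetical failed-login events: (source_ip, username).
failed_logins = [
    ("203.0.113.7", "alice"),
    ("203.0.113.7", "bob"),
    ("203.0.113.7", "carol"),
    ("198.51.100.2", "alice"),
]

THRESHOLD = 3  # distinct accounts attempted from one IP before we flag it

accounts_per_ip = defaultdict(set)
for ip, user in failed_logins:
    accounts_per_ip[ip].add(user)

for ip, users in accounts_per_ip.items():
    if len(users) >= THRESHOLD:
        print(f"possible password spraying from {ip}: {len(users)} accounts")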


Plotly 4.0, popular Python data visualization framework, releases with Offline Only, Express First, Displayable Anywhere features

Fatema Patrawala
23 Jul 2019
3 min read
Yesterday, the Plotly team announced the release of Plotly.py 4.0, which is now available for download from PyPI. This version includes some exciting new features and changes, including a switch to "offline" mode by default, the inclusion of Plotly Express as the recommended entry point into the library, and a new rendering framework compatible not only with Jupyter notebooks but also with other notebook systems such as Colab, Azure and Kaggle notebooks, as well as popular IDEs such as PyCharm, VSCode, Spyder and others. To upgrade to the latest version, you can run pip install plotly==4.0.0 or conda install -c plotly plotly==4.0.0. More details can be found in the Getting Started and Migrating to Version 4 guides.

Let us check out the key features in Plotly 4.0.

Offline Only
Prior versions of plotly contained functionality for creating figures in both "online" and "offline" modes. In "online" mode, figures were uploaded to an instance of Plotly's Chart Studio service and then displayed, whereas in "offline" mode figures were rendered locally. This duality was a common source of confusion for several years, so in version 4 the team made some important changes to clear it up. In this version, the only supported mode of operation in the plotly package is "offline" mode, which requires no internet connection, no account, no authentication tokens, and no payment of any kind. Support for "online" mode has been moved into a separately installed package called chart-studio.

Express First
Earlier this year the team released a standalone library called Plotly Express, aimed at making it significantly easier and faster to create plotly figures from tidy data, as easy as a single line of Python. Plotly Express was extremely well received by the community, and starting with version 4, plotly includes Plotly Express built in, accessible as plotly.express.

Displayable anywhere
In addition to "offline" mode, the plotly.offline package has been reimplemented on top of a new extensible renderers framework which enables Plotly figures to be displayed not only in Jupyter notebooks, but just about anywhere, including:
- JupyterLab and the classic Jupyter notebook
- Other notebooks like Colab, nteract, Azure and Kaggle
- IDEs and CLIs like VSCode, PyCharm, QtConsole and Spyder
- Other contexts such as sphinx-gallery
- Dash apps (with dash_core_components.Graph())
- Static raster and vector files (with fig.write_image())
- Standalone interactive HTML files (with fig.write_html())
- Embedded into any website (with fig.to_json() and Plotly.js)

In addition to the above new features, there are other changes, such as a new default theme in Plotly.py 4.0. The team has introduced a suite of new figure methods for updating figures after they have been constructed. The release also supports all subplot and trace types: 2D, 3D, polar, ternary, maps, pie charts, sunbursts, Sankey diagrams, etc. Plotly.py 4.0 is also supported by JupyterLab 1.0. To know about these feature updates in detail, check out the Medium post by the Plotly team.

Plotly releases Dash DAQ: a UI component library for data acquisition in Python
plotly.py 3.0 releases
Python in Visual Studio Code released with enhanced Variable Explorer, Data Viewer, and more!
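As a quick illustration of the Plotly Express entry point and the offline export helpers mentioned above (using one of the sample datasets bundled with Plotly Express):

import plotly.express as px

# Load a bundled sample dataset and build a figure in a single call.
df = px.data.iris()
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species")

# Render through the new renderers framework (notebook, IDE, or browser)...
fig.show()

# ...or export locally, with no online account, since v4 is offline-only.
fig.write_html("iris.html")   # standalone interactive HTML file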

Following Epic Games, Ubisoft joins Blender Development fund; adopts Blender as its main DCC tool

Vincy Davis
23 Jul 2019
5 min read
Yesterday, Ubisoft Animation Studio (UAS) announced that it will fund the development of Blender as a corporate Gold member through the Blender Foundation's Development Fund. Ubisoft will also adopt the open-source animation software Blender as its main digital content creation (DCC) tool. The exact funding amount has not been disclosed. Gold corporate members of the Blender Development Fund get their logo displayed prominently on the blender.org dev fund page, are credited as a Corporate Gold Member on blender.org and in official Blender Foundation communication, and have a strong voice in approving projects for Blender. Gold corporate members donate a minimum of EUR 30,000 for as long as they remain members.

Pierrot Jacquet, Head of Production at UAS, mentioned in the press release, "Blender was, for us, an obvious choice considering our big move: it is supported by a strong and engaged community, and is paired up with the vision carried by the Blender Foundation, making it one of the most rapidly evolving DCCs on the market." He also believes that since Blender is an open source project, it will allow Ubisoft to share some of its own developed tools with the community. "We love the idea that this mutual exchange between the foundation, the community, and our studio will benefit everyone in the end", he adds.

As part of its new workflow, Ubisoft is creating a development environment supported by open source and inner source solutions. Blender will replace Ubisoft's in-house digital content creation tool and will be used to produce short content with the incubator. Later, Blender will also be used in Ubisoft's upcoming shows in 2020. Per Jacquet, Blender 2.8 will be a "game-changer for the CGI industry". The Blender 2.8 beta is already out, and its stable version is expected to be released in the coming days. Ubisoft was impressed with the growth of the internal Blender community as well as with the innovations expected in Blender 2.8, which will have a revamped UX, Grease Pencil, EEVEE real-time rendering, and new 3D viewport and UV editor tools to enhance the user experience. Ubisoft was thus convinced that this is the "right time to bring support to our artists and productions that would like to add Blender to their toolkit."

This news comes a week after Epic Games announced that it is awarding the Blender Foundation $1.2 million in cash, spanning three years, to accelerate the quality of its software development projects. With two big companies funding Blender, the future does look bright for the project. The Blender 2.8 preview features likely prompted both companies to step forward and support Blender, as both Epic and Ubisoft announced their funding just days before the stable release of Blender 2.8. In addition to Epic and Ubisoft, corporate members include the animation studio Tangent, Valve, Intel, Google, and Canonical of the Ubuntu Linux distribution. Ton Roosendaal, founder and chairman of the Blender Foundation, is surely a happy man when he says that "Good news keeps coming". He added, "it's such a miracle to witness the industry jumping on board with us! I've always admired Ubisoft, as one of the leading games and media producers in the world.
I look forward to working with them and help them find their ways as a contributor to our open source projects on blender.org."
https://twitter.com/tonroosendaal/status/1153376866604113920

Users are very happy and feel that this is a big step forward for Blender.
https://twitter.com/nazzagnl/status/1153339812105064449
https://twitter.com/Nahuel_Belich/status/1153302101142978560
https://twitter.com/DJ_Link/status/1153300555986550785
https://twitter.com/cgmastersnet/status/1153438318547406849

Many also see this move as the industry's way of sidelining Autodesk, whose DCC tools are widely used in the industry.
https://twitter.com/flarb/status/1153393732261072897

A Hacker News user comments, "Kudos to blender's marketing team. They get a bit of free money from this. But the true motive for Epic and Unisoft is likely an attempt to strong-arm Autodesk into providing better support and maintenance. Dissatisfaction with Autodesk, lack of care for their DCC tools has been growing for a very long time now, but studios also have a huge investment into these tools as part of their proprietary pipelines. Expect Autodesk to kowtow soon and make sure that none of these companies will make the switch. If it means that Autodesk actually delivers bug fixes for the version the customer has instead of one or two releases down the road, it is a good outcome for the studios."

Visit the Ubisoft website for more details.

CraftAssist: An open-source framework to enable interactive bots in Minecraft by Facebook researchers
What to expect in Unreal Engine 4.23?
Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold 'Em Poker


Introducing Deep TabNine, a language-agnostic autocompleter based on OpenAI’s GPT-2

Bhagyashree R
23 Jul 2019
3 min read
TabNine is a language-agnostic autocompleter that leverages machine learning to provide responsive, reliable, and relevant code suggestions. In a blog post shared last week, Jacob Jackson, TabNine's creator, introduced Deep TabNine, which uses deep learning to significantly improve suggestion quality.

What is Deep TabNine?

Deep TabNine is based on OpenAI's GPT-2 model, which uses the Transformer architecture. Although this architecture was intended for solving problems in natural language processing, Deep TabNine uses it to understand the English in code. For instance, the model can negate words within an if/else statement. During training, the model's goal is to predict the next token given the tokens that come before it. Trained on nearly 2 million files from GitHub, Deep TabNine comes with pre-existing knowledge, instead of learning only from a user's current project. Additionally, the model also refers to documentation written in natural language to infer function names, parameters, and return types. It is capable of using small clues that are difficult for a traditional tool to access. For instance, it understands that the return type of app.get_user() is assumed to be an object with setter methods and the return type of app.get_users() is assumed to be a list.

How can you access Deep TabNine?

Although integrating a deep learning model brings several benefits, running it demands a lot of computing power. Jackson clearly mentioned that running it on a laptop will not deliver the low latency that TabNine's users are accustomed to. As a solution, the team is offering TabNine Cloud (beta), a service that lets users run GPU-accelerated autocompletion on TabNine's servers. To get access to TabNine Cloud, you can sign up here. However, many developers prefer to keep their code on their own machines. To address the privacy and security of users' code, the TabNine team is working on the following:
- A reduced-size model, promised for the future, that can run on a laptop with reasonable latency for individual developers.
- An option for enterprises to license the model and run it on their own hardware.
- Training a custom model that understands the unique patterns and style specific to an enterprise's codebase.

Developers have already started beta testing it and are quite impressed:
https://twitter.com/karpathy/status/1151887984691576833
https://twitter.com/aruslan/status/1151914744053297152
https://twitter.com/Frenck/status/1152634220872916996

You can check out the official announcement by TabNine to know more in detail.

Implementing autocompletion in a React Material UI application [Tutorial]
Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more
Conda 4.6.0 released with support for more shells, better interoperability among others
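Deep TabNine's model and training setup are not public; purely to illustrate the "predict the next token from the tokens before it" idea described above, here is a hedged sketch using a generic, publicly available GPT-2 through the Hugging Face transformers library (an assumption for illustration, not TabNine's actual stack):

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A small code-like prompt; the model continues it one token at a time,
# always conditioning on the tokens that came before.
prompt = "def add(a, b):\n    return"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output_ids = model.generate(input_ids, max_length=input_ids.shape[1] + 8, do_sample=False)
print(tokenizer.decode(output_ids[0]))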


GitHub services experienced a 41-minute disruption yesterday

Bhagyashree R
23 Jul 2019
4 min read
Update: Yesterday, the GitHub team stated in a blog post what they had uncovered in their initial investigation: "On Monday at 3:46 pm UTC, several services on GitHub.com experienced a 41-minute disruption, and as a result, some services were degraded for a longer period. Our initial investigation suggests a logic error introduced into our deployment pipeline manifested during a subsequent and unrelated deployment of the GitHub.com website. This chain of events destabilized a number of internal systems, complicated our recovery efforts, and resulted in an interruption of service."

It was not a very productive Monday for many developers when GitHub started showing 500 and 422 error codes on their repositories. Several services on GitHub were down yesterday from around 15:46 UTC for 41 minutes. GitHub engineers soon began their investigation, and all services were back to normal by 19:47 UTC.
https://twitter.com/githubstatus/status/1153391172167114752

The outage affected GitHub services including Git operations, API requests, and Gist, among others. The experiences that developers reported were quite inconsistent. Some developers said that though they were able to open the main repo page, they could not see the commit log or PRs. Others reported that all git commands requiring interaction with GitHub's remotes failed. A developer commented on Hacker News, "Git is fine, and the outage does not affect you and your team if you already have the source tree anywhere. What it does affect is the ability to do code reviews, work with issues, maybe even do releases. All the non-DVCS stuff."

GitHub is yet to share the full cause and impact of the downtime. However, developers took to different discussion forums to share what they think the reason behind the GitHub outage could be. While some speculated that it might be its increasing user base, others believed it was because GitHub might still be moving "stuff to Azure after the acquisition."

Developers also discussed what steps they can take so that such outages do not affect their workflow in the future. One developer suggested not relying on a single point of failure by setting two different push URLs for the same remote, so that a single push command pushes to both, for example to GitHub and to a mirror on another host:

git remote set-url --add --push origin git@github.com:Foo/bar.git
git remote set-url --add --push origin git@gitlab.com:Foo/bar.git

Another developer suggested, "I highly recommend running at least a local, self-hosted git mirror at any tech company, just in these cases. Gitolite + cgit is extremely low maintenance, especially if you host them next to your other production services. Not to mention, if you get the self-hosted route you can use Gerrit, which is still miles better for code review than GitHub, Gitlab, bitbucket and co."

Others joked that this was a good opportunity to take a few hours' break and relax. "This is the perfect time to take a break. Kick back, have a coffee, contemplate your life choices. That commit can wait, that PR (i was about to merge) can wait too. It's not the end of the world," a developer commented.

Lately, we are seeing many cases of outages. Earlier this month, almost all of Apple's iCloud services were down for some users. On July 2, Cloudflare suffered a major outage due to a massive spike in CPU utilization in the network. Last month, Google Calendar was down for nearly three hours around the world.
In May, Facebook and its family of apps, WhatsApp, Messenger, and Instagram, faced another outage in a row. Last year, GitHub faced issues due to a failure in its data storage system, which left the site broken for a complete day. Several developers took to Twitter to kill time and vent their frustration:
https://twitter.com/jameskbride/status/1153332862587944960
https://twitter.com/BobString/status/1153329356284055552
https://twitter.com/pikesley/status/1153332278774439941
https://twitter.com/francesc/status/1153336190390550528

Cloudflare RCA: Major outage was a lot more than "a regular expression went bad"
EU's satellite navigation system, Galileo, suffers major outage; nears 100 hours of downtime
Twitter experienced major outage yesterday due to an internal configuration issue


Kazakhstan government intercepts nationwide HTTPS traffic to re-encrypt with a govt-issued root certificate - Cyber-security or Cyber-surveillance?

Savia Lobo
22 Jul 2019
6 min read
Update: On August 6, 2019, TSARKA, a cyberattack prevention body in Kazakhstan, announced that those who have installed the National Certificate may delete it, since it will no longer be needed. "Officials explained that it was happening because of the new security system's testing," TSARKA mentioned. TSARKA was officially informed that the tests were completed and that all the tasks set during the pilot were successfully solved. However, it further said that "the need for its installation may arise in cases of strengthening the digital border of Kazakhstan within the framework of special regulations."

On Wednesday, July 17, 2019, the Kazakhstan government started intercepting internet traffic within its borders. The government instructed all ISPs to force their users to install a government-issued root certificate from the Quaznet Trust Network on all devices and in every browser. With the help of this root certificate, local government agencies are able to decrypt users' HTTPS traffic, inspect its content, re-encrypt it with the government's own certificate, and then send it on to its destination, allowing for the possibility of a nation-wide man-in-the-middle (MITM) attack. Since Wednesday, all internet users in Kazakhstan have been redirected to a page instructing them to download and install the new certificate, whether on their desktops or on their mobile devices.

Why is the Kazakhstan government forcing citizens to install the root certificate?

Local media outlet Tengrinews.kz reported that the Kazakh Ministry of Digital Development, Innovation and Aerospace said only internet users in Kazakhstan's capital, Nur-Sultan, would have to install the certificate; however, users from all across the country reported being blocked from accessing the internet until they installed it. Olzhas Bibanov, head of the public relations service at Tele2 Kazakhstan, said, "We were asked by authorized bodies to notify Nur-Sultan's subscribers about the need to establish a security certificate". In an announcement sent to the local ISPs, the government said the introduction of the root certificate was due to "the frequent cases of theft of personal and credentials, as well as money from bank accounts of Kazakhstan". The government's announcement stated, "The introduction of a security certificate will help in the protection of information systems and data, as well as in identifying hacker cyber attacks of Internet fraudsters on the country's information space systems, private, including the banking sector, before they can cause damage. (...) In the absence of a security certificate on subscriber devices, technical limitations may arise with access to individual Internet resources". The government further assured that the tool "will become an effective tool to protect the country's information space from hackers, Internet fraudsters and other types of cyber threats."

The Kazakh government has tried unsuccessfully before to get its root certificate implemented

In a similar move in December 2015, the government made its first attempt to force Kazakh users to install a root certificate, sending a notice warning all users to install the certificate by January 1, 2016.
"The decision was never implemented because the local government was sued by several organizations, including ISPs, banks, and foreign governments, who feared this would weaken the security of all internet traffic (and adjacent business) originating from the country", ZDNet reports. The Kazakh government also approached Mozilla to include its root certificate in Firefox by default; Mozilla declined the proposal.

How can users ensure their safety from their own government?

Users who do not want to install a certificate that puts their personal data at risk can try to encrypt their internet traffic themselves or avoid installing the certificate altogether. One way is to switch to Linux, since, according to the announcement, Linux users are exempted from downloading the certificate: "[…] the installation of a security certificate must be performed from each device that will be used to access the Internet (mobile phones and tablets based on iOS / Android, personal computers and laptops based on Windows / MacOS)."

Eugene Ivanov, a member of the Mozilla team, says, "I think both Mozilla and Google should intervene into this situation because it can create a dangerous precedent, nullifying all the efforts of enforcing HTTPS. If Kazakhstan will succeed, more and more governments (eg. Russian Federation, Iran, etc.) will start global MITM attacks on their citizens and this is not good. I think all CAs used for MITM attacks should be explicitly blacklisted both by Mozilla and Google to exclude even [the] possibility of such attacks."

The government claims that installing the certificate is entirely voluntary. However, a user on Hacker News responds to this claim, "Technically yes, installing the certificate is voluntary; it's just that if you don't install it you won't be able to access the internet anymore when the government starts MITMing your connections". This is possible: the government can take strict measures that are not in the public's favour and, in turn, force people to hand over their personal data indirectly and involuntarily. In such cases, people depend heavily on browser makers such as Mozilla and Google to fight for their rights. A Kazakhstani user writes on Hacker News, "Banning this certificate or at least warning the users against using it WILL help a lot. Each authoritarian regime is authoritarian in its own way. Kazakhstan doesn't have a very strong regime, especially since the first president resigned earlier this year. When people protest strongly against something, the government usually backs down. For example, a couple of years ago the government withdrew their plans of lending lands to foreign governments after backlash from ordinary people. If Kazakhs knew about the implications of installing this certificate, they would have been on the streets already." The user further adds, "If Firefox, Chrome and/or Safari block this certificate, the people will show their dissatisfaction and the law will be revoked. Sometimes the people in authoritarian countries need a little bit of support from organizations to fight for their rights. I really hope the browser organizations would help us here."

Browser vendors are discussing a plan of action for dealing with sites that have been (re-)encrypted by the Kazakh government's root certificate; however, nothing has been officially disclosed yet. We will update this page as this news develops. Read Google's discussion group to know more about this news in detail.
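The practical effect of such interception is that, for affected sites, the certificate a browser receives chains to the government-issued root rather than to the site's usual certificate authority. Purely as an illustration, and not something described in the article, a user can inspect which authority issued the certificate a site presents with a few lines of Python:

import socket
import ssl

hostname = "example.com"  # any site you want to check
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        # If traffic were being re-encrypted, the issuer shown here would be
        # the interception CA rather than the site's normal certificate authority.
        print(cert["subject"])
        print(cert["issuer"])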
An attack on SKS Keyserver Network, a write-only program, poisons two high-profile OpenPGP certificates
Firefox releases v66.0.4 and 60.6.2 to fix the expired certificate problem that ended up disabling add-ons
Apple revoked Facebook developer certificates due to misuse of Apple's Enterprise Developer Program; Google also disabled its iOS research app

IETF proposes JSON Meta Application Protocol (JMAP) as the next standard for email protocols

Bhagyashree R
22 Jul 2019
4 min read
Last week, the Internet Engineering Task Force (IETF) published the JSON Meta Application Protocol (JMAP) as RFC 8620, now marked as a "Proposed Standard". The protocol is authored by Neil Jenkins, Director and UX Architect at Fastmail, and Chris Newman, Principal Engineer at Oracle.
https://twitter.com/Fastmail/status/1152281229083009025

What is the JSON Meta Application Protocol (JMAP)?

Fastmail started working on JMAP in 2014 as an internal development project. It is an internet protocol that handles the submission and synchronization of emails, contacts, and calendars between a client and a server, providing a consistent interface to different data types. It was developed as a possible successor to IMAP and a potential replacement for the CardDAV and CalDAV standards.

Why is it needed?

According to the developers, the current standards for client-server email communication, IMAP and SMTP, are outdated and complicated. They are not well suited to modern mobile networks and high-latency scenarios. These limitations have led to stagnation in the development of good new email clients, and many vendors have come up with proprietary alternatives like Gmail, Outlook, Nylas, and Context.io. Another drawback is that many mobile email clients proxy everything via their own server instead of talking directly to the user's mail store, for example Outlook and Newton. This is not only bad for client authors, who have to run server infrastructure in addition to building their clients, but it also raises security and privacy concerns.

Here's a video by Fastmail explaining the purpose behind JMAP:
https://www.youtube.com/watch?v=8qCSK-aGSBA

How does JMAP address the limitations of current standards?

JMAP is designed to be easier for developers to work with and to enable efficient use of network resources. Here are some of its properties that address the limitations of current standards:
- Stateless: It does not require a persistent connection, which fits mobile environments best.
- Immutable ids: It is more like NFS or filesystems with inodes than a name-based hierarchy, which makes renaming easy to detect and cheap to sync.
- Batchable API calls: Multiple API calls can be batched in a single request to the server, resulting in fewer round trips and better battery life for mobile users.
- Flood control: The client can put limits on how much data the server is allowed to send. For instance, a command will return a 'tooManyChanges' error on exceeding the client's limit, rather than returning a million "* 1 EXPUNGED" lines as can happen in IMAP.
- No custom parser required: Support for JSON, a well-understood and widely supported encoding format, makes it easier for developers to get started.
- A backward-compatible data model: Its data model is backward compatible with both IMAP folders and Gmail-style labels.

Fastmail is already using JMAP in production for its Fastmail and Topicbox products. It is also seeing adoption in organizations like the Apache Software Foundation, which added experimental support for JMAP to its free mail server, Apache James, in version 3.0. Many developers are happy about this announcement. A user on Hacker News said, "JMAP client and the protocol impresses a lot. Just 1 to a few calls, you can re-sync entire emails state in all folders. With IMAP need to select each folder to inspect its state.
Moreover, just a few IMAP servers support fast synchronization extensions like QRESYNC or CONDSTORE."

However, its use of JSON did spark some debate on Hacker News. "JSON is an incredibly inefficient format for shareable data: it is annoying to write, unsafe to parse and it even comes with a lot of overhead (colons, quotes, brackets and the like). I'd prefer s-expressions," a user commented.

To stay updated with current developments in JMAP, you can join its mailing list. To read more about the specification, check out its official website and its GitHub repository.

Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]
Google announces the general availability of AMP for email, faces serious backlash from users
Sublime Text 3.2 released with Git integration, improved themes, editor control and much more!
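To make the batched, JSON-based request model described above concrete, here is a hedged sketch of what a JMAP request carrying two method calls can look like; the endpoint URL and account id below are placeholders, since real values come from the server's session resource:

import json
import urllib.request

# Two method calls batched into a single HTTP request, as JMAP allows.
request_body = {
    "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
    "methodCalls": [
        ["Mailbox/get", {"accountId": "A1", "ids": None}, "call-1"],
        ["Email/query", {"accountId": "A1", "limit": 10}, "call-2"],
    ],
}

req = urllib.request.Request(
    "https://jmap.example.com/api/",           # placeholder API endpoint
    data=json.dumps(request_body).encode(),
    headers={"Content-Type": "application/json"},
)

# with urllib.request.urlopen(req) as resp:    # uncomment against a real server
#     print(json.load(resp))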


“Why was Rust chosen for Libra?”, US Congressman questions Facebook on Libra security design choices

Sugandha Lahoti
22 Jul 2019
6 min read
Last month, Facebook announced that it is going to launch its own cryptocurrency, Libra, along with Calibra, a payment platform that sits on top of the cryptocurrency, unveiling its plans to develop an entirely new ecosystem for digital transactions. It also developed a new programming language, "Move", for implementing custom transaction logic and "smart contracts" on the Libra Blockchain. The Move language is written entirely in Rust. Although Facebook's announcement garnered massive media attention and attracted investors and partners the likes of PayPal, loan platform Kiva, Uber, and Lyft, it also raised its share of concerns. The US administration is worried about a non-governmental currency in the hands of big tech companies. In early July, the US Congress asked Facebook to suspend the implementation of Libra until the ramifications were investigated.

Last week, at a U.S. House Committee on Financial Services hearing investigating Libra's security-related challenges, Congressman Denver Riggleman posed an important question to David Marcus, head of Calibra, asking why the Rust language was chosen for Libra.

Riggleman: I was really surprised about the Rust language. So my first question is, why was the Rust language chosen as the implementation language for Libra? Do you believe it's mature enough to handle the security challenges that will affect these large cryptocurrency transactions?

Marcus: The Libra association will own the repository for the code. While there are many flavors and branches being developed by third parties, only safe and verified code will actually be committed to the actual Libra code base which is going to be under the governance of the Libra association.

Riggleman: It looks like Libra was built on the nightly build of the Rust programming language. It's interesting because that's not how we did releases at the DoD. What features of Rust are only available in the nightly build that aren't in the official releases of Rust? Does Facebook see it as a concern that they are dependent on unofficially released features of the Rust programming language? Why the nightly releases? Do you see this as a function of the prototyping phase of this?

Marcus: Congressman, I don't have the answers to your very technical questions but I commit that we will get back to you with more details on your questions.

Marcus appeared before two US congressional hearing sessions last week, where he was repeatedly grilled by legislators. The grilling led to a dramatic alteration in the strategy of Libra. Marcus has clarified that Facebook won't move forward with Libra until all concerns are addressed. Facebook's original vision for Libra was an open and largely decentralized network beyond the reach of regulators; instead, regulatory compliance would be the responsibility of exchanges, wallets, and other services in the Libra Association. After the hearing, Marcus stated that the Libra Association would have a deliberately limited role in regulatory matters. Per Ars Technica, Calibra would follow US regulations on consumer protection, money laundering, sanctions, and so forth, but Facebook didn't seem to have plans for the Libra Association, Facebook, or any associated entity to police illegal activity on the Libra network as a whole.

This video clip sparked quite a discussion on Hacker News and Reddit, with people applauding the Q&A session. Some appreciated that legislators are now asking tough questions like these.
"It's cool to see a congressman who has this level of software dev knowledge and is asking valid questions."
"Denver Riggleman was an Air Force intelligence officer for 11 years, then he became an NSA contractor. I'm not surprised he's asking reasonable questions."
"I don't think I've ever heard of a Congressman going to GitHub, poking around in some open source code, and then asking very cogent and relevant questions about it. This video is incredible if only because of that."

Others commented on why Congress may have trust issues with a young programming language like Rust being used for something like Libra, which requires layers of privacy and security measures.

"Traditionally, government people have trust issues with programming languages as the compiler is, itself, an attack vector. If you are using a nightly release of the compiler, it may be assumed by some that the compiler is not vetted for security and could inject unstable or malicious code into another critical codebase. Also, Rust is considered very young for security type work, people rightly assume there are unfound weaknesses due to the newness of the language and related libraries," reads one comment from Hacker News. Another adds, "Governments have issues with non-stable code because it changes rapidly, is untested and a security risk. Facebook moves fast and break things."

Rust was declared the most loved programming language by developers in the Stack Overflow survey 2019. This year, more and more major platforms have jumped on the bandwagon of writing or rewriting their components in Rust. Last month, after the release of Libra, Calibra tech lead Ben Maurer took to Reddit to explain why Facebook chose Rust. Per Maurer, "As a project where security is a primary focus, the type-safety and memory-safety of Rust were extremely appealing. Over the past year, we've found that even though Rust has a high learning curve, it's an investment that has paid off. Rust has helped us build a clean, principled blockchain implementation. Part of our decision to choose Rust was based on the incredible momentum this community has achieved. We'll need to work together on challenges like tooling, build times, and strengthening the ecosystem of 3rd-party crates needed by security-sensitive projects like ours."

Not just Facebook: last week, Microsoft announced plans to replace their C and C++ code with Rust, calling it a "modern safer system programming language" with great memory safety features. In June, the Brave ad-blocker also released a new engine written in Rust, which gives 69x better performance. Airbnb has introduced PyOxidizer, a Python application packaging and distribution tool written in Rust.

"I'm concerned about Libra's model for decentralization", says co-founder of Chainspace, Facebook's blockchain acquisition
Facebook launches Libra and Calibra in a move to seriously disrupt the financial sector
Facebook releases Pythia, a deep learning framework for vision and language multimodal research


To create effective API documentation, know how developers use it, says ACM

Bhagyashree R
19 Jul 2019
5 min read
Earlier this year, the Association for Computing Machinery (ACM), in the January 2019 issue of Communication Design Quarterly (CDQ), discussed how developers use API documentation when getting into a new API and suggested a few guidelines for writing effective API documentation. Application Programming Interfaces (APIs) are standardized and documented interfaces that allow applications to communicate with each other without having to know how they are implemented. Developers often turn to API references, tutorials, example projects, and other resources to understand how to use them in their projects. To support the learning process effectively and help authors write better API documentation, the study tried to answer the following questions:
- Which information resources offered by the API documentation do developers use, and to what extent?
- What approaches do developers take when they start working with a new API?
- What aspects of the content hinder efficient task completion?

API documentation and content categories used in the study

The study was done with 12 developers (11 male and 1 female), who were asked to solve a set of pre-defined tasks using an unfamiliar public API. To solve these tasks, they were allowed to refer only to the documentation published by the API provider. The participants used the API documentation about 49% of the time while solving the tasks. On an individual level, there was not much variation, with the means for all but two participants ranging between 41% and 56%. The most-used content category was the API reference, followed by the Recipes page. The aggregate time spent on the Recipes and Samples categories was almost equal to the time spent on the API reference category. The Concepts page, however, was used less often than the API reference.

(Figure source: ACM)

"These findings show that the API reference is an important source of information, not only to solve specific programming issues when working with an API developers already have some experience with, but even in the initial stages of getting into a new API, in line with Meng et al. (2018)," the study concludes.

How do developers learn a new API?

The researchers observed two different problem-solving behaviors that were very similar to the opportunistic and systematic developer personas discussed by Clarke (2007). Developers with the opportunistic approach tried to solve the problem in an "exploratory fashion". They were more intuitive, open to making errors, and often tried solutions without double-checking in the documentation. This group did not invest much time in getting a general overview of the API before starting the first task, and preferred fast, direct access to information over large sections of the documentation. In contrast, developers with the systematic approach tried to first get a deeper understanding of the API before using it. They took some time to explore the API and prepare the development environment before starting the first task. This group of developers attempted to follow the proposed processes and suggestions closely. They were also able to notice parts of the documentation that were not directly relevant to the given task.

What aspects of API documentation make it hard for developers to complete tasks efficiently?

Lack of transparent navigation and search function
Some participants felt that the API documentation lacked a consistent system of navigation aids and did not offer side navigation, including within-page links.
These were some of the observations and implications from the study. To know more, read the paper: How Developers Use API Documentation: An Observation Study.

GraphQL API is now generally available
Best practices for RESTful web services: Naming conventions and API Versioning [Tutorial]
Stripe's API suffered two consecutive outages yesterday causing elevated error rates and response times
article-image-new-twitter-touts-write-once-run-everywhere-redesign-users-roll-eyes-with-displeasure
Fatema Patrawala
19 Jul 2019
5 min read
Save for later

New Twitter touts “write once, run everywhere” redesign, users roll eyes with displeasure

On Monday, Twitter rolled out its new website to the general public. Those who have already seen it may find the new design refreshing in its subtlety. A few things have been rearranged in the new three-column design, and the site is noticeably faster, but according to users there are not many drastic changes.

The official blog post describes "a refreshed and updated website that is faster, easier to navigate and more personalized. The site has an updated look and feel that is more consistent with the Twitter you see on other devices, making it easier to access some of your favorite features, and with more options to make it your own."

The Twitter engineering team also published a separate blog post on Monday about the new website and its architecture. They say their goal was to create one codebase, one website, that is responsive to more than just design and screen size. The team posted, "Our goal was to create one codebase - one website - capable of delivering the best experience possible to each person." The engineering team also wrote, "On web, we believe in the 'write once, run everywhere' philosophy." They said the goal for the new website is twofold: first, to make it easier and faster to develop new features for people worldwide; second, to provide each person and each device with the right experience.
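As a rough illustration of that "one codebase" idea, a single module can decide which parts of the UI to enable for the current device instead of shipping separate desktop and mobile sites. This is a hypothetical TypeScript sketch, not Twitter's actual code; the feature names, breakpoints, and device attributes are invented.

// Hypothetical sketch of "one codebase" responsiveness: the same bundle
// enables different feature modules depending on the device context.

type Feature = "composeBox" | "dmDrawer" | "trendsSidebar" | "bottomTabBar";

interface DeviceContext {
  viewportWidth: number; // pixels
  isTouch: boolean;      // coarse pointer / touch screen
}

// Decide which modules to turn on for this device context.
function enabledFeatures(ctx: DeviceContext): Set<Feature> {
  const features = new Set<Feature>(["composeBox"]); // always on
  if (ctx.viewportWidth >= 1024) {
    features.add("trendsSidebar"); // wide screens get the extra column
    features.add("dmDrawer");      // docked Direct Messages view
  }
  if (ctx.isTouch || ctx.viewportWidth < 1024) {
    features.add("bottomTabBar");  // phone-style navigation
  }
  return features;
}

// The same function serves a phone and a desktop from one codebase.
const phone: DeviceContext = { viewportWidth: 390, isTouch: true };
const desktop: DeviceContext = { viewportWidth: 1440, isTouch: false };
console.log([...enabledFeatures(phone)]);   // ["composeBox", "bottomTabBar"]
console.log([...enabledFeatures(desktop)]); // ["composeBox", "trendsSidebar", "dmDrawer"]

The appeal, as the Hacker News commenter below notes, is that features become modular toggles rather than separate sites to maintain.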
The post gained a lot of attention on Hacker News, where users appreciated the single codebase for mobile and web but felt the major turn-off was how the Home timeline appears on both mobile and desktop. One of the users commented:

"To the posted article, I think it's impressive they are shipping a single codebase for mobile and desktop. Modular features you can turn off for different views. It's smart and I'll be curious to see if other sites follow suit. Unfortunately they've now ported one of the most offensive features from mobile to desktop. The "Home" timeline, with tweets out of order. And the real kicker; you can still select "latest Tweets first" but then the app literally undoes your preference every week or two, forcing you back to their "Home" view. It's offensive. Also a small thing, but the new desktop Twitter now has obfuscated CSS classes for everything. The names change frequently too, maybe at every deploy? Anyway it makes it a lot harder to modify the desktop HTML presentation with an extension or set of ad blocker rules."

Finally, let us check out the new features added to Twitter.

Updates to new Twitter

With the new features listed below, the team at Twitter has tried to make conversations easier to find and follow – and a bit more fun:

More of What's Happening: They have brought over Explore to bring the same great content found in the apps; you can expect more live video and local moments personalized for wherever you are in the world. Get context with profile information within conversations and check out your Top Trends in any view so you never miss what's happening.

Easy Access to Your Favorite Features: Bookmarks, Lists, and your Profile are right up front and have their own spot on the side navigation, making it easier and faster to jump between different tabs.

Direct Messages All in One Place: Direct Messages have been expanded so you can see your conversations and send messages all from the same view. Now there's less hassle switching between screens to send a message.

Login, Logout Struggle No More: Whether you have one profile or a few, you can now switch between accounts faster, directly from the side navigation.

Make Twitter Yours: The love is real for the dark mode themes Dim and Lights Out. Twitter has brought you different themes and color options, along with two options for dark mode.

However, the announcement of the new site was all about "Woah, What's this? a shiny new Twitter.com is here." Users seem unhappy with that framing and posted unenthusiastic replies to the announcement, feeling that while some features were added, a lot is still missing. Here are some of the tweet responses to the official announcement:

https://twitter.com/grandayy/status/1150948766851174402
https://twitter.com/BetterGarf/status/1150972967482023936
https://twitter.com/falcons_fan1966/status/1150833643046211596
https://twitter.com/Autumn_Antal/status/1150870408570134529
https://twitter.com/MrPuddins/status/1151342148626866178

Once again, Twitter has focused only on web design and UI, making no effort toward better or healthier conversations on Twitter, which is actually its stated motto.

Creative Commons' search engine, now out of beta, indexes over 300 million public domain images
Mozilla launches Firefox Preview, an early version of a GeckoView-based Firefox for Android
Vue maintainers proposed, listened, and revised the RFC for hooks in Vue API

article-image-ex-microsoft-employee-arrested-for-stealing-over-10m-from-store-credits-using-a-test-account
Savia Lobo
19 Jul 2019
4 min read
Save for later

Ex-Microsoft employee arrested for stealing over $10M from store credits using a test account

On Tuesday, one of Microsoft's former employees, Volodymyr Kvashuk, 25, was arrested for attempting to steal $10 million worth of digital currency from Microsoft. "If convicted of mail fraud, the former Microsoft software engineer could face as much as 20 years in prison and a $250,000 fine", The Register reports.

Kvashuk, a Ukrainian citizen residing in Renton, Washington, was hired by Microsoft as a contractor in August 2016 and worked there until June 2018. He was part of Microsoft's Universal Store Team (UST), which handles the company's e-commerce operations. Sam Guckenheimer, product owner for Azure DevOps at Microsoft, said back in 2017 that the UST "is the main commercial engine of Microsoft with the mission to bring One Universal Store for all commerce at Microsoft." He further explained, "The UST encompasses everything Microsoft sells and everything others sell through the company, consumer and commercial, digital and physical, subscription and transaction, via all channels and storefronts".

According to the prosecution's complaint, filed in a US federal district court in Seattle, the UST team was assigned to make simulated purchases of products from the online store to ensure customers could make purchases without any glitches. The test accounts used to make these purchases were linked to artificial payment devices ("Test In Production" or "TIP" cards) that allowed the tester to simulate a purchase without generating an actual charge. The program was designed to block the delivery of physical goods; however, no restrictions or safeguards were in place to block test purchases of digital currency, known as "Currency Stored Value" or "CSV", which could also be used to buy Microsoft products or services. Kvashuk fraudulently obtained these CSVs and resold them to third parties, which reaped him over $10,000,000 in CSV and also some property from Microsoft. He bought these CSVs while disguising his identity with different false names and statements. According to The Register, "The scheme supposedly began in 2017 and escalated to the point that Kvashuk, on a base salary of $116,000 per year, bought himself a $162,000 Tesla and $1.6m home in Renton, Washington".

Microsoft's UST Fraud Investigation Strike Team (FIST) noticed an unexpected rise in the use of CSV to buy subscriptions to Microsoft's Xbox gaming system in February 2018. By tracing the digital funds, the investigators found that these were resold on two different websites to two whitelisted test accounts. FIST then traced the accounts and transactions involved, and with the assistance of the US Secret Service and the Internal Revenue Service, investigators concluded that Kvashuk had defrauded Microsoft. Kvashuk had also used a Bitcoin mixing service to hide his public blockchain transactions. "In addition to service provider records that point to Kvashuk, the complaint notes that Microsoft's online store uses a form of device fingerprinting called a Fuzzy Device ID. Investigators, it's claimed, linked a specific device identifier to accounts associated with Kvashuk", according to The Register.
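Microsoft has not published how its Fuzzy Device ID is computed, but device fingerprinting in general works by combining several weak, individually non-unique device attributes into one fairly stable identifier. The sketch below is only a generic illustration under that assumption; the attributes, hashing scheme, and names are invented and are not Microsoft's implementation.

// Generic device-fingerprinting illustration (not Microsoft's Fuzzy Device ID).
// Several weak attributes are combined and hashed into one stable identifier,
// which can link separate accounts created from the same machine.

import { createHash } from "node:crypto";

interface DeviceAttributes {
  userAgent: string;
  screenResolution: string;
  timezone: string;
  installedFonts: string[];
}

// Canonicalize the attributes and hash them into a short fingerprint string.
function fingerprint(attrs: DeviceAttributes): string {
  const canonical = [
    attrs.userAgent,
    attrs.screenResolution,
    attrs.timezone,
    [...attrs.installedFonts].sort().join(","),
  ].join("|");
  return createHash("sha256").update(canonical).digest("hex").slice(0, 16);
}

// Two accounts opened under different names from the same device end up with
// the same fingerprint, which is what lets investigators connect them.
const device: DeviceAttributes = {
  userAgent: "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
  screenResolution: "2560x1440",
  timezone: "America/Los_Angeles",
  installedFonts: ["Arial", "Consolas", "Segoe UI"],
};
console.log(fingerprint(device));

A "fuzzy" variant would presumably tolerate small changes in individual attributes, but that level of detail is not in the complaint.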
One of the users on Hacker News commented, "There are two technical interesting takeaways in this:

1 - Microsoft, and probably most big companies, have persistent tracking ID on most stuff that is hard to get rid of and can be used to identify you and devices linked to you in a fuzzy way. I mean, we know about super cookies, fingerprinting and such, but it's another to hear it being used to track somebody that was careful and using multiple anonymous accounts.

2 - BTC mixers will not protect you. Correlating one single wallet with you will make it possible to them retrace the entire history."

To know about this news in detail, head over to the prosecution's complaint.

Microsoft Azure VP demonstrates Holoportation, a reconstructed transmittable 3D technology
Microsoft mulls replacing C and C++ code with Rust calling it a "modern safer system programming language" with great memory safety features
Microsoft adds Telemetry files in a "security-only update" without prior notice to users