
Tech News


Meet Sublime Merge, a new Git client from the makers of Sublime Text

Prasad Ramesh
21 Sep 2018
3 min read
The makers of Sublime Text released a new Git client yesterday. Called Sublime Merge, the tool combines the user interface of Sublime Text with a from-scratch implementation of Git. The result is a Git client with a familiar, polished interface. Sublime Merge has no time limit, no metrics, and no tracking of your usage. It ships with two themes, light and dark. The evaluation version is fully functional and requires no account, but does not include the dark theme. Here are some of the features of Sublime Merge.

An integrated merge tool

The integrated merge tool lets you resolve conflicts in Sublime Merge itself instead of having to open another editor. Conflicts are shown in a 3-pane view: your changes on the left, others' changes on the right, and the resolved text in a center pane with buttons to choose which changes to accept.

Advanced diffs

Where necessary, Sublime Merge will display exactly which individual characters changed in a commit. This works across renames, moves, conflict resolution, and ordinary commit-history browsing. Selecting any two commits with Ctrl+Left Mouse shows the diff between them.

Key bindings

Keyboard usability is good throughout. The Tab key navigates between different parts of the application, Space toggles expansion, and Enter stages or unstages hunks. The Command Palette, triggered by Ctrl+P, gives quick access to a large set of Git commands.

Command line integration

Sublime Merge works hand in hand with the command line. All repository changes are updated live, and things behave the same from the command line as from the UI, so you can mix the GUI and the terminal as you please. The smerge tool that ships with Sublime Merge can open repositories, blame files, and search for commits.

Advanced search

Sublime Merge features find-as-you-type search for commits, with exact matches on commit messages, commit authors, file names, and even wildcard patterns. Complex search queries can be constructed using 'and', 'or', and parentheses for deep searches within folders.

Use of real Git

Working with Sublime Merge means working with real Git, not a simplified version. Hovering over a button shows which Git command it will run. Sublime Merge uses the same terminology as Git and keeps no state beyond Git itself. A custom, high-performance implementation of Git is used for reading repositories, but Git itself is invoked directly for repository-mutating operations such as staging, committing, and checking out branches.

Downloads and licence

Individual licences are lifetime, with three years of updates included; business licences are subscription-based. Sublime Merge is in its early stages and has so far been used only by its makers and a small team of beta testers, who have now invited other users to try it. You can download the client, and read more about it, on the Sublime Merge website.

TypeScript 3.0 is finally released with 'improved errors', editor productivity and more
GitHub introduces 'Experiments', a platform to share live demos of their research projects
Packt's GitHub portal hits 2,000 repositories
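Sublime Merge's character-level diffs are a feature of the client itself, but the underlying idea is easy to sketch. The snippet below is a rough Python illustration using the standard library's difflib, not anything from Sublime Merge: it marks up exactly which characters were removed and inserted between two versions of a line.

```python
import difflib

def char_diff(old: str, new: str) -> str:
    """Return an inline markup of character-level changes between two strings."""
    out = []
    matcher = difflib.SequenceMatcher(a=old, b=new)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            out.append(old[i1:i2])
        if op in ("delete", "replace"):
            out.append("[-" + old[i1:i2] + "-]")   # characters removed
        if op in ("insert", "replace"):
            out.append("{+" + new[j1:j2] + "+}")   # characters added
    return "".join(out)

print(char_diff("colour", "color"))
```

A real Git client computes this per hunk against the repository's object store, but the marking-up step is conceptually the same.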


ISO C++ Committee announces that C++20 design is now feature complete

Bhagyashree R
25 Feb 2019
2 min read
Last week, as per the schedule, the ISO C++ Committee met in Kona, Hawaii to finalize the feature set for the next International Standard (IS), C++20. The committee has announced that C++20 is now feature complete, and it plans to finish the C++20 specification at the next meeting, scheduled for July 2019. Once the specification is complete, the Committee Draft will be sent out for review. Some of the features in this draft include:

Modules

With the introduction of modules, developers will no longer need to separate their files into header and source parts. The committee has now fixed internal linkage escaping modules.

Coroutines

The committee has gone through the coroutines proposals and has decided to go ahead with the specification. According to the specification, three keywords will be added: co_await, co_yield, and co_return.

Contracts

Contracts are made up of preconditions, postconditions, and assertions. They act as a basic mitigation measure when a program goes wrong because of a mismatch of expectations between parts of the program. The committee is refining the feature and has renamed expects/ensures to pre/post.

Concepts

The concepts library includes definitions of fundamental library concepts, used for compile-time validation of template arguments and for function dispatch on properties of types.

Ranges

The ranges library provides components for dealing with ranges of elements, including a variety of view adapters.

To read the entire announcement, check out this Reddit thread.

Code completion suggestions via IntelliCode comes to C++ in Visual Studio 2019
How to build Template Metaprogramming (TMP) using C++ [Tutorial]
Mio, a header-only C++11 memory mapping library, released!
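Of these features, contracts are the easiest to illustrate outside C++. As a rough cross-language analogy only (this is Python, not the proposed C++ attribute syntax, and the `contract` decorator is a hypothetical helper invented for illustration), a precondition checks a function's arguments on entry and a postcondition checks its result on exit:

```python
import functools

def contract(pre=None, post=None):
    """Hypothetical helper mimicking the idea of pre/post conditions."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            if pre is not None:
                # precondition: a claim about the inputs
                assert pre(*args, **kwargs), "precondition failed"
            result = fn(*args, **kwargs)
            if post is not None:
                # postcondition: a claim about the result
                assert post(result), "postcondition failed"
            return result
        return inner
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r: r * r <= x_max)
def isqrt(x: int) -> int:
    return int(x ** 0.5)

x_max = 10**6  # illustrative bound used by the postcondition above
```

In the C++20 design the checks are declared on the function itself and can be enabled or disabled per build mode, rather than wrapped around it as here.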


Google open-sources GPipe, a pipeline parallelism library to scale up deep neural network training

Natasha Mathur
05 Mar 2019
3 min read
The Google AI research team announced yesterday that it is open sourcing GPipe, a distributed machine learning library for efficiently training large-scale deep neural network models, under the Lingvo framework. GPipe uses synchronous stochastic gradient descent and pipeline parallelism for training: it divides a network's layers across accelerators and pipelines execution to achieve high hardware utilization. GPipe also lets researchers easily deploy more accelerators to train larger models and to scale performance without tuning hyperparameters. Google AI researchers published a paper titled "GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism" in December last year, demonstrating the use of pipeline parallelism to scale deep neural networks past the memory limits of current accelerators. Let's have a look at the major highlights of GPipe.

GPipe helps with maximizing memory and efficiency

GPipe helps maximize the memory available for model parameters. The researchers ran experiments on Cloud TPUv2s, each of which has 8 accelerator cores and 64 GB of memory (8 GB per accelerator). Without GPipe, a single accelerator can train up to 82 million model parameters because of memory limitations; GPipe brought the intermediate activation memory down from 6.26 GB to 3.46 GB on a single accelerator. The researchers also measured the effect of GPipe on the throughput of AmoebaNet-D to test its efficiency, and found a near-linear speedup in training. GPipe also enabled training of 8-billion-parameter Transformer language models on 1024-token sentences, with a speedup of 11x.

[Figure: Speedup of AmoebaNet-D using GPipe]

Putting the accuracy of GPipe to the test

The researchers used GPipe to verify the hypothesis that scaling up existing neural networks can achieve better model quality. For this experiment, an AmoebaNet-B with 557 million model parameters and an input image size of 480 x 480 was trained on the ImageNet ILSVRC-2012 dataset. The model reached 84.3% top-1 / 97% top-5 single-crop validation accuracy without the use of any external data. The researchers also ran transfer learning experiments on the CIFAR-10 and CIFAR-100 datasets, where the giant models improved the best published CIFAR-10 accuracy to 99% and CIFAR-100 accuracy to 91.3%.

"We are happy to provide GPipe to the broader research community and hope it is a useful infrastructure for efficient training of large-scale DNNs", say the researchers. For more information, check out the official GPipe blog post.

Google researchers propose building service robots with reinforcement learning to help people with mobility impairment
Google AI researchers introduce PlaNet, an AI agent that can learn about the world using only images
Researchers release unCaptcha2, a tool that uses Google's speech-to-text API to bypass the reCAPTCHA audio challenge
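The scheduling idea behind pipeline parallelism can be sketched with a little arithmetic. The toy model below is an illustration of the concept, not GPipe's implementation, and the step counts are idealized: each stage is assumed to take exactly one time step per micro-batch.

```python
def pipelined_steps(micro_batches: int, stages: int) -> int:
    # Stage s starts micro-batch m at time step m + s, so the last
    # micro-batch leaves the last stage after M + K - 1 steps.
    return micro_batches + stages - 1

def sequential_steps(micro_batches: int, stages: int) -> int:
    # Without overlap, every micro-batch passes through every stage
    # one after another: M * K steps in total.
    return micro_batches * stages

def bubble_fraction(micro_batches: int, stages: int) -> float:
    # Fraction of the pipelined schedule in which stages sit idle;
    # it shrinks as the mini-batch is split into more micro-batches.
    return (stages - 1) / (micro_batches + stages - 1)

# Splitting a mini-batch into 32 micro-batches over 8 accelerators:
print(pipelined_steps(32, 8), sequential_steps(32, 8))  # 39 vs 256
```

This is why splitting each mini-batch into more micro-batches improves utilization: the startup/drain "bubble" becomes a smaller fraction of the whole schedule.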


Cisco and Huawei Routers hacked via backdoor attacks and botnets

Savia Lobo
23 Jul 2018
5 min read
In today's world, organizations and companies go to great lengths to protect themselves from network breaches, yet even a pinhole is enough for attackers to intrude into a system. Last week, routers by Cisco and Huawei were attacked by two separate groups using different methods: Cisco's routers via backdoors, and Huawei's routers via a much older, already-known vulnerability.

An abnormal rise in Cisco router backdoors

In 2004, Cisco wrote the IETF proposal for a "lawful intercept" backdoor for its routers. The proposal stated that law enforcement teams could use the intercept to remotely log in to routers sold to ISPs and other large enterprises, allowing agents to wiretap IP networks. Agents are supposed to gain such access only via a court order or other legal access request.

Note: A backdoor is a mechanism that bypasses the normal authentication process for accessing a system or application. Some backdoors are legitimate and help, for instance, manufacturers regain lost passwords. But backdoors can also be used by attackers to remotely access systems without anyone on the system knowing it.

In 2010, however, an IBM security researcher warned that such a protocol would give malicious attackers an easy way to take over Cisco IOS routers, and that the ISPs using those routers could end up hacked as well. Undocumented backdoors were subsequently discovered in 2013, 2014, 2015, and 2017. According to Tom's Hardware, this year alone Cisco has recorded five different backdoors in its products, a security flaw for the company's routers:

• March brought two. The first was a hardcoded account with the username 'cisco', which would have allowed remote intrusion into more than 8.5 million Cisco routers and switches. The second was a hardcoded password in Cisco's Prime Collaboration Provisioning (PCP) software, which is used for remote installation of Cisco voice and video products.
• May revealed a backdoor in Cisco's Digital Network Architecture (DNA) Center, which enterprises use to provision devices across a network.
• In June, a backdoor account was found in Cisco's Wide Area Application Services (WAAS), a software tool for traffic optimization in wide area networks (WANs).
• The most recent backdoor, found this month, was in the Cisco Policy Suite, a software suite that lets ISPs and large companies manage a network's bandwidth policies. It gave an attacker root access to the network with no mitigations against it; it has since been patched in a Cisco software update.

The question these incidents raise is whether the backdoors were created accidentally or planted deliberately. Either way, their recurrence does not paint a good picture of Cisco as a responsible, reliable, and trustworthy network vendor for end users.

Botnet built in a day brings down Huawei routers

Researchers from NewSky Security spotted a new botnet last week which enslaved nearly 18,000 Huawei IoT devices within a day.

Note: Botnets are huge networks of enslaved devices and can be used to perform distributed denial-of-service (DDoS) attacks, send malicious packets of data to a device, and remotely execute code.

The most striking feature of this huge botnet is that it was built within a day, using a previously known vulnerability, CVE-2017-17215. The botnet was created by a hacker nicknamed Anarchy, says Ankit Anubhav, security researcher at NewSky Security. "It's painfully hilarious how attackers can construct big bot armies with known vulns," Anubhav said. Other security firms, including Rapid7 and Qihoo 360 Netlab, confirmed the existence of the new botnet after noticing a huge increase in scanning for Huawei devices. Anubhav states that the hacker showed him an IP list of victims, which has not been made public. He adds that exploit code for the same vulnerability was released publicly in January this year, and was used in the Satori and Brickerbot botnets as well as in other botnets based on Mirai (the Mirai botnets were used in 2016 to disrupt Internet services across the US on a huge scale). The NewSky researcher suspects that Anarchy may be the same hacker known as Wicked, who was linked to the creation of the Owari/Sora botnets. Moreover, Anarchy/Wicked told the researcher that they also plan to start scanning for the Realtek router vulnerability CVE-2014-8361 in order to enslave more devices. After such a warning from the hacker himself, what new security measures will be taken?

Read more about this Huawei botnet attack on ZDNet.

Is Facebook planning to spy on you through your mobile's microphones?
Social engineering attacks – things to watch out for while online
DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections


‘Ethical mobile operating system’ /e/, an alternative for Android and iOS, is in beta

Prasad Ramesh
11 Oct 2018
5 min read
Right now, Android and iOS are the most widely used mobile operating systems. Both are owned by giant corporations, and there are few other offerings in line with public interest, privacy, or affordability. Android is owned by Google and, given all the tracking it does, can hardly be called pro user privacy; iOS, from Apple, is a very closed OS, and not exactly affordable to the masses. Apart from some OSes still in the works, there is an OS called /e/, or eelo, from the creator of Mandrake Linux, focused on user privacy.

Some OSes in the works

Among the mobile OSes in progress are Tizen, which Samsung has released only on entry-level smartphones, an OS in the making by Huawei, and Google's Fuchsia. Fuchsia uses a new microkernel created by Google, called Zircon, instead of Linux. It too is in its early stages, and there is no clear indication of the purpose of building Fuchsia when Android is ubiquitous in the market. Google was fined $5B over Android antitrust earlier this year; maybe Fuchsia comes into the picture there. In response to the EU's decision, Sundar Pichai said that preventing Google from bundling its apps would "upset the balance of the Android ecosystem" and that the Android business model guaranteed zero charges for phone makers. That reads like a warning that Google may consider licensing Android to phone makers. Will the curtains close on Android over legal disputes? That does not seem very likely: Android smartphones, and Google's services on them, are a big source of income for Google, and the world does not seem ready to let go of the Android OS either. It has given the masses access to apps, information, and connectivity. Still, there is growing discontent among Android users, developers, and handset partners; whether that frustration will pivot enough to create a viable market for alternative mobile OSes, only time can tell. Either way, one OS, /e/ or eelo, is intent on displacing Android. It has made some progress, though it is not exactly an OS made from scratch.

What is eelo?

Unlike the OSes mentioned above, which are far from complete and owned by large corporations, eelo is free and open source. It is a fork of LineageOS with all the Google apps and services removed. But that's not all: it also ships a select few default applications, a new user interface, and several integrated online services. The /e/ ROM is in beta and can be installed on several devices, with more to be supported as contributors port and maintain it for different hardware. The ROM uses microG instead of Google's core apps, and Mozilla NLP to make geolocation available even when a GPS signal is not. eelo project leader Gaël Duval states: "At /e/, we want to build an alternative mobile operating system that everybody can enjoy using, one that is a lot more respectful of user's data privacy while offering them real freedom of choice. We want to be known as the ethical mobile operating system, built in the public interest." BlissLauncher is included, with original icons and support for widgets and automatic icon sizing based on screen pixel density. There are new default applications: a mail app, an SMS app (Signal), a chat application (Telegram), along with weather, notes, tasks, and maps apps. An /e/ account manager lets users choose a single /e/ identity ([email protected]) for all services, and the OS will receive OTA updates. The default search engine is searX, with Qwant and DuckDuckGo as alternatives. The team also plans to open a project in the personal-assistant area.

How has the market reacted to eelo?

Early testers seem happy with /e/, alternatively called eelo.

https://twitter.com/lowpunk/status/1050032760373633025
https://twitter.com/rvuong_geek/status/1048541382120525824

There are also some negative reactions from people who don't really welcome the new "mobile OS". A comment on Reddit by user JaredTheWolfy says: "This sounds like what Cyanogen tried to do, but at least Cyanogen was original and created a legacy for the community." Another comment, by user MyNDSETER, reads: "Don't trust Google with your data. Trust us instead. Oh gee ok and I'll take some stickers as well." Yet another Reddit user, zdakat, says: "I guess that's the android version of I made my own cryptocurrency! (by changing a few strings in Bitcoin source, or the new thing: by deploying a token on Ethereum)"

You can check out a detailed article about eelo on Hackernoon, and the /e/ website.

A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem
Microsoft Your Phone: Mirror your Android phone apps on Windows
Android Studio 3.2 releases with Android App Bundle, Energy Profiler, and more!


A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer

Prasad Ramesh
02 Oct 2018
2 min read
An open-source libre GPU project is in the works by Luke Kenneth Casson Leighton, the hardware engineer who developed the EOMA68, an earth-friendly computer. The project already has access to $250k USD in funding. The basic idea for this "libre GPU" is to use a RISC-V processor, with the GPU mostly software-based: it will leverage the LLVM compiler infrastructure and a software-based Vulkan renderer to emit code that runs on the RISC-V processor. The Vulkan implementation will be written in the Rust programming language. The project's current road-map covers only the software side: figuring out the state of the RISC-V LLVM back-end, writing a user-space graphics driver, and implementing the necessary bits for proposed RISC-V extensions like "Simple-V". While doing this, the team will start working out the hardware design and the rest of the project. The road-map is quite simplified for the arduous task at hand. The website notes: "Once you've been through the 'Extension Proposal Process' with Simple-V, it need never be done again, not for one single parallel / vector / SIMD instruction, ever again." The process will include creating a fixed-function 3D "FP to ARGB" custom instruction and a custom extension with special 3D pipelines. With Simple-V, there is no need to worry about how those operations would be parallelised. This is not a new concept; it is borrowed directly from VideoCore IV, which calls it "virtual parallelism". Combining RISC-V, Rust, LLVM, and Vulkan into one open-source project is an enormous effort on both the software and hardware ends, and it will be difficult even with the funding, given that this is a software-based GPU. It is worth noting that the EOMA68 project, started by Luke in 2016, raised over $227k USD from crowdfunding participants and hasn't shipped yet. To know more about this project, visit the libre risc-v website.

NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
AMD ROCm GPUs now support TensorFlow v1.8, a major milestone for AMD's deep learning plans
PyTorch-based HyperLearn Statsmodels aims to implement a faster and leaner GPU Sklearn

Microsoft announces the first public preview of SQL Server 2019 at Ignite 2018

Amey Varangaonkar
25 Sep 2018
2 min read
Microsoft made several key announcements at its Ignite 2018 event, which began yesterday in Orlando, Florida. The biggest of them all was the public preview availability of SQL Server 2019. With this new release of SQL Server, businesses will be able to manage their relational and non-relational data workloads in a single database management system.

What we can expect in SQL Server 2019

• SQL Server 2019 will run either on-premises or on the Microsoft Azure stack
• Microsoft announced the Azure SQL Database Managed Instance, which will allow businesses to port their databases to the cloud without any code changes
• New database connectors will allow organizations to integrate SQL Server with other databases such as Oracle, Cosmos DB, MongoDB, and Teradata
• SQL Server 2019 gets built-in support for popular open source Big Data processing frameworks such as Apache Spark and Apache Hadoop
• SQL Server 2019 will have machine learning capabilities, with support for SQL Server Machine Learning Services and Spark machine learning
• Microsoft also announced support for Big Data clusters managed through Kubernetes, the Google-incubated container orchestration system

With organizations steadily moving their operations to the cloud, Microsoft seems to have hit the jackpot with the integration of SQL Server and Azure services. Microsoft claims businesses can save up to 80% of their operational costs by moving their SQL databases to Azure. Also, given the rising importance of handling Big Data workloads efficiently, SQL Server 2019 will now be able to ingest, process, and analyze Big Data on its own, with built-in capabilities of Apache Spark and Hadoop, the world's leading Big Data processing frameworks. Although Microsoft hasn't hinted at an official release date yet, SQL Server 2019 is expected to be generally available in the next 3-5 months; that timeline could stretch or shrink depending on feedback from the tool's early adopters. You can try the public preview of SQL Server 2019 by downloading it from the official Microsoft website.

Read more

Microsoft announces the release of SSMS, SQL Server Management Studio 17.6
New updates to Microsoft Azure services for SQL Server, MySQL, and PostgreSQL
Troubleshooting in SQL Server


OpenJS Foundation accepts Electron.js in its incubation program

Fatema Patrawala
12 Dec 2019
3 min read
Yesterday, at Node+JS Interactive in Montreal, the OpenJS Foundation announced the acceptance of Electron into the Foundation's incubation program. The OpenJS Foundation provides vendor-neutral support for sustained growth within the open source JavaScript community. It is supported by 30 corporate and end-user members, including GoDaddy, Google, IBM, Intel, Joyent, and Microsoft. Electron is an open source framework for building desktop apps using JavaScript, HTML, and CSS; it is based on Node.js and Chromium. Electron is widely used in many well-known applications, including Discord, Microsoft Teams, OpenFin, Skype, Slack, Trello, and Visual Studio Code.

"We're heading into 2020 excited and honored by the trust the Electron project leaders have shown through this significant contribution to the new OpenJS Foundation," said Robin Ginn, Executive Director of the OpenJS Foundation. He added, "Electron is a powerful development tool used by some of the most well-known companies and applications. On behalf of the community, I look forward to working with Electron and seeing the amazing contributions they will make." Electron's cross-platform capabilities make it possible to build and run apps on Windows, Mac, and Linux computers. Initially developed by GitHub in 2013, the framework is today maintained by a number of developers and organizations. Electron suits anyone who wants to ship visually consistent, cross-platform applications fast and efficiently. "We're excited about Electron's move to the OpenJS Foundation and we see this as the next step in our evolution as an open source project," said Jacob Groundwater, Manager at ElectronJS and Principal Engineering Manager at Microsoft. "With the Foundation, we'll continue on our mission to play a prominent role in the adoption of web technologies by desktop applications and provide a path for JavaScript to be a sustainable platform for desktop applications. This will enable the adoption and development of JavaScript in an environment that has traditionally been served by proprietary or platform-specific technologies."

What this means for developers

Electron joining the OpenJS Foundation does not change how Electron is made, released, or used, and does not directly affect developers building applications with Electron. Even though Electron was originally created at GitHub, it is currently maintained by a number of organizations and individuals. In 2019, Electron codified its governance structure and invested heavily in formalizing how decisions affecting the entire project are made. The Electron team believes that having multiple organizations and developers investing in and collaborating on Electron makes the project stronger. Hence, lifting Electron out of ownership by a single corporate entity and moving it into a neutral foundation focused on supporting the web and JavaScript ecosystem is a natural next step as the project matures in the open source ecosystem. To know more about this news, check out the official announcement on the OpenJS Foundation website.

The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger
Node.js and JS Foundations are now merged into the OpenJS Foundation
Denys Vuika on building secure and performant Electron apps, and more


Daily Coping 31 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
2 min read
I started adding a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. All my coping tips are under this tag.

Today's tip is to plan some new acts of kindness to do in 2021. As I get older, I do try to spend more time volunteering and helping others rather than myself. I've had success, my children are adults, and I find fewer "wants" for myself than I feel the impetus to help others. I also hope more people feel this, perhaps at a younger age than I am. In any case, I have a couple of things I'd like to do in 2021:

• Random acts – I saw this in a movie or show recently: someone was buying a coffee or something small for a stranger once a week. I need to do that, especially if I get the chance to go out again.
• DataSaturdays – The demise of PASS means more support is needed for people who might want to run an event, so I need to be prepared to help others again.
• Coaching – I have been coaching kids, but they've been privileged kids. I'd like to switch to kids that lack some of the support and privileges of the kids I usually deal with. I'm hoping things get moving with sports again and I get the chance to talk to the local Starlings program.

The post Daily Coping 31 Dec 2020 appeared first on SQLServerCentral.


Minecraft is serious about global warming, adds a new (spigot) plugin to allow changes in climate mechanics

Sugandha Lahoti
23 Aug 2018
3 min read
Minecraft Server Java Edition has added a new (spigot) plugin which changes climate mechanics in the game. This plugin adds the concept of greenhouse gases (CO2) in the game world's atmosphere. According to a recent report, only 45 percent of Americans think that global warming will pose a serious threat in their lifetime, and just 43 percent say they worry about climate change. These figures are alarming because serious damages due to Global Warming are imminent. As such, games and other forms of entertainment services are a good approach to change these ideologies and make people aware of how serious the threat of Global warming is. Minecraft’s approach could not only spread awareness but also has the potential to develop personal accountability and healthy personal habits. What does the Minecraft plugin do? The Furnaces within the game emit CO2 when players smelt items. Every furnace burn causes a Contribution to emissions with an associated numerical value. The trees are designed to instantly absorb CO2 when they grow from a sapling. Every tree growth causes a Reduction from emissions with an associated numerical value. As CO2 levels rise, the global temperature of the game environment will also rise because of the Greenhouse Effect. The global temperature is a function of the net global carbon score. As the global temperature rises, the frequency and severity of negative climate damages increases. Players need to design a default model that doesn't quickly destroy worlds. Players are best off when they cooperate and agree to reduce their emissions. What are its features? Scoreboard and Economy Integration Carbon Scorecard, where each player can see their latest carbon footprint trends via command line. Custom Models, with configurable thresholds, probabilities, and distributions. Load data on startup, queue DB changes to be done asynchronously and at intervals, and empty queue on shutdown. How was the response? The new Minecraft plugin received mixed reviews. 
Some considered it a great idea for teaching in schools: “Global warming is such an abstract problem and if you can tie it to individual's behaviors inside a (small) simulated world, it can be a very powerful teaching tool.”

Others were not as happy. Some feel that Minecraft lacks the basic principle of conservation of matter and energy, which is where you start with ecology. As a Hacker News user pointed out, “I wish there was a game which would get the physical foundations right so that the ecology could be put on as a topping. What I imagine is something like a Civilization, where each map cell would be like 1 km2 and you could define what industries would be in that cell (perhaps even design the content of each cell). Each cell would contain a little piece of civilization and/or nature. These cells would then exchange different materials with each other, according to conservation laws.”

While there will always be room for improvement, we think Minecraft is setting the tone for what could become a movement within the gaming community to bring critical abstract ideas to players in a non-threatening and thought-provoking way. The gaming industry has always led technological innovations that then cascade to other industries. We are excited to see this real-world dimension becoming a focus area for Minecraft.

You can read more about the Minecraft plugin on its GitHub repo.

Building a portable Minecraft server for LAN parties in the park
Minecraft: The Programmer’s Sandbox
Minecraft Modding Experiences and Starter Advice
Bhagyashree R
12 Oct 2018
3 min read

Google releases Oboe, a C++ library to build high-performance Android audio apps

Yesterday, Google released the first production-ready version of Oboe, a C++ library for building real-time audio apps. One of its main benefits is the lowest possible audio latency across the widest range of Android devices. It is similar to AndroidX, but for native audio.

How Oboe works

Apps communicate with Oboe by reading and writing data to streams. The library facilitates the movement of audio data between your app and the audio inputs and outputs on your Android device. Apps pass data in and out by reading from and writing to audio streams, represented by the class AudioStream. A stream consists of the following:

Audio device: a hardware interface or virtual endpoint that acts as a source or sink for a continuous stream of digital audio data, for example a built-in mic or a Bluetooth headset.
Sharing mode: determines whether a stream has exclusive access to an audio device that might otherwise be shared among multiple streams.
Audio format: the format of the audio data in the stream.

The data that passes through a stream has the usual digital audio attributes, which developers must specify when defining a stream:

Sample format
Samples per frame
Sample rate

The sample formats allowed by Oboe are listed in a table in its documentation on GitHub.

What are its benefits

Oboe leverages the improved performance and features of AAudio on Oreo MR1 (API 27+) while maintaining backward compatibility on API 16+. Its benefits include:

You write and maintain less code: Oboe uses C++, allowing you to write clean and elegant code. With Oboe you can create an audio stream in just three lines of code, whereas the same thing requires 50+ lines with OpenSL ES.
Accelerated release process: As Oboe is supplied as a source library, bug fixes can be rolled out in a few days, as opposed to the Android platform release cycle.
Better bug handling and less guesswork: It provides workarounds for known audio bugs and has sensible default behaviour for stream properties.
Open source: It is open source and maintained by Google engineers.

To get started with Oboe, check out the full documentation and the code samples available on its GitHub repository. Also, read the announcement posted on the Android Developers Blog.

What role does Linux play in securing Android devices?
A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem
Google announces updates to Chrome DevTools in Chrome 71
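The audio format attributes described under "How Oboe works" (sample format, samples per frame, sample rate) together determine how much data a stream moves per second. A quick back-of-the-envelope calculation of that relationship (illustrative only — this models the arithmetic, not Oboe's C++ API):

```python
# Throughput of an audio stream, derived from the attributes a
# developer specifies when defining the stream. The byte widths
# below are standard PCM sample sizes; this is not an Oboe API.

SAMPLE_SIZES = {
    "I16": 2,    # 16-bit signed integer PCM
    "Float": 4,  # 32-bit float PCM
}

def bytes_per_frame(sample_format, samples_per_frame):
    """One frame holds one sample per channel (samples per frame)."""
    return SAMPLE_SIZES[sample_format] * samples_per_frame

def bytes_per_second(sample_format, samples_per_frame, sample_rate):
    """Frames are produced/consumed at the stream's sample rate."""
    return bytes_per_frame(sample_format, samples_per_frame) * sample_rate

# A stereo float stream at 48 kHz:
print(bytes_per_frame("Float", 2))          # 8
print(bytes_per_second("Float", 2, 48000))  # 384000
```

This is the kind of sizing that matters for latency: smaller buffers (fewer frames in flight) mean lower latency but a tighter real-time deadline for the app's audio callback.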
Vincy Davis
09 Jul 2019
4 min read

Introducing Photon Micro GUI: An open-source, lightweight UI framework with reusable declarative C++ code

Photon Micro is an open-source, lightweight and modular GUI library composed of fine-grained and flyweight ‘elements’. It uses declarative C++ code with a heavy emphasis on reuse to form deep element hierarchies. Photon has its own HTML5-inspired canvas drawing engine and uses Cairo as a 2D graphics library. Cairo supports the X Window System, Quartz, Win32, image buffers, PostScript, PDF, and SVG file output.

Joel de Guzman, the creator of Photon Micro GUI and the main author of the Boost.Spirit parser library, the Boost.Fusion library and the Boost.Phoenix library, says, “One of the main projects I got involved with when I was working in Japan in the 90s, was a lightweight GUI library named Pica. So I went ahead, dusted off the old code and rewrote it from the ground up using modern C++.”

(The original post shows the declarative gallery code used by the Photon Micro GUI client, and the warning dialog it pops up. Image source: Cycfi.)

Some highlights of Photon Micro GUI

Modularity and reuse are two important design aspects of Photon Micro GUI, supported by the following functionalities:

Share: a Photon Micro GUI element can be shared using std::shared_ptr.
Hold: holds a shared element somewhere in the view hierarchy.
Key_intercept: a delegate element that intercepts key presses.
Fixed_size: elements are extremely lightweight; fixed_size fixes the size of the contained element.
margin, left_margin: two of the many margins (including right_margin, top_margin, etc.) that add padding around the contained element. For example, margin adds 20 pixels all around the contained element, while left_margin adds 20 pixels of padding to separate the icon and the text box.
vtile, htile: vertical and horizontal fluid layout elements that allocate sufficient space to contained elements. This enables stretchiness, fixed sizing, and vertical and horizontal alignment, to place elements in a grid. Stretchiness is the ability of elements to stretch within a defined minimum and maximum size limit.
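The margin and tile elements above compose by wrapping: each wrapper adds its own padding to the size of the element it contains, which is how deep element hierarchies are built up from tiny parts. A small model of that composition idea (written in Python for brevity; the class names and the 20-pixel default are taken from the description above, but this is a conceptual sketch, not Photon's actual C++ API):

```python
# Toy model of composable layout elements: a leaf element has a fixed
# size, and margin wrappers grow the space the element occupies.
# Illustrative only -- Photon's real elements are declarative C++.

class Element:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def size(self):
        return (self.width, self.height)

class Margin(Element):
    """Adds padding all around the contained element (like `margin`)."""
    def __init__(self, inner, pad=20):
        self.inner, self.pad = inner, pad

    def size(self):
        w, h = self.inner.size()
        return (w + 2 * self.pad, h + 2 * self.pad)

class LeftMargin(Element):
    """Adds padding only on the left (like `left_margin`)."""
    def __init__(self, inner, pad=20):
        self.inner, self.pad = inner, pad

    def size(self):
        w, h = self.inner.size()
        return (w + self.pad, h)

# Wrappers nest to form deep hierarchies, as Photon elements do:
text_box = Element(100, 30)
wrapped = Margin(LeftMargin(text_box))
print(wrapped.size())  # (160, 70)
```

The design choice this models is the flyweight approach: each element does one tiny job and carries almost no state, so reuse comes from nesting many small wrappers rather than configuring one large widget.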
Guzman adds, “While it is usable, and based on very solid architecture and design, there is still a lot of work to do. First, the Windows and Linux ports are currently in an unusable state due to recent low-level refactoring.”

Some developers have shown interest in the elements of Photon Micro GUI.

https://twitter.com/AlisdairMered/status/1148242189354450944

A user on Hacker News comments, “Awesome, that looks like an attempt to replace QML by native C++. Would be great if there was a SwiftUI inspired C++ UI framework (well, of course C++ might not lend itself so well for the job, but I'm just very curious what it would look like if someone makes a real attempt).”

Some users feel that more work needs to be done to make the GUI more accessible and less skeuomorphic.

[box type="shadow" align="" class="" width=""]Skeuomorphism is a term most often used in graphical user interface design to describe interface objects that mimic their real-world counterparts in how they appear and/or how the user can interact with them (IDF).[/box]

One user says, “Too many skeuomorphic elements. He needs to take the controls people know and understand and replace them with cryptic methods that require new learning, and are hidden from view by default. Otherwise, no one will take it seriously as a modern UI.”

Another user on Hacker News adds, “don’t use a GUI toolkit like this, that draws its own widgets rather than using platform standard ones when developing a plugin for a digital audio workstation (e.g. VST or Audio Unit), as this author is apparently doing. Unless someone puts in all the extra effort to implement platform-specific accessibility APIs for said toolkit.”

For details about the other highlights, head over to Joel de Guzman’s post.
Apple showcases privacy innovations at WWDC 2019: Sign in with Apple, AdGuard Pro, new App Store guidelines and more
Google and Facebook allegedly pressured and “arm-wrestled” EU expert group to soften European guidelines for fake news: Open Democracy Report
Google I/O 2019 D1 highlights: smarter display, search feature with AR capabilities, Android Q, linguistically advanced Google lens and more
Natasha Mathur
17 Jan 2019
3 min read

Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)

Last October, MongoDB announced that it was switching to the Server Side Public License (SSPL). Now, news that Red Hat is removing MongoDB from Red Hat Enterprise Linux and Fedora over its SSPL license has been gaining attention.

Tom Callaway, University Outreach team lead at Red Hat, mentioned in a note earlier this week that Fedora does not consider MongoDB’s Server Side Public License v1 (SSPL) a free software license. He further explained that the SSPL is “intentionally crafted to be aggressively discriminatory towards a specific class of users. To consider the SSPL to be "Free" or "Open Source" causes that shadow to be cast across all other licenses in the FOSS ecosystem, even though none of them carry that risk”.

The first instance of Red Hat removing MongoDB happened back in November 2018 when its RHEL 8.0 beta was released. The RHEL 8.0 beta release notes explicitly mentioned that the reason behind the removal of MongoDB was the SSPL.

Apart from Red Hat, Debian also dropped MongoDB from the Debian archive last month due to similar concerns over the SSPL. “For clarity, we will not consider any other version of the SSPL beyond version one. The SSPL is clearly not in the spirit of the DFSG (Debian’s free software guidelines), let alone complimentary to the Debian's goals of promoting software or user freedom”, mentioned Chris Lamb, Debian Project Leader.

Also, Debian developer Apollon Oikonomopoulos mentioned that MongoDB 3.6 and 4.0 will be supported longer, but that Debian will not be distributing any SSPL-licensed software. He also mentioned how keeping the last AGPL-licensed version (3.6.8 or 4.0.3) without the ability to “cherry-pick upstream fixes is not a viable option”. That being said, MongoDB 3.4 will remain part of Debian as long as it is under the AGPL (MongoDB’s previous license).

MongoDB’s decision to move to the SSPL was prompted by cloud providers exploiting its open source code. The SSPL specifies an explicit condition that companies wanting to use, review, modify or redistribute MongoDB as a service would have to open source the software they use to do so. This, in turn, led to a debate in the industry and the open source community, as people started to question whether MongoDB is still open source.

https://twitter.com/mjasay/status/1082428001558482944

MongoDB’s adoption of the SSPL also forces companies to either go open source or choose MongoDB’s commercial products. “It seems clear that the intent of the license author is to cause Fear, Uncertainty, and Doubt towards commercial users of software under that license,” mentioned Callaway.

https://twitter.com/mjasay/status/1083853227286683649

MongoDB acquires mLab to transform the global cloud database market and scale MongoDB Atlas
MongoDB Sharding: Sharding clusters and choosing the right shard key [Tutorial]
MongoDB 4.0 now generally available with support for multi-platform, mobile, ACID transactions and more
Richa Tripathi
03 Aug 2018
2 min read

Grain: A new functional programming language that compiles to Webassembly

Grain is a strongly-typed functional programming language built for the modern web, leveraging the brilliant work done by the WebAssembly project. Unlike other languages used on the web today (like TypeScript or Elm) that compile into JavaScript, Grain compiles all the way down to WebAssembly, supported by a tiny JavaScript runtime that gives it access to web features WebAssembly doesn’t support yet. It was designed specifically to serve web developers. Its language features include:

No runtime type errors
Grain does not need any kind of type annotations. All the Grain code that developers write is thoroughly checked for type errors. Developers do not have to deal with runtime exceptions, achieving full type safety with none of the fuss.

Functional, but flexible
Grain is geared towards functional programming, but understands the web isn't as pure as we would like it to be. It lets you easily write what's appropriate for the scenario.

Embracing new web standards
Grain is built on top of WebAssembly, a brand-new technology that represents a paradigm shift in web development. WebAssembly is a bytecode format executed in a web browser. This allows an application to be deployed to any device with a compliant web browser, without having to go through any explicit installation steps.

TypeScript 3.0 is finally released with ‘improved errors’, editor productivity and more
Elm and TypeScript – Static typing on the Frontend
Tools in TypeScript
Savia Lobo
10 Jan 2019
5 min read

Using deep learning methods to detect malware in Android Applications

Researchers from the North China Electric Power University have recently published a paper titled ‘A Review on The Use of Deep Learning in Android Malware Detection’. The researchers highlight the fact that Android applications can be built not only by legitimate application developers, but also by malware developers with criminal intent, who design and spread malicious applications that disrupt the normal operation of Android phones and tablets, steal personal information and credential data, or, even worse, lock the phone and ask for ransom. In this paper, they explain how deep learning methods can be used as a countermeasure in Android malware detection to fight back against malware.

Android malware detection techniques

The researchers note that one critical point of mobile phones is that they are a sensor-based event system, which permits malware to respond to incoming SMS, position changes and so forth, increasing the sophistication required of automated malware-analysis techniques. Moreover, apps can use services and activities and integrate varied programming languages (e.g. Java and C++) in one application. Each application is analyzed in the following stages:

Static analysis

Static analysis screens parts of the application without actually executing them. It incorporates signature-based, permission-based and component-based analysis. The signature-based strategy extracts features and builds distinctive signatures to identify specific malware; hence, it falls short of recognizing variants or unidentified malware. The permission-based strategy inspects permission requests to distinguish malware. Component-based techniques decompile the app to extract and inspect the definitions and byte code connections of significant components (i.e. activities, services, etc.) to identify exposures. The principal drawbacks of static analysis are the lack of real execution paths and suitable execution conditions.
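The permission-based strategy described above can be illustrated with a toy scorer that flags apps requesting suspicious combinations of permissions. The permission strings are real Android permission names, but the weights and threshold are invented for illustration — real detectors learn these from labeled samples rather than hard-coding them:

```python
# Toy permission-based static analysis: score an app by the
# permissions its manifest requests. Weights/threshold are invented
# for illustration; production systems learn them from data.

SUSPICIOUS_WEIGHTS = {
    "android.permission.SEND_SMS": 3,
    "android.permission.READ_SMS": 2,
    "android.permission.READ_CONTACTS": 2,
    "android.permission.RECEIVE_BOOT_COMPLETED": 1,
    "android.permission.INTERNET": 1,
}

def risk_score(requested_permissions):
    """Sum the weights of every suspicious permission requested."""
    return sum(SUSPICIOUS_WEIGHTS.get(p, 0) for p in requested_permissions)

def classify(requested_permissions, threshold=5):
    """Flag the app when its cumulative risk crosses the threshold."""
    if risk_score(requested_permissions) >= threshold:
        return "suspicious"
    return "benign"

benign_app = ["android.permission.INTERNET",
              "android.permission.ACCESS_NETWORK_STATE"]
sms_trojan = ["android.permission.INTERNET",
              "android.permission.SEND_SMS",
              "android.permission.READ_CONTACTS"]

print(classify(benign_app))  # benign
print(classify(sms_trojan))  # suspicious
```

This sketch also makes the stated drawback concrete: because it never executes the app, it cannot tell whether SEND_SMS is actually used maliciously at runtime, only that the manifest requests it.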
Dynamic analysis

This technique involves executing the application on either a virtual machine or a physical device, which results in a less abstract view of the application than static analysis. The code paths executed during runtime are a subset of all available paths. The principal objective of the analysis is to achieve high code coverage, since every feasible event ought to be triggered to observe any possible malicious behavior.

Hybrid analysis

The hybrid analysis technique consolidates static features gathered from examining the application with dynamic features drawn while the application is running. This boosts the accuracy of identification, but its principal drawback is that it consumes the Android system's resources and takes a long time to perform.

Use of deep learning in Android malware detection

Currently available machine learning has several weaknesses, and some open issues related to the use of deep learning (DL) in Android malware detection include:

Deep learning lacks the transparency to provide an interpretation of the decisions its methods make; malware analysts need to understand how a decision was made.
There is no assurance that classification models built with deep learning will perform well under different conditions, on new data that does not match the training data.
Deep learning captures complex correlations between input and output features with no innate representation of causality.
Deep learning models are not autonomous and need continual retraining and rigorous parameter adjustment.

In the training phase, DL models are subject to data poisoning attacks, implemented simply by manipulating the training set and instilling data that makes a deep learning model commit errors. In the testing phase, the models are exposed to several attack types, including:

Adversarial attacks: the model's inputs are ones that an adversary has deliberately crafted to cause the model to make mistakes.
Evasion attack: the intruder exploits malevolent instances at test time to have them incorrectly classified as benign by a trained classifier, without having any impact on the training data. This can breach system integrity, with either a targeted or an indiscriminate attack.
Impersonate attack: this attack mimics data instances from targets. The attacker crafts particular adversarial instances such that existing deep learning-based models mistakenly classify original instances with different labels from the imitated ones.
Inversion attack: this attack uses the APIs exposed by machine learning systems to assemble basic information about the target system's models. It is divided into two types: white-box and black-box. A white-box attack implies that the attacker can freely access and download the learning models and other supporting data, while a black-box attack means the attacker only knows the APIs exposed by the learning models and some observations after providing input.

According to the researchers, hardening deep learning models against different adversarial attacks, and detecting, describing and measuring concept drift, are vital areas for future work in Android malware detection. They also mention that the limitations of deep learning methods, such as lack of transparency and not being autonomous, must be addressed to build more efficient models. To know more about this research in detail, read the research paper.
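The evasion attack above can be shown in miniature: perturb a malicious sample's feature vector just enough to push it across a trained classifier's decision boundary. This toy uses a hand-set linear model (weights and features are invented; it is not from the paper, which concerns deep models, but the gradient-following idea is the same):

```python
# Toy evasion attack on a linear classifier: nudge a "malicious"
# feature vector along the weight direction until the classifier
# calls it benign. Weights/features are invented for illustration.

# Linear model: score = w . x + b; score >= 0 -> "malware".
w = [2.0, -1.0, 1.5]
b = -1.0

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return "malware" if score(x) >= 0 else "benign"

def evade(x, step=0.1, max_iters=100):
    """Perturb x against the gradient until it is misclassified."""
    x = list(x)
    for _ in range(max_iters):
        if predict(x) == "benign":
            return x
        # For a linear model the gradient of the score w.r.t. x is w,
        # so stepping along -w lowers the score fastest.
        x = [xi - step * wi for xi, wi in zip(x, w)]
    return x

sample = [1.0, 0.5, 1.0]     # a malicious sample
print(predict(sample))        # malware
adversarial = evade(sample)
print(predict(adversarial))   # benign
```

Against a deep model the gradient is obtained by backpropagation instead of read off the weights, but the attack surface is identical, which is why the researchers single out adversarial hardening as future work.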
Researchers introduce a deep learning method that converts mono audio recordings into 3D sounds using video scenes
IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others
Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US