
Tech News - Data


Uber AI Labs introduce POET (Paired Open-Ended Trailblazer) to generate complex and diverse learning environments and their solutions

Savia Lobo
09 Jan 2019
3 min read
Yesterday, researchers at Uber AI Labs released the Paired Open-Ended Trailblazer (POET) algorithm, which pairs the generation of environmental challenges with the optimization of agents to solve those challenges. POET explores many different paths through the space of possible problems and solutions and, critically, allows these stepping-stone solutions to transfer between problems. The algorithm aims to generate new tasks, optimize solutions for them, and transfer agents between tasks to enable otherwise unobtainable advances.

The researchers have applied POET to create and solve bipedal walking environments, adapted from the BipedalWalker environments in OpenAI Gym that were popularized in a series of blog posts and papers by David Ha. Each environment Ei is paired with a neural network-controlled agent Ai that tries to learn to navigate through that environment. (An image in the original post depicts an example environment and its paired agent. Source: Uber Engineering.)

In this experiment, the POET algorithm aims to achieve two goals: (1) evolve the population of environments towards diversity and complexity, and (2) optimize agents to solve their paired environments. During a single run, POET generates a diverse range of complex and challenging environments, as well as their solutions. It also periodically performs transfer experiments to explore whether an agent optimized in one environment might serve as a stepping stone to better performance in a different environment. There are two types of transfer attempts:

- Direct transfer: agents from the originating environment are directly evaluated in the target environment.
- Proposal transfer: agents take one ES (evolution strategies) optimization step in the target environment.

(A diagram of the transfer mechanism appears in the original post. Source: Uber Engineering.)

By testing transfers to other active environments, POET harnesses the diversity of its multiple agent-environment pairs to its full potential, without missing any opportunities to gain an advantage from existing stepping stones. The researchers suggest that POET could invent radical new courses and solutions to them at the same time. It could similarly produce fascinating new kinds of soft robots for unique challenges it invents that only soft robots can solve, or generate simulated test courses for autonomous driving that both expose unique edge cases and demonstrate solutions to them.

In their blog, the researchers said they will release the source code soon, adding that "more exotic applications are conceivable, like inventing new proteins or chemical processes that perform novel functions that solve problems in a variety of application areas. Given any problem space with the potential for diverse variations, POET can blaze a trail through it".

Read more about the Paired Open-Ended Trailblazer (POET) in detail in its research paper. Here's a video that demonstrates the working of the POET algorithm: https://youtu.be/D1WWhQY9N4g
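To make the loop concrete, here is a minimal, runnable sketch of a POET-style main loop as described above; all three helpers (evaluate, es_step, mutate_env) are invented stubs standing in for the real simulator and ES optimizer, not Uber's implementation:

    import random

    def evaluate(agent, env):
        """Stub score of `agent` in `env`; deterministic per pair so that
        comparisons are stable. A real system runs the simulator here."""
        return random.Random(hash((agent, env))).random()

    def es_step(agent, env):
        """Stub for one evolution-strategies (ES) optimization step."""
        return agent + 1  # placeholder 'weight update'

    def mutate_env(env):
        """Stub: perturb environment parameters to get a child environment."""
        return env + random.choice([-1, 1])

    pairs = [(0, 0)]  # start with a single (environment, agent) pair
    for iteration in range(100):
        # 1) Evolve the environment population toward diversity/complexity.
        if iteration % 10 == 0 and len(pairs) < 8:
            env, agent = random.choice(pairs)
            pairs.append((mutate_env(env), agent))
        # 2) Optimize each agent in its paired environment.
        pairs = [(env, es_step(agent, env)) for env, agent in pairs]
        # 3) Transfer: adopt another pair's agent if it scores better here
        #    (direct transfer; the one-ES-step "proposal transfer" is
        #    omitted for brevity).
        for i, (env, agent) in enumerate(pairs):
            scores = {a: evaluate(a, env) for _, a in pairs}
            best = max(scores, key=scores.get)
            if scores[best] > scores[agent]:
                pairs[i] = (env, best)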


PostgreSQL wins ‘DBMS of the year’ 2018 beating MongoDB and Redis in DB-Engines Ranking

Amrata Joshi
09 Jan 2019
4 min read
Last week, DB-Engines announced PostgreSQL as the Database Management System (DBMS) of the year 2018, as it gained more popularity in the DB-Engines Ranking last year than any of the other 343 monitored systems.

Jonathan S. Katz, PostgreSQL contributor, said, "The PostgreSQL community cannot succeed without the support of our users and our contributors who work tirelessly to build a better database system. We're thrilled by the recognition and will continue to build a database that is both a pleasure to work with and remains free and open source."

PostgreSQL, which turns 30 this year, has won the DBMS title for the second time in a row. It has established itself as the preferred data store among developers and has been appreciated for its stability and feature set. Various systems in the DBMS market use PostgreSQL as their base technology, which in itself shows how well-established PostgreSQL is.

Simon Riggs, major PostgreSQL contributor, said, "For the second year in a row, the PostgreSQL team thanks our users for making PostgreSQL the DBMS of the Year, as identified by DB-Engines. PostgreSQL's advanced features cater to a broad range of use cases all within the same DBMS. Rather than going for edge case solutions, developers are increasingly realizing the true potential of PostgreSQL and are relying on the absolute reliability of our hyperconverged database to simplify their production deployments."

How the DB-Engines Ranking scores are calculated

To determine the DBMS of the year, the team at DB-Engines subtracted the popularity scores of January 2018 from the latest scores of January 2019. The team used the difference of these numbers rather than a percentage, because a percentage would favor systems with tiny popularity at the beginning of the year. (A toy version of this arithmetic appears at the end of this article.)

The popularity of a system is calculated from parameters such as the number of mentions of the system on websites and the number of mentions in the results of search engine queries; the team uses Google, Bing, and Yandex for this measurement. In order to count only relevant results, the team searches for the system name together with the term "database", e.g. "Oracle" and "database". The next measure, general interest in the system, uses the frequency of searches in Google Trends. The number of related questions and the number of interested users on well-known IT Q&A sites such as Stack Overflow and DBA Stack Exchange are also checked. The ranking also counts the number of job offers on the leading job search engines Indeed and Simply Hired, the number of profiles in professional networks such as LinkedIn and Upwork in which the system is mentioned, and the number of tweets in which the system is mentioned. The calculated result is a list of DBMSs sorted by how much they managed to increase their popularity in 2018.

1st runner-up: MongoDB

For 2018, MongoDB is the first runner-up; it previously won DBMS of the year in 2013 and 2014. Its growth in popularity has accelerated ever since, and it is the most popular NoSQL system. MongoDB keeps adding functionality that was previously outside the NoSQL scope. Last year, MongoDB added ACID support, which convinced a lot of developers to rely on it for critical data. With improved support for analytics workloads, MongoDB is now a good choice for a larger range of applications.

2nd runner-up: Redis

Redis, the most popular key-value store, took third place for DBMS of the year 2018; it was previously among the top three in 2014. It is best known as a high-performance, feature-rich key-value store. Redis provides a loadable modules system, which means third parties can extend its functionality. These modules offer a graph database, full-text search, time-series features, JSON data type support, and much more.
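As a toy illustration of the scoring method described above (the numbers here are invented, not DB-Engines' actual scores), the winner is the system with the largest raw score gain over the year, not the largest percentage gain:

    # Invented example scores, in DB-Engines' unit-less popularity points.
    scores_jan_2018 = {"PostgreSQL": 388.0, "MongoDB": 230.0, "Redis": 122.0}
    scores_jan_2019 = {"PostgreSQL": 466.0, "MongoDB": 287.0, "Redis": 149.0}

    # Raw difference, not percentage: a percentage would favour systems
    # that started the year with a tiny score.
    gains = {db: scores_jan_2019[db] - scores_jan_2018[db]
             for db in scores_jan_2018}

    for db, gain in sorted(gains.items(), key=lambda kv: kv[1], reverse=True):
        print("{}: +{:.1f}".format(db, gain))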


Researchers introduce a machine learning model where the learning cannot be proved

Prasad Ramesh
09 Jan 2019
4 min read
In a study published in Nature Machine Intelligence, researchers discovered that in some cases of machine learning it cannot be proved whether the system actually 'learned' something or solved the problem. The study explores learnability in machine learning.

Axioms leading to more axioms in arithmetic models

We already know that machine learning systems, and AI systems in general, are black boxes: you feed the system some data and you get some output or a trained system that performs some tasks, but you don't know how the system arrived at a particular solution. Now we have a published study from Ben-David et al. that shows learnability in machine learning can be undecidable.

In the 1930s, the Austrian logician Kurt Gödel showed that a set of axioms forming an arithmetic model leads to more axioms. In the following decades it was demonstrated that the continuum hypothesis can neither be proved nor refuted using standard mathematical axioms. The hypothesis states that no set of objects is larger in size than the integers but smaller in size than the real numbers.

What does this have to do with machine learning? In machine learning, algorithms are designed to improve their performance on certain tasks using the data they are trained on. Some problems, like facial recognition or recommendation engines, cannot be solved with regular linear programming but can be solved today by machine learning. Learnability can be defined: a system is considered learnable if the machine learning model can perform as well as the best predictor in a family of functions, under some reasonable constraints. Typically, learnability of a model can be explained by analysing dimensions, but this new research shows that this is not always the case. The focus of the research is a learning model introduced in the paper, estimating the maximum (EMX), which is similar to PAC learning. The authors discover a family of functions whose learnability in EMX cannot be proved with standard mathematics.

What is the EMX problem?

As described in the paper: "Let X be some domain set, and let F be a family of functions from X to {0, 1} (we often think of each function f∈F as a subset of X and vice versa). Given a sample S of elements drawn i.i.d. from some unknown distribution P over X, the EMX problem is about finding a function f ∈ F that approximately maximizes the expectation EP(f) with respect to P." (A compact restatement in symbols appears at the end of this article.)

The authors present an example problem: displaying specific ads to the most frequent visitors of a website, where the catch is that which visitors will visit the website is unknown. The EMX problem is then posed as a question: what is a learner's ability to find a function whose expected value is the largest? The authors show a relation between machine learning and data compression: if training data labelled by a function can be compressed, then the family from which the function originates has low complexity, and such a function is considered learnable.

Monotone compression

Algorithms can be used to compress data, and the paper introduces a new scheme called monotone compression. The authors show that this compression is suitable for describing the learnability of function families in the EMX problem, and that weak monotone compression is associated with the cardinality of particular infinite sets. Using the interval [0, 1] of real numbers, the results show that the finite subsets of [0, 1] have monotone compression and are therefore learnable in EMX, but only if the continuum hypothesis is true, which remains unprovable to date.

The problem is how you define learnability

In its concluding points, the paper offers an interesting perspective on why current machine learning models get off easy, without any questions about learnability. Or do they? The problem lies in how learnability is defined: as functions or as algorithms? Current standard definitions focus on the theoretical aspect without considering computational implications, an approach that levies a high cost when more general types of learning are involved.

You can read the research paper by Shai Ben-David and others about learnability being undecidable on the Nature journal website.
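For readers who want the quoted definition in symbols, here is a compact LaTeX restatement of the EMX objective (the accuracy parameters ε and δ are implicit in "approximately maximizes"):

    % Given a domain X, a family F of functions f : X -> {0,1}, and a sample
    % S of points drawn i.i.d. from an unknown distribution P over X, the
    % learner must output \hat{f} \in F such that
    \[
      \mathbb{E}_{P}[\hat{f}] \;\ge\; \sup_{f \in F} \mathbb{E}_{P}[f] - \varepsilon
      \qquad \text{with probability at least } 1 - \delta .
    \]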


GitHub now provides unlimited free private repos and a new GitHub Enterprise

Amrata Joshi
08 Jan 2019
4 min read
Yesterday, GitHub, a platform to build and share software, announced that it will give users of its free plan access to unlimited private repositories. Previously, only paying users could keep software projects hidden from the broader public and share them with just a handful of pre-defined collaborators. With this new update, developers can use GitHub for their private projects with up to three collaborators per repository, for free. A lot of developers want to use private repositories to apply for a job, try something out in private before releasing it publicly, or work on a side project; all of this is now possible. No changes have been made to public repositories: they are still free and include unlimited collaborators. https://twitter.com/natfriedman/status/1082345111566970880

A good indication for Microsoft?

This news sounds like a good sign for Microsoft, which closed its acquisition of GitHub last October, with former Xamarin CEO Nat Friedman as GitHub's new CEO. Though some developers were rather nervous about the deal, they eventually came to terms with it. GitHub's model for monetizing the service is also different from Microsoft's: since Microsoft focuses on getting larger enterprises to use the service rather than smaller teams, this change in strategy could give Microsoft a much better competitive position against rival services like Bitbucket and GitLab. Last June, during a Reddit AMA, GitHub's new chief Nat Friedman was asked if Microsoft ever planned to make private repositories free. Friedman said at the time, "It's too soon for me to know the answer to that. We want GitHub to be accessible to everyone in the world, and for everyone to have an opportunity to be a developer."

Great news for GitHub users

As private repositories on free accounts are limited to three collaborators per project, this might work well for smaller projects, like a team competing in a hackathon, but it isn't well suited to commercial usage. It could also be a bit risky for the company, as existing paid users might not be happy with this move. Earlier, users' incomplete projects were open to all; with this new update, users can easily keep them under private repos.

Will this affect GitHub's open culture?

Users have given mixed reactions to this news. Some wonder whether Microsoft will become more powerful after this latest move, as it will have a stronger social graph. One user commented on Hacker News, "Microsoft has both LinkedIn and GitHub, meaning they have the social graph of the government, business, and the technology spheres. That social graph is arguably even more valuable in terms of revenue opportunities than Facebook's. Direct revenue of LinkedIn and GitHub might as well be irrelevant." Another user commented, "And Microsoft is in good relationship with the government and agencies. Guess how valuable is that data for them. And guess what they want... Private code to know what people are working on."

The move has led a few paid users to stop paying, but most users are happy and excited about the news. https://twitter.com/slametan/status/1082507600447467526 As this news is already creating some buzz, the competition between GitHub and similar platforms is going to be tough. One user (from GitLab) commented, "I like to think that increased competition from us (GitLab) contributed to this change, we recently passed 10m repositories on GitLab.com. At GitLab we think that repositories will become a commodity and we're focussing on making a single application for the entire DevOps lifecycle. I think Microsoft will try to generate revenue with people using Azure more instead of paying for repos."

GitHub also announced a new product, GitHub Enterprise, which combines Enterprise Cloud and Enterprise Server. GitHub Inc. said in a post, "Organizations that want the flexibility to use GitHub in a cloud or self-hosted configuration can now access both at one per-seat price." These products can be securely linked via GitHub Connect, providing a hybrid option that helps developers work seamlessly.


Ethereum Classic suffered a 51% attack; developers deny, state a new ASIC card was tested

Prasad Ramesh
08 Jan 2019
3 min read
Yesterday there were discussions on Twitter about an Ethereum Classic 51% attack, a possible chain reorganization or double-spend attack. However, Ethereum Classic developers denied it and shed some light on the incident. Ethereum Classic is the original version of Ethereum, which suffered a major hack in 2016; the developers then forked it to create a new version where the hack was fixed, and this new version was called Ethereum. https://twitter.com/eth_classic/status/1082045223310483457

A 51% attack is when one or more parties control more than 50% of the compute power (hashrate) in the network. Such a party could mine a large number of blocks in the network, double spend coins, and reward themselves unfairly. Double spending is exactly what it sounds like: paying with the same coins twice. In a chain reorganization, one or more miners have significantly more hashrate than the rest of the network; such a miner can define a new transaction history on the network. (A back-of-the-envelope illustration appears at the end of this article.)

Etherchain tweeted that there was a successful 51% attack on Ethereum Classic. https://twitter.com/etherchain_org/status/1082329360948969472 Cryptocurrency exchange Coinbase published a post noting the same: "On 1/5/2019, Coinbase detected a deep chain reorganization of the Ethereum Classic blockchain that included a double spend. In order to protect customer funds, we immediately paused movements of these funds on the ETC blockchain. Subsequent to this event, we detected 8 additional reorganizations that included double spends, totaling 88,500 ETC (~$460,000)".

Amidst the confusion, fear, and falling ETC value, the Ethereum Classic team has responded to the incident. The latest update from official Ethereum Classic sources contradicts the Coinbase report: they said that this activity was selfish mining and not a 51% attack, and that 'no double spends were detected'. They said that an ASIC card manufacturer, Linzhi, was testing its new ethash machines, which had a power of 1,400/Mh. The tweet seems to have been removed, but its contents stated: "Regarding the recent mining events. We may have an idea of where the hashrate came from. ASIC manufacturer Linzhi confirmed testing of new 1,400/Mh ethash machines #projectLavaSnow – Most likely selfish mining (Not 51% attack) – Double spends not detected (Miner dumped blocks)". A more recent tweet from Ethereum Classic states that both the Coinbase and ASIC angles may be true. https://twitter.com/eth_classic/status/1082392663314202624

Currently, ETC is 18th by market cap, with a market capitalization of ~$540 million.
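To see why a true hashrate majority is decisive, here is a quick sketch of the attacker catch-up probability from the original Bitcoin whitepaper (an illustration added here, not part of the article): with attacker hashrate share q and honest share p = 1 - q, the chance of ever rewriting a chain that is z blocks ahead is (q/p)^z when q < p, and 1 when q >= p.

    def catch_up_probability(q, z):
        """Probability that an attacker with hashrate share q ever overtakes
        a chain that is z blocks ahead (Nakamoto 2008, section 11)."""
        p = 1.0 - q
        if q >= p:              # 51%+ of the hashrate: success is certain
            return 1.0
        return (q / p) ** z

    for q in (0.10, 0.30, 0.51):
        print(q, [round(catch_up_probability(q, z), 6) for z in range(1, 7)])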


Blockchain governance and uses beyond finance - Carnegie Mellon University podcast

Prasad Ramesh
08 Jan 2019
5 min read
Hosted by Will Hayes, principal engineer at the Software Engineering Institute (SEI), Carnegie Mellon University (CMU), a podcast aired last month about blockchain research at CMU, governance, and applications of blockchain beyond finance. The participants in the discussion were Dr. Eliezer Kanal from SEI, CMU, and Eugene Leventhal, a master's degree student at Heinz College, CMU.

What is the discussion about?

The main discussion is around these points: Is blockchain actually better than the currently existing solutions? If it is, what are the costs? Are there things that we can take from it to make existing solutions better? And can there be some central governance of blockchain even though it is a distributed system?

Blockchain at CMU

There is a lot of interest in blockchain at CMU; the university even has a Blockchain Group. Blockchain was discussed for digital currency even before it was called blockchain. Aside from all the digital currency and infinite coin offerings, the underlying technology in blockchain is a distributed ledger. Ledgers are present in various businesses: record keeping, health data, licenses, and so on. The power of blockchain comes from its distributed nature, where nothing is deleted and there is a lot of visibility into what is going on.

At the Software Engineering Institute, research is going on in two major areas: ensuring blockchain is a secure environment that people can operate in, which is difficult to do; and advising government, where there is confusion about what a blockchain use case is, with SEI trying to play the role of a trusted advisor.

Blockchain and open source

Blockchain can be somewhat compared to open source. In version control systems, users can go to a specific version and use it if the current version does not work for them, so the changes are publicly visible.

Blockchain beyond finance

Blockchain is democratized banking; there is no need for a central entity that clears the transactions. But this is the sole application that has thrived in the market. The application of blockchain as a distributed computer has not really picked up, and there is a general reluctance in the industry to pick up blockchain for other applications. It will be interesting to see research on potential applications that is pure research, not driven by a time-constrained goal to make money. The hype of making quick money on blockchain has started to die out, and interesting research is upcoming, for example on the proof-of-stake versus proof-of-work model. There has been a lot of discussion on using blockchain for name servers, DNS servers, and other infrastructure that underlies the entire Internet, and for public key infrastructure (PKI). "How can I both have the benefits that I could gain, and simultaneously enable technology, or enable a policy that requires me to not have something, which is fundamental to the technology." For this question there is no good answer yet; this is part of what makes blockchain such a fascinating field.

Central control of blockchain, blockchain governance?

Git is now offered by Microsoft, and Hayes says that this is how government organizations are getting access to it. Are there things happening in the blockchain world where the decision of some influential authority is opening up the ecosystem to a wider array of audiences? Even though the idea of blockchain is to be decentralized, it can benefit from some level of centralization. An example is Hyperledger: it has some blockchain concepts, but most of Hyperledger leans towards centralization.

Public visibility is good for audit purposes, but not if some personal information is attached to a transaction. GDPR comes into the picture as it includes the right to be forgotten; this is a clash. One of the older papers in which Dr. Kanal researches the legal aspects of blockchain argues that blockchain is the most vulnerable way to structure any kind of organization: there is maximum exposure, and fraud protection is not present. Blockchain as a truly decentralized platform has a challenge: where does the governance come from? If someone plays around with it, people on the outside are likely to be drawn away from using it, pointing out the flaws. Blockchain governance is a serious issue and a challenge to work on for the technology to be accepted by a wider audience.

Blockchain would make a lot of sense in areas where you can't trust your government, or where you don't have access to a bank but have access to a mobile phone; blockchain can be of help in such an environment. They also talk about zk-SNARKs, where the proof exists but there is zero knowledge of the transaction, and no interaction is required between the transaction prover and verifier. There is greater privacy in such a system, but no visible verifiability other than the set rules it may have.

These were the highlights of the main concepts of the podcast; for the full discussion you can view the podcast on YouTube.

GNU ed 1.15 released!

Savia Lobo
07 Jan 2019
2 min read
Last week, GNU ed, a line-oriented text editor, released GNU ed 1.15. GNU ed is used to create, display, modify, and otherwise manipulate text files, both interactively and via shell scripts. Red, a restricted version of ed, can only edit files in the current directory and cannot execute shell commands. Ed is the "standard" text editor and the original editor for Unix. For most purposes, however, it is superseded by full-screen editors such as GNU Emacs or GNU Moe.

Changes in GNU ed 1.15

- The list command has been fixed to print a backslash before every '$' character within the text.
- The address ',,' has been fixed to mean '$,$' instead of '1,$'.
- A 's' command that is part of a 'g' or 'v' command-list can again split a line by including a newline escaped with a backslash '\' in the replacement string. For this, the closing delimiter of the replacement string can't be omitted unless the 's' command is the last command in the list, because otherwise the meaning of the escaped newline would become ambiguous. (A short demonstration appears at the end of this article.)
- Due to a recent change in the POSIX standard, the 'c' command no longer accepts an address of 0, and the documentation for the 'i' command now explains that it treats address 0 as meaning "at the beginning of the buffer", instead of as a synonym for address 1.
- Minor fixes have been made to the manual.
- The configure script now accepts appending options to CFLAGS using the syntax 'CFLAGS+=OPTIONS'.

To know more about this release, visit GNU ed's email thread.
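Here is a small demonstration of the restored line-splitting behaviour, driving ed from a Python script; the exact ed script syntax is my reading of the changelog entry above, so treat it as an assumption rather than a verified recipe:

    import os
    import subprocess
    import tempfile

    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write("alpha two\nbeta two\n")
        path = f.name

    # Inside the g/two/ command-list, the s command's replacement contains a
    # newline escaped with a backslash, splitting every matching line in two.
    # The closing '/' of the replacement is kept (it may only be omitted when
    # 's' is the last command in the list).
    script = "g/two/s/two/2\\\n2/\n,p\nQ\n"
    result = subprocess.run(["ed", "-s", path], input=script,
                            capture_output=True, text=True)
    print(result.stdout)    # expected lines: "alpha 2", "2", "beta 2", "2"
    os.unlink(path)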


CES 2019: Top announcements made so far

Sugandha Lahoti
07 Jan 2019
3 min read
CES 2019, the annual consumer electronics show in Las Vegas, runs from Tuesday, Jan. 8 through Friday, Jan. 11. However, the conference unofficially kicked off on Sunday, January 6, followed by press conferences on Monday, Jan. 7. Over the span of these two days, a lot of companies showcased their latest projects and announced new products, software, and services. Let us look at the key announcements made by prominent tech companies so far.

Nvidia

Nvidia CEO Jensen Huang unveiled some "amazing new technology innovations." First, the company announced that over 40 new laptop models in 100-plus configurations will be powered by NVIDIA GeForce RTX GPUs. Turing-based laptops will be available across the GeForce RTX family, from RTX 2080 through RTX 2060 GPUs, said Huang. Seventeen of the new models will feature Max-Q design. Laptops with the latest GeForce RTX GPUs will also be equipped with WhisperMode, NVIDIA Battery Boost, and NVIDIA G-SYNC. GeForce RTX-powered laptops will be available starting Jan. 29 from the world's top OEMs. Nvidia also announced the first 65-inch 4K HDR gaming display, which will arrive in February for $4,999.

LG

LG Electronics, which has a major press release today, has already confirmed a variety of new products. These include LG's 2019 TVs with Alexa and Google Assistant support, 8K OLED, full HDMI 2.1 support and more; the LG CineBeam Laser 4K projector with voice control; new sound bars with Dolby Atmos and Google Assistant; and the LG Gram 17 and a new 14-inch 2-in-1.

Samsung

Samsung announced that its Smart TVs will soon be equipped with iTunes Movies & TV Shows and will support AirPlay 2 beginning Spring 2019. AirPlay 2 support will be available on Samsung Smart TVs in 190 countries worldwide. Samsung is also launching a new Notebook Odyssey to take PC gaming more seriously, posing a threat to competitors Razer and Alienware.

HP

HP announced the HP Chromebook 14 at CES 2019. It is the world's first AMD-powered Chromebook, running on either an AMD A4 or A6 processor with integrated Radeon R4 or R5 graphics. It has 4GB of memory, 32GB of storage, and support for Android apps from the Google Play Store. These models will start shipping in January, starting at $269.

More announcements:

- Asus launched a new 17-inch, 10-pound Surface Pro gaming laptop, the Asus ROG Mothership, and also announced the Zephyrus S GX701, the smallest and lightest 17-inch gaming laptop yet.
- Corsair's impressive compact gaming desktops come with Core i9 chips and GeForce RTX graphics.
- L'Oréal's newest prototype detects wearers' skin pH levels.
- Acer's new Swift 7 kills the bezel; it launches in May for $1,699 and is one of the thinnest and lightest laptops ever made.
- Audeze's motion-aware headphones will soon recreate your head gestures in-game.
- Whirlpool is launching a Wear OS app for its connected appliances, with simplified voice commands for both Google Assistant and Alexa devices.
- Vuzix starts selling its AR smart glasses for $1,000.
- Pico Interactive revealed the Pico G2 4K, an all-in-one 4K VR headset based on China's best-selling VR unit, the Pico G2. It is incredibly lightweight, powerful, and highly customizable for enterprise purposes; features include kiosk mode, hands-free controls, and hygienic design.

You can have a look at all the products that will be showcased at CES 2019.


Google’s secret Operating System ‘Fuchsia’ will run Android Applications: 9to5Google Report

Melisha Dsouza
04 Jan 2019
3 min read
Google's secret operating system in the works, a potential Android replacement, will use the Android runtime to run Android apps. On 2nd January, evidence for this was spotted by 9to5Google, which found a new change in the Android Open Source Project that will use a special version of ART to run Android applications. This feature would enable devices with Fuchsia (smart devices including mobile phones, tablets, computers, wearables, and other gadgets) to take advantage of Android apps in the Google Play Store.

Last month, the same site had reported two new Fuchsia-related repositories added to the Android Open Source Project (AOSP) manifest: "platform/prebuilts/fuchsia_sdk" and "device/google/fuchsia". In a new change posted to Android's Gerrit source code management, Google has included a README file that indicates what the latter repository is intended for. (A screenshot of the README appears in the original report. Source: 9to5Google.) The README means that Fuchsia will use a specially designed version of the Android Runtime to run Android applications, installable on any Fuchsia device using a .far file. Google has not listed the exact details on how Fuchsia will use the Android Runtime.

What we know about Project Fuchsia so far

According to a Bloomberg report, Google engineers have been working on this project for the past two years in the hope that Fuchsia will replace the now dominant Android operating system. Google started posting the code for this project two years ago and has been working on it ever since. Fuchsia is being designed to overcome the limitations of Android, with better voice interactions and frequent security updates for devices. In the software code posted online, the engineers built encrypted user keys into the system to ensure information is protected every time the software is updated.

Bloomberg stated the main aim of designing Fuchsia, according to people familiar with the project, as 'creating a single operating system capable of running all the company's in-house gadgets'. These include devices like Pixel phones and smart speakers, as well as third-party devices relying on Android and other systems like Chrome OS. Some engineers also told Bloomberg that they want to embed Fuchsia in connected home devices, like voice-controlled speakers, and then move on to larger machines such as laptops, ultimately aspiring to swap their system in for Android.

You can head over to ZDNet for more insights into this news. Alternatively, check out 9to5Google for more information on this announcement.


pandas will drop support for Python 2 this month with pandas 0.24

Prasad Ramesh
04 Jan 2019
2 min read
The next version of the Python library pandas, 0.24.0, will not support Python 2. pandas is a popular Python library widely used for data manipulation and data analysis, in areas like numerical tables and time-series data. Jeff Reback, pandas maintainer, tweeted on Wednesday: https://twitter.com/jreback/status/1080603676882935811

Many major Python libraries are removing Python 2 support

One of the first tools to drop support for Python 2 was IPython, in 2017. This was followed by Matplotlib and, more recently, NumPy. Other popular libraries like scikit-learn and SciPy will also remove support for Python 2 this year. Tools like Spyder and Pythran are also on the list.

Python 2 support ends in 2020

Core Python developers will stop supporting Python 2 no later than 2020. This move is meant to control fragmentation and save the workforce needed to maintain Python 2. Python 2 will no longer receive any new features, and all support for it will cease next year. As stated on the official website: "2.7 will receive bugfix support until January 1, 2020. After the last release, 2.7 will receive no support." Python 2 support was originally going to end in 2015 but was extended by five years in consideration of the user base.

Users seem to welcome the change to move forward, as a comment on Hacker News says: "Time to move forward. Python 2 is so 2010."
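For projects that must stay on Python 2 for now, the usual remedy is to pin the dependency below the cut-off; a small illustrative guard (the 0.24 boundary follows the announcement above, the rest is a sketch):

    # In requirements.txt a Python 2 project would pin:  pandas<0.24
    # A fail-fast check at import time:
    import sys

    if sys.version_info[0] < 3:
        from distutils.version import LooseVersion
        import pandas

        if LooseVersion(pandas.__version__) >= LooseVersion("0.24"):
            raise RuntimeError("pandas 0.24+ dropped Python 2 support; "
                               "pin pandas<0.24 or migrate to Python 3")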

Alibaba Cloud released Mars, a tensor-based framework for large-scale data computation

Savia Lobo
04 Jan 2019
2 min read
A few days ago, Alibaba Cloud announced the release of Mars, its tensor-based framework for large-scale data computation. Mars tensor provides a familiar interface like NumPy, a popular tool for most Python users, such as mathematicians, engineers, and those working in core scientific computing. Mars can scale in to a single machine and scale out to a cluster with hundreds of machines. Users can try Mars tensor with the following code:

    import mars.tensor as mt

    a = mt.random.rand(1000, 2000)
    (a + 1).sum(axis=1).execute()

According to a Medium post by Synced, "Mars can simply tile a large tensor into small chunks and describe the inner computation with a directed graph, enabling the running of parallel computation on a wide range of distributed environments, from a single machine to a cluster comprising thousands of machines." (A sketch of such a tiled computation appears at the end of this article.)

Xuye Qin, Alibaba Cloud senior engineer, highlighted Mars' performance: "Mars can complete the computation on a 2.25T-size matrix and a 2.25T-size matrix multiplication in two hours." Unlike NumPy, Mars gives users the ability to run matrix computation at a very large scale. Alibaba developers carried out a simple experiment to test Mars' performance; in the resulting benchmark graph, NumPy (represented by a red cross at the upper left) lags far behind Mars tensors, which come close to ideal performance values. (Source: Medium)

Mars supports a subset of NumPy interfaces, which includes:

- Arithmetic and mathematics: +, -, *, /, exp, log, etc.
- Reduction along axes (sum, max, argmax, etc.).
- Most of the array creation routines (empty, ones_like, diag, etc.). Mars not only supports creating arrays/tensors on GPU but also supports creating sparse tensors.
- Most of the array manipulation routines, such as reshape, rollaxis, concatenate, etc.
- Basic indexing (indexing by ints, slices, newaxes, and Ellipsis).
- Fancy indexing along a single axis with lists or NumPy arrays, e.g. x[[1, 4, 8], :5].
- Universal functions for elementwise operations.
- Linear algebra functions, including product (dot, matmul, etc.) and decomposition (cholesky, svd, etc.).

To know more about Mars in detail, visit its official GitHub page.
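A hedged sketch of the kind of tiled, large-scale computation described above; the chunk_size keyword is an assumption about Mars' chunking API rather than a documented guarantee, so check the project's README before relying on it:

    import mars.tensor as mt

    # Two large random matrices, tiled into 1000x1000 chunks; Mars describes
    # the computation as a directed graph over the chunks.
    a = mt.random.rand(20000, 20000, chunk_size=1000)   # chunk_size: assumed
    b = mt.random.rand(20000, 20000, chunk_size=1000)

    c = mt.dot(a, b).sum()   # lazy: this only builds the computation graph
    print(c.execute())       # execute() triggers the parallel computation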


Canadian court rules out Uber’s arbitration process; calls it “unconscionable” and “invalid”

Sugandha Lahoti
04 Jan 2019
3 min read
A top Canadian court has slammed Uber's arbitration process, allowing Uber drivers to turn to Canadian courts to resolve their disputes with Uber. According to Uber's previous policy, Uber drivers and employees had to resolve their complaints through an international mediation process in the Netherlands, which cost drivers US$14,500. In a ruling released on Wednesday, a panel of three judges of the Court of Appeal for Ontario concluded that this arbitration clause in Uber's driver services agreement was "unconscionable" and "invalid".

"It can be safely concluded that Uber chose this arbitration clause in order to favour itself and thus take advantage of its drivers who are clearly vulnerable to the market strength of Uber," the ruling said.

Uber considers its drivers contractual workers rather than employees, and hence denies them basic worker rights such as sick leave and minimum wage. Drivers protested and brought a proposed class-action lawsuit to declare drivers employees, not independent contractors. They demanded minimum wage, overtime, and vacation pay, claiming $400 million in damages. Uber argued that this lawsuit couldn't proceed in Canada due to the arbitration clause. A lower court agreed, but the panel of three appeal court judges reversed the decision. The court found the clause improper for two reasons. First, it is an illegal contracting-out of an employment standard under the Employment Standards Act. Second, the clause is unconscionable considering the inequality of bargaining power between Uber and its drivers.

"This decision confirms that employment laws actually matter in Ontario, and that you cannot deprive workers of their legal rights under the Ontario Employment Standards Act by sending them 6,000 km overseas to enforce those rights at exorbitant personal cost," lawyer Lior Samfiru, who represents the proposed class-action plaintiffs and is a partner at Samfiru Tumarkin LLP, told the Financial Post. "I think the message here is for companies … if you're going to operate in Ontario, if you're going to operate in Canada, you have to abide by our laws," Samfiru said. "You have to play by the same rules as everyone else."

Uber Canada has released a statement saying that it is currently reviewing the court's decision and is "proud to offer a flexible earning opportunity to tens of thousands of drivers throughout Ontario." This news first appeared on the Financial Post.


Introducing TXQR, data transfer via animated QR codes

Amrata Joshi
04 Jan 2019
3 min read
TXQR is a project for transferring data via animated QR codes. It is written in Go and uses fountain erasure codes. Ivan Daniluk, its creator and a software engineer, has shared his experience building TXQR, along with the results of using animated QR as a data transfer method.

QR codes

QR codes, a type of visual encoding, allow different error recovery levels, with almost 30% redundancy at the highest level. QR version 40 can encode up to 4,296 alphanumeric or 2,953 binary symbols. This raises two major issues: first, 3-4KB might just not be enough, and second, the more data in a QR code, the better the required image quality and resolution. So what if we need to transfer approximately ~15KB of data on average consumer devices? Using animated QR codes with dynamic FPS and size changes could possibly work.

The basic design of TXQR

A client chooses the data to be sent, generates an animated QR code, and shows it in a loop until all the frames have been received by the reader. The encoding is designed to allow any particular order of frames, as well as dynamic changes in FPS; if the reader is slower, it can display a message like "please decrease FPS on the sender." The protocol is simple: each frame starts with the prefix "NUM/TOTAL|" (where NUM and TOTAL are integer values for the current and total frames respectively), and the rest is the file content. The original data is encoded using Base64, so only alphanumeric data is actually encoded in the QR code. (A sketch of this framing appears at the end of this article.)

Gomobile

To get a .framework or .aar file to include in an iOS or Android project, one can write standard Go code and run gomobile bind. Users can refer to it as any regular library and get autocomplete and type information. Ivan built a simple iOS QR scanner in Swift, modified it to read animated QR codes, fed the decoded chunks into the txqr decoder, and displayed the received file in a preview window.

Fountain codes

TXQR is used for unidirectional data transfer via an animated sequence of QR codes. The original approach repeated the encoded sequence over and over until the receiver got the complete data, which led to long delays if the receiver missed even one frame. As an article by Bojtos Kiskutya suggested, LT (Luby Transform) codes can yield much better results for TXQR. LT codes are one implementation of the family of codes called fountain codes: a class of erasure codes that can produce a potentially infinite number of blocks from the source message blocks (K). The receiver can receive blocks from any point, in any order, with any erasure probability, and decoding can start as soon as slightly more than K different blocks have been received. The name comes from the analogy of encoded blocks as a fountain's water drops. Fountain codes are simple, and they solve critical problems by harnessing the properties of randomness, mathematical logic, and probability distribution tuning.

This article covered TXQR's basic design, the basics of animated QR codes, fountain codes, Gomobile, and more. To know more about the experimentation in detail, check out Ivan's GitHub.
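Here is a sketch of the simple framing protocol described above, written in Python rather than the project's Go; the 1-based frame numbering is an assumption, since the article does not specify it:

    import base64

    def encode_frames(data, chunk_size=100):
        """Split Base64-encoded data into 'NUM/TOTAL|payload' frames."""
        payload = base64.b64encode(data).decode("ascii")
        chunks = [payload[i:i + chunk_size]
                  for i in range(0, len(payload), chunk_size)]
        total = len(chunks)
        return ["{}/{}|{}".format(n + 1, total, c)
                for n, c in enumerate(chunks)]

    def decode_frames(frames):
        """Frames may arrive in any order; the prefix restores their order."""
        chunks, total = {}, None
        for frame in frames:
            header, chunk = frame.split("|", 1)
            num, total = (int(x) for x in header.split("/"))
            chunks[num] = chunk
        assert total is not None and len(chunks) == total, "frames missing"
        return base64.b64decode("".join(chunks[i]
                                        for i in range(1, total + 1)))

    data = b"hello, animated QR world" * 50
    assert decode_frames(list(reversed(encode_frames(data)))) == data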

The US Commerce Department plans to put export controls for certain emerging technologies like AI

Bhagyashree R
03 Jan 2019
3 min read
Last year, the US Commerce Department issued a notice, named the advance notice of proposed rulemaking (ANPRM), that lists emerging technologies on which export controls may be employed. The list includes fourteen categories, among them AI, quantum computing, robotics, advanced materials, and advanced surveillance technologies. The notice called for public opinion on the criteria by which emerging technologies essential to U.S. national security will be identified. The public was requested to submit comments on or before December 19, 2018, a deadline since extended to January 10, 2019. After identifying an emerging or foundational technology based on public comments, the Commerce Department will be authorized under the Export Control Reform Act (ECRA) of 2018 to establish "appropriate controls" on "emerging and foundational technologies".

Will employing export rules on emerging technologies be successful?

R. David Edelman, a technology policy researcher at the Massachusetts Institute of Technology, believes that it is nearly impossible to classify technologies as either military or commercial. He told The New York Times, "trying to draw a line between what is military and what is commercial is exceedingly difficult. It may be impossible."

Another point to note is that research on these ever-evolving technologies is often done collaboratively by scientists and engineers all over the world, so one cannot really claim that a product is entirely developed by America. Companies and researchers working on these technologies also open source their work in the hope that other researchers will further develop the tool or technology. This is why policy experts believe that these US restrictions will have very little effect on the progress of AI in China and other countries. The government is unlikely to restrict companies and universities from publishing the results of their AI research, but Greg Jaeger, a lawyer at the law firm Stroock & Stroock & Lavan who deals with export controls, told the NYT that the government could restrict foreign access to that information.

One Hacker News reader wondered, "I'm curious how export restrictions would affect open source projects like Tensorflow and PyTorch. Would they be forced to become closed source? Could the license just include a disclaimer: 'You're not allowed to use this if you're in one of the following countries: ...'? Would sites like Gitlab and Github be forced to implement per-repo geoblocking? Could they somehow be moved to ownership by a non-American entity that wasn't subject to such code? Does a US citizen contributing to a non-US open source ML project constitute a breach of export controls?"

The government could also put controls on the export of cloud-computing technology and computer chips related to artificial intelligence. Such restrictive rules could prevent researchers from other countries from working on certain technologies in the US; they might choose other destinations, such as Europe, instead. "It might be easier for people to just do this stuff in Europe," said Jason Waite, a lawyer with the firm Alston & Bird who specializes in international trade, in the NYT interview.


Tim Cook cites supply constraints and economic deceleration as the major reasons for Apple missing its earnings target

Sugandha Lahoti
03 Jan 2019
3 min read
Yesterday, Apple CEO Tim Cook wrote an open letter to Apple investors explaining why Apple missed its earnings target. Apple has also released revised guidance for its fiscal 2019 first quarter, with revenue lower than originally anticipated. The revised guidance states:

- Revenue of approximately $84 billion
- Gross margin of approximately 38 percent
- Operating expenses of approximately $8.7 billion
- Other income/(expense) of approximately $550 million
- Tax rate of approximately 16.5 percent before discrete items

Cook cited two major reasons for Apple's revenue decline. First, supply constraints blocked sales of certain Apple products during Q1, including Apple Watch Series 4, iPad Pro, AirPods, and MacBook Air. Second, Apple had initially expected economic weakness in some emerging markets, which turned out to have a significantly greater impact than originally projected. The major economic weakness was observed in Greater China. Cook wrote, "most of our revenue shortfall to our guidance, and over 100 percent of our year-over-year worldwide revenue decline, occurred in Greater China across iPhone, Mac and iPad." Traffic to Apple retail stores and channel partners in China also declined as the quarter progressed. He cited China's government-reported GDP growth during the September quarter and China's trade tensions with the United States as the major reasons for the slowing economic environment in China.

Apart from Greater China, iPhone revenue also declined in some other developed countries due to weak iPhone upgrades. Cook said that macroeconomic challenges in these markets were a key contributor to this trend, but he also pointed to "fewer carrier subsidies, US dollar strength-related price increases, and some customers taking advantage of significantly reduced pricing for iPhone battery replacements" as other factors impacting iPhone performance.

On the positive side, Apple set a new record in China for Services revenue. Services generated over $10.8 billion in revenue during the quarter, with Apple on track to double the size of this business from 2016 to 2020. The installed base of devices also grew over the last year: categories outside of iPhone (Services, Mac, iPad, Wearables/Home/Accessories) combined to grow almost 19 percent year-over-year, and the installed base of active devices grew by more than 100 million units in 12 months. Wearables spiked with almost 50 percent growth year-over-year. This growth was attributed to the new MacBook Air, Mac mini, and the new iPad Pro, while Apple Watch and AirPods were wildly popular among holiday shoppers.

Looking ahead, Apple expects to set all-time revenue records in several developed countries and to report a new all-time record for Apple's earnings per share. You may go through the entire letter from Tim Cook to Apple investors.