Tech News

PostgreSQL 12 progress update

Amrata Joshi
13 May 2019
2 min read
Last week, the team behind PostgreSQL released a progress update for the eagerly awaited PostgreSQL 12. The update covers performance improvements, better server configuration, indexes, recovery parameters, and much more.

This article was updated on 05.14.2019 to clarify that this was a progress update for PostgreSQL 12, not a software release.

What's going to be coming in PostgreSQL 12?

Performance

In PostgreSQL 12, Just-in-Time (JIT) compilation will be enabled by default. Memory consumption of COPY and function calls will be reduced, and search performance for multi-byte characters will also be improved.

Server configuration

Updates to server configuration will add the ability to enable/disable cluster checksums using pg_checksums. The default value of autovacuum_vacuum_cost_delay will be reduced to 2ms, and time-based server variables will be allowed to use microseconds.

Indexes

The speed of btree index insertions will be optimized. The new code also improves the space-efficiency of page splits, further reduces locking overhead, and gives better performance for UPDATEs and DELETEs on indexes with many duplicates.

Recovery parameters

PostgreSQL 12 should also allow recovery parameters to be changed with a reload. These parameters include archive_cleanup_command, promote_trigger_file, recovery_end_command, and recovery_min_apply_delay. It also allows the streaming replication timeout to be changed.

OID columns

The special behavior of OID columns will likely be removed, though columns will still be able to be explicitly specified as type OID. Operations on tables that have columns named OID will need to be adjusted.

Data types

The data types abstime, reltime, and tinterval look as though they'll be removed from PostgreSQL 12.

Geometric functions

Geometric functions and operators will be refactored to produce better results than are currently available. The geometric types will be restructured to handle NaN, underflow, overflow, and division by zero.

To learn more about what's likely to be coming in PostgreSQL 12, check out the official announcement.

Building a scalable PostgreSQL solution
PostgreSQL security: a quick look at authentication best practices [Tutorial]
How to handle backup and recovery with PostgreSQL 11 [Tutorial]
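For readers who want to verify settings like these once PostgreSQL 12 lands, here is a minimal sketch using the psycopg2 driver; the connection parameters are placeholders, and the snippet only assumes that jit and autovacuum_vacuum_cost_delay remain readable via SHOW, as they are in current releases:

    import psycopg2  # community PostgreSQL driver

    # Placeholder connection details; adjust for your own server.
    conn = psycopg2.connect(host="localhost", dbname="postgres", user="postgres")
    with conn.cursor() as cur:
        # SHOW reports the current value of a server configuration parameter.
        for setting in ("server_version", "jit", "autovacuum_vacuum_cost_delay"):
            cur.execute("SHOW " + setting)  # parameter names here are literals
            print(setting, "=", cur.fetchone()[0])
    conn.close()

On PostgreSQL 12 you would expect jit to report "on" by default, per the update above.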

Flutter gets new set of lint rules to build better Chrome OS apps

Sugandha Lahoti
13 May 2019
2 min read
Last week at Google I/O, the Flutter UI framework expanded from mobile to multi-platform, and the company released the first technical preview of Flutter for the web. On Friday, Google announced new updates to Flutter for building Chrome OS applications. Flutter tools allow developers to build and test their apps directly on Chrome OS.

New updates to Flutter for Chrome OS

Along with Flutter's seamless resizing feature, Flutter for Chrome OS comes with additional features such as scroll wheel support, hover management, and better keyboard event support.

The Flutter team also added a new set of lint rules to the Flutter tooling to catch violations of the most important Chrome OS best practice guidelines. These will help developers get a better idea of whether their Android app is going to run well on Chrome OS. In the IDE, or when running flutter analyze at the command line, developers can see lints if their Flutter app has issues targeting Chrome OS.

The lint rules can be turned on for a Flutter app by creating a file named analysis_options.yaml in the root of your Flutter project. The contents should look similar to this:

    include: package:flutter/analysis_options_user.yaml
    analyzer:
      optional-checks:
        chrome-os-manifest-checks

Developing Flutter for Chrome OS has the developer community excited.

https://twitter.com/mklin/status/1127001767873409025
https://twitter.com/timsneath/status/1126921052922081280
https://twitter.com/lehtimaeki/status/1103602179556937729

If you'd like to target Chrome OS with Flutter, you can do so today simply by installing the latest version of Flutter.

Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop
Google launches Flutter 1.2, its first feature update, at Mobile World Congress 2019
Google I/O 2019 D1 highlights: smarter display, search feature with AR capabilities, Android Q and more

Introducing SwiftWasm, a tool for compiling Swift to WebAssembly

Bhagyashree R
13 May 2019
2 min read
Attempts at porting Swift to WebAssembly have been going on for a long time, and now a team of developers has come up with SwiftWasm, which was released last week. With this tool, you can run your Swift code on the web by compiling it to WebAssembly.

https://twitter.com/swiftwasm/status/1127324144121536512

The SwiftWasm tool is built on top of the WASI SDK, a WASI-enabled C/C++ toolchain. This makes the WebAssembly executables generated by SwiftWasm work both in browsers and in standalone WebAssembly runtimes such as Wasmtime, Fastly's Lucet, or any other WASI-compatible WebAssembly runtime.

How you can work with SwiftWasm

While macOS does not need any dependencies to be installed, some dependencies need to be installed on Ubuntu and Windows.

On Ubuntu, install libatomic1:

    sudo apt-get install libatomic1

On Windows, first install the Windows Subsystem for Linux, and then install the libatomic1 library.

The next step is to compile your Swift source with SwiftWasm by running the following command:

    ./swiftwasm example/hello.swift hello.wasm

To run the resulting hello.wasm file, go to the SwiftWasm polyfill and upload the file. You will see the output in the textbox. The polyfill supports Firefox 66, Chrome 74, and Safari 12.1.

The news of a tool for running Swift on the web has many developers excited.

https://twitter.com/pvieito/status/1127620197668487169
https://twitter.com/johannesweiss/status/1126913408455053312
https://twitter.com/jedisct1/status/1126909145926569986

The project is still a work in progress and thus has some limitations. Currently, only the Swift stdlib is compiled; other libraries such as Foundation or SwiftPM are not included. A few functions, such as Optional.map, do not work because of calling-convention differences between throwing and non-throwing closures.

If you want to contribute to this project, check out its pull request on Swift's GitHub repository to learn more about its current status. You can try SwiftWasm on its official website.

Swift is improving the UI of its generics model with the "reverse generics" system
Swift 5 for Xcode 10.2 is here!
Implementing Dependency Injection in Swift [Tutorial]

GitHub announces beta version of GitHub Package Registry, its new package management service

Sugandha Lahoti
13 May 2019
3 min read
Update: At WWDC 2019, GitHub added support for Swift packages to GitHub Package Registry. Swift packages make it easy to share your libraries and source code across projects and with the Swift community.

Last Friday, GitHub announced a new package management service that allows developers and organizations to easily generate "packages" from their code. Called the GitHub Package Registry, the service lets developers publish public or private packages next to their source code.

https://twitter.com/github/status/1127261105963917312

"GitHub Package Registry is compatible with common package management clients, so you can publish packages with your choice of tools," explains Simina Pasat, Director of Product Management at GitHub, in the official announcement.

The GitHub Package Registry is available in limited beta for now; however, it will always be free to use for open source. The service is currently compatible with JavaScript (npm), Java (Maven), Ruby (RubyGems), .NET (NuGet), and Docker images, with support for other languages and tools to come.

Packages hosted on GitHub include detailed insights, such as download statistics and project/package history. Developers can publish multiple packages of different types for more complex repositories, and they can customize publishing and post-publishing workflows using webhooks and GitHub Actions.

GitHub Package Registry has unified identity and permissions, meaning packages on GitHub inherit the visibility and permissions associated with the repository. Organizations therefore no longer need to maintain a separate package registry and mirror permissions across systems; they can use a single set of credentials across code and packages, and manage access permissions with the same tools.

Developers are generally enthusiastic about the new GitHub venture. Here are some positive comments from a thread on Hacker News:

"This is really outstanding. GitHub Package Registry separates the registry from the artifact storage, which is the right way to do it. The registry should be quick to update because it's only a pointer. The artifact storage will be under my control. Credentials and security should be easier to deal with. I really hope this works out."

"This is pretty interesting. Github really is becoming the social network that MS never seemed to be able to create. We already use it as our portfolio of work for potential employers. We collaborate with fellow enthusiasts and maybe even make new friends. We host our websites from it. Abuse it to store binaries, too. And now, alongside source code, we can use it as a CDN of sorts to serve packages, for free. Sounds pretty great."

"It's a really nice project overall, having a GitHub Package Registry that supports many different projects and run by a company that today is good, is always nice."

GitHub deprecates and then restores Network Graph after GitHub users share their disapproval
Apache Software Foundation finally joins the GitHub open source community
Introducing Gitpod, a one-click IDE for GitHub
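As a rough illustration of what the npm workflow looks like under the beta, the following sketch is based on GitHub's beta documentation; OWNER and the package name are placeholders, and the exact endpoint and flow may change as the service matures. A project's .npmrc points npm at the GitHub registry:

    registry=https://npm.pkg.github.com/OWNER

The package name in package.json is then scoped to the repository owner (for example, "@OWNER/my-package"), and after authenticating against npm.pkg.github.com with a personal access token, a plain npm publish pushes the package next to its repository.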

Elvis Pranskevichus on limitations in SQL and how EdgeQL can help

Bhagyashree R
10 May 2019
3 min read
Structured Query Language (SQL), once considered "not a serious language" by its own authors, has become the dominant query language for relational databases in the industry. Its battle-tested solutions, stability, and portability make it a reliable choice for operating on your stored data. However, it does have its share of weak points, and that's what Elvis Pranskevichus, founder of EdgeDB, listed in a post titled "We Can Do Better Than SQL", published yesterday. He explained that we now need a "better SQL" and introduced the EdgeQL language, which aims to address SQL's limitations.

SQL's shortcomings

Following are some of the shortcomings Pranskevichus discusses in his post:

"Lack of Orthogonality"

Orthogonality is the property that changing one component has no side effects on any other component. In the case of a language, it means allowing users to combine a small set of primitive constructs in a small number of ways. Orthogonality leads to a more compact and consistent design; its absence leads to a language with many exceptions and caveats. Giving an example, Pranskevichus wrote, "A good example of orthogonality in a programming language is the ability to substitute an arbitrary part of an expression with a variable, or a function call, without any effect on the final result." SQL does not permit this kind of generic substitution.

"Lack of Compactness"

One of the side effects of not being orthogonal is a lack of compactness. SQL is also considered "verbose" because of its goal of being an English-like language catering to "non-professionals". "However, with the growth of the language, this verbosity has contributed negatively to the ability to write and comprehend SQL queries. We learnt this lesson with COBOL, and the world has long since moved on to newer, more succinct programming languages. In addition to keyword proliferation, the orthogonality issues discussed above make queries more verbose and harder to read," wrote Pranskevichus in his post.

"Lack of Consistency"

Pranskevichus further adds that SQL is inconsistent in terms of both syntax and semantics. There is also a standardization problem, as database vendors implement their own versions of SQL, which often end up being incompatible with other SQL variants.

Introducing EdgeQL

With EdgeQL, Pranskevichus aims to provide a language which is orthogonal, consistent, and compact, and which at the same time works with the generally applicable relational model. In short, he aims to make SQL better! EdgeQL treats every value as a set and every expression as a function over sets. This design allows you to factor any part of an EdgeQL expression into a view or a function without changing other parts of the query. It has no null; a missing value is considered an empty set, which comes with the advantage of having only two boolean logic states.

Read Pranskevichus's original post for more details on EdgeQL.

Building a scalable PostgreSQL solution
PostgreSQL security: a quick look at authentication best practices [Tutorial]
How to handle backup and recovery with PostgreSQL 11 [Tutorial]

‘Tableau Day’ highlights: Augmented Analytics, Tableau Prep Builder and Conductor, and more!

Savia Lobo
10 May 2019
4 min read
The Tableau community held a Tableau Day in Mumbai, India, yesterday, announcing some exciting upcoming developments in Tableau. Highlights of the day included an in-depth look at the new Tableau Prep Builder and Conductor, how Tableau plans to move into Augmented Analytics, and more.

The conference also included a customer story from Nishtha Sharma, Manager at Times Internet, who shared how Tableau helped Times Internet optimize their sales, revenue, and cost per customer, and improve business predictions with the help of Tableau dashboards. She further said that Times Internet initially solved around 10 business problems with 7 dashboards; following that early success, they now address close to 30 business cases with 15 dashboards.

Let us have a look at some of the highlights.

Augmented Analytics: The next step for Tableau

Varun Tandon, a Tableau solution consultant, explained how Tableau is adopting intelligent, or augmented, analytics. Tableau may be moving toward augmented analytics across its platform, where ML and AI can be used to enhance data access and data quality, uncover previously hidden insights, suggest analyses, deliver predictive analytics, suggest actions, and handle many other tasks.

Many attendees came with questions and speculation about Tableau's acquisition of Empirical Systems last year, and whether Ask Data, Tableau's new natural language capability included in Tableau 2018.2, was a result of it. The representatives confirmed the acquisition and mentioned that Tableau plans to build analytics and augmented analytics into Tableau without the need for additional third-party add-ons. However, they did not clarify whether Ask Data was a result of the Empirical Systems acquisition. With Empirical's NLP module, Tableau users may more easily gain insights, make better data-driven decisions, and explore many more features without knowledge of data science or query languages.

Doug Henschen, a technology analyst at Constellation Research, explored in his report "Tableau Advances the Era of Smart Analytics" the smart features Tableau Software has introduced and is investing in, and how these capabilities will benefit Tableau customers.

Creating a single hub for data from various sources

The conference explained in detail, with examples, how Tableau can be used as a single hub for data coming from various sources such as NetSuite, Excel, Salesforce, and so on.

New features in Tableau Prep Builder and Conductor

Tableau's new Prep Builder and Conductor, which save massive amounts of data preparation time, were also demonstrated, with their new features explained in detail in a session conducted by Shilpa Bhatia, a customer consultant at Tableau Software.

Attendees asked whether Tableau Prep Builder and Conductor would replace ETL. The representatives said that Prep does a good job with data preparation, but users should not confuse it with ETL; they called Tableau Prep Builder and Conductor a "mini ETL". Tableau is shipping monthly updates since the tool is still evolving, and this will continue for the near future. A question was also asked about the ability to pull data from Prep into a Jupyter notebook for building data frames; this is not yet possible with Tableau Prep Builder and Conductor.

They said Prep is extremely simple to use; however, it is a resource-heavy tool, and a dedicated machine with more than 16 GB of RAM will be needed to avoid system lag on large datasets.

The self-service mode in Tableau

Jayen Thakker, a sales consultant at Tableau, explained how one can go beyond dashboards with Tableau. He said that with the help of Tableau's self-service mode, users can explore and build dashboards on their own, without having to wait for a developer to build them.

Upcoming Tableau versions

The conference also revealed that Tableau 2019.2 is currently in Beta 2 and is expected to be released next month, with a Beta 3 before the final release. Each release of Tableau includes around 100 to 150 changes; a couple were discussed, including spatial data functions (MakePoint and MakeLine) and next steps for moving beyond Ask Data toward advanced analytics and AI features. Tableau is also working on serving people who need more traditional reporting, the representatives mentioned.

To know more about the 'Tableau Day' highlights from Mumbai, watch this space or visit Tableau's official website.

Alteryx vs. Tableau: Choosing the right data analytics tool for your business
Tableau 2019.1 beta announced at Tableau Conference 2018
How to do data storytelling well with Tableau [Video]

After RHEL 8 release, users awaiting the release of CentOS 8

Vincy Davis
10 May 2019
2 min read
The release of Red Hat Enterprise Linux 8 (RHEL 8) this week has everyone waiting for the CentOS 8 rebuild to follow. The release of CentOS 8 will require a major overhaul of the installer, packages, packaging, and build systems so that they work with the newer OS. CentOS 7 was released back in 2014, weeks after RHEL 7.

So far, the CentOS team has its new build system set up and is currently working on the artwork. They still need to run multiple series of build loops to get all of the CentOS 8.0 packages built in a compatible fashion. There will then be an installer update, followed by one or more release candidates. Only after all of these steps will CentOS 8 finally be available to its users.

The RHEL 8 release has made many users excited for the CentOS 8 build. A user on Reddit commented, "Thank you to the project maintainers; while RedHat does release the source code anyone who's actually compiled from source knows that it's never push-button easy."

Another user added, "Thank you Red Hat! You guys are amazing. The entire world has benefited from your work. I've been a happy Fedora user for many years, and I deeply appreciate how you've made my life better. Thank you for building an amazing set of distros, and thank you for pushing forward many of the huge projects that improve our lives such as Gnome and many more. Thank you for your commitment to open source and for living your values. You are heroes to me."

So far, a release date has not been declared for CentOS 8, but a rough timeline has been shared. To read about the steps needed to make a CentOS rebuild, head over to the CentOS wiki page.

Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Soon, RHEL (Red Hat Enterprise Linux) won't support KDE
Red Hat released RHEL 7.6

Artist Holly Herndon releases an album featuring an artificial intelligence 'musician'

Richard Gall
10 May 2019
6 min read
The strange mixture of panic and excitement around artificial intelligence only appears to grow as the world uncovers new and more novel ways of using it. These waves of innovation feed continuing cycles of stories that have a habit of perpetuating misleading ideas about both the threats and the opportunities AI presents. It shouldn't be surprising, then, that there's a serious misunderstanding of what artificial intelligence really is and how it works; as Rowel Atienza told us last month, "we're still very far from robots taking over society." However, artist Holly Herndon (who, incidentally, is a researcher at Stanford) is getting listeners to think differently about artificial intelligence. On her latest album PROTO, which was released today, she uses it to augment and complement her music.

Holly Herndon's AI agent, Spawn

The special guest that makes PROTO remarkable is Spawn, an AI agent created by Herndon, her husband, and a software engineer. What makes Spawn particularly significant is that Herndon doesn't use it to replace or recreate something, but instead as something that exists alongside human activity and creativity.

How does Spawn work?

Spawn was 'trained' on the music that Herndon and her band were writing for the album. In essence, this makes it quite different from the way AI is typically used, in that it was developed around a new dataset, not an existing one. When we use existing datasets, and especially when we use them uncritically, without any awareness of how they reproduce or hide certain biases, the AI develops around those very biases. By learning from new 'data' that bears all the marks of Herndon's creative decision-making, however, Spawn almost becomes a 'creative' AI agent. If you listen to the album, it's not always easy to spot which parts are created by the artificial intelligence and which are made by human musicians.

This combination of creative 'sources' means Herndon's album forces us to ask questions about how we use AI and how it interacts with our lives. It quite deliberately engages with the conversation around ethics in AI that has been taking place across the tech industry over the last year or so.

https://open.spotify.com/album/3PkYFFSJTPxOhnSYBtyZsk?si=OgFCY5p4Tu2u2rK-3mFYjA

"The advent of sampling raised many questions about the ethical use of material created by others," Herndon wrote in a statement published on Twitter at the end of 2018, "but the era of machine legible culture accelerates and abstracts that conversation."

https://twitter.com/hollyherndon/status/1069978436851113985

What does Holly Herndon's album tell us about artificial intelligence?

PROTO raises a number of really important questions about artificial intelligence. First and foremost, it suggests that artificial intelligence isn't a replacement for human intelligence. Spawn isn't used to take jobs from any musicians, but rather extends what's sonically possible. It adds to their capabilities, giving their music a new dimension.

Furthermore, just as Herndon refuses to see artificial intelligence as something that can replicate human labor, or creativity, the album also points out the problems with that kind of understanding: the idea that AI can 'replicate' human intelligence at all. Instead, the album's merging of the human and the artificial is a way of exploring the weaknesses of artificial intelligence. This is a way of making AI more transparent. It opens up something that seems so seamless, and highlights the ways it doesn't quite work. It almost refracts rather than mimics the sound the human musicians make. As Herndon said in an interview with Jezebel publication The Muse, "the technology is impressive and it's cool but it's really early still. We really wanted to be honest about that and show its mistakes and show how kind of rough the technology is still because... it's more honest and more interesting, to allow it to have its own aesthetic."

https://www.youtube.com/watch?v=r4sROgbaeOs

Read next: Why an algorithm will never win a Pulitzer

The human side of AI technology

But the album does more than just present AI as a flawed tool that can complement human ingenuity. It also asks questions about ownership and creativity, using the technology as a way of tackling human questions like "what does it mean to create something?" and "who's even allowed to create things?" This matters when we consider that not only does someone control and own a given algorithm, as in literally owning the intellectual property, but someone also owns and controls the swathes of data that are, at a fundamental level, crucial to artificial intelligence being possible at all.

"The history of music and our shared, human, intellectual project that leads up to today, is a shared resource that we all tap into and we all learn from," Herndon also said in the interview with Jezebel. "So if an individual can just scrape that and then claim so much of that as their own because they hold the keys to this AI, and then they can recreate it, of course it's going to give people anxiety because there's an ethical issue with that."

Read next: Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms

Instrumental and aesthetic artificial intelligence

One of the main reasons artificial intelligence has become a buzzword is that it's a tool for industry. It has commercial value; it can improve efficiency by allowing us to do more with less. The value of an album like PROTO, even if it's not the sort of thing you'd usually listen to, is that it removes artificial intelligence from a context in which it is instrumentalized and puts it into one that's purely aesthetic. To make that clearer: it takes something we'd typically think about in a functional manner (is it working? is it doing what it's supposed to do?) and turns it into something whose very function is open to question.

If Herndon's album is able to do that in even the smallest way, then that can only be a good thing, right? And even if it doesn't, at least it sounds good...

Selenium 4 alpha quietly released on Maven Repository

Bhagyashree R
10 May 2019
3 min read
Last month, the team behind Selenium quietly slipped Selenium 4 alpha onto the Maven Repository without any official announcement. An alpha release means that developers can start testing the new updates but are not recommended to use it in production.

Selenium 4 is a major release, originally planned to ship by Christmas last year, as shared by Simon Stewart, one of the Selenium lead developers and the inventor of WebDriver, at Selenium Conference India 2018.

https://www.youtube.com/watch?v=ypmrrJmgM9U&feature=youtu.be

However, going by the status of the SeleniumHQ GitHub repository, we can expect more delay in this release. The situation is very similar to that of the Selenium 3.0 release: back in 2013, Stewart shared that Selenium 3.0 would be released by Christmas, and it ended up hitting the market three years after the announcement. "I did say Christmas, but I didn't specify what year," he joked in a webinar in 2016.

Following are some of the updates in the Selenium 4 alpha release:

Native support removed for Opera and PhantomJS

Starting from this release, the Opera and PhantomJS browsers are no longer supported natively, as the WebDriver implementations for these browsers are no longer under active development. Since Opera is built on top of the Chromium open source project, Opera users are recommended to test with Chrome. PhantomJS users can use Firefox or Chrome in headless mode.

Updates for W3C WebDriver spec compliance

Selenium 4 WebDriver will be completely standardized with the W3C specification. In line with this, the following changes are made in this release:

Changes to the Actions API

This release comes with a revamped Actions API to comply with the WebDriver specification. The Actions API serves as a low-level interface for providing virtualized device input to the web browser. Currently, Actions is only supported natively in Firefox. Users of other browsers can use this API by putting the Actions class into "bridge mode", which attempts to translate mouse and keyboard actions to the legacy API. Alternatively, users can continue using the legacy API via the 'lib/actions' module. Note, however, that the legacy API is deprecated and will be removed in a minor release once other browsers start supporting the new API.

Other changes

This release comes with support for all window manipulation commands. WebElement.getSize() and WebElement.getLocation() are replaced with a single method, WebElement.getRect(), and a new method, driver.switchTo().parentFrame(), has been added.

To read what else has been updated in this release, check out the change doc in the Selenium GitHub repository.

Selenium and data-driven testing: An interview with Carl Cocchiaro
How to work with the Selenium IntelliJ IDEA plugin
How to handle exceptions and synchronization methods with Selenium WebDriver API
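The method names above are the Java spellings; as a rough illustration, here is how the consolidated rectangle lookup and the parent-frame switch surface in the Python bindings. This is a sketch that assumes a chromedriver binary on the PATH and uses example.com as a stand-in page:

    from selenium import webdriver

    driver = webdriver.Chrome()  # assumes chromedriver is on the PATH
    try:
        driver.get("https://example.com")  # placeholder page
        heading = driver.find_element_by_tag_name("h1")
        # The consolidated getRect() surfaces as the .rect property:
        # one dict with x, y, width and height instead of separate
        # size and location lookups.
        print(heading.rect)
        # Python equivalent of Java's driver.switchTo().parentFrame():
        driver.switch_to.parent_frame()
    finally:
        driver.quit()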

Singapore passes controversial bill that criminalizes publishing “fake” news

Vincy Davis
10 May 2019
3 min read
Yesterday, Singapore passed a law criminalizing the publication of fake news, allowing the government to block and order the removal of such content. The bill, 'The Protection from Online Falsehoods and Manipulation Bill', was passed by a vote of 72-9 in the Singapore parliament. The law allows the government to demand corrections, order the removal of content, or block websites deemed to be propagating falsehoods contrary to the public interest.

Two months ago, Russia passed a new law allowing the government to punish individuals and online media for spreading "fake" news and information which disrespects the state. In recent months, other countries like France and Germany have also passed tough laws against fake news or hate speech. Singapore is ranked 151 out of 180 countries in this year's World Press Freedom Index.

What does the Bill cover?

'The Protection from Online Falsehoods and Manipulation Bill' gives the Singapore government the power to ban fake news that could be detrimental to Singapore or could influence elections. The government can demand the removal of such content or block it outright. Offenders could face a jail term of up to 10 years and hefty fines.

Last month, during a visit to Malaysia, Singapore Prime Minister Lee Hsien Loong said that "fake news was a serious problem and other countries including France, Germany and Australia were legislating to combat it". He added that Singapore's proposed laws "will be a significant step forward", and, "We've deliberated on this now for almost two years. What we have done has worked for Singapore, it is our objective to continue to do things which will work for Singapore."

Reactions to the Bill

Under the legislation, all of the Singapore government's ministers will be handed powers to demand corrections or order websites to be blocked if they are found to be propagating "falsehoods" contrary to the public interest. Very few people have praised the law, as many believe it will target free speech more than fake news.

Phil Robertson, deputy Asia director at Human Rights Watch, said, "Singapore's new 'fake news' law is a disaster for online expression by ordinary #Singaporeans, and a hammer blow against the independence of many online news portals they rely on to get real news about their country beyond the ruling People's Action Party political filter." He added, "You're basically giving the autocrats another weapon to restrict speech, and speech is pretty restricted in the region already."

Social media firms have strongly criticized the law, which would hurt freedom of speech by forcing platforms to censor users in order to avoid potential fines. Google, Facebook, and Twitter have voiced their reservations regarding the bill. According to Reuters, Google, which has its Asia headquarters in Singapore, said it was "concerned that this law will hurt innovation" and that "how the law is implemented matters."

Authorities around the world are of the opinion that laws to restrict fake news are the need of the hour. It would be good, though, if they first decided what is worse: some fake news on the web, or some big daddy deciding what is right for the people.

To know more details about the bill, read the released document.

Facebook hires top EFF lawyer and Facebook critic as Whatsapp privacy policy manager
Will Facebook enforce its updated "remove, reduce, and inform" policy to curb fake news and manage problematic content?
OpenAI's new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words

Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more

Amrata Joshi
10 May 2019
3 min read
Yesterday, the team at TensorFlow introduced TensorFlow Graphics. A computer graphics pipeline requires 3D objects and their positioning in the scene, a description of the material they are made of, lights, and a camera. This scene description is then interpreted by a renderer to generate a synthetic rendering. In contrast, a computer vision system starts from an image and tries to infer the parameters of the scene, predicting which objects are in the scene, what materials they are made of, and their three-dimensional position and orientation.

Developers usually require large quantities of data to train machine learning systems capable of solving these complex 3D vision tasks. Since labelling data is an expensive and complex process, it is better to have mechanisms for designing machine learning models that can comprehend the three-dimensional world while being trained without much supervision. Combining computer vision and computer graphics techniques lets us leverage the vast amounts of unlabelled data that are available. For instance, this can be achieved through analysis by synthesis, where the vision system extracts the scene parameters and the graphics system renders an image back based on them. If the rendering matches the original image, the vision system has accurately extracted the scene parameters. In this setup, computer vision and computer graphics go hand in hand, forming a single machine learning system, similar to an autoencoder, that can be trained in a self-supervised manner.

We will now explore some of the functionalities of TensorFlow Graphics.

Object transformations

Object transformations control the position of objects in space. In the library's example, the axis-angle formalism is used to rotate a cube: the rotation axis points up and the angle is positive, so the cube rotates counterclockwise. This task is also at the core of many applications, including robots that focus on interacting with their environment.

Modelling cameras

Camera models play a crucial role in computer vision, as they influence the appearance of three-dimensional objects projected onto the image plane. For more details about camera models and a concrete example of how to use them in TensorFlow, check out the Colab example.

Material models

Material models define how light interacts with objects to give them their unique appearance. Some materials, like plaster, reflect light uniformly in all directions, while others, like mirrors, do not. Users can now play with the parameters of the material and the light to develop a good sense of how they interact.

TensorBoard 3D

TensorFlow Graphics features a TensorBoard plugin to interactively visualize 3D meshes and point clouds. This makes visual debugging possible, helping assess whether an experiment is going in the right direction.

To know more about this news, check out the post on Medium.

TensorFlow 1.13.0-rc2 releases!
TensorFlow 1.13.0-rc0 releases!
TensorFlow.js: Architecture and applications
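To make the object-transformation idea concrete, here is a minimal sketch using the library's axis-angle module; the module path and the rotate signature follow the TensorFlow Graphics documentation at launch, so treat them as assumptions if the API has since moved. The printed value assumes eager execution:

    import numpy as np
    import tensorflow as tf
    from tensorflow_graphics.geometry.transformation import axis_angle

    # Rotate a point counterclockwise about the "up" (y) axis by 90 degrees.
    point = tf.constant([1.0, 0.0, 0.0])   # the point to transform
    axis = tf.constant([0.0, 1.0, 0.0])    # rotation axis pointing up
    angle = tf.constant([np.pi / 2.0])     # positive angle, in radians
    rotated = axis_angle.rotate(point, axis, angle)
    print(rotated)  # roughly [0.0, 0.0, -1.0]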

DifferentialEquations.jl v6.4.0 released with GPU support in ODE solvers, linsolve defaults, and much more!

Amrata Joshi
10 May 2019
3 min read
Yesterday, the team behind JuliaDiffEq released DifferentialEquations.jl v6.4.0, a suite for numerically solving differential equations in Julia. This release gives users the ability to run ODE solvers on the GPU, along with automated tooling for faster broadcast, matrix-free Newton-Krylov, better Jacobian reuse algorithms, memory use reduction, and more.

What's new in DifferentialEquations.jl v6.4.0?

Full GPU support in ODE solvers

With this release, the stiff ODE solvers allow expensive calculations, like those in neural ODEs or PDE discretizations, to utilize GPU acceleration. The release also allows the initial condition to be a GPUArray; the internal methods don't perform any indexing, so all computations take place on the GPU without data transfers.

Fast DiffEq-specific broadcast

This release comes with a broadcast wrapper that allows all sorts of information to be passed to the compiler in the differential equation solver's internals. This enables no-aliasing and sizing assumptions that are normally not possible, and lets the internals use a special @.. macro, which turns out to be faster than standard loops.

Smart linsolve defaults

This release comes with smarter linsolve defaults, which automatically detect the BLAS installation and utilize RecursiveFactorizations.jl to speed things up for ODEs. Users can have the linear solver automatically switch to a form that works for sparse Jacobians; even banded matrices and Jacobians on the GPU are now handled automatically.

Automated J*v products via autodifferentiation

Users can now easily use GMRES without constructing the full Jacobian matrix: directional derivatives in the direction of v are used to compute J*v directly.

Performance improvements

With this release, the performance of all implicit methods, like KenCarp4, has been improved. DiffEqBiological.jl can now handle large reaction networks, parse the networks much faster, and build Jacobians that utilize sparse matrices, though there is still plenty of room for improvement.

Partial neural ODEs

This release includes working examples of partial neural differential equations, that is, equations that have pre-specified portions. These allow for batched data and GPU acceleration.

Memory optimization

This release brings memory optimizations to low-memory Runge-Kutta methods for hyperbolic or advection-dominated PDEs. These methods now use the minimal number of registers required by the method, so large PDE discretizations can make use of DifferentialEquations.jl without loss of memory efficiency.

Robust callbacks

The team has introduced a ContinuousCallback implementation with increased robustness in double event detection.

To know more about this news, check out the official announcement.

The solvers – these great unknown
Moving Further with NumPy Modules
How to build an options trading web app using Q-learning

Jeff Bezos unveils space mission: Blue Origin’s Lunar lander to colonize the moon

Sugandha Lahoti
10 May 2019
5 min read
The world's wealthiest man and CEO of Amazon, Jeff Bezos, is aiming for space next. In a theatrical press event yesterday, Bezos revealed plans for his company Blue Origin to establish huge space colonies, starting with the moon. "It's time to go back to the moon, this time to stay," Bezos said.

https://twitter.com/JeffBezos/status/1126634588560748545

Bezos added that he wants people to have access to space travel because the Earth is "finite", so expanding into space will one day become a necessity to ensure humanity doesn't fall into stasis and rationing. His reasoning: "If we move out into the solar system, for all practical purposes, we have unlimited resources."

On stage at the invite-only presentation to media and space industry executives in Washington, DC, Bezos showed off Blue Origin's Blue Moon lander. Per Bloomberg, "the craft features a large internal spherical fuel tank and sits atop four landing pads. It's powered by liquid hydrogen, in part so it can be refueled from water ice on the moon's poles. Hydrogen fuel cells will power the device through the lunar night."

The Blue Moon lander can bring 3.6 metric tons to the lunar surface, according to Bezos. Blue Moon will weigh 33,000 pounds when loaded with fuel at lift-off from Earth, decreasing to about 7,000 pounds when it is about to land on the moon.

https://youtu.be/hmk1oHzvNKA

Bezos also unveiled the company's BE-7 rocket engine at the event. The engine will be test-fired for the first time this summer, Bezos said, and is largely made of "printed" parts. The BE-7 is a high-performance liquid hydrogen/liquid oxygen engine that powers the lander's six-minute descent. After it lands, Blue Moon will deploy a small rover.

https://twitter.com/Erdayastronaut/status/1126594675048157185

Bezos hopes missions can commence by 2024. The Trump administration had directed NASA to return to the moon by 2024 and to "accomplish this goal by any means necessary." "I love Vice President Pence's 2024 lunar landing goal," Bezos said, adding that Blue Origin can meet that timeline "because we started this three years ago."

"The kids here and their kids and grandchildren will build these colonies. My generation's job is to build the infrastructure so they will be able to. We are going to build a road to space," Bezos told the audience. "And then amazing things will happen. Then you'll see entrepreneurial creativity. Then you'll see space entrepreneurs start companies in their dorm rooms. That can't happen today."

Bezos is not alone: Elon Musk has also publicly announced his own plans for space colonization. Unlike Bezos, Musk is more of a Mars guy; he sees colonizing Mars as humanity's best "Plan B". The SpaceX CEO has outlined a bold plan to get people to the red planet by 2024.

These plans are surely awe-inspiring, but Dr. S.A. Applin, writing for Vice, argues that tech billionaires are building their tech utopias, pursuing their hobbies, and charting humanity's future without consulting us, and that we can do little to influence their decisions. Not only that, she argues, the government has effectively appointed these wealthy space titans as the ones to decide humanity's future. In April, the US House of Representatives passed an Act that says "outer space shall not be considered a global commons." "This means," states Applin, "unless we are in the United States, and incredibly wealthy, we aren't allowed to think about outer space—it belongs to the rich, which right now means SpaceX and Blue Origin."

People on Twitter had similar opinions after Bezos unveiled Blue Moon. Responding to Bezos' claim that these future space colonies would be like "Maui on its best day all year long. No rain, no storms, no earthquakes," Grady Booch, a scientist based in Maui, tweeted, "I live on Maui. And it is paradise. But we have yet to figure out how to not import 80% of our food and pretty much all our fuel, and to provide reasonable housing for the people who live here."

"Billionaires are so obsessed with space colonization," tweeted Natalie Shure, a Los Angeles-based writer and researcher. "They see it as a solution to climate change that doesn't involve confronting capitalism."

https://twitter.com/rodneylives/status/1126597616308035593

Loren Grush, senior science reporter at The Verge, who was present at the press event, tweeted minute-by-minute details of Bezos' presentation, including this statement from Bezos: "If we're out in the solar system, we can have a trillion humans in the solar system, which means we'd have 1000 Mozarts and 1000 Einsteins."

https://twitter.com/lorengrush/status/1126582358713483264

Others talked about how billionaires should consider addressing serious present-day issues, such as climate change, feeding the starving, ending bigotry and war, keeping kids from shooting up schools, and improving world healthcare, instead of pursuing space dreams.

https://twitter.com/TheLoveBel0w/status/1126597360258297856
https://twitter.com/mer__edith/status/1126648705594068995
https://twitter.com/OccuWorld/status/1126639744186769409
https://twitter.com/ZBDouglas/status/1126540720674877440

Katie Bouman unveils the first ever black hole image with her brilliant algorithm
Elon Musk's tiny submarine is a lesson in how not to solve problems in tech
4,520+ Amazon employees sign an open letter asking for a "company-wide plan that matches the scale and urgency of climate change"

Stefan Judis, a Twilio web developer, on responsible web development with HTTP headers

Bhagyashree R
10 May 2019
7 min read
As the web has evolved, its security needs have changed too. Today, as web applications become more lightweight, composed of loosely coupled services with a mesh of different dependencies, the onus is on web developers to build websites and applications that offer not only a better user experience but also better user security. There are many techniques that can be used to do this, and at this year's JavaScript fwdays, Twilio frontend developer Stefan Judis demonstrated how HTTP headers can add another layer of security and optimization to applications and websites.

Why do web developers need to think seriously about security?

The browser has in recent years become a popular target for attacks. Back in 2017, Equifax, a massive credit rating agency, announced that criminals had exploited a vulnerability in its website to access the personal information of 143 million American consumers. Last year, more than 4,000 websites in the US and UK were found serving the CoinHive crypto miner, a JavaScript script designed to mine cryptocurrency at the expense of users' CPU power, because of a "cryptojacking" attack.

Today, with so many open source packages available on package managers like npm, nobody really codes everything from scratch when developing an application. Last November, we saw the case of malicious code in the npm event-stream package: the attacker used social engineering tactics to become the maintainer of the event-stream package and then exploited that position to add a malicious package as a direct dependency.

Looking at these cases, developers have become more aware of the different ways of securing their work, like using a web application firewall, using web vulnerability scanners, securing the web server, and more. A good place to start is ensuring security through HTTP security headers, and that's what Judis explained in his talk.

https://www.youtube.com/watch?v=1jD7dBsg_Nw

What are HTTP headers?

HTTP, which stands for Hypertext Transfer Protocol, allows a client and a server to communicate. HTTP headers are the key-value pairs used to exchange additional information. When a client requests a resource, a request header is sent to the server along with the request, including particulars such as the browser version, the client's operating system, and so on. The server answers back with the resource along with a response header containing information like the type, date, and size of the file sent.

Some examples of HTTP headers

Below are some of the HTTP headers Judis demonstrated.

HSTS (HTTP Strict Transport Security)

HTTPS, the secure version of HTTP, ensures that we are communicating over a secure channel with the help of the Transport Layer Security (TLS) protocol or its predecessor, Secure Sockets Layer (SSL), for encryption. Beyond security, HTTPS is often a requirement for many new browser features, especially those required for progressive web apps. Though browser vendors do mark non-HTTPS sites as unsafe, we cannot really guarantee safety all the time. "Unfortunately, we're not browsing safe all the time. When someone wants to open a website they are not entering the protocol into the address bar (and why should they?). This action leads to an unencrypted HTTP request. Securely running sites then redirect to HTTPS. But what if someone intercepts the first unsecured request?," wrote Judis in a blog post.

To address such cases you can use the HSTS response header, which declares that your web server only takes HTTPS requests. You can implement it this way:

    Strict-Transport-Security: max-age=1000; includeSubDomains; preload

Before you implement it, do read this helpful advice shared by a web developer on Hacker News: "Be sure that you understand the concept of HSTS! Simply copy/pasting the example from this article will completely break subdomains that are not HTTPS enabled and preloading will break it permanently. I wish the authors made that more clear. Don't use includesubdomains and preload unless you know what you are doing."

CSP (Content Security Policy)

The security model of the web is based on the same-origin policy: a web browser will only allow a script on one page to access data from another page if both pages have the same origin. This policy is bypassed by attacks like cross-site scripting (XSS), in which malicious scripts are injected into trusted websites. You can use CSP to significantly minimize the risk and impact of XSS attacks. You can define a CSP using a meta element in your HTML or via HTTP headers:

    Content-Security-Policy: upgrade-insecure-requests

This directive (upgrade-insecure-requests) upgrades all HTTP requests to HTTPS requests. CSP offers a wide variety of policy directives that give you control over the sources from which a page is allowed to load resources. The script-src directive, which controls a set of script-related privileges for a specific page, can prove very helpful against XSS attacks. Other examples include img-src, media-src, object-src, and more.

Before you implement CSP, take into account the following advice from a Hacker News user: "CSP can be really hard to set up. For instance: if you include google analytics, you need to set a script-src and a img-src. The article does a good job of explaining you should use CSP monitoring (I recommend sentry), but it doesn't explain how deceptive it can be. You'll get tons of reports of CSP exceptions caused by browsers plugins that attempt to inject CSS or JS. You must learn to distinguish which errors you can fix, and which are out of your control. Modern popular frontend frameworks will be broken by CSP as they rely heavily on injecting CSS (a concept known as JSS or 'styled components'). As these techniques are often adopted by less experienced devs, you'll see many 'solutions' on StackOverflow and Github that you should set unsafe-inline in your CSP. This is bad advise as it will basically disable CSP! I have attempted to raise awareness in the past but I always got the 'you're holding it wrong' reply (even on HN). The real solution is that your build system should separate the CSS from JS during build time. Not many popular build systems (such as create-react-app) support this."

Despite these advantages, Judis highlighted in his talk that not many websites have put CSP to work; merely 6% are using it. "To see how many sites serve content with CSP I ran a query on HTTP Archive and found out that only 6% of the crawled sites use CSP. I think we can do better to make the web a safer place and to avoid our users mining cryptocurrency without knowing it," he wrote.

Cache-Control

Judis believes it is a web developer's responsibility to ensure that their website or web app is not eating up a user's data, to keep the "web affordable for everybody". One way to do that is with the Cache-Control header, which lets you define response caching policies and controls how long a browser can cache an individual response. Here is how you can define it:

    Cache-Control: max-age=30, public

These were some of the headers Judis highlighted. His article further explains the use of other headers like Accept-Encoding, Feature-Policy, and more. Go ahead and give it a read!

All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night
How to build a real-time data pipeline for web developers – Part 2 [Tutorial]
How to build a real-time data pipeline for web developers – Part 1 [Tutorial]
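As a rough illustration of how headers like these get attached in practice, here is a minimal sketch using Flask's after_request hook; Flask is an assumed framework choice, the values mirror the article's examples, and the HSTS includeSubDomains and preload flags are deliberately left off per the Hacker News advice quoted above:

    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "hello"

    @app.after_request
    def add_security_headers(response):
        # Values mirror the examples above; tune them for your own site.
        # includeSubDomains/preload omitted deliberately; see the HN advice.
        response.headers["Strict-Transport-Security"] = "max-age=1000"
        response.headers["Content-Security-Policy"] = "upgrade-insecure-requests"
        response.headers["Cache-Control"] = "max-age=30, public"
        return response

    if __name__ == "__main__":
        app.run()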

NeuVector announces new container risk reports for vulnerability exploits, external attacks, and more based on RedHat OpenShift integration

Savia Lobo
09 May 2019
3 min read
NeuVector, a firm that deals with container network security, yesterday announced new capabilities to help container security teams better assess the security posture of their deployed services in production. NeuVector now delivers an intelligent assessment of the risk of east-west attacks, ingress and egress connections, and damaging vulnerability exploits. An overall risk score summarizes all available risk factors and provides advice on how to lower the threat of attack, thus improving the score.

The service connection risk score shows how likely it is for attackers to move laterally (east-west) to probe containers that are not segmented by NeuVector firewall rules. The ingress/egress risk score shows the risk of external attacks or outbound connections commonly used for data stealing or for connecting to C&C (command and control) servers.

In an email written to us, Gary Duan, CTO of NeuVector, said, "The NeuVector container security solution spans the entire pipeline – from build to ship to run. Because of this, we are able to present an overall analysis of the risk of attack for containers during run-time. But not only can we help assess and reduce risk, we can actually take automated actions such as blocking network attacks, quarantining suspicious containers, and capturing container and network forensics."

With the Red Hat OpenShift integration, individual users can review the risk scores and security posture of the containers within their assigned projects. They can see the impact of their improvements to security configurations and protections as they lower risk scores and remove potential vulnerabilities.

Read also: Red Hat Summit 2019 Highlights: Microsoft collaboration, Red Hat Enterprise Linux 8 (RHEL 8), IDC study predicts $10 trillion global revenue by end of 2019, and more!

The one-click RBAC integration requires no additional coding, scripting, or configuration, and adds to other OpenShift integration points for admission control, image streams, OVS networking, and service deployments.

Fei Huang, CEO of NeuVector, said, "We are seeing many business-critical container deployments using Red Hat OpenShift. These customers turn to NeuVector to provide complete run-time protection for in-depth defense – with the combination of container process and file system monitoring, as well as the industry's only true layer-7 container firewall."

To learn more about this announcement, visit the official website.

Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Red Hat rebrands logo after 20 years; drops Shadowman
Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers