
Tech News


Why did last week’s Azure cloud outage happen? Here’s Microsoft’s Root Cause Analysis Summary.

Prasad Ramesh
12 Sep 2018
3 min read
Earlier this month, Microsoft Azure experienced problems that left users unable to access its cloud services. The outage in the South Central US region took several Azure services offline for U.S. users. The stated reason for the outage was "severe weather", and Microsoft conducted a root cause analysis (RCA) to find the exact cause. Many services went offline because a cooling system failure caused servers to overheat and shut themselves down.

What did the RCA reveal about the Azure outage

High-energy storms associated with Hurricane Gordon hit southern Texas near Microsoft Azure's data centers for South Central US. Several data centers experienced voltage fluctuations: lightning-induced electrical activity caused significant voltage swells, which in turn caused a portion of one data center to switch to generator power. The power swells also shut down the mechanical cooling systems despite surge suppressors being in place.

With the cooling systems offline, temperatures rose beyond the thermal buffer within the cooling system and exceeded the safe operational threshold, which initiated an automated shutdown of devices. The shutdown mechanism exists to preserve infrastructure and data integrity, but in this incident temperatures rose so quickly in some areas of the datacenter that hardware was damaged before a shutdown could be initiated. Many storage servers, along with some network devices and power units, were damaged.

Microsoft is taking steps to prevent further damage while the storms remain active in the area, switching the remaining data centers to generator power to stabilize the power supply. For recovery of the damaged units, the first step was to recover the Azure Software Load Balancers (SLBs) for the storage scale units.
The next step was to recover the storage servers and the data on them by replacing failed components and migrating data to healthy storage units while validating that no data was corrupted. The Azure website also states that "Impacted customers will receive a credit pursuant to the Microsoft Azure Service Level Agreement, in their October billing statement." A detailed analysis will be available on the Azure website in the coming weeks. For more details on the RCA and customer impact, visit the Azure website.

Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
Microsoft Azure's new governance DApp: An enterprise blockchain without mining
Microsoft Azure now supports NVIDIA GPU Cloud (NGC)

Google dissolves its Advanced Technology External Advisory Council in a week after repeat criticism on selection of members

Amrata Joshi
05 Apr 2019
3 min read
Last week Google announced the formation of the Advanced Technology External Advisory Council (ATEAC) to help the company with major issues in AI such as facial recognition and machine learning fairness. Only a week later, Google has decided to dissolve the council, according to reports by Vox.

In a statement to Vox, a Google spokesperson confirmed that the company has decided to dissolve the panel entirely. The company further added, "It's become clear that in the current environment, ATEAC can't function as we wanted. So we're ending the council and going back to the drawing board. We'll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics."

This news comes immediately after a group of Google employees criticized the composition of the council and urged the company to remove Kay Coles James, president of the Heritage Foundation, over her anti-trans and anti-immigrant views. James's presence on the council made other members uncomfortable as well. When a Twitter user asked Joanna Bryson whether she was comfortable serving on a board with James, she answered, "Believe it or not, I know worse about one of the other people."

https://twitter.com/j2bryson/status/1110632891896221696
https://twitter.com/j2bryson/status/1110628450635780097

A few researchers and civil society activists also voiced their opposition to the council's anti-trans and anti-LGBTQ associations. Alessandro Acquisti, a behavioural economist and privacy researcher, declined an invitation to join the council.

https://twitter.com/ssnstudy/status/1112099054551515138

Googlers also called for removing Dyan Gibbens, CEO of the drone technology company Trumbull Unmanned, from the board. She has previously worked on drones for the US military.
Last year, Google employees protested the company's work with the US military on drone technology as part of the so-called Project Maven. A number of employees resigned over it, and Google later promised not to renew Maven. On the ethics front, Google had also offered resources to the US Department of Defense for a "pilot project" to analyze drone footage with the help of artificial intelligence. The question that arises here is, "Are Googlers and Google's shareholders comfortable with the idea of their software being used by the US military?" President Donald Trump's meeting with Google CEO Sundar Pichai adds to the concern.

https://twitter.com/realDonaldTrump/status/1110989594521026561

Though this move by Google seems to be a victory for the more than 2,300 Googlers and supporters who signed the petition and took a stand against transphobia, it is still going to be a tough time for Google to redefine its AI ethics. The company might have spared itself this turmoil had it selected the council members more carefully.

https://twitter.com/EthicalGooglers/status/1113942165888094215

To know more about this news, check out the blog post by Vox.

Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council
Is Google trying to ethics-wash its decisions with its new Advanced Tech External Advisory Council?
Amazon joins NSF in funding research exploring fairness in AI amidst public outcry over big tech #ethicswashing

The Union Types 2.0 proposal gets a go-ahead for PHP 8.0

Bhagyashree R
11 Nov 2019
3 min read
Last week, the Union Types 2.0 RFC by Nikita Popov, a software developer at JetBrains, was accepted for PHP 8.0 with 61 votes in favor and 5 against. Popov submitted this RFC as a GitHub pull request to test whether that would be a better medium for RFC proposals in the future, and it got a positive response from many PHP developers.

https://twitter.com/enunomaduro/status/1169179343580516352

What the Union Types 2.0 RFC proposes

PHP type declarations allow you to specify the type of parameters and return values a function accepts. Though most functions accept parameters and return values of only one type, there are cases where they can be of multiple types. Currently, PHP supports two special union types. One is nullable types, which you can specify using the '?Type' syntax to mark a parameter or return value as nullable; in addition to the specified type, NULL can then be passed as an argument or return value. The other is 'array' or 'Traversable', which you can specify using the special iterable type.

The Union Types 2.0 RFC proposes support for arbitrary union types, specified using the syntax T1|T2|... Support for union types will enable developers to move more type information from 'phpdoc' into function signatures. Other advantages of arbitrary union types include earlier detection of mistakes and less boilerplate compared to 'phpdoc'. It also ensures that types are checked during inheritance and are available through reflection. This RFC does not contain any backward-incompatible changes; however, existing ReflectionType-based code will have to be adjusted to support processing code that uses union types.

The RFC for union types was first proposed 4 years ago by PHP open source contributors Levi Morrison and Bob Weinand.
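A classic motivating case for the T1|T2 syntax is PHP's strpos(), which returns either an int position or false. As a rough illustration only (not code from the RFC, and in Python rather than PHP), the same int-or-false shape can be written with Python's analogous union annotation:

```python
# Illustrative analogy: PHP's `int|false` return pattern (e.g. strpos)
# expressed with Python's Union annotation. The `find` helper here is
# hypothetical, not part of the RFC.
from typing import Union

def find(haystack: str, needle: str) -> Union[int, bool]:
    """Return the index of needle, or False when it is absent."""
    pos = haystack.find(needle)
    return pos if pos >= 0 else False

assert find("hello world", "world") == 6
assert find("hello world", "php") is False
```

The RFC makes this kind of signature expressible natively in PHP instead of only in a phpdoc comment.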
This new proposal has a few updates compared to the previous one, which Popov shared on the PHP mailing list thread:

Updated to specify interaction with new language features, like full variance and property types.
Updated for the use of the ?Type syntax rather than the Type|null syntax.
It only supports "false" as a pseudo-type, not "true".
Slightly simplified semantics for the coercive typing mode.

In a Reddit discussion, many developers welcomed this decision. A user commented, "PHP 8 will be blazing. I can't wait for it." Others felt this is a step backward: "Feels like a step backward. IMHO, a better solution would have been to add function overloading to the language, i.e. give the ability to add many methods with the same name, but different argument types," a user expressed.

You can read the Union Types 2.0 RFC to know more in detail, and follow the discussion about this RFC on GitHub.

Symfony leaves PHP-FIG, the framework interoperability group
Oracle releases GraphPipe: An open-source tool that standardizes machine learning model deployment
Connecting your data to MongoDB using PyMongo and PHP

Q# 101: Getting to know the basics of Microsoft’s new quantum computing language

Sugandha Lahoti
14 Dec 2017
5 min read
A few days back we posted about the preview of Microsoft's development toolkit with a new quantum programming language, simulator, and supporting tools. The development kit contains tools which allow developers to build their own quantum computing programs and experiments. A major component of the Quantum Development Kit preview is the Q# programming language. According to Microsoft, "Q# is a domain-specific programming language used for expressing quantum algorithms. It is to be used for writing sub-programs that execute on an adjunct quantum processor, under the control of a classical host program and computer."

The Q# programming language is foundational for any developer of quantum software. It is deeply integrated with Microsoft Visual Studio, which makes programming quantum computers easier for developers who are well-versed in Visual Studio. An interesting feature of Q# is that it supports a basic procedural model (read: loops and if/then statements) for writing programs. The top-level constructs in Q# are user-defined types, operations, and functions.

The type model

Q# provides several primitive types, such as the Qubit type and the Pauli type. The Qubit type represents a quantum bit, or qubit. A quantum computer stores information in the form of qubits, as both 1s and 0s at the same time. Qubits can either be tested for identity (equality) or passed to another operation; actions on qubits are implemented by calling operations in the Q# standard library. The Pauli type represents an element of the single-qubit Pauli group. The Pauli group on 1 qubit is the 16-element matrix group consisting of the 2 × 2 identity matrix and the Pauli matrices, each multiplied by a phase of ±1 or ±i. This type has four possible values: PauliI, PauliX, PauliY, and PauliZ.

There are also array and tuple types for creating new, structured types. It is possible to create arrays of tuples, tuples of arrays, tuples of sub-tuples, and so on.
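The count of 16 elements in the single-qubit Pauli group follows from the four matrices {I, X, Y, Z} combined with the four phases {±1, ±i}. A quick sketch in plain Python (an illustration, not Q# code) confirms the arithmetic:

```python
# The four base matrices, written as 2x2 tuples of complex numbers.
I = ((1, 0), (0, 1))
X = ((0, 1), (1, 0))
Y = ((0, -1j), (1j, 0))
Z = ((1, 0), (0, -1))

def scale(c, m):
    """Multiply every entry of matrix m by the scalar phase c."""
    return tuple(tuple(c * x for x in row) for row in m)

# {+1, -1, +i, -i} x {I, X, Y, Z} yields 16 distinct matrices:
# the single-qubit Pauli group.
group = {scale(p, m) for p in (1, -1, 1j, -1j) for m in (I, X, Y, Z)}
assert len(group) == 16
```

Q#'s Pauli type only names the four base values; the phases arise when the matrices are composed.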
Tuple instances are immutable, i.e., the contents of a tuple can't be changed once created. Q# does not include support for rectangular multi-dimensional arrays.

Q# also has user-defined types, which may be used anywhere. It is possible to define an array of a user-defined type and to include a user-defined type as an element of a tuple type:

newtype TypeA = (Int, TypeB);
newtype TypeB = (Double, TypeC);
newtype TypeC = (TypeA, Range);

Operations and functions

A Q# operation is a quantum subroutine: a callable routine that contains quantum operations. A Q# function is the traditional subroutine used within a quantum algorithm; it contains no quantum operations. You may pass operations or qubits to functions for processing; however, functions can't allocate or borrow qubits or call operations. Operations and functions are together known as callables.

A functor in Q# is a factory that produces a new operation from another operation. An important feature of functors is that they have access to the implementation of the base operation when defining the implementation of the new operation.

Comments

Comments begin with two forward slashes, //, and continue until the end of the line. A comment may appear anywhere in a Q# source file, including where statements are not valid. However, end-of-line comments in the middle of an expression are not supported, although the expression can be multi-lined. Comments can also begin with three forward slashes, ///; their contents are treated as documentation for the defined callable or user-defined type when they appear immediately before an operation, function, or type definition.

Namespaces

Q# follows the same rules for namespaces as other .NET languages. Every Q# operation, function, and user-defined type is defined within a namespace. However, Q# does not support nested namespaces.
Control flow

Control flow in Q# consists of the for loop, the repeat-until-success loop, the conditional statement, and the return statement.

For loop

Like the traditional for loop, Q# uses the for statement for iteration through an integer range. The statement consists of the keyword for, followed by an identifier, the keyword in, a Range expression, and a statement block:

for (index in 0 .. n-2) {
    set results[index] = Measure([PauliX], [qubits[index]]);
}

Repeat-until-success loop

The repeat statement supports the quantum "repeat until success" pattern. It consists of the keyword repeat, followed by a statement block (the loop body), the keyword until, a Boolean expression, the keyword fixup, and another statement block (the fixup):

using ancilla = Qubit[1] {
    repeat {
        let anc = ancilla[0];
        H(anc);
        T(anc);
        CNOT(target, anc);
        H(anc);
        let result = M([anc], [PauliZ]);
    } until result == Zero
    fixup {
        ();
    }
}

The conditional statement

Similar to the if-then conditional statement in most programming languages, the if statement in Q# supports conditional execution. It consists of the keyword if, followed by a Boolean expression and a statement block (the then block). This may be followed by any number of else-if clauses, each consisting of the keyword elif, a Boolean expression, and a statement block (the else-if block):

if (result == One) {
    X(target);
} else {
    Z(target);
}

Return statement

The return statement ends execution of an operation or function and returns a value to the caller. It consists of the keyword return, followed by an expression of the appropriate type and a terminating semicolon:

return 1;
return ();
return (results, qubits);

File structure

A Q# file consists of one or more namespace declarations. Each namespace declaration contains definitions for user-defined types, operations, and functions.

You can download the Quantum Development Kit here.
You can learn more about the features of the Q# language here.

All Docker versions are now vulnerable to a symlink race attack

Vincy Davis
29 May 2019
3 min read
Yesterday Aleksa Sarai, Senior Software Engineer at SUSE Linux GmbH, notified users that 'docker cp' is vulnerable to symlink-exchange race attacks, leaving all Docker versions vulnerable. The attack can be seen as a continuation of some 'docker cp' security bugs that Sarai had found and fixed in 2014. The attack was discovered by Sarai, "though Tõnis Tiigi (software engineer at Docker) did mention the possibility of an attack like this in the past (at the time we thought the race window was too small to exploit)", he added.

The basis of this attack is that FollowSymlinkInScope suffers from a fundamental TOCTOU (time-of-check to time-of-use) flaw. FollowSymlinkInScope is used to take a path and resolve it safely as though the process were inside the container. Once the full path is resolved, it is passed around and operated on later. If an attacker adds a symlink component to the path after the resolution but before it is operated on, that path component ends up being resolved on the host as root. Sarai adds, "As far as I'm aware there are no meaningful protections against this kind of attack. Unless you have restricted the Docker daemon through AppArmor, then it can affect the host filesystem".

Two reproducers of the issue have been attached: a Docker image containing a simple binary that does a RENAME_EXCHANGE of a symlink with "/", and a script that runs it in a loop against an empty directory hoping to hit the race condition. In both scripts, the user tries to copy a file to or from a path containing the swapped symlink. The run_write.sh script can overwrite the host filesystem in very few iterations. This is because Docker internally has a "chrootarchive" concept where the archive is extracted from within a chroot; however, Docker chroots into the parent directory of the archive target, which can be controlled by the attacker, making the attack more likely to succeed.
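The TOCTOU gap at the heart of the bug is easy to demonstrate in miniature. The sketch below (plain Python, not Docker's actual code) resolves a path at time-of-check, then exchanges the directory for a symlink before time-of-use, so the earlier resolution no longer describes what the path points at:

```python
# Minimal TOCTOU illustration (hypothetical, not Docker's code):
# a path resolved at time-of-check is swapped for a symlink before
# time-of-use, redirecting later operations to the host root.
import os
import tempfile

workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "data")
os.mkdir(target)

checked = os.path.realpath(target)   # time-of-check: a real directory

# The "attacker" wins the race here: the directory is exchanged
# for a symlink pointing at "/".
os.rmdir(target)
os.symlink("/", target)

# time-of-use: following `target` now lands on the host root,
# invalidating the earlier resolution.
assert os.path.realpath(target) == "/"
assert os.path.realpath(target) != checked
```

In Docker's case the swap is done atomically with RENAME_EXCHANGE, which is what makes the window exploitable in practice.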
In an attempt to come up with a better solution for this problem, Sarai is working on Linux kernel patches that will "add the ability to safely resolve paths from within a rootfs".

Users are concerned about all Docker versions being vulnerable, as 'docker cp' is a very popular command. A user on Reddit says, "This seems really severe, it basically breaks a lot of the security that docker is assumed to provide. I know that we're often told not to rely upon docker for security, but still. I guess trusted but unsecure containers where the attack is executed after startup are still safe, because the docker cp command has already been executed before the attack begins."

A user on Hacker News comments, "So from a reading of the advisory and pull request, this seems to affect a specific set of scenarios, where a malicious image is running. Not sure if there are other scenarios where this would hit as well. One to be aware of, but as with most vulnerabilities, good to understand how it can be exploited, when you're assessing mitigations"

To read more details of the notification, head over to Sarai's mailing list.

Angular 8.0 releases with major updates to framework, Angular Material, and the CLI
Canva faced security breach, 139 million users data hacked: ZDNet reports
SENSORID attack: Calibration fingerprinting that can easily trace your iOS and Android phones, study reveals

Google's translation tool is now offline - and more powerful than ever thanks to AI

Pravin Dhandre
13 Jun 2018
2 min read
Google has today rolled out its super-fast translation package in offline mode, delivering accurate and natural machine translations to users without a live internet connection. The team at Google worked for nearly two years to bring its neural machine translation (NMT) technology to the native Google Translate applications on smartphones. Using neural nets, the package provides instant, accurate, human-sounding translations for both Android and iOS users.

Previously, the offline translation tool worked by breaking down sentences and then translating every individual phrase. With AI-powered NMT technology, the app translates the whole sentence at once. NMT uses millions of translated examples collected from different sources, including books, documents, articles, and search engine results. This information is then used to work out how a given sentence can be formulated in a natural way that remains true to its intended context.

In addition, this offline feature is surprisingly compact: each language package is just 35 MB, so you'll be able to download it to your phone without using up all of your precious storage.

Google says that the package will be rolled out in 59 languages over the next couple of days, including European, Indian, and several other languages. At present, you can translate the following languages offline: Afrikaans, Albanian, Arabic, Belarusian, Bengali, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Kannada, Korean, Latvian, Lithuanian, Macedonian, Malay, Maltese, Marathi, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Slovak, Slovenian, Spanish, Swahili, Swedish, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Vietnamese and Welsh.
To use offline translations in your Google Translate app, browse to the Offline Translation settings and tap the symbol next to the language name; the package then gets downloaded. To learn more, check out the official announcement on the Google Blog.

FAE (Fast Adaptation Engine): iOlite's tool to write Smart Contracts using machine translation
How to auto-generate texts from Shakespeare writing using deep recurrent neural networks
Implement Named Entity Recognition (NER) using OpenNLP and Java

An IoT worm Silex, developed by a 14 year old resulted in malware attack and taking down 2000 devices

Amrata Joshi
28 Jun 2019
5 min read
This week, an IoT worm called Silex that targets Unix-like systems took down around 2,000 devices, ZDNet reports. The malware attacks by attempting a login with default credentials and, after gaining access, destroying the device's storage.

Larry Cashdollar, an Akamai researcher and the first to spot the malware, told ZDNet in a statement, "It's using known default credentials for IoT devices to log in and kill the system." He added, "It's doing this by writing random data from /dev/random to any mounted storage it finds. I see in the binary it's calling fdisk -l which will list all disk partitions. It then writes random data from /dev/random to any partitions it discovers."

https://twitter.com/_larry0/status/1143532888538984448

The worm then deletes the devices' firewall rules, removes their network configuration, and triggers a restart, leaving the devices bricked. Victims are advised to manually reinstall the device's firmware to recover. This attack is reminiscent of the BrickerBot malware that destroyed millions of devices in 2017.

Cashdollar told ZDNet, "It's targeting any Unix-like system with default login credentials." He further added, "The binary I captured targets ARM devices. I noticed it also had a Bash shell version available to download which would target any architecture running a Unix like OS." This means the malware might also affect Linux servers that have Telnet ports open and are secured with poor or widely-used credentials.

As per the ZDNet report, the attacks were carried out from a VPS server owned by a company operating out of Iran. Cashdollar said, "It appears the IP address that targeted my honeypot is hosted on a VPS server owned by novinvps.com, which is operated out of Iran."

With the help of NewSky Security researcher Ankit Anubhav, ZDNet managed to reach out to the Silex malware author, who goes by the pseudonym Light Leafon.
According to Anubhav, Light Leafon is a 14-year-old teenager responsible for this malware. In a statement to Anubhav and ZDNet, he said the project started as a joke but has now developed into a full-time project, and that he has abandoned the old HITO botnet for Silex.

Light also said that he plans to develop the Silex malware further and will add even more destructive functions: "It will be reworked to have the original BrickerBot functionality." He is also planning to add the ability to log into devices via SSH, apart from the current Telnet hijacking capability, and to give the malware the ability to use vulnerabilities to break into devices, as most IoT botnets do. Light said, "My friend Skiddy and I are going to rework the whole bot. It is going to target every single publicly known exploit that Mirai or Qbot load."

Light didn't give any justification for his actions, nor did he publish a manifesto the way the author of BrickerBot (who goes by the pseudonym Janit0r) did before the BrickerBot attacks. Janit0r framed the 2017 attacks as a protest against owners of smart devices that were constantly getting infected with the Mirai DDoS malware.

In a statement to ZDNet, Anubhav described the teenager as "one of the most prominent and talented IoT threat actors at the moment." He further added, "Its impressive and at the same time sad that Light, being a minor, is utilizing his talent in an illegal way."

People are surprised that a 14-year-old managed to pull this off and are equally worried about the consequences the kid might face. A user commented on Reddit, "He's a 14-year old kid who is a bit misguided in his ways and can easily be found. He admits to DDoSing Wix, Omegle, and Twitter for lols and then also selling a few spots on the net. Dude needs to calm down before it goes bad.
Luckily he's under 18 so really the worst that would happen in the EU is a slap on the wrist." Another user commented, "It's funny how those guys are like "what a skid lol" but like ... it's a 14-year-old kid lol. What is it people say about the special olympics…" A few others said that developers need to be more vigilant and take security seriously. Another comment reads, "Hopefully manufacturers might start taking security seriously instead of churning out these vulnerable pieces of shit like it's going out of fashion (which it is)."

To know more about this news, check out the report by ZDNet.

WannaCry hero, Marcus Hutchins pleads guilty to malware charges; may face upto 10 years in prison
FireEye reports infrastructure-crippling Triton malware linked to Russian government tech institute
ASUS servers hijacked; pushed backdoor malware via software updates potentially affecting over a million users

Cloudflare’s 1.1.1.1 DNS service is now available as a mobile app for iOS and Android

Melisha Dsouza
13 Nov 2018
2 min read
Earlier this year, Cloudflare launched its 1.1.1.1 DNS service, a resolver that makes DNS queries faster and more secure and that anyone can use free of charge. The day before yesterday, the company announced the launch of a 1.1.1.1 mobile app for iOS and Android.

DNS services translate a domain name like "google.com" into an IP address that routers and switches can understand. However, the DNS servers provided by ISPs are often slow and unreliable, and on a public internet connection other people can see what sites a user visits; this data can also be misused by an internet service provider. Cloudflare claims to combat these issues with its 1.1.1.1 service, which makes it easy to get a faster, more private internet experience.

Cloudflare's 1.1.1.1 app redirects all user apps to send DNS requests through a local resolver on the phone to its faster 1.1.1.1 server, and the traffic is encrypted to prevent third parties from spying on user data.

Features of the Cloudflare 1.1.1.1 mobile app

The app is open source.
It uses VPN support to push mobile traffic towards the 1.1.1.1 DNS servers and improve speed.
It prevents a user's carrier from tracking their browsing history and misusing it.
Cloudflare has promised not to track 1.1.1.1 mobile app users or sell ads. The company has retained KPMG to perform an annual audit and publish a public report. It also says most of the limited data collected is stored for only 24 hours.
Cloudflare claims that 1.1.1.1 is the fastest public DNS resolver, about "28 percent faster" than other public resolvers.

Compared to the desktop version, the mobile app is easy to use and navigate. Head over to the Cloudflare Blog to know more about this announcement, or download the app on iOS or Android to test it for yourself.
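Under the hood, "translating a domain name into an IP address" means sending a small binary query packet to a resolver such as 1.1.1.1 on UDP port 53. As a rough sketch of what that packet looks like (built by hand here and never actually sent; the helper name is our own, not Cloudflare's), per the standard DNS wire format:

```python
# Illustration: the DNS query packet a stub resolver would send to
# 1.1.1.1 for the A record of example.com. Built for inspection only,
# not transmitted.
import struct

def build_query(name, qtype=1, qclass=1, txid=0x1234):
    # Header: transaction ID, flags (RD set), 1 question, 0 answer/
    # authority/additional records.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN).
    return header + qname + struct.pack(">HH", qtype, qclass)

pkt = build_query("example.com")
assert pkt[:2] == b"\x12\x34"             # transaction ID
assert b"\x07example\x03com\x00" in pkt   # length-prefixed labels
```

A resolver like Cloudflare's receives exactly this shape of packet; the 1.1.1.1 app's contribution is routing and encrypting that traffic so the carrier can't read it.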
Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly
Cloudflare Workers KV, a distributed native key-value store for Cloudflare Workers
Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites

Julia v1.2 releases with support for argument splatting, Unicode 12, new star unary operator, and more

Vincy Davis
21 Aug 2019
3 min read
Yesterday, the team behind Julia announced the release of Julia v1.2, the second minor release in the 1.x series. It brings new features such as argument splatting, support for Unicode 12, and a new ⋆ (star) unary operator, along with many performance improvements and only marginal, undisruptive changes. The post states that Julia v1.2 will not be a long-term support release: "As of this release, 1.1 has been effectively superseded by 1.2, which means there will not likely be any further 1.1.x releases. Our good friend 1.0 is still currently the only long-term support version."

What's new in Julia v1.2

Argument splatting (x...) can now be used in calls to the new pseudo-function in constructors.
Support for Unicode 12 has been added.
A new unary operator ⋆ (star) has been added.

New library functions

!=(x), >(x), >=(x), <(x), and <=(x) return the partially-applied versions of the corresponding comparison functions.
A new getipaddrs() function returns all the IP addresses of the local machine, with the IPv4 addresses coming first.
New library functions Base.hasproperty and Base.hasfield have been added.

Other improvements in Julia v1.2

Multi-threading changes

It is now possible to schedule and switch tasks during @threads loops, and to perform limited I/O.
A new thread-safe replacement for the Condition type has been added; it can be accessed as Threads.Condition.

Standard library changes

The extrema function now accepts a function argument in the same way as minimum and maximum.
The hasmethod method can now check for matching keyword argument names.
The mapreduce function accepts multiple iterators.
Functions that invoke commands, like run(::Cmd), now raise a ProcessFailedException rather than an ErrorException.
A new no-argument constructor for Ptr{T} constructs a null pointer.
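The partially-applied comparisons are small but handy: in Julia 1.2, an expression like <(3) returns a one-argument predicate y -> y < 3, convenient with filter and map. As a rough analogy in Python (illustrative only, not Julia code, and the helper names lt/ge are our own):

```python
# Python sketch of Julia 1.2's partially-applied comparison functions.
# In Julia, <(x) and >=(x) return one-argument predicates; here we
# model them with hypothetical helpers lt(x) and ge(x).
def lt(x):
    """Predicate analogous to Julia's <(x): y -> y < x."""
    return lambda y: y < x

def ge(x):
    """Predicate analogous to Julia's >=(x): y -> y >= x."""
    return lambda y: y >= x

assert list(filter(lt(3), [1, 2, 3, 4])) == [1, 2]
assert list(filter(ge(3), [1, 2, 3, 4])) == [3, 4]
```

In Julia itself the equivalent would be filter(<(3), [1, 2, 3, 4]).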
Jeff Bezanson, Julia co-creator, says, "If you maintain any packages, this is a good time to add CI for 1.2, check compatibility, and tag new versions as needed." Users are happy with the Julia v1.2 release and full of praise for the language. A user on Hacker News comments, "Julia has very well thought syntax and runtime I hope to see it succeed in the server-side web development area." Another says, "I've recently switched to Julia for all my side projects and I'm loving it so far! For me the killer feature is the seamless GPUs integration." For more information on Julia v1.2, head over to its release notes.

Julia co-creator, Jeff Bezanson, on what's wrong with Julialang and how to tackle issues like modularity and extension
Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0
Mozilla is funding a project for bringing Julia to Firefox and the general browser environment


IBM CEO, Ginni Rometty, on bringing HR evolution with AI and its predictive attrition AI

Natasha Mathur
05 Apr 2019
4 min read
On Wednesday, CNBC held its At Work Talent & HR: Building the Workforce of the Future conference in New York. Ginni Rometty, IBM CEO (also appointed to Trump's American Workforce Policy Board), discussed several strategies and announcements regarding how jobs will change due to AI, as well as IBM's predictive attrition AI.

Rometty shared details about an AI tool that IBM's HR department has filed a patent for, as first reported by CNBC. Developed with Watson (IBM's Q&A AI) for a "predictive attrition program", it can predict with 95 percent accuracy which employees are about to quit, and it prescribes remedies to managers for better employee engagement. The AI retention tool is part of a set of IBM products designed to transform the traditional approach to HR management. Rometty also mentioned that since IBM implemented AI more widely, it has been able to reduce the size of its global human resources department by 30 percent.

Rometty stated that AI will be effective at tasks where HR departments and corporate managers are not: it will keep employees on a clear career path and help identify their skills. She noted that many companies fail to be fully transparent with employees about their individual career paths and growth, which is a major issue; IBM's AI, by contrast, can understand data patterns and adjacent skills, which in turn helps identify an individual's strengths. "We found manager surveys were not accurate. Managers are subjective in ratings. We can infer and be more accurate from data," said Rometty. IBM has also eradicated the annual performance review. "We need to bring AI everywhere and get rid of the [existing] self-service system," Rometty said, because AI will now help IBM employees better understand which programs they need for growth in their careers.

Poor performance is also being addressed: IBM is using "pop-up" solution centers that help managers get better performance from their employees. "I expect AI to change 100 percent of jobs within the next five to 10 years," said Rometty.

The need for a "skill revolution" has been an ongoing topic of discussion in organizations and institutions across the globe as AI keeps advancing. For instance, the Bank of England's chief economist, Andy Haldane, warned last year that the UK needs to skill up across different sectors (tech, health, finance, et al.), as up to 15 million jobs in Britain are at stake because artificial intelligence is replacing a number of jobs that were earlier the preserve of humans.

Rometty has a remedy to prevent this "technological unemployment" in the future. She says that to get ready for this paradigm shift, companies have to focus on three things: retraining, hiring workers that don't necessarily have a four-year college degree, and rethinking how their pool of recruits may fit new job roles.

IBM also plans to invest $1 billion in training workers for "new collar" jobs, in which workers with tech skills will be hired without a four-year college degree. These "new collar" jobs could include working at a call center, app development, or cyber-analysis at IBM via the P-TECH (Pathways in Technology Early College High School) program, a six-year course that starts in high school and includes an associate's degree. Other measures by IBM include the CTA Apprenticeship Coalition program, aimed at creating thousands of new apprenticeships across 20 US states, with frameworks for over 15 different roles in fields including software engineering, data science and analytics, cybersecurity, creative design, and program management.

As far as employers are concerned, Rometty advises them to "bring consumerism into the HR model. Get rid of self-service, and using AI and data analytics personalize ways to retrain, promote and engage employees. Also, move away from centers of excellence to solution centers."

For more information, check out the official conversation with Ginni Rometty at the CNBC @Work Summit.

IBM sued by former employees on violating age discrimination laws in workplace
Diversity in Faces: IBM Research's new dataset to help build facial recognition systems that are fair
IBM launches Industry's first 'Cybersecurity Operations Center on Wheels' for on-demand cybersecurity support

TextMate 2.0, the text editor for macOS releases

Amrata Joshi
16 Sep 2019
3 min read
Yesterday, the team behind TextMate released TextMate 2.0 and announced that its code is available via the GitHub repository. The team had open-sourced the alpha version of TextMate 2.0 back in 2012; one of the reasons for open-sourcing the code was to demonstrate that Apple isn't limiting user and developer freedom on the Mac platform. In this release, the qualifier suffix in the version string has been deprecated and the 32-bit APIs have been replaced. The release also comes with improved accessibility support.

What's new in TextMate 2.0?

Easy swapping: This release allows users to easily swap pieces of code.

Convenient search results: TextMate presents search results in a way that lets users switch between matches, extract matched text, and preview desired replacements.

Version control: Users can see changes in the file browser view and check the changes made to lines of code in the editor view.

Improved commands: TextMate features WebKit as well as a dialog framework for Mac-native or HTML-based interfaces.

Snippets: Users can turn commonly used pieces of text or code into snippets with transformations, placeholders, and more.

Bundles: Users can use bundles to customize TextMate for a number of different languages, workflows, markup systems, and more.

Macros: TextMate features macros that eliminate repetitive work.

This project was expected years ago, and its long-awaited arrival makes a lot of users happy. A user commented on GitHub, "Thank you @sorbits. For making TextMate in the first place all those years ago. And thank you to everyone who has and continues to contribute to the ongoing development of TextMate as an open source project. ~13 years later and this is still the only text editor I use… all day every day." Another user commented, "Immense thanks to all those involved over the years!"

A user commented on Hacker News, "I have a lot of respect for Allan Odgaard. Something happened, and I don't want to speculate, that caused him to take a break from Textmate (version 2.0 was supposed to come out 9 or so years ago). Instead of abandoning the project he open sourced it and almost a decade later it is being released. Textmate is now my graphical Notepad on Mac, with VS Code being my IDE and vim my text editor. Thanks Allan."

It is still not clear what took TextMate 2.0 this long to be released. According to a few users on Hacker News, Allan Odgaard, the creator of TextMate, wanted to improve on the design of TextMate 1 and realised that doing so would require rewriting everything, which may have consumed much of his time. Another comment reads, "As Allan was getting less feedback about the code he was working on, and less interaction overall from users, he became less motivated. As the TextMate 2 project dragged past its original timeline, both Allan and others in the community started to get discouraged. I would speculate he started to feel like more of the work was a chore rather than a joyful adventure."

To know more about this news, check out the release notes.

Other interesting news in Programming

Introducing 'ixy', a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others
GNOME 3.34 releases with tab pinning, improved background panel, custom folders and more!
GitHub Package Registry gets proxy support for the npm registry


NVIDIA’s latest breakthroughs in conversational AI: Trains BERT in under an hour, launches Project Megatron to train transformer based models at scale

Bhagyashree R
14 Aug 2019
4 min read
Researchers have been constantly putting effort into improving conversational AI to make it better understand human languages and their nuances. One such advancement in the conversational AI field is the introduction of Transformer-based models such as OpenAI's GPT-2 and Google's BERT. In a quest to make the training and deployment of these vastly large language models efficient, NVIDIA researchers recently conducted a study, the details of which they shared yesterday.

https://twitter.com/ctnzr/status/1161277599793860618

NVIDIA's Tensor Core GPUs took less than an hour to train the BERT model

BERT, short for Bidirectional Encoder Representations from Transformers, was introduced by a team of researchers at Google AI Language. It is capable of performing a wide variety of state-of-the-art NLP tasks, including question answering, sentiment analysis, and sentence classification. What makes BERT different from other language models is that it applies the bidirectional training of the Transformer to language modelling. The Transformer is an attention mechanism that learns contextual relations between words in a text. BERT is designed to pre-train deep bidirectional representations from unlabeled text by using both left and right context in all layers.

NVIDIA researchers chose BERT-Large, a version of BERT with 340 million parameters, for the study. NVIDIA's DGX SuperPOD was able to train the model in a record-breaking time of 53 minutes. The SuperPOD was made up of 92 DGX-2H nodes and 1,472 GPUs running PyTorch with Automatic Mixed Precision.
The following table shows the time taken to train BERT-Large for various numbers of GPUs: Source: NVIDIA

Looking at these results, the team concluded, "The combination of GPUs with plenty of computing power and high-bandwidth access to lots of DRAM, and fast interconnect technologies, makes the NVIDIA data center platform optimal for dramatically accelerating complex networks like BERT." In a conversation with reporters and analysts, Bryan Catanzaro, Vice President of Applied Deep Learning Research at NVIDIA, said, "Without this kind of technology, it can take weeks to train one of these large language models." NVIDIA further said that it has achieved the fastest BERT inference time of 2.2 milliseconds by running it on a Tesla T4 GPU with TensorRT 5.1 optimized for datacenter inference.

NVIDIA launches Project Megatron, under which it will research training transformer language models at scale

Earlier this year, OpenAI introduced the 1.5 billion parameter GPT-2 language model, which generates nearly coherent and meaningful texts. The NVIDIA Research team has built a scaled-up version of this model, called GPT-2 8B. As its name suggests, it is made up of 8.3 billion parameters, which makes it 24x the size of BERT-Large. To train this huge model, the team used PyTorch with 8-way model parallelism and 64-way data parallelism on 512 GPUs. This experiment was part of a bigger effort called Project Megatron, under which the team is trying to create a platform that facilitates the training of such "enormous billion-plus Transformer-based networks."

Here's a graph showing the compute performance and scaling efficiency achieved: Source: NVIDIA

With the increase in the number of parameters, there was also a noticeable improvement in accuracy compared to smaller models. The model achieved a wikitext perplexity of 17.41, surpassing previous results on the wikitext test dataset set by Transformer-XL.
However, the model does start to overfit after about six epochs of training, which can be mitigated by moving to even larger-scale problems and datasets. NVIDIA has open-sourced the code for reproducing the single-node training performance in its BERT GitHub repository. The NLP code for Project Megatron is also openly available in the Megatron Language Model GitHub repository.

To know more in detail, check out the official announcement by NVIDIA. Also, check out the following YouTube video: https://www.youtube.com/watch?v=Wxi_fbQxCM0

Baidu open sources ERNIE 2.0, a continual pre-training NLP model that outperforms BERT and XLNet on 16 NLP tasks
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
ACLU (American Civil Liberties Union) file a complaint against the border control officers for violating the constitutional rights of an Apple employee
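The wikitext perplexity quoted above is simply the exponential of a model's average per-token cross-entropy loss. A minimal sketch, using made-up per-token loss values rather than NVIDIA's data:

```python
import math

def perplexity(per_token_losses):
    # Perplexity = exp(mean per-token negative log-likelihood, natural log).
    return math.exp(sum(per_token_losses) / len(per_token_losses))

# Hypothetical per-token losses, chosen to land near the reported 17.41:
ppl = perplexity([2.9, 2.8, 2.85])  # ≈ 17.29
```

Lower average loss therefore translates exponentially into lower perplexity, which is why small accuracy gains from larger models show up clearly on this metric.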


Responsible tech leadership or climate washing? Microsoft hikes its carbon tax and announces new initiatives to tackle climate change

Sugandha Lahoti
17 Apr 2019
5 min read
Microsoft is taking a stand against climate devastation by hiking its internal carbon tax in a new sustainability drive. On Tuesday, the company announced that it is nearly doubling its internal carbon fee to $15 per metric ton on all carbon emissions. The company introduced the internal carbon tax back in 2012; the fee is charged based on energy use from the company's data centers, offices, and factories, and on emissions from its employees' business air travel. Funds from the higher fee will maintain Microsoft's carbon neutrality and help meet its sustainability goals.

https://twitter.com/satyanadella/status/1118241283133149184

Microsoft is aiming to power its data centers with 70% renewable energy by 2023. For comparison, Google reached 100% renewable energy for its global operations, including both data centers and offices, in 2017. In April last year, Apple announced that its global facilities are powered with 100 percent clean energy, an achievement covering retail stores, offices, data centers, and co-located facilities in 43 countries. Amazon has been the slow one in this race: although it announced that it would power its data centers with 100 percent renewable energy, since 2018 it has reportedly slowed those efforts and uses only 50 percent.

Microsoft has started construction of 17 new buildings at its Washington headquarters. These buildings will run on 100 percent carbon-free electricity, and the amount of carbon associated with their construction materials will be reduced by at least 15 percent, with a goal of reaching 30 percent. This will be monitored through the Embodied Carbon Calculator for Construction (EC3), a new tool to track the carbon emissions of raw building materials. What is missing from this plan is a complete transition off of fossil fuels rather than a reliance on carbon offsets.

Microsoft is also joining the Climate Leadership Council (CLC).
CLC is an international policy institute that promotes a national carbon pricing approach. "In addition to our internal carbon tax," Microsoft says, "we supported the recent Washington state ballot measure on pricing carbon and believe it's time for a robust national discussion on carbon pricing to lower emissions in an economically sound way."

Microsoft is also aggregating and hosting environmental data sets on its cloud platform, Azure, and making them publicly available. These, Microsoft notes, are large government datasets containing satellite and aerial imagery, among other things, and require petabytes of storage: "By making them available in our cloud, we will advance and accelerate the work of grantees and researchers around the world." Finally, the company will also scale up the work it does with other nonprofits and companies tackling environmental issues through its own data and artificial intelligence expertise.

Responsible tech leadership or climate washing?

Although Microsoft plans to address quite a number of climate change and sustainability issues, what is missing are commitments to change at the level of structure and business goals. A report by Gizmodo highlights the lengths that Google, Microsoft, Amazon, and other tech companies are going to in order to help the oil industry accelerate the climate crisis, and the continued profits they draw from this. Per Gizmodo, Bill Gates heads a $1 billion climate action fund and has published his own point-by-point plan for fighting climate change; notably absent from that plan is "Empowering Oil & Gas with AI". Microsoft is two years into a seven-year deal, rumored to be worth over a billion dollars, to help Chevron, one of the world's largest oil companies, better extract and distribute oil. Microsoft Azure has also partnered with Equinor, a multinational energy company, to provide data services in a deal worth hundreds of millions of dollars.
Microsoft has also partnered with ExxonMobil to help it triple oil production in Texas and New Mexico. Instead of profiting from these deals, Microsoft could prioritize climate impacts in its business decisions, including ending partnerships with fossil fuel companies that accelerate oil and gas exploration and extraction.

https://twitter.com/MsWorkers4/status/1098693994903552000
https://twitter.com/MsWorkers4/status/1118540637899354113

Last week, over 4,520 Amazon employees signed an open letter addressed to Jeff Bezos and the Amazon board of directors asking for a company-wide action plan to address climate change and an end to the company's reliance on dirty energy resources. Their demands: "define public goals and timelines to reduce emissions; complete ban from using fossil fuels; ending partnerships with fossil fuel companies; reducing harm caused by a company's operations to vulnerable communities first; advocacy for local, federal, and international policies to reduce carbon emissions and fair treatment of all employees during extreme weather events linked to climate change."

Microsoft Workers 4 Good, who created their own petition for Microsoft to do better, endorsed the stand taken by Amazon employees and called on all employees to encourage their employers to take action on climate change. Microsoft's closed, employee-only petition was launched in February, asking the company to help align employees' retirement investments with Microsoft's sustainability mission.

https://twitter.com/MsWorkers4/status/1092942849522323456

4,520+ Amazon employees sign an open letter asking for a "company-wide plan that matches the scale and urgency of climate crisis"
Minecraft is serious about global warming, adds a new (spigot) plugin to allow changes in climate mechanics.
Google moving towards data centers with 24/7 carbon-free energy

LEGO launches BrickHeadz Builder AR, a new and free Android app to bring bricks and toys to life

Natasha Mathur
16 Jul 2018
3 min read
LEGO, the Danish toy maker, came out with a new, free augmented reality Android app named "BrickHeadz Builder AR" last week. Android users with the latest version of Google LLC's ARCore can now download the newly launched AR app on their phones. With the magic of augmented reality, users can interact virtually with tiny toy figures and building blocks, as the app brings kids' BrickHeadz toys to life.

The BrickHeadz line, launched in March 2017, includes classic LEGO bricks along with an instruction manual that directs fans to build characters with big heads and little bodies. It features characters from the very popular Marvel (Iron Man, Captain America, Black Widow, etc.), DC (Batman, Robin, Batgirl, The Joker), and Disney (Belle, The Beast, Captain Jack Sparrow) franchises, and the company is planning to expand the line with more characters. According to Sean McEvoy, VP of digital games and apps at the LEGO Group, LEGO has always been on the lookout for ways to make the physical play experience even more fun by blending it with virtual play. Let's have a look at what the new BrickHeadz Builder AR app is all about.

Key Features

In the BrickHeadz Builder app, different LEGO-related creations such as characters and objects can be easily accessed, and these characters and objects can interact with each other in interesting ways.
The app directs kids through the steps of construction from beginner to free builder.
It enables them to discover new characters and objects by solving play formulas in a "magic book", which comes with tutorials and information on challenges that can earn rewards.
Users can personalize characters by playing with their behavior and outfits.
Beyond the prebuilt characters and objects, users can also build their own objects with the building blocks.
Unlocking new characters and items is also possible by playing with your creations.
More industries are catching interest in AR apps these days, especially after the launch of Pokemon Go in 2016, which has exceeded $1.8 billion in revenue in the past two years and is by far the most popular AR game ever released. There is also a VR version of the BrickHeadz Builder Android app that LEGO launched back in October last year; that product also allows children, and adults, to build and play with virtual LEGO blocks and characters. For iOS users, the company released LEGO AR-Studio last December.

The BrickHeadz Builder Android app is free with no in-app purchases. All you need is the most recent version of ARCore running on Android 8.0 or later. The app can also run on a few qualified phones (such as the Asus Zenfone AR and LG V30) running Android 7.0 or later. For more coverage on the BrickHeadz Builder AR app, check out the official LEGO blog.

Niantic, of the Pokemon Go fame, releases a preview of its AR platform
Adobe glides into Augmented Reality with Adobe Aero
Qualcomm announces a new chipset for standalone AR/VR headsets at Augmented World Expo


Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US

Bhagyashree R
20 Dec 2018
3 min read
Yesterday, researchers from Stanford University introduced DeepSolar, a deep learning framework that analyzes satellite images to identify the GPS location and size of solar panels. Using this framework they have built a comprehensive database containing the GPS locations and sizes of solar installations in the US. The system identified 1.47 million individual solar installations across the United States, ranging from small rooftop configurations and solar farms to utility-scale systems.

The DeepSolar database is publicly available to help researchers extract further insights into solar adoption. It will also help policymakers better understand the correlation between solar deployment and socioeconomic factors such as household income, population density, and education level.

How does DeepSolar work?

DeepSolar uses transfer learning to train a CNN classifier on 366,467 images sampled from over 50 cities and towns across the US, with only image-level labels indicating the presence or absence of panels. One of the researchers, Rajagopal, explained the model to Gizmodo: "The algorithm breaks satellite images into tiles. Each tile is processed by a deep neural net to produce a classification for each pixel in a tile. These classifications are combined together to detect if a system—or part of—is present in the tile."

The deep neural net then identifies which tiles contain a solar panel. Once training is complete, the network produces an activation map, also known as a heat map, which outlines the panels and can be used to obtain the size of each solar panel system. Rajagopal further explained how this approach improves accuracy: "A rooftop PV system typically corresponds to multiple pixels. Thus even if each pixel classification is not perfect, when combined you get a dramatically improved classification. We give higher weights to false negatives to prevent them."

What are some of the observations the researchers made?

To measure classification performance, the researchers used two metrics: precision, the rate of correct decisions among all positive decisions, and recall, the ratio of correct decisions among all positive samples. DeepSolar achieved a precision of 93.1% with a recall of 88.5% in residential areas, and a precision of 93.7% with a recall of 90.5% in non-residential areas. To measure size-estimation performance, they calculated the mean relative error (MRE), which was 3.0% for residential areas and 2.1% for non-residential areas.

Future work

Currently, the DeepSolar database only covers the contiguous US. The researchers plan to expand its coverage to all of North America, including remote areas with utility-scale solar and non-contiguous US states, and ultimately to other countries and regions of the world. Also, DeepSolar currently estimates only the horizontal projection areas of solar panels from satellite imagery; in the future, it could infer high-resolution roof orientation and tilt information from street view images, giving a more accurate estimation of solar system size and solar power generation capacity.

To know more in detail, check out the research paper published by Ram Rajagopal et al: DeepSolar: A Machine Learning Framework to Efficiently Construct a Solar Deployment Database in the United States.

Introducing remove.bg, a deep learning based tool that automatically removes the background of any person based image within 5 seconds
NeurIPS 2018: How machine learning experts can work with policymakers to make good tech decisions [Invited Talk]
NVIDIA makes its new "brain for autonomous AI machines", Jetson AGX Xavier Module, available for purchase
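The precision, recall, and MRE figures above can be computed as follows. The labels and panel sizes below are hypothetical examples, not DeepSolar's data:

```python
# Hypothetical tile classifications: 1 = solar panel present, 0 = absent.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)  # correct decisions among all positive decisions
recall = tp / (tp + fn)     # correct decisions among all positive samples

# Mean relative error (MRE) of panel-size estimates (hypothetical areas, m^2):
estimated = [10.2, 5.1]
actual = [10.0, 5.0]
mre = sum(abs(e - a) / a for e, a in zip(estimated, actual)) / len(actual)  # ≈ 0.02
```

The quoted weighting of false negatives during training would shift this trade-off toward higher recall at some cost in precision.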