
Tech News


Desmond U. Patton, Director of SAFElab shares why AI systems should be a product of interdisciplinary research and diverse teams

Bhagyashree R
27 Mar 2019
4 min read
The debate about AI systems being non-inclusive, sexist, and racist has been going on for a long time. Though the blame is most often put on the training data, one of the key reasons behind this behavior is the lack of diverse teams. Last week, Stanford University launched a new institute, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), which aims to research and develop human-centered applications and technologies through multidisciplinary collaboration to achieve "true diversity of thought". While the institute talked about diversity, its list of faculty members failed to reflect it: of the 121 members initially announced as part of the institute, more than 100 were white and the majority were male, even though women and people of color had pioneered AI ethics and safety.

https://twitter.com/chadloder/status/1108588849503109120

Emphasizing the importance of interdisciplinary research in AI, Desmond U. Patton, the director of SAFElab, shared his experience of working on AI systems as a non-technical person. Through a blog post, he also encouraged fellow social workers and non-technical people to contribute to AI research to make AI more inclusive.

For the past six years, Patton and his colleagues from the computer science and data science fields have worked on co-designing AI systems aimed at understanding the underlying causes of community-based violence. He believes that social media posts can be very helpful in identifying people who are at risk of becoming involved in gun violence, so he created an interdisciplinary group of researchers who, with the help of AI techniques, study the language and images in social media posts to identify patterns of grieving and anger.

Patton believes that having a domain expert on the team is important. All the crucial decisions related to an AI system, such as which concepts should be analyzed, how those concepts are framed, and the error analysis of outputs, should be taken jointly. His team also worked with community groups and people who had previously worked for gangs involved in gun violence to co-design the AI systems. They hired community members and advisory teams and valued their suggestions, critiques, and ideas in shaping the systems.

Algorithmic and workforce bias has led to many controversies in recent years, including facial recognition systems misidentifying black women. Looking at these cases, Joy Buolamwini founded the Algorithmic Justice League (AJL), a collective that focuses on creating more inclusive and ethical AI systems. AJL researches algorithmic bias, provides a platform for people to raise their concerns and experiences with coded discrimination, and runs algorithmic audits to hold companies accountable.

Though it has not yet become the norm, the concept of interdisciplinary research is gaining the attention of several researchers and technologists. At EmTech Digital, Rediet Abebe, a computer science researcher at Cornell University, said, "We need adequate representation of communities that are being affected. We need them to be present and tell us the issues they're facing." She further added, "We also need insights from experts from areas including social sciences and the humanities … they've been thinking about this and working on this for longer than I've been alive. These missed opportunities to use AI for social good—these happen when we're missing one or more of these perspectives." Abebe has co-founded Mechanism Design for Social Good, a multi-institution, interdisciplinary research group that aims to improve access to opportunities for those who have been historically underserved and marginalized.

The organization has worked on several areas, including global inequality, algorithmic bias and discrimination, and the impact of algorithmic decision-making on specific policy areas such as online labor markets, health care, and housing. AI researchers and developers need to collaborate with social scientists and underserved communities. The people affected by these systems need to have a say in building them; the more different people involved, the more perspectives we get, and that is the type of team that brings more innovation to technology.

Read Patton's full article on Medium.

Professors from MIT and Boston University discuss why you need to worry about the 'wrong kind of AI'
Is China's facial recognition powered airport kiosks an attempt to invade privacy via an easy flight experience?
How AI is transforming the Smart Cities IoT? [Tutorial]


Google announces the general availability of AMP for email, faces serious backlash from users

Bhagyashree R
27 Mar 2019
3 min read
After launching the developer preview of Accelerated Mobile Pages (AMP) for email last year, Google announced its general availability yesterday. Using AMP for email is not restricted to Gmail: other major email providers, including Yahoo Mail, Outlook.com, and Mail.Ru, have also added support for AMP.

What is AMP for email?

AMP for email aims to take simple text emails to the next level by making them interactive and engaging, creating what Google calls "dynamic emails": web-page-like experiences without requiring the user to open a browser. AMP for email promises that users will be able to actually get things done within the email. For instance, users will be able to RSVP to an event, fill out a questionnaire, browse a catalog, or respond to a comment.

AMP emails are designed to be compatible with the current email ecosystem using a new MIME part called "text/x-amp-html". AMP for email supports a subset of AMP markup, which includes carousels, forms, and lists. Also, if an email provider does not support AMP emails, the message falls back to regular HTML.

How are users and developers reacting?

Since the AMP initiative was first announced, it has faced criticism from many, so much so that there is a Twitter account named "GoogleAMPSucks". Google AMP for email has also sparked a huge discussion on Hacker News. Many users think that this opens up an alternate channel for sending ads to users. One user commented, "Google is looking for alternative channels to sell ads through. Adding more complicated media to email increases the type of ads that can be sold. It won't be right away, but it's coming."

AMP for email gives senders new ways to revise the information in an email they have already sent. This makes emails mutable, which is a concern for many users.
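The "text/x-amp-html" MIME part and the HTML fallback described above can be sketched with Python's standard email library. This is a minimal illustration: the addresses are hypothetical and the AMP body is a skeleton fragment, not a complete spec-valid AMP email.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Build a multipart/alternative message with three parts.
# Per the AMP for email format, the AMP part is placed between the
# plain-text and HTML parts; clients that don't understand
# text/x-amp-html render the regular HTML (or plain-text) part instead.
msg = MIMEMultipart("alternative")
msg["Subject"] = "Dynamic email demo"
msg["From"] = "sender@example.com"      # hypothetical addresses
msg["To"] = "recipient@example.com"

msg.attach(MIMEText("Please RSVP to the event.", "plain"))

# Skeleton of an AMP body (illustrative only).
amp_body = """<!doctype html>
<html amp4email>
<head>
  <meta charset="utf-8">
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <style amp4email-boilerplate>body{visibility:hidden}</style>
</head>
<body>Hello! Tap to RSVP.</body>
</html>"""
# The "x-amp-html" subtype yields Content-Type: text/x-amp-html
msg.attach(MIMEText(amp_body, "x-amp-html"))

msg.attach(MIMEText("<p>Please RSVP to the event.</p>", "html"))

print(msg.as_string())
```

Because all three parts travel in one multipart/alternative container, no server-side negotiation is needed: the fallback is chosen entirely by the receiving client.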
Explaining how this feature can be misused, one Hacker News user said, "Sent an ad claiming that you had a given price for a full week, and then decided you didn't want to sell it for that anymore two days later? Handy that you can remove all evidence of your advertisement after you sent it."

Bron Gondwana, CEO of FastMail, also believes that the immutable behavior of email is, in fact, its strength. He wrote in a blog post, "The email in your mailbox is your copy of what was said, and nobody else can change it or make it go away. The fact that the content of an email can't be edited is one of the best things about POP3 and IMAP email standards. I admit it annoyed me when I first ran into it – why can't you just fix up a message in place – but the immutability is the real strength of email. You can safely forget the detail of something that you read in an email, knowing that when you go back to look at it, the information will be exactly the same."

To read the official announcement, check out Google's blog.

European Union fined Google 1.49 billion euros for antitrust violations in online advertising
Google announces Stadia, a cloud-based game streaming service, at GDC 2019
Google is planning to bring Node.js support to Fuchsia


The FTC issues orders to 7 broadband companies to analyze ISP privacy practices given they are also ad-supported content platforms

Savia Lobo
27 Mar 2019
3 min read
The Federal Trade Commission announced yesterday that it has issued orders to seven U.S. broadband providers to analyze how these companies carry out data collection and distribution. The seven companies, AT&T Inc., AT&T Mobility LLC, Comcast Cable Communications (doing business as Xfinity), Google Fiber Inc., T-Mobile US Inc., Verizon Communications Inc., and Cellco Partnership (doing business as Verizon Wireless), received orders from the FTC for monitoring of their privacy policies, procedures, and practices.

According to the FTC press release, "This study is to better understand Internet service providers' privacy practices in light of the evolution of telecommunications companies into vertically integrated platforms that also provide advertising-supported content. Under current law, the FTC has the ability to enforce against unfair and deceptive practices involving Internet service providers."

What information does the FTC plan to retrieve?

The FTC is authorized to issue the orders to file a special report under Section 6(b) of the FTC Act, the press release notes. The Commission seeks information on the categories of personal information collected about consumers or their devices, including the purpose for which the information is collected or used; the techniques for collecting such information; whether the information collected is shared with third parties; internal policies for access to such data; and how long the information is retained. It will also analyze whether the information is aggregated, anonymized, or de-identified.

The other factors the FTC will analyze include: copies of the companies' notices and disclosures to consumers about their data collection practices; whether the companies offer consumers choices about the collection, retention, use, and disclosure of personal information, and whether the companies have denied or degraded service to consumers who decline to opt in to data collection; and procedures and processes for allowing consumers to access, correct, or delete their personal information.

"The FTC has given the companies up to 45 days to hand over the requested information", The Verge reports.

A user wrote on Hacker News, "It's good to check on this of course but...as far as ISPs go, this is actually about #3 on the list of problems I want the FTC or someone to fix." "How about the fact that there's usually only one choice. Or that Internet that everyone wants can be force-bundled with ridiculous things no one wants (like a home phone line and minimum TV bundle), that we tolerate because there is no option. Or prices that go up forever with no improvements, except when they all magically found a way the day after Google Fiber was announced. These companies abuse their positions and need to be checked for that in addition to privacy", the user added.

To know more about this news in detail, visit the official press release.

Facebook under criminal investigations for data sharing deals: NYT report
Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices
US Senators introduce a bill to avoid misuse of facial recognition technology


Uber and Lyft drivers strike in Los Angeles

Richard Gall
26 Mar 2019
3 min read
Uber and Lyft drivers went on strike across Los Angeles yesterday in opposition to Uber's decision to cut rates by 25% in the Los Angeles area. Organized by Rideshare Drivers United, the strike comes as a further labor-led fightback against ride-hailing platforms, following news that U.K.-based Uber drivers are suing the company over access to personal data. If anyone thought 2018's techlash was over, they need to think again: worker solidarity is only getting stronger in the tech industry.

What was the purpose of the March 25 Uber strike?

Uber and Lyft drivers have experienced declining wages for a number of years as the respective platforms compete to grow their customer bases. That made the news earlier in March, that Uber would reduce drivers' per-mile rate from 80¢ to 60¢, particularly tough to take. It underlined to many drivers that things are probably only going to get worse while power rests solely on the side of the platforms.

https://twitter.com/_drivers_united/status/1107745851890253824?s=20

But there was more at stake than just an increase in wages. In many ways, opposition to Uber's pay cut is simply a first step on a longer road towards improved working conditions and greater power. Rideshare Drivers United has an extensive list of aims and demands:

A 10% cap on commission
The right to organize and negotiate for improved working conditions
Ensuring Uber and Lyft work in accordance with authorities on green initiatives

With New York City authorities taking steps to implement minimum pay requirements in December 2018, the action on the west coast could certainly be seen as an attempt to push for consistency across the country. However, it doesn't appear that Los Angeles authorities are interested in taking similar steps at the moment.

Uber and Lyft's response to the Los Angeles strike

In a statement given to The Huffington Post, an Uber spokesperson said that the changes "will make rates comparable to where they were in September, while giving drivers more control over how they earn by allowing them to build a model that fits their schedule best." In the same piece, HuffPost quotes a Lyft spokesperson who points out that the company hasn't changed its rates for 12 months.

Support for striking Uber and Lyft drivers

Support for the strikers came from many quarters, including the National Union of Healthcare Workers and Senator Bernie Sanders. "One job should be enough to make a decent living in America," the NUHW said.

https://twitter.com/NUHW/status/1110270149309849600?s=20

Time for Silicon Valley to rethink

There's clearly a long way to go if Rideshare Drivers United is going to achieve its aims. But the conversation is shifting, and many Silicon Valley executives will need to look up and take notice. Perhaps it's time to rethink things.


Is China’s facial recognition powered airport kiosks an attempt to invade privacy via an easy flight experience?

Fatema Patrawala
26 Mar 2019
7 min read
We've all heard stories of China's technological advancements and how facial recognition is gaining a lot of traction there. If you needed any proof of China's technological might over most countries of the world, it doesn't get any better (or scarier) than this. On Sunday, Matthew Brennan, a tech analyst who covers Tencent and WeChat, tweeted a video of a facial recognition kiosk at Chengdu Shuangliu International Airport in the People's Republic of China. The kiosk seemed to give Brennan minutely personalized flight information as he walked by, after automatically scanning his face in just seconds. The simple 22-second video crossed 1.2 million views in just over a day. The tweet went viral, with many commenters writing how dystopian or terrifying they found this technology, and suggesting how wary we should be of the proliferation of biometric systems like those used in China.

https://twitter.com/mbrennanchina/status/1109741811310837760

"There's one guarantee that I'll never get to go to China now," one Twitter user wrote in response. "That's called fascism and it's not moral or ok," another comment read.

https://twitter.com/delruky/status/1109812054012002304

Surveillance tech isn't a new idea

Airport facial recognition technology isn't new in China, and similar systems are already being implemented at airports in the United States. In October, Shanghai's Hongqiao airport reportedly debuted China's first system that allowed facial recognition for automated check-in, security clearance, and boarding. And since 2016, the Department of Homeland Security has been testing facial recognition at U.S. airports. This "biometric exit" program uses photos taken at TSA checkpoints to perform facial recognition tests to verify international travelers' identities. Documents recently obtained by BuzzFeed show that Homeland Security is now racing to implement this system at the top 20 airports in the U.S. by 2021.

And it isn't just the federal government that has been rolling out facial recognition at American airports. In May 2017, Delta announced it was testing a face-scanning system at Minneapolis-Saint Paul International Airport that allowed customers to check in their bags, or, as the company called it in a press release, "biometric-based bag drop." The airline followed up those tests with what it celebrated as "the first biometric terminal" in the U.S. at Atlanta's Maynard H. Jackson International Terminal at the end of last year. Calling it an "end-to-end Delta Biometrics experience," Delta's system uses facial recognition kiosks for check-in, baggage check, TSA identification, and boarding. The facial recognition option saves an average of two seconds per customer at boarding, or nine minutes when boarding a wide-body aircraft.

"Delta's successful launch of the first biometric terminal in the U.S. at the world's busiest airport means we are designing the airport biometric experience blueprint for the industry," said Gil West, Delta's COO. "We're removing the need for a customer checking a bag to present their passport up to four times per departure – which means we're giving customers the option of moving through the airport with one less thing to worry about, while empowering our employees with more time for meaningful interactions with customers."

Dubai International Airport's Terminal 3 will soon replace its security-clearance counter with a walkway tunnel filled with 80 face-scanning cameras disguised as a distracting immersive video. The airport has an artsy, colorful video security and customs tunnel that scans your face, adds you to a database, indexes you with artificial intelligence, and decides whether you're free to leave -- or not.

Potential dangers surveillance tech could bring

At first glance, the kiosk does seem really cool. But it should also serve as a warning as to what governments and companies can do with our data if left unchecked. After all, if an airport kiosk can identify Brennan in seconds and show him his travel plans, the Chinese government can clearly use facial recognition tech to identify citizens wherever they go. The government may record everyone's face and could automatically fine or punish someone who breaks or bends the rules. In fact, it is already doing this via its social credit system. If you are officially designated as a "discredited individual," or laolai in Mandarin, you are banned from spending on "luxuries," whose definition includes air travel and fast trains. This class of people, most of whom have shirked their debts, sit on a public database maintained by China's Supreme Court. For them, daily life is a series of inflicted indignities, some big, some small, from not being able to rent a home in their own name to being shunned by relatives and business associates.

Alibaba, China's equivalent of Amazon, already has control over the traffic lights in one Chinese city, Hangzhou. Alibaba is far from shy about its ambitions: it has 120,000 developers working on the problem and intends to commercialize and sell the data it gathers about citizens.

Surveillance technology is pervasive in our society, leading to fierce debate between proponents and opponents. Government surveillance, in particular, has come increasingly under public scrutiny, with proponents arguing that it increases security and opponents decrying its invasion of privacy. Critics have loudly accused governments of employing surveillance technologies that sweep up massive amounts of information, intruding on the privacy of millions, but with little to no evidence of success. And yet, evaluating whether surveillance technology increases security is a difficult task.

From the War Resisters League to the Government Accountability Project, Data for Black Lives and 18 Million Rising, from Families Belong Together to the Electronic Frontier Foundation, more than 85 groups signed letters to corporate giants Microsoft, Amazon, and Google, demanding that the companies commit not to sell face surveillance technology to the government. Shoshana Zuboff, author of the book The Age of Surveillance Capitalism, notes that technology companies insist their technology is too complex to be legislated, have poured billions into lobbying against oversight, and have built empires on publicly funded data and the details of our private lives. They have repeatedly rejected established norms of societal responsibility and accountability.

Causes more harm to minority groups and vulnerable communities

There is a long history of surveillance technologies that particularly impact vulnerable communities and groups such as immigrants, communities of color, religious minorities, and even domestic violence and sexual assault survivors. Privacy is a luxury that many residents cannot afford; in surveillance-heavy precincts, for practical purposes, privacy cannot be bought at any price. Privacy advocates have sometimes struggled to demonstrate the harms of government surveillance to the general public. Part of the challenge is empirical: federal, state, and local governments shield their high-technology operations with stealth, obfuscation, and sometimes outright lies when obliged to answer questions. In many cases, perhaps most, these defenses defeat attempts to extract a full, concrete accounting of what the government knows about us and how it puts that information to use. There is a lot less mystery for the poor and disfavored, for whom surveillance takes palpable, often frightening forms.

The question is, as many commenters pointed out after Brennan's tweet: do we want this kind of technology available? If so, how can it be kept in check and not abused by governments and other institutions? That's something we don't have an answer for yet, and an answer we desperately need.

Alarming ways governments are using surveillance tech to watch you
Seattle government arranges public review of the city's surveillance tech systems
The Intercept says IBM developed NYPD surveillance tools that let cops pick targets based on skin color


Professors from MIT and Boston University discuss why you need to worry about the ‘wrong kind of AI’

Natasha Mathur
26 Mar 2019
4 min read
Daron Acemoglu, a professor at MIT (Massachusetts Institute of Technology), and Pascual Restrepo, a professor at Boston University, published a paper earlier this month titled "The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand."

In the paper, the professors discuss how recent technological advancements have been biased towards automation, shifting the focus away from creating new tasks that productively employ labor. They argue that the consequences of this choice have been stagnating labor demand, a declining labor share in national income, rising inequality, and low productivity growth.

Automation technologies do not increase labor's productivity

The professors note a common preconceived notion that advances in technology increase productivity, which in turn increases the demand for labor, thereby raising employment and wages. However, this is not entirely true: automation technology does not boost labor's productivity. Instead, it replaces labor by finding a cheaper capital substitute for tasks performed by humans. In a nutshell, automation always reduces labor's share in value added. "In an age of rapid automation, labor's relative standing will deteriorate and workers will be particularly badly affected if new technologies are not raising productivity sufficiently," the professors state.

But the paper also poses a question: if automation tends to reduce the labor share, why did the labor share remain roughly constant over the last two centuries? And why does productivity growth go hand-in-hand with commensurate wage growth? The professors state that to understand this relationship, people need to recognize the different types of technological advances that contribute to productivity growth.
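The displacement logic the paper describes can be made concrete with a stripped-down version of the task-based framework used in this literature. The notation below is an illustrative sketch under simplifying assumptions, not the paper's full model.

```latex
% Output is a CES aggregate of a unit continuum of tasks x \in [N-1, N]:
Y = \left( \int_{N-1}^{N} y(x)^{\frac{\sigma-1}{\sigma}} \, dx \right)^{\frac{\sigma}{\sigma-1}}

% Tasks with x \le I are automated (performed by capital);
% tasks with x > I are performed by labor.
% In the Cobb-Douglas limit (\sigma \to 1), each task receives an equal
% expenditure share, so the labor share of income is just the measure
% of tasks still performed by labor:
s_L = N - I
```

Automation raises the threshold I (displacement), pushing the labor share down, while creating new labor-intensive tasks raises N (reinstatement), pushing it back up. The paper's concern is innovation skewed toward raising I rather than N.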
They argue that labor demand grew over the last two centuries not simply because technologies made labor more productive, but because new technologies created new tasks for labor, even as older automation eliminated labor from tasks in which it previously specialized.

The 'wrong kind of AI'

The professors note that economists place a great deal of trust in the market's ability to distribute resources efficiently; however, many disagree. "Is there any reason to worry that AI applications with the promise of reinstating human labor will not be exploited and resources will continue to pour instead into the wrong kind of AI?" the professors ask.

They list several reasons for market failures in innovation, some of which are especially important for AI:

Innovation creates externalities, and markets do not perform well under such externalities.
Markets struggle when there are alternative and competing technological paradigms, because if the wrong paradigm moves ahead of the other, it can be very difficult to reverse the trend and benefit from the possibilities offered by the alternative paradigm.

The paper states that additional factors can distort choices over which types of AI applications get developed. The first is that when employment creation has a social value beyond what appears in GDP statistics, that social value is ignored by the market. The US government has also recently been frugal in its support for research and in its determination to change the direction of technological change, partly because of:

a reduction in government resources devoted to the support of innovation, and
the increasingly dominant role of the private sector in setting the agenda in high-tech areas.

This shift discourages research tied to future promise and other social objectives.

To sum up, the professors state that although there is no "definitive evidence" that research and corporate resources are being directed towards the wrong kind of AI, the market for innovation does not by itself provide a good reason to expect an efficient balance between different types of AI. Instead of contributing to productivity growth, employment, and shared prosperity, advances in automation could instead lead to anemic growth and inequality. "Though many today worry about the security risks and other ... consequences of AI, we have argued that there are prima facie reasons for worrying about the wrong kind of AI from an economic point of view becoming all the rage and the basis of future technological development," the paper reads.

Stanford University launches Institute of Human Centered Artificial Intelligence; receives public backlash for non-representative faculty makeup
Researchers at Columbia University use deep learning to translate brain activity into words for epileptic cases
Researchers discover Spectre like new speculative flaw, "SPOILER" in Intel CPU's

ASUS servers hijacked; pushed backdoor malware via software updates potentially affecting over a million users

Savia Lobo
26 Mar 2019
4 min read
Motherboard today reported a backdoor malware attack on ASUS' servers, which took place last year between June and November 2018. The attack was discovered by Kaspersky Lab in January 2019 and was named "ShadowHammer". Researchers say the attack was discovered after Kaspersky added a new supply-chain detection technology to its scanning tool to catch anomalous code fragments hidden in legitimate code, or code that hijacks normal operations on a machine.

Kaspersky analysts told Kim Zetter, a cybersecurity journalist writing for Motherboard, that the backdoor malware was pushed to ASUS customers for at least five months before it was discovered and shut down. Researchers also said that attackers compromised the ASUS server used for the company's live software update tool, and then used it to push malware that installed a malicious backdoor on thousands of customers' computers. The malicious file was signed with legitimate ASUS digital certificates to make it appear to be an authentic software update from the company.

A Kaspersky spokesperson said, "Over 57,000 Kaspersky users have downloaded and installed the backdoored version of ASUS Live Update at some point in time... We are not able to calculate the total count of affected users based only on our data; however, we estimate that the real scale of the problem is much bigger and is possibly affecting over a million users worldwide."

According to researchers at Kaspersky Lab, the goal of the attack was to "surgically target an unknown pool of users, which were identified by their network adapters' MAC addresses". The attackers first hardcoded a list of MAC addresses in the trojanized samples, and this list was used to identify the actual intended targets of the massive operation. "We were able to extract more than 600 unique MAC addresses from over 200 samples used in this attack. Of course, there might be other samples out there with different MAC addresses in their list," the researchers mentioned.

Zetter also tweeted about "a Reddit forum from last year where ASUS users were discussing the suspicious software update ASUS was trying to install on their machines in June 2018."

https://twitter.com/KimZetter/status/1110239014735405056

Kaspersky Lab plans to release a full technical paper and presentation about the ASUS attack at its Security Analyst Summit in Singapore next month. Vitaly Kamluk, Asia-Pacific director of Kaspersky Lab's Global Research and Analysis Team, said, "This attack shows that the trust model we are using based on known vendor names and validation of digital signatures cannot guarantee that you are safe from malware." Zetter writes, "Motherboard sent ASUS a list of the claims made by Kaspersky in three separate emails on Thursday but has not heard back from the company."

Costin Raiu, company-wide director of Kaspersky's Global Research and Analysis Team, told Motherboard, "I'd say this attack stands out from previous ones while being one level up in complexity and stealthiness. The filtering of targets in a surgical manner by their MAC addresses is one of the reasons it stayed undetected for so long. If you are not a target, the malware is virtually silent."

In a press release, ASUS stated that the backdoor was fixed in Live Update version 3.6.8. The company has also "introduced multiple security verification mechanisms to prevent any malicious manipulation in the form of software updates or other means, and implemented an enhanced end-to-end encryption mechanism," the press release states. Additionally, ASUS has created an online security diagnostic tool to check for affected systems. To know more about the technical details of the attack, head over to Kaspersky's website.
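The MAC-address target filtering Kaspersky describes can be sketched in a few lines. This is an illustrative reconstruction of the idea only: the addresses are made up, and whether the real samples stored raw or hashed addresses is a detail of Kaspersky's analysis; MD5 hashing is used here purely for illustration.

```python
import hashlib

# Hypothetical sketch: a trojanized updater carries a hardcoded list of
# (hashed) MAC addresses and only activates its payload on machines
# whose network adapter address matches. All addresses are fictitious.
TARGET_HASHES = {
    hashlib.md5(b"aa:bb:cc:dd:ee:ff").hexdigest(),
    hashlib.md5(b"00:11:22:33:44:55").hexdigest(),
}

def is_target(mac: str) -> bool:
    """Normalize the local MAC address, hash it, and check the list."""
    return hashlib.md5(mac.strip().lower().encode()).hexdigest() in TARGET_HASHES

print(is_target("AA:BB:CC:DD:EE:FF"))
print(is_target("de:ad:be:ef:00:01"))
```

A check like this explains Raiu's point about stealth: on non-target machines the comparison fails and the malware does nothing visible, so mass scanning reveals almost nothing.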
Additionally, ASUS has also created an online security diagnostic tool to check for affected systems. Researchers prove that Intel SGX and TSX can hide malware from antivirus software Using deep learning methods to detect malware in Android Applications Security researcher exposes malicious GitHub repositories that host more than 300 backdoored apps
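The surgical MAC-address filtering Kaspersky describes can be illustrated with a short, hypothetical sketch. This is not the actual ShadowHammer code; the target addresses and function names are made up for illustration (real samples reportedly carried more than 600 hardcoded addresses):

```python
# Hypothetical sketch of MAC-based target filtering, as described in
# Kaspersky's analysis. The addresses below are invented examples.

# Hardcoded list of targeted MAC addresses (illustrative values only)
TARGET_MACS = {"00:1a:2b:3c:4d:5e", "aa:bb:cc:dd:ee:ff"}

def normalize_mac(mac: str) -> str:
    """Lowercase a MAC address and rewrite it as colon-separated pairs."""
    hex_only = mac.lower().replace("-", "").replace(":", "")
    return ":".join(hex_only[i:i + 2] for i in range(0, 12, 2))

def is_targeted(mac: str) -> bool:
    """True only if this machine's network adapter is on the hardcoded list."""
    return normalize_mac(mac) in TARGET_MACS
```

On a machine whose adapter is not on the list, such a check returns False and the payload never activates, which matches Raiu's observation that the malware is "virtually silent" for non-targets.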
Trick or a treat: Telegram announces its new ‘delete feature’ that deletes messages on both the ends

Amrata Joshi
26 Mar 2019
4 min read
Two days ago, Telegram announced a new 'delete feature' that allows users to delete messages in one-to-one and group private chats. The feature lets users selectively delete their own messages as well as messages sent by others in the chat; users don't even need to be the author of a message to delete it. The feature is available in Telegram 5.5.

So the next time you have a conversation with someone, you can delete the chat both from your own device and from the device of the person you are chatting with. To delete a message from both ends, a user taps on the message, selects delete, and is then given the option of deleting it 'for me' or 'for everyone'.

Pavel Durov, founder of Telegram, justified the need for the feature. He writes, "Relationships start and end, but messaging histories with ex-friends and ex-colleagues remain available forever. It's getting worse. Within the next few decades, the volume of our private data stored by our chat partners will easily quadruple."

According to him, users should have control of their digital conversation history. He further added, "An old message you already forgot about can be taken out of context and used against you decades later. A hasty text you sent to a girlfriend in school can come haunt you in 2030 when you decide to run for mayor. We have to admit: Despite all of our progress in encryption and privacy, we have very little actual control of our data. We can't go back in time and erase things for other people."

Telegram's 'delete' feature repercussions

This might sound like an exciting feature, but the bigger question is whether it will be misused. If someone bullies a user in a Telegram chat or sends something abusive, the victim may not even have proof to show others if the attacker deletes the messages.

Moreover, if a group chat involves a long conversation and a user maliciously deletes a few messages, the other users in the group won't know. The conversation could get misinterpreted, its flow disturbed, and it might end up looking manipulated, causing more trouble. Traces left by criminals or attackers on the platform can simply be wiped away. The feature gives control to users, but it also quietly opens the way for malicious use.

WhatsApp's unsend feature seems better in this regard because it only lets users delete their own messages, and when a message is deleted, users in the group or private chat are notified about it, unlike in Telegram. Telegram's feature could also cause trouble if a user accidentally ends up deleting someone else's message in a group or private chat, as a deleted message cannot be recovered.

While talking about possible misuse, Durov writes, "We know some people may get concerned about the potential misuse of this feature or about the permanence of their chat histories. We thought carefully through those issues, but we think the benefit of having control over your own digital footprint should be paramount."

Some users are happy with the news and think the feature can save them if they accidentally share something sensitive. A user commented on HackerNews, "As a person who accidentally posted sensitive info on chats, I welcome this feature. I do wish they implemented an indication "message deleted" in the chat to show that the editing took place."

Others think the feature could cause major trouble. Another user commented, "The problem I see with this is that it adds the ability to alter history. You can effectively remove messages sent by the other person, altering context and changing the meaning of the conversation. You can also remove evidence of transactions, work or anything else. I get that this is supposed to be a benefit, but it's also a very significant issue, especially where business is concerned."

To know more, check out Telegram's blog post.

Messaging app Telegram’s updated Privacy Policy is an open challenge
The Indian government proposes to censor social media content and monitor WhatsApp messages
Facebook hires top EEF lawyer and Facebook critic as Whatsapp privacy policy manager
Kubernetes 1.14 releases with support for Windows nodes, Kustomize integration, and much more

Amrata Joshi
26 Mar 2019
2 min read
Yesterday, the team at Kubernetes released Kubernetes 1.14, a new update to the popular open-source container orchestration system. Kubernetes 1.14 comes with support for Windows nodes, a kubectl plugin mechanism, Kustomize integration, and much more.

https://twitter.com/spiffxp/status/1110319044249309184

What's new in Kubernetes 1.14?

Support for Windows nodes
This release adds support for Windows worker nodes, so Kubernetes can now schedule Windows containers and enable a vast ecosystem of Windows applications. With this release, enterprises can manage their workloads and gain operational efficiencies across their deployments, regardless of the operating system.

Kustomize integration
The declarative resource config authoring capabilities of kustomize are now available in kubectl through the -k flag. Kustomize helps users author and reuse resource config using Kubernetes-native concepts.

kubectl plugin mechanism
This release comes with a kubectl plugin mechanism that allows developers to publish their own custom kubectl subcommands in the form of standalone binaries.

PID limits, pod priority and preemption
Administrators can now provide pod-to-pod PID (process ID) isolation by limiting the number of PIDs per pod. Pod priority and preemption enables the Kubernetes scheduler to schedule more important pods first and, when necessary, remove less important pods to make room for them.

Users are generally happy and excited about this release.

https://twitter.com/fabriziopandini/status/1110284805411872768

A user commented on HackerNews, "The inclusion of Kustomize[1] into kubectl is a big step forward for the K8s ecosystem as it provides a native solution for application configuration. Once you really grok the pattern of using overlays and patches, it starts to feel like a pattern that you'll want to use everywhere"

To know more about this release in detail, check out Kubernetes' official announcement.
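The plugin mechanism works by discovering executables on the PATH whose names start with kubectl-. As a hedged illustration (the plugin name and output are made up), a file saved as kubectl-hello could be as simple as this script:

```python
#!/usr/bin/env python3
# Hypothetical kubectl plugin: an executable named "kubectl-hello" on the
# PATH becomes available as the subcommand "kubectl hello". The name and
# greeting are invented for illustration; any standalone binary works
# the same way.
import sys

def run(args):
    """Build the plugin's output from the arguments kubectl passes along."""
    target = args[0] if args else "cluster"
    return f"Hello from a kubectl plugin, {target}!"

if __name__ == "__main__":
    print(run(sys.argv[1:]))
```

After marking the file executable and placing it on the PATH, running `kubectl hello dev` would invoke the script with `dev` as its argument.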
RedHat’s OperatorHub.io makes it easier for Kubernetes developers and admins to find pre-tested ‘Operators’ for applications
Microsoft open sources ‘Accessibility Insights for Web’, a chrome extension to help web developers fix their accessibility issues
Microsoft open sources the Windows Calculator code on GitHub
Clojurists Together fund a sum of $9,000 each for the open source projects, Neanderthal and Aleph

Bhagyashree R
26 Mar 2019
3 min read
Clojurists Together shortlisted two projects, Neanderthal and Aleph, for Q1 2019 (February-April) funding, the details of which they shared yesterday. Each project will receive a total of $9,000, paid out as $3,000 per month.

https://twitter.com/cljtogether/status/1109925960155983872

What is Clojurists Together?

Clojurists Together was formed back in 2017 and is run by a board of developers from the Clojure community. It focuses on keeping the development of open source Clojure software sustainable by raising funds and providing support for infrastructure and documentation. It also supports other community initiatives like Google Summer of Code.

The way it works: open source developers apply for funding, and if the board thinks a project meets the needs of Clojurists Together members, it is selected and the developers get paid to work on it for three months. The funds are raised with the help of Clojure companies and individual developers, who can sign up for a monthly contribution or make a one-time donation. This is the fifth funding cycle; previous cycles have supported datascript, kaocha, cljdoc, Shadow CLJS, clj-http, Figwheel, ClojureScript, and CIDER.

Details of the projects funded

Neanderthal

Neanderthal is a Clojure library for fast matrix and linear algebra computations, based on native BLAS and LAPACK computation routines for both CPU and GPU. According to the project, it is almost 3000x faster than optimized Clojure/Java libraries on the GPU and 100x faster than optimized pure Java on the CPU. The project is developed by Dragan Djuric, who works on the Uncomplicate suite of libraries and is also a professor of software engineering at the University of Belgrade.

Over these three months, Djuric plans to work on some of the following updates:

- Writing an introductory series named "Deep Learning from the ground up with Clojure"
- Integrating Nvidia's cuSolver into Neanderthal's CUDA GPU engine to provide some key LAPACK functions that are otherwise only available on the CPU

Along with these developments, he will also be improving the documentation and tutorials for Neanderthal.

Aleph

Aleph is a Clojure library for client and server network programming. Based on Netty, it is said to be one of the best options for building high-performance communication systems in Clojure. Oleksii Kachaiev, who works on Aleph, has planned the following additions over the allocated three months:

- Releasing a new version of Aleph with the latest developments
- Updating the internals of the library and its interactions with Netty to ease the operational burden and improve performance
- Implementing missing parts of the WebSocket protocol

To see how far these projects have come, check out the monthly update for February shared by Clojurists Together yesterday. To read the official announcement, visit the official site of Clojurists Together.

Clojure 1.10 released with Prepl, improved error reporting and Java compatibility
ClojureCUDA 0.6.0 now supports CUDA 10
Clojure 1.10.0-beta1 is out!
Swift 5 for Xcode 10.2 is here!

Natasha Mathur
26 Mar 2019
3 min read
Apple announced Swift 5 for Xcode 10.2 yesterday. The update brings new features and changes to app thinning, the Swift language, the Swift standard library, the Swift Package Manager, and the Swift compiler.

Swift is a popular general-purpose, compiled programming language developed by Apple for iOS, macOS, watchOS, tvOS, and beyond. Writing Swift code is interactive, its syntax is concise yet expressive, and the language is safe by design with modern features.

What's new in Swift 5 for Xcode 10.2?

In Xcode 10.2, the Swift command-line tools now require the Swift libraries in macOS; these libraries are included by default starting with macOS Mojave 10.14.4.

App thinning
Swift apps no longer include dynamically linked libraries for the Swift standard library and the Swift SDK overlays in build variants for devices running iOS 12.2, watchOS 5.2, and tvOS 12.2. As a result, Swift apps become smaller once they are shipped in the App Store or thinned in an app archive for development distribution.

Swift language
String literals can now be expressed with enhanced delimiters: a string literal with one or more number signs (#) before the opening quote treats backslashes and double-quote characters as literal. Key paths now support the identity keypath (\.self), a WritableKeyPath that refers to its entire input value.

Swift standard library
The standard library now includes the Result enumeration with Result.success(_:) and Result.failure(_:) cases. The Error protocol now conforms to itself, which makes working with errors in generic code easier. The SIMD types and basic operators are now defined in the standard library. Set and Dictionary use a different hash seed for each newly created instance, and the DictionaryLiteral type has been renamed to KeyValuePairs. The String structure's native encoding has been switched from UTF-16 to UTF-8, which improves the relative performance of String.UTF8View compared to String.UTF16View.

Swift Package Manager
Targets can now declare commonly used, target-specific build settings when using the Swift 5 Package.swift tools-version. A new dependency mirroring feature enables top-level packages to override dependency URLs. Package manager operations have also become significantly faster for larger packages.

Swift compiler
The size taken up by Swift metadata is reduced: convenience initializers defined in Swift now only allocate an object ahead of time if they call a designated initializer defined in Objective-C. C types with alignment greater than 16 bytes are no longer available in Swift. Swift 3 mode has been deprecated, and the supported values for the -swift-version flag are now 4, 4.2, and 5. Default arguments can now be printed in SourceKit-generated interfaces for Swift modules.

For more information, check out the official Swift 5 for Xcode 10.2 release notes.

Swift 5 for Xcode 10.2 beta is here with stable ABI
Exclusivity enforcement is now complete in Swift 5
ABI stability may finally come in Swift 5.0
Oracle does “organizational restructuring” by laying off 100s of employees

Bhagyashree R
25 Mar 2019
3 min read
Last week, news that Oracle was laying off a huge number of employees as part of an "organizational restructuring" came as a shock. The reason behind this round of layoffs is not clear: some said it was done to save money, while others said that people working on legacy products were let go.

This secretive and sudden layoff process has left people speculating about the number of employees affected. One user on TheLayoff.com commented, "I think it was 10% from OCI due to being insulated for the past 5 years. I thought it was around 3k global, 2.3% of the total workforce." Another user speculated that the number could be 1,500, with "more to follow" given Oracle's deflating performance.

Yet another user speculated that the target is in fact 9,000 people worldwide and that the layoffs will be conducted in three rounds. Disclosing further details, the user said, "First round will close in May, the US, as usual, got hit first, next in turn, is LAD and APAC, then EMEA. Slightly more than 2000 people affected overall. The second round will close in October/November, around 6000 people. Last round will close in December."

The layoff was nothing but a "slaughter"

Employees were informed about the layoffs through an email sent at 5 a.m. on Thursday by Don Johnson, Oracle executive vice president. In the email, Johnson explained how the change would impact Oracle Cloud Infrastructure (OCI). He wrote, "Today's changes within OCI will better align with Larry's vision of the business. It will streamline our products and services, focus investments on our most strategic priorities, and help us to more effectively and rapidly deliver the full promise and reach of Oracle's Gen 2 Cloud."

Further in the email, he talked, somewhat ironically, about "a bright future": "OCI's business is stronger than ever, and this team's future is bright." While for some people new mornings bring new possibilities, for these employees the morning felt like a "slaughter".

One affected employee told IEEE Spectrum, "The morning felt like a slaughter. One person after another. People who have shaped me, who've been instrumental in my own success, who are good, smart, capable, wonderful. It's been a terrible day. A company's most valuable asset is the people it employs. Today's cuts leave me to wonder if Oracle will ever realize the mistake it has made in letting these people go."

One Oracle manager shared that he was notified about an affected employee only a day before, and some managers were not notified beforehand at all.

Here is what Oracle told IEEE Spectrum when asked about the layoffs: "As our cloud business grows, we will continually balance our resources and restructure our development group to help ensure we have the right people delivering the best cloud products to our customers around the world."

IBM, Oracle under the scanner again for questionable hiring and firing policies
The tug of war between Google and Oracle over API copyright issue has the future of software development in the crossfires
Oracle releases VirtualBox 6.0.0 with improved graphics, user interface and more
A Bitwise study presented to the SEC reveals that 95% of CoinMarketCap’s BTC trading volume report is fake

Savia Lobo
25 Mar 2019
2 min read
A research report presented last week by Bitwise Asset Management reveals that 95% of the bitcoin trading volume reported by CoinMarketCap.com is fake, artificially created by unregulated exchanges. This matters because CoinMarketCap.com is the most widely cited source for bitcoin volume and is used by most major media outlets. CoinMarketCap hasn't yet responded to the findings.

"Despite its widespread use, the CoinMarketCap.com data is wrong. It includes a large amount of fake and/or non-economic trading volume, thereby giving a fundamentally mistaken impression of the true size and nature of the bitcoin market", the Bitwise report states. The report also claims that only 10 cryptocurrency exchanges have real volume, including major names like Binance, Coinbase, Kraken, Gemini, and Bittrex.

https://twitter.com/BitwiseInvest/status/1109114656944209921

Following are the key takeaways of the report:

- 95% of reported BTC spot volume is fake; the likely motive is listing fees (which can be $1-3M)
- Real daily spot volume is ~$270M
- 10 exchanges make up almost all of the real trading volume, and the majority of them are regulated
- Spreads are <0.10%; arbitrage is super efficient

CoinMarketCap.com (CMC) originally reported a combined $6 billion in average daily trading volume. However, the 226-slide presentation by Bitwise to the U.S. Securities and Exchange Commission (SEC) revealed that only $273 million of CMC's reported BTC trading volume was legitimate. The report also includes a detailed breakdown of all the exchanges that report more than $1 million in daily trading volume on CoinMarketCap.

Matthew Hougan, the global head of Bitwise's research division, said, "People looked at cryptocurrency and said this market is a mess; that's because they were looking at data that was manipulated".

Bitwise also posted on its official Twitter account, "Arbitrage between the 10 real exchanges has improved significantly. The avg price deviation of any one exchange from the aggregate price is now less than 0.10%! Well below the arbitrage band considering exchange-level fees (0.10-0.30%) & hedging costs."

https://twitter.com/BitwiseInvest/status/1109114686635687936

To know more about this in detail, head over to the complete Bitwise report.

200+ Bitcoins stolen from Electrum wallet in an ongoing phishing attack
Can Cryptocurrency establish a new economic world order?
Crypto-cash is missing from the wallet of dead cryptocurrency entrepreneur Gerald Cotten – find it, and you could get $100,000
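The deviation metric quoted in that tweet, each exchange's distance from the volume-weighted aggregate price, which stays under 0.10% on the real exchanges, can be sketched roughly as follows. The prices and volumes are made-up illustrative numbers, not figures from the report:

```python
# Rough sketch of the arbitrage-efficiency check Bitwise describes:
# how far each exchange's BTC price deviates from the volume-weighted
# aggregate price. Quotes below are invented for illustration.

def aggregate_price(quotes):
    """Volume-weighted average price over (price, volume) pairs."""
    total_volume = sum(volume for _, volume in quotes)
    return sum(price * volume for price, volume in quotes) / total_volume

def deviation_pct(price, agg):
    """Absolute deviation of one exchange's price from the aggregate, in %."""
    return abs(price - agg) / agg * 100

quotes = [(4000.0, 120.0), (4002.0, 80.0), (3999.0, 100.0)]
agg = aggregate_price(quotes)
deviations = [deviation_pct(price, agg) for price, _ in quotes]
# On efficient, real exchanges the report says these stay below 0.10%.
```

With these sample quotes every exchange sits well within 0.10% of the aggregate, which is the pattern Bitwise observed on the 10 exchanges it judged to have real volume.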
Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript

Bhagyashree R
25 Mar 2019
2 min read
Yesterday, Microsoft released Pyright, a new static type checker for Python built to fill in the gaps in existing Python type checkers like mypy. Currently, Pyright supports Python 3.0 and newer versions.

What type-checking features does Pyright bring?

It comes with support for PEP 484 (type hints, including generics), PEP 526 (syntax for variable annotations), and PEP 544 (structural subtyping). It supports type inference for function return values, instance variables, class variables, and globals, and it provides smart type constraints that understand conditional code-flow constructs like if/else statements.

Increased speed

Pyright is reportedly 5x faster than mypy and other existing type checkers written in Python. It was built with large source bases in mind and can perform incremental updates when files are modified.

No need to set up a Python environment

Since Pyright is written in TypeScript and runs within Node, you do not need to set up a Python environment or install third-party packages. This proves especially helpful when using the VS Code editor, which has Node as its extension runtime.

Flexible configurability

Pyright gives users granular control over settings. You can specify different execution environments for different subsets of a source base, and for each environment you can specify different PYTHONPATH settings, Python version, and platform target.

To know more about Pyright in detail, check out its GitHub repository.

Debugging and Profiling Python Scripts [Tutorial]
Python 3.8 alpha 2 is now available for testing
Core CPython developer unveils a new project that can analyze his phone’s ‘silent connections’
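As a small, made-up illustration of the PEP 484 and PEP 526 annotations a checker like Pyright analyzes, consider the snippet below. The annotations are checked statically; Python itself does not enforce them at runtime:

```python
# Example of the type-hint styles a static checker such as Pyright
# analyzes. Function and variable names are invented for illustration.
from typing import Optional

def greet(name: str, excited: bool = False) -> str:
    # PEP 526 variable annotation
    suffix: str = "!" if excited else "."
    return f"Hello, {name}{suffix}"

def find_user(user_id: int) -> Optional[str]:
    users = {1: "ada", 2: "grace"}
    return users.get(user_id)  # return type inferred as Optional[str]

# A call like greet(42) would be flagged by a static checker, since an
# int is not assignable to the "name" parameter of type str; the Python
# interpreter itself would still run greet(42) without complaint.
```

Running a static checker over such a file reports the type mismatch before the code ever executes, which is the core value proposition of tools in this space.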
Zoom, the video conferencing company files to go public, possibly a profitable IPO

Amrata Joshi
25 Mar 2019
3 min read
Filing for an Initial Public Offering (IPO) has been the trend for quite some time now. Major companies like Lyft, Uber, Spotify, and Airbnb have already taken steps down this road, with Pinterest a recent addition to the list. Last week, Zoom, the video conferencing company, filed on the Nasdaq Stock Market to go public by next month.

Zoom seems to be on the brighter side in terms of revenue: the company has raised around $145 million from investors and reported $330 million in revenue for its latest fiscal year. Revenue has increased year over year, more than doubling from $60.8 million in 2017 to $151.5 million last year, and more than doubling again to $330 million. Losses have also shrunk, from $14 million in 2017 to $8.2 million last year and just $7.5 million in the fiscal year ending January 2019.

Eric S. Yuan, CEO and founder of Zoom, will hold 22% of the company's shares. Li Ka-shing, a Chinese billionaire, holds 6.1%, while current directors and executives together own 36.7%. Emergence Capital, which owns a 12.5% pre-IPO stake, has been backing Zoom, according to the IPO filing. Zoom's investor list also includes Sequoia Capital, which holds an 11.4% pre-IPO stake, and Digital Mobile Venture, holding 9.8%.

Eric Yuan, who was previously vice president of engineering at Cisco, joined that company when Cisco acquired WebEx for $3.2 billion in 2007. In a statement to TechCrunch last month, he said that he "would never sell another company again." This clearly indicates that he preferred taking Zoom public over selling it.

Some users appreciate Eric Yuan's efforts. A user commented on HackerNews, "The founder from Cisco is Eric Yuan. He's been working this thing for a long time and it's awesome to see them get to this." Another user commented, "I'm impressed by their financials. $330m total revenues and $7m profit. Rarely do we hear these days about a tech company IPO who's profitable"

Others think that Zoom is not good software and that their experience with it has been bad. One comment reads, "The whole experience has been bad enough that I get actively annoyed at seeing giant Zoom ads plastered all over 101, on buses, at T5 at JFK, etc. and think about what those cost compared to allocating some engineering time to fixing really basic bugs. Maybe it's a cheaper solution than the other options, I don't really know. But if the decision was up to me, Zoom would be basically last on my list if any significant portion of the company was using Linux."

Though the figures show the company on the profitable side, it will be interesting to see whether it remains profitable after going public. To know more about this news, check out Zoom's official statement.

Pinterest files new IPO framework and reports revenue of roughly $756 million in 2018
Slack confidentially files to go public
SUSE is now an independent company after being acquired by EQT for $2.5 billion