How-To Tutorials

Tim Berners-Lee plans to decentralize the web with ‘Solid’, an open-source project for “personal empowerment through data”

Natasha Mathur
01 Oct 2018
4 min read
Tim Berners-Lee, the creator of the World Wide Web, revealed his plans to change the web for the better as he launched his new startup, Inrupt, last Friday. His major goal with Inrupt is to decentralize the web and break the monopolies that tech giants (Facebook, Google, Amazon, etc.) hold over user data. He hopes to achieve this with Inrupt's new open-source project, Solid, a platform built on existing web formats. Tim has been working on Solid over recent years in collaboration with people at MIT.

"I've always believed the web is for everyone. That's why I and others fight fiercely to protect it. The changes we've managed to bring have created a better and more connected world. But for all the good we've achieved, the web has evolved into an engine of inequity and division; swayed by powerful forces who use it for their own agendas," said Tim in his announcement blog post.

Solid to provide users with more control over their data

Solid aims to completely change the current web model, in which users hand over their personal data to digital giants "in exchange for perceived value". "Solid is how we evolve the web in order to restore balance - by giving every one of us complete control over data, personal or not, in a revolutionary way," says Tim.

Solid offers every user a choice over where their data gets stored, which specific people and groups can access selected elements of that data, and which apps they use. It will enable you, your family, and colleagues to link and share data with anyone. With Solid, people can also look at the same data through different apps at the same time.

Solid technology is built on existing web formats, and developers will have to integrate Solid into their apps and sites. Solid hopes to empower individuals, developers, and businesses across the globe with totally new ways to build and find innovative and trusted applications. "I see multiple market possibilities, including Solid apps and Solid data storage," adds Tim.

According to Katrina Brooker, writer at FastCompany, "every bit of data you create or add on Solid exists within a Solid pod. Solid Pod is a personal online data store. These pods provide Solid users with control over their applications and information on the web. Whoever uses the Solid platform gets a Solid identity and Solid pod." This is how Solid intends to deliver "personal empowerment through data": by helping users take back the power of the web from gigantic corporations.

Additionally, Tim told Brooker that he is currently working on a way to create a decentralized version of Alexa, Amazon's popular digital assistant. "He calls it Charlie. Unlike with Alexa, on Charlie people would own all their data. That means they could trust Charlie with, for example, health records, children's school events, or financial records. That is the kind of machine Berners-Lee hopes will spring up all over Solid to flip the power dynamics of the web from corporations to individuals," writes Brooker.

Given the recent Facebook security breach that compromised 50M user accounts, and past data-misuse scandals such as the Cambridge Analytica affair, Solid looks like a much-needed remedy that could transfer control over data back into the hands of users. However, there is no guarantee that Solid will be widely accepted: many companies on the web are extremely protective of the data they hold and may not be interested in losing control over it.

Still, it is definitely a step toward a freer and more open Internet. Tim has taken a sabbatical from MIT and has reduced his day-to-day involvement with the World Wide Web Consortium (W3C) to work full time on Inrupt. "It is going to take a lot of effort to build the new Solid platform and drive broad adoption but I think we have enough energy to take the world to a new tipping point," says Tim.

For more coverage on this news, check out the official Inrupt blog.

Facebook’s largest security breach in its history leaves 50M user accounts compromised

Sugandha Lahoti
01 Oct 2018
4 min read
Facebook has been going through a massive decline in trust in recent times. And to make matters worse, it witnessed another massive security breach last week. On Friday, Facebook announced that nearly 50M Facebook accounts had been compromised by an attack that gave hackers the ability to take over users' accounts. This security breach has not only affected users' Facebook accounts but has also impacted other accounts linked to Facebook. This means that a hacker could have accessed any account of yours that you log into using Facebook.

The security issue was first discovered by Facebook on Tuesday, September 25. The hackers apparently exploited a series of interactions between three bugs related to Facebook's "View As" feature, which lets people see what their own profile looks like to someone else. The hackers stole Facebook access tokens to take over people's accounts. These tokens allow an attacker to take full control of the victim's account, including logging into third-party applications that use Facebook Login.

"I'm glad we found this and fixed the vulnerability," Mark Zuckerberg said on a conference call with reporters on Friday morning. "But it definitely is an issue that this happened in the first place. I think this underscores the attacks that our community and our services face."

As of now, the vulnerability has been fixed and Facebook has contacted law enforcement authorities. The vice-president of product management, Guy Rosen, said that Facebook was working with the FBI, but he did not comment on whether national security agencies were involved in the investigation.

As a security measure, Facebook has automatically logged out 90 million Facebook users from their accounts. These include the 50 million that Facebook knows were affected and an additional 40 million that potentially could have been. The attack exploited the complex interaction of multiple issues in Facebook's code. It originated from a change made to Facebook's video uploading feature in July 2017, which impacted "View As".

Facebook says that affected users will get a message at the top of their News Feed about the issue when they log back into the social network. The message reads, "Your privacy and security are important to us. We want to let you know about recent action we've taken to secure your account." The message is followed by a prompt to click and learn more details. Facebook has also publicly apologized, stating, "People's privacy and security is incredibly important, and we're sorry this happened. It's why we've taken immediate action to secure these accounts and let users know what happened."

This is not the end of Facebook's troubles. Some users have also tweeted that they were unable to post coverage of Facebook's security breach from The Guardian and the Associated Press. When trying to share the story to their news feed, they were met with an error message that prevented them from doing so. The error reads, "Our security systems have detected that a lot of people are posting the same content, which could mean that it's spam. Please try a different post." People have criticized Facebook's automated content flagging tools: this is an example of how they tag legitimate content as illegitimate, calling it spam, and they have also previously failed to detect harassment and hate speech. However, according to updates on Facebook's Twitter account, the bug has now been resolved.

https://twitter.com/facebook/status/1045796897506516992

The security breach comes at a time when the social media company is already facing multiple criticisms over issues such as foreign election interference, misinformation and hate speech, and data privacy. Recently, an indie Taiwanese hacker gained attention with his plan to take down Mark Zuckerberg's Facebook page and broadcast it live. However, he soon got cold feet and said he would refrain from doing so after receiving global attention following his announcement. "I am canceling my live feed, I have reported the bug to Facebook and I will show proof when I get a bounty from Facebook," he told Bloomberg News.

It's high time that Facebook began taking its users' privacy seriously, perhaps even going as far as rethinking its algorithms and platform entirely. It should also take responsibility for the real-world consequences of actions enabled by Facebook.

Read more:
How far will Facebook go to fix what it broke: Democracy, Trust, Reality
WhatsApp co-founder reveals why he left Facebook; is called 'low class' by a Facebook senior executive
Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma

The ethical dilemmas developers working on Artificial Intelligence products must consider

Amey Varangaonkar
29 Sep 2018
10 min read
Facebook has recently come under scrutiny for sharing the data of millions of users without their consent. Its use of Artificial Intelligence to predict customers' behavior and then sell this information to advertisers has come under heavy criticism and has raised concerns over the privacy of users' data. A lot of it, inadvertently, has to do with the 'smart use' of data by companies like Facebook. As Artificial Intelligence continues to revolutionize the industry, and as the applications of AI rapidly grow across a spectrum of real-world domains, the need for regulated, responsible use of AI has become more important than ever. Several ethical questions are being asked of the way the technology is being used and how it impacts our lives, Facebook being just one of many examples right now. In this article, we look at some of these ethical concerns surrounding the use of AI.

Infringement of users' data privacy

Probably the biggest ethical concern in the use of Artificial Intelligence and smart algorithms is the way companies use them to gain customer insights without getting the consent of those customers in the first place. Tracking customers' online activity, or using the customer information available on various social media and e-commerce websites to tailor marketing campaigns or advertisements targeted at the customer, is a clear breach of their privacy, and sometimes even amounts to 'targeted harassment'. In the case of Facebook, for example, there have been many high-profile instances of misuse and abuse of user data, such as:
The recent Cambridge Analytica scandal, where Facebook's user data was misused
Boston-based data analytics firm Crimson Hexagon misusing Facebook user data
Facebook's involvement in the 2016 election meddling
Accusations that Facebook, along with Twitter and Google, has a bias against conservative views
Accusations of discrimination through targeted job ads on the basis of gender and age

How far will tech giants such as Facebook go to fix what they have broken - the trust of many of their users? The European Union's General Data Protection Regulation (GDPR) is a positive step to curb this malpractice. However, such a regulation needs to be implemented worldwide, which has not happened yet. There needs to be a universal agreement on the use of public data in the modern connected world. Individual businesses and developers must be accountable and hold themselves ethically responsible when strategizing or designing these AI products, keeping the users' privacy in mind.

Risk of automation in the workplace

The most fundamental ethical issue that comes up when we talk about automation, or the introduction of Artificial Intelligence in the workplace, is how it affects the role of human workers. 'Does the AI replace them completely?' is a common question. And if human effort is not going to be replaced by AI and automation, in what way will the worker's role in the organization be affected? The World Economic Forum (WEF) recently released a Future of Jobs report in which it highlights the impact of technological advancements on the current workforce. The report states that machines will be able to do half of the current job tasks within the next 5 years.

A few important takeaways from this report with regard to automation and its impact on skilled human workers are:
Existing jobs will be augmented through technology to create new tasks and resulting job roles altogether - from piloting drones to remotely monitoring patients.
The inclusion of AI and smart algorithms is going to reduce the number of workers required for certain work tasks.
The layoffs in certain job roles will also involve difficult transitions for many workers and investment in reskilling and training. As we enter the age of machine-augmented human productivity, commonly referred to as collaborative automation, employees will be trained to work alongside AI tools and systems, empowering them to work more quickly and efficiently. This will come at an additional cost of training, which the organization will have to bear.

Artificial stupidity - how do we eliminate machine-made mistakes?

It goes without saying that learning happens over time, and it is no different for AI. AI systems are fed lots and lots of training data and real-world scenarios. Once a system is fully trained, it is made to predict outcomes on real-world test data, and the accuracy of the model is then determined and improved. It is only normal, however, that the training model cannot be fed every possible scenario there is, and there might be cases the AI is unprepared for, or unusual scenarios and test cases that can fool it. Images whose patterns a deep neural network is unable to identify are one example of this. Another example would be the presence of random dots in an image leading the AI to think there is a pattern where there really isn't any. Deceptive perceptions like this may lead to unwanted errors, which aren't really the AI's fault - it's just the way it was trained. These errors, however, can prove costly to a business and can lead to potential losses. What is the way to eliminate these possibilities? How do we identify and weed out the training errors or inadequacies that go a long way in determining whether an AI system can work with near-100% accuracy? These are questions that need answering.

If the AI fails or misbehaves, who takes the blame?

This leads us to the next problem: who takes accountability for the AI's failure? When an AI system designed to do a particular task fails to perform it correctly for some reason, who is responsible? This aspect needs careful consideration and planning before any AI system can be adopted, especially at enterprise scale. When a business adopts an AI system, it does so assuming the system is fail-safe. However, the AI system may not have been designed or trained effectively because either:
It was not trained properly using relevant datasets, or
The AI system was not used in a relevant context and, as a result, gave inaccurate predictions.
Any failure like this could lead to potentially millions in losses and could adversely affect the business, not to mention have adverse unintended effects on society. Who is accountable in such cases? Is it the AI developer who designed the algorithm or the model? Or is it the end user or the data scientist who is using the tool as a customer? Clear expectations and accountabilities need to be defined at the very outset, and counter-measures need to be put in place to avoid such failures, so that the losses are minimal and the business is not impacted severely.
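To make the earlier point about deceptive perceptions more concrete, here is a minimal, self-contained sketch (not from the article) of how a small, deliberately crafted perturbation can flip the prediction of a simple linear classifier. The model, weights, and numbers are illustrative stand-ins only; adversarial attacks on real deep networks work on the same principle but are more involved.

```python
import numpy as np

# A toy linear "image" classifier: score = w.x + b, class 1 if the score is positive.
# The weights and the input below are random stand-ins, not a real trained model.
rng = np.random.default_rng(0)
w = rng.normal(size=100)      # weight vector of the (hypothetical) trained model
b = 0.0
x = rng.normal(size=100)      # an input the model currently classifies

def predict(features):
    return int(np.dot(w, features) + b > 0)

original_class = predict(x)
score = np.dot(w, x) + b

# Craft a small perturbation that pushes every feature slightly in the direction
# that moves the score toward the opposite class (the fast-gradient-sign idea,
# which is exact for a linear model).
epsilon = abs(score) / np.sum(np.abs(w)) * 1.01   # just enough to cross the boundary
direction = -1.0 if original_class == 1 else 1.0
x_adv = x + epsilon * direction * np.sign(w)

print("original prediction:   ", original_class)
print("perturbed prediction:  ", predict(x_adv))
print("largest feature change:", float(np.max(np.abs(x_adv - x))))
```

The per-feature change is tiny compared with the scale of the input, yet the prediction flips, which is exactly the kind of machine-made mistake the section above warns about.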
Bias in Artificial Intelligence - a key problem that needs addressing

One of the key questions in adopting Artificial Intelligence systems is whether they can be trusted to be impartial, fair, or neutral. In her NIPS 2017 keynote, Kate Crawford - Principal Researcher at Microsoft as well as Co-Founder and Director of Research at the AI Now Institute - argues that bias in AI cannot just be treated as a technical problem; the underlying social implications need to be considered as well. For example, machine learning software designed to detect potential criminals that turns out to be biased against a particular race raises a lot of questions about its ethical credibility. Or when a camera refuses to detect a particular kind of face because it does not fit the standard template of a human face in its training dataset, it naturally raises the racism debate. Although AI algorithms are designed by humans, it is important that the data used to train these algorithms is as diverse as possible and factors in as many kinds of variation as possible to avoid these kinds of biases. AI is meant to give fair, impartial predictions without any preset predispositions or bias, and this is one of the key challenges that researchers and AI developers have not yet overcome.

The problem of Artificial Intelligence in cybersecurity

As AI revolutionizes the security landscape, it is also raising the bar for attackers. With time, it is getting more difficult to breach security systems. To tackle this, attackers are resorting to state-of-the-art machine learning and other AI techniques to breach systems, while security professionals adopt their own AI mechanisms to prevent and protect systems from these attacks. The cybersecurity firm Darktrace reported an attack in 2017 that used machine learning to observe and learn user behavior within a network. This is a classic case of the disastrous consequences that follow when technology falls into the wrong hands and the necessary steps cannot be taken to tackle or prevent the unethical use of AI - in this case, a cyber attack. The threat posed by a vulnerable AI system with no security measures in place - one that can easily be hacked into and misused - needs no introduction. This is not a desirable situation for any organization to be in, especially when it has invested thousands or even millions of dollars in the technology. When an AI is developed, strict measures should be taken to ensure it is accessible only to a specific set of people and can be altered or changed only by its developers or by authorized personnel.

Just because you can build an AI, should you?

The more potent AI becomes, the more potentially devastating its applications can be. Whether it is replacing human soldiers with AI drones or developing autonomous weapons, the unmitigated use of AI for warfare can have consequences far beyond imagination. Earlier this year, we saw hundreds of Google employees quit the company over its ties with the Pentagon, protesting against the use of AI for military purposes. The employees were firmly of the opinion that the technology they developed has no place on a battlefield and should ideally be used for the benefit of mankind, to make human lives better. Google isn't an isolated case of a tech giant lost in these murky waters.

Microsoft employees also protested Microsoft's collaboration with US Immigration and Customs Enforcement (ICE) over building facial recognition systems for the agency, especially after revelations that ICE had confined immigrant children in cages and inhumanely separated asylum-seeking families at the US-Mexico border. Amazon is also one of the key tech vendors of facial recognition software to ICE, but its employees did not openly pressure the company to drop the project. While these companies have assured their employees of no direct involvement, it is quite clear that all the major tech giants are supplying key AI technology to the government for defensive (or offensive, who knows) military measures. The secure and ethical use of Artificial Intelligence for non-destructive purposes currently remains one of the biggest challenges in its adoption.

Today, there are many risks and caveats associated with implementing an AI system. Given the tools and techniques we currently have at our disposal, it is far-fetched to think of implementing a flawless Artificial Intelligence within a given infrastructure. While we consider all the risks involved, it is also important to reiterate one thing: when we look at the bigger picture, all technological advancements effectively translate to better lives for everyone. While AI has tremendous potential, whether its implementation is responsible is completely down to us, humans.

Read more:
Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
New cybersecurity threats posed by artificial intelligence
Google's prototype Chinese search engine 'Dragonfly' reportedly links searches to phone numbers

Working with Azure container service cluster [Tutorial]

Savia Lobo
29 Sep 2018
5 min read
Azure Container Service is a variant of the classic Azure IaaS offering and uses virtual machines as its technological base. This tutorial is an excerpt taken from the book Implementing Azure Solutions, written by Florian Klaffenbach, Jan-Henrik Damaschke, and Oliver Michalski. In this book, you will learn how to secure a newly deployed Azure Active Directory and how Azure Active Directory Synchronization can be implemented. Today, you will learn how to create and work with an Azure Container Service (ACS) cluster.

As an important prerequisite for working with the cluster, you need the Master FQDN (the fully qualified domain name of the cluster's master). The Master FQDN can be found in the Essentials section of the dashboard of your container service. Now that we know the important prerequisites, we can go further.

Each of the three available orchestrators provides you with a work surface. To work with these UIs, you must first create an SSH tunnel. If you don't know how to create an SSH tunnel, the following example walks you through the procedure. I assume that you have followed my earlier advice and installed the PuTTY toolset, and that your SSH key pair is available. Everything okay? Then let's start:
1. Search for the PuTTY tool and open it.
2. On the first page, fill in the Host Name field. The hostname is composed of the admin username (from your cluster) and the Master FQDN and has the format adminusername@masterfqdn. Change the Port to 2200.
3. Switch to the Connection | SSH | Auth page and enter the path to your SSH private key.
4. Move to the Connection | SSH | Tunnels page. In the Add new forwarded port section, type 80 in the Source port field and localhost:80 in the Destination field, then press the Add button.
5. Go back to the first page, press the Open button, and the SSH tunnel is built up.

Have you created your SSH tunnel? If yes, we will now look at the work surfaces. Remember, since we created our cluster based on DC/OS, these are the UIs of DC/OS; the UIs of the other orchestrator types are very similar.

Let's start with the DC/OS dashboard. To reach this UI, enter the following URL into the browser of your choice: http://localhost:80/. With the DC/OS dashboard, you can monitor the performance indicators of your cluster or display the health status of individual components. If you want to see the health status of individual components, click the View all 35 Components button in the Component Health tile; a detailed list with the corresponding status information will open. In the DC/OS dashboard, you will also find another interesting application: simply press the Universe button in the navigation area. This starts Mesosphere Universe, a package repository that contains services like Spark, Cassandra, Jenkins, and many others that can easily be deployed onto your cluster. In addition to the solutions provided by Mesosphere, there are also solutions from the community - just scroll down the page.

The next area is the MARATHON orchestration platform. To reach this UI, enter the following URL into the browser of your choice: http://localhost:80/marathon/. With this UI, you can start new containers and other types of applications in the cluster. The UI also provides information on running containers and applications and handles ongoing task planning. Two short examples: the first screenshot shows the dialog for creating a group. A group in DC/OS is a collection of apps (services, and so on) that are related to each other (for example, within the same organization). The next screenshot shows the dialog for creating a New Application. An application is a long-running service that may have one or more instances, each of which maps one-to-one with a task. Internally, the user creates an application by providing an application definition (a JSON file); Marathon then schedules one or more application instances as tasks, depending on what the definition specifies.

In the original concept of DC/OS there is one more construct, the pod. A pod is a special case of an application: it is also a long-running service that may have one or more instances, but each instance maps one-to-many with collocated tasks. Pod instances may include one or more tasks that share resources (for example, IPs, ports, or volumes). Pods are currently not supported by ACS.

The last area is Mesos, which provides a web UI for viewing cluster state and, above all, tasks. To reach this UI, enter the following URL into the browser of your choice: http://localhost:80/mesos/. The first screenshot shows the UI for Tasks. The next screenshot shows the UI for Frameworks; a framework running in Mesos consists of two components: a scheduler that registers with the master to be offered resources, and an executor process that is launched on agent nodes to run the framework's tasks. The next screenshot shows the UI for Agents and their conditions. The last screenshot shows the UI for Offers; an offer in Mesos is simply a set of available resources that is assigned to a framework for processing.

In this post, we learned how to work with an Azure Container Service cluster. If you've enjoyed this post, head over to the book Implementing Azure Solutions to learn more about how to manage, access, and secure your confidential data and implement storage solutions.

Read more:
Microsoft's Immutable storage for Azure Storage Blobs, now generally available
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1
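Returning to the Marathon application definition mentioned in the tutorial above: the following is a minimal sketch, not taken from the book, of how such a JSON definition could be submitted to Marathon's REST API through the SSH tunnel built earlier. The application id, Docker image, resource values, and the assumption that Marathon's /v2/apps endpoint is reachable at the tunnelled address are all illustrative.

```python
import json
import urllib.request

# Marathon is reachable through the SSH tunnel built with PuTTY earlier,
# so requests to localhost:80 are forwarded to the DC/OS master.
MARATHON_URL = "http://localhost:80/marathon/v2/apps"   # assumed endpoint behind the tunnel

# A minimal application definition: Marathon schedules instances of this
# as tasks on the cluster. All values below are illustrative.
app_definition = {
    "id": "/hello-nginx",          # hypothetical application id
    "container": {
        "type": "DOCKER",
        "docker": {"image": "nginx:alpine", "network": "BRIDGE"},
    },
    "cpus": 0.1,
    "mem": 64,
    "instances": 1,
}

request = urllib.request.Request(
    MARATHON_URL,
    data=json.dumps(app_definition).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print("Marathon responded with status:", response.status)
    print(json.loads(response.read().decode("utf-8")))
```

The same definition can of course be created through the New Application dialog shown in the UI; the API call is simply the scripted equivalent.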

BrainNet, an interface to communicate between human brains, could soon make Telepathy real

Sunith Shetty
28 Sep 2018
3 min read
BrainNet provides the first multi-person brain-to-brain interface, allowing noninvasive, direct collaboration between human brains. It can help small teams collaborate to solve a range of tasks using direct brain-to-brain communication.

How does BrainNet operate?

The noninvasive interface combines electroencephalography (EEG) to record brain signals and transcranial magnetic stimulation (TMS) to deliver the required information to the brain. For now, the interface allows three human subjects to collaborate on and solve a task using direct brain-to-brain communication. Two of the three subjects are "Senders". The senders' brain signals are decoded using real-time EEG data analysis, a technique that allows extracting the decisions that must be communicated in order to solve the challenge. Take the example of a Tetris-like game, where you need to decide quickly whether to rotate a block or drop it as it is in order to fill a line. The senders' signals (decisions) are transmitted via the Internet to the brain of the third subject, the "Receiver", through magnetic stimulation of the occipital cortex. The receiver cannot see the game screen and so cannot decide on their own whether the block needs rotating. The receiver integrates the decisions received and, using an EEG interface, makes an informed call about whether to rotate the block or keep it in the same position. A second round of the game allows the senders to assess the previous move and provide the necessary feedback on the receiver's action.

How did the results look?

The researchers used the Tetris game to evaluate the performance of BrainNet, considering the following factors:
Group-level performance during the game
True/false positive rates of subjects' decisions
Mutual information between subjects
The experiment was run with five groups of three subjects each performing the Tetris task using the BrainNet interface. The average accuracy for the task was 0.813. Furthermore, the researchers also tried varying the information reliability by injecting artificially generated noise into one of the senders' signals. The receiver was nonetheless able to work out which sender was more reliable based on the information transmitted to their brain. These positive results open the gates to future brain-to-brain interfaces that hold the promise of enabling cooperative problem solving by humans using a "social network" of connected brains.

To know more, you can refer to the research paper.

Read more:
Diffractive Deep Neural Network (D2NN): UCLA-developed AI device can identify objects at the speed of light
Baidu announces ClariNet, a neural network for text-to-speech synthesis
Optical training of Neural networks is making AI more efficient

Did you know Facebook shares the data you share with them for ‘security’ reasons with advertisers?

Natasha Mathur
28 Sep 2018
5 min read
Facebook is constantly under the spotlight these days when it comes to controversies over users' data and privacy. A new research paper published by Princeton University researchers states that Facebook shares the contact information you handed over for security purposes with its advertisers. The study was first brought to light by Gizmodo writer Kashmir Hill.

"Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn't hand over at all, but that was collected from other people's contact books, a hidden layer of details Facebook has about you that I've come to call 'shadow contact information'," writes Hill.

Facebook offers a feature called custom audiences. Unlike with traditional audiences, the advertiser can target specific users. To do so, the advertiser uploads users' PII (personally identifiable information) to Facebook. Once the upload is done, Facebook matches the given PII against platform users, builds an audience of the matched users, and allows the advertiser to further target that specific audience. Essentially, with Facebook, the holy grail of marketing - targeting an audience of one - is practically possible, never mind whether that audience wanted it or not.

In today's world, different social media platforms frequently collect various kinds of personally identifiable information (PII), including phone numbers, email addresses, names, and dates of birth. The majority of this PII represents extremely accurate, unique, and verified user data. Because of this, these services have an incentive to exploit and use this personal information for other purposes - one such purpose being more accurate audience targeting for advertisers.

The paper, titled 'Investigating sources of PII used in Facebook's targeted advertising', is written by Giridhari Venkatadri, Elena Lucherini, Piotr Sapiezynski, and Alan Mislove. "In this paper, we focus on Facebook and investigate the sources of PII used for its PII-based targeted advertising feature. We develop a novel technique that uses Facebook's advertiser interface to check whether a given piece of PII can be used to target some Facebook user and use this technique to study how Facebook's advertising service obtains users' PII," reads the paper.

The researchers developed a novel methodology for studying how Facebook obtains the PII it uses to provide custom audiences to advertisers. "We test whether PII that Facebook obtains through a variety of methods (e.g., directly from the user, from two-factor authentication services, etc.) is used for targeted advertising, whether any such use is clearly disclosed to users, and whether controls are provided to users to help them limit such use," reads the paper. The paper uses audience size estimates to study which sources of PII are used for PII-based targeted advertising. The researchers used this methodology to investigate which sources of PII are actually used by Facebook for its PII-based targeted advertising platform. They also examined what information gets disclosed to users and what control users have over their PII.

What sources of PII are actually being used by Facebook?

The researchers found that Facebook allows its users to add contact information (email addresses and phone numbers) to their profiles. While any arbitrary email address or phone number can be added, it is not displayed to other users unless verified (through a confirmation email or confirmation SMS message, respectively). This is the most direct and explicit way in which PII is provided that can end up being used for advertising. The researchers then examined whether PII provided by users for security purposes, such as two-factor authentication (2FA) or login alerts, is being used for targeted advertising. They added and verified a phone number for 2FA on one of the authors' accounts; the added phone number became targetable after 22 days. This proved that a phone number provided for 2FA was indeed used for PII-based advertising, despite the account's privacy controls having been set accordingly.

What control do users have over PII?

Facebook allows users to choose who can see each piece of PII listed on their profile, the current list of possible general settings being: Public, Friends, Only Me. Users can also restrict the set of users who can search for them using their email address or phone number; the options provided are Everyone, Friends of Friends, and Friends. Facebook provides users with a list of advertisers who have included them in a custom audience using their contact information, and users can opt out of receiving ads from the individual advertisers listed there. However, information about which PII is used by advertisers is not disclosed.

What information about how Facebook uses PII gets disclosed to the users?

When adding a mobile phone number directly to one's Facebook profile, no information about the uses of that number is disclosed to the user; such information is only disclosed when adding a number from the Facebook website. As per the research results, there is very little disclosure to users, often in the form of generic statements that do not refer to the uses of the particular PII being collected, or to the fact that it may be used to allow advertisers to target users. "Our paper highlights the need to further study the sources of PII used for advertising, and shows that more disclosure and transparency needs to be provided to the user," say the researchers.

For more information, check out the official research paper.

Read more:
Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma
How far will Facebook go to fix what it broke: Democracy, Trust, Reality
Mark Zuckerberg publishes Facebook manifesto for safeguarding against political interference
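As background on how PII-based matching of this kind typically works - the paper studies Facebook's behaviour from the outside, and the sketch below is a generic illustration rather than Facebook's actual pipeline - advertisers commonly normalize contact details and upload only hashes of them, which the platform then compares against hashes of its own users' PII:

```python
import hashlib

def normalize_and_hash(value: str) -> str:
    """Normalize a piece of PII and return its SHA-256 hex digest.

    Normalization here (lowercasing and stripping whitespace) is a common
    convention for custom-audience uploads; real platforms document their
    own exact rules, which may differ.
    """
    normalized = value.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hypothetical advertiser upload and platform-side user record.
advertiser_upload = {normalize_and_hash(e) for e in ["Alice@Example.com ", "bob@example.com"]}
platform_user_email = normalize_and_hash("alice@example.com")

# The platform can now match the user without the advertiser ever sharing raw addresses,
# although, as the paper shows, the interesting question is which PII ends up in that pool.
print("match found:", platform_user_email in advertiser_upload)
```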

Implementing Identity Security in Microsoft Azure [Tutorial]

Savia Lobo
28 Sep 2018
13 min read
Security boundaries are often thought of as firewalls, IPS/IDS systems, or routers. Logical setups such as DMZs and other network constructs are also often referred to as boundaries. But in the modern world, where many companies support dynamic work models that allow you to bring your own device (BYOD) or rely heavily on online services for their work, the real boundary is the identity. Today's tutorial is an excerpt taken from the book Implementing Azure Solutions, written by Florian Klaffenbach, Jan-Henrik Damaschke, and Oliver Michalski. In this book, you will learn how to secure a newly deployed Azure Active Directory and how Azure Active Directory Synchronization can be implemented. In this post, you will learn how to secure identities on Microsoft Azure.

Identities are a frequent target for hackers, who resell them or use them to steal information. To make attacks as hard as possible, it's important to have well-conceived, centralized identity management. Azure Active Directory provides that and a lot more to support your security strategy and simplify complex matters such as monitoring privileged accounts or authentication attacks.

Azure Active Directory

Azure Active Directory (AAD) is a very important service that many other services are based on. It's not a directory service in the sense many may think of when they hear the name Active Directory. AAD is a complex structure without built-in Organizational Units (OUs) or Group Policy Objects (GPOs), but with very high extensibility, open web standards for authorization and authentication, and a modern, (hybrid) cloud-focused approach to identity management.

Azure Active Directory editions

The following overview describes the differences between the four Azure Active Directory editions (Common, Basic, Premium P1, and Premium P2):
Included in all editions: directory objects; user/group management (add/update/delete), user-based provisioning, and device registration; Single Sign-On (SSO); self-service password change for cloud users; Connect (the sync engine that extends on-premises directories to Azure Active Directory); security/usage reports.
Included in Basic, Premium P1, and Premium P2: group-based access management/provisioning; self-service password reset for cloud users; company branding (logon pages/Access Panel customization); Application Proxy; a 99.9% SLA.
Included in Premium P1 and Premium P2: self-service group and app management, self-service application additions, and dynamic groups; self-service password reset/change/unlock with on-premises write-back; multi-factor authentication (cloud and on-premises (MFA Server)); MIM CAL + MIM Server; Cloud App Discovery; Connect Health; automatic password rollover for group accounts.
Included in Premium P2 only: Identity Protection; Privileged Identity Management.

Privileged Identity Management

With the help of Azure AD Privileged Identity Management (PIM), the user gains access to various capabilities. These include the ability to view which users are Azure AD administrators and the possibility to enable administrative access on demand to services such as Office 365 or Intune. Furthermore, the user can receive reports about changes in administrator assignments or administrator access history. Azure AD PIM allows the user to monitor, manage, and control access in the organization, covering resources in Azure AD as well as services such as Office 365 or Microsoft Intune.
Lastly, the user can receive alerts when access to privileged roles is granted. Let's take a look at the PIM dashboard. To use Azure AD PIM, it needs to be activated first:
1. Azure AD PIM can easily be found via search (screenshot: Azure AD PIM in the marketplace).
2. After clicking on Azure AD PIM in the marketplace, PIM will probably ask you to re-verify MFA for security reasons. To do this, type in the MFA token after clicking on Verify my identity (screenshot: Re-verify identity for Azure AD PIM setup). After a successful verification you will get an output as illustrated in the screenshot Successful re-verification.
3. Now the initial setup starts. The setup guides the user through the task of choosing all accounts in the tenant that have privileged rights, and of selecting whether they are eligible for requesting privileged role rights. If the wizard is completed without choosing any roles or users as eligible, it will by default assign the Security Administrator and Privileged Role Administrator roles to the first user who runs the PIM setup. Only with these roles is it possible to manage other privileged accounts and make them eligible or grant them rights (screenshot: Setup wizard for Azure AD PIM).

The tasks related to Privileged Identity Management are illustrated in the screenshot Azure AD PIM main tasks. As my subscription looks very boring after enabling Azure AD PIM, I chose to show a demo picture from Microsoft of how Azure AD PIM could look in a real-world subscription: Azure AD PIM dashboard (https://docs.microsoft.com/en-us/azure/active-directory/media/active-directory-privileged-identity-management-configure/pim_dash.png).

It's now possible to manage all chosen eligible privileged accounts and roles through Azure AD PIM. Besides removing and adding eligible users, Azure AD PIM also offers management of privileged roles, where the role activation setting is available. This setting is used to make privileged roles more transparent and trackable, and to implement the just-in-time (JIT) administration model (screenshot: Role activation settings for the role Security Administrator). It's also possible to use the rich monitoring and auditing capabilities of Azure AD PIM, so you never lose track of the use of privileged accounts and can easily spot misuse. Azure AD PIM is a very useful security feature, and it is even more useful in combination with Azure AD Identity Protection.

Identity Protection

Azure AD Identity Protection is a service that provides a central dashboard informing administrators about potential threats to the organization's identities. It is based on behavioral analysis and provides an overview of risk levels and vulnerabilities. Azure AD Identity Protection uses the Azure AD anomaly detection capabilities to report suspicious activities. These make it possible to identify anomalies affecting the organization's identities in real time, making the service more powerful than the regular reporting capabilities. The system calculates a risk level for each user account, giving you the ability to configure risk-based policies in order to automatically protect the identities of your organization. Employing these risk-based policies, alongside the other conditional access controls provided by AAD and EMS, enables an organization to trigger remediation actions or block access to certain accounts. The key capabilities of Azure AD Identity Protection can be grouped into two phases.

Detection of vulnerabilities and potentially risky accounts

This phase is basically about automatically classifying suspicious sign-in or user activity. It uses user-defined sign-in risk and user risk policies, which are described later. Another feature of this phase is the automatic security recommendations (vulnerabilities) based on rules provided by Azure.

Investigation of potentially suspicious events

This phase is all about investigating the alerts and events triggered by the risk policies. Basically, a security-focused person needs to review all users flagged by the policies and look at the particular risk events that triggered the alert and contributed to the higher risk level. It's also important to define a workflow for taking the corresponding actions, and someone needs to regularly investigate the vulnerability warnings and recommendations and estimate the real risk level for the organization.

It's important to take a closer look at the risk policies that can be configured in Azure AD Identity Protection. We will skip the multi-factor authentication registration configuration here; for more details on MFA, read the next section. Because it can't be said often enough: I highly recommend enforcing MFA for all accounts! The two policies we can configure are the user risk policy and the sign-in risk policy. The options look quite similar, but the real differentiation happens in the background (screenshots: Sign-in risk policy and User risk policy).

The main differentiation between sign-in and user risk policies is where the risk events are captured. The sign-in risk policy defines what happens when a certain account shows a high number of suspicious sign-in events. This includes sign-ins from an anonymous IP address, logins from different countries within a time frame in which it would not be possible to travel between the locations, and a lot more. User risk policies, on the other hand, are triggered by events that happen after the user has already logged in; leaked credentials or abnormal behavior are example risk events. Microsoft provides a guide to simulating some of these risks so you can verify that the corresponding policies trigger and that events get recorded. The guide is available at this address: https://docs.microsoft.com/en-us/azure/active-directory/active-directory-identityprotection-playbook.

The interesting setting after choosing users is Conditions. This setting defines the threshold of risk events required to trigger the policy; the options are Low and above, Medium and above, and High. When High is chosen, many more risk events are needed to trigger the policy, but it also has the lowest impact on users. When Low and above is chosen, the policy triggers much more often, but the likelihood of false positives is much higher, too. Finding the right balance between security and usability is, once again, the real challenge. The last option provides a preview of how many users will be affected by the new policy; this helps to review and identify possible misconfigurations of earlier steps.

Multi-factor authentication

Authentications that require more than a single type of verification method are defined as two-step verifications. This second layer in the sign-in routine can critically improve the security of user transactions and sign-ins. This method can be described as employing two or more of the typical authentication factors: something you know (a password), something you have (a trusted device such as a phone or a security token), or something you are (biometrics). Microsoft uses Azure MFA for this objective. The user's sign-in routine is kept simple, but Azure MFA improves the safety of access to data and applications. Several verification methods, such as a phone call, a text message, or mobile app verification, can help you strengthen the authentication process.

There are two pricing models for Azure MFA: one is based on users, the other on usage. Take your time and consider both models - just go to the pricing calculator and calculate both to compare them.

Now we will see how easy it is to activate MFA in Azure:
1. First sign in to the old Azure portal (https://manage.windowsazure.com). After choosing the right Azure Active Directory, click on USERS (screenshot: Azure Active Directory in the old Azure portal).
2. On the USERS page, click on MANAGE MULTI-FACTOR AUTH at the very bottom (screenshot: Multi-factor authentication settings).
3. A redirection to the MFA portal should now take place; MFA management happens in this portal. I will use my demo user Frederik to show the process of activating MFA (screenshot: MFA portal).
4. Choose the user that needs to be enabled for MFA and press Enable (screenshot: Choosing users for MFA and enabling them).
5. Confirm that you really want to enable MFA. After confirming the change, it takes a few seconds and the user is enabled for MFA (screenshots: Do you really want to enable MFA? and Confirmation for enabling MFA).

To change the billing model for MFA, a new MFA provider needs to be created to replace the existing one. For this, the Multi-Factor Authentication resource should be created from the marketplace: search for Multi-Factor Authentication in the search box and click on it (screenshot: MFA provider in the marketplace), then click the Create button to proceed (screenshot: Redirection at creation). The little arrow in the box indicates a redirection to the old portal, where the MFA provider page opens directly. There is an exclamation mark next to the usage model; this, and the text at the bottom, warn that it is not possible to change the usage model afterwards (screenshot: New MFA provider).

There are many more features related to Azure Active Directory and identity security that we are not able to discuss in this book. I encourage you to take a look at Azure AD Connect Health, Azure AD Cloud App Discovery, and Azure Information Protection (as part of EMS). It's important to know what services are offered and what advantages they could bring your company. The screenshot MFA with authenticator-app shows an example of the dialogue displayed after typing the password when MFA is enabled and the Authenticator app was chosen as the main method.

Conditional access

Another important security feature based on Azure Active Directory is conditional access. Although it's much more important when working with Office 365, it is also used to authenticate against Azure AD applications. A conditional access rule grants or denies access to a certain resource based on location, group membership, device state, and the application the user tries to access. After creating access rules that apply to all users of the corresponding application, it's also possible to apply a rule only to a security group, or, the other way around, to exclude a group from the rule. There are scenarios with MFA where this could make sense.

Currently, conditional access is completely managed in the old Azure portal (https://manage.windowsazure.com). There is a conditional access feature in the new Azure portal, but it is still in preview and not supported for production. The administrator is also able to combine conditional access policies with Azure AD Multi-Factor Authentication (MFA). This combines the MFA policies with those of other services, such as Identity Protection, or with the basic MFA policy. This means that even if a user is exempted, via a group, from authenticating with MFA to an application, all the other rules still apply. So if there is an MFA policy configured in Identity Protection that enforces MFA, the user still needs to log in using MFA. In the old portal the conditional access feature is configured on a per-application basis (screenshot: Per-application management in the old Azure portal); in the new portal the conditional access rules are configured and managed centrally in the Azure Active Directory resource (screenshot: Central management in Azure Active Directory in the new portal).

To summarize, in this tutorial we learned how to secure Azure identities from hackers. If you've enjoyed reading this post, do check out the book Implementing Azure Solutions to learn how to manage, access, and secure your confidential data and implement storage solutions.

Read more:
Azure Functions 2.0 launches with better workload support for serverless
Microsoft's Immutable storage for Azure Storage Blobs, now generally available
Automate tasks using Azure PowerShell and Azure CLI [Tutorial]
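As a concrete illustration of the "something you have" factor discussed in the multi-factor authentication section above, here is a minimal sketch - not part of the book excerpt, and not a description of Azure MFA's internals - of how a time-based one-time password in the style of RFC 6238 is derived from a shared secret. The secret below is purely illustrative.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive an RFC 6238-style time-based one-time password.

    The device and the server share `secret_b32`; both derive the same
    short-lived code from the current time, so possessing the device
    (the "something you have") is what proves the second factor.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                 # current time step
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()     # HOTP core (RFC 4226)
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Purely illustrative shared secret (base32-encoded).
print("current code:", totp("JBSWY3DPEHPK3PXP"))
```

Authenticator apps of the kind mentioned above generate codes this way; the server performs the same computation and accepts the login only if the codes match within the current time window.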

9 recommended blockchain online courses

Guest Contributor
27 Sep 2018
7 min read
Blockchain is reshaping the world as we know it. And we are not talking metaphorically because the new technology is really influencing everything from online security and data management to governance and smart contracting. Statistical reports support these claims. According to the study, the blockchain universe grows by over 40% annually, while almost 70% of banks are already experimenting with this technology. IT experts at the Editing AussieWritings.com Services claim that the potential in this field is almost limitless: “Blockchain offers a myriad of practical possibilities, so you definitely want to get acquainted with it more thoroughly.” Developers who are curious about blockchain can turn it into a lucrative career opportunity since it gives them the chance to master the art of cryptography, hierarchical distribution, growth metrics, transparent management, and many more. There were 5,743 mostly full-time job openings calling for blockchain skills in the last 12 months - representing the 320% increase - while the biggest freelancing website Upwork reported more than 6,000% year-over-year growth. In this post, we will recommend our 9 best blockchain online courses. Let’s take a look! Udemy Udemy offers users one of the most comprehensive blockchain learning sources. The target audience is people who have heard a little bit about the latest developments in this field, but want to understand more. This online course can help you to fully understand how the blockchain works, as well as get to grips with all that surrounds it. Udemy breaks down the course into several less complicated units, allowing you to figure out this complex system rather easily. It costs $19.99, but you can probably get it with a 40% discount. The one downside, however, is that content quality in terms of subject scope can vary depending on the instructor, but user reviews are a good way to gauge quality. Each tutorial lasts approximately 30 minutes, but it also depends on your own tempo and style of work. Pluralsight Pluralsight is an excellent beginner-level blockchain course. It comes in three versions: Blockchain Fundamentals, Surveying Blockchain Technologies for Enterprise, and Introduction to Bitcoin and Decentralized Technology. Course duration varies from 80 to 200 minutes depending on the package. The price of Pluralsight is $29 a month or $299 a year. Choosing one of these options, you are granted access to the entire library of documents, including course discussions, learning paths, channels, skill assessments, and other similar tools. Packt Publishing Packt Publishing has a wide portfolio of learning products on Blockchain for varying levels of experience in the field from beginners to experts. And what’s even more interesting is that you can choose your learning format from books, ebooks to videos, courses and live courses. Or you could simply subscribe to MAPT, their library to gain access to all products at a reasonable price of $29 monthly and $150 annually.  It offers several books and videos on the leading blockchain technology. You can purchase 5 blockchain titles at a discounted rate of $50. Here’s the list of top blockchain courses offered by Packt Publishing: Exploring Blockchain and Crypto-currencies: You will gain the foundational understanding of blockchain and crypto-currencies through various use-cases. Building Blockchain Projects: In this, you will be able to develop real-time practical DApps with Ethereum and JavaScript. 
Mastering Blockchain - Second Edition: You can learn about cryptography and cryptocurrencies, so you can build highly secure, decentralized applications and conduct trusted in-app transactions.
Hands-On Blockchain with Hyperledger: This book will help you leverage the power of Hyperledger Fabric to develop blockchain-based distributed ledgers with ease.
Learning Blockchain Application Development [Video]: This interactive video will help you learn to build smart contracts and DApps on Ethereum.
Create Ethereum and Blockchain Applications using Solidity [Video]: This video will help you learn about Ethereum, Solidity, DAO, ICO, Bitcoin, Altcoin, Website Security, Ripple, Litecoin, Smart Contracts, and Apps.

Cryptozombies
Cryptozombies is an online blockchain course built around gamification elements. The tool teaches you to write smart contracts in Solidity by building your own crypto-collectibles game. It is entirely Ethereum-focused, but you don’t need any previous experience to understand how Solidity works. There is a step-by-step guide that explains even the smallest details, so you can quickly learn to create your own fully functional blockchain-based game. The best thing about Cryptozombies is that you can test it for free and give up in case you don’t like it.

Coursera
The blockchain is the epicenter of the cryptocurrency world, so it’s necessary to study it if you want to deal with Bitcoin and other digital currencies. Coursera is a leading online resource in the field of virtual currencies, so you might want to check it out. After a course like Blockchain Specialization, you’ll know everything you need to be able to separate fact from fiction when reading claims about Bitcoin and other cryptocurrencies. You’ll have the conceptual foundations you need to engineer secure software that interacts with the Bitcoin network. And you’ll be able to integrate ideas from Bitcoin in your own projects. It is a 4-part course spanning 4 weeks, but you can take each part separately. The price depends on the level and features you choose.

LinkedIn Learning (formerly known as Lynda)
LinkedIn Learning doesn’t offer a specific blockchain course, but it does have a wide range of industry-related learning sources. A search for ‘blockchain’ will present you with almost 100 relevant video courses. You can find all sorts of lessons here, from beginner to expert levels. The platform allows you to customize your selection according to video duration, authors, software, subjects, and so on. You can access the library for $15 a month.

B9Lab
The B9Lab ETH-25 Certified Online Ethereum Developer Course is another course that promotes blockchain technology aimed at the Ethereum platform. It’s a 12-week in-depth learning solution that targets experienced programmers. B9Lab introduces everything there is to know about blockchain and how to build useful applications. Participants are taught about the Ethereum platform, the programming language Solidity, how to use web3 and the Truffle framework, and how to tie everything together. The price is €1450, or about $1700.

IBM
IBM offers a self-paced blockchain course, titled Blockchain Essentials, that lasts over two hours. The video lectures and lab in this course help you learn about blockchain for business and explore key use cases that demonstrate how the technology adds value. You can learn how to leverage blockchain benefits, transform your business with the new technology, and transfer assets.
Besides that, you get a nice wrap-up and a quiz to test your knowledge upon completion. IBM’s course is free of charge.

Khan Academy
Khan Academy is the last, but certainly not the least important, online course on our list. It gives users a comprehensive overview of blockchain-powered systems, particularly Bitcoin. Using this platform, you can learn more about cryptocurrency transactions, security, proof of work, and so on. As an online education platform, Khan Academy won’t cost you a dime.

Blockchain is a groundbreaking technology that opens new boundaries in almost every field of business. It directly influences financial markets, data management, digital security, and a variety of other industries. In this post, we presented the 9 best blockchain online courses you should try. These sources can teach you everything there is to know about the blockchain basics. Take some time to check them out and you won’t regret it!

Author Bio: Olivia is a passionate blogger who writes on topics of digital marketing, career, and self-development. She constantly tries to learn something new and to share this experience on various websites. Connect with her on Facebook and Twitter.

Google introduces Machine Learning courses for AI beginners
Microsoft starts AI School to teach Machine Learning and Artificial Intelligence.


WhatsApp co-founder reveals why he left Facebook; is called ‘low class’ by a Facebook senior executive

Sugandha Lahoti
27 Sep 2018
5 min read
In an exclusive interview given to Forbes yesterday, WhatsApp co-founder Brian Acton opened up for the first time since he left Facebook 10 months ago about why he left, and why he regrets selling the company to Facebook, criticizing Facebook’s monetization strategy. In a post titled “The other side of the story”, David Marcus, head of Facebook’s Messenger unit, responded to Acton’s allegations. About four years ago, Acton and his co-founder, Jan Koum, sold WhatsApp to Facebook for $22 billion, making it Facebook’s largest acquisition to date. Ten months ago, Acton quit Facebook, stating that he was looking to work for a non-profit. Then in March, as Facebook was at the receiving end of a public backlash after the Cambridge Analytica scandal, he sent out a viral tweet with no explanation: “It is time. #deletefacebook.” Pretty soon, momentum gathered behind the #DeleteFacebook campaign, with several media outlets publishing guides on how to permanently delete Facebook accounts. In April, the other WhatsApp co-founder, Jan Koum, also left Facebook due to differences of opinion over data privacy and the messaging app’s business model, according to a report from The Washington Post.

What Acton said in the Forbes interview
Several months after that tweet, Acton has now opened up about why he left Facebook and his regrets over selling WhatsApp to Facebook. He also talks about his disagreements with Mark Zuckerberg. In the interview with Forbes, he seems to feel remorse for his actions, stating, “I sold my users’ privacy to a larger benefit. I made a choice and a compromise. And I live with that every day.” He further states that Facebook “isn’t the bad guy, just very good businesspeople.” In the end, Facebook tried to put a nondisclosure agreement in place as part of his proposed settlement. “That was part of the reason that I got sort of cold feet in terms of trying to settle with these guys. It was like, okay, well, you want to do these things I don’t want to do,” Acton says. “It’s better if I get out of your way. And I did.” He refused to take the deal and lost $800 million in the process. Acton also says that he was never able to develop a rapport with Zuckerberg. “I couldn’t tell you much about the guy,” he says. Acton was unhappy with Facebook’s monetization strategy. Facebook wanted WhatsApp to make money by showing targeted ads in WhatsApp’s Status feature and by selling businesses (and later analytics) tools to chat with WhatsApp users. Both proposals were criticized by Acton, who felt they “broke a social compact with its users.” His vision for WhatsApp was “No ads, no games, no gimmicks”.
Per his post, “WhatsApp founders requested a completely different office layout when their team moved on campus. Much larger desks and personal space, a policy of not speaking out loud in the space, and conference rooms made unavailable to fellow Facebookers nearby. This irritated people at Facebook, but Mark personally supported and defended it.” He also called Acton’s claim that Facebook wanted WhatsApp to become a money-minting app dubious. End-to-end encryption on WhatsApp happened after the acquisition, and Jan Koum played a key role in convincing Mark of its importance. Once it had Mark’s full support, whenever the encryption proposal faced backlash, he defended it; the objections were never about advertising or data collection, but about concerns for ‘safety’. Mark’s view was that WhatsApp was a private messaging app, and encryption helped ensure that people’s messages were truly private. Marcus also accuses Acton of “slow-playing development” while advocating for business messaging. “If you have internal questions about it, then work hard to prove that your approach has legs and demonstrate the value. Don’t be passive-aggressive about it,” he added. He went further and dismissed Acton’s statements outright, saying that he finds Acton’s actions a new standard of low class: “I find attacking the people and company that made you a billionaire, and went to an unprecedented extent to shield and accommodate you for years, low-class.” He also took a veiled jibe at other tech companies, saying, “Facebook is truly the only company that’s singularly about people. Not about selling devices. Not about delivering goods with less friction. Not about entertaining you. Not about helping you find information. Just about people.” This part of his post has drawn the most public criticism. People, in general, had varied views about Marcus’ post; some agreed with him, while others sided with Acton. As of now, there is no official response from Facebook.

How far will Facebook go to fix what it broke: Democracy, Trust, Reality.
The White House is reportedly launching an antitrust investigation against social media companies.
Mark Zuckerberg publishes Facebook manifesto for safeguarding against political interference.


Building an Android App using the Google Faces API [Tutorial]

Sugandha Lahoti
27 Sep 2018
11 min read
The ability of computers to perform tasks such as identifying objects has always been a humongous challenge, for both the software and the required architecture. This isn't the case anymore, since the likes of Google, Amazon, and a few other companies have done all the hard work, providing the infrastructure and making it available as a cloud service. It should be noted that these services are as easy to access as making REST API calls.

Read Also: Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence

This article is taken from the book Learning Kotlin by building Android Applications by Eunice Adutwumwaa Obugyei and Natarajan Raman. This book will teach you programming in Kotlin, including data types, flow control, lambdas, and object-oriented and functional programming, while building Android apps. In this tutorial, you will learn how to use the face detection API from Google's Mobile Vision API to detect faces and add fun functionalities, such as adding rabbit ears to a user's picture. We will cover the following topics:
Identifying human faces in an image
Tracking human faces from a camera feed
Identifying specific parts of the face (for example, eyes, ears, nose, and mouth)
Drawing graphics on specific parts of a detected face in an image (for example, rabbit ears over the user's ears)

Introduction to Mobile Vision
The Mobile Vision API provides a framework for finding objects in photos and videos. The framework can locate and describe visual objects in images or video frames, and it has an event-driven API that tracks the position of those objects. Currently, the Mobile Vision API includes face, barcode, and text detectors.

Faces API concepts
Before diving into coding the features, it is necessary that you understand the underlying concepts of the face detection capabilities of the face detection API. From the official documentation: Face detection is the process of automatically locating human faces in visual media (digital images or video). A face that is detected is reported at a position with an associated size and orientation. Once a face is detected, it can be searched for landmarks such as the eyes and nose. A key point to note is that only after a face is detected will landmarks such as the eyes and nose be searched for. As part of the API, you can opt out of detecting these landmarks. Note the difference between face detection and face recognition: while the former locates human faces in an image or video, the latter goes a step further and can also tell that a particular face has been seen before; detection has no memory of a face it has found previously. We will be using a couple of terms in this section, so let me give you an overview of each of these before we go any further:
Face tracking extends face detection to video sequences. When a face appears in a video for any length of time, it can be identified as the same person and can be tracked. It is important to note that the face you are tracking must appear in the same video. Also, this is not a form of face recognition; this mechanism just makes inferences based on the position and motion of the face(s) in a video sequence.
A landmark is a point of interest within a face. The left eye, right eye, and nose base are all examples of landmarks. The Face API provides the ability to find landmarks on a detected face.
Classification is determining whether a certain facial characteristic is present. A short illustrative sketch of how landmarks and classifications can be requested follows below.
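To make these concepts concrete before we start the project, here is a minimal sketch showing how a detector might be configured to report both landmarks and classifications. It is illustrative only and not part of the FunyFace project built below; it assumes the same play-services-vision dependency used later in this tutorial, and the drawable name sample_photo is a placeholder.

// Illustrative sketch (not from the book): ask the detector for landmarks and
// classifications, then read them back. Runs inside an Activity, with the same
// imports used in the FunyFace project later in this tutorial.
val detector = FaceDetector.Builder(applicationContext)
        .setTrackingEnabled(false)                               // single still image, no tracking
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)             // eyes, nose base, mouth, ears, ...
        .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS) // smiling / eyes-open probabilities
        .build()

val bitmap = BitmapFactory.decodeResource(resources, R.drawable.sample_photo) // placeholder drawable
val frame = Frame.Builder().setBitmap(bitmap).build()
val faces = detector.detect(frame)

for (i in 0 until faces.size()) {
    val face = faces.valueAt(i)
    // Classifications come back as probabilities (or UNCOMPUTED_PROBABILITY when unavailable)
    Log.d("FacesDemo", "Smiling: ${face.isSmilingProbability}, left eye open: ${face.isLeftEyeOpenProbability}")
    // Landmarks are points of interest such as LEFT_EYE, RIGHT_EYE, and NOSE_BASE
    for (landmark in face.landmarks) {
        Log.d("FacesDemo", "Landmark ${landmark.type} at ${landmark.position}")
    }
}
detector.release() // free the native resources when done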
For example, a face can be classified with regards to whether its eyes are open or closed or smiling or not. Getting started – detecting faces You will first learn how to detect a face in a photo and its associated landmarks. We will need some requirements in order to pursue this. With a minimum of Google Play Services 7.8, you can use the Mobile Vision APIs, which provide the face detection APIs. Make sure you update your Google Play Services from the SDK manager so that you meet this requirement. Get an Android device that runs Android 4.2.2 or later or a configured Android Emulator. The latest version of the Android SDK includes the SDK tools component. Creating the FunyFace project Create a new project called FunyFace. Open up the app module's build.gradle file and update the dependencies to include the Mobile Vision APIs: dependencies { implementation fileTree(dir: 'libs', include: ['*.jar']) implementation"org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlin_version" implementation 'com.google.android.gms:play-services-vision:11.0.4' ... } Now, update your AndroidManifest.xml to include meta data for the faces API: <meta-data android:name="com.google.android.gms.vision.DEPENDENCIES" android:value="face" /> Now, your app is ready to use the face detection APIs. To keep things simple, for this lab, you're just going to process an image that is already present in your app. Add the following image to your res/drawable folder: Now, this is how you will go about performing face detection. You will first load the image into memory, get a Paint instance, and create a temporary bitmap based on the original, from which you will create a canvas. Create a frame using the bitmap and then call the detect method on FaceDetector, using this frame to get back SparseArray of face objects. Well, let's get down to business—this is where you will see how all of these play out. First, open up your activity_main.xml file and update the layout so that it has an image view and a button. See the following code: <?xml version="1.0" encoding="utf-8"?> <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" xmlns:app="http://schemas.android.com/apk/res-auto" tools:context="com.packtpub.eunice.funyface.MainActivity"> <ImageView android:id="@+id/imageView" android:layout_width="match_parent" android:layout_height="match_parent" android:src="@mipmap/ic_launcher_round" app:layout_constraintBottom_toTopOf="parent" android:scaleType="fitCenter"/> <Button android:id="@+id/button" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="bottom|center" android:text="Detect Face"/> </FrameLayout> That is all you need to do here so that you have FrameLayout with ImageView and a button. Now, open up MainActivity.kt and add the following import statements. This is just to make sure that you import from the right packages as you move along. 
In your onCreate() method, attach a click listener to the button in your MainActivity layout file: package com.packtpub.eunice.funface import android.graphics.* import android.graphics.drawable.BitmapDrawable import android.os.Bundle import android.support.v7.app.AlertDialog import android.support.v7.app.AppCompatActivity import com.google.android.gms.vision.Frame import com.google.android.gms.vision.face.FaceDetector import kotlinx.android.synthetic.main.activity_main.* class MainActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) button.setOnClickListener { detectFace() } } } Loading the image In your detectFace() method, you will first load your image from the drawable folder into memory and create a bitmap image from it. Since you will be updating this bitmap to paint over it when the face is detected, you need to make it mutable. This is what makes your bitmap mutable: options.inMutable=true See the following implementation: private fun detectFace() { // Load the image val bitmapOptions = BitmapFactory.Options() bitmapOptions.inMutable = true val myBitmap = BitmapFactory.decodeResource( applicationContext.resources, R.drawable.children_group_picture, bitmapOptions) } Creating a Paint instance Use the Paint API to get an instance of the Paint class. You will only draw around the face, and not paint the whole face. To do this, set a thin stroke, give it a color, which in our case is red, and set the style of paint to STROKE using Paint.Style.STROKE: // Get a Paint instance val myRectPaint = Paint() myRectPaint.strokeWidth = 5F myRectPaint.color = Color.RED myRectPaint.style = Paint.Style.STROKE The Paint class holds the information related to the style and color related to the text, bitmap, and various shapes. Creating a canvas To get the canvas, first create a bitmap using the dimensions from the bitmap you created earlier. With this canvas, you will paint over the bitmap to draw the outline of the face after it has been detected: // Create a canvas using the dimensions from the image's bitmap val tempBitmap = Bitmap.createBitmap(myBitmap.width, myBitmap.height, Bitmap.Config.RGB_565) val tempCanvas = Canvas(tempBitmap) tempCanvas.drawBitmap(myBitmap, 0F, 0F, null) The Canvas class is used to hold the call made to draw. A canvas is a drawing surface and it provides various methods for drawing onto a bitmap. Creating the face detector All you have done thus far is basically housekeeping. You will now access the FaceDetector API by which you will, well, detect the face in the image. You will disable tracking for now, as you only want to detect the face at this stage. Note that on its first run, the Play Services SDK will take some time to initialize the Faces API. It may or may not have completed this process at the time you intend to use it. Therefore, as a safety check, you need to ensure its availability before using it. In this case, you will show a simple dialog to the user if the FaceDetector is not ready at the time the app is run. Also, note that you may need an internet connection as the SDK initializes. 
You also need to ensure you have enough space, as the initialization may download some native library onto the device: // Create a FaceDetector val faceDetector = FaceDetector.Builder(applicationContext).setTrackingEnabled(false) .build() if (!faceDetector.isOperational) { AlertDialog.Builder(this) .setMessage("Could not set up the face detector!") .show() return } Detecting the faces Now, you will use the detect() method from the faceDetector instance to get the faces and their metadata. The result will be SparseArray of Face objects: // Detect the faces val frame = Frame.Builder().setBitmap(myBitmap).build() val faces = faceDetector.detect(frame) Drawing rectangles on the faces Now that you have the faces, you will iterate through this array to get the coordinates of the bounding rectangle for the face. Rectangles require x, y of the top left and bottom right corners, but the information available only gives the left and top positions, so you have to calculate the bottom right using the top left, width, and height. Then, you need to release the faceDetector to free up resources. Here's the code: // Mark out the identified face for (i in 0 until faces.size()) { val thisFace = faces.valueAt(i) val left = thisFace.position.x val top = thisFace.position.y val right = left + thisFace.width val bottom = top + thisFace.height tempCanvas.drawRoundRect(RectF(left, top, right, bottom), 2F, 2F, myRectPaint) } imageView.setImageDrawable(BitmapDrawable(resources, tempBitmap)) // Release the FaceDetector faceDetector.release() Results All set. Run the app, press the DETECT FACE button, and wait a while...:   The app should detect the face and a square box should appear around the face, voila: Okay, let's move on and add some fun to their faces. To do this, you need to identify the position of the specific landmark you want, then draw over it. To find out the landmark's representation, you label them this time around, then later draw your filter to the desired position. To label, update the for loop which drew the rectangle around the face: // Mark out the identified face for (i in 0 until faces.size()) { ... for (landmark in thisFace.landmarks) { val x = landmark.position.x val y = landmark.position.y when (landmark.type) { NOSE_BASE -> { val scaledWidth = eyePatchBitmap.getScaledWidth(tempCanvas) val scaledHeight = eyePatchBitmap.getScaledHeight(tempCanvas) tempCanvas.drawBitmap(eyePatchBitmap, x - scaledWidth / 2, y - scaledHeight / 2, null) } } } } Run the app and take note of the labels of the various landmarks: There you have it! That's funny, right? Summary In this tutorial, we learned how to use the Mobile Vision APIs, in this case, the Faces API. There are a few things to note here. This program is not optimized for production. Some things you can do on your own are, load the image and do the processing in a background thread. You can also provide a functionality to allow the user to pick and choose images from different sources other than the static one used. You can get more creative with the filters and how they are applied too. Also, you can enable the tracking feature on the FaceDetector instance, and feed in a video to try out face tracking. To know more about Kotlin APIs as a preliminary for building stunning applications for Android, read our book Learning Kotlin by building Android Applications. 6 common challenges faced by Android App developers Google plans to let the AMP Project have an open governance model, soon! 
Entry level phones to taste the Go edition of the Android 9.0 Pie version
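The summary above suggests enabling tracking and feeding in a video as a follow-up exercise. As a rough starting point, here is a hedged sketch of what that wiring might look like with the Mobile Vision CameraSource. The FaceTracker class name and the preview size are illustrative assumptions rather than code from the book, and the CAMERA permission plus a SurfaceView in your layout are assumed.

// Hedged sketch of face tracking from a live camera feed, as suggested in the summary above.
class FaceTracker : Tracker<Face>() {
    override fun onNewItem(id: Int, face: Face) {
        Log.d("FaceTracker", "New face detected, id=$id")
    }
    override fun onUpdate(detections: Detector.Detections<Face>, face: Face) {
        Log.d("FaceTracker", "Face moved to ${face.position}")
    }
    override fun onDone() {
        Log.d("FaceTracker", "Face left the frame")
    }
}

val trackingDetector = FaceDetector.Builder(applicationContext)
        .setTrackingEnabled(true)          // tracking on, unlike the still-image example
        .setMode(FaceDetector.FAST_MODE)
        .build()
trackingDetector.setProcessor(
        MultiProcessor.Builder<Face>(MultiProcessor.Factory<Face> { FaceTracker() }).build())

val cameraSource = CameraSource.Builder(applicationContext, trackingDetector)
        .setRequestedPreviewSize(640, 480) // illustrative values
        .setAutoFocusEnabled(true)
        .build()
// cameraSource.start(surfaceView.holder) // call once the surface is ready and permission is granted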

The White House is reportedly launching an antitrust investigation against social media companies

Sugandha Lahoti
26 Sep 2018
3 min read
According to information obtained by Bloomberg, the White House is reportedly drafting an executive order targeting online platform bias at social media firms. Per this draft, federal antitrust and law enforcement agencies are instructed to investigate the practices of Google, Facebook, and other social media companies. The existence of the draft was first reported by Capital Forum. Federal law enforcers are primarily required to investigate two kinds of violations: first, whether an online platform has acted in violation of the antitrust laws; and second, anti-competitive conduct among online platforms and online platform bias. Per Capital Forum's sources, the draft is written in two parts. The first part is a policy statement stating that online platforms are central to the flow of information and commerce and need to be held accountable through competition. The second part instructs agencies to investigate bias and anticompetitive conduct in online platforms where they have the authority. Where they lack that authority, they are required to report concerns or issues to the Federal Trade Commission or the Department of Justice. No online platforms are mentioned by name in the draft. It's unclear when, or if, the White House will decide to issue the order. Donald Trump and the White House have always been vocal about what they see as prevalent bias on social media platforms. In August, Trump tweeted about social media discriminating against Republican and conservative voices. He also went on to claim that Google search results for “Trump News” report fake news, accusing the search engine's algorithms of being rigged. However, that allegation was not backed by evidence, which let Google slam Trump's accusations, asserting that its search engine algorithms do not favor any political ideology. Earlier this month, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey faced the Senate Select Intelligence Committee to discuss foreign interference through social media platforms in US elections. Google, Facebook, and Twitter also released their testimonies ahead of appearing before the committee. As reported by the Wall Street Journal, Google CEO Sundar Pichai also plans to meet privately with top Republican lawmakers this Friday to discuss a variety of topics, including the company's alleged political bias in search results. The meeting is organized by the House Majority Leader, Kevin McCarthy. Pichai said on Tuesday, “I look forward to meeting with members on both sides of the aisle, answering a wide range of questions, and explaining our approach.” Google is also facing public scrutiny over a report that it intends to launch a censored search engine in China. Google's custom search engine would link Chinese users' search queries to their personal phone numbers, making it easier for the government to track their searches. About a thousand Google employees, frustrated with a series of controversies involving Google, have signed a letter demanding transparency on the building of the alleged search engine.

Google's new Privacy Chief officer proposes a new framework for Security Regulation.
Amazon is the next target on EU's antitrust hitlist.
Mark Zuckerberg publishes Facebook manifesto for safeguarding against political interference.


How to develop a Simple To-Do List App [Tutorial]

Sugandha Lahoti
26 Sep 2018
11 min read
In this article, we will build a simple to-do list app that allows a user to add and display tasks. In this process, we will learn the following: How to build a user interface in Android Studio Working with ListViews How to work with Dialogs This article is taken from the book Learning Kotlin by building Android Applications by Eunice Adutwumwaa Obugyei and Natarajan Raman. This book will teach you programming in Kotlin including data types, flow control, lambdas, object-oriented, and functional programming while building  Android Apps. Creating the project Let's start by creating a new project in Android Studio, with the name TodoList. Select Add No Activity on the Add an Activity to Mobile screen: When the project creation is complete, create a Kotlin Activity by selecting File | New | Kotlin Activity, as shown in the following screenshot: This will start a New Android Activity wizard. On the Add an Activity to Mobile screen, select Basic Activity, as shown in the following screenshot: Now, check Launcher Activity on the Customize the Activity screen, and click the Finish button: Building your UI In Android, the code for your user interface is written in XML. You can build your UI by doing either of the following: Using the Android Studio Layout Editor Writing the XML code by hand Let's go ahead and start designing our TodoList app. Using the Android Studio layout editor Android Studio provides a layout editor, which gives you the ability to build your layouts by dragging widgets into the visual editor. This will auto-generate the XML code for your UI. Open the content_main.xml file. Make sure the Design tab at the bottom of the screen is selected, as shown in the following screenshot: To add a component to your layout, you just drag the item from the Palette on the left side of the screen. To find a component, either scroll through the items on the Palette, or click on the Palette search icon and search for the item you need. If the Palette is not showing on your screen, select View | Tool Windows | Palette to display it. Go ahead and add a ListView to your view. When a view is selected, its attributes are displayed in the XML Attributes editor on the right side of the screen. The Attributes editor allows you to view and edit the attributes of the selected component. Go ahead and make the following changes: Set the ID as list_view Change both the layout_width and layout_height attributes to match_parent If the Attributes editor is not showing; select View | Tool Windows | Attributes to display it. Now, select Text at the bottom of the editor window to view the generated XML code. You'll notice that the XML code now has a ListView placed within the ConstraintLayout:  A layout always has a root element. In the preceding code, ConstraintLayout is the root element. Instead of using the layout editor, you could have written the previous code yourself. The choice between using the layout editor or writing the XML code is up to you. You can use the option that you're most comfortable with. We'll continue to make additions to the UI as we go along. Now, build and run your code. as shown in the following screenshot: As you can see, the app currently doesn't have much to it. Let's go ahead and add a little more flesh to it. Since we'll use the FloatingActionButton as the button the user uses to add a new item to their to-do list, we need to change its icon to one that makes its purpose quite clear. 
Open the activity_main.xml file: One of the attributes of the android.support.design.widget.FloatingActionButton is app:srcCompat. This is used to specify the icon for the FloatingActionButton. Change its value from @android:drawable/ic_dialog_email to @android:drawable/ic_input_add. Build and run again. The FloatingActionButton at the bottom now looks like an Add icon, as shown in the following screenshot: Adding functionality to the user interface At the moment, when the user clicks on the Add button, a ticker shows at the bottom of the screen. This is because of the piece of code in the onCreate() method that defines and sets an OnClickListener to the FloatingActionButton: fab.setOnClickListener { view -> Snackbar.make(view, "Replace with your own action", Snackbar.LENGTH_LONG) .setAction("Action", null).show() } This is not ideal for our to-do list app. Let's go ahead and create a new method in the MainActivity class that will handle the click event: fun showNewTaskUI() { } The method currently does nothing. We'll add code to show the appropriate UI soon. Now, replace the code within the setOnClickListener() call with a call to the new method: fab.setOnClickListener { showNewTaskUI() } Adding a new task For adding a new task, we'll show the user an AlertDialog with an editable field. Let's start by building the UI for the dialog. Right-click the res/layout directory and select New | Layout resource file, as shown in the following screenshot: On the New Resource File window, change the Root element to LinearLayout and set the File name as dialog_new_task. Click OK to create the layout,  as shown in the following screenshot: Open the dialog_new_task layout and add an EditText view to the LinearLayout. The XML code in the layout should now look like this: The inputType attribute is used to specify what kind of data the field can take. By specifying this attribute, the user is shown an appropriate keyboard. For example, if the inputType is set to number, the numbers keyboard is displayed: Now, let's go ahead and add a few string resources we'll need for the next section. Open the res/values/strings.xml file and add the following lines of code to the resources tag: <string name="add_new_task_dialog_title">Add New Task</string> <string name="save">Save</string> The add_new_task_dialog_title string will be used as the title of our dialog The save string will be used as the text of a button on the dialog The best way to use an AlertDialog is by encapsulating it in a DialogFragment. The DialogFragment takes away the burden of handling the dialog's life cycle events. It also makes it easy for you to reuse the dialog in other activities. Create a new Kotlin class with the name NewTaskDialogFragment, and replace the class definition with the following lines of code: class NewTaskDialogFragment: DialogFragment() { // 1 // 2 interface NewTaskDialogListener { fun onDialogPositiveClick(dialog: DialogFragment, task: String) fun onDialogNegativeClick(dialog: DialogFragment) } var newTaskDialogListener: NewTaskDialogListener? 
= null // 3 // 4 companion object { fun newInstance(title: Int): NewTaskDialogFragment { val newTaskDialogFragment = NewTaskDialogFragment() val args = Bundle() args.putInt("dialog_title", title) newTaskDialogFragment.arguments = args return newTaskDialogFragment } } override fun onCreateDialog(savedInstanceState: Bundle?): Dialog { // 5 val title = arguments.getInt("dialog_title") val builder = AlertDialog.Builder(activity) builder.setTitle(title) val dialogView = activity.layoutInflater.inflate(R.layout.dialog_new_task, null) val task = dialogView.findViewById<EditText>(R.id.task) builder.setView(dialogView) .setPositiveButton(R.string.save, { dialog, id -> newTaskDialogListener?.onDialogPositiveClick(this, task.text.toString); }) .setNegativeButton(android.R.string.cancel, { dialog, id -> newTaskDialogListener?.onDialogNegativeClick(this) }) return builder.create() } override fun onAttach(activity: Activity) { // 6 super.onAttach(activity) try { newTaskDialogListener = activity as NewTaskDialogListener } catch (e: ClassCastException) { throw ClassCastException(activity.toString() + " must implement NewTaskDialogListener") } } } Let's take a closer look at what this class does: The class extends the DialogFragment class. It declares an interface with the name NewTaskDialogListener, which declares two methods: onDialogPositiveClick(dialog: DialogFragment, task: String) onDialogNegativeClick(dialog: DialogFragment) It declares a variable of type NewTaskDialogListener. It defines a method, newInstance(), in a companion object. By doing this, the method can be accessed without having to create an instance of the NewTaskDialogFragment class. The newInstance() method does the following: It takes an Int parameter named title It creates an instance of the NewTaskDialogFragment and passes the title as part of its arguments It returns the new instance of the NewTaskDialogFragment It overrides the onCreateDialog() method. This method does the following: It attempts to retrieve the title argument passed It instantiates an AlertDialog builder and assigns the retrieved title as the dialog's title It uses the LayoutInflater of the DialogFragment instance's parent activity to inflate the layout we created Then, it sets the inflated view as the dialog's view Sets two buttons to the dialog: Save and Cancel When the Save button is clicked, the text in the EditText will be retrieved and passed to the newTaskDialogListener variable via the onDialogPositiveClick() method In the onAttach() method, we attempt to assign the Activity object passed to the newTaskDialogListener variable created earlier. For this to work, the Activity object should implement the NewTaskDialogListener interface. Now, open the MainActivity class. Change the class declaration to include implementation of the NewTaskDialogListener. 
Your class declaration should now look like this: class MainActivity : AppCompatActivity(), NewTaskDialogFragment.NewTaskDialogListener { And, add implementations of the methods declared in the NewTaskDialogListener by adding the following methods to the MainActivity class: override fun onDialogPositiveClick(dialog: DialogFragment, task:String) { } override fun onDialogNegativeClick(dialog: DialogFragment) { } In the showNewTaskUI() method, add the following lines of code: val newFragment = NewTaskDialogFragment.newInstance(R.string.add_new_task_dialog_title) newFragment.show(fragmentManager, "newtask") In the preceding lines of code, the newInstance() method in NewTaskDialogFragment is called to generate an instance of the NewTaskDialogFragment class. The show() method of the DialogFragment is then called to display the dialog. Build and run. Now, when you click the Add button, you should see a dialog on your screen,  as shown in the following screenshot: As you may have noticed, nothing happens when you click the SAVE button. In the  onDialogPositiveClick() method, add the line of code shown here: Snackbar.make(fab, "Task Added Successfully", Snackbar.LENGTH_LONG).setAction("Action", null).show() As we may remember, this line of code displays a ticker at the bottom of the screen. Now, when you click the SAVE button on the New Task dialog, a ticker shows at the bottom of the screen. We're currently not storing the task the user enters. Let's create a collection variable to store any task the user adds. In the MainActivity class, add a new variable of type ArrayList<String>, and instantiate it with an empty ArrayList: private var todoListItems = ArrayList<String>() In the onDialogPositiveClick() method, place the following lines of code at the beginning of the method definition: todoListItems.add(task) listAdapter?.notifyDataSetChanged() This will add the task variable passed to the todoListItems data, and call notifyDataSetChanged() on the listAdapter to update the ListView. Saving the data is great, but our ListView is still empty. Let's go ahead and rectify that. Displaying data in the ListView To make changes to a UI element in the XML layout, you need to use the findViewById() method to retrieve the instance of the element in the corresponding Activity of your layout. This is usually done in the onCreate() method of the Activity. Open MainActivity.kt, and declare a new ListView instance variable at the top of the class: private var listView: ListView? = null Next, instantiate the ListView variable with its corresponding element in the layout. Do this by adding the following line of code at the end of the onCreate() method: listView = findViewById(R.id.list_view) To display data in a ListView, you need to create an Adapter, and give it the data to display and information on how to display that data. Depending on how you want the data displayed in your ListView, you can either use one of the existing Android Adapters, or create your own. For now, we'll use one of the simplest Android Adapters, ArrayAdapter. The ArrayAdapter takes an array or list of items, a layout ID, and displays your data based on the layout passed to it. In the MainActivity class, add a new variable, of type ArrayAdapter: private var listAdapter: ArrayAdapter<String>? 
= null Add the method shown here to the class: private fun populateListView() { listAdapter = ArrayAdapter(this, android.R.layout.simple_list_item_1, todoListItems) listView?.adapter = listAdapter } In the preceding lines of code, we create a simple ArrayAdapter and assign it to the listView as its Adapter. Now, add a call to the previous method in the onCreate() method: populateListView() Build and run. Now, when you click the Add button, you'll see your entry show up on the ListView, as shown in the following screenshot: In this article, we built a simple TodoList app that allows a user to add new tasks, and edit or delete an already added task. In the process, we learned to use ListViews and Dialogs. Next, to learn about the different datastore options available and how to use them to make our app more usable, read our book, Learning Kotlin by building Android Applications. 6 common challenges faced by Android App developers. Google plans to let the AMP Project have an open governance model, soon! Entry level phones to taste the Go edition of the Android 9.0 Pie version
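The summary above mentions editing and deleting tasks, while the tutorial itself only walks through adding them. As a hedged sketch of one possible next step (not code from the book), a long-press listener on the ListView could remove the tapped task and refresh the adapter:

// Illustrative extension, added at the end of onCreate() in MainActivity:
// long-pressing a row removes that task from the list.
listView?.setOnItemLongClickListener { _, _, position, _ ->
    val removedTask = todoListItems.removeAt(position) // drop the task from the backing list
    listAdapter?.notifyDataSetChanged()                // tell the adapter to redraw the ListView
    Snackbar.make(fab, "Removed: $removedTask", Snackbar.LENGTH_LONG).show()
    true                                               // consume the long-press event
}

A real app would also persist the change, but the same remove-then-notify pattern applies whichever datastore you end up using.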


What is a convolutional neural network (CNN)? [Video]

Richard Gall
25 Sep 2018
5 min read
What is a convolutional neural network, exactly? Well, let's start with the basics: a convolutional neural network (CNN) is a type of neural network that is most often applied to image processing problems. You've probably seen them in action anywhere a computer is identifying objects in an image, but you can also use convolutional neural networks in natural language processing projects, too. The fact that they are useful for these fast-growing areas is one of the main reasons they're so important in deep learning and artificial intelligence today.

What makes a convolutional neural network unique?
Once you understand how a convolutional neural network works and what sets it apart from other neural networks, you can see why they're so effective for processing and classifying images. But let's first take a regular neural network. A regular neural network has an input layer, hidden layers, and an output layer. The input layer accepts inputs in different forms, while the hidden layers perform calculations on these inputs. The output layer then delivers the outcome of the calculations and extractions. Each of these layers contains neurons that are connected to neurons in the previous layer, and each neuron has its own weight. This means you aren't making any assumptions about the data being fed into the network - great usually, but not if you're working with images or language. Convolutional neural networks work differently, as they treat data as spatial. Instead of neurons being connected to every neuron in the previous layer, they are instead only connected to neurons close to them, and all share the same weights. This simplification in the connections means the network upholds the spatial aspect of the data set; it means your network doesn't think an eye is all over the image. The word 'convolutional' refers to the filtering process that happens in this type of network. Think of it this way: an image is complex - a convolutional neural network simplifies it so it can be better processed and 'understood.'

What's inside a convolutional neural network?
Like a normal neural network, a convolutional neural network is made up of multiple layers. There are a couple of layers that make it unique - the convolutional layer and the pooling layer. However, like other neural networks, it will also have a ReLU or rectified linear unit layer, and a fully connected layer. The ReLU layer acts as an activation function, ensuring non-linearity as the data moves through each layer in the network - without it, the data being fed into each layer would lose the dimensionality that we want to maintain. The fully connected layer, meanwhile, allows you to perform classification on your dataset.

The convolutional layer
The convolutional layer is the most important, so let's start there. It works by placing a filter over an array of image pixels - this then creates what's called a convolved feature map. It's a bit like looking at an image through a window which allows you to identify specific features you might not otherwise be able to see.

The pooling layer
Next we have the pooling layer - this downsamples, or reduces the sample size of, a particular feature map. This also makes processing much faster, as it reduces the number of parameters the network needs to process. The output of this is a pooled feature map. There are two ways of doing this: max pooling, which takes the maximum input of a particular convolved feature, or average pooling, which simply takes the average. A small worked example of these two steps follows below.
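To make the filtering and pooling steps concrete, here is a small worked sketch. It is purely illustrative and not tied to any deep learning library: it slides a hard-coded 3x3 filter over a tiny grayscale image (real networks learn the filter values during training), applies a ReLU, and then max-pools the result.

// Illustrative only: convolve a 6x6 grayscale image with a 3x3 filter, apply ReLU,
// then apply 2x2 max pooling. Real CNNs learn the filter values; this one is hard-coded.
fun convolve(image: Array<DoubleArray>, kernel: Array<DoubleArray>): Array<DoubleArray> {
    val k = kernel.size
    val outSize = image.size - k + 1                  // "valid" convolution, no padding
    return Array(outSize) { r ->
        DoubleArray(outSize) { c ->
            var sum = 0.0
            for (i in 0 until k)
                for (j in 0 until k)
                    sum += image[r + i][c + j] * kernel[i][j]
            maxOf(sum, 0.0)                           // ReLU applied to the convolved value
        }
    }
}

fun maxPool(featureMap: Array<DoubleArray>, pool: Int = 2): Array<DoubleArray> {
    val outSize = featureMap.size / pool
    return Array(outSize) { r ->
        DoubleArray(outSize) { c ->
            var best = Double.NEGATIVE_INFINITY
            for (i in 0 until pool)
                for (j in 0 until pool)
                    best = maxOf(best, featureMap[r * pool + i][c * pool + j])
            best                                      // keep only the strongest activation
        }
    }
}

fun main() {
    // A 6x6 image whose right half is bright: a simple vertical edge
    val image = Array(6) { DoubleArray(6) { c -> if (c >= 3) 1.0 else 0.0 } }
    // A filter that responds to left-to-right changes in intensity
    val kernel = arrayOf(
        doubleArrayOf(-1.0, 0.0, 1.0),
        doubleArrayOf(-1.0, 0.0, 1.0),
        doubleArrayOf(-1.0, 0.0, 1.0))
    val convolved = convolve(image, kernel)           // 4x4 convolved feature map
    val pooled = maxPool(convolved)                   // 2x2 pooled feature map
    pooled.forEach { row -> println(row.joinToString()) } // the edge response survives pooling
}

Swapping the maxOf call in maxPool for an average of the window gives you average pooling instead.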
These steps amount to feature extraction, whereby the network builds up a picture of the image data according to its own mathematical rules. If you want to perform classification, you'll need to move into the fully connected layer. To do this, you'll need to flatten things out - the fully connected layer expects a one-dimensional vector of values rather than two-dimensional feature maps.

How to train a convolutional neural network
There are a number of ways you can train a convolutional neural network. If you're working with unlabelled data, you can use unsupervised learning methods. One of the most popular ways of doing this is using auto-encoders - this allows you to squeeze data into a low-dimensional space, performing the calculations in the first part of the convolutional neural network. Once this is done, you'll then need to reconstruct the input with additional layers that upsample the data you have. Another option is to use generative adversarial networks, or GANs. With a GAN, you train two networks. The first gives you artificial data samples that should resemble data in the training set, while the second is a 'discriminative network' - it should distinguish between the artificial samples and the 'true' data.

What's the difference between a convolutional neural network and a recurrent neural network?
Although there's a lot of confusion about the difference between a convolutional neural network and a recurrent neural network, it's actually simpler than many people realise. Whereas a convolutional neural network is a feedforward network that filters spatial data, a recurrent neural network, as the name implies, feeds data back into itself. From this perspective, recurrent neural networks are better suited to sequential data. Think of it like this: a convolutional network is able to perceive patterns across space - a recurrent neural network can see them over time.

How to get started with convolutional neural networks
If you want to get started with convolutional neural networks, Python and TensorFlow are great tools to begin with. It's worth exploring the MNIST dataset too. This is a database of handwritten digits that you can use to get started with building your first convolutional neural network. To learn more about convolutional neural networks, artificial intelligence, and deep learning, visit Packt's store for eBooks and videos.

Linux programmers opposed to new Code of Conduct threaten to pull code from project

Melisha Dsouza
25 Sep 2018
6 min read
Facts of the controversy at hand
To “help make the kernel community a welcoming environment to participate in”, Linux accepted a new Code of Conduct earlier this month. This created conflict in the developer community because of the clauses in the CoC. The CoC is derived from the Contributor Covenant, created by Coraline Ada Ehmke, a software developer, an open-source advocate, and an LGBT activist. Just 30 minutes after signing this CoC, principal kernel contributor Linus Torvalds sent a mail apologizing for his past behavior and announced a temporary break to improve upon it: “This week people in our community confronted me about my lifetime of not understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry.” Many speculate that this decision to take a break is a precautionary measure to prevent Torvalds from violating the newly created Code of Conduct. The controversy only grew after a sarcastic tweet from Ehmke on the subject.

The new Linux Code of Conduct is causing huge conflict
Linux’s move from its Code of Conflict to a new Code of Conduct has not been received well by many of its developers. Some have threatened to pull blocks of code important to the project in revolt against the change. This could have serious consequences, because Linux is one of the most important pieces of open source software in the world. If the threats are put into action, large parts of the internet would be left vulnerable to exploits. Applications that use Linux would be like an incomplete Jenga stack that could collapse any minute.

Why some Linux developers are opposed to the new code of conduct
Here is a summary of the views that, according to the opposing developers, justify their decision:
Adhering to the CoC could mean good contributors are removed over trivial matters, or even over events that happened a long time ago, like Larry Garfield, a prominent Drupal contributor who was asked to step down after his sex fetish was made public.
There is a lack of proper definitions for punishments, time frames, and what constitutes abuse, harassment, or inappropriate behavior, which leaves the Code of Conduct wide open to exploitation. It also gives the people charged with enforcement immense power.
It could force acceptance of contributions that wouldn’t make the cut if made by cis white males. Developers are concerned that Linux will start accepting patches from anyone and everyone just to stay in line with the Code of Conduct. Acceptance will no longer depend on a person’s ability to code, but on social parameters like race, color, sex, and gender.

Why some developers believe in the new Linux Code of Conduct
On the other side of the argument, here are some of the reasons supporters believe the CoC will foster social justice:
Encouraging an inclusive and safe space for women, LGBTQIA+ people, and People of Color, who in the absence of the CoC are excluded, harassed, and sometimes even raped by cis white males.
The CoC aims to overcome meritocracy, which in many organizations has consistently shown itself to mainly benefit those with privilege, to the exclusion of underrepresented people in technology. A vast majority of Linux contributors are cis white males.
The Contributor Covenant’s Code of Conduct would enable the building of a more diverse demographic of contributors.

What does this mean for the developer community?
Linux includes programmers who are always free to contribute to its open source platform. Contributing good code helps them climb up the ladder and become a ‘maintainer’. The greatest strength of Linux was its flexibility: developers would contribute to the kernel and be concerned with only a single thing, their code patch, and the Linux community would judge the code based on its quality. However, critics say the new Code of Conduct could make passing judgement on code more challenging. For them, the Code of Conduct is a set of rules that expects everyone to be at equal levels in the community, and it could mean that certain patches are accepted for fear of contravening the Code of Conduct. Coraline Ada Ehmke was forthright in her criticism of this view on Twitter. Clearly, many of the fears of the Code of Conduct’s critics haven’t yet come to pass. What they’re ultimately worried about is that there could be negative consequences.

Google developer Ted Ts’o next on the hit list
Earlier this week, activist Sage Sharp tweeted about Ted Ts’o. This perhaps needs some context: the beginning of this argument dates all the way back to 2011, when Ts’o, then a member of the Linux Foundation's technical advisory board, participated in a discussion on the mailing list for the Australian national Linux conference that year, making comments that were later interpreted by Aurora as rape apologism. Using Aurora's piece as a fuse, Google employee Matthew Garrett slammed Ts'o over his beliefs. In 2017, yielding to the demands of SJWs, Google threw out James Damore, an engineer who circulated an internal memo about reverse discrimination in hiring practices. The SJWs are coming for Ts'o, and the best way to go forward would be to “take a break”, just like Linus did. As claimed by Coraline Ada Ehmke, the underlying aim of the CoC is to guide people in behaving in a respectable way and create a positive environment for people irrespective of their race, ethnicity, religion, nationality, and political views. However, overlooking this aim, developers remain concerned about the loopholes in the CoC. Gain more insights on this news, as well as views from members of the Linux community, at It's FOSS.

Linux drops Code of Conflict and adopts new Code of Conduct
Linus Torvalds is sorry for his ‘hurtful behavior’, is taking ‘a break (from the Linux community) to get help’
NSA researchers present security improvements for Zephyr and Fuchsia at Linux Security Summit 2018


How to set up a C# environment for Machine Learning [Tutorial]

Savia Lobo
25 Sep 2018
8 min read
With ever-growing data and its accessibility, the applications of machine learning are rapidly rising across various industries. However, the growth in trained data scientists is yet to meet the pace of growth of ML needs in businesses. In spite of abundant resources and software libraries that make building ML models easier, it takes time and experience for a data scientist or ML engineer to master such skill sets. The necessity for machine learning is everywhere, and most production enterprise applications are written in C# using tools such as Visual Studio, SQL Server, and Microsoft Azure. Machine learning with C# uniquely blends an understanding of various machine learning concepts and techniques with the tools available for adding intelligent features, such as image and motion detection, Bayes intuition, and deep learning, to C# .NET applications. This tutorial is an excerpt taken from the book C# Machine Learning Projects written by Yoon Hyup Hwang. In this book, you will learn how to choose a model for your problem, how to evaluate the performance of your models, and how you can use C# to build machine learning models for your future projects. In today's post, we will learn how to set up a C# environment for machine learning. We will first install and set up Visual Studio and then do the same for two packages (Accord.NET and Deedle).

Setting up Visual Studio for C#
Assuming you have some prior knowledge of C#, we will keep this part brief. In case you need to install Visual Studio for C#, go to https://www.visualstudio.com/downloads/ and download one of the versions of Visual Studio. In this article, we use the Community Edition of Visual Studio 2017. If it prompts you to download .NET Framework before you install Visual Studio, go to https://www.microsoft.com/en-us/download/details.aspx?id=53344 and install it first.

Installing Accord.NET
Accord.NET is a .NET ML framework. On top of machine learning packages, the Accord.NET framework also has mathematics, statistics, computer vision, computer audition, and other scientific computing modules. We are mainly going to use the ML package of the Accord.NET framework. Once you have installed and set up your Visual Studio, let's start installing the ML framework for C#, Accord.NET. It is easiest to install it through NuGet. To install it, open the package manager (Tools | NuGet Package Manager | Package Manager Console) and install Accord.MachineLearning and Accord.Controls by typing in the following commands:
PM> Install-Package Accord.MachineLearning
PM> Install-Package Accord.Controls
Now, let's build a sample ML application using these Accord.NET packages. Open your Visual Studio and create a new Console Application under the Visual C# category. Use the preceding commands to install those Accord.NET packages through NuGet and add references to your project. You should see some Accord.NET packages added to your references in your Solutions Explorer, and the result should look something like the following screenshot:
The model we are going to build now is a very simple logistic regression model. Given two-dimensional arrays and an expected output, we are going to develop a program that trains a logistic regression classifier and then plots the results showing the expected output and the actual predictions by this model.
Now, let's build a sample ML application using these Accord.NET packages. Open Visual Studio and create a new Console Application under the Visual C# category. Use the preceding commands to install the Accord.NET packages through NuGet, which adds the references to your project. You should see the Accord.NET packages listed under References in your Solution Explorer.

The model we are going to build is a very simple logistic regression model. Given two-dimensional input arrays and the expected outputs, we are going to develop a program that trains a logistic regression classifier and then plots the results, showing both the expected output and the actual predictions made by the model. The inputs and the expected outputs are the two-dimensional points and the binary class labels defined at the top of the following code.

The code for this sample logistic regression classifier is as follows:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Accord.Controls;
using Accord.Statistics;
using Accord.Statistics.Models.Regression;
using Accord.Statistics.Models.Regression.Fitting;

namespace SampleAccordNETApp
{
    class Program
    {
        static void Main(string[] args)
        {
            double[][] inputs =
            {
                new double[] { 0, 0 },
                new double[] { 0.25, 0.25 },
                new double[] { 0.5, 0.5 },
                new double[] { 1, 1 },
            };

            int[] outputs =
            {
                0,
                0,
                1,
                1,
            };

            // Train a logistic regression model
            var learner = new IterativeReweightedLeastSquares<LogisticRegression>()
            {
                MaxIterations = 100
            };
            var logit = learner.Learn(inputs, outputs);

            // Predict output
            bool[] predictions = logit.Decide(inputs);

            // Plot the results
            ScatterplotBox.Show("Expected Results", inputs, outputs);
            ScatterplotBox.Show("Actual Logistic Regression Output", inputs, predictions.ToZeroOne());

            Console.ReadKey();
        }
    }
}

Once you are done writing this code, you can run it by hitting F5 or clicking the Start button at the top. If everything runs smoothly, it should produce two scatter plots: one for the expected results and one for the actual logistic regression output. If it fails, check your references or look for typos. You can always right-click on a class name or on the light bulb icon to have Visual Studio help you find which packages are missing from the namespace references.

Plots produced by the sample program. Left: actual prediction results; right: expected output.

This sample code can be found at the following link: https://github.com/yoonhwang/c-sharp-machine-learning/blob/master/ch.1/SampleAccordNETApp.cs.

Installing Deedle

Deedle is an open source .NET library for data frame programming. Deedle lets you manipulate data in a way that is similar to R data frames and pandas data frames in Python. We will be using this package to load and manipulate the data for our machine learning projects.

Similar to how we installed Accord.NET, we can install the Deedle package from NuGet. Open the package manager (Tools | NuGet Package Manager | Package Manager Console) and install Deedle using the following command:

PM> Install-Package Deedle

Let's briefly look at how we can use this package to load data from a CSV file and do some simple data manipulations. For more information, you can visit http://bluemountaincapital.github.io/Deedle/ for API documentation and sample code.

We are going to use daily AAPL stock price data from 2010 to 2013 for this exercise. You can download this data from the following link: https://github.com/yoonhwang/c-sharp-machine-learning/blob/master/ch.1/table_aapl.csv.

Open Visual Studio and create a new Console Application under the Visual C# category. Use the preceding command to install the Deedle library through NuGet and add the reference to your project. You should see the Deedle package added to your references in your Solution Explorer.

Now, we are going to load the CSV data into a Deedle data frame and then do some data manipulations. First, we are going to update the index of the data frame with the Date field. Then, we are going to apply some arithmetic operations on the Open and Close columns to calculate the percentage change from the open to the close price. Lastly, we will calculate daily returns by taking the difference between each close price and the previous close price, dividing it by the previous close price, and then multiplying it by 100.
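To make the daily-return arithmetic concrete before handing it to Deedle, here is a minimal standalone sketch. The price values are hypothetical and are not taken from the AAPL dataset; in the Deedle program below, the same idea is applied column-wise using Diff(1) and element-wise column arithmetic:

double previousClose = 100.00;  // hypothetical close price for day t-1
double currentClose = 102.50;   // hypothetical close price for day t

// Daily return as described above: change relative to the previous close, in percent
double dailyReturn = (currentClose - previousClose) / previousClose * 100.0;
// dailyReturn is 2.5, i.e. a 2.5% gain over the previous day's close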
The code for this sample Deedle program is as follows:

using Deedle;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace DeedleApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // Read AAPL stock prices from a CSV file
            var root = Directory.GetParent(Directory.GetCurrentDirectory()).Parent.FullName;
            var aaplData = Frame.ReadCsv(Path.Combine(root, "table_aapl.csv"));

            // Print the data
            Console.WriteLine("-- Raw Data --");
            aaplData.Print();

            // Set the Date field as the index
            var aapl = aaplData.IndexRows<String>("Date").SortRowsByKey();
            Console.WriteLine("-- After Indexing --");
            aapl.Print();

            // Calculate the percent change from open to close
            var openCloseChange =
                ((
                    aapl.GetColumn<double>("Close") - aapl.GetColumn<double>("Open")
                ) / aapl.GetColumn<double>("Open")) * 100.0;
            aapl.AddColumn("openCloseChange", openCloseChange);
            Console.WriteLine("-- Simple Arithmetic Operations --");
            aapl.Print();

            // Shift close prices by one row and calculate daily returns
            var dailyReturn = aapl.Diff(1).GetColumn<double>("Close") / aapl.GetColumn<double>("Close") * 100.0;
            aapl.AddColumn("dailyReturn", dailyReturn);
            Console.WriteLine("-- Shift --");
            aapl.Print();

            Console.ReadKey();
        }
    }
}

When you run this code, the program prints the data frame at each step: first the raw dataset, then the dataset re-indexed by the Date field, then the frame with the open-to-close change column added, and finally the frame with the daily-return column computed from the shifted close prices.

As you can see from this sample Deedle project, we can run various data manipulation operations with one or two lines of code, where the same operations would have required many more lines of native C#. We will use the Deedle library frequently throughout this book for data manipulation and feature engineering.

This sample Deedle code can be found at the following link: https://github.com/yoonhwang/c-sharp-machine-learning/blob/master/ch.1/DeedleApp.cs.

In this post, we walked through how to set up a C# environment for our future ML projects. We built a simple logistic regression classifier using the Accord.NET framework and used the Deedle library to load and manipulate data. If you enjoyed reading this post, do check out C# Machine Learning Projects to put your skills into practice and implement your machine learning knowledge in real projects.

.NET announcements: Preview 2 of .NET Core 2.2 and Entity Framework Core 2.2, C# 7.3, and ML.NET 0.5
ReSharper 18.2 brings performance improvements, C# 7.3, Blazor support and spellcheck
Best practices for C# code optimization [Tutorial]