
How-To Tutorials - Artificial Intelligence

86 Articles

Bug Tracking

Packt
04 Jan 2017
11 min read
In this article by Eduardo Freitas, the author of the book Building Bots with Node.js, we will learn about Internet Relay Chat (IRC). IRC enables real-time communication in the form of text. It runs over the TCP protocol in a client-server model. IRC supports group messaging through channels, as well as private messages.

IRC is organized into many networks with different audiences. Because IRC follows a client-server model, users need IRC clients to connect to IRC servers. IRC clients are available as installable packages and as web-based clients, and some browsers also provide IRC clients as add-ons. When connecting to an IRC server, users must provide a unique nick (nickname) and either join an existing channel or start a new one.

In this article, we are going to develop one such IRC bot for bug tracking. This bot will report the list of bugs as well as details about a particular bug, all seamlessly within an IRC channel. For a team, knowing about their bugs or defects becomes a one-window operation. Great!

IRC client and server

As mentioned in the introduction, to initiate IRC communication we need an IRC client and a server or network for the client to connect to. We will use the freenode network, the largest IRC network focused on free and open source software.

IRC web-based client

I will use the web-based IRC client at https://webchat.freenode.net/. After opening the URL, you will see the following screen. As mentioned earlier, while connecting we need to provide Nickname: and Channels:. I have entered Madan as the Nickname: and #BugsChannel as the Channels:.
In IRC, channels are always identified by a leading #, so I used # for the bugs channel. This is the new channel we will start for communication; all developers or team members can similarly provide their own nicknames along with this channel name to join. Now pass the Humanity: check by selecting I'm not a robot and clicking the Connect button. Once connected, you will see the following screen. With this, our IRC client is connected to the freenode network. You can also see the username @Madan on the right-hand side within #BugsChannel; whoever joins this channel on this network appears there. Later, we will ask our bot to join this channel on the same network and see how it appears within the channel.

IRC bots

An IRC bot is a program that connects to IRC as a client and appears as one of the users in IRC channels. IRC bots are used to provide IRC services or to host chat-based custom implementations that help teams collaborate efficiently.

Creating our first IRC bot using IRC and Node.js

Let's start by creating a folder on our local drive to store our bot program, from the command prompt:

```
mkdir ircbot
cd ircbot
```

Assuming we have Node.js and npm installed, let's create and initialize our package.json, which will store our bot's dependencies and definitions:

```
npm init
```

Once you go through the npm init options (which are very easy to follow), you'll find the resulting package.json file in your project folder. Let's install the irc package from npm, which can be found at https://www.npmjs.com/package/irc:

```
npm install --save irc
```

Having done this, the next thing to do is to update your package.json to include the "engines" attribute. Open the package.json file with a text editor and update it as follows.
"engines": { "node": ">=5.6.0" } Your package.json should then look like this. Let's create our app.js file which will be the entry point to our bot as mentioned while setting up our node package. Our app.js should like this. var irc = require('irc'); var client = new irc.Client('irc.freenode.net', 'BugTrackerIRCBot', { autoConnect: false }); client.connect(5, function(serverReply) { console.log("Connected!n", serverReply); client.join('#BugsChannel', function(input) { console.log("Joined #BugsChannel"); client.say('#BugsChannel', "Hi, there. I am an IRC Bot which track bugs or defects for your team.n I can help you using following commands.n BUGREPORT n BUG # <BUG. NO>"); }); }); Now let's run our Node.js program and at first see how our console looks. If everything works well, our console should show our bot as connected to the required network and also joined a channel. Console can be seen as the following, Now if you look at our channel #BugsChannel in our web client, you should see our bot has joined and also sent a welcome message as well. Refer the following screen: If you look at the the preceding screen, our bot program got has executed successfully. Our bot BugTrackerIRCBot has joined the channel #BugsChannel and also bot sent an introduction message to all whoever is on channel. If you look at the right side of the screen under usernames, we are seeing BugTrackerIRCBot below @Madan Code understanding of our basic bot After seeing how our bot looks in IRC client, let's look at basic code implementation from app.js. We used irc library with the following lines, var irc = require('irc'); Using irc library, we instantiated client to connect one of the IRC networks using the following code snippet, var client = new irc.Client('irc.freenode.net', 'BugTrackerIRCBot', { autoConnect: false }); Here we connected to network irc.freenode.net and provided a nickname as BugTrackerIRCBot. 
This name was chosen because I would like my bot to track and report bugs in the future. Now we ask the client to connect and join a specific channel using the following code snippet:

```js
client.connect(5, function(serverReply) {
  console.log("Connected!\n", serverReply);
  client.join('#BugsChannel', function(input) {
    console.log("Joined #BugsChannel");
    client.say('#BugsChannel', "Hi, there. I am an IRC bot which tracks bugs or defects for your team.\n" +
      "I can help you using the following commands.\n BUGREPORT \n BUG # <BUG NO>");
  });
});
```

In the preceding code snippet, once the client is connected, we get a reply from the server, which we show on the console. Once successfully connected, we ask the bot to join a channel with the following line:

```js
client.join('#BugsChannel', function(input) {
```

Remember, #BugsChannel is the channel we joined from the web client at the start. Using client.join(), I am asking my bot to join the same channel. Once the bot has joined, it says a welcome message in the same channel using client.say(). Hopefully this has given you a basic understanding of our bot and its code implementation. Next, we will enhance our bot so that our teams can have an effective communication experience while chatting.

Enhancing our BugTrackerIRCBot

Having built a very basic IRC bot, let's enhance our BugTrackerIRCBot. As developers, we always want to know how our programs or systems are functioning. Typically, our testing teams test a system and log bugs or defects in a bug tracking system; we developers later take a look at those bugs and address them as part of the development life cycle. Along the way, developers collaborate and communicate over messaging platforms like IRC. We would like to provide a better experience during development by leveraging IRC bots. So here is what exactly we are doing.
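Looking ahead to the enhancement, the bot will need to recognize the two commands its welcome message advertises (BUGREPORT and BUG # <BUG NO>). Here is a minimal sketch of that parsing; the function name, regular expression, and return shape are my own, not from the article:

```javascript
// Hypothetical command parser for the bot's advertised commands.
// Returns an object describing the command, or null for ordinary chat.
function parseBugCommand(text) {
  var trimmed = text.trim().toUpperCase();
  if (trimmed === 'BUGREPORT') {
    return { command: 'BUGREPORT' };                 // list all bugs
  }
  var match = trimmed.match(/^BUG\s*#\s*(\d+)$/);    // e.g. "BUG # 2"
  if (match) {
    return { command: 'BUG', id: Number(match[1]) }; // details for one bug
  }
  return null;                                       // not a bot command; ignore
}
```

Wired into the bot, this could live inside a channel-message listener such as `client.addListener('message#BugsChannel', function (from, text) { ... })`, with the bot replying via `client.say()` based on the parsed command.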
We are creating a channel for communication that all team members will join, and our bot will be there too. In this channel, bugs will be reported and communicated based on developers' requests. If developers need additional information about a bug, the chat bot can also help them by providing a URL from the bug tracking system. Awesome! Before going into details, let me summarize how we are going to do this:

- Enhance our basic bot program for a more conversational experience
- Set up a bug tracking system, or bug storage, where bugs will be stored and tracked for developers

We just mentioned a bug storage system. For that, I would like to introduce DocumentDB, a NoSQL, JSON-based cloud storage system.

What is DocumentDB?

DocumentDB is one of the NoSQL databases covered earlier; data is stored in JSON documents, and it is offered on the Microsoft Azure platform. Details of DocumentDB can be found at https://azure.microsoft.com/en-in/services/documentdb/.

Setting up DocumentDB for our BugTrackerIRCBot

Assuming you already have a Microsoft Azure subscription, follow these steps to configure DocumentDB for your bot.

Create an account ID for DocumentDB

Let's create a new account called botdb from the Azure portal, as in the following screenshot. Select NoSQL API of type DocumentDB. Select the appropriate subscription and resources; I am using existing resources for this account, but you can also create a new dedicated resource. Once you enter all the required information, hit the Create button at the bottom to create the new DocumentDB account. The newly created botdb account can be seen as follows.

Create a collection and database

Select the botdb account from the account list shown previously. This shows various menu options like Properties, Settings, Collections, etc. Under this account we need to create a collection to store bug data.
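As a preview of what will go into this collection, a single bug document might look like the following sketch; the attribute names (id, status, title, description, priority, assignedto, url) follow the article, while the values and the URL are invented for illustration:

```json
{
  "id": "1",
  "status": "Open",
  "title": "Login button unresponsive",
  "description": "Clicking Login on the signup page does nothing",
  "priority": "High",
  "assignedto": "Madan",
  "url": "https://mybugtracker.example.com/bugs/1"
}
```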
To create a new collection, click the Add Collection option shown in the following screenshot. The following screen then appears on the right side; enter the details as shown. In the preceding screen, we are creating a new database along with our new collection, Bugs. The new database will be named BugDB. Once this database is created, we can add other bug-related collections to it in the future, using the Use existing option on the same screen. Once you enter all the relevant data, click OK to create both the database and the collection. The COLLECTION ID and DATABASE shown in the following screenshot will be used while enhancing our bot.

Create data for our BugTrackerIRCBot

Now we have the BugDB database with a Bugs collection that will hold all the bug data. Let's add some data to our collection using the Document Explorer menu option shown in the following screenshot. This opens a screen listing the databases and collections created so far. Select our database BugDB and collection Bugs from the available list. To create a JSON document in our Bugs collection, click the Create option. This opens a New Document screen for entering JSON-based data; enter data as per the following screenshot. We will store the id, status, title, description, priority, assignedto, and url attributes for each bug document in the Bugs collection. To save the JSON document in our collection, click the Save button. In this way we can create sample records in the Bugs collection, which will later be wired up in the Node.js program. A sample list of bugs can be seen in the following screenshot.

Summary

Every development team needs bug tracking and reporting tools, with typical needs around bug reporting and bug assignment.
For critical projects, these needs become critical to project timelines as well. This article showed how we can provide a seamless experience to developers while they communicate with peers within a channel. To summarize, we learned how to use DocumentDB from Microsoft Azure: we created a new collection along with a new database to store bug data, and we added some sample JSON documents to the Bugs collection. In today's world of collaboration, development teams that use such integrations and automations will be more efficient and effective in delivering quality products.

Further resources on this subject:

- Talking to Bot using Browser [article]
- Asynchronous Control Flow Patterns with ES2015 and beyond [article]
- Basic Website using Node.js and MySQL database [article]
2018 new year resolutions to thrive in the Algorithmic World - Part 3 of 3

Sugandha Lahoti
05 Jan 2018
5 min read
In the first resolution, we talked about a simple learning roadmap for developing your data science skills; in the second, about the importance of staying relevant in an increasingly automated job market. Now it's time to think about the kind of person you want to be and the legacy you will leave behind.

3rd Resolution: Choose projects wisely and be mindful of their impact.

Your work has real consequences, and your projects will often be larger than what you know or can do alone. As such, the first step toward creating impact with intention is to define the project's scope, purpose, outcomes, and assets clearly. The next most important factor is choosing the project team.

1. Seek out, learn from, and work with a diverse group of people

To become a successful data scientist you must learn how to collaborate. Not only does collaboration make projects fun and efficient, it also brings in diverse points of view and expertise from other disciplines. This is a great advantage for machine learning projects that attempt to solve complex real-world problems. You could benefit from working with other technical professionals like web developers, software programmers, data analysts, data administrators, and game developers. Collaborating with such people will enhance your own domain knowledge and skills, and also let you see your work from a broader technical perspective.

Apart from the people in the core data and software domains, others also have a primary stake in your project's success. These include UX designers; people with a humanities background, if you are building a product intended to participate in society (which most products are); business development folks, who actually sell your product and bring in revenue; and marketing people, who bring your product to a much wider audience, to name a few.
Working with people of diverse skill sets will help you market your product right and make it useful and interpretable to the target audience. In addition to diverse skill sets and educational backgrounds, it is also important to work with people who think differently from you and whose experiences differ from yours; this gives you a more holistic view of the problems your project is trying to tackle and leads to a richer, more distinctive set of solutions.

2. Educate yourself on ethics for data science

As an aspiring data scientist, you should always keep in mind the ethical aspects surrounding privacy, data sharing, and algorithmic decision-making. Here are some ways to develop a mind inclined toward designing ethically sound data science projects and models.

- Listen to seminars and talks by experts and researchers in fairness, accountability, and transparency in machine learning systems. Our favorites include Kate Crawford's talk The Trouble with Bias, Tricia Wang on The human insights missing from big data, and Ethics & Data Science by Jeff Hammerbacher.
- Follow top influencers on social media and catch up with their blogs and work regularly. These researchers include Kate Crawford, Margaret Mitchell, Rich Caruana, Jake Metcalf, Michael Veale, and Kristian Lum, among others.
- Take up courses that guide you on eliminating unintended bias while designing data-driven algorithms. We recommend Data Science Ethics by the University of Michigan, available on edX. You can also take a course on basic philosophy from a university of your choice.
- Start at the beginning. Read books on ethics and philosophy over the long weekends this year. You can begin with Aristotle's Nicomachean Ethics to understand the real meaning of ethics, a term Aristotle helped develop.
We also recommend browsing The Stanford Encyclopedia of Philosophy, an online archive of peer-reviewed original papers in philosophy, freely accessible to Internet users. You can also try Practical Ethics, a book by Peter Singer, and The Elements of Moral Philosophy by James Rachels.

Attend or follow upcoming conferences on bringing transparency to socio-technical systems. For starters, FAT* (Conference on Fairness, Accountability, and Transparency) is scheduled for February 23 and 24, 2018 at New York University, NYC. We also have the 5th annual FAT/ML conference later in the year.

3. Question and reassess your hypotheses before, during, and after implementation

Finally, for any data science project, always reassess your hypotheses before, during, and after the actual implementation. Ask yourself these questions after each of the steps above and compare the answers with your previous ones:

- What question are you asking? What is your project about? Whose needs does it address? Who could it adversely impact?
- What data are you using? Is the data type suitable for your kind of model? Is the data relevant and fresh? What are its inherent biases and limitations? How robust are your workarounds for them?
- What techniques are you going to try? What algorithms are you going to implement? What is their complexity? Are they interpretable and transparent?
- How will you evaluate your methods and results? What do you expect the results to be? Are the results biased? Are they reproducible?

These pointers will help you evaluate your project goals from a customer and business point of view. They will also help you build efficient models that can benefit society and your organization at large. With this, we come to the end of our new year resolutions for an aspiring data scientist. The beauty of the ideas behind these resolutions, however, is that they transfer easily to anyone in any job.
All you have to do is get your foundations right, stay relevant, and be mindful of your impact. We hope this gives a great kick start to your career in 2018.

"Motivation is what gets you started. Habit is what keeps you going." ― Jim Ryun

Happy New Year! May the odds and the God(s) be in your favor this year as you build these resolutions into your daily routines and habits!
NeurIPS 2018: How machine learning experts can work with policymakers to make good tech decisions [Invited Talk]

Bhagyashree R
18 Dec 2018
6 min read
At the 32nd annual NeurIPS conference held earlier this month, Edward William Felten, a professor of computer science and public affairs at Princeton University, spoke about how decision makers and tech experts can work together to make better policies. The talk aimed to answer questions such as: why should public policy matter to AI researchers, what role can researchers play in policy debates, and how can researchers help bridge the divide between the research and policy communities?

While AI and machine learning are being used in high-impact areas and have seen heavy adoption across fields, in recent years they have also drawn a lot of attention from policymakers, mainly because of cases of failure and questions about how the technology is being used or misused. Policymakers have started formulating laws and regulations and holding discussions about how society will govern the development of these technologies. Prof. Felten explained how constructive engagement with policymakers will lead to better outcomes for technology, government, and society.

Why should tech be regulated?

Regulating tech is important, and for that, researchers, data scientists, and other people in tech fields have to close the gap between their research labs and cubicles and the rest of society. Prof. Felten emphasized that it is up to tech people to bridge this gap, as we have not only the opportunity but also a duty to participate more actively and productively in public life.

Many people are coming to the conclusion that tech should be regulated before it is too late. In a piece published by the Wall Street Journal, three experts debated whether the government should regulate AI. One of them, Ryan Calo, explains, "One of the ironies of artificial intelligence is that proponents often make two contradictory claims.
They say AI is going to change everything, but there should be no changes to the law or legal institutions in response." Prof. Felten points out that laws and policies are meant to change in order to adapt to current conditions. They are not written once and for all for the cases of today and the future; rather, law is a living system that adapts to what is going on in society. And if we believe that technology is going to change everything, we can expect that law will change too.

Prof. Felten also said that not only tech researchers and policymakers but society as a whole should have a say in how technology is developed: "After all the people who are affected by the change that we are going to cause deserve some say in how that change happens, how it is used. If we believe in a society which is fundamentally democratic in which everyone has a stake and everyone has a voice then it is only fair that those lives we are going to change have some say in how that change come about and what kind of changes are going to happen and which are not."

How experts can work with decision makers to make good tech decisions

There are three key approaches we can take to engage with policymakers in making decisions about technology.

1. Engage in a two-way dialogue with policymakers

As researchers, we might think that because we are tech experts or scientists, we do not need to get involved in politics; we just share the facts we know and our job is done. But if researchers really want to maximize their impact in policy debates, they need to combine their own knowledge and preferences with those of policymakers. This means taking into account what policymakers might already have heard about a particular subject and which issues or approaches resonate with them. Prof. Felten explains that this kind of understanding and exchange of ideas happens in two stages.
First, researchers need to ask policymakers several questions. This is not a one-time exchange but a multi-round protocol: they have to go back and forth, building engagement and mutual trust over time. Then they need to put themselves in the shoes of a decision maker and understand how to structure the decision space for them.

2. Be present in the room when decisions are being made

To have influence on the decisions that get made, researchers need to have "boots on the ground." Though not everyone has to engage in this deep, long-term process of decision making, we need some people from the community to engage on the community's behalf. Researchers need to be present in the room when decisions are being made, which means taking posts as advisers or civil servants. We already have a range of such posts at both local and national government levels, alongside a range of opportunities to engage less formally in policy development and consultations.

3. Create a career path that rewards policy engagement

To drive this engagement, we need to create a career path that rewards policy engagement, with ways for researchers to move between policy and research careers. Prof. Felten pointed to a range of US-based initiatives that seek to bring those with technical expertise into policy-oriented roles, such as the US Digital Service. He adds that if we do not create these career paths, and policy work becomes something people can do only by sacrificing their careers, then very few people will do it. This needs to be an activity that we learn to respect when people in the community do it well. We need to build incentives, whether career incentives in academia or an understanding that working in government or on policy issues is a valuable part of an academic career, not a detour or a stop.

To watch the full talk, check out the NeurIPS Facebook page.
- NeurIPS 2018: Rethinking transparency and accountability in machine learning
- NeurIPS 2018: Developments in machine learning through the lens of Counterfactual Inference [Tutorial]
- Accountability and algorithmic bias: Why diversity and inclusion matters [NeurIPS Invited Talk]
Deepfakes House Committee Hearing: Risks, Vulnerabilities and Recommendations

Vincy Davis
21 Jun 2019
16 min read
Last week, the House Intelligence Committee held a hearing to examine the public risks posed by "deepfake" videos. A deepfake is audio or video that has been altered by software and then passed off as true or original content. In this hearing, experts on AI and digital policy highlighted to the committee the risks deepfakes pose to national security, upcoming elections, public trust, and the mission of journalism. They also offered recommendations on what Congress could do to combat deepfakes and misinformation.

The chair of the committee, Adam B. Schiff, opened the hearing by stating that it is time to regulate deepfake technology, as it enables sinister forms of deception and disinformation by malicious actors. He added: "Advances in AI or machine learning have led to the emergence of advance digitally doctored type of media, the so-called deepfakes that enable malicious actors to foment chaos, division or crisis and have the capacity to disrupt entire campaigns including that for the Presidency."

For a quick glance, here's a TL;DR:

- Jack Clark believes that governments should be in the business of measuring and assessing deepfake threats by looking directly at the scientific literature and developing a base knowledge of it.
- David Doermann suggests that tools and processes that can identify fake content should be put in the hands of individuals, rather than relying completely on the government or on social media platforms to police content.
- Danielle Citron warns that the harms of deepfakes will be increasingly felt by women, minorities, and people from marginalized communities.
- Clint Watts provides a list of recommendations to prohibit U.S. officials, elected representatives, and agencies from creating and distributing false and manipulated content. A unified standard should be followed by all social media platforms.
Platforms should also be pressed to introduce a 10-15 second delay on all videos, so that they can decide whether to label a particular video. Regarding the 2020 Presidential election, state governments and social media companies should have a response plan ready in case a fake video surfaces to cause disruption. It was also recommended that the algorithms used to make deepfakes be open sourced, and that laws be altered and strict penalties applied to discourage deepfake videos.

Being forewarned is forearmed in the case of deepfake technology

Jack Clark, OpenAI Policy Director, highlighted in his testimony that he does not think AI is the cause of the disruption; rather, it is an "accelerant to an issue which has been with us for some time." He added that computer software aligned with AI technology has become significantly cheaper and more powerful due to its increased accessibility. This has led to its use in audio and video editing, which was previously very difficult. Similar technologies are being used for the production of synthetic media, and deepfake techniques are also being used in valuable scientific research.

Clark suggests that interventions should be made to avoid misuse. He believes that "it may be possible for large-scale technology platforms to try and develop and share tools for the detection of malicious synthetic media at both the individual account level and the platform level. We can also increase funding." He strongly believes that governments should be in the business of measuring and assessing these threats by looking directly at the scientific literature and developing a base knowledge. Clark concluded by saying that "being forewarned is forearmed here."

Make deepfake detector tools readily available

David Doermann, the former Project Manager at the Defense Advanced Research Projects Agency, notes that the phrase "seeing is believing" is no longer true.
He states that there is nothing fundamentally wrong or evil about the technology: like basic image and video desktop editors, deepfake generation is only a tool, and generative networks have many positive applications as well as negative ones. He adds that, as of today, there are some solutions that can identify deepfakes reliably. However, Doermann fears that it is only a matter of time before current detection capabilities are rendered less effective; as he puts it, "it's likely to get much worse before it gets much better."

Doermann suggests that tools and processes that can identify such fake content should be put in the hands of individuals, rather than relying completely on the government or on social media platforms to police content. At the same time, there should also be ways to verify content, prove its provenance, or easily report it. He hopes that automated detection tools will be developed in the future to help with filtering and detection at the front end of the distribution pipeline. He also adds that "appropriate warning labels should be provided, which suggests that this is not real or not authentic, or not what it's purported to be. This would be independent of whether this is done and the decisions are made, by humans, machines or a combination."

Groups most vulnerable to deepfake attacks

Women and minorities

Danielle Citron, a law professor at the University of Maryland, describes deepfakes as "particularly troubling when they're provocative and destructive." She adds that, as humans, we tend to believe what our eyes and ears tell us and to share information that confirms our biases. This is particularly true when the information is novel and negative: the more salacious it is, the more willing we are to pass it on. She also notes that deepfakes spread on social media networks that are ad-driven.
When all of this is put together, it turns out that the more provocative the deepfake is, the more virally it spreads. She also informed the committee about an incident involving an investigative journalist in India who, over a provocative article, had her posters circulated over the internet along with deepfake sex videos with her face morphed into pornography. Citron thus states that "the economic and the social and psychological harm is profound". Also, based on her work on cyber stalking, she believes that this phenomenon is going to be increasingly felt by women, minorities, and people from marginalized communities. She also shared other examples explaining the effect of deepfakes on trades and businesses. Citron also highlighted that "We need a combination of law, markets and really societal resilience to get through this, but the law has a modest role to play." She also mentioned that though there are laws under which one can sue for defamation, intentional infliction of emotional distress, and privacy torts, these procedures are quite expensive. She adds that criminal law offers very little opportunity for the public to push such cases forward.

National security

Clint Watts, a Senior Fellow at the Foreign Policy Research Institute, provided insight into how such technologies can affect national security. He says that "A.I. provides purveyors of disinformation to identify psychological vulnerabilities and to create modified content digital forgeries advancing false narratives against Americans and American interests." Watts suspects that Russia, "being an enduring purveyor of disinformation is and will continue to pursue the acquisition of synthetic media capability, and employ the output against adversaries around the world." He also adds that China, being the U.S.'s rival, will join Russia "to get vast amounts of information stolen from the U.S. The country has already shown a propensity to employ synthetic media in broadcast journalism.
They'll likely use it as part of disinformation campaigns to discredit foreign detractors, incite fear inside western-style democracies and then distort the reality of audiences and the audiences of America's allies." He also mentions that deepfake proliferation can present a danger to the American constituency by demoralizing it. Watts suspects that U.S. diplomats and military personnel deployed overseas will be prime targets for deepfake-driven disinformation planted by adversaries. Watts provided a list of recommendations which should be implemented to "prohibit U.S. officials, elected representatives and agencies from creating and distributing false and manipulated content":

- The U.S. government must be the sole purveyor of facts and truth to constituents, assuring the effective administration of democracy via productive policy debate from a shared basis of reality.
- Policy makers should work jointly with social media companies to develop standards for content and accountability.
- The U.S. government should partner with the private sector to implement digital verification designating the date, time and physical origination of content.
- Social media companies should start labeling videos and forward those labels across all platforms, so consumers can determine the source of the information and whether it's an authentic depiction of people and events.
- From a national security perspective, the U.S. government should maintain intelligence on the capabilities of adversaries to conduct such information operations.
- The departments of defense and state should immediately develop response plans for deepfake smear campaigns and mobilizations overseas, in an attempt to mitigate harm.

Lastly, he also added that public awareness of deepfakes and their signatures will assist in tamping down attempts to subvert U.S. democracy and incite violence.
Schiff asked the witnesses if it's "time to do away with the immunity that social media platforms enjoy". Watts replied in the affirmative and listed suggestions in three particular areas:

- If social media platforms see something spiking in terms of virality, it should be put in a queue for human review, linked to fact checkers, then down-rated and kept out of news feeds. The mainstream should also be helped to understand what manipulated content is.
- Anything related to outbreaks of violence and public safety should be regulated immediately.
- Anything related to elected officials or public institutions should immediately be flagged, pulled down and checked, and then given context.

Co-chair of the committee Devin Nunes asked Citron what kind of filters can be placed on these tech companies, as "it's not developed by partisan left wing like it is now, where most of the time, it's conservatives who get banned and not democrats". Citron suggested that proactive filtering won't be possible and hence companies should react responsibly and be bipartisan. She added, "but rather, is this a misrepresentation in a defamatory way, right, that we would say it's a falsehood that is harmful to reputation, that's an impersonation, then we should take it down. This is the default I am imagining."

How laws could be altered according to the changing times, to discourage deepfake videos

Citron says that laws could be altered, as in the case of Section 230(c). It states that "No speaker or publisher -- or no online service shall be treated as a speaker or publisher of someone else's content." This could be altered to "No online service that engages in reasonable content moderation practices shall be treated as a speaker or publisher of somebody else's content." Citron believes that avoiding a reasonability standard could lead to negligence of the law. She also adds that "I've been advising Twitter and Facebook all of the time.
There is meaningful reasonable practices that are emerging and have emerged in the last ten years. We already have a guide, it's not as if this is a new issue in 2019. So we can come up with reasonable practices." Watts also added that if an adversary from a big country like China, Iran or Russia makes a deepfake video to push the U.S. downwards, we can trace them back if we have aggressive laws at hand. He says anything from "arrest and extradition, if the sanction permits", to an individual or cyber response, could help discourage deepfakes.

How to slow down the spread of videos

One of the reasons these types of manipulated images gain traction is that their spread is almost instantaneous: they can be shared around the world, across platforms, in a few seconds. Doermann says that these social media platforms must be pressured to build in a 10-15 second delay, so that it can be decided whether or not to label a particular video. He adds, "We've done it for child pornography, we've done it for human trafficking, they're serious about those things. This is another area that's a little bit more in the middle, but I think they can take the same effort in these areas to do that type of triage." This delay will allow third parties or fact checkers to decide on the authenticity of videos and label them. Citron adds that this is where labelling a particular video can help: "I think it is incredibly important and there are times in which, that's the perfect rather than second best, and we should err on the side of inclusion and label it as synthetic." The representative of Ohio, Brad Wenstrup, added that we can have international extradition laws which can punish somebody when "something comes from some other country, maybe even a friendly country, that defames and hurts someone here".
There should be an agreement among nations that "we'll extradite those people and they can be punished in your country for what they did to one of your citizens." Terri Sewell, the Representative of Alabama, further probed about the current scenario of detecting fake videos, to which Doermann replied that we currently have solutions that can detect fake videos, though with a constant delay of 15-20 minutes.

Deepfakes and the 2020 Presidential elections

Watts says that he's concerned about deepfakes acting on the eve of election day 2020. Foreign adversaries may use a standard disinformation approach by "using an organic content that suits their narrative and inject it back." This can escalate as more people make deepfakes each year. He also added, "Right now I would be very worried about someone making a fake video about electoral systems being out or broken down on election day 2020." So state governments and social media companies should be ready with a response plan in the wake of such an event. Sewell then asked the witnesses for suggestions to political parties and candidates so that they are prepared for the possibility of deepfake content. Watts replied that the most important counter to fake content would be a unified standard that all social media companies follow. He added that "if you're a manipulator, domestic or international, and you're making deep fakes, you're going to go to whatever platform allows you to post anything from inauthentic accounts. they go to wherever the weak point is and it spreads throughout the system." He believes that such a system would help counter extremism, disinformation and political smear campaigns.
Watts added that any lag in responding to such videos should be avoided, as "any sort of lag in terms of response allows that conspiracy to grow." Citron also pointed out that all candidates should have a clear policy about deepfakes and should commit that they won't use them or spread them.

Should the algorithms to make deepfakes be open sourced?

Doermann answered that the algorithms behind deepfakes absolutely have to be open sourced. He says that though this might help adversaries, they are going to learn about the technology anyway. He believes openness is significant: "We need to get this type of stuff out there. We need to get it into the hands of users. There are companies out there that are starting to make these types of things." He also states that people should be able to use this technology. The more we educate them and the more tools they learn, the better the choices people can make.

On Mark Zuckerberg's deepfake video

On being asked to comment on Mark Zuckerberg's decision not to take down his deepfake video from his own platform, Facebook, Citron replied that Mark gave a perfect example of "satire and parody" by not taking down the video. She added that private companies can make these kinds of choices, as they have an incredible amount of power without any liability: "it seemed to be a conversation about the choices they make and what does that mean for society. So it was incredibly productive, I think." Watts also opined that he likes Facebook for its consistency in terms of enforcement and that it is always trying to learn and implement better things. He adds that he really likes that Facebook is always ready to hear "from legislatures about what falls inside those parameters.
The one thing that I really like is that they're identifying inauthentic account creation and inauthentic content generation, they are enforcing it, they have increased the scale, and it is very, very good in terms of how they have scaled it up. It's not perfect, but it is better."

Read More: Zuckerberg just became the target of the world's first high profile white hat deepfake op. Can Facebook come out unscathed?

On the Nancy Pelosi doctored video

Schiff asked the witnesses if there is any account of the millions of people who watched the doctored video of Nancy Pelosi, and of how many of them ultimately got to know that it was not a real video. He said he was asking this because, according to psychologists, people never really forget a once-constructed negative impression. Clark replied that "Fact checks and clarifications tend not to travel nearly as far as the initial news." He added that it becomes a very general thing: "If you care, you care about clarifications and fact checks. but if you're just enjoying media, you're enjoying media. You enjoy the experience of the media and the absolute minority doesn't care whether it's true." Schiff also recalled how in 2016 "some foreign actors, particularly Russia, had mimicked Black Lives Matter to push out content to racially divide people." Such videos gave the impression of police violence on people of color. They "certainly push out videos that are enormously jarring and disruptive." All the information revealed in the hearing was described as "scary and worrying" by one of the representatives. The hearing was ended by Schiff, the chair of the committee, after thanking all the witnesses for their testimonies and recommendations. For more details, head over to the full Hearing on deepfake videos by the House Intelligence Committee. Worried about Deepfakes?
Check out the new algorithm that manipulates talking-head videos by altering the transcripts
Lawmakers introduce new Consumer privacy bill and Malicious Deep Fake Prohibition Act to support consumer privacy and battle deepfakes
Machine generated videos like Deepfakes – Trick or Treat?
Aaron Lazar
24 Nov 2017
10 min read

A mid-autumn Shopper’s dream - What an Amazon fulfilled Thanksgiving would look like

I’d been preparing for Thanksgiving a good 3 weeks in advance. One reason is that I’d recently rented out a new apartment and the troops were heading over to my place this year. I obviously had to make sure everything went well and for that, trust me, there was no resting even for a minute! Thanksgiving is really about being thankful for the people and things in your life and spending quality time with family. This Thanksgiving I’m especially grateful to Amazon for making it the best experience ever! Read on to find out how Amazon made things awesome! Good times started two weeks ago when I was at the AmazonGo store with my friend, Sue. [embed]https://www.youtube.com/watch?v=NrmMk1Myrxc[/embed] In fact, this was the first time I had set foot in one of the stores. I wanted to see what was so cool about them and why everyone had been talking about them for so long! The store was pretty big and lived up to the A to Z concept, as far as I could see. The only odd thing was that I didn’t notice any queues or a billing counter. Sue glided around the floor with ease, as if she did this every day. I was more interested in seeing what was so special about this place. After she got her stuff, she headed straight for the door. I smiled to myself thinking how absent minded she was. So I called her back and reminded her “You haven’t gotten your products billed.” She smiled back at me and shrugged, “I don’t need to.” Before I could open my mouth to tell her off for stealing, she explained to me about the store. It’s something totally futuristic! Have you ever imagined not having to stand in a line to buy groceries? At the store, you just had to log in to your AmazonGo app on your phone, enter the store, grab your stuff and then leave. The sensors installed everywhere in the store automatically detected what you’d picked up and would bill you accordingly. They also used Computer Vision and Deep Learning to track people and their shopping carts. Now that’s something! 
And you even got a receipt! Well, it was my birthday last week and knowing what an avid reader I was, my colleagues from office gifted me a brand new Kindle. I loved every bit of it, but the best part was the X-Ray feature. With X-Ray, you could simply get information about a character, person or term in a book. You could also scroll through the long lists of excerpts and click on one to go directly to that particular portion of the book! That's really amazing, especially if you want to read a particular part of the book quickly. It came in handy at just the right time - I downloaded a load of recipe books for the turkey. Another feather in the cap for Amazon! Talking about feathers in one's cap, you won't believe it, but Amazon actually got me rekognised at work a few days ago. Nah, that wasn't a typo. I worked as a software developer/ML engineer in a startup and I'd been doing this for as long as I can remember. I recently built this cool mobile application that recognized faces and unlocked your phone even when you didn't have something like Face ID on your phone, and the app had gotten us a million downloads in a month! It could also recognize and give you information about the ethnicity of a person if you captured their photograph with the phone's camera. The trick was that I'd used the AmazonRekognition APIs for enhanced face detection in the application. Rekognition allows you to detect objects, scenes, text, and faces, using highly scalable, deep learning models. I also enhanced the application using the Polly API. Polly converts text to speech in whichever language you want and gives you the synthesized speech in the form of audio files. The app I built now converted input text into 18 different languages, helping one converse with the person in front of them in that particular language, should they have a problem doing it in English. I got that long awaited promotion right after! Ever wondered how I got the new apartment?
;) Since the folks were coming over to my place in a few days, I thought I'd get a new dinner set. You'd probably think I would need to sit down at my PC or pick up my phone to search for a set online, but I had better things to do. Thanks to Alexa, I simply needed to ask her to find one for me and she did it brilliantly. Now Alexa isn't my girlfriend, although I would have loved that. Alexa is actually Amazon's cloud-based voice service that provides customers with an engaging way of interacting with technology. Alexa is blessed with finely tuned ASR, or Automatic Speech Recognition, and NLU, or Natural Language Understanding, engines that instantly recognize and respond to voice requests. I selected a pretty looking set and instantly bought it through my Prime account. With technology like this at my fingertips, the developer in me lost no time exploring the possibilities with Alexa. That's when I found out about Lex, built on the same deep learning platform as Alexa, which allows developers to build conversational interfaces into their apps. With the dinner set out of the way, I sat back with my feet up on the table. Life was awesome, baby! Oh crap! I forgot to buy the turkey, the potatoes, the wine and a whole load of other stuff. It was 3 AM and I started panicking. I remembered that mum always put the turkey in the fridge at least 3 days in advance. I had only 2! I didn't even have the time to make it to the AmazonGo store. I was panicking again and called up Suzy to ask her if she could pick up the stuff for me. She sounded so calm over the phone when I narrated my horror to her. She simply told me to get the stuff from AmazonFresh. So I hastily disconnected the call and almost screamed to Alexa, "Alexa, find me a big bird!", and before I realized what I had said, I was presented with this.
Big Bird is one of the main protagonists in Sesame Street.

So I tried again, this time specifying what I actually needed! With AmazonDash integrating with AmazonFresh, I was able to get the turkey and other groceries delivered home in no time! What a blessing, indeed! A day before Thanksgiving, I was stuck in the office, working late on a new project. We usually tinkered around with a lot of ML and AI stuff. There was this project which needed the team to come up with a really innovative algorithm to perform a deep learning task. As the project lead, I was responsible for choosing the tech stack, and I'm glad a little birdie had recently told me about AWS officially adopting MXNet as a deep learning framework. MXNet made it a breeze to build ML applications that train quickly and could run anywhere. Moreover, with the recent collaboration between Amazon and Microsoft, a new ML library called Gluon was born. Available in MXNet, Gluon made building ML models even easier and quicker, without compromising on performance. Need I say the project was successful? I got home that evening and sat down to pick a good flick or two to download from Amazon PrimeVideo. There's always someone in the family who'd suggest we all watch a movie and I had to be prepared. With that done I quickly showered and got to bed. It was going to be a long day the next day! 4 AM my alarm rang and I was up! It was Thanksgiving, and what a wonderful day it was! I quickly got ready and prepared to start cooking. I got the bird out of the freezer and started to thaw it in cold water. It was a big bird so it was going to take some time. In the meantime, I cleaned up the house and then started working on the dressing. Apples, sausages, and cranberry. Yum! As I sliced up the sausages I realized that I had misjudged the quantity. I needed to get a couple more packets immediately!
I had to run to the grocery store right away or there would be a disaster! But it took me a few minutes to remember it was Thanksgiving, one of the craziest days to get out on the road. I could call the store delivery guy or probably Amazon Dash, but that would be illogical 'cos he'd have to take the same congested roads to get home. I turned to Alexa for help: "Alexa, how do I get sausages delivered home in the next 30 minutes?". And there I got my answer - try Amazon PrimeAir. Now I don't know about you, but having a drone deliver a couple packs of sausages to my house is nothing less than ecstatic! I sat by the window for the next 20 minutes, praying that the package wouldn't be intercepted by some hungry birds! I couldn't miss the sight of the pork flying towards my apartment. With the dressing and turkey baked and ready, things were shaping up much better than I had expected. The folks started rolling in by lunchtime. Mum and dad were both quite impressed with the way I had organized things. I was beaming and in my mind hi-fived Amazon for helping me make everything possible with its amazing products and services designed to delight customers. It truly lives up to its slogan: Work hard. Have fun. Make history. If you are one of those folks who do this every day, behind the scenes, by building amazing products powered by machine learning and big data to make others' lives better, I want to thank you today for all your hard work. This Thanksgiving weekend, Packt's offering an unbelievable deal - buy any book or video for just $10 or any three for $25! I know what I have my eyes on!

- Python Machine Learning - Second Edition by Sebastian Raschka and Vahid Mirjalili
- Effective Amazon Machine Learning by Alexis Perrier
- OpenCV 3 - Advanced Image Detection and Reconstruction [Video] by Prof. Robert Laganiere

In the end, there's nothing better than spending quality time with your family, enjoying a sumptuous meal, watching smiles all around and just being thankful for all you have. All I could say was, this Thanksgiving was truly Amazon fulfilled! :) Happy Thanksgiving folks!
Savia Lobo
24 Feb 2018
6 min read

FAT* 2018 Conference Session 5 Summary on FAT Recommenders, Etc.

This session of FAT* 2018 is about Recommenders, etc. Recommender systems are algorithmic tools for identifying items of interest to users. They are usually deployed to help mitigate information overload: Internet-scale item spaces offer many more choices than humans can process, diminishing the quality of their decision making. Recommender systems alleviate this problem by allowing users to more quickly focus on items likely to match their particular tastes. They are deployed across the modern Internet, suggesting products in e-commerce sites, movies and music in streaming media platforms, new connections on social networks, and many more types of items. This session explains what Fairness, Accountability, and Transparency mean in the context of recommendation. The session also includes a paper on predictive policing, which is defined as: 'Given historical crime incident data for a collection of regions, decide how to allocate patrol officers to areas to detect crime.' The Conference on Fairness, Accountability, and Transparency (FAT*), held on the 23rd and 24th of February 2018, is a multi-disciplinary conference that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems. The FAT* 2018 conference will witness 17 research papers, 6 tutorials, and 2 keynote presentations from leading experts in the field. This article covers the research papers from the 5th session, which is dedicated to FAT Recommenders, etc.

Paper 1: Runaway Feedback Loops in Predictive Policing

Predictive policing systems are increasingly being used to determine how to allocate police across a city in order to best prevent crime. To update the model, discovered crime data (e.g., arrest counts) are used.
Such systems have been empirically shown to be susceptible to runaway feedback loops, where police are repeatedly sent back to the same neighborhoods regardless of the true crime rate. In response, the authors have developed a mathematical model of predictive policing that proves why this feedback loop occurs. The paper also empirically shows how this model exhibits such problems, and demonstrates ways to change the inputs to a predictive policing system (in a black-box manner) so the runaway feedback loop does not occur, allowing the true crime rate to be learned.

Key takeaways:

- The results stated in the paper establish a link between the degree to which runaway feedback causes problems and the disparity in crime rates between areas.
- The paper also demonstrates ways in which reported incidents of crime (reported by residents) and discovered incidents of crime (directly observed by police officers dispatched as a result of the predictive policing algorithm) interact.
- The authors have used the theory of urns (a common framework in reinforcement learning) to analyze existing methods for predictive policing. There are formal as well as empirical results which show why these methods will not work.
- Subsequently, the authors have also provided remedies that can be used directly with these methods in a black-box fashion to improve their behavior, and provide theoretical justification for these remedies.

Paper 2: All The Cool Kids, How Do They Fit In? Popularity and Demographic Biases in Recommender Evaluation and Effectiveness

There have been many advances in information retrieval evaluation, which demonstrate the importance of considering the distribution of effectiveness across diverse groups of varying sizes.
This paper addresses the question: 'do users of different ages or genders obtain similar utility from the system, particularly if their group is a relatively small subset of the user base?' The authors have applied this consideration to recommender systems, using offline evaluation and a utility-based metric of recommendation effectiveness to explore whether different user demographic groups experience similar recommendation accuracy. The paper shows that there are demographic differences in measured recommender effectiveness across two data sets containing different types of feedback in different domains; these differences sometimes, but not always, correlate with the size of the user group in question. Demographic effects also have a complex, and likely detrimental, interaction with popularity bias, a known deficiency of recommender evaluation.

Key takeaways:

- The paper presents an empirical analysis of the effectiveness of collaborative filtering recommendation strategies, stratified by the gender and age of the users in the data set.
- The authors applied widely-used recommendation techniques across two domains, musical artists and movies, using publicly-available data.
- The paper examines whether recommender systems produce equal utility for users of different demographic groups. Using publicly available datasets, the authors compared the utility, as measured with nDCG, for users grouped by age and gender.
- Regardless of the recommender strategy considered, they found significant differences in nDCG among demographic groups.

Paper 3: Recommendation Independence

In this paper the authors showcase new methods that can deal with the variance of recommendation outcomes without increasing computational complexity. These methods can more strictly remove sensitive information, and experimental results demonstrate that the new algorithms can more effectively eliminate the factors that undermine fairness.
Additionally, the paper explores potential applications for independence-enhanced recommendation, and discusses its relation to other concepts, such as recommendation diversity.

Key takeaways:

- The authors have developed new independence-enhanced recommendation models that can deal with the second moment of distributions without sacrificing computational efficiency.
- The paper also explores applications in which recommendation independence would be useful, and reveals the relation of independence to other concepts in recommendation research.
- It also presents the concept of recommendation independence, and discusses how the concept would be useful for solving real-world problems.

Paper 4: Balanced Neighborhoods for Multi-sided Fairness in Recommendation

In this paper, the authors examine two different cases of fairness-aware recommender systems: consumer-centered and provider-centered. The paper explores the concept of a balanced neighborhood as a mechanism to preserve personalization in recommendation while enhancing the fairness of recommendation outcomes. It shows that a modified version of the Sparse Linear Method (SLIM) can be used to improve the balance of user and item neighborhoods, with the result of achieving greater outcome fairness in real-world datasets with minimal loss in ranking performance.

Key takeaways:

- The authors examine applications in which fairness with respect to both consumers and item providers is important. They have shown that variants of the well-known sparse linear method (SLIM) can be used to negotiate the tradeoff between fairness and accuracy.
- The paper also introduces the concept of multisided fairness, relevant in multisided platforms that serve a matchmaking function.
- It demonstrates that the concept of balanced neighborhoods, in conjunction with the well-known sparse linear method, can be used to balance personalization with fairness considerations.
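The per-group evaluation described in Paper 2 can be sketched concretely: compute nDCG over each user's recommendation list, then average within demographic groups. The users, groups, and relevance scores below are made-up illustration data, not the paper's datasets:

```javascript
// Sketch of per-demographic-group recommender evaluation with nDCG.

function dcg(relevances) {
  // Discounted cumulative gain of a ranked list of relevance scores.
  return relevances.reduce((sum, rel, rank) => sum + rel / Math.log2(rank + 2), 0);
}

function ndcg(relevances) {
  // Normalized DCG: the actual ranking compared to the ideal (sorted) ranking.
  const ideal = dcg([...relevances].sort((a, b) => b - a));
  return ideal > 0 ? dcg(relevances) / ideal : 0;
}

function ndcgByGroup(userRelevances, userGroup) {
  // Average nDCG per demographic group.
  const totals = {}, counts = {};
  for (const [user, rels] of Object.entries(userRelevances)) {
    const g = userGroup[user];
    totals[g] = (totals[g] || 0) + ndcg(rels);
    counts[g] = (counts[g] || 0) + 1;
  }
  const result = {};
  for (const g of Object.keys(totals)) result[g] = totals[g] / counts[g];
  return result;
}

// Hypothetical relevance judgments for each user's top-4 recommendations.
const userRelevances = { u1: [3, 2, 0, 1], u2: [0, 3, 2, 2], u3: [1, 0, 0, 3], u4: [2, 2, 1, 0] };
const userGroup = { u1: 'F', u2: 'F', u3: 'M', u4: 'M' };

console.log(ndcgByGroup(userRelevances, userGroup));
```

A gap between the per-group averages is the kind of demographic difference the paper measures, with the caveat that popularity bias can confound such offline comparisons.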
If you've missed our summaries of the previous sessions, visit the article links to catch up.

- Session 1: Online Discrimination and Privacy
- Session 2: Interpretability and Explainability
- Session 3: Fairness in Computer Vision and NLP
- Session 4: Fair Classification
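Looking back at Paper 1, the urn-style reinforcement the authors analyze can be made concrete with a small simulation. Everything below (the allocation rule, the crime rates, and the PRNG) is invented for illustration and is not the paper's exact model:

```javascript
// Toy illustration of a runaway feedback loop, in the spirit of a Polya urn.

function mulberry32(seed) {
  // Small deterministic PRNG so runs are reproducible.
  return function () {
    let t = (seed += 0x6d2b79f5);
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function simulate(trueRates, steps, seed) {
  const rand = mulberry32(seed);
  const discovered = trueRates.map(() => 1); // the "urn" starts with one ball per region
  for (let i = 0; i < steps; i++) {
    // Send the patrol to a region with probability proportional to the
    // incidents discovered there so far.
    const total = discovered.reduce((a, b) => a + b, 0);
    let r = rand() * total;
    let region = 0;
    while (region < discovered.length - 1 && r >= discovered[region]) {
      r -= discovered[region];
      region++;
    }
    // The patrol discovers a new incident at that region's true crime rate,
    // reinforcing future allocation to the same region.
    if (rand() < trueRates[region]) discovered[region] += 1;
  }
  return discovered;
}

// Two regions with nearly equal true crime rates.
const counts = simulate([0.30, 0.29], 5000, 42);
const total = counts.reduce((a, b) => a + b, 0);
// The limiting shares are random rather than 50/50: reinforcement lets one
// region absorb a disproportionate share of patrols.
console.log(counts.map(c => (c / total).toFixed(2)));
```

Because discovered incidents both drive and result from allocation, nearly identical regions can end up with very different patrol shares, which is the feedback pathology the paper's black-box input adjustments are designed to break.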
Surajit Basak
27 May 2023
2 min read

How to add code block support on CK Editor in Vue 3 Project

Exercitationem voluptatibus saepe veritatis quibusdam similique. Sed consequatur dolores sunt fuga et. Voluptas rerum reiciendis rerum velit et eos. Ut ut ex sunt consectetur rem cum. Quia ut veritatis minus ad. Aliquid suscipit dicta consequatur est sunt beatae. Quas vel unde dolorem maiores non reiciendis. Tempora laborum et necessitatibus suscipit error repellat. Doloremque quibusdam et nisi excepturi dolorum quia eveniet. Voluptas sed quibusdam numquam non sunt consequatur. Iste et illum provident modi aut qui. Et facere iusto ut earum repudiandae. Modi voluptatibus doloribus eaque iusto quos aspernatur. In officia eum et dolor. Atque minima odit harum omnis quos provident. Sequi deleniti id saepe quam iusto est omnis.

Quote Block Style

"An object at rest remains at rest, and an object in motion remains in motion at constant speed and in a straight line unless acted on by an unbalanced force." - By Isaac Newton

"The acceleration of an object depends on the mass of the object and the amount of force applied." - By Isaac Newton

"Whenever one object exerts a force on another object, the second object exerts an equal and opposite force on the first." - By Isaac Newton

Testing for image block using file uploader:

Testing for image that was uploaded using url (Centered image):

Slide image:

How can I add code block support to CKEditor 5 in a Vue 3 project?

First, install the CKEditor code block package:

```shell
npm i @ckeditor/ckeditor5-code-block
```

When the install is finished, import the plugin:

```javascript
import { CodeBlock } from '@ckeditor/ckeditor5-code-block';
```

Then add the code block plugin and its toolbar button to your CKEditor config:

```javascript
editorConfig: {
    plugins: [ CodeBlock ],
    toolbar: {
        items: [ 'codeBlock' ]
    },
    codeBlock: {
        languages: [
            { language: 'plaintext', label: 'Plain text' }, // The default language.
            { language: 'c', label: 'C' },
            { language: 'cs', label: 'C#' },
            { language: 'cpp', label: 'C++' },
            { language: 'css', label: 'CSS' },
            { language: 'diff', label: 'Diff' },
            { language: 'html', label: 'HTML' },
            { language: 'java', label: 'Java' },
            { language: 'javascript', label: 'JavaScript' },
            { language: 'php', label: 'PHP' },
            { language: 'python', label: 'Python' },
            { language: 'ruby', label: 'Ruby' },
            { language: 'typescript', label: 'TypeScript' },
            { language: 'xml', label: 'XML' }
        ]
    }
}
```

Here is the demo PHP code:

```php
<?php

class MySpecialClass
{
    // Constructor property promotion (PHP 8+) so the properties used below exist.
    public function __construct(
        private User $user,
        private Service $service,
        private Product $product
    ) {}

    public function authenticate()
    {
        $this->user->login();
    }

    public function process()
    {
        $this->product->generateUUID();

        try {
            return $this->service->validate($this->product)
                ->deliver();
        } catch (MySpecialErrorHandler $e) {
            $e->throwError();
        }
    }
}
```


Hitting the right notes in 2017: AI in a song for Data Scientists

Aarthi Kumaraswamy
26 Dec 2017
3 min read
A lot, I mean lots and lots, of great articles have already been written about AI’s epic journey in 2017. They all generally agree that 2017 set the stage for AI in very real terms. We saw immense progress in academia, research, and industry in terms of an explosion of new ideas (like CapsNets), questioning of established ideas (like backprop and AI black boxes), new methods (AlphaZero’s self-learning), tools (PyTorch, Gluon, AWS SageMaker), and hardware (quantum computers, AI chips). New and existing players geared up to tap into this phenomenon, even as they struggled to draw from the limited talent pool at various conferences and other community hangouts. While we have accelerated the pace of testing and deploying some of those ideas in the real world with self-driving cars, in media and entertainment, and elsewhere, progress in building a supportive and sustainable ecosystem has been slow. We also saw conversations on AI ethics, transparency, interpretability, and fairness go mainstream, alongside broader contexts such as national policies and corporate cultural reformation setting the tone of those conversations. While anxiety over losing jobs to robots kept reaching new heights proportional to the cryptocurrency hype, we saw humanoids gain citizenship, residency, and even talk of contesting an election! It has been nothing short of the stuff legendary tales are made of: struggle, confusion, magic, awe, love, fear, disgust, inspiring heroes, powerful villains, misunderstood monsters, inner demons, and guardian angels. And stories worth telling must have songs written about them! Here’s our ode to AI highlights in 2017, paying homage to an all-time favorite: ‘A few of my favorite things’ from The Sound of Music. Next year, our AI friends will probably join us behind the scenes in the making of another homage to the extraordinary advances in data science, machine learning, and AI.
[box type="shadow" align="" class="" width=""]
Stripes on horses and horsetails on zebras
Bright funny faces in bowls full of rameN
Brown furry bears rolled into pandAs
These are a few of my favorite thinGs

TensorFlow projects and crisp algo models
Libratus’ poker faces, AlphaGo Zero’s gaming caboodles
Cars that drive and drones that fly with the moon on their wings
These are a few of my favorite things

Interpreting AI black boxes, using Python hashes
Kaggle frenemies and the ones from ML MOOC classes
R white spaces that melt into strings
These are a few of my favorite things

When models don’t converge, and networks just forget
When I am sad I simply remember my favorite things
And then I don’t feel so bad
[/box]

PS: We had to leave out many other significant developments in the above cover as we are limited in our creative repertoire. We invite you to join in and help us write an extended version together! The idea is to make learning about data science easy, accessible, fun and memorable!


New test title

661
23 May 2023
2 min read
Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.

```java
import java.util.Scanner;

public class HelloWorld {

    public static void main(String[] args) {
        // Creates a reader instance which takes
        // input from standard input - keyboard
        Scanner reader = new Scanner(System.in);
        System.out.print("Enter a number: ");

        // nextInt() reads the next integer from the keyboard
        int number = reader.nextInt();

        // println() prints the following line to the output screen
        System.out.println("You entered: " + number);
    }
}
```

```python
# Python program to check if the input number is odd or even.
# A number is even if division by 2 gives a remainder of 0.
# If the remainder is 1, it is an odd number.

num = int(input("Enter a number: "))
if (num % 2) == 0:
    print("{0} is Even".format(num))
else:
    print("{0} is Odd".format(num))
```

test by lalith

Lalith
09 Oct 2024
1 min read
test content


Interact with ChatGPT API using Open AI Python Library

430
23 May 2023
8 min read
How to use the OpenAI Python library to interact with the ChatGPT API

Using the ChatGPT API with Python is a relatively simple process. You'll first need to make sure you create a new PyCharm project called ChatGPTResponse, as shown in the following screenshot:

Fig 1: New Project Window in PyCharm

Once you have that set up, you can use the OpenAI Python library to interact with the ChatGPT API. Open a new Terminal in PyCharm, make sure that you are in your project folder, and install the openai package:

```shell
$ pip install openai
```

Next, you need to create a new Python file in your PyCharm project: in the top-left corner, right-click on the folder ChatGPTResponse | New | Python File. Name the file app.py and hit Enter. You should now have a new Python file in your project directory:

Fig 2: New Python File

To get started, you'll need to import the openai library into your Python file. You'll also need to provide your OpenAI API key. You can obtain an API key from the OpenAI website by following the steps outlined in the previous sections of this book, and then set it as a parameter in your Python code. Once your API key is set up, you can start interacting with the ChatGPT API:

```python
import openai

openai.api_key = "YOUR_API_KEY"
```

Replace YOUR_API_KEY with the API key you obtained from the OpenAI platform page. Now, you can ask the user for a question using the input() function:

```python
question = input("What would you like to ask ChatGPT? ")
```

The input() function is used to prompt the user to input a question they would like to ask the ChatGPT API. The function takes a string as an argument, which is displayed to the user when the program is run. In this case, the prompt string is "What would you like to ask ChatGPT? ". When the user types their question and presses Enter, the input() function returns the string that the user typed. This string is then assigned to the variable question.

To pass the user's question from your Python script to ChatGPT, you will need to use the ChatGPT API Completion function:

```python
response = openai.Completion.create(
    engine="text-davinci-003",
    prompt=question,
    max_tokens=1024,
    n=1,
    stop=None,
    temperature=0.8,
)
```

The openai.Completion.create() function in the code is used to send a request to the ChatGPT API to generate a completion of the user's input prompt. The engine parameter specifies the ChatGPT engine to use for the request; in this case, it is set to "text-davinci-003". The prompt parameter specifies the text prompt for the API to complete, which is the user's input question in this case. The max_tokens parameter specifies the maximum number of tokens the response should contain. The n parameter specifies the number of completions to generate for the prompt. The stop parameter specifies the sequence where the API should stop generating the response. The temperature parameter controls the creativity of the generated response; it ranges from 0 to 1. Higher values will result in more creative but potentially less coherent responses, while lower values will result in more predictable but potentially less interesting responses. Later in the book, we will delve into how these parameters impact the responses received from ChatGPT.

The function returns a JSON object containing the generated response from the ChatGPT API, which can then be accessed and printed to the console in the next line of code:

```python
print(response)
```

In the project pane on the left-hand side of the screen, locate the Python file you want to run. Right-click on the app.py file and select Run app.py from the context menu. You should receive a message in the run window that asks you to write a question to ChatGPT:

Fig 3: Run window

Once you have entered your question, press the Enter key to submit your request to the ChatGPT API. The response generated by the ChatGPT API model will be displayed in the run window as a complete JSON object:

```json
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\n\n1. Start by getting in the water. If you're swimming in a pool, you can enter the water from the side, …"
    }
  ],
  "created": 1681010983,
  "id": "cmpl-73G2JJCyBTfwCdIyZ7v5CTjxMiS6W",
  "model": "text-davinci-003",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 415,
    "prompt_tokens": 4,
    "total_tokens": 419
  }
}
```

This JSON response produced by the OpenAI API contains information about the response generated by the GPT-3 model. The response consists of the following fields:

- The choices field contains an array of objects with the generated responses, which in this case contains only one response object. The text field within the response object contains the actual response generated by the GPT-3 model.
- The finish_reason field indicates the reason why the response ended; in this case, it was because the model reached the stop condition specified in the API request.
- The created field specifies the Unix timestamp of when the response was created.
- The id field is a unique identifier for the API request that generated this response.
- The model field specifies the GPT-3 model that was used to generate the response.
- The object field specifies the type of object that was returned, which in this case is text_completion.
- The usage field provides information about the resource usage of the API request: the number of tokens used for the completion, the number of tokens in the prompt, and the total number of tokens used.

The most important field in the response is the text field, which contains the answer to the question asked of the ChatGPT API. This is why most API users want to access only that field from the JSON object. You can easily separate the text from the main body as follows:

```python
answer = response["choices"][0]["text"]
print(answer)
```

By following this approach, you can guarantee that the variable answer will hold the complete ChatGPT API text response, which you can then print to verify. Keep in mind that ChatGPT responses can differ significantly depending on the input, making each response unique.

OpenAI: 1. Start by getting in the water. If you're swimming in a pool, you can enter the water from the side, ladder, or diving board. If you are swimming in the ocean or lake, you can enter the water from the shore or a dock. 2. Take a deep breath in and then exhale slowly. This will help you relax and prepare for swimming.

Summary

In this tutorial, you implemented a simple ChatGPT API response by sending a request to generate a completion of a user's input prompt/question. You have also learned how to set up your API key, how to prompt the user to input a question, and how to access the generated response from ChatGPT in the form of a JSON object containing information about the response.

About the Author

Martin Yanev is an experienced software engineer who has worked in the aerospace and medical industries for over 8 years. He specializes in developing and integrating software solutions for air traffic control and chromatography systems. Martin is a well-respected instructor with over 280,000 students worldwide, and he is skilled in using frameworks like Flask, Django, Pytest, and TensorFlow. He is an expert in building, training, and fine-tuning AI systems with the full range of OpenAI APIs. Martin has dual master's degrees in Aerospace Systems and Software Engineering, which demonstrates his commitment to both practical and theoretical aspects of the industry.

https://www.linkedin.com/in/martinyanev/
https://www.udemy.com/user/martin-yanev-3/
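The field access described in the tutorial can be rehearsed offline with a hand-built dictionary that mirrors the JSON structure shown above. This is a minimal sketch, not a live API call: the values are copied from the example response, and the truncated text field is shortened to its first sentence.

```python
# Hypothetical response dictionary mirroring the JSON structure that
# openai.Completion.create() returns; no API call is made here, so the
# values are simply the ones shown in the tutorial's example output.
response = {
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "logprobs": None,
            "text": "\n\n1. Start by getting in the water.",
        }
    ],
    "created": 1681010983,
    "id": "cmpl-73G2JJCyBTfwCdIyZ7v5CTjxMiS6W",
    "model": "text-davinci-003",
    "object": "text_completion",
    "usage": {"completion_tokens": 415, "prompt_tokens": 4, "total_tokens": 419},
}

# Pull out the generated text, as the tutorial does, stripping the
# leading newlines the model tends to emit...
answer = response["choices"][0]["text"].strip()

# ...and the token accounting from the usage field.
total_tokens = response["usage"]["total_tokens"]

print(answer)        # → 1. Start by getting in the water.
print(total_tokens)  # → 419
```

Reading the usage field alongside the text is a cheap way to keep an eye on token spend per request, since billing is based on total_tokens.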