
Tech Guides

851 Articles

My Experience with KeystoneJS

Jake Stockwin
16 Sep 2016
5 min read
Why KeystoneJS?

Learning a new language can be a daunting task for web developers, but there comes a time when you have to bite the bullet and do it. I'm a novice programmer, with my experience mostly limited to a couple of PHP websites using MySQL. I had a new project on the table and decided I wanted to learn Node.

Any self-respecting Node beginner has written the very basic "Hello World" Node server:

var http = require('http');

var server = http.createServer(function(req, res) {
    res.writeHead(200, {"Content-Type": "text/plain"});
    res.end("Hello World");
});

server.listen(80);

Run node server.js and open localhost:80 in your web browser, and there it is. Great! It works, so maybe this isn't going to be so painful after all. Time to start writing the website!

I quickly figure out that there is quite a jump between outputting "Hello World" and writing a fully functioning website. Some more research points me to the express package, which I install and learn how to use. However, eventually I have quite the list of packages to install, and all of these need configuring in the correct way to interact with each other. At this stage, everything is starting to get a little too complicated, and my small project seems like it's going to take many hours of work to get to the final website. Maybe I should just write it in PHP, since I at least know how to use it.

Luckily, I was pointed toward KeystoneJS. I'm not going to explain how KeystoneJS works in this post, but by simply running yo keystone, my site was up and running. Keystone had configured all of those annoying modules for me, and I could concentrate on writing the code for my web pages. Adding new content types became as simple as adding a new Keystone "model" to the site, and then Keystone would automatically create all the database schemas for me and add the model to the admin UI (a rough sketch of what such a model looks like is shown below). I was so impressed: I had finished the whole website in just over an hour, and KeystoneJS had definitely done 75% of the work for me. I picked up Keystone so quickly, and I have used it for multiple projects since. It is without a doubt my go-to tool if I'm writing a website which has any kind of content management needs.
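To give a feel for what one of those "models" looks like, here is a minimal sketch in the style of the 0.3-era KeystoneJS API. The field names and types are my own illustration, not code from the original post, so treat the details as an assumption rather than a recipe:

var keystone = require('keystone');
var Types = keystone.Field.Types;

// A "Post" content type: Keystone builds the database schema
// and the admin UI screens from this one definition.
var Post = new keystone.List('Post');

Post.add({
    title: { type: Types.Text },
    publishedDate: { type: Types.Date },
    content: { type: Types.Html, wysiwyg: true }
});

// Registering the list makes it appear in the admin UI.
Post.register();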
Open Source Software and the KeystoneJS Community

KeystoneJS is a completely open source project. You can see the source code on GitHub, and there is an active community of developers constantly improving and fixing bugs in KeystoneJS. It was developed by Thinkmill, a web design company. They use the software for their own work, so it benefits them to have a community helping to improve their software. Anyone can use KeystoneJS, and there is no need to give anything back, but a lot of people who find KeystoneJS really useful will want to help out. It also means that if I discover a bug, I am able to submit a pull request to fix it, and hopefully that will get merged into the code.

A few weeks ago, I found myself with some spare time and decided to get involved in the project, so I started to help out by adding some end-to-end (e2e) testing. Initially, the work I did was incorrect, but rather than my pull request simply being rejected, the developers took the time to point me in the right direction. Eventually I worked out how everything worked, and my pull request was merged into the code. A few days later, I had written a few more tests. I'd quite often need to ask questions about how things should be done, but the developers were all very friendly and helpful.

Soon enough, I understood quite a bit about the testing and managed to add some more tests. It was not long before the project lead, Jed Watson, asked me if I would like to become a KeystoneJS member, which would give me access to push my changes straight into the code without having to make pull requests. For me, as a complete beginner, being able to say I was part of a project as big as this meant a lot. To begin with, I felt as though I was asking so many questions that I must just be annoying everyone and should probably stop. However, Jed and everyone else quickly changed that, and I felt like I was doing something useful.

Into the future

The entire team is very motivated to make KeystoneJS as good as it can be. Once version 0.4 is released, there will be many exciting additions in the pipeline. The admin UI is going to be made more customizable, and user permissions and roles will be implemented, among many other things. All of this is made possible by the community, who dedicate lots of their time for free to make this work. The fact that everyone is contributing because they want to, and not because it's what they're paid to do, makes a huge difference. People want to see these features added so that they can use them for their own projects, and so they are all very committed to making it happen.

On a personal note, I can't thank the community enough for all their help and support over the last few weeks, and I am very much looking forward to being part of Keystone's development.

About the author

Jake Stockwin is a third-year mathematics and statistics undergraduate at the University of Oxford, and a novice full-stack developer. He has a keen interest in programming, both in his academic studies and in his spare time. Next year, he plans to write his dissertation on reinforcement learning, an area of machine learning. Over the past few months, he has designed websites for various clients and has begun developing in Node.js.


5 Go Libraries, Frameworks, and Tools You Need to Know

Julian Ursell
24 Jul 2014
4 min read
Golang is an exciting new language seeing rapid adoption in an increasing number of high-profile domains. Its flexibility, simplicity, and performance make it an attractive option for fields as diverse as web development, networking, cloud computing, and DevOps. Here are five great tools in the thriving ecosystem of Go libraries and frameworks.

Martini

Martini is a web framework that touts itself as "classy web development", offering neat, simplified web application development. It serves static files out of the box, injects existing services in the Go ecosystem smoothly, and is tightly compatible with the HTTP package in the native Go library. Its modular structure and support for dependency injection allow developers to add and remove functionality with ease, and make for extremely lightweight development. Out of all the web frameworks to appear in the community, Martini has made the biggest splash, and it has already amassed a huge following of enthusiastic developers. A minimal example is sketched below.
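As a taste of that "classy" style, here is the canonical hello-world adapted from Martini's README; treat the exact import path as an assumption if your setup differs:

package main

import "github.com/go-martini/martini"

func main() {
    m := martini.Classic() // routing, logging, and static file serving out of the box

    // Handlers can simply return a string, which becomes the response body.
    m.Get("/", func() string {
        return "Hello world!"
    })

    m.Run() // listens on port 3000 by default
}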
Gorilla

Gorilla is a toolkit for web development with Golang that offers several packages to implement all kinds of web functionality, including URL routing, support for cookie and filesystem sessions, and even an implementation of the WebSocket protocol, integrating it tightly with important web development standards.

groupcache

groupcache is a caching library developed as an alternative (or replacement) to memcached, unique to the Go language, which offers lightning-fast data access. It allows developers managing data access requests to vastly improve retrieval time by designating a group of its own peers to distribute cached data. Whereas memcached is prone to flooding the database with duplicate loads from clients, groupcache coordinates those requests so that a single successful load is multiplexed out to all waiting clients. Libraries such as groupcache have great value in the Big Data space, as they contribute greatly to the capacity to deliver data in real time anywhere in the world, while minimizing the access pitfalls associated with managing huge volumes of stored data.

Doozer

Doozer is another excellent tool in the sphere of system and network administration, providing a highly available data store used for the coordination of distributed servers. It performs a similar function to coordination technologies such as ZooKeeper, and allows critical data and configurations to be shared seamlessly and in real time across multiple machines in distributed systems. Doozer allows the maintenance of consistent updates about the status of a system across clusters of physical machines, creating visibility about the role each machine plays and coordinating strategies for failover situations. Technologies like Doozer emphasize how effective the Go language is for developing valuable tools that alleviate complex problems within the realm of distributed systems programming and Big Data, where enterprise infrastructures are modeled around the ability to store, harness, and protect mission-critical information.

GoLearn

GoLearn is a new library that enables basic machine learning methods. It currently features several fundamental methods and algorithms, including neural networks, K-means clustering, naive Bayesian classification, and linear, multivariate, and logistic regressions. The library is still in development, as are a number of other packages being written to give Go programmers the ability to develop machine learning applications in the language, such as mlgo, bayesian, probab, and neural-go.

Go's continual expansion into new technological spaces such as machine learning demonstrates how powerful the language is for a variety of different use cases, and the community of Go programmers is starting to generate the kind of development drive seen in other popular general-purpose languages like Python. While libraries and packages are predominantly appearing for web development, we can see support growing for data-intensive tasks and in the Big Data space. Adoption is already skyrocketing, and the next three years will be fascinating to observe as Golang is poised to conquer more and more key territories in the world of technology.


Will Oracle become a key cloud player, and what will it mean to the development and architecture community?

Phil Wilkins
13 Jun 2017
10 min read
This sort of question can provoke some emotive reactions, and many technologists, despite the stereotype, can get pretty passionate about their views. So let me put my cards on the table. My first book as an author is about Oracle middleware (Implementing Oracle Integration Cloud). I am an Oracle Ace Associate (soon to be a full Ace), which is comparable to a Java Rockstar, Microsoft MVP, or SAP Mentor. I work for Capgemini as a senior consultant; as a large SI we work with many vendors, so I need to have a feel for all the options, even though I specialise in Oracle now. Before I got involved with Oracle I worked primarily with open source technologies, particularly JBoss and Fuse (before and after both were absorbed into Red Hat), and I have technically reviewed a number of open source books for Packt. So I should be able to provide a balanced argument. So, on to the …

A lot has been said about Oracle's Larry Ellison and his position on cloud technologies, most notably for rubbishing it in 2008. This is ironic, since those of us who remember the late 90s will recall Oracle heavily committing to a concept called the Network Computer, which could have led to a more cloud-like ecosystem had the conditions been right.

"The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do. ... The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?"[1]

Since then we've seen a slow change. The first cloud offerings we saw came in the form of Mobile Cloud Service, which provided a Mobile Backend as a Service (MBaaS). At this time, Oracle's extensive programme to rationalize its portfolio and bring the best ideas and designs from PeopleSoft, E-Business Suite, and Siebel together into a single cohesive product portfolio started to show progress: Fusion Applications. Fusion Applications, built on a WebLogic core and exploiting other investments, provided the company with a product that had the potential to become cloud enabled. If that initiative hadn't been started when it did, then Oracle's position might look very different. But from a solid, standardised, container-based product portfolio, the transition to cloud has become a great deal easier, facilitated by the arrival of Oracle Database 12c, which provided the means to easily make the data storage at least multi-tenant. This combination gave Oracle the ability to sell ERP modules as SaaS, and meant that Oracle could start to think about competing with the SaaS darlings of Salesforce, NetSuite, and Workday.

However, ERPs don't live in isolation. Any organisation has to deal with its oddities, special needs, and departmental solutions, as well as those systems that are unique and differentiate companies from their competition. This has driven the need to provide PaaS and IaaS as well. Not only that, Oracle themselves admitted that to make SaaS as cost-effective as possible they needed to revise the infrastructure and software platform to maximise application density, a lesson that Amazon with AWS understood from the outset and has realized well. Oracle has also had the benefit of being a later starter: it has looked at what has and hasn't worked, and used its deep pockets to ensure it got the best skills to build the ideal answers, bypassing many of the mistakes and issues the pioneers had to go through.
This brought us to the state of a couple of years ago, where its core products had a cloud existence and Oracle were making headway winning new mid-market customers; after all, Oracle ERP is seen as something of a Rolls-Royce of ERPs, globally capable and well tested, and now cost-accessible to more of the mid-market. So as an ERP vendor Oracle will continue to be a player, and if there is a challenger, Oracle's pockets are deep enough to buy the competition, which is what happened with NetSuite. This may be very interesting to enterprise architects who need to take off-the-shelf building blocks and provide a solid corporate foundation, but for those of us who prefer to build and do something different, it is not so exciting.

In the last few years we have seen a lot of talk about digital disruptors and the need for serious agility (as in the ability to change and react, rather than the development ethos). To have this capability you need to be able to build and radically change solutions quickly, and yet still work with those core backbone accounting tasks. To use a Gartner expression, we need to be bimodal[2] in order to innovate, because application packages change comparatively slowly (they need to be slow and steady if you want to show that your accounting isn't going to look like Enron[3] or Lehman Brothers[4]). With this growing need to drive innovation and change ever faster, we have seen some significant changes in the way things tend to be done. In a way, the need to innovate has had such an impact that you could almost say that, in the process of trying to disrupt existing businesses through IT, we have achieved the disruption of software development. With the facilitation of the cloud, particularly IaaS, and the low cost of starting up and trying new solutions, growing them if they succeed or mothballing them with minimal capital loss or delay if they don't, we have seen:

• The pace of service adoption accelerate exponentially, meaning the rate of scale-up and dynamic demand, particularly for end-user-facing services, has needed new techniques for scaling.
• Standards move away from being formulated by committees of companies wanting to influence or dominate a market segment, which resulted in some great ideas (UDDI as a concept was fabulous) but often very unwieldy specifications (ebXML, SOAP, and UDDI, for example), towards simpler standards that have largely evolved through simplicity and quickly recognized value (JSON, REST) to become de facto standards.
• New development paradigms that enable large solutions to be delivered whilst still keeping delivery on short cycles and supporting organic change (Agile, microservices).
• Continuous Integration and DevOps breaking down organisational structures and driving accountability: you build it, you make it run.
• The open source business model becoming the predominant route into the industry for a new software technology, without needing deep pockets for marketing, alongside acceptance that open source software can be as well supported as a closed source product.

For a long time, despite Oracle being the 'guardian' of Java and, a little more recently, MySQL, they haven't really managed to establish themselves as a 'cool' vendor. If you wanted a cool vendor, you'd historically probably look at Red Hat, one of the first businesses to really get open source and community thinking. The perception, at least, has been that Oracle acquired these technologies either as a by-product of a bigger game or as a way of creating an 'on ramp' to their bigger, more profitable products.
Oracle have started to recognise that to be seriously successful in the cloud, like AWS, you need to be pretty pervasive, and not only connect with the top of the decision tree but also with those at the code face. To do that you need a bit of the 'cool' factor. That means doing things beyond just the database and your core middleware. These areas are more and more frequently subject to potential disruption, such as Hadoop and big data, NoSQL, and things like Kafka in the middleware space. This also fits with the narrative that to do well with SaaS you need at least a very good IaaS, and the way Oracle has approached SaaS you definitely need a good PaaS, so they might as well make these commercial offerings too. This has resulted in Oracle moving from half a dozen cloud offerings to something in the order of nearly forty offerings classified as PaaS, plus a range of IaaS offerings that will appeal to developers and architects, from direct support for Docker through to Container Cloud (which provides a simplified Docker model), and on to Kafka, Node.js, MySQL, NoSQL, and others. The web tier is pretty interesting with JET, an enterprise-hardened, certified version of Angular, React, and Express with extra tooling, which has been made available as open source. So the technology options are becoming a lot more interesting. Oracle are also starting to target new startups, looking to get new organisations onto the Oracle platform from day one, in the same way that it is easy for a startup to leverage AWS.

Oracle have made some commitment to the Java developer community through JavaOne, which runs alongside its big brother conference, OpenWorld. They are now seriously trying to reach out to the hardcore development community (not just Java, as the new Oracle cloud offerings are definitely polyglot) through Oracle Code. I was fortunate enough to present at the London occurrence of the event (see my blog here). What Oracle has not yet quite reached is the point of being clearly easy to start working with compared to AWS and Azure. Yes, Oracle provide new sign-ups with 300 dollars of credit, but when you have a reputation (deserved or otherwise) for being expensive, it isn't necessarily going to get people on board in droves, say compared to AWS's free micro-instance for a year.

Conclusion

In all of this, I am of the view that Oracle are making headway; they are recognising what needs to be done to be a player. I have said in the past, and I believe it is still true, that Oracle is like an oil tanker or aircraft carrier: it takes time to decide to turn, and turning isn't quick, but once a course is set a real head of steam and momentum will be built, and I wouldn't want to be in the company's path. So let's look at some hard facts: Oracle's revenues remain pretty steady, and, surprisingly, Oracle showed up in the last week on LinkedIn's top employers list[5]. Oracle isn't going to just disappear; its database business alone will keep it alive for a very long time to come. Its SaaS business appears to be on a good trajectory, although more work on API enablement needs to take place. As an IaaS and PaaS technology provider, Oracle appear to be getting a handle on things.
Oracle is going to be attractive to end-user executives, as it is one of the very few vendors that covers all tiers of cloud from IaaS to PaaS, providing the benefits of traditional hosting when needed as well as fully managed solutions and the benefits they offer. Oracle does still need to overcome some perception challenges; in many respects Oracle are seen in the same way Microsoft were in the 90s and 2000s, as something of a necessary evil that can be expensive.

[1] http://www.businessinsider.com/best-larry-ellison-quotes-2013-4?op=1&IR=T/#oud-computing-maybe-im-an-idiot-but-i-have-no-idea-what-anyone-is-talking-about-1
[2] http://www.gartner.com/it-glossary/bimodal/
[3] http://www.investopedia.com/updates/enron-scandal-summary/
[4] https://en.wikipedia.org/wiki/Bankruptcy_of_Lehman_Brothers
[5] https://www.linkedin.com/pulse/linkedin-top-companies-2017-where-us-wants-work-now-daniel-roth


An ethical mobile operating system, /e/ - Trick or Treat?

Prasad Ramesh
01 Nov 2018
2 min read
Previously known as eelo, /e/ is an 'ethical' operating system for mobile phones. Leading the project is Gaël Duval, who is also the creator of Mandrake Linux. Is it a new OS? Not exactly: it is a forked version of LineageOS stripped of Google apps, with a focus on privacy, positioned as an ethical OS.

What's so good about /e/?

The good thing here is that this is a unique effort for an ethical OS, something different from the data collection of Android or the expensive devices from Apple. With a functional ROM including all the usual functionality, Duval seems to be pretty serious about this. An OS that respects user privacy does sound like a very nice thing. However, as pointed out by people on Reddit, this is what Cyanogen was in the beginning. The ethical OS /e/ is not actually a new OS from scratch; who has the time or funding for that today? You have /e/ services instead of Google services, but ummm, can you trust them?

Is /e/ a trick… or a treat?

We have mixed feelings about this one. It is a commendable effort and the idea is right, but with the recent privacy debates everywhere, trusting a new OS is tricky. We'll reserve judgement till it is out of beta and has a name that you can Google search for.


Why containers are driving DevOps

Diego Rodriguez
12 Jun 2017
5 min read
It has been a long ride since the days when one application would take up a full room of computing hardware. Research and innovation in information technology (IT) have taken us far, and will surely keep moving even faster every day. Let's talk a bit about the present state of DevOps, and how containers are driving the scene.

What are containers?

According to Docker (the most popular container platform), a container is a stand-alone, lightweight package that has everything needed to execute a piece of software. It packs your code, runtime environment, system tools, libraries, binaries, and settings. It's available for Linux and Windows apps. It runs the same every time, regardless of where you run it. It adds a layer of isolation, helping reduce conflicts between teams running different software on the same infrastructure. Containers are one level deeper in the virtualization stack, allowing lighter environments, more isolation, more security, more standardization, and many more blessings.

There are tons of benefits you could take advantage of. Instead of having to virtualize the whole operating system (like virtual machines [VMs] do), containers take advantage of sharing most of the core of the host system and just add the required, not-in-the-host binaries and libraries; no more gigabytes of disk space lost due to bloated operating systems with repeated stuff. This means a lot of things: your deployments can be packed in a much smaller image than having to run them alone in a full operating system, each deployment boots up way faster, idle resource usage is lower, there is less configuration and more standardization (remember "convention over configuration"), and fewer things to manage and more isolated apps mean fewer ways to screw something up, therefore less attack surface, which subsequently means more security. But keep in mind that not everything is perfect, and there are many factors you need to take into account before getting into the containerization realm.

Considerations

It has been less than 10 years since containerization started, and in the technology world that is a lot, considering how fast other technologies such as web front-end frameworks and artificial intelligence (AI) are moving. In just a few years, this widely deployed technology has matured and become production-ready; coupled with microservices, the boost has taken it to new parts of the DevOps world, and it is now the de facto solution for many companies in their application and service deployment flow. Just before all this exciting movement started, VMs were the go-to answer for many of the problems encountered by IT people, including myself. And although VMs are a great way to solve many of these problems, there was still room for improvement. Nowadays, the horizon seems really promising, with top technology companies backing tools, frameworks, services, and products all around containers, benefiting most of the code we develop, test, debug, and deploy on a daily basis.

These days, thanks to the work of many, it's possible to have a consistent, all-around lightweight way to run, test, debug, and deploy code from whichever platform you work on. So, if you code on Linux using Vim, but your coworker uses Windows with VS Code, both can have the same local container with the same binaries and libraries where the code is run.
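As an illustration of how small such an environment definition can be, here is a rough Dockerfile for a simple Node.js service. The base image tag, file names, and port are placeholders of my own, not something prescribed by the article:

# Start from a small Node.js base image
FROM node:alpine
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package.json ./
RUN npm install

# Copy the application code and declare how to run it
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]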
This consistency removes a lot of incompatibility issues and allows teams to enjoy production environments on their own machines, without having to worry about sharing the same configuration files, misconfiguration, versioning hassles, and so on. It gets even better. Not only is there no need to maintain the same configuration files across the different services: there is less configuration to handle as a whole. Templates do most of the work for us, allowing you and your team to focus on creating and deploying your products, improving and iterating your services, and changing and enhancing your code. In less than 10 lines you can specify a working template containing everything needed to run a simple Node.js service, or maybe a Ruby on Rails application, or even a Scala cron job. Containerization supports most, if not all, languages and stacks.

Containers and virtualization

Virtualization has accelerated the speed at which we build things for many years, and it will continue to provide us with better solutions as time goes by. Just as we went from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) and finally Software as a Service (SaaS) and others (Anything as a Service? AaaS?), I am certain that we will find more abstraction beyond containers, making our lives easier every day.

Like most of today's tools, many virtualization and containerization tools are open source, with huge communities and support boards around them, but keep the trust in good ol' Stack Overflow. So remember to give something back to the amazing open source community: open issues, report bugs, share what works well, and help fix and improve the parts that are lacking. But really, just try to learn these new and promising technologies, which give us IT people a huge bump in efficiency in pretty much all aspects.

About the author

Diego Rodriguez Baquero is a full-stack developer specializing in DevOps and SysOps. He is also a WebTorrent core team member. He can be found at https://diegorbaquero.com/.


Art Hack Day

Michael Ang
28 Nov 2014
6 min read
Art Hack Day is an event for hackers whose medium is art and artists whose medium is tech. A typical Art Hack Day event brings together 60 artist-hackers and hacker-artists to collaborate on new works in a hackathon-style sprint of 48 hours, leading up to a public exhibition and party. The artworks often demonstrate the expressive power of new technology, radical collaboration in art, or a critical look at how technology affects society. The technology used is typically open, and sharing source code online is encouraged.

Hacking an old reel-to-reel player for Mixtape. Photo by Vinciane Verguethen.

As a participant (and now an organizer) of Art Hack Day I've had the opportunity to take part in three of the events. The spirit of intense creation in a collaborative atmosphere drew me to the Art Hack Day. As an artist working with technology it's often possible to get bogged down in the technical details of realizing a project. The 48-hour hackathon format of Art Hack Day gives a concrete deadline to spur the process of creation and is short enough to encourage experimentation. When the exhibition of a new work is only 48 hours away, you've got to be focused and solve problems quickly. Going through this experience with 60 other people brings an incredible energy.

Each Art Hack Day is based around a theme. Some examples include "Lethal Software", "afterglow", and "Disnovate". The Lethal Software art hack took place in San Francisco at Gray Area. The theme was inspired by the development of weaponized drones, pop-culture references like Robocop and The Terminator, and software that fights other software (e.g. spam vs spam filters). Artist-hackers were invited to create projects engaging with the theme that could be experienced by the public in person and online. Two videogame remix projects included KillKillKill!!!, where your character would suffer remorse after killing the enemy, and YODO Mario (You Only Die Once), where the game gets progressively glitched out each time Mario dies, and the second player gets to move the holes in the ground in an attempt to kill Mario. DroneML presented a dance performance using drones, and Cake or Death? (the project I worked on) repurposed a commercial drone into a CupCake Drone that delivered delicious pastries instead of deadly missiles.

A video game character shows remorse in KillKillKill!!!

The afterglow Art Hack Day in Berlin, part of the transmediale festival, posed a question relating to the ever-increasing amount of e-waste and overabundance of collected data: "Can we make peace with our excessive data flows and their inevitable obsolescence? Can we find nourishment in waste, overflow and excess?" Many of the projects reused discarded technology as source material. PRISM: The Beacon Frame caused controversy when a technical contractor thought the project seemed closer to the NSA PRISM surveillance project than an artistic statement and disabled the project. The Art Hack Day version of PRISM gave a demonstration of how easily cellular phone connections can be hijacked: festival visitors coming near the piece would receive mysterious text messages such as "Welcome to your new NSA partner network". With the show just blocks away from the German parliament and recent revelations of NSA spying, the piece seemed particularly relevant.

A discarded printer remade into a video game for PrintCade

Disnovate was hosted by Parsons Paris as part of the inauguration of their MFA Design and Technology program.
Art Hack Day isn't shy of examining the constant drive for innovation in technology, or even the hackathon format that it uses: "Hackathons have turned into rallies for smarter, cheaper and faster consumption. What role does the whimsical and useless play in this society? Can we evaluate creation without resorting to conceptions of value? What worldview is implied by the language of disruption; what does it clarify and what does it obscure?"

Many of the works in this Art Hack Day had a political or dystopian statement to make. WAR ZONE recreated historical missile launches inside Google Earth, giving a missile's-eye view of the trajectory from launch site to point of impact. The effect was both mesmerizing and terrifying. Terminator Studies draws connections between the fictional Terminator movie and real-world developments in the domination of machines and surveillance. Remelt literally recast technology into a primitive form by melting down aluminum computer parts and forming them into Bronze Age weapons, evoking the fragility of our technological systems and our often warlike nature. On a more light-hearted note, Drinks At The Opening Party presented a table of empty beer bottles. As people took pictures of the piece using a flash, a light sensor would trigger powerful shaking of the table that would actually break the bottles. Trying to preserve an image of the bottles would physically destroy them.

Edward Snowden gets a vacation in Paris as Snowmba. Photo by Luca Lomazzi.

The speed with which many of these projects were created is testament to the abundance of technology that is available for creative use. Rather than using technology in pursuit of "faster, better, more productive", artist-hackers are looking at the social impacts of technology and its possibilities for expression and non-utilitarian beauty. The collaborative and open atmosphere of the Art Hack Day gives rise to experimentation and new combinations of ideas. Technology is one of the most powerful forces shaping global society. The consummate artist-hacker uses technology in a creative way for social good. Art Hack Day provides an environment for these artist-hackers and hacker-artists to collaborate and share their results with the public. You can browse through project documentation and look for upcoming Art Hacks on the Art Hack Day website or via @arthackday on Twitter.

Project credits

• Mixtape by John Nichols, Jenn Kim, Khari Slaughter and Karla Durango
• KillKillKill!!! by bigsley and Martiny
• DroneML by Olof Mathé, Dean Hunt, and Patrick Ewing
• YODO Mario (You Only Die Once) by Tyler Freeman and Eric Van
• Cake or Death? by Michael Ang, Alaric Moore and Nicolas Weidinger
• PRISM: The Beacon Frame by Julian Oliver and Danja Vasiliev
• PrintCade by Jonah Brucker-Cohen and Michael Ang
• WAR ZONE by Nicolas Maigret, Emmanuel Guy and Ivan Murit
• Terminator Studies by Jean-Baptiste Bayle
• Remelt by Dardex
• Drinks At The Opening Party by Eugena Ossi, Caitlin Pickall and Nadine Daouk
• Snowmba by Evan Roth

About the Author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical realms by constructing real-world objects from simple 3D models. He is a participant and sometimes organizer of Art Hack Day, an event for hackers whose medium is art and artists whose medium is tech.

‘Soft’ Skills Every Data Pro Needs

Sam Wood
16 May 2016
4 min read
Your technical data skills are at the top of your game: you've mastered machine learning, you're a wizard at stats, and you know the tools of the trade from Excel to R. But to be a truly top-notch data professional, you're going to need some exceptional 'soft' data skills as well. It's not enough to be good at crunching numbers; you've got to know how to ask the right question, and then how to explain the answers in a way that your business or clients can act upon. So what are the essential soft skills you need to ensure you're not just a good data scientist, but a great data scientist?

Asking Questions, Not Proving Hunches

As a data analyst, how many times have you been asked to produce some figures that prove something your boss or colleague already believes to be true? The key to good data analysis is not starting with an assertion and then looking for the evidence to support it. It's coming up with the perfect questions that will get you the valuable insight your business needs. Don't go trying to prove that customers leave your business because of X reason; ask your data, 'Why do our customers leave?'

Playing to the Audience

Who's making a data request? The way you want to present your findings, and even the kind of answers you give, will depend on the role of the person asking. Project managers and executives are likely to be looking for a slate of options, with multiple scenarios and suggestions, and raw results that they can draw their own conclusions from. Directors, CEOs, and other busy leadership types will be looking for a specific recommendation, usually in a polished, quick presentation that they can simply say 'Yes' or 'No' to. They're busy people; they don't want to have to wade through reams of results to get to the core. Instead, it's often your job to do that for them.

Keeping It Simple

One of the most essential skills of a data wrangler is defining a problem, and then narrowing down the answers you'll need to find. There are an endless number of questions you can end up asking your data, so understanding the needs of a data request and not getting bogged down in too much information is vital to solving the core issues of a business. There's a saying that "smart people ask hard questions, but very smart people ask simple ones." Still feel like you keep getting asked stupid questions, or asked to provide evidence for an assertion that's already been made? Cut your non-data-analyst colleagues some slack: you've got an advantage over them by already knowing how data works. Working directly with databases gives you the discipline you need to start asking better questions, and to structure questions with the precision and accuracy needed to get the big answers. Developing these skills will allow you to contribute towards solving the challenges that your business faces.

Delivering Your Results

Your amazing data insight isn't going to be worth squat if you don't present it in a way that lets people recognize its importance. You might have great results, but without a great presentation or a stunning visualization, you're going to find your findings put on the back burner or even ditched from a road map entirely. If you've managed to get the right message, you need to make sure your message is delivered right. If you're not the most confident public speaker, don't underestimate the power of a good written report.
Billionaire tyrant Amazon CEO Jeff Bezos notably requires all senior staff to put their ideas forward in written memos, which are read in silence at the start of meetings. Presenting your results in writing allows you to be clear about the 'story' of your data, and to resist the temptation to explain the meanings of your charts on the fly.

Why Soft Skills Are Essential

You might think you'll be able to get by on your technical mastery alone, and you might be right, for a while. But the future of business is data, and more and more people are going to start seeking roles in data analysis: people who are already in possession of the creative thinking and expert presentation skills that make a great data worker. So make sure you stay at the top of your game, and hone your soft data skills with almost as much rigor as you keep on top of the latest data tech.


FizzBuzz with only pandas: How to not pass a job interview

Greg Roberts
09 Jun 2016
10 min read
I love Python, and the data analysis library pandas in particular. It's a fantastic library for working with tabular data, contains connectors for many common data formats out of the box, has an excellent performance profile (as it sits on top of NumPy), and has many, many common data operations built in. As a data analyst, I use pandas every day, for most of the day. Sometimes I think of ways I could make more use out of it, for example this gist for getting results from Neo4j into a DataFrame. Sometimes, like today, I think about how I could misuse it for a purely procrastinatory purpose.

Want to learn Python? This week our Python Fundamentals course is free inside Mapt. It's an accessible introduction that's comprehensive enough to give you the confidence you need to explore Python further. Click here, log in, and go straight to it!

FizzBuzz

FizzBuzz is a popular, simple test of a programmer's knowledge and ability, often employed in job interviews to test a candidate's programming and problem-solving ability. The basic problem is usually stated as follows:

"Write a program that prints the numbers from 1 to 100. But for multiples of three print 'Fizz' instead of the number and for the multiples of five print 'Buzz'. For numbers which are multiples of both three and five print 'FizzBuzz'."

There is so much discussion of FizzBuzz on the internet that it almost seems pointless to discuss it any more. Regardless of your opinion of it as a tool to test programmer knowledge and ability, it is certainly an interesting, if somewhat basic, challenge, and is still approached (with varying degrees of irreverence) to this day.

The Challenge

Partially inspired by Joel Grus' tongue-in-cheek article above, and partially out of pure interest, I recently started to ponder how one might go about implementing a FizzBuzz solution using pandas. The intention would be to rely on as few non-pandas operations as possible, whilst still producing legible code. First of all, here's a fairly standard vanilla Python solution to the problem:

def fizzbuzz(x):
    '''returns the fizzbuzz output for an integer x'''
    output = ''
    if x % 3 == 0:
        output += 'Fizz'
    if x % 5 == 0:
        output += 'Buzz'
    if (x % 3) > 0 and (x % 5) > 0:
        output += str(x)
    return output

for i in range(1, 101):
    print fizzbuzz(i)

Now, the simplest way to apply this with pandas would be to just use the apply function on a series of integers:

import pandas as pd

pd.Series(range(1, 100)).apply(fizzbuzz)

which is simple, terse, and readable, but the logic is still being done outside of pandas. What I'm really after is a way to express that fizzbuzz logic entirely with pandas operations. My first crack at this is displayed below.

# create a DataFrame containing all the values we need, all the integers,
# and a series of Fizzes and Buzzes we can hack together to make our output
values = pd.DataFrame(
    {
        'n': range(1, 101),
        'fizz': 'Fizz',
        'buzz': 'Buzz'
    }
)

# get columns for each of the separate output types
fizzes = values[values['n'] % 3 == 0]['fizz']
buzzes = values[values['n'] % 5 == 0]['buzz']
ints = values[(values['n'] % 3 > 0) & (values['n'] % 5 > 0)].n.apply(str)

# put the columns together as one dataframe again
outputs = pd.concat([fizzes, buzzes, ints], axis=1)

# for each row, concatenate the non-null values together
outputs.apply(lambda x: x[~pd.isnull(x)].str.cat(), axis=1)

First, we're taking advantage of pandas' quite clever constructor functions to create a DataFrame with all the values we need.
Secondly, we use pandas' expressive filtering syntax to create three separate columns containing ONLY the values we need. The third part is to concatenate these columns together to give us one more dataframe containing only the values we need. This takes advantage of pandas' powerful and extensive indexing capabilities, which returns us a dataframe with a nice contiguous index, and all our values in order. Finally, we use apply again to turn each row into a single string. When you supply axis = 1 to the apply method, it feeds each row to your operation in the form of a Series, which makes it easier to work with the row in question. I was fairly happy with this as a first pass. All the logic for deciding what to print is done with pandas operations, and the flow is fairly clear. It's still pretty long though. Yes it could be condensed down to two (very long and ugly) lines of code, but it still doesn't feel like an 'optimal' solution to this (very silly) problem. We can condense this logic down further, and reach one-liner-nirvana by making even more impractical use of pandas' DataFrame constructor: pd.DataFrame( { 'f':pd.Series('Fizz',index=filter(lambda x: x% 3 == 0, range(1,100))), 'b':pd.Series('Buzz',index=filter(lambda x: x% 5 == 0, range(1,100))), 'n':pd.Series( filter(lambda x: x%3>0 and x%5>0,range(1,100)), index=filter(lambda x: x%3>0 and x%5>0,range(1,100)) ).apply(str) } ).apply( lambda x: x[~pd.isnull(x)].str.cat(), axis=1 ) This really feels like progress. The flow is essentially the same as before, but now we construct the Fizz, Buzz and output Series within the DataFrame constructor. This makes the code more succinct, and also serves the crucial purpose of saving some precious bytes of memory, reducing the evident strain on my 16GB, i5 work PC. We can still do better however. You'll note that the above solution contains a filter() function for the fizzbuzz logic, which is a step backwards (within this already backwards problem), Also, the FizzBuzz lines actually say BuzzFizz, which is very annoying and not actually fixable. Version 0.18 of pandas introduced some really neat changes including an expanded set of arithmetic operations for Timedeltas. Another neat little addition was the addition of the ability to filter Series objects using a callable condition. You can also supply an 'other' argument to return when the conditions aren't met. This allows us to bring that filter logic back into pandas, and create our best/worst solution yet: pd.concat( [ pd.Series('Fizz', index=range(1, 100)).where(lambda x: x.index % 3 ==0, other=''), pd.Series('Buzz', index=range(1, 100)).where(lambda x: x.index % 5 ==0, other=''), pd.Series(range(1,100),index=range(1,100)).where(lambda x:(x % 5 > 0) & (x % 3 > 0), '').apply(str), ], axis=1 ).apply(lambda x: x.str.cat(), axis=1) Ok, Ok, I admit it, I've gone too far. I looked at this daft code, and realised my quest for pandas based purity had driven me to creating this abomination. But hey, I learnt a bit about my favorite library in the process, so what's the harm? If I relax my condition for pandas purity slightly, this can be coaxed into something almost readable: pd.concat( [ pd.Series('Fizz', range(0, 100, 3)), pd.Series('Buzz', range(0, 100, 5)), pd.Series(range(100)).where(lambda x:(x % 5 > 0) & (x % 3 > 0), '').apply(str), ], axis=1 ).apply(lambda x: x.str.cat(), axis=1)[1:] And that's where I decided to stop. It's not going to get any better/worse than that, and besides, I have actual work to do. 
It's been an entertaining bit of work, and I'm satisfied this solution is about as small as I'm going to get within the ridiculous constraints I set myself. Also, I can hear the ardent code golfers screaming at me that NONE of my pandas solutions are valid, because they all print an index column as well as the fizzbuzz values. I don't think there is a way of overcoming this purely in pandas, so you'll have to settle for just wrapping any of them with '\n'.join(...)

Performance

So, we have several methods for fizzbuzzing with pandas. The next obvious question is about scalability and performance. What if you're interviewing to be a Big Data Engineer at Google, and they ask you for the fastest, most scalable, general fizzbuzz solution in your arsenal? Let's take our functions above for a test drive. We'll encapsulate each as a function, then see how they scale as we ask for ever larger ranges. I'm using a quick and dirty method of timing this as I don't actually care that much.

def fizzbuzz(x):
    '''returns the fizzbuzz output for an integer x'''
    output = ''
    if x % 3 == 0:
        output += 'Fizz'
    if x % 5 == 0:
        output += 'Buzz'
    if (x % 3) > 0 and (x % 5) > 0:
        output += str(x)
    return output

# our vanilla solution
def fb_vanilla(rng):
    return map(fizzbuzz, range(1, rng + 1))

# our trivial pandas solution
def fb_pandas_vanilla(rng):
    return pd.Series(range(1, rng + 1)).apply(fizzbuzz)

# I'm going to skip the first big pandas solution, this is pretty much identical

# our second pandas solution, down to one line
def fb_pandas_long(rng):
    return pd.DataFrame(
        {
            'f': pd.Series('Fizz', index=filter(lambda x: x % 3 == 0, range(1, rng + 1))),
            'b': pd.Series('Buzz', index=filter(lambda x: x % 5 == 0, range(1, rng + 1))),
            'n': pd.Series(
                filter(lambda x: x % 3 > 0 and x % 5 > 0, range(1, rng + 1)),
                index=filter(lambda x: x % 3 > 0 and x % 5 > 0, range(1, rng + 1))
            ).apply(str)
        }
    ).apply(
        lambda x: x[~pd.isnull(x)].str.cat(),
        axis=1
    )

# our more succinct, pandas-only solution
def fb_pandas_shorter(rng):
    return pd.concat(
        [
            pd.Series('Fizz', index=range(1, rng + 1)).where(lambda x: x.index % 3 == 0, other=''),
            pd.Series('Buzz', index=range(1, 100)).where(lambda x: x.index % 5 == 0, other=''),
            pd.Series(range(1, rng + 1), index=range(1, rng + 1)).where(lambda x: (x % 5 > 0) & (x % 3 > 0), '').apply(str),
        ],
        axis=1
    ).apply(lambda x: x.str.cat(), axis=1)

# our shortest solution, relying on some non-pandas stuff
def fb_pandas_shortest(rng):
    return pd.concat(
        [
            pd.Series('Fizz', range(0, rng + 1, 3)),
            pd.Series('Buzz', range(0, rng + 1, 5)),
            pd.Series(range(rng + 1)).where(lambda x: (x % 5 > 0) & (x % 3 > 0), '').apply(str),
        ],
        axis=1
    ).apply(lambda x: x.str.cat(), axis=1)[1:]

# Let's do some testing!
functions = [
    fb_vanilla,
    fb_pandas_vanilla,
    fb_pandas_long,
    fb_pandas_shorter,
    fb_pandas_shortest
]

times = {x.__name__: [] for x in functions}
tests = range(1, 1000, 100)

from time import time

for i in tests:
    for x in functions:
        t1 = time()
        _ = x(i)
        t2 = time()
        times[x.__name__].append(t2 - t1)

results = pd.DataFrame(times, index=tests)

Well, so far so terrible. The first, longest solution actually scales O(n), which I find hilarious for some reason. The rest of my attempts fare a little better, but nothing compares to the vanilla Python solution, which doesn't even show up on this chart, because it completes in <1ms. Let's discard that long solution, and try an even more strenuous test, up to 100,000. That seems pretty conclusive. Get me Google on the phone, I have solved this problem.

When should this be used?

NEVER. Like, seriously, those results are beyond terrible.
I was expecting the pandas solutions to fall down compared to the vanilla Python solution, but this is ridiculous. If you're asked to implement fizzbuzz in your next interview, the only possible reason you would have for using any of these is that you're a flippant contrarian like me. Having said that, it's important to note that I still love pandas, and I don't blame these results on the fantastic devs there. This is thoroughly not what the library is designed for, and I'm only doing this to test my own fluency with the tool. Overall, I think I achieved my goal with this. I made myself chuckle a few times, and I learnt a few things about pandas in the process. If you have any suggestions for optimisations to this code, reach out in the comments or on Twitter. Equally, if you have any thoughts on why these particular pandas functions are SOOO much slower than the vanilla solution, I'd be interested to hear them.


How to stay safe while using Social Media

Guest Contributor
08 Aug 2018
7 min read
The infamous Facebook and Cambridge Analytica data breach has sparked an ongoing and much-needed debate about user privacy on social media. Given how many people are on social media today, and how easy it is for anyone to access the information stored on those accounts, it's not surprising that they can prove to be a goldmine for hackers and malicious actors. We often don't think about the things we share on social media as being a security risk, but if we aren't careful, that's exactly what they are. On the surface, much of what we share on social media sites and services seems innocuous and of little danger as far as our privacy or security is concerned. However, the most determined cybercriminals in the business have learned how they can exploit social media sites and gain access to them to gather information. Here's a guide to the security vulnerabilities of the most popular social media networks on the Internet, along with precautionary guidelines that you should follow.

Facebook's third-party apps: A hacker's paradise

If you take cybersecurity seriously, you should consider deleting your Facebook account altogether. Some of the revelations over the last few years show the extent to which Facebook has allowed its users' data to be used, in many cases for purposes that directly oppose their best interests, and the social media giant has made only vague promises about how it will protect its users' data. If you are going to use Facebook, you should assume that anything you post there can and will be seen by third parties. That's because we now know that the data of Facebook users whose friends have consented to share their data can also be collected without their direct authorization.

One of the most common ways that Facebook is used to undermine users' privacy comes in the form of what seems like a fun game. These games consist of a name generator, in which users generate a pet name, the name of a celebrity, and so on, by combining two words. These words are usually things like "mother's maiden name" or "first pet's name". The more astute readers might recognize that such information is regularly used as answers to secret questions in case you forget your password. By posting that information on your Facebook account, you are potentially granting hackers the information they need to access your accounts elsewhere. As a rule of thumb, it's best to grant as little access as possible to any Facebook app; a third-party app that asks for extensive privileges such as access to your real-time location, contact list, microphone, camera, or email could prove to be a serious security liability.

Twitter: privacy as a binary choice

Twitter keeps things simple in regards to privacy. It's nothing like Facebook, where you can micro-manage your settings. Instead, Twitter keeps it binary: things are either public or private. You also don't have the opportunity to change this for individual tweets. Whenever you use Twitter, ask yourself if you want other people to know where you are right now. Remember, if you are on holiday and your house is unattended, posting that information publicly could put your property at risk. You should also remember that any photos you upload with embedded GPS coordinates could be used to track you down physically. Twitter automatically strips away EXIF data, but it still reads that data to provide suggested locations. For complete security, remove the data before you upload any picture.
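If you would rather strip that metadata yourself than trust the platform, it only takes a few lines. Here is a rough sketch using the Pillow imaging library; the library choice and file names are my own illustration, not something the article prescribes:

from PIL import Image

def strip_exif(src_path, dst_path):
    """Copy only the pixel data into a fresh image, leaving EXIF metadata behind."""
    img = Image.open(src_path)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst_path)

strip_exif("holiday.jpg", "holiday_clean.jpg")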
Finally, refrain from using third-party Twitter apps such as UberSocial, HootSuite, or Tweetbot. If you're going for maximum security, avoid using any at all!

Instagram: location, location, location

The whole idea behind Instagram is the sharing of photos and videos. Sharing your location is fun and even convenient, yet few users truly understand the implications of sharing such information. While it's not a great idea to tell a random stranger on the street that you're going out, the same concept applies to posts and stories that indicate your current location. Make sure to refrain from location tagging as much as possible. It's also a good idea to remove any EXIF data before posting any photo; in fact, you should consider turning off your location data altogether. Additionally, consider making your profile private. It's a great feature that's often overlooked. With this setting on, you'll be able to review every single follower before they gain access to your content. Remember that if your profile remains public, anyone can see your posts and follow your stories, which in most instances highlight your daily activities. Giving that kind of information to total strangers online could have detrimental outcomes, to put it lightly.

Reddit: a privacy safe haven

Reddit is one of the best social media sites for anonymity. For one thing, you never have to share or disclose any personal information to register with Reddit. As long as you make sure never to share any personally identifiable information and you keep your location data turned off, it's easy to use Reddit with complete anonymity. Though Reddit's track record is almost spotless when it comes to security and privacy, it's essential to understand that your account on this social media platform could still be compromised. That's because your email address is directly linked to your Reddit account. Thus, if you want to protect your account from possible hacks, you must take precautionary steps to secure your email account as well. Remember, everything's connected on the Internet.

VPN: a universal security tool

A virtual private network (VPN) will enhance your overall online privacy and security. When you use a VPN, even the website itself won't be able to trace you; it will only know the location of the server you're connected to, which you can choose. All the data that is sent or received will be encrypted with a military-grade cipher. In many cases, VPN providers offer further features to enhance privacy and security. As of now, quite a few VPN services can identify and blacklist potentially malicious ads, pop-ups, and websites, and with continuous updates of such databases, this feature will only get better. Additionally, DNS leak protection and automatic kill switches ensure that snoopers have virtually no chance of intercepting your connection in any imaginable way. Using a VPN is a no-brainer; if you still don't have one, rest assured that it will be one of the best investments in regards to your online security and privacy.

Staying safe on social media won't happen automatically, unfortunately; it takes effort. Make sure to check the settings available on each platform, and carefully consider what you are sharing. Never share anything so sensitive that, if it were accidentally exposed to all your followers, it would be a disaster. Besides optimizing your privacy settings, make use of virtual security solutions such as VPN services and anti-malware tools.
Take these security measures and remain vigilant - that way you'll remain safe on social media.

About the author

Harold Kilpatrick is a cybersecurity consultant and a freelance blogger. He's currently working on a cybersecurity campaign to raise awareness around the threats that businesses can face online.

Mozilla's new Firefox DNS security updates spark privacy hue and cry
Google to launch a censored search engine in China, codenamed Dragonfly
Did Facebook just have another security scare?
Time for Facebook, Twitter and other social media to take responsibility or face regulation

Meet the who's who of Reinforcement learning

Fatema Patrawala
12 Jul 2018
7 min read
Reinforcement learning is a branch of artificial intelligence in which an agent perceives information about the environment in the form of state spaces and action spaces, and acts on the environment, thereby arriving at a new state and receiving a reward as feedback for that action. This received reward is assigned to the new state. Just as we minimize a cost function to train a neural network, a reinforcement learning agent has to maximize the overall reward to find the optimal policy for a particular task. This article is an extract from the book Reinforcement Learning with TensorFlow.

How is reinforcement learning different from supervised and unsupervised learning?

In supervised learning, the training dataset has input features, X, and their corresponding output labels, Y. A model is trained on this dataset; test cases with input features, X', are then given as input, and the model predicts Y'.

In unsupervised learning, only the input features, X, of the training set are given for training. There are no associated Y values. The goal is to create a model that learns to segregate the data into different clusters by understanding the underlying pattern, thereby classifying the data to find some utility. The model is then used to predict which cluster new input features, X', are most similar to.

Reinforcement learning is different from both supervised and unsupervised learning. Reinforcement learning can guide an agent on how to act in the real world. The interface is broader than the training vectors used in supervised or unsupervised learning: it is the entire environment, which can be real or simulated. Agents are trained in a different way, where the objective is to reach a goal state, unlike in supervised learning where the objective is to maximize the likelihood or minimize a cost. Reinforcement learning agents automatically receive feedback, that is, rewards from the environment, unlike in supervised learning where labeling requires time-consuming human effort.

One of the bigger advantages of reinforcement learning is that phrasing any task's objective in the form of a goal helps solve a wide variety of problems. For example, the goal of a video game agent would be to win the game by achieving the highest score. This also helps in discovering new approaches to achieving the goal; for example, when AlphaGo became the world champion in Go, it found new, unique ways of winning.

A reinforcement learning agent is like a human. Humans evolved very slowly; an agent reinforces, but it can do so very fast. As far as sensing the environment is concerned, neither humans nor artificial intelligence agents can sense the entire world at once. The perceived environment creates a state in which agents perform actions and land in a new state, that is, a newly perceived environment different from the earlier one. This creates a state space that can be finite as well as infinite. The largest sector interested in this technology is defense. Can reinforcement learning agents replace soldiers that not only walk, but fight and make important decisions?

Basic terminologies and conventions

The following are the basic terminologies associated with reinforcement learning:

Agent: This we create by programming such that it is able to sense the environment, perform actions, receive feedback, and try to maximize rewards.

Environment: The world where the agent resides. It can be real or simulated.
State: The perception or configuration of the environment that the agent senses. State spaces can be finite or infinite.

Rewards: Feedback the agent receives after any action it has taken. The goal of the agent is to maximize the overall reward, that is, the immediate and the future reward. Rewards are defined in advance, so they must be designed properly to achieve the goal efficiently.

Actions: Anything that the agent is capable of doing in the given environment. Action spaces can be finite or infinite.

SAR triple: (state, action, reward) is referred to as the SAR triple, represented as (s, a, r).

Episode: Represents one complete run of the whole task.

Let's deduce the convention: every task is a sequence of SAR triples. We start from state S(t), perform action A(t) and thereby receive a reward R(t+1), and land in a new state S(t+1). The current state and action pair gives the reward for the next step. Since S(t) and A(t) result in S(t+1), we have a new triple of (current state, action, new state), that is, [S(t), A(t), S(t+1)] or (s, a, s'). A minimal code sketch of this loop follows below.
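The extract itself contains no code, but as an illustration of the agent-environment loop and the SAR convention above, here is a minimal, hypothetical sketch in Python. The LineWorld environment and the random policy are invented for this example and are not from the book:

```python
import random

class LineWorld:
    """A toy environment: the agent starts at position 0 and the goal is position 4."""
    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action is -1 (move left) or +1 (move right); positions are clamped to 0..4
        self.state = max(0, min(4, self.state + action))
        reward = 1.0 if self.state == 4 else 0.0
        done = self.state == 4
        return self.state, reward, done

env = LineWorld()
state = env.reset()
episode = []                                   # will hold the SAR triples (s, a, r)
done = False
while not done:
    action = random.choice([-1, 1])            # a trivial random policy
    next_state, reward, done = env.step(action)
    episode.append((state, action, reward))    # S(t), A(t), and the reward that follows
    state = next_state

print(episode)                                 # one complete episode of SAR triples
```

Each tuple appended to episode is one SAR triple, and the full list is one episode. A real agent would replace the random choice with a policy learned to maximize the overall reward.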
Pioneers and breakthroughs in reinforcement learning

Here are the pioneers, industrial leaders, and research breakthroughs in the field of deep reinforcement learning.

David Silver

Dr. David Silver, with an h-index of 30, heads the reinforcement learning research team at Google DeepMind and is the lead researcher on AlphaGo. David co-founded Elixir Studios and then completed his PhD in reinforcement learning at the University of Alberta, where he co-introduced the algorithms used in the first master-level 9x9 Go programs. After this, he became a lecturer at University College London. He consulted for DeepMind before joining full-time in 2013. David led the AlphaGo project, which became the first program to defeat a top professional player in the game of Go.

Pieter Abbeel

Pieter Abbeel is a professor at UC Berkeley and was a Research Scientist at OpenAI. Pieter completed his PhD in Computer Science under Andrew Ng. His current research focuses on robotics and machine learning, with a particular focus on deep reinforcement learning, deep imitation learning, deep unsupervised learning, meta-learning, learning-to-learn, and AI safety. Pieter also won the NIPS 2016 Best Paper Award.

Google DeepMind

Google DeepMind is a British artificial intelligence company founded in September 2010 and acquired by Google in 2014. It is an industrial leader in the domains of deep reinforcement learning and the Neural Turing Machine. It made news in 2016 when the AlphaGo program defeated Lee Sedol, a 9th dan Go player. Google DeepMind has channeled its focus into two big sectors, energy and healthcare. Here are some of its projects:

In July 2016, Google DeepMind and Moorfields Eye Hospital announced their collaboration to use eye scans to research early signs of diseases leading to blindness.

In August 2016, Google DeepMind announced its collaboration with University College London Hospital to research and develop an algorithm to automatically differentiate between healthy and cancerous tissues in head and neck areas.

Google DeepMind AI reduced Google's data center cooling bill by 40%.

The AlphaGo program

As mentioned under Google DeepMind above, AlphaGo is a computer program that first defeated Lee Sedol and then Ke Jie, who at the time was the world No. 1 in Go. In 2017 an improved version, AlphaGo Zero, was launched that defeated AlphaGo 100 games to 0.

Libratus

Libratus is an artificial intelligence computer program designed by the team led by Professor Tuomas Sandholm at Carnegie Mellon University to play poker. Libratus and its predecessor, Claudico, share the same meaning: balanced. In January 2017, it made history by defeating four of the world's best professional poker players in a marathon 20-day poker competition. Though Libratus focuses on playing poker, its designers have noted its ability to learn any game that has incomplete information and where opponents are engaging in deception. As a result, they have proposed that the system could be applied to problems in cybersecurity, business negotiations, or medical planning.

You enjoyed an excerpt on reinforcement learning and got to know about breakthrough research in this field. If you want to leverage the power of reinforcement learning techniques, grab our latest edition, Reinforcement Learning with TensorFlow.

Top 5 tools for reinforcement learning
How to implement Reinforcement Learning with TensorFlow
How to develop a stock price predictive model using Reinforcement Learning and TensorFlow

“Is it actually possible to have a free and fair election ever again?,” Pulitzer finalist, Carole Cadwalladr on Facebook’s role in Brexit

Bhagyashree R
18 Apr 2019
6 min read
On Monday, Carole Cadwalladr, a British journalist and Pulitzer award finalist, revealed in her TED talk how Facebook affected the Brexit vote by enabling the spread of calculated disinformation. Brexit, short for "British exit", refers to the UK's withdrawal from the European Union (EU). Back in June 2016, when the United Kingdom's EU membership referendum was held, 51.9% of voters supported leaving the EU. The withdrawal was originally due to take place on 29 March 2019, but has since been extended to 31 October 2019.

Cadwalladr was asked by the editor of The Observer, the newspaper she was working at during the time, to visit South Wales to investigate why so many voters there had elected to leave the EU. She decided to visit Ebbw Vale, a town at the head of the valley formed by the Ebbw Fawr tributary of the Ebbw River in Wales, because it had one of the highest percentages of 'Leave' votes (62%).

Brexit in South Wales: The reel and the real

After reaching the town, Cadwalladr recalls that she was "taken aback" when she saw how it had evolved over the years. The town was gleaming with new infrastructure, including an entrepreneurship centre, a sports centre, better roads, and more, all funded by the EU. After seeing this development, she felt "a weird sense of unreality" when a young man said his reason for voting to leave the EU was that it had failed to do anything for him. Not only this young man but people all over the town stated the same reason for voting to leave the EU. "They said that they wanted to take back control," adds Cadwalladr.

Another major reason cited for Brexit was immigration. However, Cadwalladr says she barely saw any immigrants and was unable to relate to the immigration problem the town's citizens were talking about. She checked her observation against the actual records and was surprised to find that Ebbw Vale in fact has one of the lowest immigration rates. "So I was just a bit baffled because I couldn't really understand where people were getting their information from," she adds. After her story was published, a reader reached out to her regarding some Facebook posts and ads, which she described as "quite scary stuff about immigration, and especially about Turkey." These posts were misinforming people that Turkey was going to join the EU and that its 76 million people would promptly emigrate to current member states.

"What happens on Facebook, stays on Facebook"

After being told about these ads, Cadwalladr checked Facebook to look for herself, but she could not find even a trace of them, because there is no archive of the ads that are shown to people on Facebook. She said, "This referendum that will have this profound effect on Britain forever and it already had a profound effect. The Japanese car manufacturers that came to Wales and the North-East people who replaced the mining jobs are already going because of Brexit. And, this entire referendum took place in darkness because it took place on Facebook."

This is why the British parliament has called Mark Zuckerberg several times to answer its questions, but each time he has refused. Nobody but Facebook has a definitive answer to questions like what ads were shown to people, how these ads affected them, how much money was spent on them, or what data was analyzed to target people. Cadwalladr adds that she and other journalists observed that multiple crimes happened during the referendum.
In Britain, there is a legal limit on how much you are allowed to spend on election campaigns, to prevent politicians from buying votes. But in the last few days before the Brexit vote, the "biggest electoral fraud in Britain" happened. It was found that the official Vote Leave campaign laundered £750,000 through another campaign entity, which the Electoral Commission ruled illegal. This money was spent, as you can guess, on online disinformation campaigns. She adds, "And you can spend any amount of money on Facebook or on Google or on YouTube ads and nobody will know, because they're black boxes. And this is what happened."

The law was also broken by a group named "Leave.EU". This group was led by Nigel Farage, a British politician whose Brexit Party is doing quite well in the European elections. The campaign was funded by Arron Banks, who has been referred to the National Crime Agency because the Electoral Commission was not able to establish where his money came from. Going further into the details, she adds, "And I'm not even going to go into the lies that Arron Banks has told about his covert relationship with the Russian government. Or the weird timing of Nigel Farage's meetings with Julian Assange and with Trump's buddy, Roger Stone, now indicted, immediately before two massive WikiLeaks dumps, both of which happened to benefit Donald Trump."

While looking into Trump's relationship with Farage, she came across Cambridge Analytica. She tracked down one of its ex-employees, Christopher Wylie, who was brave enough to reveal that the company had worked for Trump and Brexit. It used the data of 87 million Facebook users to understand their individual fears and better target them with Facebook ads. Cadwalladr's investigation involved so many big names that threats were to be expected. The owner of Cambridge Analytica, Robert Mercer, threatened to sue them multiple times. Later, one day ahead of publishing, they received a legal threat from Facebook. But this did not stop them from publishing their findings in the Observer.

A challenge to the "gods of Silicon Valley"

Addressing the leaders of the tech giants, Cadwalladr said, "Facebook, you were on the wrong side of history in that. And you were on the wrong side of history in this -- in refusing to give us the answers that we need. And that is why I am here. To address you directly, the gods of Silicon Valley: Mark Zuckerberg and Sheryl Sandberg and Larry Page and Sergey Brin and Jack Dorsey, and your employees and your investors, too."

These tech giants can't get away with simply saying that they will do better in the future. They first need to give us the long-overdue answers, so that these types of crimes can be stopped from happening again. Comparing the technology they created to a crime scene, she calls for fixing the broken laws. "It's about whether it's actually possible to have a free and fair election ever again. Because as it stands, I don't think it is," she adds.

To watch her full talk, visit TED.com.

Facebook shareholders back a proposal to oust Mark Zuckerberg as the board's chairperson
Facebook AI introduces Aroma, a new code recommendation tool for developers
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs

What security and systems specialists are planning to learn in 2018

Savia Lobo
22 Jun 2018
3 min read
Developers are always looking to learn something new that can add to their skills and experience. Organizations such as Red Hat, Microsoft, Oracle, and many more roll out courses and certifications for developers and other individuals, and 2018 has brought some exciting areas for security and systems experts to explore. Our annual Skill Up survey highlighted a few of the technologies that security and systems specialists are planning to learn this year.

Docker emerged at the top, with professionals wanting to learn more about it and how it can be used to build software with everything packaged in one place. The survey also highlighted specialists' interest in learning Red Hat's OpenStack, Microsoft Azure, and AWS technologies.

OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources within a datacenter, all through a web interface. It gives users a much more modular architecture for building their own cloud platforms, without the restrictions of traditional cloud infrastructure. There is also a Red Hat® Certified System Administrator course covering how to secure private clouds on OpenStack. You can check out our book OpenStack Essentials to get started.

The survey also highlights that systems specialists are interested in learning Microsoft Azure. The primary reason for their choice is that it offers a varied range of options to protect one's applications and data. It offers a seamless experience for developers who want to build, deploy, and maintain applications on the cloud. It also supports compliance efforts and provides cost-effective security for individuals and organizations. AWS likewise offers out-of-the-box features with products such as Amazon EC2, Amazon S3, AWS Lambda, and many more. Read about why AWS is a preferred cloud provider in our article, Why AWS is the preferred cloud platform for developers working with big data?

In response to another question in the same survey, developers expressed their interest in learning security. With so much information hosted on the web, organizations fear that their valuable data might be attacked by hackers and used illegally.

Read also:
The 10 most common types of DoS attacks you need to know
Top 5 penetration testing tools for ethical hackers

Developers are also keen on learning about security automation, which can help them perform vulnerability scans without human error and decrease their time to resolution. Security automation further optimizes the ROI of their security investments. Learn security automation using one of the popular tools, Ansible, with our book, Security Automation with Ansible 2.

So here are some of the technologies that security and systems specialists are planning to learn. This analysis was taken from the Packt Skill Up Survey 2018. Do let us know your thoughts in the comments below. The entire survey report can be found on the Packt store.

IoT Forensics: Security in an always connected world where things talk
Top 5 cybersecurity assessment tools for networking professionals
Pentest tool in focus: Metasploit

AIOps - Trick or Treat?

Bhagyashree R
31 Oct 2018
2 min read
AIOps, as the term suggests, is Artificial Intelligence for IT Operations; the term was first introduced by Gartner last year. AIOps systems are used to enhance and automate a broad range of processes and tasks in IT operations with the help of big data analytics, machine learning, and other AI technologies.

Read also: What is AIOps and why is it going to be important?

In its report, Gartner estimated that by 2020 approximately 50% of enterprises will be actively using AIOps platforms to provide insight into both business execution and IT operations. AIOps has grown fairly quickly since its introduction, with many big companies showing interest in AIOps systems. For instance, last month Atlassian acquired Opsgenie, an incident management platform that, along with helping plan for and resolve IT issues, helps you gain insights to improve your operational efficiency. Companies are adopting AIOps because it eliminates tedious routine tasks, minimizes costly downtime, and helps you gain insights from data that's trapped in silos.

Where can AIOps go wrong?

AIOps alerts us about incidents beforehand, but in some situations it can also go wrong. If an event is unusual, the system will be less likely to predict it, and events that have never occurred before are entirely outside machine learning's ability to predict or analyze. Additionally, it can produce false negatives and false positives. False negatives can happen when the tests are not sensitive enough to detect possible issues, while false positives can be the result of incorrect configuration (a toy illustration of this trade-off appears at the end of this piece). This essentially means that there will always be a need for human operators to review these alerts and warnings.

Is AIOps a trick or a treat?

AIOps is bringing new opportunities for the IT workforce, such as the AIOps Data Scientist, who will focus on solutions to correlate, consolidate, alert on, analyze, and provide awareness of events. Dell defines its Data Scientist role as someone who will "contribute to delivering transformative AIOps solutions on their SaaS platform". With AIOps, the IT workforce won't just disappear; it will evolve. AIOps is definitely a treat because it reduces manual work and provides an intuitive way of handling incident response.

What is AIOps and why is it going to be important?
8 ways Artificial Intelligence can improve DevOps
Tech hype cycles: do they deserve your attention?
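As promised above, here is a toy sketch of the sensitivity trade-off behind false positives and false negatives. It is not how any particular AIOps product works; real platforms use far richer models. This is just a minimal, hypothetical anomaly check on a metric series in Python, with invented sample values and threshold:

```python
import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Flag `latest` if it sits more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return False                    # a flat series gives nothing to compare against
    z_score = abs(latest - mean) / stdev
    return z_score > threshold

cpu_usage = [42, 40, 45, 43, 41, 44, 46, 42]   # hypothetical CPU % samples
print(is_anomalous(cpu_usage, 95))             # True: a genuine spike is caught
print(is_anomalous(cpu_usage, 48))             # False: normal variation is ignored
```

Set the threshold too high and real incidents slip through (false negatives); set it too low, or feed it a badly chosen baseline, and operators drown in spurious alerts (false positives). That is exactly why human review remains part of the loop.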

Quantum A.I. : An intelligent mix of Quantum+A.I.

Sugandha Lahoti
29 Nov 2017
6 min read
“Mixed reality, Artificial intelligence and Quantum computing are the three path-breaking technologies that will shape the world in the coming years.” - Satya Nadella, CEO, Microsoft.

Artificial Intelligence, the biggest scientific and technological revolution of the decade, has the potential to advance human civilization like never before. At the surface level, it seems to be all about automated functioning and intelligent coding. But at the core, its algorithms require huge amounts of data, quality training, and complex models, and processing these computations needs hardware. Present-day digital computers operate on classical Boolean logic. Quantum computing is the next-generation hardware and software technology, based on the laws of quantum mechanics: quantum computers use qubits instead of Boolean bits in order to speed up calculations. The combination of these two path-breaking technologies, AI and quantum computing, is said to be the future of technology. Quantum A.I. is all about applying the fast computation capabilities of quantum computers to artificial intelligence applications.

Understanding Quantum Computing

Before we jump into Quantum A.I., let us first understand quantum computing in detail. In physics terminology, quantum mechanics is the study of nature at the atomic and subatomic level, the opposite of classical physics, which describes nature at the macroscopic level. At the quantum level, particles may take on more than one state at the same time. Quantum computing utilizes this fundamental quantum phenomenon to process information.

A quantum computer stores information in the form of quantum bits, known as qubits, analogous to the binary bits used by digital computers. However, the state of a qubit is not fixed: it can encode information as both 1s and 0s with the help of the quantum mechanical principles of superposition, entanglement, and tunneling. The use of quantum logic enables a quantum computer to solve certain problems at an exponentially faster rate than present-day computers. Physicists and researchers consider quantum computers powerful enough to outperform present processors.

Quantum Computing for Artificial Intelligence

However smart AI algorithms may be, high-performance hardware is essential for them to function. Current GPUs allow algorithms to run at an operable speed, a fraction of what quantum computing promises. The quantum computing approach can give AI algorithms exponential speedups over existing digital computers, easing problems related to machine learning, clustering, classification, and finding constructive patterns in large quantities of data. Quantum learning combines with AI to speed up ML and AI algorithms in order to develop systems that can better interpret, improve on, and understand large data sets.

Specific use cases in the area of Quantum AI:

Random Number Generation

Classical digital computers are only able to generate pseudo-random numbers, and they rely on computational difficulty for encryption, which makes them easier to crack using quantum computers. Certain machine learning algorithms require pure random numbers to generate ideal results, specifically for financial applications. Quantum systems have a mechanism for generating pure random numbers as required by such machine learning applications. QRNG (Quantum Random Number Generator) is a quantum computer by Certes Networks, used for generating high-quality random numbers for secure encryption key generation.
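The article stays at the conceptual level, but as a purely illustrative sketch (my own, not from the article), the following Python snippet classically simulates measuring a qubit prepared in an equal superposition. A real QRNG draws its randomness from quantum hardware; this simulation necessarily falls back on a pseudo-random generator, so it only illustrates the probabilities involved:

```python
import numpy as np

# |psi> = H|0> = (|0> + |1>) / sqrt(2): an equal superposition of 0 and 1
amplitudes = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Born rule: measurement probabilities are the squared magnitudes of the amplitudes
probabilities = np.abs(amplitudes) ** 2        # gives [0.5, 0.5]

# "Measure" the qubit 16 times; each measurement collapses to 0 or 1
rng = np.random.default_rng()
bits = rng.choice([0, 1], size=16, p=probabilities)
print(bits)
```

On quantum hardware each bit would be genuinely unpredictable; here the point is only to show how amplitudes map onto outcome probabilities.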
Quantum-enhanced Reinforcement Learning

Reinforcement learning is an area of artificial intelligence in which agents learn about an environment and take actions to earn rewards. The initial training process and the search for an optimal path are usually time-consuming. With the help of a quantum agent, the training time can be reduced dramatically. Additionally, a quantum agent ends each learning process with a thorough description of the environment, which is seen as an advance over the classical approach, where reinforcement learning schemes are typically model-free.

Quantum-Inspired Neural Nets

Quantum neural networks leverage ideas from quantum theory for a fuzzy-logic-based neural network implementation. Current neural networks for big data applications are generally difficult to train, as they use a feedback loop to update parameters during the training phase. In quantum computers, quantum effects such as interference and entanglement could be used to update parameters quickly in the training phase, easing the entire training process.

Big Data Analytics

Quantum computers have the ability to handle the huge amounts of data being generated, which will continue to grow at an exponential rate. By using quantum computing techniques for big data analytics, useful insights would be within every individual's reach, leading to better portfolio management, optimal routing for navigation, the best possible treatments, personalized medication, and so on. Empowering big data analytics with quantum computing will ease sampling, optimizing, and analyzing large quantities of data, giving businesses and consumers better decision-making ability.

These are a few examples of Quantum AI's capabilities. Quantum computers powered by artificial intelligence are set to have a tremendous impact in the fields of science and engineering.

Ongoing Research and Implementation

Google plans to build a 49-qubit quantum chip by the end of 2017. Microsoft's CEO, during his keynote session at Microsoft Ignite, announced a new programming language designed to work on a quantum simulator as well as a quantum computer. In this race, IBM successfully built and measured a 50-qubit quantum computer. Additionally, Google is collaborating with NASA to release a number of research papers in the Quantum A.I. domain. Rigetti Computing plans to devise a computer that will leverage quantum physics for applications in artificial intelligence and chemistry simulations, offered as a cloud-based service along the lines of Google's and Microsoft's, for remote usage. Volkswagen, the German automaker, plans to collaborate with Google's quantum AI team to develop new-age digital features for cars and an intelligent traffic-management system. They are also contemplating building AI systems for autonomous cars.

Future Scope

In the near future, high-level quantum computers will help in the development of complex AI models with ease. Such quantum-enhanced AI algorithms will influence application development in finance, security, healthcare, molecular science, automobiles, manufacturing, and more. Artificial intelligence married to quantum computing is said to be the key to a brighter, more tech-oriented future, a future that will take intelligent information processing to a whole new altitude.

The most asked questions on Big Data, Privacy and Democracy in last month’s international hearing by Canada Standing Committee

Savia Lobo
16 Jun 2019
16 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizens' rights in the age of big data. This part of the hearing took place on May 28 and included the following witnesses:

- Jim Balsillie, Chair, Centre for International Governance Innovation; retired Chairman and co-CEO of BlackBerry
- Roger McNamee, author of Zucked: Waking up to the Facebook Catastrophe
- Shoshana Zuboff, author of The Age of Surveillance Capitalism
- Maria Ressa, CEO and Executive Editor, Rappler

The witnesses were asked a range of questions on data privacy, data regulation, the future of digital technology under the current data privacy model, and much more.

Why independent regulators are needed to oversee users' data privacy rights

Damian Collins to McNamee: "In your book you said, as far as I can tell Zuck has always believed that users value privacy more than they should. On that basis, do you think we will have to establish in law the standards we want to see enforced in terms of users' data privacy rights, with independent regulators to oversee them? Because the companies will never do that effectively themselves, because they just don't share the concerns we have about how the systems are being abused."

Roger McNamee: "I believe that it's not only correct in terms of their philosophy, as Professor Zuboff points out, but it is also baked into their business model -- this notion -- that any data that exists in the world, claimed or otherwise, they will claim for their own economic use and framing. How you do that privacy, I think, is extremely difficult and in my opinion would be best done by simply banning the behaviors that are used to gather the data."

Zuckerberg is more afraid of privacy regulation

Jo Stevens, Member of Parliament for Cardiff Central, asked McNamee, "What do you think Mark Zuckerberg is more frightened about: privacy regulation or antitrust action?" McNamee replied that Zuckerberg is more afraid of privacy regulation. He added, "To Lucas I would just say the hardest part of this is setting the standard of what the harm is; these guys have hidden behind the fact that it's very hard to quantify many of these things."

In the future, can our homes be without digital tech?

Michel Picard, Member of the Canadian House of Commons, said to Zuboff, "Your question at the beginning is, can the digital future be our home? My reaction to that was, in fact, the question should be: in the future, can our home be without digital?"

Zuboff replied, "That's such an important distinction, because I don't think there's a single one of us in this room that is against the digital per se. This is not about being anti-technology; it's about technology being hijacked by a rogue economic logic that has turned it to its own purposes. We talked about the idea that conflating the digital with surveillance capitalism is a dangerous category error.
What we need is to be able to free the potential of the digital to get back to those values of the democratization of knowledge and individual emancipation and empowerment that it was meant to serve, and that it still can serve."

Picard further asked, "Compared to the Industrial Revolution, where, although we were scared of the new technology, that technology was addressed to people so that they would be the beneficiaries of that progress, now we're not the beneficiaries at all. The second step of this revolution is a situation where people become the producer of the raw material, and, as you write, 'Google's invention reveals new capabilities to infer and deduce the thoughts, feelings, intentions, and interests of individuals and groups with an automated architecture that operates as a one-way mirror irrespective of a person's awareness.' So people are connected to the machine, as in the Matrix."

Zuboff replied, "From the very beginning, the data scientists at Google who were inventing surveillance capitalism celebrated, in their patents and in their published research, the fact that they could hunt and capture behavioral surplus without users ever being aware of these backstage operations. Surveillance was baked into the DNA of this economic logic, essential to its strange form of value creation. So it's with that kind of sobriety and gravitas that it is called surveillance capitalism, because without the surveillance piece it cannot exist."

Can big data simply be pulled out of jurisdictions in the absence of harmonized regulation across democracies?

Peter Kent, Member of Parliament for Thornhill, asked Balsillie, "With regard to what we've seen Google say in response to the new federal elections legislation on advertising -- that it will simply withdraw from accepting election advertising -- is it possible that big data could simply pull out of jurisdictions where regulations, in the absence of harmonized regulation across the democracies, are present?"

To this, Balsillie replied, "Well, that's the best news possible, because, as everyone has attested here, the purpose of surveillance capitalism is to undermine personal autonomy, and yet elections and democracy are centered on the sovereign self exercising their sovereign will. Now, why in the world would you want to undermine the core bedrock of elections, in a non-transparent fashion, to the highest bidder, at the very time your whole citizenry is on the line? And in fact the revenue from it is immaterial to these companies. So one of my recommendations is just banning personalized online ads during elections. We have a lot of things you're not allowed to do for six or eight weeks; just put that into the package. It's simple and straightforward."

McNamee added, "A point that I think is being overlooked here, which is really important, is that if these companies disappeared tomorrow, the services they offer would not disappear from the marketplace. In a matter of weeks, you could replicate Facebook, which would be the harder one. There are substitutes for everything that Google does that are done without surveillance capitalism. Do not in your mind allow any kind of connection between the services you like and the business model of surveillance capitalism.
There is no inherent link, none at all. This is something that has been created by these people because it's wildly more profitable."

Committee lends a helping hand as an 'act of solidarity' to press freedom

Charlie Angus, a member of the Canadian House of Commons, said, "Facebook and YouTube transformed the power of indigenous communities to speak to each other, to start to change the dynamic of how white society spoke about them. So I understand its incredible power for good. I see more and more thought in my region which has self-radicalized people, like the flat earthers, anti-vaxxers, 9/11 truthers, and I've seen its effect in our elections through the manipulation of anti-immigrant, anti-Muslim materials. People are dying in Asia because of these platforms. I want to ask you, as an act of solidarity with our Parliament, with our legislators: are there statements that should be made public through our Parliament to give you support, so that we can maintain a link with you as an important ally on the front line?"

Ressa replied, "Canada has been at the forefront of holding fast to the values of human rights and press freedom. I think the more we speak about this, the more those values are reiterated, especially since someone like President Trump truly likes President Duterte, and vice versa; it's very personal. But sir, when you talked about where people are dying, you've seen this all over Asia: there's Myanmar, there is the drug war here in the Philippines, India and Pakistan -- just instances where this tool for empowerment, just like in your district, is something that we do not want to go away, not shut down. And despite the great threats that we face, that I face and my company faces, Facebook and the social media platforms still give us the ability to organize, to create communities of action that had not been there before."

Do fear, outrage, hate speech, and conspiracy theories sell more than truth?

Edwin Tong, a member of the Singapore parliament, asked McNamee about the point McNamee made during his presentation that "the business model of these platforms really is focused on algorithms that drive content to people who they think want to see this content." Tong continued, "And you also mentioned that fear, outrage, hate speech, conspiracy theories are what sell more, and I assume what you mean to say by that is that it sells more than truth. Would that be right?"

McNamee replied, "There was a study done at MIT in Cambridge, Massachusetts that suggested disinformation spreads 70% further and six times faster than fact, and there are actually good human explanations for why hate speech and conspiracy theories move so rapidly: it's about triggering the fight-or-flight reflex."

Tong further highlighted what Ressa had said about how this disinformation is spread through the use of bots. "I think she said 26 fake accounts translated into 3 million different accounts which spread the information. I think we are facing a situation where disinformation, if not properly checked, goes exponentially viral. People see it all the time, and over time, unchecked, this leads to a serious erosion of trust, a serious undermining of institutions; we can't trust elections, and fundamentally democracy becomes marginalized and eventually demolished."

To this, McNamee said, "I agree with that statement completely. To me, the challenge is in how you manage it. If you think about it, censorship and moderation were never designed to handle things at the scale that these Internet platforms operate at.
So in my view, the better strategy is to do the interdiction upstream: to ask the fundamental question of what the role of platforms like this is in society, and then secondly, what business model is associated with them. My partner Renée DiResta, who's a researcher in this area, talks about the issue of freedom of speech versus freedom of reach. The latter is the amplification mechanism, and what's really going on on these platforms is that the algorithms find what people engage with and amplify that more. Sadly, hate speech, disinformation, and conspiracy theories are, as I said, the catnip -- that's what really gets the algorithms humming and gets people to react. In that context, eliminating that amplification is essential, and the question is how you're going to go about doing that and how you're going to verify that it's been done. In my mind, the simplest way to do that is to prevent the data from getting in there in the first place."

Tong further said, "I think you must go upstream to deal with it fundamentally, in terms of infrastructure, and I think some witnesses also mentioned that we need to look at education, which I totally agree with. But when it does happen, when you have that proliferation of false information, there must be a downstream, end-result kind of remedy as well. And that's where I think your example of Sri Lanka is very pertinent, because it demonstrates that leaving the platforms unchecked, doing nothing about the false information, is wrong. What we do need is to have regulators and governments be clothed with powers and levers to intervene, intervene swiftly, and to disrupt the viral spread of online falsehoods very quickly. Would you agree, as a generalization?"

McNamee said, "I would not normally be in favor of the level of government intervention I have recommended here; I simply don't see alternatives at the moment. In order to do what Shoshana has talked about, in order to do what Jim is talking about, you have to have some leverage, and the only leverage governments have today is their ability to shut these things down -- nothing else works quickly enough."

Sun Xueling, another member of the Parliament of Singapore, asked McNamee, "I would like to make reference to the Christchurch shooting on the 15th of March 2019, after which the New York Times published an article by Kevin Roose." She quoted what Roose wrote in his article: "We do know that the design of Internet platforms can create and reinforce extremist beliefs.
Their recommendation algorithms often steer users towards edgier content, a loop that results in more time spent on the app, and more advertising revenue for the company."

McNamee responded, "Not only do I agree with that, I would like to make a really important point, which is that the design of the Internet itself is part of the problem. I'm of the generation, as Jim is as well, that was around when the Internet was originally conceived and designed, and the notion in those days was that people could be trusted with anonymity. That was a mistake, because bad actors use anonymity to do bad things, and the Internet has essentially enabled disaffected people to find each other in a way they never could before, and to organize in ways they could not in the real world. So when we're looking at Christchurch, we have to recognize that this was a symphonic work: this man went in and organized at least a thousand co-conspirators prior to the act, using the anonymous functions of the Internet to gather them and prepare for this act. It was then, and only then, after all that groundwork had been laid, that the amplification processes of the system went to work. But keep in mind, those same people kept reposting the film; it is still up there today."

How can one eliminate the tax deductibility of specific categories of online ads?

Jens Zimmermann, from Germany, asked Jim Balsillie to explain a bit more deeply "the question of taxation", which he had mentioned in one of his six recommendations. To this, Balsillie said, "I'm talking about those that are buying the ads. The core problem here is that when you're ad-driven -- you've heard extremely expert testimony -- they'll do whatever it takes to get more eyeballs, and the subscription-based model is a much safer place to be because it's not attention-driven. One of the purposes of tax is to manage externalities: if you don't like the externalities that we're grappling with, that have been illuminated here, then disadvantage those, and many of these platforms are moving towards subscription-based models anyway. So just use tax as a vehicle to do that; the good benefit is that it gives you revenue. The second thing it could do is begin a shift towards more domestic services. I think tax has not been a lever that's been used, and it's right there for you."

Thinking beyond behavioral manipulation and data-surveillance-driven business models

Keit Pentus, the representative from Estonia, asked McNamee, "If you were sitting in my chair today, what would be the three steps you would recommend, or you would take, if we leave shutting down the platforms aside for a second?"

McNamee said, "In the United States, or in North America, roughly 70% of all the artificial intelligence professionals are working at Google, Facebook, Microsoft, or Amazon, and to a first approximation they're all working on behavioral manipulation. There are at least a million great applications of artificial intelligence, and behavioral manipulation is not one of them. I would argue that it's like creating time-release anthrax or cloning human babies. It's just a completely inappropriate and morally repugnant idea, and yet that is what these people are doing.
I would simply observe that it is the threat of shutting them down, and the willingness to do it for brief periods of time, that creates the leverage to do what I really want to do, which is to eliminate the business model of behavioral manipulation and data surveillance."

"I don't think this is about putting the toothpaste back into the tube; this is about formulating toothpaste that doesn't poison people. I believe this is directly analogous to what happened with the chemical industry in the '50s. The chemical industry used to pour its waste products -- mercury, chromium, and things like that -- directly into fresh water, and it left mine tailings on the sides of hills. Petrol stations would pour spent oil into sewers, and there were no consequences. So the chemical industry grew like crazy and had incredibly high margins. It was the internet platform industry of its era. And then one day society woke up and realized that those companies should be responsible for the externalities that they were creating. So this is not about stopping progress; this is my world, this is what I do."

"I just think we should stop hurting people. We should stop killing people in Myanmar, we should stop killing people in the Philippines, and we should stop destroying democracy everywhere else. We can do way better than that, and it's all about the business model. I don't want to pretend I have all the solutions; what I know is that the people in this room are part of the solution, and our job is to help you get there. So don't view anything I say as a fixed point of view."

"This is something that we're going to work on together, and you know the three of us are happy to take bullets for all of you, because we recognize it's not easy to be a public servant with these issues out there. But do not forget: you're not going to be asking your constituents to give up the stuff they love. The stuff they love existed before this business model, and it'll exist again after this business model."

To know more, and to listen to the questions asked by other representatives, you can watch the full hearing video, titled "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics", on ParlVU.

Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling?
UK lawmakers to social media: "You're accessories to radicalization, accessories to crimes", hearing on spread of extremist content
Key Takeaways from Sundar Pichai's Congress hearing over user data, political bias, and Project Dragonfly