
Tech Guides


What is the API Economy?

Darrell Pratt
03 Nov 2016
5 min read
If you have pitched the idea of a set of APIs to your boss, you might have run across this question: "Why do we need an API, and what does it have to do with an economy?" The answer is the API economy, but it's more than likely that that answer is going to be met with more questions. So let's take some time to unpack the concept and get through some of the hyperbole surrounding the topic.

An economy (from Greek οίκος, "household", and νέμoμαι, "manage") is an area of the production, distribution, or trade, and consumption of goods and services by different agents in a given geographical location. - Wikipedia

If we take the definition of economy from Wikipedia and the definition of API as an Application Programming Interface, then what we should be striving to create is a platform (as the producer of the API) that will attract a set of agents that will use that platform to create, trade, or distribute goods and services to other agents over the Internet (our geography has expanded). The central tenet of this economy is that the APIs themselves need to provide the right set of goods (data, transactions, and so on) to attract other agents (developers and business partners) that can grow their businesses alongside ours and further expand the economy.

This piece from Gartner explains the API economy very well. This is a great way of summing it up: "The API economy is an enabler for turning a business or organization into a platform."

Let's explore a bit more about APIs and look at a few examples of companies that are doing a good job of running API platforms.

The evolution of the API economy

If you asked someone what an API actually was 10 or more years ago, you might have received puzzled looks. The Application Programming Interface at that time was something that the professional software developer was using to interface with more traditional enterprise software. That evolved into the popularity of the SDK (Software Development Kit) and a better mainstream understanding of what it meant to create integrations or applications on pre-existing platforms. Think of the iOS SDK or Android SDK and how those kits and the distribution channels that Apple and Google created have led to the explosion of the apps marketplace.

Jeff Bezos's mandate that all IT assets at Amazon have an API was a major event in the API economy timeline. Amazon continued to build APIs such as SNS, SQS, Dynamo, and many others. Each of these API components provides a well-defined service that any application can use, and together they have significantly reduced the barrier to entry for new software and service companies. With this foundation set, the list of companies providing deep API platforms has steadily increased.

How exactly does one profit in the API economy?

If we survey a small set of API frameworks, we can see that companies use their APIs in different ways to add value to their underlying set of goods or to create a completely new revenue stream for the company.

Amazon AWS

Amazon AWS is the clearest example of an API as a product unto itself. Amazon makes available a large set of services that provide defined functionality and for which Amazon charges rates based upon usage of CPU and storage (it gets complicated). Each new service they launch addresses a new area of need and works to provide integrations between the various services.

Social APIs

Facebook, Twitter, and others in the social space run API platforms to increase the usage of their properties. Some of the inherent value in Facebook comes from sites and applications far afield from facebook.com, and its API platform enables this. Twitter has had a more complicated relationship with its API users over time, but the API does provide many methods that allow both apps and websites to tap into Twitter content and thus extend Twitter's reach and audience size.

Chat APIs

Slack has created a large economy of applications focused around its chat services and built up a large number of partners and smaller applications that add value to the platform. Slack's API approach is centered on providing a platform for others to integrate with and add content into the Slack data system. This approach is more open than the one taken by Twitter, and the fast adoption has added large sums to Slack's current valuation. Alongside the meteoric rise of Slack, the concept of the bot as an assistant has also taken off. Companies like api.ai are offering services to enable chat services with AI as a service. The service offerings that surround the bot space are growing rapidly and offer a good set of examples as to how a company can monetize its API.

Stripe

Stripe competes in the payments-as-a-service space along with PayPal, Square, and Braintree. Each of these companies offers API platforms that vastly simplify the integration of payments into websites and applications. Anyone who built an e-commerce site before 2000 can and will appreciate the simplicity and power that the API economy brings to the payment industry. The pricing strategy in this space is generally on a per-use basis and is relatively straightforward.

It takes a community to make the API economy work

There are very few companies that will succeed by building an API platform without growing an active community of developers and partners around it. While it is technically easy to create an API given the tooling available, without an active support mechanism and detailed, easily consumable documentation, your developer community may never materialize. Facebook and AWS are great examples to follow here. They both actively engage with their developer communities and deliver rich sets of documentation and use cases for their APIs.


No Patch for Human Stupidity - Three Social Engineering Techniques You Might Not Know About

Sam Wood
12 Aug 2015
5 min read
There's a simple mantra beloved by pentesters and security specialists: "There's no patch for human stupidity!" Whether it's hiding a bunch of Greeks inside a wooden horse to breach the walls of Troy or hiding a worm inside the promise of a sexy picture of Anna Kournikova, the gullibility of our fellow humans has long been one of the most powerful weapons of anyone looking to breach security. In the penetration testing industry, working to exploit that human stupidity and naivety has a name: social engineering.

The idea that hacking involves cracking black ICE and de-encrypting the stand-alone protocol by splicing into the mainframe backdoor - all whilst wearing stylish black and pointless goggles - will always hold a special place in our collective imagination. In reality, though, some of the most successful hackers don't just rely on their impressive tech skills, but on their ability to defraud. We're wise to the suspicious and unsolicited phone call from 'Windows Support' telling us that they've detected a problem on our computer and need remote access to fix it. We've cottoned on that Bob Hackerman is not in fact the password inspector who needs to know our login details to make sure they're secure enough. But hackers are getting smarter. Do you think you'd fall for one of these three surprisingly common social engineering techniques?

1. Rogue Access Points - No such thing as free WiFi

You've finally impressed your boss with your great ideas about the future of Wombat Farming. She thinks you've really got a chance to shine - so she's sent you to Wombat International, the biggest convention of Wombat Farmers in the world, to deliver a presentation and drum up some new investors. It's just an hour before you give the biggest speech of your life and you need to check the notes you've got saved in the cloud. Helpfully, though, the convention provides free WiFi! Happily, you connect to WomBatNet. 'In order to use this WiFi, you'll need to download our app,' the portal page tells you. Well, it's annoying - but you really need to check your notes! Pressed for time, you start the download.

Plot Twist: The app is malware. You've just infected your company computer. The 'free WiFi' is in fact a wireless hotspot set up by a hacker with less-than-noble intentions. You've just fallen victim to a Rogue Access Point attack.

2. The Honeypot - Seduced by Ice Cream

You love ice cream - who doesn't? So you get very excited when a man wearing a billboard turns up in front of your office handing out free samples of Ben and Jerry's. They're all out of Peanut Butter Cup - but it's okay! You've been given a flyer with a QR code that will let you download a Ben and Jerry's app for the chance to win Free Ice Cream for Life! What a great deal! The minute you're back in the office and linked up to your work WiFi, you start the download. You can almost taste that Peanut Butter Cup.

Plot Twist: The app is malware. Like Cold War spies seduced by sexy Russian agents, you've just fallen for the classic honeypot social engineering technique. At least you got a free ice cream out of it, right?

3. Road Apples - Why You Shouldn't Lick Things You Pick Up Off the Street

You spy a USB stick, clearly dropped on the sidewalk. It looks quite new - but you pick it up and pop it in your pocket. Later that day, you settle down to see what's on this thing - maybe you can find out who it belongs to and return it to them; maybe you're just curious for the opportunity to take a sneak peek into a small portion of a stranger's life. You plug the stick into your laptop and open up the first file, called 'Government Secrets'...

Plot Twist: It's not really much of a twist by now, is it? That USB is crawling with malware - and now it's in your computer. Earlier that day, that pesky band of hackers went on a sowing spree, scattering their cheap flash drives all over the streets near your company hoping to net themselves a sucker. Once again, you've fallen victim - this time to the Road Apples attack.

What can you do?

The reason people keep using social engineering attacks is simple - they work. As humans, we're inclined to be innately trusting - and certainly there are more free hotspots, ice cream apps, and lost USB sticks that are genuine and innocent than ones that are insidious schemes of hackers. There may be no patch for human stupidity, but that doesn't mean you need to be careless - keep your wits about you and remember the security rules that you shouldn't break, no matter how innocuous the situation seems. And if you're a pentester or security professional? Keep on social engineering and make your life easy - the chink in almost any organisation's armour is going to be its people.

Find out more about internet security and what we can learn from attacks on the WEP protocol with this article. For more on modern infosec and penetration testing, check out our Pentesting page.


Jakarta EE: Past, Present, and Future

David Heffelfinger
16 Aug 2018
10 min read
You may have heard some talk about a new Java framework called Jakarta EE. In this article we will cover what Jakarta EE actually is, how we got here, and what to expect when it's actually released.

History and Background

In September of 2017, Oracle announced it was donating Java EE to the Eclipse Foundation.

Isn't Eclipse a Java IDE?

Most Java developers are familiar with the hugely popular Eclipse IDE, so for many, when they hear the word "Eclipse", the Eclipse IDE comes to mind. Not everybody knows that the Eclipse IDE is developed by the Eclipse Foundation, an open source foundation similar to the Apache Foundation and the Linux Foundation. In addition to the Eclipse IDE, the Eclipse Foundation develops several other Java tools and APIs such as Eclipse Vert.x, Eclipse Yasson, and EclipseLink.

Java EE was the successor to J2EE, which was a wildly popular set of specifications for implementing enterprise software. In spite of its popularity, many J2EE APIs were cumbersome to use and required lots of boilerplate code. Sun Microsystems, together with the Java community as part of the Java Community Process (JCP), replaced J2EE with Java EE in 2006. Java EE introduced a much nicer, lightweight programming model, making enterprise Java development much easier than what could be accomplished with J2EE. J2EE was so popular that, to this day, it is incorrectly used as a generic term for all server-side Java technologies. Many still refer to Java EE as J2EE, and incorrectly assume Java EE is a bloated, convoluted technology. In short, J2EE was so popular that even Java EE can't shake its predecessor's reputation for being a "heavyweight" technology.

In 2010 Oracle purchased Sun Microsystems and became the steward for Java technology, including Java EE. Java EE 7 was released in 2013, after the Sun Microsystems acquisition by Oracle, simplifying enterprise software development even further and adding additional APIs to meet new demands of enterprise software systems. Work on Java EE 8, the latest version of the Java EE specification, began shortly after Java EE 7 was released. In the beginning everything seemed to be going well; however, in early 2016, the Java EE community started noticing a lack of progress in Java EE 8, particularly in the Java Specification Requests (JSRs) led by Oracle. The perceived lack of Java EE 8 progress became a big concern for many in the Java EE community. Since the specifications were owned by Oracle, there was no legal way for any other entity to continue making progress on Java EE 8.

In response to the perceived lack of progress, several Java EE vendors, including big names such as IBM and Red Hat, got together and started the MicroProfile initiative, which aimed to introduce new APIs to Java EE, with a focus on optimizing Java EE for developing systems based on a microservices architecture. The idea wasn't to compete with Java EE per se, but to develop new specifications in the hope that they would eventually be added to Java EE proper. In addition to big vendors reacting to the perceived lack of Java EE progress, a grassroots organization called the Java EE Guardians was formed, led largely by prominent Java EE advocate Reza Rahman. The Java EE Guardians provided a way for Java EE developers and advocates to have a united, collective voice which could urge Oracle to either keep working on Java EE 8, or to allow the community to continue the work themselves.
Nobody can say for sure how much influence the MicroProfile initiative and the Java EE Guardians had, but many speculate that Java EE would never have been donated to the Eclipse Foundation had it not been for these two initiatives.

One Standard, Multiple Implementations

It is worth mentioning that Java EE is not a framework per se, but a set of specifications for various APIs. Some examples of Java EE specifications include the Java API for RESTful Web Services (JAX-RS), Contexts and Dependency Injection (CDI), and the Java Persistence API (JPA). There are several implementations of Java EE, commonly known as application servers or runtimes; examples include WebLogic, JBoss, WebSphere, Apache TomEE, GlassFish, and Payara. Since all of these implement the Java EE specifications, code written against one of these servers can easily be migrated to another one, with minimal or no modifications. Coding against the Java EE standard provides protection against vendor lock-in. Once Jakarta EE is completely migrated to the Eclipse Foundation, it will continue being a specification with multiple implementations, keeping one of the biggest benefits of Java EE.

To become Java EE certified, application server vendors had to pay Oracle a fee to obtain a Technology Compatibility Kit (TCK), which is a set of tests vendors can use to make sure their products comply 100% with the Java EE specification. The fact that the TCK is closed source and not publicly available has been a source of controversy in the Java EE community. It is expected that the TCK will be made publicly available once the transition to the Eclipse Foundation is complete.

From Java EE to Jakarta EE

Once the announcement of the donation was made, it became clear that for legal reasons Java EE would have to be renamed, as Oracle owns the "Java" trademark. The Eclipse Foundation requested input from the community, and hundreds of suggestions were submitted. The Foundation made it clear that naming such a big project is no easy task; there are several constraints that may not be obvious to the casual observer, such as: the name must not be trademarked in any country, it must be catchy, and it must not spell profanity in any language. Out of hundreds of suggestions, the Eclipse Foundation narrowed them down to two choices, "Enterprise Profile" and "Jakarta EE", and had the community vote for their favorite. "Jakarta EE" won by a fairly large margin. It is worth mentioning that the name "Jakarta" carries a bit of history in the Java world, as it used to be an umbrella project under the Apache Foundation. Several very popular Java tools and libraries used to fall under the Jakarta umbrella, such as the Ant build tool, the Struts MVC framework, and many others.

Where we are in the transition

Ever since the announcement, the Eclipse Foundation along with the Java EE community at large has been furiously working on transitioning Java EE to the Eclipse Foundation. Transitioning such a huge and far-reaching project to an open source foundation is a huge undertaking, and as such it takes some time. Some of the progress so far includes relicensing all Oracle-led Java EE technologies, including reference implementations (RI), Technology Compatibility Kits (TCK), and project documentation. 39 projects have been created under the Jakarta EE umbrella, corresponding to the 39 Java EE specifications being donated to the Eclipse Foundation.
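To make the earlier point about coding against a specification (rather than a vendor) concrete, here is a minimal sketch of a JAX-RS resource. It is a hypothetical example, not taken from the article: the class name, path, and message are invented. Because it uses only the standard javax.ws.rs annotations, the same class should deploy on any compliant application server such as GlassFish, Payara, or WebLogic; a complete application would also register a javax.ws.rs.core.Application subclass, omitted here for brevity.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A JAX-RS resource written purely against the Java EE specification,
// with no vendor-specific imports, so it is not tied to any one server.
@Path("greetings")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String greet() {
        // Illustrative response only; real resources would typically return JSON or XML.
        return "Hello from a portable Java EE endpoint";
    }
}
```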
Reference Implementations

Each Java EE specification must include a reference implementation, which proves that the requirements of the specification can be met by actual code. For example, the reference implementation for JSF is called Mojarra, the CDI reference implementation is called Weld, and the JPA reference implementation is called EclipseLink. Similarly, all other Java EE specifications have a corresponding reference implementation.

These 39 projects are in different stages of completion. A small minority are still in the proposal stage; some have provisioned committers and other resources, but code and other artifacts haven't been transitioned yet; some have had the initial contribution (code and related content) transitioned already; the majority of the projects have had the initial contribution committed to the Eclipse Foundation's Git repository; and a few have had their first Release Review, which is a formal announcement of the project's release to the Eclipse Foundation and a request for feedback. Current status for all 39 projects can be found at https://www.eclipse.org/ee4j/status.php.

Additionally, the Jakarta EE working group was established, which includes Java EE implementation vendors, companies that either rely on Java EE or provide products or services complementary to Java EE, as well as individuals interested in advancing Jakarta EE. It is worth noting that Pivotal, the company behind the popular Spring Framework, has joined the Jakarta EE Working Group. This is worth pointing out as the Spring Framework and Java EE have traditionally been perceived as competing technologies. With Pivotal joining the Jakarta EE Working Group, some are speculating that "the feud may soon be over", with Jakarta EE and Spring cooperating with each other instead of competing.

At the time of writing, it has been almost a year since the announcement that Java EE is moving to the Eclipse Foundation, and some may be wondering what is holding up the process. Transitioning a project of such a massive scale as Java EE involves several tasks that may not be obvious to the casual observer, both tasks related to legal compliance as well as technical tasks. For example, each individual source code file needs to be inspected to make sure it has the correct license header. Project dependencies for each API need to be analyzed. For legal reasons, some of the Java EE technologies need to be renamed, and appropriate names need to be found. Additionally, build environments need to be created for each project under the Eclipse Foundation infrastructure. In short, there is more work than meets the eye.

What to expect when the transition is complete

The first release of Jakarta EE will be 100% compatible with Java EE. Existing Java EE applications, application servers, and runtimes will also be Jakarta EE compliant. Sometime after the announcement, the Eclipse Foundation surveyed the Java EE community as to the direction Jakarta EE should take under the Foundation's guidance. The community overwhelmingly stated that they want better support for cloud deployment, as well as better support for microservices. As such, expect Jakarta EE to evolve to better support these technologies. Representatives from the Eclipse Foundation have stated that the release cadence for Jakarta EE will be more frequent than it was for Java EE under Oracle. In summary, the first version of Jakarta EE will be an open version of Java EE 8; after that we can expect better support for cloud and microservices development, as well as a faster release cadence.

Help Create the Future of Jakarta EE

Anyone, from large corporations to individual contributors, can contribute to Jakarta EE. I would like to invite interested readers to contribute! Here are a few ways to do so:

- Subscribe to the Jakarta EE community mailing list: [email protected]
- Contribute to EE4J projects: https://github.com/eclipse-ee4j

You can also keep up to date with the latest Jakarta EE happenings by following Jakarta EE on Twitter at @JakartaEE or by visiting the Jakarta EE web site at https://jakarta.ee

About the Author

David R. Heffelfinger is an independent consultant based in the Washington D.C. area. He is a Java Champion, an Apache NetBeans committer, and a former member of the JavaOne content committee. He has written several books on Java EE, application servers, NetBeans, and JasperReports. David is a frequent speaker at software development conferences such as JavaOne, Oracle Code, and NetBeans Day. You can follow him on Twitter at @ensode.


What 6 Months with an Open Source 3D Printer Taught Me

Michael Ang
26 Sep 2014
7 min read
3D printing is certainly a hot topic today, and having your own printer at home is becoming increasingly popular. There are a lot of options to choose from, and in this post I'll talk about why I chose to go with an open source 3D printer instead of a proprietary pre-built one, and what my experience with the printer has been. By sharing my 6 months of experience I hope to help you decide which kind of printer is best for you.

[Image: My Prusa i3 Berlin 3D printer after 6 months]

Back in 2006 I had the chance to work with a 3D printer, when the thought of having a 3D printer at home was mostly a fantasy. The printer in question was made by Stratasys, at the Eyebeam Art+Tech center in New York City. That printer cost upwards of $30,000—not exactly something to have at your house! The idea of doing something wrong with the printer and having to call a technician in to fix it was also a little intimidating. (My website has some of my early experiments with 3D printing.)

Flash forward to today and there are literally dozens (or probably hundreds) of 3D printer designs available on the market. The designs range from high-end printers that can print plastic with embedded carbon fiber, to popular designs from MakerBot and DIY kits on eBay. One of the first low-cost 3D printers was the RepRap. The goal of the RepRap project is to create a self-replicating machine, where the parts for the machine can be fabricated by the machine itself. In practice this means that many of the parts of a RepRap-style 3D printer are actually printed on a RepRap printer. Most people who build RepRap printers start with a kit and then assemble the printer themselves. If the idea of a self-replicating machine sounds interesting, then RepRap may be for you. RepRap is now more of a philosophy and community than any specific printer. Once you assemble your printer you can make changes and upgrades to the machine by printing yourself new parts.

There are certainly some challenges to building your own printer, though, so let's look at some of the advantages and disadvantages of going with an open source printer (building from a kit) versus a pre-packaged printer.

Advantages of a pre-assembled commercial printer:

- Should print right out of the box
- Less tinkering needed to get good prints
- Each printer of a particular model is the same, making it easier to get support

Advantages of an open source (RepRap-style) kit:

- Typically cheaper than pre-built
- Learn more about how the printer works
- Easier to make changes to the machine, and complete plans are available
- Easier to experiment with, for example, different printing materials

Disadvantages of pre-assembled:

- Making changes may void your warranty
- Typically more expensive
- May be locked into specific software or filament

Disadvantages of open source:

- Can take a lot of work to get good prints
- Potentially lots of decisions to make, not pre-packaged
- May spend as much time on the machine as actually printing

Technical differences aside, the idea of being part of an open source community based on the freedom to share knowledge and designs was really appealing. With that in mind I had a look at different open source 3D printer designs and capabilities. Since the RepRap designs are open source, anyone can modify them and create a "new" printer. In the end I settled on a variation of the Prusa i3 RepRap printer that is designed in Berlin, where I live.

The process of getting a RepRap printer working can be challenging, because there's so much to learn at first. The Prusa i3 Berlin can be ordered as a kit with everything needed to build the printer, and with a workshop where you build the printer with the machine's designers over the course of a weekend. Two days to build a working 3D printer from a pile of parts? Yes, it can be done!

[Image: Most of the parts in the printer kit]

Building the printer at the workshop saved an incredible amount of time. Questions like "does this look tight enough?" and "how does this part fit in here?" were answered on the spot. There are very active forums for RepRap printers with lots of people willing to help diagnose problems, but a few questions with even a one-day turnaround time quickly add up. By the end of the two days my printer was fully assembled and actually printed out a little plastic robot! This was pretty satisfying knowing that the printer had started the weekend as a bundle of parts.

[Image: Quite a lot of wires]
[Image: Assembling the plastic extruders]

Thus began my 6-month (so far) adventure in 3D printing. It has been an awesome and at times frustrating journey. I mainly bought my printer to create connectors for my Polygon Construction Kit (Polycon). I'm printing connectors that assemble with some rods to make structures much larger than could be printed in one piece. My printer has been working well for that, but the main issue has been reliability and the need for continual tweaking. Instead of just "hitting print" there is a constant struggle to keep everything lined up and printing smoothly. Printing on my RepRap is a lot more like baking a soufflé than ordering a burger.

[Image: Completed printer in my studio]

Some highlights of the journey so far:

- Printing out parts strong enough to assemble some of my Polycon sculptures and show them at an art show in Berlin
- Designing my own accessories for the printer and having them downloaded more than 1,000 times on Thingiverse (not bad for some rather specialized tools)
- Printing upgrades for the printer, based on the continually updated source files
- Being able to get replacement parts at the hardware store, when one of the long threaded rods in the printer wore out

[Image: Sculpture with 3D printed connectors. Image courtesy of Lehrter Siebzehn.]

And the lowlights:

- Never quite knowing if a print is going to complete successfully (though this can be a problem with many printers)
- Having enough trouble getting my first extruder working reliably for long prints that I haven't had time to get dual-extrusion prints working

[Image: Accessory I designed for calibrating the printer, which I then shared with others]

As time goes on and I keep working on the printer, it's slowly getting more reliable, and I'm able to do more complicated prints without constant intervention. The learning process has been valuable too - I'm now able to look at basically every part of the machine and understand exactly what it's supposed to do. Once you really understand how a 3D printer works, you start to wonder what kind of upgrades are possible, or what other kinds of machine you could design.

[Image: Printed upgrade parts]

A pre-packaged printer makes a lot of sense if you're mostly interested in printing things. The learning process for building your own printer can either be interesting or a frustrating obstacle, depending on your point of view. When you look at a print from your RepRap printer, it's incredible to consider that it is all built off the contributions and sharing of knowledge of a large community. If you're not just interested in making things, but making things that make things, then a RepRap printer might be for you!

[Image: Upgraded printer with polygon sculpture]

About the author

Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. His latest project is the Polygon Construction Kit, a toolkit for bridging the virtual and physical worlds by translating simple 3D models into physical structures.


Is your web design responsive?

Savia Lobo
15 May 2018
12 min read
Today we will explain what responsive web design is, how it benefits websites, and what the core concepts are. The topics covered are:

- Responsive design philosophy, principles, grid & columns
- Smooth user experience & user-friendly website
- Understanding responsive grid systems
- Adaptive design and methodologies
- Elegant mobile experience

This article is an excerpt taken from the book Responsive Web Design by Example, written by Frahaan Hussain.

Responsive design philosophy

There was a time when most web surfing occurred on a computer with a standard-sized/ratio monitor. It was more than adequate to create websites with a non-responsive layout in mind. But over the last 10 years, there has been an exponential boom of new devices in a plethora of form factors - mobile phones, tablets, watches - and a wide range of screen sizes. This growth has created a huge fragmentation problem, so creating websites with a single layout is no longer acceptable. A website with a lot of content that works great on desktops doesn't work very well on a mobile device that has a significantly smaller screen. Such content is unreadable, forcing the user to zoom in and out constantly. One might try making everything bigger so it looks good on mobiles, but then on a desktop the content doesn't take advantage of the immense real estate offered by bigger screens.

Enter Responsive Web Design. Responsive Web Design is a method that allows the design to respond based on the user's input and environment - the size of the screen, the device, and its orientation. This philosophy blends elements of flexible grids and layouts, images, and media queries in CSS. It alleviates the fragmentation problem by allowing developers and designers to create websites that adapt to all screen sizes/ratios. There are various approaches that different websites take, but the core concept is that the same website's layout can be adapted to suit different devices. On the desktop, there is a lot more real estate, so the content is bigger and more can fit on a single row. But, as the screen size shrinks and its orientation changes, the content readjusts itself to accommodate this change. This provides a seamless and elegant experience for the user on all form factors. If you look closely at modern websites, you will see a grid that the content conforms to. The grid is used to lay out the content of a website, and both of these elements go hand in hand. This grid system is one of the most important aspects of how Responsive Web Design works.

Responsive design principles

This section will cover the main principles behind designing responsive websites. Though these aren't set in stone and will change over time, they will provide a great foundation.

Should you go Responsive or Adaptive?

Responsive designs constantly change website layouts depending on their size and orientation. A single pixel resize will tend to have an effect on the layout, usually not by a lot. Adaptive schemes, on the other hand, have preset layouts, which are loaded depending on the size of the screen. This technique doesn't look as fluid and seamless as responsive designs do. Modern-day Responsive Web Design usually incorporates both methods: set layouts will be provided, but any changes made to a website's size will have an impact in real time through responsive scaling.

What are Breakpoints?
Breakpoints are points at which a website's layout is no longer fit for the screen size, device, and/or orientation, and where we are able to use different and unique layouts to accommodate the various changes that can occur to screens. When these points occur, the current layout is switched to a more suitable layout. For example, a mobile device in portrait mode will not effectively be able to use a layout that is designed for a widescreen desktop display; this just isn't possible. However, by using breakpoints a single website can serve many screen variations whilst making the website feel like it was designed with the user's current screen, device, and/or orientation in mind. This does not occur by reloading the web page; content moves around dynamically and is scaled accordingly. Without breakpoints, the website would appear with the same layout on all form factors and browser sizes, which, using the example we just mentioned, would not be fit for purpose. These breakpoints usually occur when the width of the browser changes and falls into the category of another, more appropriate layout.

There are a few fundamentals that should be mentioned regarding the philosophy of Responsive Web Design.

Why is screen resolution important for responsive design?

Screen resolution is immensely influential in responsive design. The first thought for many designers is to design based on the resolution of the screen. But modern-day phones have resolutions of 1080p and beyond, which for the most part is still the de facto standard for desktops, with some exceptions in 4K and ultrawide displays. This would prevent us from fully targeting all devices, as there is so much crossover in resolutions between devices. That is the reason why pixel density is very important when deciding which layout should be used: a 5-inch 1080p mobile display crams the pixels in a lot closer than a 32-inch 1080p display. They both have the same resolution, but the mobile device has a significantly higher pixel density, which helps distinguish between the device types. The viewport should also be taken into consideration, which is the user's visible area of a web page. This allows us to rearrange content based on how much content should be displayed.

What are media queries?

Media queries are facets within CSS that allow us to detect changes in a screen, such as its size, and even the device type. They are used to specify code for a specific layout, such as a mobile or desktop display. You can think of media queries as conditional statements: just as an "if" statement only runs a piece of code if the condition is true, a media query applies its styles only when its condition is met. It's far more limited than a general-purpose conditional, but so are many things in CSS. I'm positive you will have used a website and noticed that it looks different on a computer compared to a mobile phone, or even a tablet. This is thanks to the use of breakpoints, which are very similar to conditional statements in other languages such as C++. When a certain condition is met, such as a screen size range or a change in form factor, different CSS is applied to provide a better-suited layout.
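As a small, hedged illustration of this idea, the snippet below sketches one desktop-first rule plus a single breakpoint; the 768px value, the .sidebar class, and the styles are invented for this example rather than taken from the book.

```css
/* Default (desktop) layout: the sidebar sits beside the main content. */
.sidebar {
  width: 25%;
  float: left;
}

/* Breakpoint: once the viewport is 768px wide or narrower,
   the same element stacks full-width for smaller screens. */
@media (max-width: 768px) {
  .sidebar {
    width: 100%;
    float: none;
  }
}
```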
What are Relative units?

Relative units take into account the other content and, more specifically, the content's size, whereas static units do not and have an absolute value regardless of the amount of content. If relative units are not used then static units would be used, which essentially lays out the content using fixed units such as pixels. With this method, a box with a width of 400px on an 800px screen would take half the width. But, if the screen size changes to 300px, the box will now be partially off-screen. Again, this would not provide the reader with that seamless experience which we aim to provide. Relative units simply display your content relative to everything else or, more specifically, the screen size or viewport. This allows us, as creators, to display content consistently. Take the previous example: if we would like to display the box at half the screen width, on an 800px screen the box would be 400px wide, and on a 600px screen the box would be 300px wide. Using percentages we can set the width to 50%, which forces the box to always be half the width of its parent container, making its size relative to the rest of the page's content.

Why are maximum and minimum values important?

Scaling our content is greatly dependent on the screen size. But with screens such as ultrawide monitors, scaling the content may make it too big, or, on mobile devices, too small. Using maximum and minimum values, we are able to set upper and lower limits, providing us with readable and clear results.

What are Nested objects?

If we displayed every object individually, we would have to make them all adjust accordingly, but nesting allows us to wrap elements using containers. Nested objects are like a paragraph tag, as they contain text, and any changes made to the paragraph tag, such as its position or color, also affect its contents. Objects nested within each other are affected by any change made to their parent containers. An object can be anything from text and images to HTML tags/elements. Consider an example with four elements - a div, paragraph, span, and image tag - where the paragraph, span, and image tags are nested within the div tag. If the div tag's maximum width and background color were changed, this would affect all its child objects/tags. But if we were to make a change to the paragraph tag, such as changing its text color, this would not affect any other sibling tags or its parent tag; it would only have an effect on its own contents/objects. So, for example, if a container is moved or scaled, the content within the container is also updated. This is where pixels come in useful. You may not always want a container to be displayed 10% from the right as, on mobile devices, 10% equates to a lot of real estate potentially being wasted; you could specify 50px instead, for example.

Should you go Mobile first or desktop first?

You can design a website from a small screen such as a phone and scale it up, or go the other way round and design it with a large screen in mind. There is actually no right or wrong answer. Depending on the intended target audience and the website's purpose, this will become clear to you. Usually, considering both angles at the same time is the best route to go down. Most responsive frameworks on the market have been designed with a mobile-first philosophy, but that doesn't mean you cannot use them for a desktop-first design; it is on you as the designer to decide how content should be displayed.
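Pulling the last few ideas together (relative units, maximum and minimum limits, and nesting), here is a small hedged sketch of the kind of structure described above. The class name, values, and file name are invented for illustration and are not taken from the book.

```html
<!-- Hypothetical markup: a parent container with nested child elements. -->
<div class="content-box">
  <p>Some text, with a <span>highlighted</span> phrase.</p>
  <img src="example.jpg" alt="Example image">
</div>

<!-- The CSS below would normally live in a stylesheet. -->
<style>
  /* A relative width plus lower and upper limits on the parent container. */
  .content-box {
    width: 50%;        /* relative unit: always half of the parent/viewport width */
    min-width: 200px;  /* lower limit keeps the content readable on phones */
    max-width: 600px;  /* upper limit stops it growing huge on ultrawide screens */
    margin: 0 auto;    /* keep the box centred as it resizes */
  }

  /* Because the image is nested inside .content-box, it is constrained by
     its parent: it scales with the container rather than the whole screen. */
  .content-box img {
    width: 100%;
  }
</style>
```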
Should you use Bitmaps or vectors for your images?

Bitmaps are great for images with a lot of detail, such as backgrounds and usually logos. Common bitmap formats include .png and .jpg. But these images can be large in file size and require more bandwidth and time to load. On desktop devices this isn't too much of a problem, but on mobile devices that are heavily reliant on cellular services that don't always provide unlimited data, this can be problematic. Also, when scaling bitmaps, there is a loss in quality, which results in jagged and blurry images. Vectors, on the other hand, are small in size and don't lose quality when scaling. I know you'll be tempted to scream, "Hail vectors!" at this book, but they do have their drawbacks. They are only useful for simple content such as icons, and some older browsers do not fully support vectors. Again, there is no "right choice"; depending on the content to be displayed, bitmaps or vectors should be used.

Understanding Responsive grids and columns

The grid system is one of the universal concepts of Responsive Web Design, regardless of the framework a website is built upon. To put it simply, websites are split into rows and columns, and if an object/element occupies half the number of columns, it will always occupy that many regardless of the screen's size. So an element that occupies 3 of the 12 columns will occupy 25% of the width of its parent container, hence providing a responsive design. This is great for small variations in screen sizes, but when a website is viewed on platforms varying from desktops to mobiles, breakpoints are introduced, as covered previously.

Though there is no fixed number of columns that a responsive website should have, 12 is a common number used by some of the most popular and widespread frameworks. A framework in this context is anything built on top of the built-in web features. JavaScript is a web feature, but jQuery is a framework built on top of it to allow easier manipulation of the website using prewritten libraries/code. Though a framework isn't absolutely necessary, neither is using an off-the-shelf web browser; you could create your own, but it would be an immense waste of time, and the case for using a responsive framework is essentially the same.

Rows allow us as developers to group content together. Though there will be a fixed number of columns, not all columns have to be filled before moving to the next row; a new row can also be started explicitly. This may be different to how you have developed websites in the past, but if there is anything you are unsure about, don't worry, as things will become clearer when we start working on projects in future chapters.

To summarize, we covered the responsive design philosophy and principles that are essential to creating an intuitive user experience. If you have enjoyed this excerpt, check out Responsive Web Design by Example to learn how to build engaging responsive websites.

Further reading:

- What UX designers can teach Machine Learning Engineers? To start with: Model Interpretability
- 5 things to consider when developing an eCommerce website
- Responsive Web Design with WordPress


How Rolls Royce is applying AI and robotics for smart engine maintenance

Sugandha Lahoti
20 Jul 2018
5 min read
Rolls Royce has been working in the civil aviation domain for quite some time now to build what it calls 'intelligent engines'. The IntelligentEngine vision was first announced at the Singapore Airshow in February 2018. The idea was built around how robotics could be used to revolutionise the future of engine maintenance. Rolls Royce aims to build engines which are:

- Connected, using cloud-based nodes and IoT devices, with other engines of the fleet, as well as with the customers and operators.
- Contextually aware of their operations, constraints, and customers, with modern data analysis and big data mining techniques.
- Comprehending of their own experiences and those of other engines in the fleet, using state-of-the-art machine learning and recommendation algorithms.

The company has been demonstrating steady progress and showing off its rapidly developing digital capabilities.

Using tiny SWARM robots for engine maintenance

Their latest inventions are tiny, roach-sized 'SWARM' robots capable of crawling inside airplane engines and fixing them. They look like they've just crawled straight out of a Transformers movie. This small robot, almost 10mm in size, can perform a visual inspection of hard-to-reach airplane engine parts. The devices will be mounted with tiny cameras providing a live video feed to allow engineers to see what's going on inside an engine without having to take it apart. These swarm robots will be deposited in the engine via another invention, the 'snake' robots. Officially called FLARE, these snake robots are flexible enough to travel through an engine, like an endoscope.

Another group of robots, the INSPECT robots, is a network of periscopes permanently embedded within the engine. These bots can inspect engines using periscope cameras to spot and report any maintenance requirements. Current prototypes of these bots are much larger than the desired size and not quite ready for intricate repairs. They may be production-ready in about two years.

Reducing flight delays with data analysis

R2 Data Labs (Rolls Royce's data science department) offers technical insight capabilities to the company's Airline Support Teams (ASTs). ASTs generally assess incident reports submitted after disruption events or after maintenance is undertaken. The Technical Insight platform helps ASTs easily capture, categorize, and collate report data in a single place. This platform builds a bank of high-quality data (almost 10 times the size of the database ASTs had access to previously), and then analyzes it to identify trends and common issues for more insightful analytics. The Technical Insight platform has so far shown positive results and has been critical to achieving the company's IntelligentEngine vision. According to their blog, it was able to reduce delays and cancellations in a particular operator's 757 fleet by 30%, worth £1.5m per year.

The social network for engines

In May 2018, the company launched an engine network app. This app was designed to bring all of the engine data under a single hood, much like how Facebook brings all your friends onto a single platform. In this app, all the crucial information regarding all the engines in a fleet is available in a single place. Much like Facebook, each engine has a 'profile', which shows data on how it's been operated, the aircraft it has been paired with, the parts it contains, and how much service life is left in each component. It also has a 'Timeline', which shows the complete story of the engine's operational history. In fact, you also have a 'newsfeed' to display the most important insights from across the fleet.

There is also an in-built recommendation algorithm which suggests future maintenance work for individual engines, based on what it learns from other similar engines in the fleet. As Juan Carlos Cabrejas, Technical Product Manager, R2 Data Labs, writes, "This capability is essential to our IntelligentEngine vision, as it underpins our ability to build a frictionless data ecosystem across our fleets."

Transforming Engine Health Management

Rolls-Royce is taking Engine Health Management (EHM) to a new level of connectivity. Its latest EHM system can measure thousands of parameters and monitor entirely new parts of the engine. Interestingly, the EHM has a 'talk back' feature: an operational center can ask the system to focus on one particular part or parameter of the engine, and the system listens and responds with hundreds of hours of information specifically tailored to that request. Axel Voege, Rolls-Royce's Head of Digital Operations, Germany, says, "By getting that greater level of detail, instantly, our engineering teams can work out a solution much more quickly." This new system will go into service next year, making it their most IntelligentEngine yet.

As IntelligentEngine makes rapid progress, the company sees itself designing, testing, and managing engines entirely through their digital twin in the near future. You can read more about the IntelligentEngine vision and other stories, and discover new products and updates, at the Rolls Royce site.

Related reading:

- Unity announces a new automotive division and two-day Unity AutoTech Summit
- Apollo 11 source code: A small step for a woman, and a huge leap for 'software engineering'

What is AIOps and why is it going to be important?

Aaron Lazar
19 Apr 2018
4 min read
Woah, woah, woah! Wait a minute! First there was that game SpecOps that I usually sucked at, then there came ITOps and DevOps that took the world by storm, and now there's another something-Ops?? Well, believe it or not, there is, and they're calling it AIOps.

What does AIOps stand for?

AIOps basically means Artificial Intelligence for IT Operations. It means IT operations are enhanced by using analytics and machine learning to analyze the data that's collected from various IT operations tools and devices. This helps in spotting and reacting to issues in real time. Coined by Gartner, the term has grown in popularity over the past year. Gartner believes that AIOps will be a major transformation for ITOps professionals, mainly because traditional IT operations cannot cope with the modern digital transformation.

Why is AIOps important?

With the massive and rapid shift towards cloud adoption, automation, and continuous improvement, AIOps is here to take care of the new entrants into the digital ecosystem: machine agents, artificial intelligence, IoT devices, and so on. These new entrants are impossible to service and maintain by humans alone, and with billions of devices connected together, the only way forward is to employ algorithms that tackle known problems. Some of the solutions it provides are maintaining high availability and monitoring performance, event correlation and analysis, automation, and IT service management.

How does AIOps work?

As depicted in Gartner's diagram, there are two primary components to AIOps:

- Big Data
- Machine Learning

Data is gathered from the enterprise. You then implement a comprehensive analytics and machine learning strategy alongside the combined IT data (monitoring data + job logs + tickets + incident logs). The processed data yields continuous insights, continuous improvements, and fixes. AIOps bridges three different IT disciplines to accomplish its goals:

- Service management
- Performance management
- Automation

To put it simply, it is a strategic focus. It argues for a new approach in a world where big data and machine learning have changed everything.

How to move from ITOps to AIOps

Machine Learning

Most of AIOps will involve supervised learning, and professionals will need a good understanding of the underlying algorithms. Now don't get me wrong, they don't need to be full-blown data scientists to build the system, but they do need sufficient knowledge to be able to train the system to pick up anomalies. Auditing these systems to ensure they're performing the tasks as per the initial vision is necessary, and this will go hand in hand with scripting them.

Understanding modern application technologies

With the rise of Agile software development and other modern methodologies, AIOps professionals are expected to know all about microservices, APIs, CI/CD, containers, etc. With the giant leaps that cloud development is taking, they are expected to gain visibility into cloud deployments, with an emphasis on cost and performance.

Security

Security is critical. For example, it's important for personnel to understand how to engage a denial-of-service attack, or maybe a ransomware attack like the ones we've seen in the recent past. Training machines to detect or predict such events is pertinent to AIOps.

The key tools in AIOps

There is a wide variety of AIOps platforms available in the market that bring AI and intelligence to IT operations. One of the most noteworthy ones is Splunk, which has recently incorporated AI for intelligence-driven operations. Another one is the Moogsoft AIOps platform, which is quite similar to Splunk. BMC has also entered the fray, launching TrueSight 11, their AIOps platform that promises to address use cases to improve performance and capacity management, the service desk, and application development disciplines. Gartner has a handy list of top platforms; if you're planning the transition from ITOps, do check out the list. Companies like Frankfurt Cargo Services and Revtrak have already added the AI to their Ops.

So, are you going to make the transition? According to Gartner, 40% of large enterprises will have made the transition to AIOps by 2022. If you're one of them, I recommend you do it for the right reasons, but don't do it overnight. The transition needs to be gradual and well planned. The first thing you need to do is get your enterprise data together. If you don't have sufficient data that's worthy of analysis, AIOps isn't going to help you much.

Read more: Bridging the gap between data science and DevOps with DataOps.


Things to remember before building your first game

Raka Mahesa
07 Aug 2017
6 min read
I was 11 years old when I decided I wanted to make games. And no, it wasn't because there was a wonderful game that inspired me to make games; the reason was something more childish. You see, the Sony PlayStation 2 was released just a year earlier and a lot of games were being made for the new console. The 11-year-old me, who only had a PlayStation 1, got annoyed because games for the PS1 weren't being developed anymore. So, out of nowhere, I decided to just make those games myself. Two years after that, I finally built my first game, and after a couple more years, I actually started developing games for a living.

It was childish thinking, but it actually gave me a goal very early in life and helped me choose a path to follow. And while I think my life turned out quite okay, there are still things that I wish the younger me had known back then - things that would have helped me a lot back when I was building my first game. And even though I can't go back and tell those things to myself, I can tell them to you. Hopefully, they will help you in your quest to build your first game.

Why do you want to build a game?

Let's start with the most important thing you need to understand when you want to build a game: yourself. Why do you want to build a game? What goal do you hope to achieve by developing a game? There are multiple possible reasons for someone to start creating a game. You could develop a game because you want to learn about a particular programming language or library, or because you want to make a living from selling your own game, or maybe because you have free time and think that building a game is a cool way to pass it. Whatever it is, it's important for you to know your reasons, because it will help you decide what is actually needed for the game you're building. For example, if you develop a game to learn about programming, then your game doesn't need to use fancy graphics. On the other hand, if you develop a game to commercialize it, then having a visually appealing game is highly important.

One more thing to clarify before we go further. There are two phases that people go through before they build their first game. The first one is when someone has the desire to build a game, but has absolutely no idea how to achieve it, and the other one is when someone has both the desire and the knowledge needed to build a game, but hasn't started doing it. Both of them have their own set of problems, so I'll try to address both phases here, starting with the first one.

Learn and master the tools

Naturally, one of the first questions that comes to mind when wanting to create games is how to actually start the creation process. Fortunately, the answer to this question is the same for any kind of creative project you'll want to attempt: learn and master the tools. For game development, this means game creation tools, and they come in all shapes and sizes. There are those that don't need any kind of programming, like Twine and RPG Maker; those that require a tiny bit of programming, like Stencyl and GameMaker; and the professional ones that need a lot of coding, like Unity and Unreal Engine. Though, if you want to build games to learn about programming, there's no better way than to just use a programming language with some game-making library, like MonoGame.

With so many tools ready for you to use, will your choice of tools matter? If you're building your first game, then no, the tool that you use will not matter that much. Whilst it's true that you'd probably want to use a tool with better capabilities in the future, at this stage, what's important is learning the actual game development process. And this process is actually the same no matter what tool you use.

KISS: Keep It Simple, Sugar

So, now that you know how to build a game, what else do you need to be aware of before you start building it? Well, here's the one thing that most people only realize after they actually start building their game: game development is hard. For every feature that you want to add to a game, there will be dozens of cases that you have to think about to make sure the feature works fine. And that's why one of the most effective mantras when you're starting game development is KISS: Keep It Simple, Sugar (a change may have been made to the original, slightly more insulting acronym). Are you sure you need to add enemies to your game? Does your game actually need a health bar, or would a health counter be enough? If developing a game is hard, then finishing it is even harder. Keeping the game simple increases your chance of finishing it, and you can always build a more complex game in the future. After all, a released game is better than a perfect, but unreleased, game.

That said, it's possible that you're building a game that you've been dreaming of since forever, and you'd never settle for less. If you're hell-bent on completing this dream game of yours, who am I to tell you not to pursue it? After all, there are successful games out there that were actually the developer's first game project. If that's how it is, just remember that motivation loss is your biggest enemy and you need to actively combat it. Show your work to other people, or take a break when you've been working on it too much. Just make sure you don't lose your drive to complete your project. It'll be worth it in the end!

I hope this advice is helpful for you. Good luck with building your games!

About the Author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

12 common malware types you should know

Savia Lobo
24 May 2018
14 min read
Malware is software with malicious intent that changes the system without the knowledge of the user. Malware uses the same technologies that are used by genuine software, but the intent is bad. The following are some examples:

Software such as TrueCrypt uses algorithms and techniques to encrypt a file to protect privacy, while ransomware uses the same algorithms to encrypt files to extort the user.
Similarly, Firefox uses the HTTP protocol to browse the web, while malware uses the HTTP protocol to post stolen data to its command and control (C&C) server.

In this article we will focus on the different types of malware. Malware can be categorized into different types based on the damage it causes to the system, and it does not necessarily use a single method to cause damage; it can employ multiple ways. We will look into some known malware types:

Backdoor
Downloader
Virus or file infector
Worm
Botnet
Remote Access Tool (RAT)
Hacktool
Keylogger and password stealer
Banking malware
POS malware
Ransomware
Exploit and exploit kits

To be clear, a piece of malware can act as a backdoor as well as a password stealer, or can be a combination of any of them. Some of the definitions are simple enough to understand in one line, while others need some detailed explanation. This article is an excerpt taken from the book 'Preventing Ransomware', written by Abhijit Mohanta, Mounir Hahad, and Kumaraguru Velmurugan.

Backdoor

A backdoor can be a simple functionality for a piece of malware. It opens a port on the victim machine so that the hacker can log in without the victim's knowledge and carry out their work. A piece of backdoor malware can create a new process of itself, or inject malicious code that opens a port into legitimate code executing on the system. Backdoor activity is usually part of other malware: most RAT tools have a backdoor module that opens a port on the victim machine for the hacker to get in.

Downloader

A downloader is a piece of malicious software that downloads other malware. It carries a URL for the malware that needs to be downloaded, so when executed, it downloads that other malware. Bedep was mostly known for downloading CryptoLockers. Upatre was another popular downloader.

Virus or file infector

File infection malware piggybacks its code on clean software. It alters an executable file on disk in such a way that the malware code is executed before or after the clean code in the file. A file infector is often termed a virus in the security industry, and a lot of antivirus products tag it as a virus. In the context of Windows PE executables, a file infector can work in the following manner:

The malware adds malicious code at the end of a clean executable file.
It changes the entry point of the file to point to the malicious code located at the end.
When the exe is double-clicked, the malware code is executed first.
The malicious code keeps the address of the clean code, which was the original entry point. After completing the malicious activity, the malware code transfers control to the clean code.

A virus can infect a file in several ways, placing its code at different locations in the infected file. File infection is a way to spread within a system. Many of these file infectors infect every system file on Windows, so the malware code executes regardless of whether you start Internet Explorer or a calculator program. Some very famous PE file infectors are Virut, Sality, XPAJ, and Xpiro.
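Since a file infector typically works by redirecting a PE file's entry point, one quick defensive check is to see where an executable's entry point actually lands. The following is a minimal, hedged sketch using the third-party pefile library; the sample path is a placeholder, and the check is only a heuristic illustration of the idea, not the book's method or a complete infection detector.

import pefile  # third-party library: pip install pefile

# Hypothetical sample path; replace with the executable you want to inspect.
SAMPLE = "suspect.exe"

pe = pefile.PE(SAMPLE)
entry_rva = pe.OPTIONAL_HEADER.AddressOfEntryPoint

# Find which section contains the entry point. Legitimate executables usually
# start in a code section such as .text; an entry point pointing into the last
# section, or into an oddly named one, can be a hint of file infection.
for section in pe.sections:
    start = section.VirtualAddress
    end = start + section.Misc_VirtualSize
    if start <= entry_rva < end:
        name = section.Name.rstrip(b"\x00").decode(errors="replace")
        print("Entry point 0x%x lies in section %s" % (entry_rva, name))
        break

An anomalous entry-point location is only a starting point; an analyst would then inspect the code at that address before drawing any conclusion.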
Worm

A worm spreads through a system by various mechanisms; file infection can also be considered worm-like behavior. A worm can spread in several ways:

To other computers on the network, by brute-forcing default usernames and passwords of network shares or other machines.
By exploiting vulnerabilities in network protocols.
Using pen drives. When an autorun worm is executed, it looks for a pen drive attached to the system. The worm creates a copy of itself on the pen drive and also adds an autorun.inf file to it. When the infected pen drive is inserted into a new machine, autorun.inf is executed by Windows, which in turn executes the copied .exe. The copied exe can then copy itself to different locations on the new machine where the pen drive is inserted.

Botnet

A botnet is a piece of malware that is based on the client-server model. The victim machine that is infected with the malware is called a bot. The hacker controls the bot by using a C&C server; this controller is also called a bot herder. A C&C server can issue commands to the bots. If a large number of computers are infected with bots, they can be used to direct a lot of traffic toward any server, and if the server is not secure enough and is incapable of handling huge traffic, it can shut down. This is usually called a denial of service (DoS) attack. A bot can use internet protocols or custom protocols to communicate with its C&C server. ZeroAccess and GameOver are famous botnets of the recent past.

Keylogger and password stealer

Keyloggers have been well known for a long time. They can monitor keystrokes and log them to a file, which can be transferred to the hacker later on. A password stealer is a similar thing. It can steal usernames and passwords from the following locations:

Browsers, which store passwords for social networking sites, movie sites, song sites, email, and gaming sites.
FTP clients such as FileZilla and SmartFTP, which can be used by companies or individuals to save data on FTP servers.
Email clients such as Thunderbird and Outlook, which are used to access emails easily.
Database clients, used mostly by engineers and students.
Banking applications.
Password managers: users store passwords in password managers so that they don't have to remember them, and malware can steal passwords from these applications. LastPass and KeePass are password manager applications.

Hackers can use these credentials to steal more data, to access somebody's private information, or to try to access military installations. They can target executives using this kind of malware to steal their confidential information. Zeus and Citadel are famous password stealers.

Banking malware

Banking malware is financial malware. It can include keylogging and password-stealing functionality targeting the browser. Banks have come up with virtual keyboards, which are a major blow to keyloggers, so most banking malware now uses a man-in-the-middle (MITM) attack. In this kind of attack, the malware is able to intercept the conversation between the victim and the banking site. There are two popular MITM mechanisms used by banking malware these days: form grabbing and browser injects. In form grabbing, the malware hooks the browser APIs and sends the intercepted data to its C&C server; simultaneously, it can send the same data to the bank website too. A web inject works in the following manner:

The malware performs API hooking in the browser to intercept the web page that was requested by the victim's browser. The original web page is a form in which the victim needs to input various things, such as the amount they need to transfer, credentials, and so on.
The malware modifies this intercepted web page to add some extra fields, such as CVV number, PIN, and OTP, which are used for additional authentication. These additional fields are injected using an HTML form, and the form varies based on the bank.
The malware keeps a configuration file which tells it which form needs to be injected into the page of which banking site.
After modifying the web page, the malware sends the data to the victim's browser. The victim therefore sees the page with the extra fields added by the malware, and the malware is able to steal the additional parameters needed for authentication.

Tibna, Shifu, Carberp, and Zeus are some famous pieces of banking malware.

POS malware

The way money changes hands is changing, and cash transactions in shops are being replaced by cards. POS devices are installed in a lot of shops these days, and Windows has a POS operating system for these kinds of devices. The POS software on these devices is able to read the credit card information when a card is swiped through the POS device. If malware infects a POS device, it scans the POS software's memory for credit card patterns. Credit card numbers are 16 digits, so the malware scans for 16-digit patterns in memory to identify and then steal credit card numbers. BlackPOS, Dexter, JackPOS, and BackOff are famous pieces of POS malware.

Hacktool

Hacktools are often used to retrieve passwords from browsers, operating systems, or other applications. They can work by brute forcing or by identifying patterns. Cain and Abel, John the Ripper, and Rainbow Crack were old hacktools. Mimikatz is one of the latest hacktools, associated with some top ransomware such as WannaCry and NotPetya, used to decode and steal the credentials of the victim.

RAT

A RAT acts as a remote control, as the name suggests. It can be used with both good and bad intentions. RATs can be used by system administrators to solve their clients' issues by accessing the client's machine remotely. But since RATs usually give full access to the person sitting remotely, they can be misused by hackers. RATs have been used in sophisticated hacks many times. They can be misused for multiple purposes, such as the following:

Monitoring keystrokes using keyloggers
Stealing credentials and data from the victim machine
Wiping out all data from a remote machine
Creating a backdoor so that a hacker can log in

Gh0st RAT, Poison Ivy, Back Orifice, Prorat, and NjRat are well-known RATs.

Exploit

Software is written by humans and, obviously, there will be bugs. Hackers take advantage of some of these bugs to compromise a system in an unauthorized manner. We call such bugs vulnerabilities. Vulnerabilities occur for various reasons, but mostly due to imperfect programming: if programmers have not considered certain scenarios while writing the software, this can lead to a vulnerability. Here is a simple C program (shown as an image in the original article) that uses the function strcpy() to copy a string from source to destination:

The programmer has failed to notice that the size of the destination is 10 bytes while the source is 23 bytes. In the program, the source is allocated 23 bytes of memory while the destination is assigned 11 bytes of memory space. When the strcpy() function copies the source into the destination, the copied string goes beyond the allocated memory of the destination. The memory beyond the memory assigned to the destination can hold important data related to the program, which would be overwritten. This kind of vulnerability is called a buffer overflow.
Stack overflow and heap overflow are commonly known as buffer overflow vulnerability. There are other vulnerabilities, such as use-after-free when an object is used after it is freed (we don't want to go into this in depth as it requires an understanding of C++ programming concepts and assembly language). A program that takes advantage of these vulnerabilities for a malicious purpose is called an exploit. To explain an exploit, we will talk about a stack overflow case. Readers are recommended to read about C programs to understand this. Exploit writing is a more complex process which requires knowledge of assembly language, debuggers, and computer architecture. We will try to explain the concept as simply as possible. The following is a screenshot of a C program. Note that this is not a complete program and is only meant to illustrate the concept: The main() function takes input from the user (argv[1]) then passes it on to the vulnerable function vulnerable_function. The main function calls the vulnerable function. So after executing the vulnerable function, the CPU should come back to the main function (that is, line no 15). This is how the CPU should execute the program: line 14 | line 4 | line 5 | line 6 | line 15. Now, when the CPU is at line 6, how does it know that it has to return to line 15 after that? Well, the secret lies in the stack. Before getting into line 4 from line 14, the CPU saves the address of line 15 on the stack. We can call the address of line 15 the return address. The stack is also meant for storing local variables too. In this case, the buffer is a local variable in vulnerable_function. Here is what the stack should look like for the preceding program: This is the state of the stack when the CPU is executing the vulnerable_function code. We also see that return address (address of line 15) is placed on the stack. Now the size of the buffer is only 16 bytes (see the program). When the user provides an input(argv[1]) that is larger than 16 bytes, the extra length of the input will overwrite the return address when strcpy() is executed. This is a classic example of stack overflow. When talking about exploiting a similar program, the exploit will overwrite the RETURN ADDRESS. As a result, after executing line 6, the CPU will go to the address which has overwritten the return address. So now the user can create a specially crafted input (argv[1]) with a length greater than 16 bytes. The input contains three parts - address of the buffer, NOP, and shellcode. The address of the buffer is the virtual memory address of the variable buffer. NOP stands for no operation instruction. As the name implies, it does nothing when executed. Shellcode is nothing but an extremely small piece of code that can fit in a very small space. Shellcode is capable of doing the following: Opening a backdoor port in the vulnerable software Downloading another piece of malware Spawning a command prompt to the remote hacker, who can access the system of the victim Elevating the privileges of the victim so the hacker has access to more areas and functions in the system: The following image shows the same stack after the specially crafted input is provided as input to the program. Here, you can see return address is overwritten with the address of the buffer so, instead of line 15, the CPU will go to the address of the buffer. 
After the NOPs, the shellcode will be executed. The final conclusion is that, by providing a crafted input to the vulnerable program, the exploit is able to execute shellcode, which can open up a backdoor or download malware. The inputs can be as follows:

An HTTP request is an input for a web server
An HTML page is an input for a web browser
A PDF is an input for Adobe Reader

And so on; the list is infinite. You can explore these using the keywords provided, as the topic cannot be explained in a few lines and goes beyond the scope of this book.

We often see vulnerabilities mentioned in blogs, and usually a CVE number is mentioned for a vulnerability. One can find the list of vulnerabilities at http://www.cvedetails.com/. The WannaCry ransomware used CVE-2017-0144: 2017 is the year in which the vulnerability identifier was assigned, and 0144 is its sequence number within that year. Microsoft also issues advisories for vulnerabilities in Microsoft software. https://www.cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-0144 gives the details of the vulnerability. The vulnerability description tells us that the bug lies in the SMBv1 server software installed with some Microsoft operating system versions. The URL also refers to some of the exploits.

Now that you know what types of malware exist, do check out the book, Preventing Ransomware, to learn more about the techniques to prevent malware and perform effective malware analysis.

IoT Forensics: Security in an always-connected world where things talk
Top 5 penetration testing tools for ethical hackers
Top 5 cloud security threats to look out for in 2018

4 surprising things from Stack Overflow's 2018 survey

Richard Gall
27 Mar 2018
3 min read
This year's Stack Overflow survey features a wealth of insights on developers around the world. There were some takeaways that are worth noting and that open the door to wider investigation. Here are 4 Stack Overflow survey highlights we think merit further discussion...

25% of developers think a regulatory body should be responsible for AI ethics

The number of developers who believed a regulatory body should be responsible for AI ethics was a minority; more believed developers themselves should be responsible for ethical decisions around the artificial intelligence that they help to build. However, the fact that 1 in 4 of Stack Overflow's survey respondents believe we need a regulatory body to monitor ethics in AI is not to be ignored. Even if, for the most part, developers believe they are best placed to make ethical decisions, that feeling is far from unanimous. This means there is some unease about ethics and artificial intelligence that is, at the very least, worth talking about in more detail.

The ethics of code remains a gray area

There were a number of interesting questions around writing code for ethical purposes in this year's survey. 58.5% of respondents said they wouldn't write unethical code if they were asked to, 4.8% said they would, and 35.6% said that it depends on what it is. Clearly, the notion of ethical code remains something that needs to be properly addressed within the developer and tech community. The recent Facebook and Cambridge Analytica scandal has only served to emphasize this. Equally interesting were the responses to the question about responsibility for code that accomplishes something unethical: 57.5% said upper management was responsible, but 22.8% said it was 'the person who came up with the idea' and 19.7% said 'the developer who wrote it'.

Hackathons and coding competitions are a crucial part of developer learning

26% of respondents said they learned new skills in hackathons. When you compare that to the 35% of people who say they're getting on-the-job training, it's easy to see just how important a role hackathons play in the professional development of developers. A similar proportion (24.3%) said coding competitions were also an important part of their technical education. When you put the two together, there's clear evidence that software learning is happening in the community more than in the workplace. Arguably, today's organizations are growing and innovating on the back of developer curiosity and ingenuity.

Transgender and non-binary programmers contribute to open source at high rates

This will probably go largely unnoticed, but it's worth underlining. It was, in fact, one of the Stack Overflow survey's highlights: "developers who identify as transgender and non-binary contribute to open source at higher rates (58% and 60%, respectively) than developers who identify as men or women overall (45% and 33%)." This is a great statistic and one that's important to recognize amid the diversity problems within technology. It is, perhaps, a positive signal that things are changing.

Founder & CEO of Odoo, Fabien Pinckaers discusses the new Odoo 13 framework

Vincy Davis
04 Nov 2019
6 min read
Odoo, formerly known as OpenERP (Enterprise Resource Planning), is a popular open source business application development software. It comes with many features like a powerful GUI, performance optimization, integrated in-app purchase features and more. It is used by companies to manage and organize their workloads, like materials and warehouse management, human resources, finance, accounting, sales, and many other enterprise functions. With a fast-growing community, Odoo is being used by companies of all sizes.

At the Odoo Experience 2019 event conducted earlier this month, the Odoo team announced the release of Odoo 13, the latest version of its all-in-one business software. This release contains an abundance of major and minor improvements, including new features like a sales coupons & promotions module, MRP subcontracting, a website form builder, a skill management module and more. At the event, founder & CEO of Odoo, Fabien Pinckaers, explained the many concepts behind the new Odoo framework, which he says is one of the best improvements in Odoo 13.

New to Odoo? If you are a beginner in Odoo, read our book Working with Odoo 12 - Fourth Edition written by Greg Moss to learn how to start a new company database in Odoo and to understand the basics of Odoo sales management. You can also master customer relationship management in Odoo for setting up a modern business environment. This book will also take you through the OpenChatter feature with notes and messages associated with the Odoo documents. Also, learn how to use Odoo's API to integrate with other applications from our book.

The Odoo 13 framework is also called an in-memory ORM, because it does far more of its work in memory than before. When benchmarked on operational measures, it runs, on average, 4.5 times faster than earlier versions of Odoo.

Key features of the Odoo 13 framework

Simplified cache process

Pinckaers says that in the new framework they have simplified the cache, as stored fields now only need a single value, while a non-stored field's computed value depends on the keywords present in the context (e.g. translatable and context). He added that, in version 12, most fields did not need a cache, so it contained only one global cache, with an exception for fields that were text-dependent. It also had a new attribute for a multi-line inventory where the projects depend on "way roads". The difficulty in that version was that when creating a field, users had to select the cache value, and if the context of the field changed, they had to specify the new value of the cache again. This step is made simpler in version 13, as the user now needs to specify the value of the cache only once. "It seems simple but actually in the business code we're passing it to all the fields at the same time," asserts Pinckaers. This simplified cache process will also cut down a lot of the memory accesses in the code.

In-memory updates

When specifying various test field values in the earlier versions, users had to update the validation value each time, making it a time-consuming process. To overcome this problem, the Odoo team has moved all data transactions into memory in the new version. Consequently, in Odoo 13, when assigning a field value, the user can put it in the cache each time; when a field value needs to be read, it is taken from the cache itself.
To manage all the dependencies in Python, Pinckaers demonstrated how users should always:

Use the inverse field instead of an SQL query
Avoid using SELECT, as the implementation of the compute will read the same object
When calling create(), set one2many fields to []

Delaying the computed field for faster transactions

To delay computed fields such as line.product_quantity and line.discount in the preceding Odoo versions, a user had to compute the dependency value inside every "for line in order" loop. Once the transaction was completed, the values were then recomputed and written again. This process is made easier in Odoo 13, as the user can now mark all the line commands for recomputation and use the self.flush() command to compute the values after the transaction is completed. This makes all the computation happen in memory (a minimal sketch of this pattern is included at the end of this article). According to Pinckaers, this support will help users with more than 100 customers, as it will make the process much faster and simpler.

Optimize the dependency tree to reduce Python and SQL computations

Pinckaers takes the "change order" example to demonstrate how version 13 of Odoo has a cleaner dependency tree. If the price list of the order is changed, the total cost of the order also changes indirectly, and the optimized dependency tree captures this. He explains that this indirect change happens due to the indirect dependency between the pricelist identity and the total cost field in Odoo 13. In the earlier versions, due to the recursive nature of the dependencies, each order line entailed the order ID of the field, which sometimes required reading more than 100 lines of the list just to get the order ID. In Odoo 13, this prolonged process is replaced with a more optimized dependency tree, which means the user can now get the order ID directly from the dependency tree, without the extra Python and SQL computations.

Improvements in browse optimization()

The major improvement in Odoo 13 browse optimization is the mechanism to avoid multiple format-to-cache conversions. In previous versions, users had to read the SQL query results, convert them into the cache format, and then put them in the cache. This meant that it required three commands just to read the data, making the process very tedious. With the latest version, the prefetch command directly saves data that is already in a similar format into memory. "But if the format is different, then we have to apply everything a color conversion method. As Python is extremely slow," Pinckaers says, "applying a dictionary that we see from outside the cache" makes the process faster, because a C implementation can be used to directly convert the data into the cache format.

You can watch the full video to see Pinckaers' demonstration of code cleanup and Python optimization. If you want to use Odoo to build enterprise applications and set up the functional requirements for your business, read our book 'Working with Odoo 12 - Fourth Edition' written by Greg Moss to learn how to use the MRP module to create, process, and schedule the manufacturing and production order. This book will also guide you with in-depth knowledge about the business intelligence required in Odoo and its architecture, and will also unveil how to customize Odoo to meet the specific needs of your business.
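To make the cache, recomputation, and flush() behaviour described above a little more concrete, here is a minimal, hedged sketch of an Odoo 13-style model. The model, field, and method names are hypothetical illustrations, not code shown by Pinckaers; it simply shows a stored computed field declared with @api.depends and a batch of in-memory writes followed by a single self.flush().

from odoo import api, fields, models

class DemoOrder(models.Model):
    _name = "demo.order"  # hypothetical model name

    line_ids = fields.One2many("demo.order.line", "order_id")
    total = fields.Float(compute="_compute_total", store=True)

    @api.depends("line_ids.price_subtotal")
    def _compute_total(self):
        # Recomputed from the in-memory cache once the lines are marked dirty.
        for order in self:
            order.total = sum(order.line_ids.mapped("price_subtotal"))

    def apply_discount(self, factor):
        # All assignments below stay in the ORM cache; nothing hits SQL yet.
        for line in self.line_ids:
            line.price_subtotal = line.price_subtotal * factor
        # One explicit flush pushes the pending writes (and the triggered
        # recomputations) to the database in a single batch.
        self.flush()

class DemoOrderLine(models.Model):
    _name = "demo.order.line"  # hypothetical model name

    order_id = fields.Many2one("demo.order")
    price_subtotal = fields.Float()

Under the in-memory ORM, repeated reads of total after apply_discount() are served from the cache rather than by issuing a fresh SQL query each time.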
Creating views in Odoo 12 – List, Form, Search [Tutorial] How to set up Odoo as a system service [Tutorial] Handle Odoo application data with ORM API [Tutorial] Implement an effective CRM system in Odoo 11 [Tutorial] “Everybody can benefit from adopting Odoo, whether you’re a small start-up or a giant tech company” – An interview with Odoo community hero, Yenthe Van Ginneken

Deep Dream: Inceptionistic art from neural networks

Janu Verma
04 Jan 2017
9 min read
The following image, known as dog-slug, was posted on Reddit and was reported to be generated by a convolutional neural network. There was a lot of speculation about the validity of such a claim. It was later confirmed that this image was indeed generated by a neural network after Google described the mechanism for generating such images; they called it DeepDream and released their code for anyone to produce these images. This marks the beginning of inceptionistic art creation using neural networks.

Deep convolutional neural networks (CNNs) have been very effective in image recognition problems. A deep neural network has an input layer, where the data is fed in, an output layer, which produces the prediction for each data point, and a lot of layers in between. The information moves from one layer to the next. CNNs work by progressively extracting higher-level features from the image at the successive layers of the network. Initial layers detect edges and corners; these features are then fed into the next layers, which combine them to produce features that make up the image, e.g. segments of the image that discern the types of images. The final layer builds a classifier from these features, and the output is the most likely category for the image.

Deep Dream works by reversing this process. An image is fed to the network, which is trained to recognize different categories for the images in the ImageNet dataset, which contains 1.2 million images across 1,000 categories. As each layer of the network 'learns' features at a different level, we can choose a layer, and the output of that layer shows how that layer interprets the input image. The output of this layer is enhanced to produce an inceptionistic-looking picture. Thus a roughly puppy-looking segment of the image becomes super puppy-like.

In this post, we will learn how to create inceptionistic images like Deep Dream using a pre-trained convolutional neural network called VGG (also known as OxfordNet). This network architecture is named after the Visual Geometry Group from Oxford, who developed it. It was used to win the ILSVR (ImageNet) competition in 2014. To this day, it is considered to be an excellent vision model, although it has been somewhat outperformed by more recent advances such as Inception (also known as GoogLeNet), used by Google to produce Deep Dream images. We will use a library called Keras for our examples.

Keras

Keras is a high-level library for deep learning, which is built on top of Theano and TensorFlow. It is written in Python, and provides a scikit-learn type API for building neural networks. It enables developers to quickly build neural networks without worrying about the mathematical details of tensor algebra, optimization methods, and numerical methods.

Installation

Keras has the following dependencies:

numpy
scipy
pyyaml
hdf5 (for saving/loading models)
theano (for the Theano backend)
tensorflow (for the TensorFlow backend)

The easiest way to install Keras is using the Python Package Index (PyPI):

sudo pip install keras

Deep dream in Keras

The following script is taken from the official Keras source code on GitHub.
from __future__ import print_function

from keras.preprocessing.image import load_img, img_to_array
import numpy as np
from scipy.misc import imsave
from scipy.optimize import fmin_l_bfgs_b
import time
import argparse

from keras.applications import vgg16
from keras import backend as K
from keras.layers import Input

parser = argparse.ArgumentParser(description='Deep Dreams with Keras.')
parser.add_argument('base_image_path', metavar='base', type=str,
                    help='Path to the image to transform.')
parser.add_argument('result_prefix', metavar='res_prefix', type=str,
                    help='Prefix for the saved results.')
args = parser.parse_args()
base_image_path = args.base_image_path
result_prefix = args.result_prefix

# dimensions of the generated picture
img_width = 800
img_height = 800

# path to the model weights file
weights_path = 'vgg_weights.h5'

# some settings we found interesting
saved_settings = {
    'bad_trip': {'features': {'block4_conv1': 0.05,
                              'block4_conv2': 0.01,
                              'block4_conv3': 0.01},
                 'continuity': 0.01,
                 'dream_l2': 0.8,
                 'jitter': 5},
    'dreamy': {'features': {'block5_conv1': 0.05,
                            'block5_conv2': 0.02},
               'continuity': 0.1,
               'dream_l2': 0.02,
               'jitter': 0},
}
# the settings we will use in this experiment
settings = saved_settings['dreamy']
# print(settings['dream_'])


# util function to open, resize and format pictures into appropriate tensors
def preprocess_image(image_path):
    img = load_img(image_path, target_size=(img_width, img_height))
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = vgg16.preprocess_input(img)
    return img


# util function to convert a tensor into a valid image
def deprocess_image(x):
    if K.image_dim_ordering() == 'th':
        x = x.reshape((3, img_width, img_height))
        x = x.transpose((1, 2, 0))
    else:
        x = x.reshape((img_width, img_height, 3))
    # remove zero-center by mean pixel
    x[:, :, 0] += 103.939
    x[:, :, 1] += 116.779
    x[:, :, 2] += 123.68
    # BGR -> RGB
    x = x[:, :, ::-1]
    x = np.clip(x, 0, 255).astype('uint8')
    return x

if K.image_dim_ordering() == 'th':
    img_size = (3, img_width, img_height)
else:
    img_size = (img_width, img_height, 3)

# this will contain our generated image
dream = Input(batch_shape=(1,) + img_size)

# build the VGG16 network with our placeholder
# the model will be loaded with pre-trained ImageNet weights
model = vgg16.VGG16(input_tensor=dream, weights='imagenet', include_top=False)
print('Model loaded.')

# get the symbolic outputs of each "key" layer (we gave them unique names)
layer_dict = dict([(layer.name, layer) for layer in model.layers])


# continuity loss util function
def continuity_loss(x):
    assert K.ndim(x) == 4
    if K.image_dim_ordering() == 'th':
        a = K.square(x[:, :, :img_width - 1, :img_height - 1] -
                     x[:, :, 1:, :img_height - 1])
        b = K.square(x[:, :, :img_width - 1, :img_height - 1] -
                     x[:, :, :img_width - 1, 1:])
    else:
        a = K.square(x[:, :img_width - 1, :img_height - 1, :] -
                     x[:, 1:, :img_height - 1, :])
        b = K.square(x[:, :img_width - 1, :img_height - 1, :] -
                     x[:, :img_width - 1, 1:, :])
    return K.sum(K.pow(a + b, 1.25))

# define the loss
loss = K.variable(0.)
for layer_name in settings['features']:
    # add the L2 norm of the features of a layer to the loss
    assert layer_name in layer_dict.keys(), 'Layer ' + layer_name + ' not found in model.'
    coeff = settings['features'][layer_name]
    x = layer_dict[layer_name].output
    shape = layer_dict[layer_name].output_shape
    # we avoid border artifacts by only involving non-border pixels in the loss
    if K.image_dim_ordering() == 'th':
        loss -= coeff * K.sum(K.square(x[:, :, 2: shape[2] - 2, 2: shape[3] - 2])) / np.prod(shape[1:])
    else:
        loss -= coeff * K.sum(K.square(x[:, 2: shape[1] - 2, 2: shape[2] - 2, :])) / np.prod(shape[1:])

# add continuity loss (gives image local coherence, can result in an artful blur)
loss += settings['continuity'] * continuity_loss(dream) / np.prod(img_size)
# add image L2 norm to loss (prevents pixels from taking very high values, makes image darker)
loss += settings['dream_l2'] * K.sum(K.square(dream)) / np.prod(img_size)

# feel free to further modify the loss as you see fit, to achieve new effects...

# compute the gradients of the dream wrt the loss
grads = K.gradients(loss, dream)

outputs = [loss]
if type(grads) in {list, tuple}:
    outputs += grads
else:
    outputs.append(grads)

f_outputs = K.function([dream], outputs)


def eval_loss_and_grads(x):
    x = x.reshape((1,) + img_size)
    outs = f_outputs([x])
    loss_value = outs[0]
    if len(outs[1:]) == 1:
        grad_values = outs[1].flatten().astype('float64')
    else:
        grad_values = np.array(outs[1:]).flatten().astype('float64')
    return loss_value, grad_values


# this Evaluator class makes it possible
# to compute loss and gradients in one pass
# while retrieving them via two separate functions,
# "loss" and "grads". This is done because scipy.optimize
# requires separate functions for loss and gradients,
# but computing them separately would be inefficient.
class Evaluator(object):
    def __init__(self):
        self.loss_value = None
        self.grad_values = None

    def loss(self, x):
        assert self.loss_value is None
        loss_value, grad_values = eval_loss_and_grads(x)
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        assert self.loss_value is not None
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values

evaluator = Evaluator()

# run scipy-based optimization (L-BFGS) over the pixels of the generated image
# so as to minimize the loss
x = preprocess_image(base_image_path)
for i in range(15):
    print('Start of iteration', i)
    start_time = time.time()

    # add a random jitter to the initial image. This will be reverted at decoding time
    random_jitter = (settings['jitter'] * 2) * (np.random.random(img_size) - 0.5)
    x += random_jitter

    # run L-BFGS for 7 steps
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
                                     fprime=evaluator.grads, maxfun=7)
    print('Current loss value:', min_val)

    # decode the dream and save it
    x = x.reshape(img_size)
    x -= random_jitter
    img = deprocess_image(np.copy(x))
    fname = result_prefix + '_at_iteration_%d.png' % i
    imsave(fname, img)
    end_time = time.time()
    print('Image saved as', fname)
    print('Iteration %d completed in %ds' % (i, end_time - start_time))

This script can be run using the following schema:

python deep_dream.py path_to_your_base_image.jpg prefix_for_results

For example:

python deep_dream.py mypic.jpg results

Examples

I created the following pictures using this script. More examples are at the Google Inceptionism gallery.

About the author

Janu Verma is a researcher at the IBM T.J. Watson Research Center, New York. His research interests are in mathematics, machine learning, information visualization, computational biology and healthcare analytics.
He has held research positions at Cornell University, Kansas State University, the Tata Institute of Fundamental Research, the Indian Institute of Science, and the Indian Statistical Institute. He has written papers for IEEE VIS, KDD, the International Conference on Healthcare Informatics, Computer Graphics and Applications, Nature Genetics, IEEE Sensors Journal, and so on. His current focus is on the development of visual analytics systems for prediction and understanding. He advises startups and companies on data science and machine learning in the Delhi-NCR area; email him to schedule a meeting.

With Node.js, it’s easy to get things done

Packt Publishing
05 Sep 2016
4 min read
Luciano Mammino is the author (alongside Mario Casciaro) of the second edition of Node.js Design Patterns, released in July 2016. He was kind enough to speak to us about his life as a web developer and working with Node.js – as well as assessing Node's position within an exciting ecosystem of JavaScript libraries and frameworks. Follow Luciano on Twitter – he tweets from @loige.

1. Tell us about yourself – who are you and what do you do?

I'm an Italian software developer living in Dublin and working at Smartbox as a Senior Engineer in the Integration team. I'm a lover of JavaScript and Node.js, and I have a number of upcoming side projects that I am building with these amazing technologies.

2. Tell us what you do with Node.js. How does it fit into your wider development stack?

The Node.js platform is becoming ubiquitous; the range of problems that you can address with it is growing bigger and bigger. I've used Node.js on a Raspberry Pi, on desktop and laptop computers and in the cloud, quite successfully, to build a variety of applications: command line scripts, automation tools, APIs and websites. With Node.js it's really easy to get things done. Most of the time I don't need to switch to other development environments or languages. This is probably the main reason why Node.js fits very well in my development stack.

3. What other tools and frameworks are you working with? Do they complement Node.js?

Some of the tools I love to use are RabbitMQ, MongoDB, Redis and Elasticsearch. Thanks to the npm registry, Node.js has an amazing variety of libraries which makes integration with these technologies seamless. I was recently experimenting with ZeroMQ, and again I was surprised to see how easy it is to get started with it in a Node.js application.

4. Imagine life before you started using Node.js. What has its impact been on the way you work?

I started programming when I was very young, so I really lived "a life" as a programmer before having Node.js. Before Node.js came out I was using JavaScript a lot to program the frontend of web applications, but I had to use other languages for the backend. The context-switching between two environments is something that ends up eating up a lot of time and energy. Luckily, today with Node.js we have the opportunity to use the same language and even to share code across the whole web stack. I believe that this is something that makes my daily work much easier and more enjoyable.

5. How important are design patterns when you use Node.js? Do they change how you use the tool?

I would say that design patterns are important in every language, and in this case Node.js makes no difference. Furthermore, due to the intrinsically asynchronous nature of the language, having a good knowledge of design patterns becomes even more important in Node.js to avoid some of the most common pitfalls.

6. What does the future hold for Node.js? How can it remain a really relevant and valuable tool for developers?

I am sure Node.js has a pretty bright future ahead. Its popularity is growing dramatically and it is starting to gain a lot of traction in enterprise environments that have typically been bound to other famous and well-known languages like Java. At the same time, Node.js is trying to keep pace with the main innovations in the JavaScript world. For instance, in the latest releases Node.js added support for almost all the new language features defined in the ECMAScript 2015 standard. This is something that makes programming with Node.js even more enjoyable, and I believe it's the strategy to follow to keep developers interested and the whole environment future-proof.

Thanks Luciano! Good luck for the future – we're looking forward to seeing how dramatically Node.js grows over the next 12 months.

Get to grips with Node.js – and the complete JavaScript development stack – by following our full-stack developer skill plan in Mapt. Simply sign up here.

Winnti Malware: Chinese hacker group attacks major German corporations for years, German public media investigation reveals

Fatema Patrawala
26 Jul 2019
9 min read
German public broadcasters, Bavarian Radio & Television Network (BR) and Norddeutscher Rundfunk (NDR), have published a joint investigation report on a hacker group that has been spying on certain businesses for years. Security researchers Hakan Tanriverdi, Svea Eckert, Jan Strozyk, Maximilian Zierer and Rebecca Ciesielski contributed to the report. They shed light on how this group of hackers operates and how widespread it is. The investigation started with one of the reporters receiving the code daa0 c7cb f4f0 fbcf d6d1, which eventually led to the team discovering a hacking group with Chinese origins operating with the Winnti malware.

BR and NDR reporters, in collaboration with several IT security experts, have analyzed the Winnti malware. Moritz Contag of Ruhr University Bochum extracted information from different varieties of the malware and wrote a script for this analysis. Silas Cutler, an IT security expert with US-based Chronicle Security, confirmed it. The report analyses cases from the following targeted companies:

Gaming: Gameforge, Valve
Software: Teamviewer
Technology: Siemens, Sumitomo, Thyssenkrupp
Pharma: Bayer, Roche
Chemical: BASF, Covestro, Shin-Etsu

Hakan Tanriverdi, one of the reporters, wrote on Twitter, "We looked at more than 250 samples, wrote Yara rules, conducted nmap scans." Yara is a tool primarily used in malware research and detection; nmap is a free and open source network scanner used to discover hosts and services on a computer network. Additionally, in the report the team presents ways to find out if one is infected by the Winnti malware. To learn about these methods in detail, check out the research report.

Winnti malware is complex, created by "digital mercenaries" of Chinese origin

Winnti is a highly complex structure that is difficult to penetrate. The term denotes both a sophisticated piece of malware and an actual group of hackers; IT security experts like to call them digital mercenaries. According to Kaspersky Lab research from 2011, the Winnti group has been active for several years and, in its early days, specialized in cyber-attacks against the online video game industry. However, according to this investigation the hacker group has now homed in on Germany and its blue-chip DAX corporations. BR and NDR reporters analyzed hundreds of malware versions used for unsavory purposes and found that the hacker group has targeted at least six DAX corporations as well as other stock-listed top companies of German industry.

In October 2016, several DAX corporations, including BASF and Bayer, founded the German Cyber Security Organization (DCSO). The job of DCSO's IT security experts was to observe and recognize hacker groups like Winnti and to get to the bottom of their motives. In Winnti's case, DCSO speaks of a "mercenary force" which is said to be closely linked with the Chinese government. The reporters of this investigation also interviewed company staff, IT security experts, government officials, and representatives of security authorities. An IT security expert who has been analyzing the attacks for years said, "Any DAX corporation that hasn't been attacked by Winnti must have done something wrong." A high-ranking German official told the reporters, "The numbers of cases are mind-boggling," and claims that the group continues to be highly active to this very day.

Winnti hackers are audacious and "don't care if they're found out"

The report points out that the hackers choose convenience over anonymity.
Working with Moritz Contag, the reporters found that the hackers wrote the names of the companies they want to spy on directly into their malware. Contag has analyzed more than 250 variations of the Winnti malware and found them to contain the names of global corporations. According to the reporters, hackers usually take precautions, which experts refer to as OpSec, but the Winnti group's OpSec was dismal to say the least. Somebody who has been keeping an eye on Chinese hackers on behalf of a European intelligence service believes that they didn't really care: "These hackers don't care if they're found out or not. They care only about achieving their goals."

The reporters note that every hacking operation leaves digital traces, and that if you watch hackers carefully, each and every step can be logged. To decipher the traces of the Winnti hackers, they took a closer look at the program code of the malware itself, using the malware research engine VirusTotal, owned by Google.

The hacker group initially attacked the gaming industry for financial gain

In the early days, the Winnti group of hackers was mainly interested in making money. Their initial target was Gameforge, a gaming company based in the German town of Karlsruhe. In 2011, an email message found its way into Gameforge's mailbox. A staff member opened the attached file and, unbeknownst to him, started the Winnti program. Shortly afterwards, the administrators became aware that someone was accessing Gameforge's databases and raising account balances. Gameforge decided to implement Kaspersky antivirus software and arranged for Kaspersky's IT security experts to visit the office. The security experts found suspicious files and analyzed them. They noticed that the system had been infiltrated by hackers acting like Gameforge's administrators. It turned out that the hackers had taken over a total of 40 servers.

"They are a very, very persistent group," says Costin Raiu, who has been watching Winnti since 2011 and was in charge of Kaspersky's malware analysis team. "Once the Winnti hackers are inside a network, they take their sweet time to really get a feel for the infrastructure," he says. The hackers will map a company's network and look for strategically favorable locations for placing their malware. They keep tabs on which programs are used in a company and then exchange a file in one of these programs. The modified file looks like the original, but has been secretly supplemented with a few extra lines of code. Thereafter, the manipulated file does the attackers' bidding.

Raiu and his team have been following the digital tracks left behind by some of the Winnti hackers. "Nine years ago, things were much more clear-cut. There was a single team, which developed and used Winnti. It now looks like there is at least a second group that also uses Winnti." This view is shared by many IT security companies, and it is this second group which is getting the German security authorities worried. One government official says, "Winnti is very specific to Germany. It is the attacker group that's being encountered most frequently."

Second group of Winnti hackers focused on industrial espionage

The report says that by 2014, the Winnti malware code was no longer limited to game manufacturers. The second group's job was mainly industrial espionage. The hackers targeted high-tech companies as well as chemical and pharmaceutical companies, attacking firms in Japan, France, the U.S. and Germany.
The report sheds light on how Winnti hackers broke into Henkel's network in 2014. The reporters present three files containing the website belonging to Henkel and the name of the hacked server. For example, one starts with the letter sequence DEDUSSV. They realized that server names can be arbitrary, but it is highly probable that DE stands for Germany and DUS for Düsseldorf, where the Henkel headquarters are located. The hackers were able to monitor all activities running on the web server and reached systems which didn't have direct internet access. The company confirmed the Winnti incident and issued the following statement: "The cyberattack was discovered in the summer of 2014 and Henkel promptly took all necessary precautions." Henkel claims that a "very small portion" of its worldwide IT systems had been affected: the systems in Germany. According to Henkel, there was no evidence suggesting that any sensitive data had been diverted.

Other than Henkel, Winnti also targeted companies like Covestro, a manufacturer of adhesives, lacquers and paints; Japan's biggest chemical company, Shin-Etsu Chemical; and Roche, one of the largest pharmaceutical companies in the world. Winnti hackers also penetrated the BASF and Siemens networks. A BASF spokeswoman says that in July 2015, hackers had successfully overcome "the first levels" of defense. "When our experts discovered that the attacker was attempting to get around the next level of defense, the attacker was removed promptly and in a coordinated manner from BASF's network." She added that no business-relevant information had been lost at any time. According to Siemens, they were penetrated by the hackers in June 2016. "We quickly discovered and thwarted the attack," a Siemens spokesperson said.

Winnti hackers also involved in political espionage

The hacker group is also interested in penetrating political targets, and there were several such indicators according to the report. The Hong Kong government was spied on by the Winnti hackers: the reporters found four infected systems with the help of an nmap network scan, and proceeded to inform the government by email. The reporters also found that a telecommunications provider from India had been infiltrated; the company happens to be located in the region where the Tibetan government-in-exile has its headquarters. Incidentally, the relevant identifier in the malware is called "CTA," and a file which ended up on VirusTotal in 2018 contains a straightforward keyword: "tibet".

Beyond this, the report also throws light on attacks which were not directly related to political espionage but had connections among them. For example, the team found that Marriott hotels in the USA were attacked by the hackers, and the networks of the Indonesian airline Lion Air were also penetrated. The attackers wanted data on where people travel and where they were located at any given time. The team confirmed this by showing the relevant coded files in the report. To read the full research report, check out the official German broadcaster's website.

Hackers steal bitcoins worth $41M from Binance exchange in a single go!
VLC media player affected by a major vulnerability in a 3rd library, libebml; updating to the latest version may help
An IoT worm Silex, developed by a 14 year old resulted in malware attack and taking down 2000 devices

Tackle trolls with Machine Learning bots: Filtering out inappropriate content just got easy

Amarabha Banerjee
15 Aug 2018
4 min read
The most feared online entities in the present day are trolls. Trolls, a fearsome bunch of fake or pseudonymous online profiles, tend to attack online users, mostly celebrities, sports persons or political profiles, using a wide range of methods. One of these methods is to post obscene or NSFW (Not Safe For Work) content on your profile or website where user-generated content (UGC) is allowed. This can create unnecessary attention and cause legal trouble for you too.

The traditional way out is to get a moderator (or a team of them) and let all the UGC pass through this moderation system. This is a sustainable solution for a small platform. But if you are running a large-scale app, say a publishing app where you publish one hundred stories a day, and the success of these stories depends on user interaction with them, then this model of manual moderation becomes unsustainable. More UGC means a longer turn-around time and a larger moderation team, which results in escalating costs for a purpose that's not contributing to your business growth in any manner.

That's where machine learning can help. Machine learning algorithms that can scan images and content for possibly abusive or adult material are a better solution than manual moderation. Tech giants like Microsoft, Google and Amazon have ready solutions for this: they have created APIs which are commercially available to developers, and you can incorporate these APIs in your application to weed out the filth served by the trolls. The different APIs available for this purpose are Microsoft moderation, Google Vision, AWS Rekognition and Clarifai.

Dataturks have made a comparative study of these APIs on one particular dataset to measure their efficiency. They used a YACVID dataset with 180 images, manually labelling 90 of these images as nude and the rest as non-nude. The dataset was then fed to the 4 APIs mentioned above, and their efficiency was tested based on the following parameters:

True Positive (TP): Given a safe photo, the API correctly says so.
False Positive (FP): Given an explicit photo, the API incorrectly classifies it as safe.
False Negative (FN): Given a safe photo, the API is not able to detect it as safe.
True Negative (TN): Given an explicit photo, the API correctly says so.

TP and TN are the two cases in which the system behaved correctly. An FP means that the app is vulnerable to attacks from trolls; a high FN rate means the efficiency of the system is low and hence not practically viable. Around 10% of the cases would be such that the API can't decide whether the image is explicit or not; those would be sent for manual moderation, which would bring down the maintenance cost of the moderation team. The results that they received are shown below:

Source: Dataturks

As is evident from the above table, the best standalone API is Google Vision with a 99% accuracy and 94% recall value. Recall value implies that if the same images are repeated, it can recognize them with 94% precision. The best results, however, were obtained with the combination of Microsoft and Google. The comparison of the response times is shown below:

Source: dataturks

The response time might have been affected by the fact that all the images accessed by the APIs were stored in Amazon S3, so the AWS API might have had an unfair advantage on response time. The timings were noted for 180 image calls per API. The cost is the lowest for AWS Rekognition: $1 for 1,000 calls to the API. It's $1.2 for Clarifai and $1.5 for both Microsoft and Google. (A minimal sketch of calling one of these APIs from Python follows below.)
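As an illustration of what wiring one of these services into a moderation pipeline can look like, here is a minimal, hedged sketch using Google Cloud Vision's SafeSearch detection via the official google-cloud-vision Python client (2.x-style imports are assumed). The file name and the likelihood threshold are placeholder assumptions for this sketch, and credentials and billing must already be configured; it is not the exact setup used in the Dataturks study.

from google.cloud import vision

def is_probably_explicit(image_path):
    # Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service account key.
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    annotation = client.safe_search_detection(image=image).safe_search_annotation

    # Likelihood is an enum ranging up to VERY_LIKELY; treating LIKELY or above
    # as "explicit" is an arbitrary policy choice made for this sketch.
    threshold = vision.Likelihood.LIKELY
    return (annotation.adult >= threshold
            or annotation.violence >= threshold
            or annotation.racy >= threshold)

if __name__ == "__main__":
    print(is_probably_explicit("user_upload.jpg"))  # hypothetical upload file

Images that come back in the borderline likelihood bands are exactly the roughly 10% of cases the study suggests routing to a small human moderation team.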
The one notable drawback of the Amazon API was that the images had to be stored as S3 objects, or converted into them; all the other APIs accepted any web link as a possible source of images. What this study shows is that filtering out negative and explicit content in your app is much easier now. You might still need a small team of moderators, but their jobs will be made a lot easier by the ML models behind these APIs. Machine learning is paving the way for us to be safe from the increasing menace of trolls, a threat to free speech and the open sharing of ideas that were the foundation stones of the internet and the world wide web as a whole. Will this discourage trolls from continuing their slandering, or will they build counter systems to bypass the APIs and checks? We can only know in time.

Facebook launches a 6-part Machine Learning video series
Google's new facial recognition patent uses your social network to identify you!
Microsoft's Brad Smith calls for facial recognition technology to be regulated