
Tech Guides

You Can’t Disrupt the Print Book

Richard Gall
12 Feb 2016
4 min read
We live in a time of relentless technological change, with the mantra of innovation lingering over our everyday experience – in our jobs and our private lives. But one technology has endured for more than half a millennium – the print book. As the world around us has changed, the print book has proven to be a reliable way of delivering and sharing stories.

The digital age has long looked like the death knell for the print book. To a certain extent, you can trace the transformation of content alongside changes in the types of devices we use. Interactive, short, and easily digestible – these are the sorts of qualities that seemed antithetical to the print book. Yet the pattern of disruption has somehow failed. Even as we have grown to love our Kindles and iPads (and we know many of you adore eBooks!), print books still show no sign of disappearing. The New York Times noted that digital books accounted for 20% of the market in 2014, 'roughly the same as they did a few years ago'. This was a sign not so much that eBooks had failed to displace print, but that sales had plateaued, neatly integrating into our consumption habits and lifestyles rather than transforming them. Perhaps this shows that the next big thing isn't always the best – a lesson for anyone working in tech, who lives their life committed to innovation and improvement.

Arguably, print books have never been more important. At a time when just about every interface we encounter is electronic, from the electric light that stares out at us as we work to the connectivity we carry in our handbags and pockets, print books are an alternative interface that reminds us there's another world out there – that there's another way to learn and explore new ideas. If your computer and your mobile connect you, your print book disconnects you. It says: stop. I'm focusing on this for a minute; I'm focusing on me.

It's also in that moment when you disconnect that problems look different and you can tackle them in a way you had never thought of before. Sometimes the answer isn't going to be found by simply gazing at your IDE – you need to switch off and withdraw into a book. It's easy to think that problem solving is all about connecting, talking, networking, but sometimes it isn't. It's about reading, thinking, and deliberating.

Print books don't simply offer a way to solve a problem without a backlight – they also become material evidence of your ideas and experience. In our frameless and flat online world, where everything is available at a few clicks of a button, it can be difficult to orient yourself – eBook collections can be great, but they can easily be submerged in the mountains of data on your hard drive, alongside the films you'll never watch and the songs you'll never listen to. And it's not just about the books you have read – the print books in your library, however small, are important too: they signal where you might go next, what skills you might learn. They let you visualize the future in a way that even your Packt account isn't quite able to do (although we're working on that…).

So if you want to take control of your skills, and your imagination, maybe it's time to rethink the value of an old-fashioned print book. Here at Packt we're dedicated to innovation, driven by what's new, but sometimes even we have to concede that the old stuff is the best… If you want to upgrade your Packt eBooks to print, it's simple – and you can grab the print copy at half-price! Simply follow these steps:

Getting Started with DevOps

Michael Herndon
10 Feb 2016
7 min read
DevOps requires you to know many facets of people and technology. If you're interested in starting your journey into the world of DevOps, take the time to know what you are getting yourself into, be ready to put in some work, and be ready to push out of your comfort zone.

Know What You're Getting Yourself Into

Working in a DevOps job where you're responsible for both coding and operational tasks means that you need to be able to shift mental gears. Mental context switching comes at a cost. You need to be able to pull yourself out of one mindset and switch to another, and you need to be able to prioritize. Accept your limitations and know when it's prudent to request more resources to handle the load.

The amount of context switching will vary depending on the business. Let's say that you join a startup and you're the only DevOps person on the team. In this scenario, you're most likely the operations team while still being responsible for some coding tasks as well. This means that you need to tackle operations tasks as they come in. In this instance, Scrum and Agile will only carry you so far; you'll have to take more of a GTD (Getting Things Done) approach.

If you come from a development background, you will be tempted to put coding first because you have deadlines. However, if you are the operations team, then operations must come first. When you become part of the operations team, employees at your business are now your customers too. Some days you can churn out code; other days are going to be an onslaught of important, time-sensitive requests.

At the business that I currently work for, I took on the DevOps role so that other developers could focus on coding. One of the developers that I work with has exceptional code output. However, operational tasks were impeding their productivity. It was an obvious choice for me to jump in and take over the operational tasks so that the other developer could focus his efforts on bringing new features to customers. It's simply good business. Ego can get in the way of good business and DevOps. Leave your ego at home.

In a bigger business, you may have a DevOps team where there is more breathing room to focus on the things that you're more interested in, whether that's more coding or working with systems.

Emergencies happen. When an emergency arises, you need to be able to calmly assess the situation, avoid the blame game, and provide solutions. Don't react. Respond. If you're too excitable or easily get caught up in the emotions of a given situation, DevOps may be your trial by fire. Work on pulling yourself outside of a situation so that you can see the whole picture and work towards solving the problem. Never play the blame game. Be the person who gets things done.

Dive Into DevOps

Start small. Taking on too much will overwhelm you and stifle progress. After you've done a few iterations of taking small steps, you'll be further along the journey than you realize.

"It's a dangerous business, Frodo, going out your door. You step onto the road, and if you don't keep your feet, there's no knowing where you might be swept off to." - Bilbo Baggins

If you're a developer, take one of your side projects and set up continuous delivery for it. I would keep it simple and use something like Travis CI or AppVeyor, and have your final output published somewhere. If you're using something like Node, you could set up nightly builds for npm. If it's .NET, you could use a service like MyGet.
The second thing I would do as a developer is focus on learning SSH, security access, and scheduled tasks. One of the things I've seen developers struggle with is locking down systems, so it's worth taking the time to dive into user access permissions. If you're on Windows, learn the Windows Task Scheduler. If you're on Linux, learn to set up cron jobs.

If you're from the operations and systems side of things, pick a scripting language that suits your needs. If you're working for a company that uses Microsoft technology, I'd suggest that you learn the PowerShell scripting language and a language that compiles to .NET, like C# or F#. If you're using open source technologies, I'd suggest learning bash and a language like Ruby or Python. Puppet and Chef use Ruby; SaltStack uses Python. Build a simple web application with the language of your choice. That should give you enough familiarity with a language for you to start creating scripts that automate tasks.

Read DevOps books like Continuous Delivery or Continuous Delivery and DevOps: A Quickstart Guide. Expand your knowledge. Explore tools. Work on your communication skills. Create a list of tasks that you wish to automate. Then make a habit out of reducing that list.

Build a Habit Out of Automating Infrastructure

Make it a habit to find time to automate your infrastructure while continuing to support your business. It's rare to get into a position where automating infrastructure is your sole job, so it's important to be able to carve out time to remove mundane work so that you can focus your time and value on tasks that can't be automated.

A habit loop is made up of three things: a cue, a routine, and a reward. For example, at 2pm your alarm goes off (cue). You go for a short run (routine). You feel awake and refreshed (reward). Design a cue that works for you. For example, every Friday at 2pm you could switch gears to work on automation. Spend some time automating a task or infrastructure need (routine), then find a reward that suits your lifestyle. A reward could be having a treat on Friday to celebrate all the hard work for the week, or going home early (if your business permits this). Maybe learning something new is the reward, and in that case you spend a little time each week with a new DevOps-related technology. Once you've removed some of the repetitive tasks that waste time, you'll find yourself with enough time to take on bigger automation projects that seemed impossible to get to before. Repeat this process ad infinitum (to infinity and beyond).
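To make that list of automatable tasks concrete, here is a minimal, hypothetical sketch in Python of the kind of first script you might write: archiving log files older than thirty days. The directory paths and retention period are placeholders rather than recommendations; once something like this works for your environment, schedule it with cron or the Windows Task Scheduler and cross one item off the list.

#!/usr/bin/env python3
"""Archive log files older than a cutoff age (a hypothetical first automation task)."""
import os
import shutil
import time

LOG_DIR = "/var/log/myapp"              # placeholder: your application's log directory
ARCHIVE_DIR = "/var/log/myapp/archive"  # placeholder: where old logs should be moved
MAX_AGE_DAYS = 30                       # placeholder: retention period before archiving

def archive_old_logs():
    os.makedirs(ARCHIVE_DIR, exist_ok=True)
    cutoff = time.time() - MAX_AGE_DAYS * 24 * 60 * 60
    for name in os.listdir(LOG_DIR):
        path = os.path.join(LOG_DIR, name)
        # Skip directories (including the archive itself) and files newer than the cutoff
        if not os.path.isfile(path) or os.path.getmtime(path) > cutoff:
            continue
        shutil.move(path, os.path.join(ARCHIVE_DIR, name))
        print("archived", name)

if __name__ == "__main__":
    archive_old_logs()

It isn't glamorous, but a short, well-commented script like this is exactly the kind of repetitive work that's worth taking off your plate first.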
Lastly, Always Write and Communicate

Whether you plan on going into DevOps or not, the ability to communicate will set you apart from others in your field. In DevOps, communication becomes a necessity because the value you provide may not always be apparent to everyone around you. Furthermore, you need to be able to resolve group conflicts, persuasively elicit buy-in, and provide a vision that people can follow.

Always strive to improve your communication skills. Read books. Write. Work on your non-verbal communication skills. Non-verbal communication is often said to account for as much as 93% of communication, and it's worth knowing that the messages your body language sends could be preventing you from getting your ideas across. Communicating in plain language, pitched at the least technical members of your intended audience, is your goal. Technical and non-technical people alike need to understand the problems, the solutions, and the value that you are giving them.

Learn to use the right adjectives to paint bright illustrations in the minds of your readers and help them conceptualize hard-to-understand topics. The ability to persuade with writing is almost a lost art. It is a skill that transcends careers, disciplines, and fields of study. Used correctly, it lets you provide a vision to guide your business into becoming a lean competitor that delivers exceptional value to customers. At the end of the day, DevOps exists so that you can provide exceptional value to customers. Let your words guide and inspire the people around you.

Off You Go

All this is easier said than done. It takes time, practice, and years of experience. Don't be discouraged and don't give up. Instead, find the things that light up your passion and focus on taking small, incremental steps that allow you to win. You'll be there before you know it.

About the author

Michael Herndon is the head of DevOps at Solovis, creator of badmishka.co, and all-around mischievous nerdy guy.

AngularJS: The Love Affair of the Decade

Richard Gall
05 Feb 2016
6 min read
AngularJS stands at the apex of the way we think about web development today. Even as we look ahead to Angular 2.0, the framework serves as a useful starting point for thinking about the formation of contemporary expectations of what a web developer actually does and the products and services they create. Notably (for me at least), Angular is closely tied up with Packt's development over the past few years. It's had an impact on our strategic focus, forcing us to think about our customers in new ways.

Let's think back to the world before AngularJS. This was back in the days when Backbone.js meant something, when Knockout was doing the rounds. As this article from October has it, AngularJS effectively took advantage of a world suffering from 'framework fatigue'. It's as if there was a 'framework bubble', and it's only when that bubble burst that the way forward became clearer. This was a period of experimentation and exploration; improvement and efficiency were paramount, but a symptom of this was the way in which trends – some might say fads – took hold of the collective imagination. This 'framework bubble', I'd suggest, prefigures the startup bubble we're living in today. Developers were looking for new ways of doing things; they wanted to be more efficient, their projects more scalable, fast, and robust. All those words that are attached to development (in both senses of the word) took on a particular urgency.

As you might expect, this unbelievable pace of growth and change was like catnip for Packt. This insatiable desire for new tools was something that we could tap into, delivering information and learning materials on even the most niche new tools. It was exciting. But it couldn't last.

It was thanks to AngularJS that this changed. Ironically, if AngularJS burst the framework bubble, ending what seemed like an endless stream of potential topics to cover, it also supplied us with some of our most popular titles. AngularJS Web Application Development Cookbook, for example, was a huge success. Written by Matt Frisbie, it helped us to forge a stronger relationship with the AngularJS world. It was weird – its success also brought an end to a very exciting period of growth, where Packt was able to reach out to new customers, small communities that other publishers could not. But we had to grow up. AngularJS was like a friend's wedding; it made us realise that we needed to become more mature, more stable.

But why, we should ask, was AngularJS so popular? Everyone is likely to have their own story, their own experience of adopting AngularJS, and that, perhaps, is precisely the point. Brian Rinaldi, in the piece to which I refer above, notes a couple of things that made Angular a framework to which people could commit. Its ties with Google, for example, gave it a mark of authority and reliability, while its ability to integrate with other frameworks meant developers still had the flexibility to use the tools they wanted while having a single place to which they could return. Brian writes: "The point is, all these integrations not only made the choice of Angular easier, but make leaving harder. It's no longer just about the code I write, but Angular is tied into my entire development experience."

Experience is fundamental here. If the framework bubble was all about different ways of doing the same thing faster and more effectively, today the reverse is true. Developers want to work in one way, but to be able to do lots of things.
It's a change in priorities; the focus of the modern web developer in 2016 has changed. The challenges are different, as mobile devices, SPAs, cloud, and personalization have become fundamental issues for web developers to reckon with. Good web developers look beyond the immediacy of their project, and need to think carefully about users and about how they can deliver a great product or service.

That's what we've found at Packt. The challenges faced by the customers we serve are no longer quite so transparent or simple. If, just a few years ago, we relied upon the simple need to access information about a new framework, today the situation is more nuanced. Many of the challenges are due to changing user behaviour, a fragmentation of needs and contexts. For example, maybe you want to learn responsive web design? Or need to build a mobile app? Of course, these problems haven't just appeared in the last 12 months, but they are no longer additional extras; they are central to success. It's these problems that have had a part in causing the startup bubble – businesses solving (or, if they're really good, disrupting) customer needs with software.

A framework such as React might be seen as challenging AngularJS. But despite its dedicated, almost evangelical core of support, it's nevertheless relatively small. And it would also be wrong to see the emergence of React (alongside other tools, including Meteor) as a return to the heady days of the framework bubble. Instead it has grown out of a world shaped by Angular – it is, remember, a tool designed to build a very specific type of application. The virtual DOM, after all, is an innovation that helps deliver a truly immediate and fast user experience. The very thing that makes React great is why it won't supplant Angular – why would it even want to? If you do one thing, and do it well, you're adding value that people couldn't get from anywhere else.

Fear of obsolescence – that's the world into which AngularJS entered, and the world in which Packt grew. But today, the greatest fear isn't so much obsolescence; it's 'Am I doing the right thing for my users? Are my customers going to like this website – this new app?' So, as we await Angular 2.0, don't forget what AngularJS does for you – don't forget the development experience, and don't forget to think about your users. Packt will be ready when you want to learn 2.0 – but we'll also still have the insights and guidance you need to do something new with AngularJS. Progress and development isn't linear; it's never a straight line. So don't be scared to explore, to rediscover what works. It's not always about what's new; it's about what's right for you.

Save up to 70% on some of our very best web development titles from 11th to 17th April. From Flask to React to Angular 2, it's the perfect opportunity to push your web development skills forward. Find them here.

Angular 2 in the new world of web dev

Owen Roberts
04 Feb 2016
5 min read
This week at Packt we're all about Angular, and with the release of Angular 2 just on the horizon there's no better time to be an Angular user. Our first book on Angular was Mastering Web Application Development with AngularJS back in 2013, and it's amazing to see how different the JS landscape is from what it was just 3 or 4 years ago.

How so? Well, Backbone was expected to lord over other frameworks as The Top Dog, while others like Ember and Knockout were carving out their own respectable niches and fanbases. When Angular started to pick up steam it was seen as a breath of fresh air thanks to its simplicity and host of features. Compared to the more niche-driven frameworks of the time, the appeal of the Google-led powerhouse drove developers everywhere to give it a go, and managed to keep them hooked.

Of course, web dev is a different world than it was in 2013. We've seen the growth of full-stack JS development, JS promises are coming into wider use, components are the latest step in building web apps, and a host of new frameworks and libraries have burst onto the scene as older ones begin to fade into the background. Libraries like React and Polymer are fantastic alternatives to frameworks for developers who want to pick and choose the best stuff for their apps, while Ember has gone from strength to strength in the last few years with a diehard fanbase. A different world means that rewriting Angular from the ground up for 2.0 makes sense, but it's not without its risks. So, what does Angular need to avoid falling behind? Here are a few ideas (and hopes!).

Ease of use

One of Angular's greatest strengths was how easy it was to use; not just in the actual coding, but also in integration. Angular has always had that bonus over the competition – one of the biggest reasons it became so popular was because so many other projects allowed for easy Angular integration. However, the other side of the coin was Angular's equally difficult learning curve; before the books and tutorials found their way onto the market, everyone was trying to find as much as they could about Angular in order to get the most out of the more complex or difficult parts of the framework. With 2.x being a complete rewrite, every developer is back in the same place again. What the Angular team needs to ensure is that Angular is just as welcoming as its new competition – React, Ember, and even Polymer offer a host of ways to get into their development mindsets. Angular needs to do the same.

Debugging

Does anyone actually like debugging? My current attempts at Python usually grind to a halt when I reach the debugging phase, and for a lot of developers there's always that whisper of "urgh" under their breath when they finally get around to bugs. Angular isn't any different, and you can find a lot of articles and Stack Overflow questions all about debugging in Angular. For what it's worth, the Angular team seems to have learnt from its experiences with 1.x. They've worked directly with the team at Rangle.io to create Batarangle, a Chrome plugin that inspects Angular 2 apps. Only time will tell how well debugging in Angular will work for every developer, but this is the sort of thing that the Angular team needs to give developers – work with other teams to build better tools that help developers breeze through the more difficult tasks.

The future devs vs the old

With the release of Angular 2 in the coming months, we're going to see React and Angular 2 fight for dominance as the de facto framework in the JS market.
The rewrite of Angular is arguably both the biggest weakness and the biggest strength that Angular 2 offers. For previous Angular 1.x users there are two routes you can go down: take the jump to Angular 2 and learn everything again, or decide the clean slate is an opportunity to give React a try – maybe even stick with it. What does Angular need to do, after the release of 2, to get old users back on the Angular horse?

A few of the writers that I've worked with in the past have talked about Angular as the Lego of the JS world – it's simpler to pick up and everything fits snugly together. There's a great simplicity in building good-looking Angular apps – the team needs to remind more jaded Angular 1.x fans that 2.x is the same Angular they love, rebuilt for the new challenges of 2016 onwards. It's still fun Lego, but shinier.

If you're new to the framework and want to see why it's become such a beloved framework, then be sure to check out our Angular tech page; this page has all our best eBooks and videos, as well as the chance to preorder our upcoming Angular 2 titles and download the chapters as soon as they're finished.

Increase your productivity with IPython

Marin Gilles
03 Feb 2016
5 min read
IPython is an alternative Python prompt, adding a lot of new features such as autocompletion, easy history calls, code coloration, and much more. It is also the project that started developing the IPython notebooks, now known as Jupyter notebooks. IPython implements a lot of helpers, allowing you to get object information much more easily than in the standard Python prompt. As stated in the IPython documentation, the four most helpful commands in the IPython prompt are ?, %quickref, help, and object?, giving very general information about IPython or a particular object. If you read through %quickref, you will see it is very complete, but also quite dense. We will try to pick out some of the most helpful commands for an interactive coding session.

One of the most useful functions added by IPython is %pdb. By typing this in your prompt, any exception raised but uncaught will put you in PDB, the Python debugger. For example, we will have the following:

In [1]: %pdb
Automatic pdb calling has been turned ON

In [2]: 1/0
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<IPython-input-2-05c9758a9c21> in <module>()
----> 1 1/0

ZeroDivisionError: division by zero

> <IPython-input-2-05c9758a9c21>(1)<module>()
----> 1 1/0

ipdb>

On In [1], we enable pdb mode. We then try to compute 1/0, obviously raising an error. The prompt then changes, putting us in ipdb, the IPython debugger (which has all the pdb functionality and more, such as syntax coloration). From there we can start using the debugger just as we would from the shell. As an alternative, you can also call %debug afterwards. You will find more detail in the documentation.

If you've ever done interactive work with the prompt, you know it is good for one-liners, but as soon as you increase the length of your tested code (which happens almost every time), it starts to get difficult, because the prompt was not made to be a text editor. In order to solve this problem, you can use the %edit command. You will be put in the editor defined in your $EDITOR environment variable, or a default text editor. You can re-edit a previously edited code part using:

%edit _

Or, for a precise history item:

Out[5]: 'print("Hello, World!")\n'
...
In [12]: %edit _5

This will edit the print statement created during the edit on Out[5]. Using this function, trying out functions or classes becomes much easier (remember when you had to go back 20 lines by holding down the left arrow, and it took you about 3 years?). See the documentation for more information.

Easy repetition

During a session, you will often find yourself repeating lines of code. Even though using the up arrow is very useful, if you have to repeat 20 lines of code each time, it gets quite time-consuming. To avoid doing this, you can use the %macro command. It allows you to repeat a series of commands from your history. For example, if you typed the commands:

In [1]: a = 1
In [2]: b = 2
In [3]: c = 3
In [4]: d = 4
In [5]: e = 5

You can repeat some of these commands by defining a macro:

In [6]: %macro my_macro 1-3 5

which will repeat the commands 1 to 3 and 5, skipping 4. In fact, you can define a macro with the commands in any order you want. You can, for example, write:

%macro my_macro 5 2 3 1 4

executing the commands in the macro in the order you defined.
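Once defined, the macro behaves like any other name in your session: typing it re-runs the recorded commands. As a quick sketch (the prompt numbers are illustrative and assume the history above):

In [7]: my_macro

In [8]: a + b + c + e
Out[8]: 11

Because the macro recorded history items 1-3 and 5, invoking it simply re-executes those assignments, so a, b, c, and e are all available again afterwards.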
You can also make a macro from a script, just by passing the filename:

%macro my_macro filename.py

By calling my_macro, you will execute every command loaded from the file. For more details, see the documentation.

Single file loading

You can also load any Python file into your environment with the %load command:

%load filename.py

You will load every object defined in the file and make them available for use. You can also choose to load only certain symbols by using:

%load filename.py -s symbols

This is useful if you want a single function from a file without loading everything.

Increase your productivity with editing tools

External editing and macros are very useful tools, but they are session-specific. However, you can save them for later, just by using the %save command.

%save macro.py my_macro

This will save the macro my_macro in the file macro.py in the current working directory. If you want to save an edited file, from the history element 53, you can do:

%save edited.py _53

which will create the file edited.py in the current directory. Using %edit, %macro, and %load with the %save command can enable you to increase your productivity a lot. If you want to generate a testing environment with predefined objects, create it once with %edit, then save it. The next time you want to use this environment, you can either %load it, or put it in a %macro if you plan to reuse it quite often.

Increase your productivity even further by finding out how to customize IPython in our next article.

About the author

Marin Gilles is a PhD student in Physics in Dijon, France. A large part of his work is dedicated to physical simulations, for which he developed his own simulation framework using Python, and he has contributed to open-source libraries such as Matplotlib and IPython.

OOCSS!

Liz Tom
02 Feb 2016
6 min read
Object Oriented Programming is something you've probably heard a lot about. While CSS is not a programming language, you can still use Object Oriented principles to write highly maintainable CSS.

One problem I have when jumping into an already existing code base is figuring out how the heck everything is styled. What's the worst case scenario I've ever heard about? A project with over 100 !important tags on various link classes all over the code base, because an !important was placed on the base a tag. What happens when you inherit CSS files structured like this? Sometimes you're good and make things better, but often a combination of timeline, other things on your plate, and a little bit of laziness contributes to you adding things that make the code base even worse.

What is OOCSS?

First off, what is Object Oriented CSS? How abstract should you get? These are some important questions. One, Object Oriented CSS is about abstraction. Two, you shouldn't be so abstract that new people coming onto the project have no idea what your classes mean. If you have a bunch of names like .blue, .padding-left-10, .big, .small, you're basically giving your CSS the same maintainability as if you had inline styles. There's a good balance between abstraction and practicality.

BEM!

I've used BEM on personal projects and with a team. I love it. I don't want to go back to writing my CSS without using the BEM naming convention. Here are the basics:

BEM stands for block__element--modifier. Block is the component you're building. Element is an element of the component (for example: header, text, image, icon). Modifier modifies a component. The trick is you don't want to do something like this: block__element--modifier__element or block__element--modifier--modifier.

If you're using a preprocessor, you can break your files up into components so that each file deals with only one block. This makes it very easy for other developers joining a project to find your component. Don't go more than 2 levels deep. Keep your CSS as flat as possible, and it's okay to have long class names.

The arguments against BEM include things like: "It makes my markup ugly." I would rather have more maintainable CSS and HTML than pretty CSS and HTML. Also, after working with BEM for about a year now, the syntax has grown on me and now I find it beautiful. If you'd like a more in-depth view of BEM, you should check out this great post.

How Does This All Fit Together?

You have to find out what works and what doesn't work for you. I've found that I prefer to think of everything on my page as various components that I can use anywhere. I borrow a little from SMACSS and a little from BEM (you can read a lot more about both subjects), and I've created my own way of writing CSS (I use SCSS; you use what makes you comfy). I like to keep layout out of the equation when writing components (thanks SMACSS!). I'm going to use generic layout classes because everyone likes their own particular framework.

Basically, you have a basic card that you're trying to create. You create a base component named card, placed in a file named card.scss or whatever. Each component contains the elements and modifiers below it. I personally like to nest within my modifiers. A 'large card' should always have a .card__header that has a green background. I don't like to make a class like .card__header--large. I keep both the .card class and the .card--large class on my div.
This way I get all the classes that a card has, and I can also modify the parts I want with a --large modifier. Different people have different opinions on this, but I have found it works great for maintainability as well as for ease of copy-pasting markup into various parts of your page. Your CSS can look a little something like this:

.card {
  color: $blue;
}

.card__header {
  font-size: 1.2rem;
  background-color: $red;
}

.card__header--blue {
  background-color: $blue;
}

.card__title {
  color: $green;
}

.card__author {
  font-size: rem-calc(20);
}

...

.card--large {
  font-size: rem-calc(40);

  .card__header {
    background-color: $green;
  }

  .card__title {
    font-size: 2.0rem;
  }
}

Now for your markup:

<div class="column">
  <div class="card">
    <div class="card__header">
      <h2 class="card__title">Hi</h2>
      <h3 class="card__author">I'm an author</h3>
    </div>
    <div class="card__body">
      <p class="card__copy">I'm the copy</p>
      <img class="card__thumbnail">
    </div>
  </div>
</div>

<div class="column">
  <div class="card card--large">
    <div class="card__header">
      <h2 class="card__title">Hi</h2>
      <h3 class="card__author">I'm an author</h3>
    </div>
    <div class="card__body">
      <p class="card__copy">I'm the copy</p>
      <img class="card__thumbnail">
    </div>
  </div>
</div>

Conclusion

Object Oriented principles can help to make your CSS more maintainable. At the end of the day, it's what works for your team and for your personal taste. I've shown you how I like to organize my CSS. I'd love to hear back from you with what's worked for you and your teams!

Discover more object-oriented principles in our article on mutability and immutability now! From the 11th to the 17th of April, save up to 70% on some of our top web development titles. From Angular 2 to Flask, we're sure you'll find something to keep you learning... Find them here.

About the author

Liz Tom is a Software Developer at Pop Art, Inc in Portland, OR. Liz's passion for full stack development and digital media makes her a natural fit at Pop Art. When she's not in the office, you can find Liz attempting parkour and going to check out interactive displays at museums.
Firebase and React

Asbjørn Enge
27 Jan 2016
5 min read
Firebase is a realtime database for the modern web. Data in your Firebase is stored as JSON and synchronized in realtime to every connected client. React is a JavaScript library for creating user interfaces. It is declarative, composable, and promotes functional code. React lets you represent your UI as a function of its state. Together they are dynamite! In this post we will explore creating a React application with a Firebase backend. We will take a look at how we can use Firebase as a Flux store to drive our UI.

Create a project

Our app will be an SPA (Single Page Application) and we will leverage modern JavaScript tooling. We will be using npm as a package manager, babel for ES2015+ transpilation, and browserify for bundling together our scripts. Let's get started.

$ mkdir todo && cd todo
$ npm init
$ npm install --save react@0.14.0-rc1 react-dom@0.14.0-rc1 firebase
$ npm install --save-dev budo babelify

We have now made a folder for our app, todo. In it, we have created a new project using npm init (defaults are fine) and we have installed React, Firebase, and some other packages. I didn't mention budo before, but it is a great browserify development server.

NOTE: At the time of writing, React is at version 0.13.3. In the next major release (0.14), React itself is decoupled from the DOM rendering. In order for this blog post not to be obsolete in a matter of weeks, we use the 0.14.0-rc1 version of React. Since we intend to render DOM nodes from our React components, we also need to install the react-dom module.

React basics

Let's start by creating a basic React application. We will be making a list of TODO items. Make a file index.js in the todo directory and open it in your favorite editor.

import React from 'react'
import ReactDOM from 'react-dom'

class TodoApp extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      todos : [
        { text : 'Brew coffee', id : 1 },
        { text : 'Drink coffee', id : 2 }
      ]
    }
  }
  render() {
    let todos = this.state.todos.map((todo) => <li key={todo.id}>{todo.text}</li>)
    return (
      <div className="todoAppWrapper">
        <ul>
          {todos}
        </ul>
      </div>
    )
  }
}

ReactDOM.render(<TodoApp />, document.body)

Now we can run that application in a browser using budo (browserify under the hood).

$ budo index.js --live -- -t babelify

Navigate to http://localhost:9966/ and verify that you can see our two TODO items on the page. Notice the --live parameter we pass to budo. It automatically enables livereload for the bundle. If you're not familiar with it, get familiar with it!

Set up Firebase

For setting up a new Firebase, check out the Firebase documentation on the subject.

FireFlux

To build large applications with React, a good approach is to use the Flux application architecture. At the heart of Flux sits the store. A Flux store holds application state and triggers events whenever that state changes. Components listen to these events and re-render accordingly. This aligns perfectly with how Firebase works. Firebase holds your data/state and triggers events whenever something changes. So what we are going to do is use our Firebase as a Flux store. We'll call it FireFlux :-) Make a file fireflux.js in the todo directory and open it in your favorite editor.
import Firebase from 'firebase/lib/firebase-web'
import { EventEmitter } from 'events'

const ref = new Firebase('https://<name>.firebaseio.com/todos')
const fireflux = new EventEmitter()

fireflux.store = {
  todos : []
}

fireflux.actions = {
  addTodo : function(text) {
    ref.push({ text : text })
  },
  removeTodo : function(todo) {
    ref.child(todo.id).remove()
  }
}

ref.on('value', (snap) => {
  let val = snap.val() || []
  if (typeof val == 'object') val = Object.keys(val).map((id) => {
    let todo = val[id]
    todo.id = id
    return todo
  })
  fireflux.store.todos = val
  fireflux.emit('change')
})

export { fireflux as default }

Notice we import the firebase/lib/firebase-web library from the Firebase module. This module includes both a browser and a node version of the Firebase library; we want the browser version. The fireflux object is an EventEmitter. This means it has functionality like .on() to listen for events and .emit() to trigger them. We attach some additional objects: store and actions. The store will hold our todos, and the actions are just convenience functions to interact with our store. Whenever Firebase has updated data - ref.on('value', fn) - it will update fireflux.store.todos and trigger the change event. Let's see how we can hook this up to our React components.

import React from 'react'
import ReactDOM from 'react-dom'
import fireflux from './fireflux'

class TodoApp extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      todos : []
    }
  }
  render() {
    let todos = this.state.todos.map((todo) => {
      return (
        <li key={todo.id}>
          <button onClick={this.removeTodo.bind(this, todo)}>done</button>
          {todo.text}
        </li>
      )
    })
    return (
      <div className="todoAppWrapper">
        <button onClick={this.addTodo}>Add</button>
        <ul>
          {todos}
        </ul>
      </div>
    )
  }
  addTodo() {
    let todo = window.prompt("Input your task")
    fireflux.actions.addTodo(todo)
  }
  removeTodo(todo) {
    fireflux.actions.removeTodo(todo)
  }
  componentDidMount() {
    fireflux.on('change', () => {
      this.setState({ todos : fireflux.store.todos })
    })
  }
}

ReactDOM.render(<TodoApp />, document.body)

First, take a look at TodoApp's componentDidMount. It sets up a listener for FireFlux's change event and updates the component's state accordingly. Calling this.setState on a React component triggers a re-render of the component. We have also included an Add button and some done buttons. They make use of fireflux.actions.* to interact with Firebase. Give them a try and notice how the interface automatically updates when you add and finish items. Hopefully you can now hit done for the last one!

About the author

Asbjørn Enge is a software enthusiast living in Sandes, Norway. He is passionate about free software and the web. He cares about modular design, simplicity, and readability, and his preferred languages are Python and JavaScript. He can be found on Twitter @asbjornenge.

This Year in Machine Learning

Owen Roberts
22 Jan 2016
5 min read
The world of data has really boomed in the last few years. When I first joined Packt, Hadoop was The Next Big Thing on the horizon, and what people are now doing with all the data we have available to us would have been unthinkable. Even in the first few weeks of 2016 we're already seeing machine learning being used in ways we probably wouldn't have thought about even a few years ago – we're using machine learning for everything from discovering a supernova that was 570 billion times brighter than the sun to attempting to predict this year's Super Bowl winners based on past results. So what else can we expect in the next year for machine learning, and how will it affect us? Based on what we've seen over the last three years, here are a few predictions about what we can expect to happen in 2016 (with maybe a little wishful thinking mixed in too!).

Machine Learning becomes the new Cloud

Not too long ago every business started noticing the cloud, and with it came a shift in how companies were structured. Infrastructure was radically adapted to take full advantage of the benefits that the cloud offers, and it doesn't look to be slowing down, with Microsoft recently promising to spend over $1 billion providing free cloud resources for non-profits. Starting this year, it's plausible that we'll see a new drive to also bake machine learning into the infrastructure. Why? Because every company will want to jump on that machine learning bandwagon! The benefits and boons to every company are pretty enticing – ML offers everything from grandiose artificial intelligence to the much more mundane, such as improvements to recommendation engines and targeted ads; so don't be surprised if this year everyone attempts to work out what ML can do for them and starts investing in it.

The growth of MLaaS

Last year we saw Machine Learning as a Service appear on the market in bigger numbers. Amazon, Google, IBM, and Microsoft all have their own algorithms available to customers. It's a pretty logical move, and one that's not at all surprising. Why? Well, for one thing, data scientists are still as rare as unicorns. Sure, universities are creating new courses and training has become more common, but the fact remains we won't be seeing the benefits of these initiatives for a few years. Second, setting up everything for your own business is going to be expensive. Lots of smaller companies simply don't have the money to invest in their own personal machine learning systems right now, or the time needed to fine-tune them. This is where sellers are going to be putting their investments this year – the smaller companies who can't afford a full ML experience without outside help.

Smarter Security with better protection

The next logical step in security is tech that can sense when there are holes in its own defenses and adapt to them before trouble strikes. ML has been used in one form or another for several years in fraud prevention, but in the IT sector we've been relying on static rules to detect attack patterns. Imagine if systems could detect irregular behavior accurately, or set up risk scores dynamically in order to ensure users had the best protection they could at any time? We're a long way from this being fool-proof, unfortunately, but as the year progresses we can expect to see the foundations of this start to appear. After all, we're already starting to talk about it.
Machine Learning and the Internet of Things combine

We're already nearly there, but with the rise in interest in the IoT we can expect these two powerhouses to finally combine. The perfect dream for IoT hobbyists has always been like something out of the Jetsons or Wallace and Gromit – when you pass that sensor by the frame of your door in the morning, your kettle suddenly springs to life so you're able to have that morning coffee without waiting like the rest of us primals; but in truth the Internet of Things has the potential to be so much more than just making the lives of hobbyists much easier. By 2020 it is expected that over 25 billion 'Things' will be connected to the internet, and each one will be collating reams and reams of data. For a business with the capacity to process the data it collects, the insight is a huge boon for everything from new products to marketing strategy. For IoT to really live up to the dreams we have for it, we need a system that can recognize and collate relevant data, which is where an ML system is sure to take center stage.

Big things are happening in the world of machine learning, and I wouldn't be surprised if something incredibly left-field happens in the data world that takes us all by surprise, but what do you think is next for ML? If you're looking to either start getting into the art of machine learning or boost your skills to the next level, then be sure to give our Machine Learning tech page a look; it's filled with our latest and greatest ML books and videos out right now, along with the titles we're releasing soon, available to preorder in your format of choice.

Is Your Machine Learning Plotting To Kill You?

Sam Wood
21 Jan 2016
4 min read
Artificial Intelligence is just around the corner. Of course, it's been just around the corner for decades, but in part that's down to our own tendency to move the goalposts about what 'intelligence' is. Once, playing chess was one of the smartest things you could do. Now that a computer can easily beat a Grand Master, we've reclassified it as just standard computation, not requiring proper thinking skills. With the rise of deep learning and the proliferation of machine learning analytics, we edge ever closer to the moment when a computer system will be able to accomplish anything and everything better than a human can. So should we start worrying about SkyNet? Yes and no.

Rule of the Human Overlords

Early use of artificial intelligence will probably look a lot like how we use machine learning today. We'll see 'AI-empowered humans' acting as the Human Overlords to their robot servants. These AI are smart enough to come up with the 'best options' to address human problems, but haven't been given the capability to execute them. Think about Google Maps - there, an extremely 'intelligent' artificial program comes up with the quickest route for you to take to get from point A to point B. But it doesn't force you to take it - you get to decide from the options offered which one will best suit your needs. This is likely what working alongside the first AI will look like.

Rise of the Driverless Car

The problem is that we are almost certainly going to see the power of AI increase exponentially - and any human greenlighting will become an increasingly inefficient part of the system. In much the same way that we'll let the Google Maps AI start to make decisions for us when we let it drive our driverless cars, we'll likely start turning more and more of our decisions over for AI to take responsibility for. Super-smart AI will also likely be able to comprehend things that humans just can't understand. The mass of data it has analysed will be beyond any one human's ability to judge effectively. Even today, financial algorithms are making instantaneous choices about the stock market - with humans just clicking 'yes' because the computer knows best. We've already seen electronic trading glitches leading to economic crises - six years ago! Just how much responsibility might we start turning over to smart machines?

The Need to Solve Ethics

If we've given power to an AI to make decisions for us, we'll want to ensure it has our best interests at heart, right? It's vital to program some sort of ethical system into our AI - the problem is, humans aren't very good at deciding what is and isn't ethical! Think about a simple and seemingly universal rule like 'Don't kill people'. Now think about all the ways we disagree about when it's okay to break that rule - in self-defence, in executing dangerous criminals, to end suffering, in combat. Imagine trying to code all of that into an AI, for every different moral variation. Arguably, it might be beyond human capacity. And as for right and wrong, well, we've had thousands of years of debate about that and we still can't agree exactly what is and isn't ethical. So how can we hope to program a morality system we'd be happy to give to an increasingly powerful AI?

Avoiding SkyNet

It may seem a little ridiculous to start worrying about the existential threat of AI when your machine learning algorithms keep bugging out on you constantly. And certainly, the possibilities offered by AI are amazing - more intelligence means faster, cheaper, and more effective solutions to humanity's problems. So despite the risk of us being outpaced by alien machine minds that have no concept of our human value system, we must always balance that risk against the amazing potential rewards. Perhaps what's most important is just not to be blasé about what 'super-intelligent' means for AI. And frankly, I can't remember how I lived before Google Maps.

Why an algorithm will never win a Pulitzer

Richard Gall
21 Jan 2016
6 min read
In 2012, a year which feels a lot like the very early years of the era of data, Wired published this article on Narrative Science, an organization based in Chicago that uses Machine Learning algorithms to write news articles. Its founder and CEO, Kris Hammond, is a man whose enthusiasm for algorithmic possibilities is unparalleled. When asked whether an algorithm would win a Pulitzer in the next 20 years, he goes further, claiming that it could happen in the next 5 years.

Hammond's excitement at what his organization is doing is not unwarranted. But his optimism certainly is. Unless 2017 is a particularly poor year for journalism and literary nonfiction, a Pulitzer for one of Narrative Science's algorithms looks unlikely, to say the least. But there are a couple of problems with Hammond's enthusiasm. He fails to recognise the limitations of algorithms – the fact that the job of even the most intricate and complex Deep Learning algorithm is very specific and quite literally determined by the people who create it. "We are humanising the machine," he's quoted as saying in a Guardian interview from June 2015. "Based on general ideas of what is important and a close understanding of who the audience is, we are giving it the tools to know how to tell us stories." It's important to notice here how he talks - it's all about what 'we're' doing. The algorithms that are central to Narrative Science's mission are things that are created by people, by data scientists. It's easy to read what's going on as a simple case of the machines taking over. True, perhaps there is cause for concern among writers when he suggests that in 25 years 90% of news stories will be created by algorithms, but in actual fact there's just a simple shift in where labour is focused.

It's time to rethink algorithms

We need to rethink how we view and talk about data science, Machine Learning, and algorithms. We see, for example, algorithms as impersonal, blandly futuristic things. Although they might be crucial to our personalized online experiences, they are regarded as the hypermodern equivalent of the inauthentic handshake of a door-to-door salesman. Similarly, at the other end, the process of creating them is viewed as a feat of engineering: maths and statistics nerds tackling the complex interplay of statistics and machinery. Instead, we should think of algorithms as something creative, things that organize and present the world in a specific way, like a well-designed building. If an algorithm did indeed win a Pulitzer, wouldn't it really be the team behind it that deserves it?

When Hammond talks, for example, about "general ideas of what is important and a close understanding who the audience is", he is referring very much to a creative process. Sure, it's the algorithm that learns this, but it nevertheless requires the insight of a scientist, an analyst, to consider these factors, and to consider how their algorithm will interact with the irritating complexity and unpredictability of reality. Machine Learning projects, then, are as much about designing algorithms as they are programming them. There's a certain architecture, a politics, that informs them. It's all about prioritization and organization, and those two things aren't just obvious; they're certainly not things which can be identified and quantified. They are instead things that inform the way we quantify, the way we label. The very real fingerprints of human imagination, and indeed fallibility, are in the algorithms we experience every single day.
Algorithms are made by people

Perhaps we've all fallen for Hammond's enthusiasm. It's easy to see algorithms as the key to the future, and to forget that really they're just things that are made by people. Indeed, it might well be that they're so successful that we forget they've been made by anyone - it's usually only when algorithms don't work that the human aspect emerges. The data team have done their job when no one realises they are there.

An obvious example: you can see it when Spotify recommends some bizarre songs that you would never even consider listening to. The problem here isn't simply a technical one; it's about how different tracks or artists are tagged and grouped, how they are made to fit within a particular dataset. It's an issue of context - to build a great Machine Learning system you need to be alive to the stories and ideas that permeate the world in which your algorithm operates - if you, as the data scientist, lack this awareness, so will your Machine Learning project.

But there have been more problematic and disturbing incidents, such as when Flickr auto-tagged people of color in pictures as apes, due to the way a visual recognition algorithm had been trained. In this case, the issue is a lack of sensitivity about the way in which an algorithm may work - the things it might run up against when it's faced with the messiness of the real world, with its conflicts, its identities, ideas, and stories. The story of Solid Gold Bomb, too, is a reminder of the unintended consequences of algorithms. It's a reminder of the fact that we can be lazy with algorithms; instead of being designed with thought and care they become a surrogate for it - what's more, they always give us a get-out clause; we can blame the machine if something goes wrong.

If this all sounds like I'm simply down on algorithms, that I'm a technological pessimist, you're wrong. What I'm trying to say is that it's humans that are really in control. If an algorithm won a Pulitzer, what would that imply? It would mean the machines have won. It would mean we're no longer the ones doing the thinking, solving problems, finding new ones.

Data scientists are designers

As the economy becomes reliant on technological innovation, it's easy to remove ourselves, to underplay the creative thinking that drives what we do. That's what Hammond's doing, in his frenzied excitement about his company - he's forgetting that it's him and his team that are finding their way through today's stories. It might be easier to see creativity at work when we cast our eyes towards game development and web design, but data scientists are designers and creators too. We're often so keen to stress the technical aspects of these sorts of roles that we forget this important aspect of the data scientist skillset.

Data Science Is the New Alchemy

Erol Staveley
18 Jan 2016
7 min read
Every day I come into work and sit opposite Greg. Greg (in my humble opinion) is a complete badass. He directly turns information that we've had hanging around for years and years into actual currency. Single-handedly, he generates more direct revenue than any one individual in the business. When we were shuffling seating positions not too long ago (we now have room for that standing desk I've always wanted ❤), we were afraid to turn off his machine in fear of losing thousands upon thousands of dollars. I remember somebody saying "guys, we can't unplug Skynet". Nobody fully knows how it works. Nobody except Greg.

We joked that by turning off his equipment, we'd ruin Greg's on-the-side Bitcoin mining gig that he was probably running off the back of the company network. We then all looked at one another in a brief moment of silence. We were all thinking the same thing - it wouldn't surprise any of us if Greg was actually doing this. We wouldn't know any better.

To many, what Greg does is like modern-day alchemy. In reality, Greg is a data scientist - an increasingly crucial role that helps businesses deliver more meaningful, relevant interactions with their customers. I like to think of them more as new-age alchemists, who wield keyboards instead of perfectly choreographed vials and alembics.

This week - find out how to become a data alchemist with R. Save 50% on some of our top titles... or pick up any 5 for $50! Find them all here!

Content might have been king a few years back. Now, it's data. Everybody wants more - and the people who can actually make sense of it all. By surveying 20,000 developers, we found out just how valuable these roles are to businesses of all shapes and sizes. Let's take a look.

Every Kingdom Needs an Alchemist

Even within quite a technical business, Greg's work lends a fresh perspective on what it is other developers want from our content. Putting the value of direct revenue generation to one side, the insight we've derived from purchasing patterns and user behaviour is incredibly valuable. We're constantly challenging our own assumptions, and spending more time looking at what our customers are actually doing.

We're not alone in taking this increasingly data-driven approach. In general, the highest data science salaries are paid by large enterprises. This isn't too surprising considering that's where the real troves of precious data reside. At such scale, the aggregation and management of data alone can warrant the recruitment of specialised teams. On average though, SMEs are not too far behind when it comes to how much they're willing to pay for top talent.

Average salary by company size.

Apache Spark was a particularly important focus going forward for folks in the Enterprise segment. What's clear is that data science isn't just for big businesses any more. It's for everybody. We can see that in the growth of data-related roles for SMEs. We're paying more attention to data because it represents the actions of our customers, but also because we've just got more of it lying around all over the place. Irrespective of company size, the range of industries we captured (and classified) was colossal. Seems like everybody needs an alchemist these days.

They Double as Snake Charmers

When supply is low and demand is high in a particular job market, we almost always see people move to fill the gap. It's a key driver of learning. After all, if you're trying to move to a new role, you're likely to be developing new skills.
It's no surprise that Python is the go-to choice for data science. It's an approachable language with some great introductory resources out there on the market, like Python for Secret Agents. It also has a fantastic ecosystem of data science libraries and documentation that can help you get up and running quite quickly.

Percentage of respondents who said they used a given technology.

When looking at roles in more detail, you see strong patterns between the technologies used. For example, those using Python were most likely to also be using R. Diving deeper into the data, you start to notice a lot of crossover between certain segments and the relationships between technologies within them. For example, the Financial sector was more likely to use R, and also paid (on average) higher salaries to those who had a more diverse technical background.

Alchemists Have Many Forms

Back at a higher level, what was really interesting is the natural technology groupings that started to emerge between four very distinct 'types' of data alchemist. "What are they?", I hear you ask.

The Visualizers

Those who bring data to life. They turn what otherwise would be a spreadsheet or a copy-and-paste pie chart into delightful infographics and informative dashboards. Welcome to the realm of D3.js and Tableau.

The Wranglers

The SME all-stars. They aggregate, clean and process data with Python whilst leveraging the functionality of libraries like pandas to their full potential. A jack of all trades, master of all.

The Builders

Those who use Hadoop and other OS tools to deploy and maintain large-scale data projects. They keep the world running by building robust, scalable data platforms.

The Architects

Those who harness the might of the enterprise toolchain. They co-ordinate large-scale Oracle and Microsoft deployments, the sheer scale of which would break the minds of mere mortals.

Download the Full Report

With 20,000 developers taking part overall, our most recent data science survey contains plenty of juicy information about real-world skills, salaries and trends. You can find it on Packtpub.com.

In a Land of Data, the Alchemist is King

We used to have our reports delivered in Excel. Now we have them as notebooks on Jupyter. If it really is a golden age for developers, data scientists must be having a hard time keeping their inboxes clear of all the recruitment spam.

What's really interesting going forward is that the volume of information we have to deal with is only going to increase. Once IoT really kicks off and wearables become more commonly accepted (the sooner the better if you're Apple), businesses of all sizes will find dealing with data overload to be a key growing pain - regardless of industry. Plenty of web services and platforms are already popping up, promising to deliver 'actionable insight' to everybody who can spare the monthly fees. This is fine for standardised reporting and metrics like bounce rate and conversion, but not so helpful if you're working with a product that's unique to you.

Greg's work doesn't just tell us how we can improve our SEO. It shows us how we can make our products better without having to worry about internal confirmation bias. It helps us better serve our customers. That's why present-day alchemists like Greg are heroes.

Redis Cluster Features Overview

Zhe Lin
15 Jan 2016
4 min read
After months of development and testing, Redis 3.0 Cluster was released on April 1st, 2015. A Redis Cluster is a set of Redis instances connected to each other with the gossip protocol, each serving a nonoverlapping subset of the cached data. In this post, I'd like to talk about how users can benefit from it, and also what those benefits cost.

The essence of Redis, as you may already know, is that no matter what kinds of structures it supports, it is simply a key-value caching utility. Things are the same with Redis Cluster. A Redis Cluster is not something that magically shards your data across different Redis instances. The key is still the unit of distribution and is not splittable. For example, if you have a list of 100 elements, it will still be stored under one key, in one instance, no matter how many instances are in the cluster. More precisely, Redis Cluster uses the CRC16 of the key string mod 16384 as the key's slot number (a short sketch of this computation appears a little further down), and each master instance serves some of the 16384 slots, so that each instance takes responsibility only for keys in the slots it owns.

Knowing this, you may soon realize that Redis Cluster finally catches up with the multi-core fashion. As we know, Redis is designed as an asynchronous, single-threaded program: although it is non-blocking, it can use at most one CPU. Since Redis Cluster splits keys across instances by hash and those instances serve data simultaneously, a cluster can use as many CPUs as it has instances, so its total QPS can be much higher than that of a standalone Redis.

Another piece of good news is that Redis instances on different hosts can be joined into one cluster, which means the memory a Redis service can use is no longer limited to one host machine, and you won't have to keep worrying about how much memory Redis may consume three months from now: if memory is about to run out, you can extend capacity by starting some more cluster-mode instances, joining them to the cluster and doing a reshard.

There is also great news for those who turn on persistence (RDB or AOF). When Redis persists data it forks before writing, which can cause noticeable latency if your dataset is really large. But nothing is that large in a cluster, since the data is sharded and each instance persists only its own subset.

The next advantage you should know about is the availability improvement. A Redis Cluster will be much more robust than a standalone Redis if you deploy a slave for each master. The slaves in cluster mode are different from those in standalone mode: they can automatically fail over if their master is disconnected (killed accidentally, cut off by a network fault, and so on). And "the gossip protocol" we mentioned before means there is no central controller in a Redis Cluster, so if one master goes down and is replaced by its slave, the other masters will tell you which instance is the new one to talk to.

Besides the good things Redis Cluster offers, we should also take a look at what a cluster cannot do, or cannot do well. The cluster model Redis chose sacrifices consistency for availability, which is good enough for a data caching solution. But as a consequence you may soon run into problems with multi-key commands like MGET, since Redis Cluster requires that all keys manipulated in a single operation be in one slot (otherwise you'll get a CROSSSLOT error).
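To make the slot mapping mentioned above concrete, here is a minimal, purely illustrative Python sketch of the computation (Redis Cluster uses the CRC16-XMODEM variant; the key names below are made up and no client library is involved):

def crc16_xmodem(data):
    # CRC16-XMODEM: polynomial 0x1021, initial value 0, as used by Redis Cluster
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key):
    # each key belongs to one of 16384 slots; each master owns a range of slots
    return crc16_xmodem(key) % 16384

print(key_slot(b"user:1000"))   # some slot number between 0 and 16383
print(key_slot(b"user:1001"))   # very likely a different slot, possibly on another master

Since two arbitrary keys will usually land in different slots, often served by different masters, a command that touches both has no single instance that can answer it - which is exactly where the restriction below comes from.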
The single-slot restriction is so strong that those operations (not only MGET and MSET, but also EVAL, SUNION, BRPOPLPUSH, and so on) are generally unavailable in a cluster. However, if you deliberately store all your keys in one slot, the cluster loses its meaning; hash tags, sketched in the footnote at the end of this post, are the usual compromise for small groups of related keys. Another practice to avoid is storing huge objects, like overwhelmingly large lists, hashes and sets, which cannot be sharded. You may break a hash down into individual keys, but then you can no longer do an HGETALL. You should also think about how to split lists or sets if you want to take advantage of the cluster.

Those are the things you should know about Redis Cluster if you decide to use it. We must say it's a great improvement in availability and performance, as long as you don't need those particular multi-key commands frequently. So, stay with standalone Redis or proceed to Redis Cluster; it's time to make your choice.
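A footnote on hash tags, as promised: when a key contains a {...} section, Redis Cluster hashes only the part inside the braces, so keys sharing a tag land in the same slot and multi-key commands on them avoid the CROSSSLOT error. A short, illustrative follow-on to the earlier sketch (the key names are again invented):

def key_slot_with_tag(key):
    # hash only the substring between the first '{' and the next '}', if non-empty
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end != -1 and end > start + 1:
            key = key[start + 1:end]
    return key_slot(key)  # reuses key_slot() from the sketch earlier in this post

print(key_slot_with_tag(b"{user:1000}:name"))    # same slot...
print(key_slot_with_tag(b"{user:1000}:email"))   # ...as this one, so MGET on both works

The trade-off, of course, is that everything sharing a tag lives on one master, so use tags sparingly.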

Essential Tools for Go Programming

Nicholas Maccharoli
14 Jan 2016
5 min read
Golang as a programming language is a pleasure to work with, but the reason for this comes in large part from the great community around the language and its modern tool set, both from the standard distribution and third-party tools.

The go command

On a system with go installed, type go with no arguments to see its quick help menu. Here, you will see the basic go commands, such as build, run, get, install, fmt, and so on. Go ahead and take a minute to run go help on some verbs that look interesting; I promise I'll be here when you get back.

Basic side options

The go build and go run commands do what you think they do, as is also the case with go test, which runs any test files in the directory it is passed. The go clean command wipes out all the compiled and executable files from the directory in which it is run. Run this command when you want to force a build to be made entirely from source again. The go version command prints out the version and build info, as you might expect. The go env command is very useful when you want to see exactly how your environment is set up. Running it will show where all your environment variables point and will also make you aware of which ones are still not properly set.

go doc: Which arguments did this take again?

Whenever in doubt, just give go doc a call. Running go doc [Package Name] will give you a high-level readout of the types, interfaces, and behavior defined in this package; that is, go doc net/http will give you all the function stubs and types defined. If you just need to check the order or types of arguments that a function takes, run go doc on the package and use a tool like grep to grab the relevant line, such as:

go doc net/http | grep -i servecontent

This will produce just what we need!

func ServeContent(w ResponseWriter, req *Request, name string, modtime time.Time, content io.ReadSeeker)

If you need more detail on the function or type, just run the go doc command with the package and function name, and you will get a quick description of this function or type.

gofmt

This little tool is quite a time-saver. I mainly use it to ensure that my source files are stylistically correct, and I also use the -s flag to let gofmt simplify my code. Just run gofmt -w on a file or an entire directory to fix up the files in place. After running this command, you should see the proper use of white space and indentation corrected to eight-space tabs. Here is a diff of a file with poor formatting that I ran through gofmt:

Original

package main

import "fmt"

func main() {
hello_to := []string{"Dust", "Trees", "Plants", "Carnivorous plants"}
for _, value := range hello_to {
fmt.Printf("Hello %v!\n", value)
}
}

After running gofmt -w Hello.go

package main

import "fmt"

func main() {
        hello_to := []string{"Dust", "Trees", "Plants", "Carnivorous plants"}
        for _, value := range hello_to {
                fmt.Printf("Hello %v!\n", value)
        }
}

As you can see, the indentation looks much better and reads way easier!

The magic of gofmt -s

The -s flag to gofmt helps clean up unnecessary code; so, the intentionally ignored values in the following code:

hello_to := []int{1, 2, 3, 4, 5, 6}
for count, _ := range hello_to {
        fmt.Printf("%v: Hello!\n", count)
}

would get converted to the following after running -s:

hello_to := []int{1, 2, 3, 4, 5, 6}
for count := range hello_to {
        fmt.Printf("%v: Hello!\n", count)
}

The awesomeness of go get

One of the really cool features of the go command is that go get works seamlessly with code hosted on GitHub as well as repositories hosted elsewhere.
A note of warning

Make sure that $GOPATH is properly set (this is usually exported as a variable in your shell). You may have a line such as "export GOPATH=$HOME" in your shell's profile file.

Nabbing a library off of GitHub

Say we see this really neat library we want to use called fasthttp. Using only the go tool, we can fetch the library and get it ready for use with just:

go get github.com/valyala/fasthttp

Now, all we have to do is import it with the exact same path, and we can start using the library right away! Just type this and it should do the trick:

import "github.com/valyala/fasthttp"

In the event that you want to have a look around in the library you just downloaded with go get, just cd into $GOPATH/src/[path that was provided to the get command] (in this case, $GOPATH/src/github.com/valyala/fasthttp) and feel free to inspect the source files.

I am also happy to inform you that you can use go doc with the libraries you download in exactly the same way as you use it when interacting with the standard library! Try it: type go doc fasthttp (you might want to tack on less, since the output is a little long: go doc fasthttp | less).

Those are only the stock features and options! The go tool is great and gets the job done, but there are also great alternatives to some of its features, such as the godep package manager. If you have some time, I think it's worth the investment to learn!

About the author

Nick Maccharoli is an iOS/backend developer and an open source enthusiast working at a start-up in Tokyo and enjoying the current development scene. You can see what he is up to at @din0sr or github.com/nirma.

The Year of the Python

Sam Wood
04 Jan 2016
4 min read
When we asked developers for our $5 Skill Up report what the most valuable skill was in 2015, do you know what they said? Considering the title of this blog and the big snake image, you can probably guess. Python. Python was the most valuable skill they learned in 2015. But 2015 is over - so what did developers say they're hoping to learn from scratch, or increase their skills in, in 2016? Correct guess again! It's Python.

Despite turning 26 this Christmas (it's the same age as Taylor Swift, you know), the language is thriving. Being set to be the most widely adopted new language for two years running is impressive. So why are people flocking to it? Why are we living in the years of the Python? There are three main reasons.

1. It's being learned by non-developers

In the Skill Up survey, the people who were most likely to mention Python as a valuable skill they had learned also tended not to describe themselves as traditional software developers. The job role most likely to be learning Python was 'Academic', followed by analysts, engineers, and people in non-IT related roles. These aren't the people who live to code - but they are the people who are likely finding the ability to program an increasingly useful professional skill. Rather than working with software every day, they are using Python to perform specific and sophisticated tasks. Much like knowledge of Microsoft Office became the essential office skill of the Nineties and Noughties, it looks like Python is becoming the language of choice for those who know they need to be able to code but don't necessarily define themselves as working solely in dev or IT.

2. It's easy to pick up

I don't code. When I talked to my friends who did code, mumbling about maybe learning and looking for suggestions, they told me to learn Python. One of their principal reasons was that it was so bloody easy! This also ties in heavily to why we see Python being adopted by non-developers. Often learned as a first programming language, the speed and ease with which you can pick up Python is a boon - even with minimal prior exposure to programming concepts. With much less of an emphasis on syntax, there's less chance of tripping up over missing braces or semicolons than with more complex languages. Originally designed (and still widely used) as a scripting language, Python has become extremely effective for writing standalone programs. The shorter learning curve means that new users will find themselves creating functioning and meaningful programs in a much shorter period of time than with, say, C or Java.

3. It's a good all-rounder

Python can do a ton. From app development, to building games, to its dominance of data analysis, to its continued colonization of JavaScript's sovereign territory of web development through frameworks like Django and Flask, it's a great language for anyone who wants to learn something non-specialized. This isn't to say it's a Jack of All Trades, Master of None, however. Python is one of the key languages of scientific computing, aided by fast (and C-based) libraries like NumPy. Indeed, the strength of Python's versatility is the power of its many libraries, which allow it to specialize so effectively.

Welcoming Our New Python Overlords

Python is the double threat - used across the programming world by experienced and dedicated developers, and extensively and heartily recommended as the first language for people to pick up when they start working with software and coding.
By combining ease-of-entry with effectiveness, it's come to stand as the most valuable tech skill to learn for the middle of the decade. How many years of the Python do you think lie ahead?

Taking advantage of SpriteKit in Cocoa Touch

Milton Moura
04 Jan 2016
7 min read
Since Apple announced SpriteKit at WWDC 2013, along with iOS 7, it has been promoted as a framework for building 2D games with high-performance graphics and engaging gameplay. But, as I will show you in this post, by taking advantage of some of its features in your UIKit-based application, you'll be able to add some nice visual effects to your user interface without pulling too much muscle. We will use the latest stable Swift version, along with Xcode 7.1, for our code examples. All the code in this post can be found in this github repository.

SpriteKit's infrastructure

SpriteKit provides an API for manipulating textured images (sprites), animating them and applying image filters, with optional physics simulation and sound playback. Although Cocoa Touch also provides other frameworks for these things, like Core Animation, UIDynamics and AV Foundation, SpriteKit is especially optimized for doing these operations in batch, and performs them at a lower level, transforming all graphics operations directly into OpenGL commands.

The top-level user interface object for SpriteKit is SKView, which can be added to any view controller and is used to present scene objects of type SKScene, composed of possibly multiple nodes with content, that will render seamlessly alongside other layers or views in the application's current view hierarchy. This allows us to add smooth and optimized graphical effects to our application UI, enriching the user experience while keeping our refresh rate at 60 Hz.

Our sample project

To show how to combine typical UIKit controls with SpriteKit, we'll build a sample login screen, composed of UITextFields, UIButtons and UILabels, for our wonderful new WINTER APP. But instead of a boring, static background, we'll add an animated particle effect to simulate falling snow and apply a Core Image vignette filter to mask it under a nifty spotlight-type effect.

1. Creating the view hierarchy

We'll start with a brand new Swift Xcode project, selecting the iOS > Single View Application template and opening the Main Storyboard. In the existing View Controller Scene, we add a new UIView that anchors to its parent view's sides, top and bottom, and change its class from the default UIView to SKView. Also make sure the background color for this view is dark, so that the particles we'll add later have a nice contrast. Now, we'll add a few UITextFields, UILabels and UIButtons to replicate the following login screen. Also, we need an IBOutlet to our SKView. Let's call it sceneView. This is the SpriteKit view where we will add the SKScene with the particle and image filter effect.

2. Adding a Core Image filter

We're done with UIKit for now. We currently have a fully (well, not really) functional login screen, and it's now time to make it more dynamic. The first thing we need is a scene, so we'll add a new Swift class called ParticleScene. In order to use SpriteKit's objects, let's not forget to add an import statement and declare that our class is an SKScene:

import SpriteKit

class ParticleScene : SKScene
{
    ...
}

The way we initialize a scene in SpriteKit is by overriding the didMoveToView(_:) method, which is called when a scene is added to an SKView. So let's do that and set up the Core Image filter.
If you are not familiar with Core Image, it is a powerful image processing framework that provides over 90 filters that can be applied in real time to images, videos and, coincidentally, to SpriteKit nodes of type SKNode. An SKNode is the basic unit of content in SpriteKit, and our SKScene is one big node for rendering. Actually, SKScene is an SKEffectNode, which is a special type of node that allows its content to be post-processed using Core Image filters.

In the following snippet, we add a CIVignetteEffect filter centered on our scene and with a radius equal to the width of our view frame:

override func didMoveToView(view: SKView) {
    scaleMode = .ResizeFill

    // initialize the Core Image filter
    if let filter = CIFilter(name: "CIVignetteEffect") {
        // set the default input parameter values
        filter.setDefaults()
        // make the vignette center be the center of the view
        filter.setValue(CIVector(CGPoint: view.center), forKey: "inputCenter")
        // set the radius to be equal to the view width
        filter.setValue(view.frame.size.width, forKey: "inputRadius")
        // apply the filter to the current scene
        self.filter = filter
        self.shouldEnableEffects = true
    }
    presentingView = view
}

If you run the application as is, you'll notice a nice spotlight effect behind our login form. But we're not done yet.

3. Adding a particle system

Since this is a WINTER APP, let's add some falling snowflakes in the background. Add a new SpriteKit Particle File to the project and select the Snow template. Next, we add a method to set up our particle emitter node, an SKEmitterNode, which hides all the complexity of a particle system:

func startEmission() {
    // load the snow template from the app bundle
    emitter = SKEmitterNode(fileNamed: "Snow.sks")
    // emit particles from the top of the view
    emitter.particlePositionRange = CGVectorMake(presentingView.bounds.size.width, 0)
    emitter.position = CGPointMake(presentingView.center.x, presentingView.bounds.size.height)
    emitter.targetNode = self
    // add the emitter to the scene
    addChild(emitter)
}

To finish things off, let's create a new property to hold our particle scene in the ViewController, present it, and start the emission in the viewDidAppear() method:

class ViewController: UIViewController {
    ...
    let emitterScene = ParticleScene()
    ...
    override func viewDidAppear(animated: Bool) {
        super.viewDidAppear(animated)
        // present the scene on our sceneView outlet (assumed step, so that
        // didMoveToView(_:) runs and sets up the filter and presentingView)
        sceneView.presentScene(emitterScene)
        emitterScene.startEmission()
    }
}

And we're done! We now have a nice UIKit login form with an animated background that is much more compelling than a simple background color, gradient or texture.

Where to go from here

You can explore more Core Image filters to add stunning effects to your UI, but be warned that some are not prepared for real-time, full-frame rendering. Indeed, SpriteKit is very powerful, and you can even use OpenGL shaders in nodes and particles. You are welcome to check out the source code for this article; you'll see that it has a little extra Core Motion trick that shifts the direction of the falling snow according to the position of your device.

About the author

Milton Moura (@mgcm) is a freelance iOS developer based in Portugal. He has worked professionally in several industries, from aviation to telecommunications and energy, and is now fully dedicated to creating amazing applications using Apple technologies.
With a passion for design and user interaction, he is also very interested in new approaches to software development. You can find out more at http://defaultbreak.com