
Author Posts

103 Articles

Has Machine Learning become more accessible?

Packt Editorial Staff
04 Sep 2017
9 min read
Sebastian Raschka is a machine learning expert. He is currently a researcher at Michigan State University, where he is working on computational biology. But he is also the author of Python Machine Learning, the most popular book ever published by Packt. It's a book that has helped to define the field, breaking it out of the purely theoretical and showing readers how machine learning algorithms can be applied to everyday problems. Python Machine Learning was published in 2015, but Sebastian is back with a brand new edition, updated and improved for 2017, written alongside his colleague Vahid Mirjalili. We were lucky enough to catch Sebastian in between his research and work on the new edition to ask him a few questions about what's new in the second edition of Python Machine Learning, and to get his assessment of the key challenges and opportunities in data science today.

What's the most interesting takeaway from your book?

Sebastian Raschka: In my opinion, the key takeaway from my book is that machine learning can be useful in almost every problem domain. I cover a lot of different subfields of machine learning in my book: classification, regression analysis, clustering, feature extraction, dimensionality reduction, and so forth. By providing hands-on examples for each of these topics, my hope is that people can find inspiration for applying these fundamental techniques to drive their research or industrial applications. Also, the use of well-developed and maintained open source software makes machine learning very accessible to a broad audience of experienced programmers as well as people who are new to programming. And by introducing the basic mathematics behind machine learning, we can appreciate machine learning as more than just black-box algorithms, giving readers an intuition of the capabilities but also the limitations of machine learning, and of how to apply those algorithms wisely.

What's new in the second edition?
SR: As time and the software world moved on after the first edition was released in September 2015, we decided to replace the introduction to deep learning via Theano. No worries, we didn't remove it! But it got a substantial overhaul and is now based on TensorFlow, which has become a major player in my research toolbox since its open source release by Google in November 2015. Along with the new introduction to deep learning using TensorFlow, the biggest additions to this new edition are three brand new chapters focusing on deep learning applications: a more detailed overview of the TensorFlow mechanics, an introduction to convolutional neural networks for image classification, and an introduction to recurrent neural networks for natural language processing. Of course, and in a similar vein to the rest of the book, these new chapters not only provide readers with practical instructions and examples but also introduce the fundamental mathematics behind those concepts, which is an essential building block for understanding how deep learning works.

What do you think is the most exciting trend in data science and machine learning?

SR: One interesting trend in data science and machine learning is the development of libraries that make machine learning even more accessible. Popular examples include TPOT and AutoML/auto-sklearn -- in other words, libraries that further automate the building of machine learning pipelines. While such tools do not aim to replace experts in the field, they may make machine learning accessible to an even broader audience of non-programmers. However, being able to interpret the outcomes of predictive modeling tasks and being able to evaluate the results appropriately will always require a certain amount of knowledge. Thus, I see those tools not as replacements but rather as assistants for data scientists, automating tedious tasks such as hyperparameter tuning.
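The idea behind those pipeline-automation tools can be illustrated with a few lines of plain Python: an exhaustive grid search over candidate hyperparameters, keeping whichever combination scores best. This is only a toy sketch of the concept (the `toy_score` function and the parameter names are invented for illustration; TPOT and auto-sklearn use far more sophisticated search strategies):

```python
from itertools import product

def grid_search(score_fn, param_grid):
    """Exhaustively try every parameter combination and keep the best."""
    names = sorted(param_grid)
    best_score, best_params = float("-inf"), None
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy "validation score": peaks at depth=3, lr=0.1.
def toy_score(depth, lr):
    return -((depth - 3) ** 2) - abs(lr - 0.1)

params, score = grid_search(toy_score, {"depth": [1, 3, 5], "lr": [0.01, 0.1, 1.0]})
print(params, score)  # -> {'depth': 3, 'lr': 0.1} 0.0
```

The real libraries replace the exhaustive loop with smarter search (genetic programming in TPOT, Bayesian optimization in auto-sklearn), but the contract is the same: hand over a search space, get back the best-found configuration.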
Another interesting trend is the continued development of novel deep learning architectures and the large progress in deep learning research overall. We've seen many interesting ideas, from generative adversarial networks (GANs) to densely connected networks (DenseNets) and ladder networks. Large progress has been made in this field thanks to those new ideas and the continued improvement of deep learning libraries (and our computing infrastructure), which accelerate the implementation of research ideas and the development of these technologies in industrial applications.

How has the industry changed since you first started working?

SR: Over the years, I have noticed that more and more companies embrace open source, e.g., by sharing parts of their toolchains on GitHub, which is great. Also, data science and open source related conferences keep growing, which means more and more people are not only getting interested in data science but also considering working together, for example, as open source contributors in their free time, which is nice. Another thing I noticed is that as deep learning becomes more and more popular, there seems to be an urge to apply deep learning to problems even if it doesn't necessarily make sense -- the urge to use deep learning just for the sake of using deep learning. Overall, the positive thing is that people get excited about new and creative approaches to problem-solving, which can drive the field forward. Also, I noticed that more and more people from other domains are becoming familiar with the techniques used in statistical modeling (thanks to "data science") and machine learning. This is nice, since good communication in collaborations and teams is important, and a common knowledge of the basics makes this communication a bit easier.

What advice would you give to someone who wants to become a data scientist?
SR: I recommend starting with a practical, introductory book or course to get a brief overview of the field and the different techniques that exist. A selection of concrete examples is beneficial for understanding the big picture and what data science and machine learning are capable of. Next, I would start a passion project and try to apply the newly learned techniques from statistics and machine learning to address and answer interesting questions related to this project. While working on an exciting project, I think the practitioner will naturally become motivated to read through the more advanced material and improve their skills.

What are the biggest misunderstandings and misconceptions people have about machine learning today?

SR: Well, there's this whole debate on AI turning evil. As far as I can tell, the fear mongering is mostly driven by journalists who don't work in the field and are apparently looking for catchy headlines. Anyway, let me not iterate over this topic, as readers can find plenty of information (from both viewpoints) in the news and all over the internet. To sum it up with Andrew Ng's famous quote: “I don’t work on preventing AI from turning evil for the same reason that I don’t work on combating overpopulation on the planet Mars."

What's so great about Python? Why do you think it's used in data science and beyond?

SR: It is hard to tell which came first: Python becoming a popular language so that many people developed all the great open-source libraries for scientific computing, data science, and machine learning, or Python becoming so popular due to the availability of these open-source libraries. One thing is obvious though: Python is a very versatile language that is easy to learn and easy to use.
While most algorithms for scientific computing are not implemented in pure Python, Python is an excellent language for interacting with very efficient implementations in Fortran, C/C++, and other languages under the hood. This ability to call code from computationally efficient low-level languages while providing users with a very natural and intuitive programming interface is probably one of the big reasons behind Python's rise to popularity as a lingua franca in the data science and machine learning community.

What tools, frameworks and libraries do you think people should be paying attention to?

SR: There are many interesting libraries being developed for Python. As a data scientist or machine learning practitioner, I'd especially want to highlight the well-maintained tools from the Python core scientific stack:

- NumPy and SciPy as efficient libraries for working with data arrays and scientific computing
- pandas to read in and manipulate data in a convenient data frame format
- matplotlib for data visualization (and seaborn for additional plotting capabilities and more specialized plots)
- scikit-learn for general machine learning

There are many, many more libraries that I find useful in my projects. For example, Dask is an excellent library for working with data frames that are too large to fit into memory and for parallelizing computations across multiple processors. Or take TensorFlow, Keras, and PyTorch, which are all excellent libraries for implementing deep learning models.

What does the future look like for Python?

SR: In my opinion, Python's future looks very bright! For example, Python was just ranked as the top programming language by IEEE Spectrum as of July 2017. While I mainly speak of Python from the data science/machine learning perspective, I have heard from many people in other domains that they appreciate Python as a versatile language with a rich ecosystem of libraries.
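That split between a friendly Python surface and fast compiled internals can be seen even without third-party libraries: built-ins like `sum` are implemented in C, so a one-line Python call typically beats an equivalent hand-written Python loop by a wide margin. A small stdlib-only illustration (the exact speedup depends on the machine):

```python
import timeit

data = list(range(100_000))

def manual_sum(xs):
    total = 0
    for x in xs:        # each iteration executes Python bytecode
        total += x
    return total

# Both produce the same result...
assert manual_sum(data) == sum(data) == 4_999_950_000

# ...but the C-implemented built-in is typically several times faster.
loop_t = timeit.timeit(lambda: manual_sum(data), number=50)
c_t = timeit.timeit(lambda: sum(data), number=50)
print(f"loop: {loop_t:.3f}s  built-in sum: {c_t:.3f}s")
```

NumPy and friends apply the same pattern at scale: the user writes concise Python, while the heavy lifting runs in compiled Fortran/C code.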
Of course, Python may not be the best tool for every problem, but it is very well regarded as a "productive" language for programmers who want to "get things done." Also, while the availability of plenty of libraries is one of the strengths of Python, I must also highlight that most packages that have been developed are still being exceptionally well maintained, and new features and improvements to the core data science and machine learning libraries are being added on a daily basis. For instance, the NumPy project, which has been around since 2006, just received a $645,000 grant to further support its continued development as a core library for scientific computing in Python. At this point, I also want to thank all the developers of Python and its open source libraries who have made Python what it is today. It's an immensely useful tool to me, and as a Python user, I also hope you will consider getting involved in open source -- every contribution is useful and appreciated, whether small documentation fixes, bug fixes in the code, new features, or entirely new libraries. Again, and with big thanks to the awesome community around it, I think Python's future looks very bright.

PiWars - Mike Horne's world of Raspberry Pi Robotics

Fahad Siddiqui
09 Dec 2015
6 min read
Robotics competitions have evolved since the time I participated in them during my college days. Thanks to microboards such as the Raspberry Pi, robotics is much more accessible – it could quite literally be described as ‘child’s play’. Mike Horne, the organizer of PiWars and co-organiser of CamJam alongside his friend Tim Richardson, has used his close connection to the Raspberry Pi project to inspire tech fans and hackers of all ages. PiWars is unique – it’s not just about knocking over your combatant’s robot, or following the terrain; it’s about the entire learning and development process. I was lucky enough to get to talk to Michael about PiWars, robotics and the immense popularity of the Raspberry Pi.

What kick-started PiWars and CamJam?

CamJam started because I couldn’t understand why there wasn’t a Raspberry Jam in the Pi’s home town. There had been a couple of Cambridge Jams but they stopped quite early. I resurrected it by starting small (with just 30 people in one room) and it’s grown from there. Tim Richardson came on board as co-planner after my second Jam and encouraged me to get a larger venue where we could run workshops as well as talks. We now work hand-in-hand to make the events as good as possible. PiWars was Tim’s idea. We both fondly remember the television programme ‘Robot Wars’ and he wondered whether we couldn’t do something similar, but with challenges instead of ‘fights’. And it all went from there.

What sets PiWars apart from other robotics challenges? What is your vision for 2020?

What sets it apart first of all is that it is ‘non-destructive’. Although we used the name PiWars, no robots are intentionally damaged. We believe this is key to the enjoyment of the competitors as it means their good work isn’t destroyed. Apart from that, the use of the Raspberry Pi makes it unique – each robot must have a Pi at its core.

When was the last time you competed in a robotics challenge or created a robot?
I’ve personally never competed in a robotics challenge – the opportunities just haven’t been there. I did actually go and see Robot Wars being filmed once, which was exciting! I created a robot about two weeks ago whilst preparing for the launch of CamJam EduKit 3. It’s a robotics kit that’s available from The Pi Hut for £17 and contains everything you need to build a robot except batteries and a chassis (although the box it comes in makes a really good chassis!)

You guys did a great job in organising PiWars, CamJam and the Raspberry Pi birthday party. What are the challenges you faced, and the ideas you came up with?

Mostly the challenge is two-fold: 1. Persuading people to come and do talks, help with workshops and give general help on the day. 2. Logistics – it takes a lot of paperwork, spreadsheets and checklists to run an event on this scale. It’s always about working out what scale of event you want to run. CamJam is pretty steady now as we’ve got a structure. Pi Wars, being in its second year, has expanded and changed organically. For The Big Birthday Weekend we came up with the idea of having two lots of workshops running at the same time as two lots of talks. Ideas-wise, we use beer to get things kicked off :) Tim’s great at coming up with new ways to make the events better. The Marketplace area was his idea. Show-and-Tell was mine. It’s a great collaboration.

Not everyone can participate or physically be present at such competitions – do you think hosting a virtual competition through Skype might be possible?

We did consider it last year, actually! Someone from Australia wanted to send his robot via freight and control it over the Internet. We didn’t think that would work due to technical limitations. The main problem with holding a virtual competition is: where do you put the challenge courses? Do you have them in one location and then have robots remote-controlled, or do you have the competitors recreate the courses in their location somehow?
Then, how do you deal with the video streaming to spectators?

How can robotics be taught in an effective manner with limited resources? How do you think Packt is contributing?

The main barrier to entry with robotics is not the cost of equipment, although that does play a part. The main barrier is the lack of material to support the learning. It’s one of the things we’ve concentrated on with the EduKits – good, solid resource worksheets. Packt have been doing a great job by publishing several books which contain at least an element of robotics, and sometimes by devoting entire publications to the subject.

You may have seen some of our books mentioned in The MagPi, but do you use books to learn about the Raspberry Pi yourself?

I do. I’ve learned a lot of the basics from Adventures in Raspberry Pi (by Carrie Anne Philbin) and use Alex Bradbury and Ben Everard’s Python book as a reference. I’ve also looked at several Packt publications for inspiration for Raspberry Pi projects.

Complete these sentences…

A robotics challenge is not about smashing, it is… about learning how to give your robot the skills it needs.

The Pi Zero is… incredibly cute and brings a lot of hope for the future of embedded and IoT Raspberry Pi projects.

Code quality, build quality, aesthetics and blogging are not just there to rank the robot, they help to… focus the minds of the competitors on building the best robot they can.

My favourite Raspberry Pi project… at the moment is probably the S.H.I.E.L.D.-inspired ‘den’ I blogged about recently. Long-term, I really like Dave Akerman’s work on getting pictures from near-space using high-altitude balloons with a Pi and camera module.

My words of wisdom for young hackers are… “Don’t be limited by anything, not even your imagination. Push yourself to come up with new and interesting things to do, and don’t be afraid to take someone else’s idea and run with it.”

This or that…

Tea or coffee? Coffee

Linux or Python? Both

GeekGurl or RaspberryPi Guy?
They’re both friends – I’m not landing myself in hot water for that one!

Terminators or Transformers? Transformers, but the ones from the 1980s, not Michael Bay’s questionable version!

Raspberry Pi or BBC micro:bit? Raspberry Pi, all the way. The micro:bit just doesn’t do enough to really stretch youngsters.

We’re big fans of DIY tech at Packt – like Raspberry Pi, we’re passionate about innovation and new ideas. That’s why from Monday 7th to Sunday 13th December we’re celebrating Maker Week. We’re giving away free eBooks on some of the world’s most exciting microcomputers – including Raspberry Pi and Arduino – and offering 50% off some of our latest guides to creative tech.

An Interview with Christoph Körner

Christoph Körner
14 Jul 2015
3 min read
Christoph is the CTO of GESIM, a Swiss start-up company, where he is responsible for their simulation software and web interface, built with Angular and D3. He is a passionate, self-taught software developer and web enthusiast with more than 7 years’ experience in designing and implementing customer-oriented web-based IT solutions. Curious about new technologies and interested in innovation, Christoph started using AngularJS and D3 with their first versions. We caught up with him to get his insights into writing with Packt.

Why did you decide to write with Packt – what convinced you?

Initially, I wasn’t sure about taking on such a big project. However, after doing some research and discussing Packt’s reputation with my university colleagues, I was sure I wanted to go ahead. I was also really passionate about the topic; Angular is one of my favourite tools for frontend JavaScript.

As a first-time Packt author, what type of support did you receive to develop your content effectively?

I started off working independently, researching papers, developing code for the project and reading other books on similar topics, and I got some great initial feedback from my university colleagues. As the project progressed with Packt, I received a lot of valuable feedback from the technical reviewers, and the process really provided a lot of valuable and constructive insights.

What were your main aims when you began writing with us, and how did Packt in particular match those aims?

I was aiming to help other people get started with an awesome front-end technology stack (Angular and D3). I love to look closely at topics that interest me, and enjoy exploring all the angles, both practical and theoretical, and helping others understand them. My book experience was great, and Packt allowed me to explore all the theory and practical concepts that the target reader will find really interesting.

What was the most rewarding part of the writing experience?
The most rewarding part of writing is getting constructive, critical feedback – particularly from readers who leave comments about the book, as well as the comments from my reviewers. It was a pleasure to have such skilled, motivated and experienced reviewers on board who helped me develop the concepts of the book. And of course, holding your own book in your hands after 6 months of hard work is a fantastic feeling.

What do you see as the next big thing in your field, and what developments are you excited about?

The next big thing will be Angular 2.0 and TypeScript 1.5, and this will have a big impact on the JavaScript world. Combining – for example – new TypeScript features such as annotations with D3.js opens up a whole new world of writing visualizations, using annotations for transitions or styling – which will make the code much cleaner.

Do you have any advice for new authors?

Proper planning is the key; it will take time to write, draw graphics and develop your code at the same time. Don't cut a chapter because you think you don't have time to write it as you wanted – find the time! And get feedback as soon as possible. Experienced authors and users can give very good tips, advice and critique.

You can connect with Christoph here:

Github: https://github.com/chaosmail
Twitter: https://twitter.com/ChrisiKrnr
LinkedIn: https://ch.linkedin.com/in/christophkoerner
Blog: http://chaosmail.github.io/

Click here to find out more about Christoph’s book Data Visualization with D3 and AngularJS

A Brief Interview with Lorenzo Bettini

Lorenzo Bettini
24 Jun 2015
4 min read
Lorenzo Bettini is an Assistant Professor in Computer Science at the Department of Informatics at the University of Turin, Italy, and the author of Implementing Domain-Specific Languages with Xtext and Xtend. You can learn more about Lorenzo here and here. You can also find him on Twitter: @lorenzo_bettini

We spoke to him about his book and his experience of writing it, and found out a little more about his research and daily work…

How will readers benefit from this book? Did you learn anything new while writing the book?

At the time I started writing the book (and also currently) there was no other book on Xtext (at least in English). For this reason I hope that new users of Xtext can benefit from it: following it chapter by chapter, they should be able to get acquainted with this cool and powerful framework. My intention was also to describe my own experiences with Xtext (I've been using it for several years, since version 0.7); in particular, I tried to describe some programming techniques and best practices. My two favorite chapters are the one on testing (testing your software is truly crucial, and DSLs implemented in Xtext are definitely no exception; the whole book is based on tests) and the one on scoping (scoping is one of the most difficult concepts in Xtext, but it is also one of the most important; I hope I managed to describe scoping so that it is easier for readers to understand). For these reasons, I hope that readers who are already familiar with Xtext can also learn something new.

Our authors usually have full-time jobs whilst writing for us. Was this the case for you, and how did you manage your time?

I am a full-time Assistant Professor (Researcher) in Computer Science; this might sound like I have lots of spare time, but that's not the case: we too have lots of deadlines… However, since I've always used Xtext for implementing the languages I'm doing research on, the time I spent on the book has been a real scientific investment for me.
During the writing process, did you come across any issues or difficulties that affected your writing, and how did you overcome these?

Especially for the more advanced chapters, I was somewhat blocked on some example implementations. The authors of the Xtext framework were really available and helped me solve such issues (not to mention that two of them, Jan Koehnlein and Sebastian Zarnekow, also reviewed the book). I'm really grateful to all of them (especially for creating Xtext).

Was there anything interesting that happened during the writing of the book?

Well, when I started to write the book, Xtext was at version 2.3... After writing half the book, Xtext 2.4 was released. The new release created a new version of the projects (Xtext comes with project wizards that set up most of the things to get you started); in particular, Xtext 2.4 started to rely mostly on Xtend (a Java-like language, completely interoperable with Java and its type system). This meant that all the examples had to be rewritten, as well as many parts of the chapters that had already been delivered. I think that this makes the code of the examples (also shown in the book) much more readable, and that's why the title was changed so that “Xtend” appears in the title as well.

How did you find the overall experience of writing your book for Packt?

It was a lot of stress, but also a whole lotta fun in the end! What disturbed me most was that I had to use WYSIWYG editors like LibreOffice and Word... I use the LaTeX typesetting system all the time; LaTeX is so powerful once you've learned it that it was a real shock (and nightmare) to fight against the rigidity of Word.

What tools or configuration do you use for your workstation?

I've been a Linux user for decades, and I've written the book on a very pleasant Linux Mint distribution. I only had to switch to Windows to deal with some problems in the files that required Word instead of LibreOffice.

Thanks Lorenzo!
If you want to learn more about Xtext and Xtend, you can buy Lorenzo’s book here. Packed with plenty of examples, it’s been designed to give readers a practical and accessible insight into a complex area of development.

An Interview with Heather Mahalik

Heather Mahalik
18 Jun 2015
2 min read
Heather Mahalik is currently Principal Forensic Scientist and Program Manager at Oceans Edge, Inc., and the course lead for the SANS mobile device and advanced smartphone forensics courses. With over 11 years' experience in digital forensics, she currently focuses on mobile device investigations, forensic course development and instruction, and research on smartphone forensics. As a prolific forensics professional, Heather brought a great deal of expertise and knowledge to Packt, helping to make Practical Mobile Forensics a great success, and we caught up with her to get some thoughts on her experiences as an author.

Why did you decide to write with Packt – what convinced you?

Packt approached me with the idea and introduced me to the other authors, who ended up being co-authors of the book. I was lucky to be sought out and not have to seek a publisher.

As a first-time Packt author, what type of support did you receive to develop your content effectively?

Packt provided our team with an editor and others to support our efforts on the book. Our Acquisition Editor was fantastic and always responded immediately. I never felt that any question went unanswered or that I didn’t have the support I needed. They were also very flexible with us submitting chapters out of order to allow the normal flow of writing.

What were your main aims when you decided to write a book, and how did Packt, in particular, match those aims?

I wanted to release a book quickly on mobile forensics that emphasized the use of open source tools. Packt allowed us to progress quickly, update as needed and get the book out.

What was the most rewarding part of the writing experience?

Working with my co-authors. Seeing their perspectives on each topic was eye-opening.

What do you see as the next big thing in your field, and what developments are you excited about?

Handling smartphone security – device security, encryption, and application obfuscation.

Do you have any advice for new authors?
Stay positive, write a little bit every day and hang in there.

Follow Heather on Twitter (@HeatherMahalik) or take a look at her blog. If you have been inspired to write, get in touch with our experienced team, who are always open to discussing and designing potential book ideas with talented people. Click here for more details.

An Interview with Mario Casciaro

Mario Casciaro
11 Jun 2015
5 min read
Mario Casciaro is a software engineer and technical lead with a passion for open source. He began programming with a Commodore 64 when he was 12, and grew up with Pascal and Visual Basic. His programming skills evolved by experimenting with x86 assembly language, C, C++, PHP, and Java. His relentless work on side projects led him to discover JavaScript and Node.js, which quickly became his new passion. Mario now works in a lighthouse at D4H Technologies, where he led the development of a real-time platform to manage emergency operations (Node.js to save lives!). As Mario is at the cutting edge of Node development, we asked him to share his thoughts on the future of his field, and also on what led him to want to write a book with Packt.

Why did you decide to write with Packt – what convinced you?

I already knew Packt from some excellent books I read in the past, and that was certainly a good start. As an author, I was given a lot of freedom in defining the contents of my book, and this was probably what I liked the most, as it allowed me to leverage my knowledge and passion to the fullest. Besides that, Packt has one of the best royalty packages out there – something not to overlook.

As a first-time Packt author, what type of support did you receive?

Packt provided me with some interesting educational material, and besides the mandatory style guide, there were some articles with advice on how to organize the work and deal with the ups and downs of the writing process. However, what was most helpful was the patience of the editors in providing feedback and fixes, especially during the writing of the initial chapters. I think this is a make-or-break factor, as the first 2-3 months are probably the toughest, and it's important to have the support of an empathetic team to succeed.

What were your main aims when you began writing?

My main goal was to write a book worth reading – something that I would have bought and read myself if I wasn't the author.
One of the things I wanted to avoid was including trivial content – knowledge that could easily be found elsewhere, such as in a good free tutorial or in some official documentation. I wanted every topic to teach the reader something new; I wanted to 'wow' the reader with content and notions it was unlikely they knew before.

What was the most rewarding part of the writing experience?

Reading a message from a reader saying 'thank you' is probably the most rewarding part of being an author. It brightens up my day and reminds me that the time writing the book was well spent. In general, knowing that I helped somebody learn something new and valuable is an amazing feeling.

What do you see as the next big thing in your field, and what developments are you excited about?

JavaScript is taking over the world; it is spreading well beyond the browser and the web. This change is already happening: we can use JavaScript to implement server applications (Node.js), in databases (CouchDB), in connected devices (Tessel), on mobiles (PhoneGap) and on desktops (NW.js). I'm also looking forward to the spread of the new ES6 and ES7 standards, which should add even more powerful features to the language.

Do you have any advice for new authors?

Dream big but keep it simple. If you want to explain complex topics, always start from the basic notions and build on top of those as you move forward. Always assume there might be some readers who don't have the prior knowledge to completely grasp a concept or a code sample. Spend a few words on the background of a problem/solution, why the reader should learn it and what advantages it brings. No one likes to be fed knowledge without knowing why. On the organizational side, I would say that the most important aspect is sticking with the schedule, no matter what. Build the habit of writing for the book at a given time of the day, and move away from the computer screen only when you've spent all your mental energy.
For me, that was the most rewarding moment of the day: knowing that I had done my job and that I couldn't have done more. Always focus on the current chapter and try to get it as production-ready as you can (there will be little time to change things later). Don't think about the amount of work left to finish the book; one chapter at a time, and you will have achieved what you previously considered impossible. Follow Mario on Twitter, connect with him on LinkedIn, or find him on GitHub. You can buy Mario's book Node.js Design Patterns from Packt or find out more about the book here: http://www.nodejsdesignpatterns.com If you have been inspired to write, get in touch with our experienced team, who are always open to discussing and designing potential book ideas with talented people. Click here for more details.
Michael Ang
13 Feb 2015
9 min read

Translating between the virtual and the real: Interview with artist Scott Kildall

Scott Kildall is an artist whose work often explores themes of future-thinking and translation between the virtual and the real. His latest projects use physical data visualization - the transformation of data sources into physical objects via computer algorithms. We're currently collaborating on the Polygon Construction Kit, a software toolkit for building physical polygon structures from wireframe 3D models. I caught up with Scott to ask him about his work and how 3D printing and digital fabrication are changing the production of artwork.

How would you describe your work?

I write computer algorithms that generate physical sculptures from various datasets. This has been a recent shift in my art practice. Just five years ago, digital fabrication techniques, 3D printing, CNC machinery and other forms of advanced fabrication were simply too expensive for artists. Specifically, I've been diving into what the media ominously calls "big data," which entails thousands upon thousands of data points ranging from city infrastructure data to biometric feedback. From various datasets, I have been generating 3D-printed sculptures.

Water Works - Imaginary Drinking Hydrants (2014), 3D-printed sculpture with laser-etched wood map

What are some of the tools that you use?

I write my own software code from the ground up, both to optimize the production process and to create a unique look for my work. My weapon of choice is openFrameworks, a C++ toolkit that is relatively easy to use for a seasoned applications programmer. The other open source tool I use is Processing, which is a quick and dirty way to prototype ideas. Python, my new favorite language, is excellent for transforming and cleaning datasets, which is the not-so-glamorous side of making "data art".

You've just completed some residencies and fellowships, can you tell us about those?

In 2014, I was an Artist in Residence at Autodesk in San Francisco, where I live.
Autodesk has an amazing shop facility, including six state-of-the-art Objet 500 printers. The resulting prints are resin-based and capture accurate details. During a several-month period, I was able to iteratively experiment with 3D printing at a rate that was much faster than maintaining my own extrusion 3D printer.

Data Crystals (2014), Incidents of Crime Data from San Francisco

The first project I worked on is called Data Crystals, which uses public datasets from the city government of San Francisco; anyone can download them from the data portal at SFGov.org. The city's open data includes all sorts of goodies, such as geolocated points for incidents of crime and every parking meter in the city. I mapped various data points on an x-y plane using the latitude and longitude coordinates. The z-plane was then a dimension of time or space. To generate the "Crime Data" crystal, I worked with over 30,000 data points. My code represented each data point as a simple cube, with the size being proportional to the severity of the crime. I then ran clustering algorithms to create one cohesive object, which I call a "crystal", like a synthetic rock that a data miner might find.

In a sense you're mining an abstract data source into a physical object...

It was more like finding a concrete data source and then turning it into an abstract physical object. With conventional 2D data visualizations you can clearly see where the hotspots of crime, or other data points, might be on a map. However, the Data Crystals favor aesthetics over legibility. The central question I wanted to answer was "what does data look like?" When people create screen-based data visualizations, they focus on what story to tell. I was intrigued by the abstract data itself and so made art objects which you could look at from different vantage points.

What is it about having the data occupy a physical space that's important to you?

When data occupies a physical space, it is static, like a snapshot in time.
Rather than controlling time, as you would with a slider on a screen-based visualization, you can examine the minutiae of the physical characteristics. The data itself invites a level of curiosity that you don't get with a mediated screen-based interaction. Real objects tap into the power of conventional perception, which is innate to how our brains interact with the world.

Tell us a bit about your series of sculptures that were created in the virtual world of Second Life and then physically created using Pepakura software and papercraft techniques.

An earlier project that I worked on, in collaboration with my partner Victoria Scott, is No Matter (2008). This project was instrumental in my development of how to transform the imaginary into the real. For this project, we worked with the concept of imaginary objects: things that have never physically been built, but exist in our shared imagination. They include items from mythology like the Holy Grail or the Trojan Horse, from fiction like the Maltese Falcon or the Yellow Submarine, and impossible objects/thought experiments like the Time Machine or Schrodinger's Cat. We constructed these objects in the imaginary world of Second Life, extracted them as "digital plunder", and then rebuilt them as paper sculptures in real space.

No Matter (2008), Second Life installation at Ars Virtua

No Matter (2008), Yellow Submarine paper sculpture

Because they were paper sculptures that were physically fabricated, there were physical constraints, such that the forms themselves had to be vastly simplified into smaller faceted objects. The collision of these faceted proto-objects with beautiful high-resolution prints resonates with most viewers on an aesthetic level. Working with that kind of virtual space led me to thinking about the question of data -- could this intangible "thing" also be represented materially?
No Matter (2008), installation at Huret & Spector Gallery

What are you working on now?

I'm developing a new project called Machine Data Dreams, which examines the question of "how do machines think?" To make computers function, humans program code in languages such as JavaScript or Python or C++. There are whole sets of people who are literate in these machine languages, while most others in the world don't "speak" them. Knowing how to code gives you power and money, though usually not prestige. However, understanding how machines process language will be increasingly important, as they will undoubtedly be increasingly integrated with human biology. My proposition is to create a room-based installation that reflects the structure of language and how machines might view the world, through datasets representing machine syntax. What I will be doing is taking machine languages (e.g. JavaScript, Python, C++) and translating them into language-based datasets. From these, I will algorithmically generate a cave-like 3D model with triangulated faces. The Polycon Construction Kit -- which you developed -- will be instrumental in making this happen. Last year, I had sketched out ideas in my notebook about creating custom 3D-printed connectors for large-scale sculptural installations, and then I found out that you had already been working on this technology. So, thank you for inviting me to collaborate on Polycon! I'm grateful to be figuring out how to do this with a trusted colleague.

What are some of the trends or new technologies that you're excited about?

There's so much happening in the field of digital fabrication! For example, 3D printing technology has so much to offer now, and we're at a pioneering stage, akin to Photoshop 1.0. Artists are experimenting with the medium and can shape the dialog of what the medium itself is about. It's clear to me that 3D printing / 3D fabrication will be a larger part of our economy and our universe.
It fundamentally changes the way materials are produced. Many have made this observation, but it is still understated. 3D printing is redefining the paradigm of material production, which also affects art production and factory production. This is how capitalist-based economies will operate in the next 20 or 30 years. From an artistic standpoint, working with digital fabrication technology has changed the way I think about sculpture. For example, I can create something with code that I never thought was even possible. I don't even know what the forms will look like when I write the code. The code generates forms and I find unexpected results, ranging from the amazing to the mediocre to the crappy. Then I can tweak the algorithms to focus on what works best.

You've had a chance to work with some of the most advanced technologies through the Autodesk residency. Now you have your own 3D printer in your garage. Do you see a difference in your creative process using the really high-end machines versus something you can have in your garage?

Working with the high-end machines is incredible because it gives you access to making things as perfect as you can make them. I went from working with printers that probably cost half a million dollars to a printer that I got for $450.

Like half a million to half a thousand?

Yes! The Autodesk printers are three orders of magnitude more expensive. The two types of printers have vastly different results. With my garage setup, I have the familiar tales: messed-up 3D prints and the aches and pains of limit switches, belts and stepper motors. I've been interested in exploiting the glitches and mistakes. What happens when you get bad data? What happens when you get glitchy material errata? My small extrusion printer gives me a lot more appreciation for fixing things, but when making something precise I'd much rather work with the high-quality 3D printers.
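As an aside for programmer readers: the Data Crystals pipeline Scott describes earlier, mapping longitude and latitude onto an x-y plane, sizing each cube by crime severity, and clustering the cubes into one cohesive form, can be roughly sketched in Python. This is a loose illustration only; Scott's production code is written in openFrameworks/C++, and the function names and the simple pull-to-centroid clustering rule here are invented for the example.

```python
def crime_to_cubes(records, size_scale=1.0):
    """Map geolocated crime records to cubes: x/y from lon/lat,
    cube size proportional to severity (an invented scaling)."""
    return [
        {"x": r["lon"], "y": r["lat"], "z": 0.0,
         "size": size_scale * r["severity"]}
        for r in records
    ]

def cluster_step(cubes, pull=0.1):
    """One naive clustering step: pull every cube slightly toward the
    centroid so the scattered cloud condenses into a single 'crystal'."""
    cx = sum(c["x"] for c in cubes) / len(cubes)
    cy = sum(c["y"] for c in cubes) / len(cubes)
    for c in cubes:
        c["x"] += pull * (cx - c["x"])
        c["y"] += pull * (cy - c["y"])
    return cubes
```

Repeating `cluster_step` until the cubes touch, then merging them into one mesh, would give something in the spirit of the sculptures, though the real aesthetic decisions clearly live in the details of the artist's own algorithms.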
Where can we see more of your work?

All my work is on my website at http://kildall.com. My blog at http://kildall.com/blog has my current thought processes. You can follow me on Twitter at @kildall.

About the Author: Michael Ang is a Berlin-based artist and engineer working at the intersection of art, engineering, and the natural world. He is the creator of the Polygon Construction Kit, a toolkit for bridging the virtual and physical worlds by translating simple 3D models into physical structures.
Oli Huggins
29 Oct 2014
5 min read

John O’Nolan - Ghost Q&A

John O'Nolan took some time out of his busy schedule to talk to Packt Publishing about the meteoric rise of Ghost - the stripped-back blogging platform. What were the biggest issues you faced/continue to face in the development phase? Open source development is always a challenge. You're taking a group of people from entirely different backgrounds, with widely varying skillsets, from all over the world, who've never met each other before - and trying to get them to work together in a way which is both functional and cohesive. Organising such a large project tends to attract an even larger group of developers, so managing that whole process is generally one of the biggest challenges which we face on a day-to-day basis. It's incredibly rewarding when it all comes together and works correctly, but it takes a lot of time and attention to get right. We've learned a tremendous amount about this process in the last 12 months, and I'm sure we'll learn just as much in the next 12 months. Would you recommend Kickstarter for other software devs? Are there any lessons other open-source developers could take away from it? Crowdfunding is something I'd recommend to anyone - but whether or not you use Kickstarter is a really hard one to answer, and I've gone back and forth on it a few times. I think these days my answer is more or less: "It depends" - as there are both advantages and drawbacks to each approach. On the plus side, Kickstarter gives a project a certain degree of credibility, puts it in front of people who might not necessarily have seen it otherwise, and gives you quite a lot of tools to manage the campaign. On the downside, they take quite a large chunk of the money you raise, you have to play by their rules, and the tools for managing the campaign are good - but not great. I think if you have an existing network or audience, independent crowdfunding might be a more compelling proposition.
What are your views on the situation with Express.js, and the questions it raises overall about the open source movement? The situation with Express.js was interesting, but realistically, things like this happen in open source all the time. It's not a new paradigm, and to some degree it's a bit of a storm in a teacup. I don't think it raises any new questions about the Open Source Software movement - in fact, in some ways it points to the very strength of it. Had Express been a closed project which got sold, everyone would've been left high and dry. With open source, there is always the option to fork and create a new project if the original branch loses its way. It's not a perfect solution (it's often not even a good one), but in the face of the alternative - no solution at all - it's a significant step in the direction of freedom. What's your biggest hope for the future of Ghost? Mass adoption? An increase in dialog about how we distribute our blog content? Mass adoption is always a hope, of course, but I'm definitely more passionate about quality than quantity. I would rather have 5 respected major news organisations running on Ghost than 10,000 cat blogs. I think that for journalism to continue to remain relevant it needs to continue to be free and open, both in the tools being used as well as in the organisations behind the content being created. I hope that Ghost can move the needle on independent publishing, as opposed to venture-backed click-bait. Besides a different use philosophy (just blogging!), Ghost is notable for its embrace of Node.js. Do you think more CMS systems will start to make the transition to Node.js? Yes and no. I don't believe many (if any) existing systems are going to start transitioning to Node.js. There are far too many reasons why it doesn't make much sense. But I do think that we're already starting to see a definite transition in the technologies chosen for building new platforms.
Node.js is certainly towards the front of the pack, but it's by no means the only player. There are a great many exciting new technologies lining up to power the next generation of the web, and I'm pretty excited to see where they all go. With Ghost(Pro), and fairly easy-going third-party hosting, Ghost is more accessible than many open-source blogging/CMS platforms. What do you think Ghost offers over and above more packaged blog solutions? The choices right now are beautiful and closed, or clunky and open. We're trying to combine the best of both worlds and create something really special.
Samuel Erskine
29 Aug 2014
1 min read

Sam Erskine talks Microsoft System Center

  How will System Center be used in the next 2 years? Samuel Erskine (MCT), experienced System Centre Admin and Packt author, talks about the future of Microsoft System Center. Samuel shares his insights on the challenges of achieving automation with the Cloud, and effective reporting to determine business ROI.
David Dossot
01 Jul 2014
4 min read

An Interview with David Dossot

David Dossot is a highly experienced software engineer and architect with close to two decades in the field. Armed with knowledge of RabbitMQ gathered since 2009, he has written a fantastic book called RabbitMQ Essentials, a fast-paced tutorial that covers the fundamentals of message queuing with RabbitMQ. We asked David, a renowned developer, to share with us his approach to and experience of authoring a technology book with Packt. We also asked him for some of his insights and running thoughts on the current state and future development of RabbitMQ and message queuing. You can see his answers below and find more information about his book here. Q. What initially attracted you to write your book for Packt Publishing? Packt Publishing approached me with a project for a RabbitMQ book. Since it's a technology I know quite well and appreciate a lot, and because the time was right, I decided to embark upon the adventure. Moreover, having never worked with Packt before, I was curious to experiment with a new publishing workflow. Q. When you began writing, what were your main aims? I wanted to produce a short book that would have a mix of high-level concerns, like architectural principles around messaging, and low-level concerns, like the technical details involved in dealing with RabbitMQ. I also wanted to make the book easy to read by telling the story of a fictitious company discovering RabbitMQ and rolling it out up to production. Q. What did you enjoy most and what was most rewarding about the experience of writing? I really enjoyed the pace at which the book was produced: three months of writing and an extra month of revisions and production made it the fastest project I ever worked on. Progressing at such speed, without sacrificing the quality of the end product, was extremely rewarding. Q. Why, in your opinion, is RabbitMQ exciting to discover, read, and write about?
RabbitMQ is an open source project with a very active community: I'm convinced that open source can use any coverage it can receive, so writing a book about it was a way for me to pay back a little for the great piece of software I have been using for free. Moreover, though there were already many excellent books written about it, none had the brevity and mix of high- and low-level concerns I was envisioning for my book. Q. What is different about RabbitMQ from other open source message queuing software? The richness and interoperability of the AMQP protocol is an important factor in RabbitMQ's success. Another important factor is the solid engineering and sound design decisions that have been made by RabbitMQ's creators. The fact that it's built on Erlang brings some extra guarantees in terms of stability. Finally, the RabbitMQ team is excellent at offering powerful and complex features in an easy package: this is far from the norm in our industry. Q. What do you see on the horizon for RabbitMQ and message queuing as a whole? RabbitMQ's popularity will keep rising, especially via cloud offerings that relieve users of the tedium of maintaining their own servers. In general, the usage of message queuing is poised to increase as developers become more and more aware of the principles at play when building scalable applications. The recent Reactive Manifesto (http://www.reactivemanifesto.org/), which somewhat rehashes and refreshes old principles of software design, emphasizes the necessity to decouple application components: message queuing is one of the main ways to achieve this. Q. Any tips for new authors? Writing a book is a fractal process where you proceed in several passes, with a higher level of detail each time.
The following approach has worked very well for me, so it may work for others too:

- Start with the table of contents (TOC): write down the main ideas for each chapter and pay attention to the overall narrative, making sure that ideas develop progressively and logically
- When you write a chapter, start by copy/pasting the ideas from the TOC and flesh them out a little: don't write sentences yet but instead drop in notes and ideas for figures
- Prepare the code samples and capture screenshots of executing applications at that time
- Now you're ready for the last pass: finalize the chapter by writing the complete text and creating any extra diagrams needed

Here are a few extra tips:

- Do not write in order: write paragraphs and even chapters in the way you feel most inspired to. This can relieve you from author's block.
- The first chapter is the hardest to write: get ready to come back to it several times.
- Find some music that helps you write: sometimes when you're tired and have a hard time getting started, music can get you back on track.
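Returning to the technology itself: one concrete example of the "richness of the AMQP protocol" David mentions is RabbitMQ's topic exchange, which routes messages by matching dot-separated routing keys against binding patterns, where `*` matches exactly one word and `#` matches zero or more. The sketch below reimplements just that matching rule in plain Python to show the semantics; it is an illustration of the routing behaviour, not RabbitMQ's actual (Erlang) implementation.

```python
def topic_match(binding_key, routing_key):
    """AMQP topic-exchange matching: '*' matches exactly one word,
    '#' matches zero or more words (words are dot-separated)."""
    def match(pattern, words):
        if not pattern:
            return not words
        head, rest = pattern[0], pattern[1:]
        if head == "#":
            # '#' may swallow zero or more words; try every split point
            return any(match(rest, words[i:]) for i in range(len(words) + 1))
        if not words:
            return False
        if head == "*" or head == words[0]:
            return match(rest, words[1:])
        return False
    return match(binding_key.split("."), routing_key.split("."))
```

So a queue bound with `kern.*` receives a message routed as `kern.critical` but not one routed as just `kern`, while a binding of `#` receives everything.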
Hussein Nasser
01 Jul 2014
4 min read

An Interview with Hussein Nasser

What initially drew you to write your book for Packt Publishing? In 2009, I started writing technical articles on my personal blog. I would write about my field, Geographic Information Systems, or any other technical topic. Whenever a new technology emerged, a new product, or sometimes even mere tips or tricks, I would write an article about it. My blog became a well-known site in GIS, and that is when Packt approached me with a proposed title. I always wanted to write a book but I never expected that the opportunity would knock on my door. I thank Packt for giving me that opportunity. When you began writing, what were your main aims? My main aim was to write a book that readers in my domain could grab and benefit from. While working on a chapter, I would always imagine a reader picking up the book and reading that particular chapter, and ask myself: what could I do better? And then I tried to make the chapter as simple as possible and leave nothing unexplained. What did you enjoy most and what was most rewarding about the experience of writing? Think about all the knowledge, information, ideas, and tips that you possess. You knew you had it in you somewhere, but you didn't know the joy and delight you would feel when this knowledge slipped through your fingertips into a physical medium. With each reading I would reread and polish the chapters; it seems there is always room for improvement in writing. Why, in your opinion, is ArcGIS exciting to discover, read, and write about? ArcGIS is not a new technology; it has been around for more than 14 years. It has become mature and polished during these years. It has expanded and started touching other bleeding-edge technologies like mobile, web, and the cloud. Every day this technology is increasingly worth discovering, and every day it benefits areas like health, utilities, transportation, and so on. Why do you think interest in GIS is on the rise? If you read The Tipping Point, by Malcolm T.
Gladwell, you will understand that the smartphone was actually a tipping point for GIS technology. GIS was only used by enterprises and big companies who wanted to add the location dimension to their tabular data, so it helped them better visualize and analyze their information. With smartphones and GPS, geographic location became more relevant. Pictures taken with smartphones are tagged with location information. Applications were developed to harness the power of GIS for routing, finding the best restaurants in an area, calculating shortest routes, finding information based on geo-fencing technology that sends you text messages when you pass by a shop, and so on. The popularity of GIS is rising, and so is the interest in adopting this technology. What do you see on the horizon for GIS? High-end processing servers are being sent to the cloud while we are carrying smaller and smaller gadgets. Networking is getting stronger every day, with LTE and 4G networks already set up in many countries. Storage has become a non-issue. The web architecture is dominant so far, and it is the most open and compatible platform that has ever existed. As long as we keep using devices, we will need geographic information systems. The data can be consumed and fetched swiftly from anywhere in the world from the smallest device. I believe this will evolve to the extent that everything valuable we own can be tagged with a location, so when we misplace something or lose it, we can always use GIS to locate it. Any tips for new authors? My role model author is Seth Godin; the first book I ever read was his. When I told him about my new book and asked him for any advice he might give me as a new author, he told me, and I quote, ″Congratulations, Hussein. This is thrilling to hear; my only advice is to keep writing!″ I took his advice and now I'm working on my second book with Packt.
Another personal tip I can give to new authors is that writing needs focus, and I find music the best soul-feeding source. While working on my first book, I discovered the site www.stereomood.com, which plays music that will help you write. Another thing is to use a clutter-free word processor application that will blank the entire screen so you are left with only your words. I use WriteMonkey for Windows and FocusWriter for Mac.
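The geo-fencing Hussein describes, an app texting you when you pass by a shop, reduces at its simplest to a point-in-radius test on coordinates. Here is a minimal sketch using the haversine great-circle distance; this is illustrative only (a real GIS stack such as ArcGIS provides far more robust geometry services, and production geofencing also tracks enter/exit transitions):

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, fence_lat, fence_lon, radius_km):
    """True when (lat, lon) falls within radius_km of the fence centre."""
    return haversine_km(lat, lon, fence_lat, fence_lon) <= radius_km
```

The core "send a text when you pass by" check is just this distance comparison, run against the device's latest GPS fix.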
Jay LaCroix
30 Jun 2014
7 min read

An Interview with Jay LaCroix

Jeremy, or Jay, is a Linux administrator with over 12 years of experience and nine certifications. As a technologist, Jay enjoys all aspects of technology, and when not buried under a plethora of computer books, he enjoys music, photography, and writing. As well as tech books, Jay is also a published fiction author, having written his very own sci-fi novel, Escape to Planet 55. Jay's passion for open source software and its long-term adoption led him to write Linux Mint Essentials for Packt. We asked Jay to discuss his experience of authoring a technology book with Packt and his insights on the future of open source technology. You can see his answers below and find more information about his books at LINKS. What initially drew you to write your book for Packt Publishing? When Packt approached me to write a book for Linux Mint, I was absolutely thrilled. I have always thought about how cool it would be to write a computer book, but never thought to actually attempt it. The thought of having a published book excited me. In addition, I very much like how Packt donates proceeds back to the project being written about, so it felt good that I would be helping the Mint community as well. When you began writing, what were your main aims? What did you want your book to teach your readers? We've all been beginners at one point or another. For me, I started using Linux around 2002, when it was very difficult to get used to, and I didn't have much in the way of guidance or insight on how to use it. I stuck with it, and eventually became good at it. For my book, I wanted to make the process of getting accustomed to Linux as easy as possible, and for it to be the reference book I could have used at the time when I started. What did you enjoy most about the writing process and what was most rewarding about the experience of writing? The entire process was very rewarding, and fun. The experience I liked the most about writing was the fact that I was empowered to do it.
For me, I like to teach others, so I think the most rewarding part for me was the prospect of guiding others to enjoy using Linux as much as I have. If my book has this impact on others, then that will be the most rewarding part. What parts of the writing process were the most challenging and how did you overcome these challenges? The most challenging part of writing about open source software is how frequently it changes. During the writing process, two versions of Mint were released. This required going back to previous chapters and correcting things that were no longer true or had changed in some way. This was overcome during the rewrite phase of the project, where I had a chance to go back through the steps, provide new screenshots, and ensure the content was still compatible. Why, in your opinion, is Linux, or open source software, exciting to discover, read, and write about? Open source software, especially Linux, is extremely fun to learn and write about. I spend hours each day reading blogs, articles, and books, trying to keep up to date. Never does it feel tiring or laborious in any way. During my day job, I manage primarily Linux servers and workstations. When I get home, I read about open source and what's happening. While commuting, I listen to Linux-related podcasts (such as the Linux Action Show, Linux Unplugged, and so on) to keep current on upcoming trends. As I learn, I watch as my workflow changes and improves. While I initially found Linux difficult to learn back in 2002, nowadays I can't imagine my life without it! Why do you think interest in Linux, specifically Mint, is on the rise? I personally feel that Canonical (the makers of Ubuntu) are severely out of touch with what their users, or any user in general, want and expect from a computing environment. I'm not saying that Ubuntu is a bad distribution; I actually like it. But the fact of the matter is that users have expectations of their computing environment.
If these expectations are not met (regardless of whether the expectations are logical or not), adoption will suffer. Linux Mint takes Ubuntu's base, improves on its weaknesses, and makes it much more convenient for the beginner user. In addition, Mint is scalable – it's perfect for a beginner, and is still very useful when that beginner becomes an expert. People who care about how the internals of the distribution work will seek to learn about them, while those who don't care just want something that works. Whether you're a general user who just wants to check your Facebook account, or a developer writing the next big application – Mint fits just about every use case. What do you see on the horizon for Linux Mint, and Linux in general? In the short term, I think we'll continue to see Mint grow and expand its user base. Desktop environments such as Cinnamon and MATE (featured heavily in Mint) will see quite a bit of expansion in the coming years, due to renewed developer focus. In the long term, I can see Mint splitting off into its own complete distribution, away from Ubuntu as its base. While there are no signs of that now, I can see Mint outgrowing its current base and moving off on its own in five years or so. Any tips/stories to share for aspiring/new technical authors? I think the best piece of advice is “yes, you can!” For me, I was very worried about whether or not I would even be good at this. When I sent my first chapter over, I thought the reaction would be that I was horrible and I would be excused from writing. I was really surprised to find out that Packt really liked what I was doing – even assuming that I had done this before! You never know what you're capable of until you give it a shot. I'm glad I did! Even if you're not the best writer in the world (I know I'm not), this is a valuable experience and you'll hone your writing skills as you go. Packt was there for me to work through the process, and it was very rewarding.
Another piece of advice I can give is "just start writing." Don't spend time worrying about how to write or what to say, or about any of the anxieties that can lead to writer's block. Open up a word processor, or even a text editor, and just start writing something. You can always go back and correct or revise your sentences. The trick is to get your brain working, even if you don't plan on using what you're writing at that very minute. Just keep your mind busy and the rest will follow. Another important tip, which may seem like common knowledge to some, is to use some sort of versioned backup for the directory that contains your book files. A simple periodic copy to a flash drive or removable media isn't going to cut it; you want something that not only backs up your files but also allows you to go back to previous versions of a document in case you delete content you didn't mean to. Examples include Dropbox and CrashPlan, though I'd recommend SpiderOak a bit more for its focus on security, its larger feature set, and its ability to sync between machines. All three are multi-platform.
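The versioned-backup idea above can also be met with a version control system. As a minimal sketch, here is how git (another option, not one of the services named in the answer) keeps recoverable history of a manuscript directory; the directory and file names are purely illustrative:

```shell
# Create a repository for the manuscript (names here are made up).
mkdir -p my-book
git init -q my-book

# Identify yourself so commits can be recorded in a fresh repository.
git -C my-book config user.email "author@example.com"
git -C my-book config user.name "Author"

# Write a draft and commit it as a recoverable snapshot.
echo "Chapter 1 first draft" > my-book/chapter1.txt
git -C my-book add chapter1.txt
git -C my-book commit -q -m "Save first draft of chapter 1"

# If you later blow away content you meant to keep, the history of the
# file is still there to roll back to:
git -C my-book log --oneline -- chapter1.txt
```

Unlike a periodic copy to a flash drive, every commit is a point you can return to, which is exactly the "go back to previous versions" property the tip asks for.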
Sascha Gundlach
23 Jun 2014
4 min read

An Interview with Sascha Gundlach and Michelle K. Martin

What initially drew you to write your book for Packt Publishing?

Trying to bundle all the CryENGINE knowledge and everything we have learned over the past decade of working with CryENGINE was something we had wanted to do for a long time. With a publisher like Packt, we felt confident we could deliver a comprehensive CryENGINE guide.

When you began writing, what were your main aims? What did you want the book to teach your readers?

It was really about taking readers' CryENGINE skills to the next level. There are plenty of beginner tutorials and articles out there that focus on getting to know the engine or on building simple examples. What was really missing was a book focused on users who already have basic experience and know their way around CryENGINE. This is why Mastering CryENGINE covers a lot of advanced topics and contains many tips and tricks useful for people using the engine in a real production environment.

What did you enjoy most about the writing process, and what was most rewarding about the experience?

Seeing the book come together and grow larger over the course of many months felt really good. And of course, knowing that somewhere, someone will read it one day and improve their skills with this book makes all the work it takes to finish a book feel worthwhile.

What parts of the writing process were the most challenging? How did you overcome them?

The single biggest problem was cutting down the content so that it would all fit within the scope of the book. There were many more chapters, examples, and images we would have liked to add. It is easy to keep adding content and have the book lose its focus, so we took great care to keep the book streamlined and on topic.

Why, in your opinion, is CryENGINE exciting to discover, read, and write about?
CryENGINE is one of the most advanced 3D engines out there, and it offers so many features and possibilities that it is hard to discover everything. Even after having worked with CryENGINE for ten years, I still sometimes discover new things and new best practices I didn't know about. And of course, CryENGINE is always evolving. New features are being added and old features optimized. This means there are new things to discover on a regular basis that help you build your games more efficiently.

Why do you think CryENGINE has maintained such popularity?

Today it is easier than ever to develop and publish games. You don't need to be a big company anymore to get your game published. CryENGINE gives you the tools to build amazing games in a short amount of time and release them to the public. More and more people are discovering that it is easy and fun to put their own games together. I think this is why CryENGINE continues to be so popular.

What do you see on the horizon for CryENGINE?

This year has been a busy year for Crytek and CryENGINE. A lot of features have been added. I think we will see even more features and optimizations to the engine in the coming months. The focus of this development will be improvements to the ease of use of the SDK tools, as well as to the pipeline and the steps necessary to get a release-ready game.

Any tips or stories to share for aspiring/new technical authors?

The beginning is always the hardest part. Once you start writing, it starts to flow. Especially with a great publisher by your side, there is no need to be afraid of trying.